European Civilization, 1648-1945, with John Merriman
Lecture 5: The Enlightenment and the Public Sphere
Prof: This is the beginning of the French part of the course. Today I'm going to talk about the Enlightenment and the cultural concomitants of the French Revolution, and how people began to imagine an alternate sense of sovereignty in the nation. You're in for a treat on Monday, because I have one of the only bootlegged copies of the live speech and then the execution of Louis XVI. I'm going to play that and also the death of Citizen Marat in the bathtub at the hands of Charlotte Corday. That will be on Monday. Then I'll talk about--inevitably, though I don't particularly like him--Napoleon on Wednesday. The next three sessions are about la belle France; because the Revolution is terribly important--and, indeed, felt in lots of ways in many places--it's well worth doing. So, I'm going to do four things today. I'm going to first give you a basic outline of what difference the Enlightenment made, and then--with reference to my good friend Bob Darnton's work on the social history of ideas--look at surprising ways that Enlightenment influence was felt. I'm not talking about the big-time people like Rousseau and Voltaire, but the Grub Street hacks. Then I'm going to talk a little bit about the public sphere, taking two examples, one from the work of David Bell and one from the work of Sarah Maza, in which you can see the emergence of this possibly different way of viewing sovereignty as residing in the nation and not in a king. Then I'll look again at what difference, in a very strange way, the Enlightenment made in all of this. First, the kind of classic stuff, just to review for you. If you had to summarize six ways that the Enlightenment mattered, you might list them like this. 
First of all, without question, the Enlightenment weakened the hold of traditional religion--although Enlightenment thinkers disagreed on many things, and a few were atheists (not many; most were Deists and believed that God was everywhere)--particularly the role of the Catholic Church as a public institution in France. Of course, if you read Voltaire's Candide, which is blatantly antireligious, in high-school French or wherever, you'll see the most extreme example of that. Secondly, and related to this, Enlightenment thinkers taught a secular code of ethics, one that was divorced from religious beliefs. They were engaged with humanity. They loved humanity. They thought people were basically good, that life shouldn't be just a vale of tears spent awaiting eternal life, and they went out and made such claims. Third, they developed a critical spirit of analysis: a refusal to accept routine tradition--truths passed down from generation to generation, particularly those passed down by the religious establishment--or routine hierarchies, for example. This was part of their spirit of analysis. Fourth, they were curious about history and believed in progress. They were convinced that France had a special role to play in this. To be sure, the Enlightenment was to be found in many places in Europe, and in what became the United States, but Paris played a particularly central role. Fifth, they differentiated absolutism from despotism. This matters if you want to understand what happens in the remarkable series of events of 1789 and subsequent years: as I said before, there weren't ten people in France in 1789 who considered themselves republicans, that is, who wanted a republic, and yet two or three years later, in 1793, it became easy for the majority of the population to imagine a life without any king at all. 
Sixth--and here's the role of the Grub Street hacks, of the third division of Enlightenment thinkers that I'll talk about in just a few minutes--they heaped abuse against what they considered to be unearned, unjustified privilege, and--how can one put this--disrespected the monarchy and the nobles who hung around the king. One can say in hindsight, because we know what happened next, that the Enlightenment helped prepare the way for the French Revolution and that the French Revolution transferred power, transferred authority, to people who were very influenced by the Enlightenment. The classic example I will give, because I enjoy talking about him so much and he is important, is Maximilien Robespierre, who in many ways was a child of the philosophes. As you know, philosophes, which is such an important word in French, became a word in English. The philosophes were the thinkers of the Enlightenment. Robespierre, born in Arras in northern France, in what is now the Pas-de-Calais, was very much influenced by Enlightenment thought. As the twelve members of the Committee of Public Safety sat around that big green table making decisions that affected the lives of lots of people in France, Enlightenment influence was certainly there. The Enlightenment stretched across frontiers. I think there's a map in the second edition of the book that you are kindly reading showing where you could find copies of Diderot's famous encyclopedia. You could find it in South Africa. You could find it in Moscow. You could find it in Philadelphia. You could find it in New York. You could find it in papal Rome. You could find it all over the place. One thing that's interesting is that twenty or thirty years ago, when I was starting out, when people did what they used to call intellectual history, intellectual history was the big ideas. 
You could call it the via regia of history, where you've got one idea moving along and it hooks up with another idea, and then a third idea comes as a result of that idea. It's rather like traditional art history, where you try to discover where it is that Pissarro got his red ochre as a color, or where such and such a Baroque painter got the idea of painting cherubs in a certain way. In the very early '70s, my colleague Peter Gay, a great historian long since retired, coined the phrase "the social history of ideas." Ideas, too, have a history. Who understood Rousseau? Who read Voltaire? Who read the Encyclopedia of Diderot? How do we know how these ideas were used? He called for the social history of ideas. A number of people took this very seriously, including Bob Darnton, who is now at Harvard but taught at Princeton for decades and who turned out the finest historians of old regime France. So did another friend of mine, Daniel Roche, who did work on the académies, which I'll discuss in a while. They began to look at how Enlightenment ideas got around. How did copies of Candide, which was illegal and censored in France, turn up there? Let me just say a couple things about this. Enlightenment ideas really came into elite popular opinion, into what we call the public sphere--that is, to people who are interested in ideas, and people who became interested in politics--in really three major ways in France. There are equivalents of these in other places--the Scottish example, because the Scottish Enlightenment is very important; you can read about it in the book, and one of these will be quite clear there. First, through académies. An academy was not a university. It has nothing to do with a university. They still exist. I'm a member of one of these academies, an obscure one in the Ardèche. An academy was a group of erudites, sometimes including clergy, many nobles, many bourgeois people of education. 
The literate population increases decidedly in Western Europe in the eighteenth century. They would get together and discuss ideas. They had contests where people who wanted to make a little money would answer a question put out by the académie. They would write responses to questions about science, religion, and big ideas. Robespierre wins one of these contests. These académies meet in rooms smaller than this one, but they discuss ideas. These ideas subject to sharp analysis, or re-evaluation, the role of the church as an institution. The ideas have to get around some way. People have to know about them. The academy is one way this happened. A second one is Masonic lodges. Masonic lodges still exist. There's one--I don't know if it's still active, but there's a big Masonic building out on Whitney on the right. I think it became an insurance building. One of the horrors of my seventh-grade life was being dragged off to the dancing school in Portland, Oregon, which met in a large building that was a Masonic lodge. Masonic lodges begin in Scotland. They are secularizing institutions whose members mostly agreed, in the eighteenth century, that the church's role as a public institution was too great. Masonic lodges talk about these ideas as well. They talk about Rousseau, Diderot, Montesquieu, and all of these people. This is a second way in which these ideas get out. The third is the salon. There's another French word that's so important it became an English word. A salon was a gathering of pretty elite people interested in the life of the mind. They were hosted by hostesses--again, the role of women in the Enlightenment. I give you in the book the example of Madame Geoffrin, who was the classic one. People would come together to eat, to drink, and to discuss ideas. When British guests came to Paris, to the salons, they said, "All they do is eat and drink. 
They spend all their time eating and drinking, and they don't discuss ideas that much." In fact, they did. There's still a wonderful place in Paris called the Palais Royal where you can go, and on very hot days--in the eighteenth century, in the 1770s--you can imagine people meeting there talking about the ideas of young Enlightenment hotshots, those people who have become part of the canon of western civilization. This is another way these ideas get around. Young, would-be philosophes on the make, coming up from the provinces--what they want is to be introduced to one of these hostesses so that they will be invited to trot out their intellectual wares at one of these gatherings. These are concrete examples of the way that these ideas got around. People didn't pay any attention to this before about thirty, thirty-five years ago. Daniel Roche's book on the académies, two huge volumes not translated, is really marvelous on all of that. That's something to keep in mind. The high Enlightenment really ends in 1778, traditionally. That's a textbook kind of date. But it does matter, because that's when Voltaire and his great enemy, Rousseau, both die. After that, there are no more Montesquieus, or Voltaires, or the big-time all-stars of the philosophes. There aren't any more. But there is this next generation of would-be philosophes, people who could think and write and who want to hit the big time. They see that Voltaire made big bucks, big francs, big livres, writing. They want to be like him. They want to be like Voltaire. They want to be like Rousseau, his archenemy, who paced around his little farm called Les Charmettes in Savoy outside of Chambéry, and who hated Voltaire. They really couldn't stand each other. But he also hit the big time. Grub Street refers to--I don't know if it's a real street or an imaginary street--a street in London where lots of would-be writers and writers peddling their wares hung around. 
These Grub Street hacks, the third division, want the kind of entry into salon life that would let them put forth their ideas. They live on the top floor, where the poorest people live. They're dodging their landlords all the time. They don't have enough money to pay. A lot of them live in Paris around what is now Odéon. It doesn't matter if you know Odéon at all, but they live right around that part of the Latin Quarter, more in what is now the Sixth Arrondissement, and they write. But what do they write about? They need to make money. The big news here, as Darnton discovered, is that they write pornography. They write scatological pornography. They write what they call in French libelles. They write broadsides, really, denouncing the royal family, denouncing the queen above all, indelibly called "the Austrian whore" by her many detractors, who are omnipresent. They write against what they think is unearned privilege, against the kind of censorship that they see keeping them from hitting the big time. The point of Darnton's many wonderful books--although these people were the Grub Street hacks, and Voltaire denigrated them as the canaille, the rabble, the scribblers, jealous, eager, anxious, hungry--is that their attacks on the regime and against unearned privilege, as they saw it, help erode belief in the monarchy, and help suggest that the monarchy itself and the people hanging around the monarchy at Versailles are lapsing into despotism. So, they do make a difference. Let me give you an example. This is sort of a classic one. Imagine you're a bookseller in Poitiers. Poitiers is a very nice town full of lovely old Romanesque churches in central western France. You write to Switzerland to order books that you want to sell to people who have ordered them. The bookseller writes the following letter: "Here is a short list of philosophical books (books written by the philosophes) that I want to order. 
Please send the invoice in advance. They include: Venus and the Cloister, or, The Nun in the Nightgown; Christianity Unveiled (that could be the subtitle, too); Memories of Madame la Marquise de Pompadour; Inquiry on the Origins of Oriental Despotism; The System of Nature; Teresa the Philosopher; Margot the Camp Follower." This is not exactly the stuff of Diderot, Voltaire, Rousseau, and these other people. But yet, those who penned such things imagined that they were philosophes and wanted to have the same kind of impact that Voltaire and the others had. These were la canaille, the kind of rabble of Grub Street. Why was he writing to Switzerland to begin with? Again, this is the question of how these ideas get around. One of the things that the Grub Street hacks didn't like is that you've got censors. You've got paid censors who work for the government who say, "This can't be published" or "Un-uh. You shouldn't have published that baby. That wasn't a good idea." The result is, in a system in which privilege, in which monopolies, in which guilds controlled the production and distribution of almost everything, that book-selling and book-printing are monopolies controlled by the state. So, if you're a Parisian printer, unless you're risking being thrown in the slammer (that's not a real word in French, obviously), you can't print this stuff in Paris. So much of the Enlightenment literature is published in--you'll not be shocked to know, you already know this--Amsterdam, or in Brussels in the southern Netherlands, or in Switzerland. Bob Darnton, when he was a young professor--before that a young graduate student--hit the jackpot. A lot of this stuff was printed in Switzerland in Neuchâtel, and he got hold of the archives of this printing company. He was able to do the social history of ideas. Who bought what? 
By the way, before Ryanair or any of these airlines, how do you get all of this stuff from Switzerland, where there are big mountains, into France? How do you get it to Poitiers? How do you get it there? Again, you have to look at the way this stuff is distributed. We've already seen how the ideas were distributed, but how, literally, do you get these books, these bouquins, these pamphlets, these brochures, from Switzerland or the equivalent--from Amsterdam or Brussels, which is easier, a flat country being a little different than mountains--into France? Well, in France, as in the German states, as in Italy, there were peddlers. There were peddlers. They would go on the road and they had--like a medicine ball in a gym--these huge leather bags. They'd be stuffed with all sorts of things--pens and pins and--I think I mentioned this in another context--religious literature, but also, hidden at the bottom beneath the religious literature, they are smuggling into France Enlightenment literature. They have drop-off points. They go over the Jura Mountains--that's not so easy to do--and they take the books to a city like Chaumont in the east, or Metz or Nancy. Then somebody else carries the stuff all the way. Avoiding the police around Paris, the gendarmerie, the maréchaussée as it was called then, this stuff--Margot the Camp Follower--ends up pleasing this drooling guy in Poitiers, who can buy it from his bookseller. You can really follow not only Diderot's encyclopedia--and how do we know where Diderot's encyclopedia ended up, by the way? Well, for example, through people who leave wills. That's how we know about literature in the nineteenth century, because the libraries in estates would be detailed, so we know what books people had. In the eighteenth century we have a tremendous proliferation of ideas, of reading, of literacy, and of ways of discussing these ideas. 
I've already mentioned three of them there, but if you look at the case of Britain, you've got the coffeehouses. Coffeehouses follow the mania for coffee. Coffee comes from where? The colonies. So, coffeehouses are part of this sort of globalization of the economy, but also the globalization of ideas. This stuff all kind of fits together. Again, Voltaire, Rousseau, Montesquieu, and Diderot and the others would be just horrified to think that anybody intellectual would be mentioning some of these Grub Street hacks along with them, because they didn't accept them, nor were they of the same quality at all. Some of these guys, by the way--there's one whom I mention in the book, a guy called Brissot, B-R-I-S-S-O-T, who becomes an important leader of a faction in the Revolution called the Girondins, from around Bordeaux, who are against the Jacobins. More about that later. Brissot is broke. How is he going to pay the landlord? He has no idea. Where is he going to get his next drink? What does he do? He works for the police. He works as a police informer on the other would-be philosophes, on the Grub Street hacks. How do we know this? Darnton found the dossier in which Brissot is being paid off by these people. What are other ways that we know how many of these characters there were--about 200 to 300, I don't remember exactly? Because they want money. On one hand they say, "I don't like this censorship; it's keeping people from recognizing my true genius." On the other hand they write little sniveling letters saying, "I am a writer and a very good one, indeed. Therefore, I merit a state pension." They write these letters to the various equivalents of ministries, saying, "Please give me some money, because I am a really wonderful writer and instead of repressing my work, you should be saluting my genius." They write clammy letters like that, which you find in the archives. We piece this stuff together gradually. 
Let me just race over here and give you just an example of--where is this stuff?--here we go. What would Voltaire think of this? Here is one of these pamphlets that's denouncing the high-livers out at Versailles. "The public is warned that an epidemic disease is raging among the girls of the opera, that has begun to reach the ladies of the court and that has been communicated to their lackeys. This disease elongates the face, destroys the complexion, reduces the weight, and causes horrible ravages where it becomes situated. There are ladies without teeth, others without eyebrows, and some are now completely paralyzed." People want to know what it is. It's obviously a venereal disease. He's obviously exaggerating the--who knows? I don't know--but the results of such a malady. What he's doing is he's suggesting that what's really going on at Versailles is lots of people--how do I put this politely?--hooking up all over the place at the petites réunions, while they're dressing up as peasants, or whatever, and that the result is very demeaning for the French state and for the French monarchy. So, does this have an effect? It does. It really does. It contributes to what has been called the desacralization of the French monarchy. It is very hard to argue that God has put absolute monarchs on earth to bring people better lives if you've got these people--and ordinary people did not get into Versailles, unless they were among the 15,000 lackeys working there. Lackeys would be the term given by the people who employed them. You didn't know. You had to surmise. You had to guess what was going on. I'll give you a couple of examples in a while, and I'd better hurry up and do this, in which you can kind of see how this works. I'll give you kind of a spectacularly interesting example, at least I think so, at the end. By the way, just as an aside, during the French Revolution Louis XVI decides to get the hell out. 
He and Marie Antoinette, improbably, dress themselves up as ordinary people. They're not people who usually have to set the alarm clock. They get up at 3:00 in the morning and they get into this large carriage that's been stuffed with silver and foie gras and all sorts of other things. They hightail it toward what is now the Belgian frontier, but they get further and further behind schedule. You can read about this when you get to that chapter. It's an interesting story. Finally, they get recognized. She is not a governess; she is the queen. One of the people who first realizes this is the king had actually caught a glimpse of the king himself by looking through the fence at Versailles at the time of a wedding. He sees the king. There aren't photographs. He recognizes the king's nose. He gets down on his knees and says, "Sire, you are the king." This guy can no longer pretend that he's a mere hanger-on assisting a Russian baroness. It's all over but the shoutin' at that point. What these people do is help break down this sense of automatic respect for the monarchy as an institution. Of course, there's also the fact that they can't stand Marie Antoinette, who, rightly or wrongly, is accused of all sorts of things. This is racing ahead of the story, but Louis XVI was a big-time cuckolded guy. His wife was seriously sleeping around while he was taking apart and putting back together clocks, which he liked to do in the big house when his wife was out in the bushes, to put it crudely. I forgot this is being televised. Anyway, take that back. Can you erase that, please? Anyway, what these people do over the long run is help erode respect for the monarchy, and help explain why it was that in 1789 you could begin to imagine a world without a king and a world without a queen. 
When they bring them back--they bring the old boy and his wife back from Varennes, which is in the northeast of France--the National Guardsmen turn their backs in serious disrespect to the carriage, and they hold their guns upside down. At that point, that's la fin des haricots, the end of the green beans, as the French say, for the king. But this process started earlier. The third-string hacks of the Enlightenment had something to do with it. Let me give you a couple of examples also from other friends--really good, serious work that has been done in the last twenty or twenty-five years. Rocketing right along here, let me give you another example of this relationship of the public sphere to imagining a new source of sovereignty, that is, the nation, and an example of how that works. This comes from the work of David Bell, who is my former colleague and still very dear friend, and who teaches at Johns Hopkins. This is from his work on lawyers in the eighteenth century. I'll give you an example of how this fits together. It fits into the Enlightenment stuff because, if Enlightenment literature was censored and sometimes hard to get hold of--though the Encyclopedia was tolerated and then not tolerated and then tolerated again--what Bell's work on lawyers demonstrates is the way in which lawyers and legal briefs helped get these ideas around as well. Because you could not censor legal briefs. To give you just an aside--don't worry about this now--in the case of imperial Russia in the late nineteenth century you had big-time censorship by the police. In fact, when there were these political trials, lots of what was said in the courtroom got around as well and couldn't really be censored in the way that ordinary publications could. You had this same sort of effect there. Let me give you a couple examples. They're complicated examples, but don't worry about them. 
The first would be from this very strange, not really a heresy--but I guess the Catholic Church considered it a heresy--called Jansenism. I remember once coming in to do the equivalent of this lecture and having to look at my own book for a good definition of Jansenism that I must have found once, because it's so obscure. There was a bishop who didn't think he was obscure, a Belgian bishop called Jansen, who thought that the Catholic Church was becoming too over-mighty, full of Baroque masses and huge expenses for archbishops who weren't doing a damn thing. He imagined another kind of religion and became very ascetic. Somebody once called the Jansenists Calvinists who went to mass. They were still Catholic, but they didn't believe in this high Baroque church. Jansenism's first big moment was around 1715 or so. Then it comes back in the 1760s and 1770s. It's extremely boring stuff. Louis XIV didn't think it was boring. He sent out the troops to burn down Jansenist abbeys; the big one was called Port-Royal, outside of Paris. He thought that this was a threat to the Gallican Church, which was sort of the alliance of the Catholic Church with the monarchy in France. Rather like Carthage, they were supposed to plow salt into the land and all this business. So, they wage war on these Jansenist people, who were rather like Calvinists in many ways. But the only point of that is that there are lawyers who begin to defend the Jansenists and begin to see the actions of the king vis-à-vis this persecuted religious minority as despotic. Lawyers begin publishing legal briefs--and there are enough references in the book, so you can put this together; I just want you to see the point. When they begin to publish legal briefs defending the Jansenists against these kinds of attacks, these are published by the thousands. They can't be censored. 
These briefs begin to suggest two things--at a time in the eighteenth century, particularly after mid-century, when we can already begin to speak of French nationalism, at least among the elite, at the time of the Seven Years' War, 1756-1763. First, that monarchies can behave despotically, going beyond the accepted limits of absolute rule; and second, that the nation--this idea of the nation--is being betrayed by bad governments. If this doesn't sound like the French Revolution, then nothing will. Those are very important defining moments. The same things happen also in the 1760s and 1770s, with various attempts to liberalize the French economy that I describe in the book, and with the king's attempt to dispense with the parlements, which were really provincial law courts dominated by nobles. You can read about this. But the same thing happens. This is the point. These lawyers begin turning out legal briefs that imply the same two things: that absolute monarchy is risking stepping over the lines of the acceptable and behaving in a despotic way, and that there is something called the nation--in which nobody would have imagined that classes were all equal; the discourse of liberty, fraternity, and equality is a hell of a long way away--but that the traditional rights of the nation are being betrayed by the monarchy and that things aren't so good at Versailles. Again, try to look ahead and see what happens in 1788 and, above all, 1789. I'm not a big guy on dates in history and having to remember all these dates, but 1789, like 1917, that's a big one. That's a big one. But in order to also understand the emergence of a radical republic and the execution of the king, one has to see that the nation becomes invested with a sense of moral quality that makes it not impossible to imagine a world without kings. And, so, lawyers--barristers, as they are called--play a major role in all of this. 
Again, this has to be seen in the context of a century in which more and more people can read. The literacy rate in a country like France is still well below fifty percent--maybe forty or forty-five percent, something like that. More men could read than women. Not only does literacy increase among the elite, but the amount of things that are published and the number of newspapers that are published expand dramatically. A point of reference would be, in Britain--you can read about this--the campaign of John Wilkes, who was sort of a rascally character. With Wilkes, the number forty-five becomes virtually illegal, because forty-five was the number of the newspaper issue in which Wilkes and his supporters essentially called out the British political system. Again, we're talking mostly about Western Europe, and literacy is much higher in the Netherlands and in Northern France and in Northern Italy and in England than in other places. This is part of this cultural revolution. It's important to see the role of this, because in the orthodox Marxist interpretation you had to have the ever-rising bourgeoisie, rising in the fourteenth century. There they are again in the sixteenth century. They're like some sort of runaway bread or something like that. In the nineteenth century there they are, the bourgeois century. I'll give a lecture on the bourgeoisie, because they do indeed rise. Nonetheless, a kind of class analysis can't be completely thrown out. Classes did exist and people had a sense of themselves as being members of a social class. Class was not immutable--these boundaries were more fluid in Britain than in other places--but classes are still important. In the last thirty years, though, people have paid more attention to the cultural concomitants of revolution: what difference Enlightenment ideas made, and what difference the emergence of a sense of the nation and the infusion of politics with a sense of right and wrong and morality made. It's an important part of all this. 
In the last twelve minutes and thirty seconds that I have today, let me give you another example of this. I think this is fascinating--it's so fascinating I can't find it. Let me give you an example. This is drawing upon an excellent book by Sarah Maza. It's a very well-known episode, but it shows you, and ties together, the sense of the nation along with the impact of this sort of third generation of Enlightenment hacks after 1778--their role in the erosion of the sense that the monarchy was immutable in representing the rights of the nation, even if that construct was just coming into being. In this book called--what's it called?--Private Lives and Public Affairs, Maza takes a couple of causes célèbres. A cause célèbre would be like one of the things that you find in the tabloids in Britain or the U.S. I don't read that stuff, so I really can't give any good examples. But one of these actors and actresses you always see running around, or whoever this person is--Britney Spears, or something like that. A singer or actress, I don't know what she does. But anyway, something like that: people focus their attention on these people. They sort of dominate, if you will--this is almost an insane comparison--the public sphere, in that they're in the news all the time. And, so, what Maza did, at about the same time that David Bell was working on the role of lawyers in the eighteenth century, is take a couple of these examples and show the way in which private affairs that were kind of sleazy and not too cool--but were sensational--helped bring these threads together and contributed to eroding the prestige of the monarchy. This fits into the sense, which I've already given you and which was extremely pervasive, particularly around Paris, that a lot of things that went on at Versailles weren't so good. 
The 10,000 nobles who were clustered around Louis XVI, and particularly his wife, were undermining the authority and the prestige of the monarchy, and that wasn't a good thing. An incident called "The Diamond Necklace Affair" is illustrative and mildly amusing, not more than that. It also involves this Palais Royal place in Paris I mentioned before. A woman called Jeanne de Saint-Rémy--the name doesn't matter at all--was a poor noble. She claimed descent from the royal family. She had a pretty good education. She had important protectors. She marries an officer of rather dubious noble title, who was called, quite forgettably, the Marquis de la Motte. She met the fifty-year-old cardinal called Louis de Rohan, whose name I should have put on there, R-O-H-A-N. He was from a very famous old family called Rohan-Soubise. The national archives used to be in--and are now adjacent to--this fabulous old--why don't they ever have things that work in here? It's just unbelievable. I can't find anything to write with. The family called Rohan-Soubise--there's this wonderful, wonderful palace or chateau in the Marais, which is still there. This cardinal is on the make. He's very, very wealthy. He's a cardinal. That's why he's wealthy. Or he's wealthy because he's a cardinal. He thinks he's snubbed the queen. He wants to be one of the people who help make important decisions, but he thinks he's alienated the queen, and that this is standing between him and the power that he thinks he should have. He writes missives to the queen begging her to forgive him. He's met this guy, de la Motte. They begin sending him forged replies from the queen that suggest that the queen now is listening, and maybe all is forgiven and it's going to be okay. Then they have this idea. There's a famous jewel, a necklace that had 647 flawless gems, worth 1.5 million pounds, which is a whole lot of money in those days and still now. Louis XV had commissioned it for one of his mistresses and then backed down because it was too expensive.
In 1778 the necklace was offered to Louis XVI for Marie Antoinette, but he turned it down, saying the realm needs ships more than it needs jewels, which was reasonable enough. They con this old, fairly horny cardinal into showing up at dusk at the Palais Royal, introducing him to the queen herself, whom I guess he'd met once. But it's dusk and his eyesight isn't that good. What they do is they find a prostitute, of which there were about 25,000 in Paris at any one time, who looks vaguely like Marie Antoinette. This cardinal, de Rohan, thinks that his ship has come in, that everything is going to be okay. A purchase order has been forged, supposedly signed by the queen, and the real jewels are delivered. Then, of course, the necklace is broken up into pieces and sold for zillions of francs on the black market in London and in Paris. But there's a problem here, because the word gets out about what has happened. And de Rohan has really been made a fool, there's no doubt about it. But it's more serious than that, because what he has done means he could be accused of lèse majesté, which is the ultimate kind of insult--plotting against the queen by passing off a prostitute as the queen herself. To make a very long story short, the monarchy is humiliated by all of this. The cardinal is saying mass in his fancy robes in some church, probably Saint-Eustache, but I'm not sure. It might have been Notre-Dame. I don't know. Well, Saint-Eustache isn't a cathedral. But anyway, he's in his robes. The police come in and arrest him. What happens then is they put him on trial. He is not a terribly loveable guy, not someone to be very much admired. By an almost logical playing out of what I've been saying, the lawyers who defend him and the crowds who salute him portray him as a victim of a regime that is crossing the line between absolutism and despotism.
He is a cardinal and he becomes the darling of the people. Poor old Jeanne, this noble--she and her boyfriend get branded and sent off to the galleys, et cetera, et cetera, predictably enough. But the parlement of Paris acquits the good cardinal, and he emerges from the palace of justice of the parlement--still now on the Île de la Cité, the largest of the then three but now two islands in the Seine in Paris--to popular acclaim, people saying that justice and the interests of the nation have been served by his acquittal. What this sleazy, unsavory incident does is help continue the desacralization of the French monarchy. Again, lawyers and the people see themselves as representing the interests of the French nation--in their own imaginary, in their own mental construction, and in the eyes of people who follow these events in legal briefs and in newspapers--and the guy who's acquitted is seen as somebody who had been wronged by a monarchy that has gone too far. That, just a few years later, is exactly what is going to play out in the French Revolution itself. To conclude: Voltaire, Rousseau, Montesquieu, whom you've been reading--these folks had a big-time impact on the way we look at the world around us. They had an impact on those people who would become the organizers of the revolution and, indeed, the leaders of France, and on their children, their successors in the nineteenth century. But lawyers, part of this culture of the increased public sphere that was Western Europe in particular, but also parts of the rest of Europe, too, in the eighteenth century, had a role in all of this. By 1789--not in any kind of inevitable process; a revolution was not inevitable--the sense that the monarchy had gone too far and that there was something called the nation out there was in the public sphere, and the results of all of this would be there to see in 1789.
Now, have a good weekend and on Monday you're going to hear the execution of the king, the death of Citizen Marat in his bathtub. I hope to make clear why some people supported the revolution and others didn't, and what difference it all made. Have a great weekend. See ya!
European Civilization, 1648-1945, with John Merriman
Lecture 12: Nineteenth-Century Cities
Prof: Today, I want to do the impossible and talk about urbanization and urban growth in fifty minutes. It builds on what you're reading, and I'll give the classic example, the greatest project of human intervention or rebuilding, that is, the rebuilding of Paris by that man my late friend, Richard Cobb, once dissed as the "Alsatian Attila." In doing so, I want to emphasize a couple points. One is that the nineteenth century was a period of phenomenal urban growth and urbanization. I will distinguish those in a minute. Secondly, one of the things that emerges out of this urban growth and urbanization, but particularly the growth of cities large and medium in the nineteenth century, is an increasing geography of class segregation. The theme of course in Paris, as in other cities--London is a good example--is a more prosperous west and an increasingly less prosperous east. Also, one of the things that I really enjoy talking about, in trying to help people understand, is why European suburbs are not at all like American suburbs. Why is it that some people feared by elites were perched on the edge of European cities, whether it's Vienna, Paris, or lots of other places, and not in the center; whereas, in the United States, if you think of the riots in 1967, before most of your time, in Detroit, or Newark, or Watts, or East L.A., it was people in the center, with the wealthy people in the periphery fearing the poor people living in the center. Why is it just completely different? I remember in the early 1990s we were doing a book in France, just a bunch of essays, called Banlieues Rouges, which means the red suburbs. I was supposed to write something tying the book together. It was at the time of the Rodney King trial. Most of you weren't around--not for the trial of Rodney King, but when Rodney King, who was an Afro-American, was beaten up by cops in L.A. It was filmed by somebody who just happened to have a camera.
There was a big trial. The police who beat the hell out of him were acquitted. They were acquitted by a white jury in the suburbs. People in France couldn't get over that, the idea of wealthy people living in the suburbs as opposed to poor people living in the suburbs in Europe. We had to hold the book for a couple weeks until I could figure out how to explain this. That's one theme also under the rubric of center and periphery. Why are European cities different? Human intervention has something to do with that in the case of Paris. That's fun to talk about, so I'm going to do that in the last half of the talk. Just a few points at the beginning. I sent around, I hope it will reach you, something I sent out on October 16 on this class server, which has most of these terms on the board. The nineteenth century is a period of both urban growth and urbanization. Why are those different? Urban growth is, say, a population of any city--the population of Vienna rises from, say, 500,000 to 1,000,000. I don't remember the statistics. That is urban growth. Vienna is bigger at the end of the nineteenth century than it was at the beginning of the nineteenth century, or any place you want to pick. But the most important point is that there is urbanization in the nineteenth century. You could have urban growth absolutely and de-urbanization if, at the end of any period that you're looking at, you had more people living in cities, but they represented a smaller percentage of the population. You could actually have de-urbanization if you had more people living in the countryside at the end of the period, relative to those living in the cities. So, it depends on how you define what an urban area is. In the case of France, where they have all these great censuses all the time, in the Restoration, that is 1815-1830, a city had 1,500 people in it. There's that many people lined up at the Milford Mall when it opens in this country. 
In 1841 they start using 2,000 people agglomerated, that is, living in an urban area--the church, the steeple. Open up the church and look at all the people, or whatever. You know what I mean. That's an urban area. In the United States, I have no idea. A city used to be 5,000. I think it may be 25,000 or something like that. It doesn't matter. Depending on what you define as urban, there's a remarkable increase in the urban population in the nineteenth century. It's not just big, huge cities like Naples, or Constantinople, or London, which is so enormous compared to Paris, compared to any city spatially. But it's also small towns that increase in size because of industrial, commercial, administrative functions. All this is perfectly obvious. The nineteenth century sees the growth of big, big cities and the first conurbations, that is, cities that just run into each other--now, for example, Boston to Washington is practically a conurbation. In France it would be Lille, Tourcoing, Roubaix. In the north of England it would be Manchester and its expanding suburbs. That's all perfectly obvious. The next step is to say, "Where does the population of cities come from, and who are all these people that are increasing the population of cities and are part of this process, in a statistical sense, of urban growth?" The second point that I'll make is what people thought about this. What did they think about these teeming cities? "Teeming" was a word that they started to use to describe these cities that seemed to be kind of runaway cities. First of all, I'm not going to write this on the board, despite the fact it's the only mathematical formula that I even know. If you were trying to explain the growth of any city from Point 1 in time to Point 2 in time, what you do is simply look at the population at Point 1 in time, say 1811 or something like that.
Then you try to find out where the population came from that increased it to Point 2 in time: if the city reclassified what was considered urban, if it annexed its suburbs. That's what Lyons does in 1852, or Paris does on January 1, 1860, or almost anywhere they do this. That would be one factor. Then you would have births minus deaths. Do you have a natural increase in population? The other thing is in- versus out-migration. Do you have more people arriving in the city than leaving it? There are people leaving and people arriving all the time. But if you look particularly at the first half of the nineteenth century, to make a generalization, more people die in cities than are born there, because cities are very unhealthy places, which helps create this kind of image of biological sickness that I'll discuss in a minute, that I'll evoke with some conservative commentators from those times. What do I mean by that? The average life expectancy in Manchester, counting infant mortality, so it's a little bit exaggerated, was about nineteen years old. You guys would have about had it. Lille, in the north of France, is the same thing. That's pretty young. But that includes infant mortality. If you made it to the ripe old age of eighteen, then your chances of living longer were pretty good. You still had places--particularly areas where you have all these spinsters, in Brittany and in Ireland--with phenomenal cases of old women who live a very long time. Women lived and still live longer than men. Another reason why you have more people dying in cities, among other reasons, is infanticide. Foundling homes in these big cities. You name the big city--Rome, Berlin, anywhere you want, St. Petersburg, Moscow--they have huge foundling homes with thousands and thousands of babies abandoned every year. One-third of those babies die before the end of the next year. Infanticide.
The church, which obviously opposes infanticide and obviously opposes abortion in the nineteenth century--what they finally agree to do is put in these little things called tours, T-O-U-R-S. It's an awful example to give, and it comes out of an institution that no longer exists, Machine City, but anyplace that you have those kinds of machines where you put money in and you hope that the window is going to turn around and actually give you your M&Ms. To make a crass comparison, that's what these tours were. They encouraged young women, usually unmarried or uncoupled women, to abandon their babies instead of exposing them and having them die. You put your baby in the foundling home. You ring the bell, you put the baby in this little thing that turns around, and then the good sister comes and takes the baby into the foundling home. If all goes well, the baby will still be there in a year, but one-third of them are not there in a year. You've got a lot of babies who die. This increases the mortality rate. Then you've also got a lot of old people who come into the cities--and young people--to beg, seeking those last vestiges of charity, clutching at passersby as they go to church; some people give them money and most of the people don't. Often they form part of the community, which is obviously the case here at Yale. A lot of the older people die. Policemen going on their rounds in any city, Milan, or Turin, or anywhere, are going to find dead people the next morning. You've got more people that die in cities than are born there, really, in most places way into the nineteenth century. The point is obviously that it's immigration that causes urbanization and urban growth. Massive immigration, usually from the hinterland, that is, the region around cities. In the case of Berlin, northern Germans from Brandenburg or Pomerania and many Poles were moving into Berlin. There were very poor people moving into Berlin.
In the case of Paris, people who come into Paris are from Normandy, or from Champagne, or from central France, where a lot of them are seasonal migrants who ended up living there permanently. In the 1880s you've got a huge wave of Bretons from Brittany who don't speak French. If you go to the station of Montparnasse, you'll see a lot of the cafes around Montparnasse are named after Breton towns--"à la ville de Saint-Brieuc," "à la ville de Dinan." Still today at that station, Montparnasse, the first thing you see when you get off a train there is a sign for public assistance for Bretons. It's in the station at Montparnasse. The population in Marseilles includes lots of people from the south of France, from Provence, but also Italians. This is very obvious. People who move into Barcelona are far more likely to be Catalan than they are to be Galician, or Castilian, or something. This is all obvious. There are no surprises there. Then there's the image that people had of this rapid migration of poor people into cities. The majority of people who moved to these cities were poorer than the people who were already there. In the 1950s and 1960s, in most of these cities, that's not the case. In the 1950s and 1960s it was young professional couples who get enough money to rent an apartment in London--good luck--or to buy an apartment in London, or in Berlin, or someplace else, or Munich. In the nineteenth century you have waves of poor people. The kinds of people you've seen before, who are coming to work as domestic servants, coming to work as day laborers. In the case of London, coming to work on the teeming--there's that word again--docks of the Thames River, as London is that imperial city looking out on its vast empire. The interesting thing about London--I wish we had days to talk about this stuff--but it's really only in London that you had people of color.
You could find them already by the end of the nineteenth century, people coming from India, from what would become Pakistan, people coming from the Caribbean. In other cities, you simply didn't have that. The number of North Africans coming to Paris is merely a trickle, really, until after World War I. Anyway, contemporaries had an idea of what this whole process was--I should have written this on the board, but you can write it down if you would, please--called the uprooting hypothesis. They didn't call it that. That's social science from the 1960s. These people were uprooted from their steady, rural roots of organized religion and from family support, and they are thrown into the maelstrom, into the chaos--sometimes a creative chaos, but nonetheless, the perception was a dangerous chaos--that was urban life. The result of all of this was that revolution becomes seen as an extension of purse snatching. In fact, some of the bad social science from the 1960s, in trying to explain the riots in Detroit and places like that, said, "Well, you've got a lot of poor people coming from Alabama and Arkansas," which is certainly the case, "or from South and North Carolina, who move up to Detroit and try to get a job in the factories there. They get to Detroit and they just freak out, because those rural roots have been cut." You still see this in various electoral campaigns even today, even as I speak, this idea of cosmopolitanism, which is also, by the way, in almost every language, sort of a code word for anti-Semitism. When people said, "Cities are so cosmopolitan," they meant they are full of Jews. This is particularly true in the anti-Semitism and racism of eastern and central Europe, where you had such a large Jewish population living in cities like Prague, Budapest, Vilnius, Riga, just about anywhere you could name.
With the increase in ethnic populations--the Estonization of Tallinn or the Czechization of Prague--you had fewer Germans and fewer Jews living in those cities, but you had this sort of anti-Semitic discourse that lingers. Vienna is the classic case. Vienna goes from being sort of a liberal city in the 1850s and 1860s to a hotbed of anti-Semitism, where the mayor of Vienna at the end of the nineteenth century, a buffoon called Karl Lueger, says, "I say who's a Jew." That's one of his more infamous statements. "I declare who's a Jew and who isn't." Of course, one wants to understand how Adolf Hitler got his anti-Semitism. It really comes from World War I, but the basis was already laid by growing up in Austria in the 1890s and the first decade of the twentieth century. If you think about it, does this massive immigration necessarily lead to urban chaos? The answer is obviously not. We have the effect that now we know a lot about--historians, and social scientists, and social geographers, and sociologists--which is called chain migration. Take the case of China. People who have studied Beijing discovered a long time ago--I remember this from the days when I was studying Chinese history back at Michigan--all these people pouring into Beijing in the nineteenth century formed native-place associations. It's obvious. You get together with people that you know from your part of China. They are sort of the intermediaries between you and the city. The Irish are a classic case. We've already seen how the Irish scared the hell out of the British elite, because they're Catholic and all that. They don't just pour into Manchester, Liverpool, and London and freak out. It doesn't work like that at all. They live with their families. Their families make a little money and say, "Why don't you come along?" It's the same thing.
Look at the case of America: people coming to the United States at the end of the nineteenth century, first massively from northern Italy and then in the twentieth century from southern Italy. They send money back all the time. It's very different from the Germans who came to the United States; they were almost never in contact with their families again, relatively speaking, and the same with the Swiss. But the Italians always stayed in contact. In any case, all of this is very sensible. If you come from California to Yale--there are a couple of high schools in L.A. that send all of these students to Yale--and you're kind of freaking out when you get to Yale and saying, "All these people are so smart," or whatever, then the next thing you do is you go to people that you wouldn't even say hello to in high school and say, "Why don't we hang out tonight," or something like that. You find people who have origins like you, geographic or whatever, and hang out with them. It's a logical thing. It happens every single time. In the case of people moving to Paris, Limousins, people from the central part of France, live in certain neighborhoods around the center of Paris, the way that Bretons lived around what became the station of Montparnasse. This is chain migration. You see it in Philadelphia. You see it in London. You see it in Moscow. You see it in St. Petersburg. You see it everywhere. But, having said that, that's not the way contemporaries viewed this. Let me just give you a couple of examples of how often well-meaning, but not always, contemporaries saw the phenomenon of urbanization and urban growth. Let's listen to a preacher called James Shergold Boone, minister of St. John's church in the Paddington district of London. This is his sermon. When he's talking about cities, he's evoking Sodom and Gomorrah.
"The very extent of edifices and the very collection of vast masses of human beings onto one spot, humanity remaining what it is (bad is what he means), must be fraught with moral infection." They continually use words like "infection," that there's a biological inequality of people with each other. The poor are biologically less likely to, in a world where Darwin and post-Darwin misuse of Darwin is very important, to survive the challenges of illness of disease. In illness, the cholera carries away poor people more than rich people. Going back to the good minister, Cities are the centers and theatres of human ambition, human cupidity, human pleasure. On the one side, the appetites, the passions, the carnal corruptions of man are forced in a hotbed into a rank and file luxuriance, and countless evils which would otherwise have a feeble and difficult existence are struck out into activity and warmth by mere contact with each other. On the other side, many constraints and safeguards are weakened or even withdrawn. This is what I mean by leaving those imaginary rural roots in which solidarity was considered to be automatic and family support so very, very important. "In cities there is a complication of evils. External forces cooperate with inward desires." You can conquer those inward bad desires in the countryside, but all is lost in the demon rum of moral corruption. The reality is from urban life, etc., etc. Sir Charles Shaw, who was police chief of Manchester, described the residents of industrial cities. He called the residents of industrial cities such as his own as "the debris which the vast whirlpool of human affairs deposited here in one of its eddies, associated but not united, contiguous but not connected." That's a perfect description of the uprooting hypothesis. There's nothing you can do to save people from themselves. 
To take an example from Paris, because I'm going to talk about Paris in a while: in the 1830s a certain Vicomte--quite forgettable--exclaimed, "How ugly Paris seems after one year away. How one stifles in these dark, damp, narrow corridors which you are pleased to call the streets of Paris. One would think that it was an underground city, so sluggish is the air, so profound the obscurity. In it thousands of people live, bustle, throng in the liquid darkness, like reptiles in a marsh." Or Victor Hugo: "Cities, like forests, have their dens in which hide all the vilest and most terrible monsters. All is ferocious." It couldn't be any more condemning--particularly in Germany, with the whole sense of having a hometown, of being attached to a particular space, and all of the corruption that comes from big cities. You see this over and over again. Take New York. Here's another reverend, the Rev. Amory D. Mayo, who attacked the city: "All the dangers of the town may be summed up here: that here, withdrawn from the blessed influence of nature and set face to face against humanity, mankind loses his own nature and becomes a new and artificial creature, an unhuman cog in a social machinery that works like fate." Again, in Émile Zola the theme of fate is terribly important, of destiny, along with, as you'll see, people who, if you've read other Zola, have the bad genes. They've got the drinking genes or whatever. You find all this appalling stuff. Here's another one, from Emerson's 1844 lecture, "The Young American": "The cities drain the country of the best part of its population, the flower of its youth of both sexes. They go into the town and the country is cultivated by a so much inferior class." Et cetera, et cetera. That was the image. And, of course, all of the revolutions have a lot to do with that. Simon, I can't find my wand. Is it up there? It's not? Voilà. Can you do it? Thanks. Let's look at an example of this. I'm sorry, I can't click.
I'm just going to have to wave my hands frenetically, or something, or jump up and down. I want you to look at Paris and think about what I've said. This is Paris in 1839. Compared to London--I don't have to talk about this. In London you can walk miles and you're still staggering around looking for the next Tube stop, because it's about three times as big, at least three times as big physically. In Paris in 1837 you could walk from the Arc de Triomphe, which is up on the left, to the other end in about an hour and a half. This is before the inner suburbs are annexed, for tax reasons, but also for reasons of imposing order on the troublesome periphery. This is part of the lecture. What you've got--if you know Paris at all; it doesn't matter if you don't know Paris--here's the Garden of Luxembourg. You've got the Tuileries up along the Seine there. You've got your basic Seine river. You did have three islands; now there are but two. They cemented that one over. The Île Saint-Louis there, which is now one of the great tourist traps in western civilization, but it's still beautiful. There's the Cité with old Notre Dame right in the middle. You've got an enormous population. You've got in the central districts three times the density of population that you have now. We have an apartment in the Marais a couple of blocks up from Notre Dame. The density there in the 1840s, in this neighborhood which is called the _______, was three times what it is now. You've got this enormous implosion of people into the center from the provinces. Next one, please. Voilà. The reason I put this up here: this is the first photo of Paris. This is what you call a daguerreotype. I can identify this--I'm sure I'm not the only one--but this is a faubourg. It's an English word as well as French. It means a sort of extension of the town. It's a difficult etymology. It doesn't really come from faux bourg, "false town," but from other things. It's beyond the walls. But this is the first one.
This is also from about 1837. Now, Haussmann, the Alsatian Attila--he built these boulevards that became the staging ground for the so-called Belle Époque. But there were already boulevards around, because the boulevards were where the walls had once been. Vienna is a great example. The rings around Vienna were where the walls had been that were knocked down as the city grew. In Berlin you find the same thing. Next, please. This is cheating a little bit. This is just unhealthy. This is an eighteenth-century hat. Look at the guy. He's a nineteenth-century guy looking progressively forward to the nineteenth century with his bourgeois outfit. This could be the Restoration, actually. He's going to get hit. "It's raining," he says, but it's not. It's a chamber pot being dumped on his head. In the 1860s only about ten percent of buildings had water above the second floor. Next, please. This is by a guy called Charles Meryon. It doesn't matter. The point of this is that this is the Cité. There's Notre Dame there, before Viollet-le-Duc put his big spire on it. The point is that before the rebuilding of Paris this was among the most densely populated parts of Paris. At the end of the period, that is, 1870, it's the least densely populated part of Paris. Because--remember capitalism and the state? They build government buildings. The hospital is one of them. The Prefecture of Police, which had all the fighting around it in August 1944, was built as well. These places disappear. The morgue--one of the more important morgues was here also. That, again, gave the idea of the disease of the center city. Next, please. Name those people. There we go. Paris on the left, it's 1850. Paris on the right. They're all running together. You see there are a lot more people. You have the filling-in of the center. You've got the periphery with the emergence of these suburbs. It's the suburbs that have increasingly poor people living in them, who can't afford to live in the center of Paris.
That's the big difference. That's a big, big difference. You have a customs barrier around Paris. Every time you bring things into the city, or any other French city, you have to pay taxes on them. A six-pack of beer, you pay a tax. You have more space outside the city walls, so that's where you build factories. You're nearer to the canals and to railroads, so that's where you build factories. The center city forced the unwanted industries, the dirty industries--soap, chemical, etc.--outside. Paris doesn't de-industrialize. You still have the garment industry. But the dirty industries, the big industries, stay on the outside. That's where your labor force lives. How different that is from Philadelphia or Detroit, where you already had all this space and you've got the population moving into big central areas and essentially staying there. To be sure, in the United States, we have places like a very small part of New Orleans. You've got San Francisco. You've got Beacon Hill. You've got Manhattan. You've got places where you've got a lot of wealthy people living in the center city. But places like Detroit are really more representative of the American experience. Next, please. I can remember when I was a kid, when I was younger than you, I think, going to a Yankee-Tiger doubleheader in 1967. Roy Whit played third base. We walked out and the city of Detroit was on fire because of the riots. It's still all burned out there. One of the interesting things was Grosse Pointe, Michigan, which is a very fancy place. We have a lot of students here from Grosse Pointe. I'm not dissing Grosse Pointe, Michigan, but the municipal council tried to figure out a way that you couldn't get to Grosse Pointe from the center of Detroit unless you had a map, unless you really knew how to get there, to try to keep "them," that is, the poor people of the center, from going to the suburbs.
The upper classes of Grosse Pointe armed themselves with their hunting rifles, just as the constables had in central London in 1848. The spatial juxtaposition is incredible. Next. Oh, here we go. Those were lots of people in the center. The theme here is overcrowding. Here is the Market of the Innocents, as it's called. Now it's down by the Forum des Halles. Lots of people. Then the big market Haussmann built would later be torn down. Lots of people in the middle. That's a theme. Next, please. The Rue Pirouette in 1860. These are really old medieval buildings. A lot of them get destroyed by the rebuilding. Next, please. Voilà, this is a good one. This is one of the streets that would disappear. You could say, "What the hell is he talking about? Where are all those people?" If you have really good eyes, you can see that there are a few people here; because of these long exposures, there are actually some people standing there. This had already been condemned to be destroyed, rather like the Rue Transnonain. It was part of the collective memory of the massacre there. What is this in the center? It's a ditch down which sewage went. There were some sewers in Paris, but it was very unhealthy. In these areas--this is right near the Panthéon, that is, sort of in the center-left, eastern part of Paris--people just get destroyed by the cholera in 1832, and again in 1849, and again in the 1880s, in 1884. Fundamental inequality before death for the poor. This street--this is actually a great photo. This is by a guy called Charles Marville, who went around and took pictures of these neighborhoods that were going to disappear. Don't worry about the names. Don't worry about the streets. I'm trying just to make a point. Not everybody lived on those streets. In the Second Empire, that is 1852-1870, under Napoleon III, people lived it up. A lot of Zola's novels are really amazing about that. He didn't like Louis Napoleon terribly much. This is living it up in a big banquet in a big hotel.
Already you can see the wine glasses. They're starting to put the wine glasses in connection with the food. That's something that comes in the nineteenth century, the idea of having red wine with cheese. You do have white wine with goat's cheese. Or having white wine mostly with fish and fowl and that kind of thing, and these long, elaborate banquets of one course following another. I can hardly condemn that, having sat through a few thousand of them myself. Anyway, there we go. Fancy people. Next, please. This is revolution in 1830. The bourgeois guy doesn't belong there. He never went out there anyway. Marianne--this is a highly romantic view of death. Here's your street urchin there, who's fighting the good fight in 1830. This is Delacroix's famous Liberty Leading the People. This is an enormous painting. These people look like they're kind of playacting. They're kind of saying, "Oh, I'm dead now." A few minutes later they're going to get up again. This is very romanticized. Next, please. Compare this to Meissonier. This is a very underappreciated painting. Again, the name doesn't matter. It meant something to him and it does to art people, but this is called The Barricade. This is only eighteen years later. This is 1848. This is real death. These are ordinary people. Again, look at the gray-green, the sick colors. This is an affectionate look at people who are dying for a good cause in the center of Paris. Louis Napoleon, Napoleon III, didn't want revolution again. Since there were barricades in so many of these revolutions--barricades begin with the Day of the Barricades in Paris in 1588, or something like that, before this course starts up--he said, "Let's build the boulevards so wide that you can't build barricades across them." In 1944 and in 1968 barricades were often in the same places where the revolutionary barricades had been in 1789 or in 1848 or 1871. These guys, that's Napoleon, the guy with the pen.
That's Haussmann, who was born in Paris but had Alsatian parents. He's in the middle. Louis Napoleon flops a map down and says, "Build big boulevards through this teeming city," "teeming" used for the third time. He does that for three reasons. One, two, three. One, to bring more air into Paris, more boulevards with sewers underneath them. Boulevards mean better transportation, more light. Secondly, he does this to increase the flow of capital. It's not a coincidence that department stores are built on these boulevards. Some of them are still there. The shopkeepers not near the department stores got wiped out. They were really mad. Very extreme right-wing voting at the end of the nineteenth century. The ones near the department stores do very well. Third, and Haussmann says it in his memoirs, Louis Napoleon wants these built so you can't have barricades. He builds these boulevards around and through the traditional revolutionary areas. The result is lots of people pack up and they leave. Next. We're going to go through the next ones fairly quickly. Here's Paris in 1855. Belleville up there, or La Villette, or Montmartre there, which has a god-awful church, Sacré-Coeur, built on it after 1871. Those were annexed as suburbs in 1860. It's inner suburbs annexed into Paris. This wall here is the limit of Paris today. You still have people farming in Vaugirard and these places, Grenelle. All that will stop. By 1870 this is all sort of packed with people. The people living in Belleville, which is on a hill, were people many of whom were forced out of the central quarters by high rents. They are the ones perched dangerously, from the point of view of the center, on the periphery. Ironically, as the middle classes moved further and further west, they lost their contact with ordinary people. A lot of the stuff they're reading is based upon--they don't really see those people up there.
Maybe they walk down the hill--they can't afford to take the horse-drawn carriages, the omnibuses--to be servants in their houses. These spatial things are very interesting. You find them in other cities, too. Here, they take the map and they say, "Let's build boulevards." You don't have to know anything about Paris to know that there wasn't any north-south--north is up there; south is down toward me--thoroughfare, and they built Boulevard Saint-Michel, Boulevard de Strasbourg, Boulevard Saint-Denis, which goes up to the station of the east, with the station of the north right to the left. Étoile, up here--the boulevards help create the kind of star notion. Étoile means star. It looks like a star around the Arc de Triomphe. And the big grand crossing that I'll show you in a minute gives you the west-east axis, as well as some other ones, too. They do a lot of building. They knock down a whole hill. This is now where the Palace of Trocadéro is. Lots of work employing lots of people, so the building workers liked them. In 1855, let's look at--you're walking through the Gallery of Machines at one of these world fairs. Because Victoria had one in 1851, Louis Napoleon's got to have one, too. He has two. He has one in 1855 and another one in 1866 or 1867. You're walking along here. You can look at paintings. Above all, you can look at machines. You can look at things you can buy. That's the principle of these expositions. Paris is on stage. That's the principle of a department store. You can walk through a department store and you can buy forty-nine different kinds of shawls, ranging from very cheap ones to very expensive ones. It's the same principle. The boulevards are really an extension of, as my friend Philip Nord once argued and so have a lot of other people, these department stores themselves. They become sort of a staging area for what became known as the Belle Époque.
It wasn't so belle for people who didn't have any money, because those were hard economic times. These department stores still exist. The BHV, the Bazar de l'Hôtel de Ville, still exists in Paris. Bon Marché. There's a terrific book on the Bon Marché by Michael Miller. It's still there. Zola called them "the cathedrals of modernity." Already in the 1850s they had singing groups singing Christmas carols at Christmas. People would come. They couldn't afford to buy anything and would just be part of the spectacle. Paris became a spectacle in itself. The boulevards were part of this. These aren't very good prints, but he was so important that he became a verb--to "haussmann" something was to bulldoze it. Maybe to "Merriman" something would be to drink a good bottle of Côtes du Rhône. Maybe one day that will be mine. Maybe I'll get a French verb. I'm just kidding. But anyway, haussmannisation is to bulldoze something. This is called the haussmannisation of a neighborhood. Here, these people are getting the equivalent of about ten dollars, forced out of their houses. They've got their dog. They've got everything they own there. Look at the mattress. A mattress is the last thing you ever pawned. There's the mattress. They're leaving. The next one, please. This is called Haussmann Part II. Here you've got your Haussmannian vista. You've got the big boulevards with all these not then so fancy balcony railings. The real fancy ones come later in the Third Republic. This is really Saint-Augustin. It's a hideous church. Some people really like it. My dear friend Bob Herbert, the art historian, thinks it's not bad. This is the haussmannisation of a neighborhood, part two. Next, please. You've got people building. That's the Tour Saint-Jacques, which is still there. A lot of these little teeny people are the builders, who are from the central part of France. Everybody's aware of this building.
Here you can say, "How are we going to get from Point 1, which is down by the Seine, the Rue de Rivoli, which has been expanded? How do you get to this new opera they're building?" First you tip off your friends so they make a lot of money on the deal, knowing what to sell and what to buy. Then you take a ruler. You didn't have to be an architectural genius. You draw a straight line. That's what I mean by "the imperialism of the straight line." Next, please. You're getting to the opera. There you can see it rising up, Garnier's opera. It rises up out of the smoke, out of the cloud of destruction. I'm getting carried away. Next, please. There it's being built. There it is. You are here looking at what now is the largest concentration of pickpockets in the western world, because there's an American Express right near there. You see all these Americans. They've got their big wallets. They say, "Where's the American Express, dear?" It's right over here. Voop! Their wallets are out of there before they hit the top step of the Métro. That baby's gone. Anyway, that's the Place Vendôme down there. Next, please. There it is in 1900. Here again, the imperialism of the straight line. There you've got--maybe I'll do a thing on blowing up this building here. Sometime we'll come back to this maybe. Anyway, there you go. Here's the great crossing in the center of Paris. You're crossing from the Île de la Cité here and you're going up. Here's the Gare de l'Est, the station of the east, there. This way you're looking down east. There's the Tour Saint-Jacques. The only point is that they expand this down toward Saint-Antoine and the Faubourg Saint-Antoine, where the revolutionaries were in 1848, and where they were in 1789, and where they were in 1792 as well, and at other times in the French Revolution, 1830 for example. They're down there. That's the big crossing point, la grande croisée. There's the Gare de l'Est. When I see that station it's so sad.
That's where all the people drinking champagne went in 1914, shouting, "À Berlin!" Just as in the Hauptbahnhof in Berlin they were shouting "Nach Paris!" Of course, they don't come back. So many Jews were deported from here; if they were sent from the Gare du Nord to Drancy, or if they were on the direct line to Auschwitz or the other death camps, they went through this station. Next. Arrested by French police, 1942-1943. He builds Les Halles, the big market. My friend Vincent Scully, who teaches in here afterward, would do a better job on this, but I do remember the first time I was ever in Paris. I don't know how old I was. Younger than I once was and than I am now, or younger than I'll be, or whatever the line is. I met somebody on a bus in Germany. She was a sculptress and she kindly invited me to stay in her apartment. It was all on the up and up. It was no problem. She took me down in the middle of the night to see this place, and to see the restaurants where you'd see the wealthy people eating, and the butchers with their smocks covered with Beaujolais and also with animal blood. It was really cool. Then they tore this down. They tore it down to build a whole bunch of unsuccessful places--the filthy destruction of a monument. We need Vince Scully here to do this. People chained themselves to these things. They said, "Can't you leave just one so people would know what these were like? Can't you leave just one?" They said, "No, we can make some money here. We can get in--soon we'll have McDonald's all over the place. We'll make some money." They got rid of them and there's none of them left. It's a tragedy. Haussmann built those. I'm not a big Haussmann guy. Here's a building that's more Third Republic. These are the buildings that he built along the boulevards. Next, please. I've got to rocket on. If you follow art history you know there's a very famous painting by Caillebotte that shows the anonymity. It's called Paris with the Effect of Rain.
It's right here on this place, where two boulevards come together. That's part of the thing, too. You've got your basic middle-class people. They're carrying umbrellas. They are disconnected. They would never say hello to each other, and they're crossing this intersection that becomes part of the heartbeat of urban life, or something dramatic like that. Next, please. You've got these boulevards. This one already existed, because it was where the walls had once been. This is the Boulevard Montmartre. Next, please. Oh, man. There was supposed to be another one there. Anyway, no problem. My bad. If you were flying around overhead, you would look down here. Here is Notre Dame. There's Les Halles up in that direction. That's the church of Saint-Sulpice. You see the big crossing points, the big boulevards that have been built. Here's the big old crossing point there, the Tour Saint-Jacques. A bird's-eye view of all of this. What I left out was Camille Pissarro's image, painted from that same point. The impressionists painted these views because of their interest in light and all of that. There are a lot of important paintings of this. Renoir didn't like these boulevards. He said they're lined up like troops at a review. That's the most appropriate image of the centralization of state power. Speaking of state power, what happens in the Paris Commune in 1871 is that ordinary people in Paris take up arms, as you know. They build barricades across these places. Next, please. Then the troops of the provisional government come in from Versailles, appropriately enough, and they use these same boulevards, the extension of the Rue de Rivoli, the Rue Saint-Antoine, to go and gun down ordinary people, 25,000 of them. Welcome to the twentieth century in 1871, when you were guilty for who you were. "À Paris tout le monde était coupable." "In Paris everybody was guilty," said a prosecuting attorney. Next, please. But the spatial aspects of this are important. This is right.
That's the Madeleine. There's just a lot of destruction. Here's Manet's depiction of women being shot. There were these rumors that women incendiaries were burning down the buildings of the wealthy. Les pétroleuses, the female incendiaries. Manet did one and so did Courbet. Next, please. Finally, that's real death. Those are rather small people in tiny caskets who have just been mowed down because they were who they were, that is, poor and in people's Paris. They systematically targeted areas like Belleville, because they were identified with the left. If we could speed through these next few, then we'll be out of here. Next, please. Here again, those are the boulevards with the old gates. Here's what I mean by east-west. In the northwestern part of Paris people were moving into slightly better buildings. In the northeast, more rural-looking ones just on the outside--the laundry of Combat. We know this is after 1900, because there's a reference to the Métro. The Métro didn't open until 1900. So, this is probably about 1912, actually, there. People on the periphery coming in to pawn their mattresses. Here are the gates when you were outside. Again, why are all the factories on the outside, all the ordinary people? Because life was cheaper out there. That's why beyond Montparnasse are all these café areas that are still in Paris now, but were once out there, because it was cheaper to be out there. At the end of her sad, short, drunken life, Gervaise in L'Assommoir goes out to hook on the periphery, on the boulevard. She gets poorer and poorer. Zola was so well aware of the spatial concomitants of all of this. This is what it was like passing through the barrier. It's outside that the red belt exists. It's outside of all these cities. Vienna is a classic example. In the 1930s you've got the army blasting, firing cannons against the working-class housing perched on the outside of town.
"Sires," said one of the ministers to Louis Philippe, "those usines, those factories that you are allowing to be built around Paris, on the outskirts will be the cord that strangles us one day." It's on the outskirts in these industrial suburbs that had once been producing cherries for the urban market and fruit, but now were their factories. It's there that the Communist party did so well in the 1920s and 1930s, and even beyond. They provided social services. They defended people. They were called the mal lotis, people that had inappropriate places to live. So that, again, was what I meant by center versus periphery and the sense of not belonging to the center, of not belonging to the center. You see the same thing with people living inside of American cities, of not belonging to prosperity. It can contribute to a formation of a counter-society, of kind of a sense of not belonging that creates a sense of belonging. As we are rejected, we too can become powerful. The spatial aspects of this are terribly important. Look at the riots in the suburbs in 2005. We don't have time to talk about that, but that's a fascinating thing. Different people who are marginalized by the center, large populations of North Africans, and of West Africans, and people from the Caribbean. It's the same phenomenon, the center and periphery is there. Last, and I think I've pulled it off, if you went westward, this is a Monet. This is one of the many regattas at Angers. You went westward for pleasure, not eastward. You went further and further so the middle classes, particularly in the western part of Paris, plant their flag defiantly in Normandy, in Deauville, and in Angoville and all of these places there. It's there that the impressionists paint the Parisian upper classes, who when you go to Deauville you still see all the 75 license plates and the 78s from the Paris region. It's still the place. There's a social geography of leisure, too, that develops in Paris, as in these other cities. 
It happens so remarkably in what was not only the bourgeois century, not only the rebellious century, but above all, the urban century where the way in which people lived in very important ways was transformed. Thank you very much. Good luck on the midterm. See you next week. See you on Wednesday.
Lecture 13: Nationalism
Prof: It's kind of a complicated lecture today. I want to talk about nationalism and I do so with a skepticism that you'll quickly pick up on. Aggressive nationalism helped unleash the demons of the twentieth century, beginning with World War I, which unleashed even more dangerous demons after that. I want to talk about nationalism particularly in--a little bit of France, but in places that one doesn't usually consider. I'll end up drawing on my friend Tim Snyder's work to talk a little bit about Lithuania and Belarus, and why their nationalisms were very different and, in the second case, didn't really exist at all in the nineteenth century. And I'm going to give a counterexample, which I treat in the book, which is the Austro-Hungarian Empire. It's funny, because one couldn't have imagined in the 1970s looking nostalgically back on the Austro-Hungarian Empire, this polyglot Habsburg regime. But the horrors of the Balkans really made lots of historians and other social scientists look back and try to figure out how it was that--instead of asking why it was the Austro-Hungarian Empire collapsed during World War I, or really at the end of World War I, turning the question around and saying, "How did it hold together so long?" So, the Austro-Hungarian Empire is sort of a counterexample to these nationalisms. One of the things that brought the empire down, along with the war, was competing national claims from ethnic minorities within those vast domains. I want to start with a story. It's a book I read maybe five or six years ago. Histories have their histories, so I'm going to tell the history of this particular book. You'll see kind of what I'm getting at. By the way, I sent out--one of you had a great idea, emailed me saying, "Why don't you send out the terms before the lecture?" That was a great idea. I'd never thought of that. I did it last night, though I didn't put this particular book on it.
Anyway, the book is Anastasia Karakasidou's Fields of Wheat, Hills of Blood: Passages to Nationhood in Greek Macedonia, 1870-1990. When I say that histories have their own history, what I mean is the following. In this book, this anthropologist, who is of both Turkish and Greek extraction on either side of her family, is writing about a small part of Macedonia. Macedonia, of course, was heavily contested for centuries. A trade route went through it. In Macedonia there were Turks, and there were Serbs, and there were Bulgarians, and Macedonians, and Greeks. For centuries they had all basically gotten along, as that part of the Balkans, as you know, in the past was under the Ottoman Empire and then, through a whole series of arrangements, of wars, the Balkan wars before World War I, passed back and forth. Essentially, one of the points of the book is that basically people got along very well, but that gradually, amid competing national claims, that part of Macedonia became seen by Greeks as part of greater Greece. Whenever you hear the term "greater Greece," or "greater Serbia," or "greater Germany," or greater anything, look out. What that means is that in the imaginary, in the view of nationalists, particularly aggressive nationalists, parts of the territories that have large percentages of a certain ethnic group--or even in some cases only minorities, but in other cases majorities--should be included, come what may, in the greater state of that particular ethnic group. Take the example of Kosovo, where about eighty-five percent of the population is made up of Albanian Muslims. Kosovo was part of Serbia. When Milosevic was talking about "greater Serbia," greater Serbia for him could not exist unless Kosovo, with its eighty-five percent of people who weren't Serb, was included in it. Anyway, that's another story.
What happened with this particular book is that when it was in manuscript, arguing that basically the idea that Macedonia was Greek was a construction, was an invention, an invented identity by Greek nationalists, the press, the university press--I guess since this is being recorded I shouldn't say which one that was--chickened out and decided not to publish the book. At one point they got a bomb threat from Greek nationalists saying that, "If you publish this book, we will blow up your offices in Europe." So, they chickened out. In an example of just utter, craven cowardice, they refused to publish the book. They wrote to this author, whom I don't know--I've read the book; it's a really terrific book--and said, "Sorry. We're not going to publish your book. Too bad, contract or no contract." So, University of Chicago Press published the book, and when the book came out this particular author received a lot of hate mail. She received a picture of herself with a picture of a Greek flag stuck through where her heart would be. These are fairly serious threats. The point of that is not to jump on Greek nationalists or on Serb nationalists, though certainly the Serb ultranationalists have done just an incredible amount of damage in the Balkans over the past decades, but merely to underline the point that national identities are constructed. They're invented. They're, in a way, imaginary. One of the most interesting sort of historical things you could do as an historian is to try to figure out, from where do these identities come? Language plays a large part in it. Maybe if I have time, because I've got to do a lot today, but this is more of a conversation than a lecture. If I have time I might talk a little bit about language in the case of France. But, in doing so, like most people talking about nationalism, I'm drawing on some of the thinking of Benedict Anderson, and his concept that nationalism and the construction of national self-identity represent "imagined communities."
Basically, if you consider yourself a member of X nationality, you are creating links, or you are agreeing to links, with people whom you don't know, people who live in Portland, Oregon, or people who live in Albuquerque, New Mexico, or people who live in New Jersey, even though we are sitting here in Connecticut. One of the useful aspects of Anderson's account is yet again to look back at the construction of nationalism to see that here we have that old story. It's states and large-scale economic change that are the two driving forces in the construction of national identities. I've gone on, in at least two lectures and part of another one, talking about British national identity--and I'm certainly not going to go through that again, except to say that it was precociously early, the sense of being British. I also argued along the lines that we can now, at least for elites, say that French national identity began to be constructed by at least the middle of the eighteenth century. When you think of the real hotspots, the real trouble spots of the twentieth century, when you think of the origins of World War I, which we will be doing and thinking out loud together over the next couple weeks, we will be considering Eastern Europe, Central Europe, and the Balkans. What's important to understand, and this is a reasonably decent transition from the initial discussion of this anthropologist's excellent book, is that in most of those places there was no sense of national identity--of being Slovene, of being Czech, of being Croat, of being Bulgarian, of being Ukrainian or Ruthenian (the two are essentially the same)--until quite late in the nineteenth century. Part of what's going on in Europe between the 1880s and 1914 is an incredible "advancement," if you want to call it that, in thinking, with the emergence of ethnic national identities competing and demanding their own states in that part of the world.
When, in late June 1914, a nineteen-year-old heavily armed guy, a Serb nationalist--I once put my feet, which no longer--my feet still exist, but the steps in Sarajevo no longer exist because of all the bombing--in the place where Princip shot Archduke Franz Ferdinand, the assassination that led, because of these entangling diplomatic alliances, to World War I. He was someone who practically could not have existed in the middle of the nineteenth century, even though among Serb elites there was a national sense. I'm going to give you some examples taken from Anderson of even the publication of the very first dictionaries in languages that now are quite common for us to identify with ethnic national states. In fact, some of these languages did not even have their own written dictionaries until the middle of the nineteenth century. That's not so long ago. Nationalism has to be constructed. A sense of self-identity has to be constructed. That's what I want to talk about. Let me say something at the beginning. Because of the French Revolution and because of the development in Europe and in other places of parliamentary regimes and democracies, it's fairly common to think, "National self-consciousness equals a desire for national states, and you can't have that with a monarchy." That's not really true at all. That's influenced, for example, by the experience of the United States. In the United States, in the thirteen colonies, English was overwhelmingly the language. They are rebelling in 1776 and all of that against other English-speaking people who happened to have a monarchy. So, "no taxation without representation" really became also a kind of anti-monarchist sentiment.
If you think of the Spanish, the rebellions in Latin America against Spain, there, too, though there were millions of indigenous peoples who did not speak Spanish, basically it was a rebellion of Spanish speakers against a monarch that was Spanish-speaking, in the case of Spain. If you think about really extreme ethnic nationalism at the end of the nineteenth century, you think of two states which helped kind of push the world to the catastrophe that was World War I; one has to point the finger at both Russia and Germany, which had autocracies. This is jumping ahead a little bit, but I'm providing you an overview. For example, the campaign of Russification that was undertaken by the Russian czars--this is jumping ahead a little bit--a brutal campaign against non-Russian minorities, was, in part, a response to rebellions within the Russian empire by Poles, for example, who rise up in 1831 and in 1863 and are crushed like grapes. In 1863, Bismarck, the minister-president of Prussia, congratulates the czar for stomping on the Polish insurgents. But the campaign of Russification was part of the re-invention of Russian national identity. When I talked about Peter the Great, I talked about how he saw himself as this great Russian patriot. Well, aggressive Russian nationalism picks its targets rather systematically in the campaigns of Russification. The big pogroms, the massacres of Jews in Odessa, in Crimea, and in other places, are cheered on by the Russian czar, by Nicholas II, whom I will talk about when I get to the Russian Revolution, who saw this as a healthy thing, that the Jews are being beaten to death by real Russians. This was part of his campaign of Russification. In the case of Germany you've got this madcap loser, Wilhelm II, cracking bottles of champagne, or not of champagne, but of Riesling, as I said, over big speedy battleships and all of that.
Nobody was a more aggressive nationalist than Wilhelm II, the Kaiser, who kept saying rather disingenuously that he was "the number one German" and all of that. We can get rid of the idea that strong national identity necessarily has a parliamentary outcome. In the case of Britain--we're not going to talk about Britain too much, but the case of Britain is pretty interesting, too--there you have a monarch without real power. Victoria represents in the imaginary of the British citizens the stability and the constitutional settlement of the British Empire. Yet, a couple of points need to be made. Language is important in all of this, though not always. Maybe if I have time I'll give a Swiss example later on. Basically, in the case of Russian and German nationalism, and French nationalism and even Spanish nationalism, because of the dominance of Castile, one looks back to the time when national languages, which already existed, are used and become identified with the self-identity of a national people. Now, Latin was the language. Latin was the language of science, of diplomacy, of everything. Part of what's intriguing and important about the scientific revolution is that vernacular languages begin to be used as a way of communicating scientific discoveries. There's a little bit in that chapter that you read about that. Certainly, language is closely tied to national self-identity. Nationalism is at its most aggressive and most vulgar when very ordinary people, whipped up, egged on, or in some way urged on by elites, begin identifying people who don't speak the same language as somehow not part of this imagined community. An obvious example would be all the Hungarians who, after the Treaty of Versailles in 1919 and the subsequent treaties named after Paris suburbs, are included in Romania and are treated as outsiders.
This is very important even in the origins of the 1989 revolution that brought down the dreadful Ceausescu dictators in Romania. Anyway, the vernacular develops. If you exclude the cases of Latin America rebelling against Spain and the Americans rebelling against the British, the development of these languages, their use, and their identification with this imagined community is obviously a very important part of this as well. With that development comes the concept of being a citizen. This is one of the many reasons the French Revolution is so important. You were no longer the subject of the king; you were a citoyen, or, if you were a female, a citoyenne. Citizenship takes on this kind of linguistic aspect as well. During the French Revolution, there was a revolutionary priest called the Abbé Grégoire. I think I mention him in the book. He thought that all of these regional languages should be squished like grapes, because somehow they stood in the way of a true French national identity. Language is so terribly complicated. The case of Italy is in some ways a counter example--I think I said this before, but it's true. At the time of the Italian unification, only about four or five percent of the population of Italy, of the whole boot and Sicily, spoke what is now considered to be Italian. The case of France, which I know more about, is equally fascinating, because at the time of the French Revolution half the French population did not speak French. There was a lot of bilingualism, but they did not speak French. If you imagine a map of France--and I think I went through this very quickly before--if you start at the top, they spoke Dutch in Dunkirk and places like that. If you move over to Alsace and much of Lorraine, they spoke a German dialect there. That would be a majority language until well after World War I. 
How the French tried to get rid of the German language is another story, a sort of national aggression, even in the context of Germany's defeat after World War I. If you move further south, as you go to Savoy--don't write this down, but Savoy was annexed to France in 1860--people spoke essentially Piedmontese, which is the language spoken in northern Italy, in the strongest state of Italy, Piedmont-Sardinia. Then you go further down and they spoke what? They spoke Provençal. Provençal, as in Jean de Florette and Manon des Sources, and these Provençal poets setting up at a place called Les Baux, freezing in the winds of the mistral and reading each other Provençal poetry. Then you go to Languedoc and they spoke Occitan, the language of Oc. It's a southern French language. It's a written language. You go to Catalonia and they spoke Catalan. No surprise there. You go into the Basque country and they spoke Basque, which is related to no other language at all; Basque, Finnish, and Magyar are the three hardest languages in Europe. How the Basques got there is another whole story. We don't really know. If you go north, they spoke Gascon. If you go into Brittany, they spoke Breton, which has nothing to do with French at all. Even in places that didn't have separate languages there were patois. Patois is a sort of denigrating term. "Well, they speak patois." In other words, they don't really speak French. In central France they spoke one patois. In the Limousin they spoke another patois that was related to that one. Even in the Loire Valley people spoke patois. This did not condemn them to eternal backwardness. On the construction of French national identity, there was an argument made a long time ago by my late friend Eugen Weber that all of French national identity had to be constructed between 1880 and 1910, because of railroads, military conscription, and education. Railroads, military conscription, and education. It's easy to see how that would work. 
In fact, he missed one of the complexities of this glorious country, which is that lots of Breton soldiers didn't learn French until they were in the trenches--if they were lucky enough to survive--in World War I, and they still spoke Breton in the 1920s and 1930s. There are still old ladies in Brittany who speak Breton and whose command of French is a bit problematic. In Corsica there are still many people who speak Corsican. They may or may not feel like they're French. On bilingualism, just as a little aside: in the village where I've spent almost half my life, over the last twenty-five years or so, people spoke patois and not French through the 1930s. That has really sort of disappeared. Now older friends of ours understand patois, but they don't speak it. I had something from a book that I needed someone to look at, to make sure that what I'd written in patois was correct--not that I wrote it, but I took it from something. My friend, my boule partner, Lulu--his parents spoke patois as their main language, but he couldn't correct it. Those languages are disappearing. The point of all this is that, the more we know about national self-identity, the clearer it becomes that it's possible to have more than one identity. It's also just a leap of faith to assume that if you ask people, "Who are you?" the first thing they're going to say is "Well, I'm German," or "I'm French." They may say, "I'm from this village," or "I'm from this family," or "I'm from this region," or "I'm Catholic," or Protestant, or Jewish, or Muslim--some response like that. But yet when we think of nationalism, we think of these languages as motors, first for elites and then for ordinary people, to demand that the borders of states be drawn in a way that reflects their ethnicity. After World War I, in the Treaty of Versailles, you've gone to war over the whole damn question of nationalism. 
All these millions of people get killed, dying in terrible ways--gas and everything else, flamethrowers and machine guns and all this stuff that we'll talk about. And, so, they say, "If we draw the lines around these people and give everybody a state, that will be cool. Then we won't have wars anymore." So, they get all these big maps and these mapmakers and they try to draw these state boundaries after the collapse of the four empires. It doesn't work. You can't do it. You've got winners and you've got losers. If you're going to punish the losers, like Hungary, then you leave Hungary this small country with much of its population living on the other side of borders, either imagining that those lands should still be part of Hungary, or wanting themselves to live back in Hungary, where there would be nothing for them at all. Yet, in the period we're talking about and the period I began with, you've got this mobilization of elites saying, "Holy cow! We need our own state." Remember a line I already gave you a lecture or two ago: all these Czechs sitting in 1848 in a room like this, not quite as nice. They say, "If the ceiling falls in, that's the end of the Czech national movement." Between 1848, the springtime of the peoples, and 1914, you have millions of people who, a couple decades before that, had absolutely no sense or very little sense of being Slovene, or Slovak, or Croat, or whatever, who are suddenly making national demands and wanting to have a separate state within the context of--or to be independent from--the Austro-Hungarian Empire. One of those people was the nineteen-year-old Princip, who blows the brains out of Franz Ferdinand and his wife when their car backs up the wrong street in Sarajevo, although some of his friends were out there trying to get him, too. That's just a way of kind of thinking about that stuff. Let me give you a couple of examples here I wrote down. 
Ukraine is a huge country, a hugely important country, with a very contested relationship with Russia now because of Crimea--Ukraine having gotten Crimea, and Russia wanting to have Crimea--and all of this. It's a highly contested relationship because of the number of Russians who live in Ukraine and all of that. For Ukrainians, the sense that Ukraine always existed is taken as a given. The first Ukrainian grammar book--and this is not dissing Ukrainians or anybody, but I'm just saying that the reality is that the first Ukrainian grammar book--was published not in 1311 or in 1511, but in 1819. The first Czech-German dictionary--if you're going to have a national identity you've got to have a dictionary so you can translate things between German and Czech, and it's a long publication process--is published from 1834 to 1839, A to Z. The first Czech national organization, the one I just described, starts in 1846. That's pretty recent. The first Norwegian grammar book, which distinguished Norwegian as a separate language and a separate identity from, say, Swedish and Danish, is not until 1848. The first dictionary making a distinction between Norwegian and Danish isn't until 1850. That's what I mean about the construction of national identity. You have to have a sense that you are part of this imagined community. Having said that, before I talk about a counter example, let me do this like that. Why not? Let me give you a couple of examples that I hope make the point. These I'm drawing from Timothy Snyder. Let's look at why, at the end of the nineteenth century, Lithuanian nationalism develops. You know Lithuania: the capital is Vilnius, big tall basketball players like Sabonis, who played in the NBA. Why does Lithuanian nationalism rapidly develop, but only at the end of the nineteenth century, while Belarusian nationalism doesn't develop at all until way on--it's even pushing it to say in the 1920s and 1930s? Now, about Belarus--I was in Poland. 
On one of the various times I've been to Poland, there was a huge dinner with all these Belarusians, most of them dissidents, who were there to discuss the history of Belarus, but none of them would claim that Belarus had a self-identity before the 1930s. But Lithuania existed. Lithuania was part of the Polish-Lithuanian commonwealth, which exists basically until the last partition of Poland in 1795, when Poland gets munched, bouffé--gobbled up--by the great powers. Who did these people think they were? They thought they were Polish. They considered themselves Polish. Poles already had a basis for nationalism. They had a written language. They had heroes--Chopin. Chopin didn't go to Paris as a refugee from Russian repression; he went there to further his musical career. But anyway, he wrote lots that had to do with Polish national themes, folklore and all of that. There had been dukes of Lithuania, grand dukes, but the elites didn't accept Lithuanian as a language. If they wanted to get anywhere, they tried to pass themselves off as Poles. Pilsudski--a name you will come back to, who destroyed the Polish republic as one European state after another goes authoritarian in the 1920s and 1930s; Pilsudski, who was the hero of the miracle of the Vistula River, when the Polish army turns back the Red Army at the end of World War I in just an amazing moment--Pilsudski himself was Lithuanian. But he considered himself Polish. He was absolutely a Lithuanian. There was a Lithuanian language, but it was not spoken by the elites. Who spoke the Lithuanian language? It was spoken by the peasants. At the end of the nineteenth century, you've suddenly got all these Lithuanian intellectuals and grand dukes and priests and various people saying, "Wait a minute. We are Lithuanians, and happily, the Lithuanian peasantry has saved our language." The last Lithuanian duke who spoke Lithuanian died before Columbus discovered America, Tim Snyder informed me. 
Now they say, "These Lithuanian peasants--we won't treat them anymore as the scum of the earth. They have preserved our language for us." Suddenly, you have poets writing in Lithuanian. It's no longer a disgrace to be seen as a Lithuanian. One of these poets, a guy called Kudirka, who died in 1899, recalled when he was in school as a smart Lithuanian kid. He said, "My self-preservation instinct told me not to speak in Lithuanian and to make sure that no one noticed that my father wore a rough peasant's coat and could only speak Lithuanian. I did my best to speak Polish, even though I spoke it badly." Polish is a terribly difficult language. There are all these sort of squiggly things. Words aren't pronounced the way you think they're supposed to be. I don't do very well at picking up Polish. "When my father and other relatives visited me, I stayed away from them when I could see that fellow students or gentlemen were watching." He was embarrassed to be, basically, Lithuanian and the son of a Lithuanian peasant. "I only spoke with them at ease when we were alone or outside. I saw myself as a Pole and thus as a gentleman. I had imbibed the Polish spirit." By the end of the century he sees himself as a Lithuanian. He is one of these people who push Lithuanian nationalism, and it is embraced. How does this physically happen? You don't wake up and say, "Yesterday I was Polish and a subject of the czar"--because Poland is divided between Prussia, Austria-Hungary, and Russia, and this is the Russian part, what they called Congress Poland--"but suddenly today I'm Lithuanian." How does that happen? Because Lithuania is next to Germany. This is also something that will make you again think of what I said about the Enlightenment. Lots of literature is smuggled into Lithuania in Lithuanian. Therefore, there's this wild profusion of Lithuanian literature that comes into Lithuania, which of course, as you know, was not independent. It was part of the Russian Empire. 
So, there's another reason, too, which has to do with the Russian imperial secret police and the ones they're really worried about. They're worried about the Poles, because the Poles have risen up in 1831 and in 1863. So they're on the lookout for people who are saying, "Hey, I'm Polish. We want a Polish state." They don't pay much attention--they don't really care--about these Lithuanians who are discovering their own self-identity, who are constructing their self-identity. Why doesn't it happen in Belarus? I don't have time to tell you very much about this, but the main thing is that Belarus is a long way away from anywhere at the time. There isn't any kind of elite in Belarus that embraces Belarusian anything. The language is not seen as part of a national self-identity, which basically does not exist and would not exist until at least after World War I. Now Lithuanians will look back on their country as if Lithuania had always had this sort of self-identity. As for the Polish-Lithuanian commonwealth, that was basically more a Polish operation, and it was a territorial arrangement more than any kind of construction of two peoples participating together. Furthermore, Belarusians were not allowed to publish in their own language, whereas Lithuanian priests began giving sermons in Lithuanian and you've got all this written material coming in in the vernacular. Nobody read Belarusian in church. There were no priests to say, "This is our language." Belarusians who were literate could read Polish or Russian or both, but in many cases not what would become Belarusian at all. By the end of the nineteenth century, when you've got these other people insisting that "we're Slovenes" and "we're this and that," Belarusian speakers called themselves Russian if they were Orthodox. They called themselves Polish if they were Roman Catholic. If they were simply looking out for themselves, they just called themselves local. 
They said, "We live in the Russian empire and that's who we are." There was no sense of being Belarusian. There are different outcomes in all of this stuff. Having said that--we're going to get there--let me give you another example. I want to find this date that will at least make you realize that you can have a national identity with more than one language. It's very complex. I guess the most interesting case now would be Belgium, which I don't have a lot of time to talk about. In Belgium, I have a friend who works in the Belgian Ministry of Culture in Brussels. About seven years ago I asked him, "Do you think Belgium will exist in ten years?" He said, "I hope not." This guy works for the Belgian Ministry of Culture. This reflects the sharp antagonism between the Flemish, who basically live in the north and east, above all the northern parts of Belgium, and who are more prosperous and more numerous, about, say, fifty-five percent of the population, and the Walloons, that is, the French speakers, in Liège and Arlon and all those places, and also in Brussels, which is technically part of the Flemish zone. Because of the bureaucracy and because Brussels is the most important city, it has become this sort of third place, hotly contested by the Flemish, and there are real serious tensions there. If you ask in French what time the train is to Bruges, they're not going to reply. They know perfectly well. They just simply won't reply. Not all of them, but those are serious tensions, compounded also by the fact that the far right--not everybody, but the far right--is really tied to Flemish self-identity. Many of the Walloons, that is, the French speakers, want to be attached to France and see their lives as very different. Also, the Walloon part of Belgium is basically the rust belt, and the Flemish part is very prosperous in comparison. Yet Belgium, which didn't exist legally until 1831--the revolution of 1830 and 1831--is still there. 
By the way, there's also about five percent tacked on after Versailles, around a town called Eupen, who speak German. Anyway, there we go. But Belgium is still there. When I'm in Belgium, which I am frequently, I think, "Now this is really Europe," because of the complexity of it. You can have a national identity without having a single dominant language, if the two sides are tolerant. Let me give you another quick example, and then we've got to rock and roll onto the A-H Empire--a shortcut now, not "Austria-Hungary"; I've got to save time, so "A-H" Empire. What about Switzerland? Here you've got Switzerland. If I remember the statistics correctly, I think the French-speaking population is twenty-two percent. The German-speaking, or Swiss Deutsch-speaking, population is maybe seventy-one percent, or something like that. You've got an Italian-speaking population of about five percent. And you also have another language called Romansch, which is spoken by only a small minority. That's four languages already; plus English, because of the international role of Geneva, plays a major role in Switzerland. Switzerland now is so prosperous, and full of chocolate, and full of banks, and full of watches, and all of that. You think of everybody yodeling and cows running around and everybody very happy and eating perch out of the lakes. But the Swiss have to create this sense that they have always been a nation. But they haven't. The decentralized, federalist nature of Switzerland was always there. During the Reformation, to say somebody was "turning Swiss" meant that they were rejecting the demands of their lords, and rejecting the religion imposed by their lords, turning to Protestantism if they were in a Catholic area or to Catholicism if they were in a Protestant area. The Swiss were big-time mercenaries and big-time farmers. But Switzerland fought its last war early in the nineteenth century and has been neutral since. 
It's a very complicated story, what happened in Switzerland during World War II. It's very tragic. The Swiss turned so many Jews back at the frontier, sending them back to Germany, and laundered Nazi money, and all that. I'm not dumping on the Swiss, but it's a complicated story in the case of their neutrality. They decided in 1891, on the 600th anniversary of the Swiss confederation, that Switzerland began in 1291--that a bunch of people got together, amid all the cows and eating chocolate and all that stuff, and announced that they were Switzerland. Here again is what Anderson means about this sort of imagined community: you're inventing a kind of date and saying, "We've been like that since then, and that's all there is to it." But you've got all these different languages. And the languages are not as far apart as French and Dutch--well, in a way they are, because Dutch is really, although the Dutch would not see it that way, a German dialect. Nonetheless, the Swiss are a lot better at learning each other's languages than the French speakers in Belgium certainly are at learning Dutch, which they view as impossible and don't like their kids having to learn in school, and all that. It's terribly complicated. So, they imagine this community, but it exists. Switzerland exists. People have a sense of being Swiss, despite these different languages. There are not the economic disparities--well, there are, between urban and rural life, but nothing like the disparity between the Flemish parts of Belgium and the French parts of Belgium, if you exclude Brussels and all that. Let me end in the last five minutes and seven seconds that are allotted to me. Let me end with a counter example, which you can read about. I said at the beginning, inspired by the sheer horror of the Balkans--and some of you aren't old enough to remember, certainly not; my god, I am--all the stuff that happened in the late 1990s. 
You can probably remember all the massacres and stuff like that. I said at the very beginning of the hour, or the beginning of the fifty minutes, that people now tend to look longingly back. They say, "The Austro-Hungarian Empire, it sure lasted a long time." You had fifteen major nationalities. It was kind of a balancing act. It becomes the dual monarchy in 1867, where the Hungarians have, more or less, equal rights. You've got Austria and you've got Hungary. But you've got another thirteen peoples, at least thirteen peoples, living within the empire. You've got the Croats, who have their nobility. They're kind of given favorable status. This whole thing is sort of balanced. How does the place stay together? How does Austria-Hungary stay together? I end that chapter with this very famous scene from the parliament in Vienna, where you've got these different ethnic groups playing drums and singing songs and trying to disrupt the speeches by people from the other nationalities. You've got all these problems with the South Slavs wanting at least minimal representation as sort of a "third state" along with Austria and Hungary. How does the thing stay together? Basically, in this way. I'm just telling you briefly about things that you can read about, but I just want to make some sense of it. First of all, the language of the empire is German. To get somewhere in the Austro-Hungarian Empire, you need to know German. So, learning German becomes a kind of social mobility, the way that learning French becomes, for somebody from Gascony, a form of social mobility. You can get a job in the bureaucracy. If you're going to have a humongous empire going all the way to the rugged terrain of Bosnia-Herzegovina, you've got to have officials, with their little hats and their little desks, who are going to be running all this stuff. You've got to have a language. The language of the empire is German. This does not mean that people feel that they're German. 
After all, they're not German. They're German speakers within the Austro-Hungarian Empire. It gives them an allegiance to this apparatus. Secondly, the middle class. The middle class is largely German, except in Budapest where it's Hungarian. Still, many Germans live in Budapest as well. One of the things I wish I had time to talk about, but you can't talk about everything, is what you've got in these cities--I mentioned this in passing the other day. In the cities of all of Eastern Europe and Central Europe, you have kind of an ethnicization. In all of the cities, whether you're talking about Budapest or Warsaw, or anywhere, even Vilnius, you have large German populations and also large Jewish populations. In the course of the last decades of the nineteenth century, you have this sort of arrival of Estonian peasants into Tallinn, of Czech peasants into Prague, of Hungarian peasants into Budapest, of Lithuanian peasants into Vilnius, etc., etc. But you've still got, in the Austro-Hungarian case, even in Budapest, a large middle class that is fundamentally German and believes in the empire. Next, you've got dynastic loyalty. You've got this old dude, Franz Joseph, who had been there since 1848. He lives until 1916, the same guy. That makes Victoria seem like she had a short reign. People have an allegiance to this dynasty. The Habsburg Dynasty had been dominant in central Europe until they contest the Prussians and lose out in the War of 1866. So you've got this Franz Joseph. Also you've got the Catholic Church. There are lots of Protestants, for example in the Czech lands, in Bohemia, whereas Slovakia is almost overwhelmingly Catholic--in what would become Czechoslovakia, which then divorced, amicably enough, in 1993. Croatia is overwhelmingly Catholic, aggressively so. 
Despite the fact that you have these huge Muslim enclaves in what had been the Ottoman Empire, you still have this church as a unifying force--not for everybody, and certainly not for the Jews, not for the gypsies, that is, the Roma, of whom there are very many there, and not for Protestants, and not for Orthodox Serbs, which is part of the tensions there as well. They saw Russia as being their protector. You can read more about that, but that's another thing. Finally, you've got the army. The army is a form of social promotion as well. The army doesn't have the bad reputation that the French army did, for shooting down young girls, young women protesting in strikes. It doesn't have the reputation that the brutal Guardia Civil did in Spain. The army is seen as a useful way of representing the empire. It has a good reputation. German is the language of command. The soldiers are drawn from all of these nationalities, but they at least have that in common. To conclude, the most important question to ask about this empire, particularly in reference to what I've been saying this whole hour, is not why it came apart, but how it held together so long. Given the horrors perpetrated on Europe by aggressive nationalism from then, and even before--as during the French Revolution--to this very day, sometimes, and I never thought I'd ever say this about me, one finds oneself looking nostalgically back to an empire. It is interesting and at least food for thought. On that note, bon appétit and see you on Wednesday.
European Civilization, 1648-1945, with John Merriman
Lecture 16: The Coming of the Great War
Prof: The second announcement is the movies, the films. I've done what I think is the way to do it. They will be available. I think the first one is available now. You can watch it in the privacy of your rooms in whatever college you are. You are to please see them. Paths of Glory goes with next week. That's the first one. It's very short and it's very good. It's one of the first Kubrick films. It's about the mutinies. I will talk about the mutinies next week. Please have seen the film by Monday. Can you tell them in section how they do that? I did it, but I'm not sure how I did it. They should be set up. Another thing you can do is you can go down to Film Studies in the Whitney Humanities Center, and you can check out the film and watch it there, or I think you can take it back, also. But you can watch it on your computer screens. Those are the three. The first one is the first one and then the second one is the second one. Boy, I'm really awake today. The second one is Triumph of the Will, which will go with the fascism lecture. Be sure to have seen it before. The last one is Au revoir les enfants, a Louis Malle film which will be subtitled in English, I think. Yes, it is. That goes with the second to the last lecture. Make sure you've seen these films. None of them are long and they're all great, great, great films, if you can buy into Kirk Douglas as a French soldier. You have to suspend reality a little bit to do that. Any announcements? Things happening? All right. Today, much of this lecture just parallels the chapter. The origins of World War I can be confusing and I just want to make those perfectly clear so that you know this stuff. So, I hope you read the chapter. Also, we used to have you read Goodbye to All That, which is very long, but very good, by Robert Graves. Then we used the inevitable All Quiet on the Western Front, but we suppressed those. So, it's even more important that you read the chapter. Let me get into that. 
I'm not going to write all the terms on the board, because there are so many. I sent them around, and it's hard to see anyway. What I have up here--for when I talk about birthrates--is, between the drilling in the background, gosh darnit--anyway, live births in 1908 were thirteen per 1,000. I'll go into that in a minute. Let me start now. Because World War I--in 1914 so many people wanted war, and they ran to the Gare de l'Est and chanted, "à Berlin, à Berlin," lots of champagne, and then in the Hauptbahnhof in Berlin, they chanted, "nach Paris, nach Paris." Nobody knew that the war was going to last over four years, and kill millions of people, and mark the end of four empires, and, arguably, help contribute to the end of a fifth, that is, the British Empire, and the impetus toward decolonization that comes out of World War I. Nobody knew that the war that was supposed to be over by December wasn't going to be over by December. Outside of a couple of journalists who had been following the Russo-Japanese War in Manchuria and had seen kind of the evolution of trenches, nobody predicted that kind of war. I'll talk about military strategy--the plans for the war--at the end today or, depending on the timing, at the beginning of next hour. So, this makes the origins of the war so much more important. Certainly, in terms of diplomatic history, there's no other event in the history of the world that has been so pored over as the diplomatic origins of World War I: the famous entangling alliances, the house of cards that collapses, all of those very familiar images. After the war--I had this great-uncle who fought in the war, a great-great-uncle. He was an old dude when I was a very little guy. He had been in France in 1917. At the end of the war, I remember when I was a little kid he gave me this sort of printed-out book showing that the Germans had started the war. It was the official account of the origins of World War I. 
Of course, at the end of the war, the war ends with German troops inside France. This has a huge, huge impact on what happens, because of two things, looking ahead. One, it became very easy for the German right to say, "We weren't defeated. We were stabbed in the back." By whom? By the Jews. By the Communists. By the Socialists. Secondly, because Germany was defeated, they had to sign on the bottom line saying, "We started the war alone, we alone." The famous war guilt clause, war guilt clause. Now, the Germans didn't start the war alone. I'll leave it to you to decide whether their responsibility, the famous blank check given to Austria-Hungary, is more important than the roles of other states--Russia declaring mobilization, which was tantamount to an act of war for reasons we'll come to, or France, for that matter. But that's why the origins of World War I are so important. The other reason is that clearly World War I unleashes the demons of the twentieth century. The kind of racist stuff, the even somewhat genocidal stuff, was out there in the public domain, but World War I turns it loose. We talk about, I hope convincingly, the Europe of extremes--echoing The Age of Extremes, the title of a wonderful book by Eric Hobsbawm--one extreme being communism. But the other extreme, which was more prenant, more victorious, more overwhelming in Europe, was the rise of fascism and particularly the rise of National Socialism. This stuff was out there, but National Socialism and the Nazis cannot be understood without World War I. That's why this stuff on the origins, this diplomatic history, is so important. That's why I'm paralleling what you are reading. If you asked people in the 1880s and 1890s, "Who will fight in the next war?" most people in Germany and many people in France would say that "it'll be the Germans fighting the French, because of Alsace-Lorraine." Other people, as we'll see, particularly in the 1890s, will say, "No. 
It's the British and the French who are going to be fighting, colonial rivalries, Fashoda and all that business." But the one in what you're reading, as I put it, the old hatred that cannot be put offstage during the entire period, even when French and British relations are at their nadir, at their worst, is that between Germany--united, the empire proclaimed in the Hall of Mirrors at Versailles, the Château de Versailles--and France, because, after all, the French had to give Alsace and much of Lorraine, the second-most industrialized and one of the most prosperous regions, to Germany. I'm going to end up with an incident that made it look like war was possibly going to break out between Germany and France, that is the Saverne incident, and talk a little bit about Alsace-Lorraine and stuff that isn't in the book later, just to make it clear. It is complicated, because the French could never accept the fact that Alsace and much of Lorraine were now German. Again, remember we talked about nationalism and constructed identity? Most people in Alsace and in those parts of Lorraine that became part of the Second Reich, the Second Empire--what did they speak? They spoke German dialect. They did not speak French. More about that later. There was bilingualism, but that's interesting. If you asked them, "What nationality are you?" they would reply in German, "I am French." If you were somebody doing a survey now, you'd be sort of shocked by that. But these identities are complex. Anyway, the rivalry between France and Germany was always there. If you went to the Place de la Concorde in Paris, the statue of Strasbourg--the town of Strasbourg, which is an important European capital now of the new Europe, for better or for worse--was covered in mourning cloth for much of the period because it had been "amputated." They used this image often. The right arm of France had been amputated in the settlement after the Franco-German War.
So, that rivalry is there. French military planners, right through the whole period--at the time of Boulanger, who built his reputation on it; you already read about the general Georges Boulanger, Mr. Revenge--said, "When the war comes, we will move into Alsace and take Alsace and parts of Lorraine back. Then we will move to Berlin. Simple, just like that." To the very end, that's their military strategy: attack. They're going to attack and get back Alsace-Lorraine. What the Germans plan to do has a lot to do with the way the war starts, and we will get there. The second big rivalry in Europe--and again think of the 28th of June 1914, Sarajevo, a nineteen-year-old heavily-armed Gavrilo Princip--is that between Russia and Austria-Hungary. Their rivalry is over the South Slavs, who are within the Austro-Hungarian Empire, and the Serbs, who are not, but who provide a constant force for destabilization in the region. As you know, since the time of Catherine the Great--she set her eyes on Istanbul, Constantinople (they're the same city), on the straits, on access from the Black Sea--there was always going to be this drive of Russia to the straits. As you know, later Turkey allies with Germany. But the big rivalry is in terms of Russian influence: Russia, seeing itself as the protector, the mother of all the Slav peoples, is a permanent force of destabilization in the Austro-Hungarian Empire. Ironically, the guy who gets offed along with his wife, the Archduke Franz Ferdinand--he was a prejudiced figure in many ways, but he was considered a moderate, because he believed that the South Slavs should have a kind of third status, possibly, along with Austria and Hungary within a sort of tripartite empire. Of course, he gets gunned down, and what comes next is the blank check, where the Germans say, "Do what you want to settle this situation."
And the famous ultimatum to Serbia by Austria-Hungary. The Russian government stirs up pan-Slavic fervor in the Balkans. They work consistently to do that. There are religious ties, the Orthodox religion. There are ties of alphabet, the Cyrillic alphabet used in Serbia. Serbo-Croatian is the same spoken language--although Serb friends and Croatian friends would deny that in some ways, basically it's the same spoken language. But the Serbs use the Cyrillic alphabet, which is what the Russians use, and the Croats, who are Catholic, use the alphabet used in Western Europe. So, the European alliance system, these entangling alliances, hinges on French and German enmity and the competing interests of Russia and Austria-Hungary in the Balkans. It also hinges on Bismarck, who was in many ways an odious guy but a very clever guy. His fear was that Germany would have to fight a war on two fronts. So, what these powers are doing is looking for allies. As Bismarck said--it's interesting he said it in French, showing that in many ways French was still the language of diplomacy--when you've got these great powers, five of them, "you have to be à trois." You have to be with the three and not the two. His worst nightmare--and Bismarck was somebody who said he liked to lie awake at night and hate--his worst fear was having to fight the Russians and having to fight the French at the same time. When he encourages the French to get into the imperial game at the beginning, he's doing that to try to get them to blow off a little steam out there in Africa. "My map of Africa lies in Europe"--remember that line. So, here's the exact quote: "All international politics reduces itself to this formula: try to be à trois as long as the world is governed by the unstable equilibrium of five great powers"--Germany, Austria-Hungary, Russia, Britain, and France.
These treaties, these arrangements--that is, the emergence of the Triple Alliance and, by the time of the war, the Triple Entente--leave Italy up for grabs, open to the highest bidder. Italy will go to war, despite having been a member originally of the alliance with Austria-Hungary and Germany, on the allied side, because the allies promise them more in 1915. But that's another story. Still, it's very important in the emergence of fascism in Italy, because Italy after the war, though nominally victorious, does not get what it wants. It does not get the Dalmatian coast. It does not get the Tyrol. If you fought a war based on national claims, why turn around and give regions that have only a minority of Italian population to Italy? Benito Mussolini goes from being a socialist to being a fascist, helps create that party based upon this idea that Italy had been screwed. They never got what they were supposed to in World War I. So, he comes to power as a fascist, as you know, in 1922. In 1879 Bismarck forges the cornerstone alliance between Germany and Austria-Hungary, and it's predicated on German support for Habsburg opposition to the expansion of Russian interests in the Balkans. You can see in this the origins of the famous blank check in the hot summer, as it was, of 1914. In 1882 Italy allies with Germany and Austria-Hungary, forming the Triple Alliance. But the wording is such that it doesn't necessarily bring Italy into the war, and, as I just said, Italy comes in against Austria-Hungary and Germany in 1915. Now, these diplomats are still under the influence of Metternich and all that, and the details of these treaties are not known, but the outlines are basically known.
One seam right through the period is that every time Russia seeks to expand its influence in the Balkans, Austria-Hungary gets concerned and turns to Germany saying, "You will back us. You will back us, won't you?" They say, "Yes, of course, we will back you." In the end what happens is that the blank check goes to Austria-Hungary before the ultimatum to Serbia: "Do whatever you want to settle this situation. We will back you all the way." Why does Germany become encircled diplomatically and ultimately in war? How does it happen that czarist, autocratic Russia allies with republican France? That the czar, the oppressor of the non-Russian peoples, especially the Jews in Russia, comes to Paris in 1889 and they name a beautiful bridge after him, the Pont Alexandre III, the bridge of Alexander III. The marine band learns the theme song of the czars and the socialists go wild in France. How can you ally with these people who are repressing socialists, repressing nationalities, repressing everybody, and run a police state? The last thing that Bismarck wanted was for these two big states to come together on either side of him. How does this happen? Both France and Russia are outside of the Triple Alliance, which you already know. But there's another reason. As a matter of fact, I read about four or five years ago that there are still French companies trying to get their money back from Russia, because they lost their money in 1917, when the Bolsheviks came to power and ultimately nationalized industries, big industries particularly. It is economic: one of the old things people say about the French economy--and it's still true--is that much of French money, French investment, goes outside of France. They build the railroads in Spain, but they invest heavily in Russian industry and in Russian railroads. So, these economic ties are very important. There are also cultural ties.
There was the popularity of French in aristocratic circles within Russia--though, on the other hand, there were lots of Russian nobles who spoke German, who lived in Königsberg, which is still this sort of enclave that is part of Russia, stuck between Poland and Lithuania. But the most important reason is that French investment in Russia increases dramatically in the 1880s and 1890s. France seeks an ally against Germany, and relations between Russia and Germany--this is already obvious, you've already discerned this--are going to deteriorate because of this tender relationship between Austria-Hungary and Germany over the Balkans. In the very end, one of the ludicrous aspects of this whole damn thing is that just as they're about to go to war, and just as Czar Nicholas II, about whom we'll come back and discuss one day, signs the mobilization order--and mobilization, for reasons I'll come back to, is tantamount to an act of war--he's dashing off letters to his dearest cousin Willie. And Willie is writing back to "My Dear Cousin Nicky." These people are related. They're cousins. But international circumstances, and the tensions over the Balkans, and French fears of Germany, bring Russia and France together, and the French marine band plays whatever the theme song of the Russian czars is--it certainly wasn't Doctor Zhivago--when they arrive. The Russian government blames Austria-Hungary for trying to undercut what it views as its rightful influence in the Balkans, and Germany backs Austria-Hungary right away. In 1892 France and Russia sign a military convention that says there will be a military response if either were attacked by Germany or by one or more of its allies. They form a formal alliance in 1894. What about Britain? What about Britain? One of the things is that the British don't want to ally with anybody. They're on bad terms with the French and they're on bad terms with the Russians, to make a long story short.
The Great Game, as they called it--rivalry over Afghanistan, over the entire extension of that frontier into Asia--means that the chances of Great Britain joining in alliance with Russia and with France seem extremely dim. Britain wants to control the seas and to go it alone. But they discover a fact that shouldn't have surprised them in the Boer War in South Africa: they don't have any friends. Nobody supports what they're doing in South Africa. It's better to have an ally in a world that gets increasingly dangerous. What happens gradually is that the rivalry, again to make a long story short, between Germany and Britain ultimately will cause Britain to look for allies, and suddenly it seems less probable that France and Britain will go to war. What is the nature of this increasingly bitter rivalry between Germany and Britain? One part is obvious--Africa. Second, it's economic, in that the German economy is growing by leaps and bounds. Germany is the number one country in chemistry. Those of you who are chemists know the difference the university system makes--in Britain the university system isn't terribly practical, but in Germany chemistry is part of what they do in the universities, which are great universities. They begin to lap the British in chemical production; they catch up and go ahead, and in steel, too. This is a big rivalry. The British government begins to run scared because the City is running scared. Third is the famous naval rivalry, about which Paul Kennedy, my colleague and friend, has written a book on the Anglo-German naval rivalry. The Germans start turning out these huge ships. Then the British respond. They produce the Dreadnought, which becomes a symbol for these huge, powerful battleships like nothing that had ever been seen before.
The naval leagues in both countries--again, this is a culture of imperialism, the culture of aggressive nationalism--put huge pressure on governments to throw every available resource into the building of more and more ships. Britain had basically controlled the seas since the defeat of the Spanish Armada in the late sixteenth century. Now they're running scared. Again, you can't look ahead and say, "Aha! But there was only one naval battle of any consequence in World War I, at the Battle of Jutland off the coast of Denmark." It's kind of a draw, but basically the Germans are forced back into their ports, so they lose. But the British couldn't anticipate that. So, their fear of Germany and the saber rattling of the thoroughly irresponsible idiot, Wilhelm II, helps make it possible to imagine an alliance with "the sneaky French." In the 1890s there were a lot of novels about future wars. This, in itself, reflects the fact that many people thought there would be another war. Again, they didn't know it was going to be a war of four and a half years, but they think there's going to be another war. I assure you I've never read the following book. But one of the more successful, for a brief time, was this sort of book about a future war. I guess it's in the early 1890s, about the time of Fashoda, or maybe the first couple of years of the twentieth century. It doesn't matter. The middle class of Dover are out parading around in the rain on a Sunday morning, miserable weather. They suddenly find that Dover's been taken over by the sneaky French, who have been digging a tunnel under the English Channel. Napoleon wanted to dig a tunnel under the Channel. There is now a tunnel under the English Channel, the Chunnel. The trains rocket along, at least until they get to Britain, and then they sort of plod along at about two kilometers an hour, but they've improved that side of it. Anyway, there's sort of a French bias in that, but too bad.
They suddenly find, as they're strolling along in the pouring rain, the horizontal rain, that the sneaky French soldiers are all over. Playing on national stereotypes, the French are disguised as waiters wearing dirty waiter uniforms. This is the British image. I won't even comment on what English kitchens would have been like--that would be a cheap shot. But under these towels were sneaky weapons. They take over Dover. Then, of course, the British get it together and drive them back into the tunnel, and shoot a few, and then they cement up the tunnel, and then parliament passes more battleship bills, etc., etc.--the future-war novel. But there's another one four or five years later. I haven't read this one, either, and I'm not going to read it. The people in Whitby or Scarborough--speaking of horizontal rain, on the east coast--wake up and see these huge German battleships just lobbing shells that can reach and blow up York, lobbing one shell after another. The sequel isn't very interesting: the British parliament passes even more bills. Then the battleships of the "good guys" go and blow up the battleships of the bad guys, and everybody can go back to eating odd things on a Sunday morning. So, how does it happen that that scenario of what the future will be is reversed? I've just explained it. It has to do with the fears both of these states had of Germany. And the crises, which you can read about--the Moroccan crisis in 1905--make even firmer this military alliance. It's called an entente--that word is in English, too--or an understanding, but basically it's an alliance. By 1905 they're already saying, "Look, the British Navy will take care of the North Sea and the Channel, and you guys take care of the Mediterranean." The crisis in 1911, the second Moroccan crisis, which pushes Germany and France close to war, affirms all of the above things that I've said.
Don't get the idea that in 1911 things are more dangerous than in 1910, and in 1910 more dangerous than in 1909--this sort of hydraulic model of pressure building up until finally there is war. It doesn't work like that. But these alliances become firmed up. Britain, and France, and Russia end up à trois--Bismarck was dead by then, but this was his worst nightmare, the other great powers being à trois, being three, against Germany. The French, by the way, had another reason to be particularly eager to have an alliance. An odd thing happens in la belle France, in most of France. The French population stops growing. It just stops as of 1846-1847. It's regionally specific. In Brittany and in the Auvergne, in the center of France, people are still churning out babies. You still have huge families. We have friends--one of them just died--older people who grew up in misery in the mountains. Misery. They were one of twelve or thirteen children. But in most of France that's not the case. In one part of southwestern France, when people had a second baby they received a condolence card. Isn't that bizarre? The French population stops growing. Why? There are a couple of reasons. This is just an aside, but it's interesting. The Napoleonic Code, remember, ends primogeniture, so you've got to divide up the plot of land into two or three. And birth control. There are two arguments: the peasants start it and then it filters up to the middle classes, or the middle class starts it and it filters down. It depends on where you are in France. But they stop having children. Look at this. I wrote it on the board, and it may be in the book, I don't even remember. Here are live births, 1908-1913, per thousand: Italy 32.4, Austria 31.9, Germany 29.3, England 24.9, USA 24.3, France 19.5. That is so low. The French population would literally not have grown had it not been for immigrants.
Immigrants then were people coming from Italy and from Switzerland, but mostly from Italy, and some from Spain and from Belgium. What's the effect of this? There's this enormous crisis. It has to do also with this sense of threatened virility. Why do we have fewer children? What's the matter with us? France has become too effeminate, etc., etc. You can just hear the language of this. Women are not serving the state. Why are they not having babies anymore? What's the matter? They want to vote. Is this getting in the way of having babies that can be sent off to war? It causes an enormous problem. It's discussed all over the place, particularly by the nationalists: "We don't have enough children." Jumping ahead, and I'll come back to this: Verdun, 1916. The Germans say, "We're not going to take the forts at Verdun. They're impenetrable, untakeable, cannot be taken, cannot be pris. But we will make them pay so many hundreds of thousands of people that we will bleed them and they will be forced to sue for peace." Falkenhayn was the general. "We won't take the forts of Douaumont and Vaux, but we will kill so many hundreds of thousands of people--and we can afford to lose hundreds of thousands of people, because our birth rate is higher." Nice for the people sent into all this stuff. More about that later. So, this has a big effect. If you're going to go to war and get Alsace-Lorraine back, and if Germany gets more and more aggressive, more irresponsible--no question about it, in an age of aggressive nationalism--you'd better have somebody else to help you out. There are a lot of Germans, and they blew the French away in 1870-1871, and Prussia defeated--didn't blow away, but defeated--Austria in 1866, cementing its role as the most important power in Europe. So, that helps as well, the French fears and all that. A couple more points. I want to give you an example of this; I'll mention it just briefly.
It's interesting how this works, how small incidents in a complicated world of national rivalries and competing identities can almost launch a war. Bam! It took the assassination of Franz Ferdinand to start it all off, but there would have been a war sometime. This is the case of Zabern, in German--Saverne in French. It's a very nice little town. I went to Saverne. You've got to see all these places. There's a nice canal that runs through it. Alsace and Strasbourg were annexed to France in 1681 by the megalomaniac Louis XIV. They had been part of France a very long time. In 1871, for reasons you know, they become part of Germany. What this incident at Saverne does is reinforce the stereotypes that the French have of the Germans and that the Germans have of the French: the image of the German quest for domination and aggressiveness, and the role of the German army, which seems to have absolutely no limits. Someone once said about Prussia that it was a state tacked on to an army. The Saverne Affair seemed to indicate that Germany was still the same way. If you go up to Alsace, up to the Vosges Mountains, there's this route called the Route des Crêtes, or the route of the peaks. You can look down from the Vosges--which remained France--into what had been German Alsace. You can see all of these monuments put up by German hiking clubs to try to reaffirm this German identity that people had. Identity is an extremely complex thing. What is clear, first of all, is that the vast majority of the population spoke German. Whether this made them feel German or not is not sure. Let me give you a couple of examples. I didn't send this around; it's too much. For the total of Alsace and Lorraine, the parts that were annexed into the German Reich, the number of communes in which German dialect was the dominant language is 1,225; in which French was the dominant language, 385.
The percentage of the population that spoke German was seventy-seven percent; the population that spoke French as their major language was twelve percent. There was some bilingualism, but not a whole lot, actually, and ten percent neither--probably more or less perfectly bilingual because of intermarriage. So, when the Germans come in after 1871, they handle it better than the French did after World War I. The French try to just rip German out as a language of instruction, get rid of all the street signs in German. The Germans are a little more delicate in the way they do things, but German is the language of administration. Another important point is that they don't trust the Alsatians. Even though they speak German, they don't trust them. Alsace and those parts of Lorraine are annexed into the Reich, but they don't have the same rights as a region that other parts of Germany like Württemberg and Bavaria have. Deputies from Alsace and those parts of Lorraine don't have the right to vote on issues of war, for example, in the Reichstag. They are not trusted because they are seen as potentially disloyal to the Reich. The idea is that they have been infected with Frenchness. Part of this is religious. It's so complex. Alsace is a wonderfully interesting area. It has the largest percentage of Protestants in France outside of the Ardèche in the south center. It's also got a large percentage of Jews, who had been victimized by anti-Semitic riots after 1848. But the majority of the population is Catholic. The German Empire, going back to the Kulturkampf of Bismarck, the war against the Catholics, still doesn't really trust the Catholics. You've got Catholics in Bavaria, usually very right-wing Catholics. You've got Catholics in the Rhineland. You've got some Catholics in the Palatinate and a lot of Catholics in Alsace. So, they don't trust them, basically. They don't trust them.
Relations between the German troops and the locals are strained--as in the case of Spain, the troops are not from that region; people occupying Catalonia come from Galicia or from Castile so they won't be infected by the local population, from the point of view of the Spanish state--so, the troops in Alsace are not from Alsace, because they aren't trusted. So, tensions are high. What happens in Saverne, at a place where civil-military relations aren't terribly good, in this town of 8,000 people, is that there is an incident that gets blown out of proportion. There is some drilling--the German soldiers are always drilling--and the commander makes a crack about the Alsatians. He calls them an extremely unfortunate scatological term that he meant to refer to all Alsatians. He essentially says, "Well, if you beat the hell out of those people, you'll be doing a service to all." This gets around. One of the reasons that relations weren't very good in this particular town was that there was a German officer who had the bad idea of sleeping with a fourteen-year-old girl. Some of the local guys get this guy in a room and just pound him into a well-deserved pulp. So, it spins out of control. What happens is that on both sides, in Berlin and Paris, this becomes a huge incident, confirming the stereotype of the Other. There's nasty language. Bethmann-Hollweg, who was the chancellor then, says some over-the-top things about the French, and the influence of France in Alsace, etc., etc., and that the French are planning a war. And the French government, in a time when there is a nationalist revival, at least among the elites in France, responds in kind, and everything gets big titres, big headlines, and stuff like that. They don't go to war. But what it does is reaffirm these stereotypes, and it makes people a little more edgy.
In 1913, but well before that, military planners--I have three minutes and that's just what I need--military planners are looking ahead to the next war. The French we've already talked about. They have a not terribly poetically designated Plan XVII, which is to invade Alsace-Lorraine with élan. That's all you need, they said: élan, patriotic frenzy, fury. All you need is to be on the offensive and that's the end of it. By the way, they invade wearing red trousers, and they could be picked out through the fog and shot in 1914, until they finally put on a little less-bright color. How are the Germans going to fight a war on two fronts? How are you going to do that? They're afraid of the Russians. Why? There are a lot of Russians and the other peoples. They think it's going to take about two weeks for the Russian army, once mobilization is declared--the big bear will roll its forces toward the German frontier in German Poland. So, how are you going to win the war in two weeks? If you invade France through Alsace-Lorraine, you're going to have big trouble. You're going to run into fortifications. So, how are you going to invade France? The only way you can defeat them--and a guy called Schlieffen, whose name I wrote in what I sent around to you, figured this out--is to invade Belgium and, from his point of view, the Netherlands, though Moltke, his successor, takes the Netherlands out of the equation. Belgium had been declared independent and neutral in 1831. The idea is you invade Belgium. You get through the big fort at Liège. You get through the kind of rough country, which is not too much. Then you hit the plat pays, the flatlands, and you roll toward the English Channel. The last thing Schlieffen reportedly said on his deathbed was that the last soldier's right arm should touch the English Channel.
Then you turn down and you put Paris in a headlock, and they will sue for peace, and you will beat them in two weeks before the big bear can come moseying along slowly. That's why mobilization was tantamount to an act of war: it starts the timetable. They've got to defeat them in two weeks. What happens if you go through Belgium? From the point of view of the British, it's bad enough to have the sneaky French across the Channel. But what if you've got the Germans in Ostend eating moules frites? What if you have the Germans across the Channel, big-time enemies a very short, choppy boat ride away? What's this going to do? It's going to reaffirm the alliance. Sir Edward Grey is the one who said most famously, and he got it right, "The lamps are going out all over Europe; we shall not see them lit again in our lifetime." At this point, the British hesitate. The French ask, "Will the word 'honor' be struck from the English dictionary?" The French ambassador is chasing around a high official in the czarist regime in Russia saying, "You must back us all the way." So, the invasion guarantees that the worst nightmare of Bismarck will come true, that they will be à trois. The fact that it doesn't work out the way the German high command, and Schlieffen, and von Moltke intended means that they can't, for reasons I'll come back to, get Paris in that headlock and force them to sue for peace, and the race to the sea begins, trying to outflank--as in a football game, to make a ridiculous analogy--the outside linebacker. They end up at the sea. Then shovels, and defensive weapons like barbed wire and machine guns, become the weapons of the war. That explains why there wasn't and subsequently could never be a knockout punch, and why millions of people died in and around those trenches.
24. The Collapse of Communism and Global Challenges
Prof: I guess what I'll do today is talk a little bit about the fall of communism. It's hard to believe, because all that happened, and now, next year, it will be the twentieth anniversary of the fall of the Wall. You guys were not born yet--a few of you. Some of you were born in 1990. Is that possible? Were you born in 1990, some of you? See, that's after the Wall fell. I can remember seeing on TV the quick trial of the Ceauşescu couple, and them being gunned down. They were really a nasty pair. But now it seems like old history. I'm going to talk a little bit about that and then talk about global challenges, and themes that we've talked about. We talk about immigration; we talk more about globalization. But I'm going to talk a little bit about that, too. Let's do that. Then we're out of here. Again, to some of you it's not history at all. It's something that we all lived through, and in a way anticipated, and then saw developing. We were in France when all that was happening, just listening to the radio, and the BBC, and all this stuff. It was really quite amazing. The dramatic, dramatic changes that happened in Eastern Europe between 1989 and 1992--independent of the fact that the Soviet system just didn't work--must begin with a guy whose declining reputation in Russia I find just incredible. That's Mikhail Gorbachev. It was reported a few years ago that he actually did a TV ad for Pizza Hut, because he needed the money. The fall of his image in the former Soviet Union and Russia I find extraordinarily hard to imagine.
It made all the difference in the world that when people in Hungary and in Czechoslovakia, which then split between the Czech Republic and Slovakia, and in Poland--and in other places, but mostly those three places--when they began to push for reforms, they decided that they wanted to reform communism, or that they didn't want communism at all. The big difference was this: in 1953, when there were riots in East Berlin--and I'm old enough to remember crossing the border, the Wall, in East Berlin--they were squished like grapes. In 1968, when Dubcek, who ends up basically with janitorial duties after that, tried to put a human face on communism and reform communism, there were, as the expression goes, tanks before teatime. The Soviet tanks rolled in and squished them, too, like grapes. They had their martyrs in Wenceslas Square. One guy burned himself to death in protest. It's still embedded in the collective memory of that place. But there was a big difference. There was a big, big difference, in that Gorbachev made clear that there wouldn't be tanks before teatime. When he went to Berlin, and when he went to Prague, and his name became a sign of protest--when they were chanting, "Gorby, Gorby," they were chanting for the demands of reform in their own states. When the various groups had been meeting off and on, particularly in Poland and Hungary, where dissidence was most developed, and where an alternate kind of civic society or civic space had developed, Gorby's name had become synonymous with the possibility of change. He made clear that there weren't going to be tanks sent in. At that point, these huge changes were inevitable. But these changes in Eastern Europe, in the former satellite states, really were facilitated, were accentuated, were made inevitable by the fact that communism didn't work in the Soviet Union, that the lines were longer and longer. There was more attention to consumer goods. But also that Gorbachev was a very different leader.
Gorbachev was educated. Unlike Brezhnev, who was his extremely elderly predecessor, he could give a speech without reading it off note cards. Well, Ronald Reagan really couldn't either. But Gorbachev was compelling. He was smart. He was educated. He understood the system. He had come up through the system. And he was committed to change. But until the very end, even when they kind of kidnap him--not kind of, they kidnapped him, and he was held under house arrest in Crimea--he stuck until the end with the belief that communism could be reformed, and that you could put a human face on communism, à la Dubcek--though he didn't look at Dubcek as a model. Until the very end, he believed that you could have communism, this good idea gone terribly wrong, that could be reformist. He's very, very different from his predecessors. Khrushchev has often been underappreciated. Khrushchev, after all, did come out against Stalinism in the famous Party Congress, and all of that. Khrushchev was a very wily guy who knew a lot about agriculture and, in some ways, was a compelling character. But after his disappearance in 1964--isn't that when Khrushchev leaves power, in 1964?--he was followed by a couple of really orthodox Stalinists. Gorbachev's rise really must be seen in that context. Gorbachev was born in southern Russia in 1931. He worked his way up, as you had to, in the party organization. He studied law at the University of Moscow. He knew the West and respected many things about the West. He also knew how just devastating Stalinism had been for his own country. Both his grandfathers had been arrested on false charges when he was a boy. He was very talented. He knew how to manipulate the system, and he becomes secretary to the Communist Central Committee. Like Khrushchev before him, in his origins, he was responsible for Soviet agriculture. Unlike his predecessors, really including Khrushchev, he was less xenophobic.
He had less of this suspicion of non-Russians, and the Soviet Union was dominated by Russia, let us leave no doubt about that. So, he believed that the communist dream had been destroyed by Stalinism, and by the rigidity of the structure, and by the inability to enact serious economic reforms. The Soviet Union, like the other powers, had simply miserable economies, miserable economic situations. The East European satellite states were in many ways victimized by unfavorable economic arrangements with the Soviet Union, which exploited them. But nonetheless, they were able to be kept afloat by the Soviet Union. He embraces two policies--the only two terms, I suppose, to be remembered here; I didn't send these out. One is the policy of liberalization which is called glasnost, openness in government combined with a greater degree of free expression. He takes people who are liberals, who are real reformers, not just party hacks, and he gives them positions of responsibility. He realized that if you live in northern Russia, you can see Finnish television. If you live in Estonia, where the language is somewhat similar to Finnish, that most difficult of languages, you can see what's going on. These were images--as in East Berlin, of West Berlin--of a different way of life. You can't simply pretend that there wasn't a better way of life for many people. There were lots of people in the former Soviet Union who may have had their doubts within the following ten years about the kind of runaway, bandit capitalism, the high-crime capitalism that developed in victimized Russia, and in Bulgaria in particular, and in other places as well. He spoke openly, publicly about the failings of the system. That had been hush-hush; you didn't talk about the failings of the system. You were always talking about the "radiant future." Remember the radiant future. But the future wasn't radiant. There were long lines. It just simply didn't work.
He realized that if you're going to supply the cities with consumer goods, you have to return to the free market. You really have to return to the old New Economic Policy that you already know about from the early 1920s. The second term is perestroika, the restructuring of the whole system, with the belief that he had that communism could be made responsible to the desires of ordinary people in the Soviet republics. He said--and he could toss off memorable phrases very easily; he was an extremely bright guy, and he's still very much alive--"We need a revolution of the mind." You had to recommence from zero. You had to begin from the beginning and reconstruct this reformed communism that would be responsive to ordinary people. It didn't work out the way he thought it was going to work out. The entire system collapsed. Three things made this possible--the fall of communism in the Soviet Union and in Eastern and Central Europe. First, within all of these states--but most notably the Baltic states, where people held hands, they formed a human chain all the way across Lithuania, Latvia, and Estonia holding hands, a human chain across the entire Baltic region--these places had strong nationalist movements, such as the Lithuanian movement that we discussed earlier. These continued. But also in Eastern Europe, particularly in Hungary and in Poland, but also in Ukraine. Remember, Ukrainian is a different language. It's a related but different language. There are still huge problems because so many Russians still live in Ukraine. We don't have time to talk about the ethnic complexities of these regions. The Russians who were left in Latvia--there were more Russians in Latvia than in Estonia or Lithuania--faced all sorts of discrimination. This is a problem. Anyway, these cultural demands, these nationalistic demands could not be placated by talk about a reformed communism. In the Soviet Union, the idea that the republics were equal was a sheer myth.
The idea that there would be tolerance, toleration of different ways of looking at the world--basically, there was some showcase stuff about the flowering of the cultures, but it was basically myth. Secondly, in 1989 in these countries, amid economic crisis, the great horrors of deprivation, the long lines of people wearing threadbare coats waiting for trams that were late, a reform movement, a politically democratic movement, emerges in all of these states. In Russia it was led by the Nobel-Prize-winning physicist Andrei Sakharov, who had helped develop, of all things, the hydrogen bomb. Then there were the works of Solzhenitsyn--Solzhenitsyn, whose vision--you have to separate Solzhenitsyn's critique of the gulag from his vision of the return of the czar, or whatever. Solzhenitsyn, I used to run into him here in the Sterling Memorial Library. He was here for a year or two working in the stacks, in the collection there. Whereas before, Solzhenitsyn's stuff on the gulag was passed from hand to hand, typescripts passed secretly from hand to hand, now you could read it. You could read Solzhenitsyn on what was the increasingly no longer hidden secret of the gulag, and what happened to people sent to the gulag. These dissidents begin to reach an increasing audience within all of these countries, within all of the Soviet republics, and in the United States. Gorbachev comes to Washington, D.C. On the Mall, he scares the hell--in a country that had had political assassinations, you will remember this one--out of those people who were supposed to protect him. He leaves the limousine, and he plunges into the crowd, and gives the Russian equivalent of high fives and shakes hands with people. They were just scared to death someone was going to blow him away. He charms the Reagans, and of course his intellectual capacity was many times that of those folks. His interest--I shouldn't have said capacity--his interest. He charms people.
He was a real live, functioning intellectual in politics, obviously committed, putting his reputation and putting the whole state on the line. This was the second thing. Third, it was the accentuation, the acceleration of this economic crisis. Things weren't getting better. Poland is the great example of that, the reason that Solidarity starts in 1980 in the shipyards of Gdansk. It's bizarre to go back there. They're probably going to close. There are huge pictures of the pope all over the place. But it still is a site of memory when you go to Gdansk. The reason that Solidarity starts with Lech Walesa--and not just him alone--and my friends in Poland, who are a little bit younger than Lech Walesa, and lots of other people--is because there wasn't enough to eat. You had a terrible situation. So they unionized. They said, "We're going to put forth our claims," like unions had done in France, and in Italy, and in Spain, and in other places, as people had wanted to do in the early days of the Soviet regime, and they had been squished like grapes. Everybody had been squished like grapes. The economic crisis makes these three things merge: nationalism, democratic reform, and the desire for economic change. You've got this charming man who takes big-time decisions. Lots of Jews, for example, wanted to leave the Soviet Union. They were victimized by anti-Semitism there. They were often treated as second-class citizens. Gorbachev says, "Fine." He says, "Yes, you can emigrate. You can go to Israel or the United States." So, things change. There's palpable change, and people have a sense of what's going to happen, that new things are going to occur. The speed with which this happened took Western leaders by surprise. Take Thatcher; that's a good example. They were not ready for the speed at which these changes were coming.
When people are shouting, "Gorby, Gorby, Gorby," the subtext is that we want the reforms in Hungary, in Poland, in Czechoslovakia, and in other countries as well, but the movements were much smaller in Bulgaria or in Romania, which was under the police state of Ceausescu. You had the same situation in Albania, with kind of the cult of Hoxha, who was very tied to communist China, etc., etc. He makes clear in Strasbourg, in a speech to the Council of Europe in July 1989, that he rejects the Brezhnev doctrine of his predecessor Leonid Brezhnev--that the Soviet Union would intervene, as in 1953, and as in 1968, or in 1956 in Hungary. I remember when I was a really little kid, I remember Hungarian children who had been lucky enough to escape the revolution coming to Ainsworth School in Portland, Oregon. He said, "Any interference in domestic affairs and any attempts to restrict the sovereignty of states, both friends and allies or any others, are inadmissible." Gorbachev says that these movements in Hungary and Poland are inspiring. He found them personally inspiring. So, the rest, as they say, is history. You've all seen images of the Wall, first of young students your age, your age, putting flowers in the guns of the Vopos, who were the East German guards, flowers in the guns. Then the whole goddamned thing just collapses. Suddenly people are pouring over the Wall. People are on trains--and the East German government, Honecker, was one of the very, very worst of all of them. He really was just awful. The Stasi infiltrated almost every organization. There's a great movie called The Lives of Others, a great, great movie. If this course went this far, I would recommend you see The Lives of Others, about spying, and integrity, and just all sorts of things. Honecker was saying, "Give these people back; return them to East Germany." The Hungarians said, "No, we won't return them to East Germany." They started taking down the barbed-wire borders around their own country.
The whole thing just happens like that. The Berlin Wall collapses, and within a month the Ceausescus, for better or for worse, have been gunned down in a garden after a very hasty televised trial. They were very bad people. There's no doubt about it. But there was no due process. But that was the end of that. And Honecker, whose slogan was "Always forward, never backward" until the very end--and the Czech leader was very much the same, and the Bulgarian and Romanian leaders, and the Albanian leaders--Albania is a case apart--were going to keep the whole thing alive. The whole communist system was going to survive, no matter what. Of course, it didn't work out that way. In Czechoslovakia there was the group of writers and intellectuals, including Václav Havel, who had signed Charter 77 and were put in jail as a result of that, and who demanded reform. You already had in Czechoslovakia, and in Poland, and in Hungary, you already had intellectuals who were anti-communist, or who were reforming communists, meeting sometimes very openly. In these countries that transition to democracy or to parliamentary rule would be easier, because the passing of the torch was easier. In Czechoslovakia, you know, there were some parliamentary antecedents. Poland had them also. Hungary less so, but you had this sort of flourishing, alternative civil society that had been developing. So, the passing to the new generation, despite all the economic problems, and despite the ethnic tensions that would remain, was much easier than it would be in Bulgaria, for example. In Bulgaria, what the leadership does, they feel cornered, so they try to accentuate anti-Turkish feelings, because there were many Turks who lived in Bulgaria. Lots of Turks flee and then they go to Turkey. In fact, they find that things are worse in Turkey and many of them go back to Bulgaria.
The tensions between the Romanians and the Hungarians in Romania helped generate change, helped generate reform, because outside of Bucharest the big calls for reform and the organization were the work of Hungarians who were living in the Hungarian parts of Romania. But the transition to a parliamentary regime would be much harder in those places. The case of Bulgaria is particularly interesting. It's also the one I know the least about. The changes there have been so slow in many ways. The kind of banditization, the kind of infiltration of major crime networks in Bulgaria--they really continue to run the show. You find that to an extent, as everybody knows, in Russia. But that's another case. So, the Velvet Revolution occurs in Czechoslovakia, where the entire Politburo, that is, the ruling group, resigned on November 19, 1989. This is just a matter of a short period of time after the Berlin Wall essentially goes down. One of the interesting things about all this is that despite the huge ethnic tensions in many of these places, you didn't have the kind of awful bloodbath that you would have in ex-Yugoslavia, which was primarily the work of the Serbs in those horrible, horrible wars--that bloodletting, that ethnic cleansing. Mass murder is a less fancy way of putting it than ethnic cleansing. For example, you had all these tensions between Poles and Ukrainians, because parts of eastern Poland had passed back and forth, and lots of Ukrainians live in that part of Poland, and lots of Poles live near Lviv in Ukraine. Actually, for all of the persecution of ethnic Russians living in Latvia, above all, but also in Estonia and Lithuania, you really didn't have the kinds of massacres that happened in ex-Yugoslavia. Two reasons for that.
One is because of the ethnic religious complexity, in that the massacres were primarily perpetrated against Muslims by Orthodox Serbs who were inspired by one of the real villains of the last century, or any century, Slobodan Milosevic, who died during his trial in The Hague, who kept talking about a "Greater Serbia" to include Kosovo, to include everywhere else. Also, on a more minor scale, there were those carried out by some Croatians against Muslims and all that. That was one major reason why you didn't have that same thing, that is, the religious difference. Secondly, in Ukraine nobody was really talking about "Greater Ukraine." People in Poland weren't talking about "Greater Poland," imagining annexing whatever they possibly could, the way Hitler had done, or the way that Milosevic perpetuated his sleazy career as leader of the Yugoslav and then Serb Communist Party by giving inflammatory speeches in Kosovo, etc., etc. So, the whole thing collapses. Of course, this doesn't eliminate problems. If you don't have a real tradition of parliamentary rule, how do you suddenly create parties that are viable? How do you create this sort of civic culture? That's not very easy. Also, the Americans, particularly from the University of Chicago school of economics, were giving advice in Poland saying, "You just need an automatic infusion of capitalism. That will solve everything." That's not what happens at all. If anything, it increases the gap between the very, very wealthy people, who formerly would have been cadres in the Communist Party, and very ordinary people. Anyone who follows contemporary Russia now knows all that, or goes to the Côte d'Azur, to the Negresco Hotel in Nice. I shouldn't knock the Negresco, I've stayed there while guiding a Yale alumni tour. But anyway, you find these extraordinarily wealthy Russian billionaires buying up everything, including soccer teams in England, while there are still people with not enough to eat. There are other problems.
These ethnic challenges, of course, are nowhere more graphically and horribly revealed than in the Balkans. There's also the problem of all of these communist systems--they said, above all, you must have large-scale industry. So, they started building these factories, awfully soon out of date, that cranked out pollution at unimaginable levels. One of the effects is, for example, the destruction of the Black Forest in Germany by these clouds of pollution coming from the Czech Republic, to say nothing of the fact that a lot of the Soviet nuclear installations were in Kazakhstan, and other places, and trying to get these defused and immobilized, particularly when the United States, under this last regime, has been trying to restart the arms race. This is a personal comment, but too bad. I'm talking about how Europeans view America. The Americans now have this idea to put bases in Poland. This is a terrible idea, because these bases could be transformed into offensive weapons as well. This could very well--as Putin has warned, and Putin sometimes can't be trusted, and he is a vigorous, aggressive Russian nationalist, for better or for worse--start again, unleash, whatever you call it, this arms race, and that would be awful. So, there are still lots of problems. What can I say? Yet every time I go to Poland, which, as I said, is very frequently now, and to other countries, there is just great hope. There wasn't a lot of hope in 1987 or in 1986. Suddenly, there was this new, incredibly transformed world. In many places it was easier to tear down, to say what you were against--that you didn't want this unreformed communist state, or you didn't want communism at all--than it was to sort of miraculously create this new order or world. In Warsaw I'm constantly amazed. Warsaw was completely rebuilt. When I was a kid I was there. All you saw was rubble, basically.
Now when I walk out of the Hotel Bristol, a very fancy, famous hotel, and I turn left, it looks like the Champs-Elysées, or the Rue Saint-Honoré in Paris, all these fancy shops. But as we go out to the university, then you see all these people still wearing the same threadbare coats, waiting in line for the trams as before, under communism. Yet, things are better. One of the reasons, by the way, things are better in Poland is that they never did completely collectivize agriculture at all. Petits propriétaires, small owners, still existed, and so the transition there was easier than in other places. Well, what can I say? What I'm going to talk about now is how Europeans view Europe. Also, as kind of a European, how they view the United States. I might certainly be tempted at the end to talk a little bit about that, and about human rights. We talk about globalization and all of that. José Bové lived in Los Angeles for two years. He's actually a city guy, but he made his reputation in the south of France marching, and with tractors, blocking French Air Force installations, trying to keep part of the lower Massif Central called Larzac from being turned into a place for bomb testing, and all that business. Then he took his campaign against McDonald's. McDonald's, MacDo, became identified with globalization and with Americanization. So, there's the old anti-American sentiment among intellectuals, and José Bové is that. When he came to Yale a couple years ago, Jim Scott brought him here to Yale. My daughter took him to Rudy's. That's what she did. When he came here, he came here as sort of a symbol of anti-globalization. All of you have seen images of people in Seattle throwing themselves against the police, or in Nice against police barricades, or in Italy as well. Globalization is sort of a catchall. But if you don't believe that we live in a more global society, look at the impact of the economic crisis and how quickly that spread within the last two months.
It's an obvious thing that we live in a world where Adidas, and all these shoes, and shirts, and T-shirts, are often outsourced to the poorest people they can find in Indonesia and other places. When you have some problem with your cell phone--I still don't have a cell phone, or whatever--you'll end up talking to somebody in India or Pakistan as easily as you are talking to somebody in New Jersey. One aspect of globalization that is so much more visible now than even fifteen years ago, and which fits exactly into one of the themes of this course, is obviously immigration. There are no borders anymore. The creation of the European Union, for better or for worse, means that you can go essentially from Calais all the way to Lithuania and never have your ID checked, not once. I travel on my French ID there. It's only in England and in coming back to the United States that you need a passport at all. But the result is you've got all of these immigrants. You've all seen pictures of the bodies of Moroccans, and of people from Mali, or Tunisia, or Senegal, bobbing in the sea, people trying to get into Spain. Once you got into Spain you've essentially got it made. The same passages by which Spanish refugees fled Franco's terror during and after the Spanish Civil War bring people from Mali into France. Of course, female and male sex trafficking from Moldavia in particular, from Bulgaria and from Albania--those are the three major points--is something that is just everywhere. Immigrants are not new. In the 1960s these governments said, "Please." They put up signs, in Istanbul and elsewhere: "Please come to work in France." "Come to work in Germany." So many Turks went to Germany. Then, all of a sudden, when the bottom of the economy falls out with the Arab oil embargo in 1973 and 1974, some of these people who helped make the economy run, and who still help make the economy run--there's a whole underground economy, and they do the jobs lots of other people won't--suddenly they say, "We don't want them."
One of the risks--an obvious risk to anyone who has studied Germany in the 1920s and 1930s, which you have--is that economic crisis causes people to stereotype and to scapegoat. In countries like France, the Gaullists made their pact with the devil and joined with the National Front, an over-the-top, aggressively racist political party whose leader, Jean-Marie Le Pen, was a torturer in Algeria, and who described the Holocaust as "a minor detail" of World War II, and whose supporters are négationnistes, negationists who believe that there wasn't a Holocaust. When their discourse becomes extremely, extremely--not only prevalent but acceptable, then you have a problem. Even in Switzerland, where it is very hard to become a resident of Switzerland, which does have a large immigrant population, you had a party of the extreme right. In Denmark, one of the most tolerant places one could ever imagine, you had one of the most over-the-top right-wing organizations--and still have them. Jörg Haider, who just got himself killed running his car off the road a couple weeks ago in Austria--he was, unapologetic is probably a bit too strong, but he said things were much better when the Nazis were in control of the economy: we didn't have all these other people around. Economic crisis, and national stereotyping, and racism are a recipe for disaster. Even in countries where democracy really, really works--again returning to the case of Poland--you have black players being taunted in soccer games. Poland, like other countries, like Spain, has some of the worst of the kind of racist baiting that goes on when these clubs play. In France, with Paris Saint-Germain, that's another classic example, or Lazio, which is Mussolini's granddaughter's favorite team in Italy. Then you've got a real problem.
When it becomes acceptable--and maybe some of you may consider this unfair--when Sarkozy, president of France--he's the son of Hungarian immigrants--borrows the language of racism, the language of Le Pen, to help put him over the top--and when someone interviewed Le Pen and asked, "Why do you do so badly in the elections?" he said, and he was right for once, "Because they said the same things we're saying"--then you've got a problem. "Fortress Europe" may be trying--all these human rights documents give people the right to emigrate, but not to immigrate. How these countries, including ours, treat people who are legal immigrants and those who are illegal immigrants is a true test of the kinds of values that they have. This is an obvious thing to say, but this is the future. This is an ongoing problem, an ongoing challenge, in every single European country. Toleration, civic harmony, generosity, caring in hard times--all are under assault. Our country has never been immune from that as well. This is an obvious case, but it's something that's going to concern people that work on Europe. Look at the role of xenophobia in the rise of the right in the 1920s and, above all, in the 1930s. It's the same thing over and over again. Just to end with this, there's the question of human rights. Europeans have a hard time understanding the United States. They don't understand capital punishment. They don't understand why you can just pick up a gun. You can't vote, but you can buy a machine gun at some gun show almost anywhere you are, north or south. They can't understand that. One of the other things they can't understand is why in this country we have a deep, abiding, institutionalized belief in the right to bear arms, etc., etc., and in civic rights or civil rights, your rights as defined by being a member of the state, but we have often not accepted human rights as a category. Europeans are often just mystified by this.
Let me give you an example there. Again, this is not politics, but I can't help saying this. It's very difficult to explain to people how it is that the United States in the last few years finds itself on a list of countries that torture. Not big-time, not Nazi Germany, not Stalin, not even the level of Pinochet, whom they tried to extradite--they tried to do everything. Not on the level of Milosevic, who finally was carted off to a tribunal. But the United States, in the smirks of President George Bush, and Cheney, and these people--these people put us on the list of torturers. Guantanamo hurt the United States, the view that people have of the United States, in ways that are simply unimaginable. The idea that these people--some of them are some really bad people, other people just got sort of caught up in the wrong thing--but even if they're bad people, they never had charges pressed against them. You see them chained to the ground with their little orange uniforms. You see the images that came out of the prisons, or you have Blackwater or these private contractors gunning down civilians with impunity. This stuff didn't used to happen in this country. Even during Vietnam, when Lieutenant Calley murdered all those people in Vietnam--you don't remember Vietnam; Bob sitting among you and a few others remember Vietnam--Calley went on trial. But when states become involved with this, with kidnapping people off the streets--what do they call it?--and secret plane flights to England, or to wherever, this is what made the United States lose so much of its image, of its respect. It's incredible. Even in a place that I live, with 330 people--and people are not terribly politicized; politics is still families that have hated each other for generations--but there is this image of, "How could this happen in the United States?" It was always the place that you wanted to go to, because things were fair. Things were right.
I believe--nobody asked me, but since we're talking about the view of Europeans--I believe that people like Bush and Cheney ought to go before the tribunal at The Hague, if human rights is going to mean anything. Just because they are from the most powerful country in the world doesn't mean that they shouldn't face the same kinds of standards that you all believe in. It should be that way. Bernard Kouchner is a sort of moderate politician in France. He's somewhat socialist, but he's in the government of Sarko, Sarkozy. He was one of the original creators of Médecins sans frontières, Doctors Without Borders. French, but not just French--Americans and other people, many of you may do this, go off and try to help. I have a friend who's a physician's assistant who goes off to Guatemala all the time to help people there and in Nicaragua. Kouchner is a really good guy. He's very pro-American. He said--and this is just chilling, it ought to be chilling for you--he said that the magic is done. "The magic is over." That's exactly what he said. He said it in English, too. He said, "The magic is over." What was the magic? It was what this country represents to Europeans. The magic is over. Then he paused and he said, "Things will never be the same again." So, I guess just in conclusion, it's up to you to believe in human rights and believe in the value of people, whether they're clandestine or legal immigrants or not, and that human rights should be written on the face of this country as well, and that you can return and restore that magic.
MIT Introduction to Neural Computation, Spring 2018 -- Lecture 13: Spectral Analysis, Part 3
[AUDIO PLAYBACK] - Good morning, class. [END PLAYBACK] MICHALE FEE: Hey, let's go ahead and get started. So we're going to finish spectral analysis today. So we are going to learn how to make a graphical representation like this of the spectral and temporal structure of time series, or in this case, a speech signal recorded on a microphone. Well, actually let me just tell you exactly what it is that we're looking at here. So this is a spectrogram that displays the amount of power in this signal as a function of time and as a function of frequency. So you remember we've been learning how to construct the spectrum of a signal. And today, we're going to learn how to construct a representation like this, called a spectrogram, that shows how that spectrum varies over time. So as you recall, we have learned how to compute the Fourier transform of a signal. This is one of the signals that we actually started with. So if you compute the Fourier transform of this square wave, you can see that in the frequency domain, now we plot, essentially, the amount of the components of this signal at different frequencies. So the Fourier transform of this square wave has a number of peaks. Each of these peaks corresponds to a cosine contribution to this time series, OK? All right. And we also discussed how you can compute the power spectrum of a signal from the Fourier transform. So here what I've done is I've taken the Fourier transform of this square wave. And now we take the square magnitude of each of these values, and we just plot the spectrum, the square magnitude of just the positive frequency components. For real-valued functions, the power spectrum is symmetric. The power at each of these frequencies in the positive half--for the positive frequencies--is exactly the same as the power at the negative frequencies. So if we plot the power spectrum of that square wave, we can see that there are multiple peaks at regular intervals. 
Now the problem with plotting power spectra on a linear scale here is that you often have contributions-- important contributions to signals that actually have a very small amount of power when you plot them on a linear scale. And so you can barely see them. You can barely see those contributions at these frequencies here on a linear scale. But if you plot this on a log scale, you can see the spectrum much more easily. So for example, what we've done here is we've plotted the square magnitude of the Fourier transform, taken the log base 10 of that spectrum-- spectrum is the square magnitude of the Fourier transform. And now we can take the log base 10 of the power spectrum to get the power in units of bels, and multiply that by 10 to get the power in units of decibels, OK? So each tick mark here of size 10 corresponds to one order of magnitude in power, OK? So this peak here is about 10 dB lower than that peak there, and that corresponds to about a factor of 10 lower in power. OK, any questions about that? I want to be able-- want you to understand what these units of decibels are. They're going to be on the test. OK. Questions about that? You want to just ask me right now? OK. Remember this. OK. And keep in mind that the power in a signal is proportional to the square of the amplitude, OK? So if I tell you that a signal has 10 times as much amplitude, it's going to have 100 times as much power. 100 times as much power is 10 to the 2 bels, which is 20 decibels. Does that make sense? OK. All right. So we also talked about some Fourier transforms of different kinds of functions. So this is the Fourier transform of a square pulse. So here I showed you a square pulse that has a width of 100 milliseconds. The Fourier transform is this sinc function, and for a square pulse of width 100 milliseconds, the sinc function has a half-- sorry-- has a full width at half height of 12 hertz. 
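Those decibel conversions are worth checking with concrete numbers. The class's examples are in Matlab; this is just a small Python sketch with made-up powers, showing that 10 times the amplitude means 100 times the power, which is 20 dB:

```python
import numpy as np

def power_db(power, ref=1.0):
    # Power ratio in decibels: 10 * log10(P / P_ref).
    return 10.0 * np.log10(power / ref)

# Power is proportional to amplitude squared: a signal with 10x the
# amplitude has 100x the power, and 100x the power is 20 dB.
p_small = 2.0 ** 2    # power of a signal with amplitude 2 (made-up numbers)
p_big = 20.0 ** 2     # power of a signal with 10x that amplitude
print(power_db(p_big, ref=p_small))   # -> 20.0
```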
If we have a square pulse that's five times as long, 500 milliseconds long, the Fourier transform is the sinc function again, but the central lobe here has a width of 2.4 hertz. So you can see that the longer the pulse, the narrower the structure in the frequency domain. Up here, if we look at the Fourier transform of a square pulse that's 25 milliseconds long, then the Fourier transform is again a sinc function, and the width of that central lobe is 48 hertz. So you can see that the width in the time domain and the width in the frequency domain are inversely related. So the product of the width in the time domain and the width in the frequency domain is constant. And that constant is called the time-bandwidth product of that signal. The time-bandwidth product of this square pulse and sinc function is constant--it's independent of the width. It's a characteristic of that functional form. OK. Now we also talked about the convolution theorem, which relates multiplication in the time domain to convolution in the frequency domain. So for example, if we have a square pulse in time multiplied by a cosine function in time to get a windowed cosine--so this function is zero everywhere except it's a cosine within this window--we can compute the Fourier transform of this windowed cosine function by convolving the Fourier transform of the square pulse with the Fourier transform of the cosine function, like this. So the Fourier transform of the square pulse is, again, this sinc function. The Fourier transform of the cosine is these two delta functions. Now if we convolve the sinc function with those two delta functions, we get a copy of that sinc function at the location of each of those delta functions. And that is the Fourier transform, OK? Any questions about that? Just a quick review of things we've been talking about. All right. So we can look at this Fourier transform here. 
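You can check that inverse relationship numerically. This is not the course's Matlab code, just a rough Python/NumPy sketch with an arbitrary 1 kHz sampling rate: the main lobe of a 100 ms pulse's spectrum has a full width at half height near 12 Hz, and a 5x longer pulse gives a lobe about 5x narrower.

```python
import numpy as np

def main_lobe_halfwidth(pulse_width, fs=1000.0, dur=10.0):
    # Square pulse of the given width (seconds), zero-padded out to
    # `dur` seconds so the frequency grid is fine (0.1 Hz bins).
    n = int(dur * fs)
    t = np.arange(n) / fs
    x = (t < pulse_width).astype(float)
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    # First frequency where the magnitude drops below half its DC peak:
    return f[np.argmax(X < X[0] / 2)]

w_100ms = main_lobe_halfwidth(0.1)   # ~6 Hz half-width -> ~12 Hz full width
w_500ms = main_lobe_halfwidth(0.5)   # ~5x narrower for a 5x longer pulse
print(w_100ms, w_500ms)
```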
We can look at the power spectrum of this windowed cosine function, like this. So there's the windowed cosine function. The power spectrum-- the power spectrum plotted on a linear scale is just the square magnitude of what I've plotted here. And we're just going to plot the positive frequencies. That's what the power spectrum of that signal looks like on a log scale. So you can see that it has a peak at 20 hertz, which was the frequency of the cosine function. You see some little wiggles out here. But if you look on a log scale, you can see that those wiggles off to the side are actually quite significant. The first side lobe there has a power that's about 1/10 of the central peak. That may not matter, sometimes, when you're looking at the spectrum of a signal, but sometimes it will matter because those side lobes there will interfere. They'll mask the spectrum of other components of this signal that you may be interested in. We also talked about how this spectrum depends on the function that you multiply your cosine by here. So for example, if you take a cosine and you multiply it by a Gaussian, the power spectrum has the shape of a Gaussian--a Gaussian that has a peak at 20 hertz. And if you look at that spectrum on a log scale, you can see that you've lost all of these high-frequency wiggles, up here. All of those wiggles come from the sharp edge of this square pulse windowing function, OK? So the shape of the spectrum that you get depends a lot on how you window the function that you're looking at. Questions about that? Right, again, more review. OK. So we talked about estimating the spectrum of a signal. If you have many different measurements of some signal, you can actually just compute the spectrum of each one. This little hat here means an estimate of the spectrum. You compute some estimate of the spectrum of each of those trials, samples of your data, and you can just average those together. 
OK, now if you have a continuous signal, you can also-- you could estimate the spectrum just by taking the Fourier transform of a long recording of your signal. But it's much better to break your signal into small pieces, compute a spectral estimate of each one of those small pieces and average those together. Now how do you construct a small sample of a signal? If you have a continuous signal, how do you take a small sample of it? Well, you can think about that as taking your continuous signal and multiplying it by a square window. Setting everything outside that window to zero and just keeping the part that's in that window. And you know that when you take a signal and you multiply it by a square window, what have you done? You've convolved the spectrum of this original signal with the spectrum of this square pulse. And that spectrum of the square pulse is really a nasty looking thing, right? It is what we call the Dirichlet kernel, which is just the power spectrum of a square pulse that we just talked about, OK? So that's called the Dirichlet kernel. And using a square pulse to select out a sample of data introduces two errors into your spectral estimate: narrowband bias--it broadens your estimate of the spectrum of, let's say, sinusoidal or periodic components in your signal--and it also introduces these side lobes. So the way we solve that problem is we break our signal into little pieces, multiply each of those little pieces by a smoother windowing function, by something that isn't a square pulse--multiply it by something that maybe looks like a Gaussian, or half of a cosine function. That gives us what we call tapered segments of our data. We can estimate the spectrum of those tapered pieces and average those together, OK? Any questions? OK, again, that's a review. And I showed you briefly what happens if we take a little piece of signal. The blue is white noise with a little bit of this periodic sine function added to it. 
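That break-into-pieces, taper, and average recipe is essentially what `scipy.signal.welch` does (with a Hann taper by default). This is a hedged Python sketch, not the wspec.m from class, and the signal parameters are invented:

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
# White noise plus a weak 20 Hz sinusoid, like the example above.
x = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 20.0 * t)

# Break the signal into 1 s segments, taper each one, average the spectra.
f, pxx = signal.welch(x, fs=fs, nperseg=1000)
print(f[np.argmax(pxx)])   # the 20 Hz peak stands out of the broadband noise
```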
And if you run that analysis, you can see that there is a large component of the spectrum that's due to the white noise. That's this broadband component here. And that sinusoidal component there gives you this peak in the spectrum, OK? And there's a-- I've posted-- or Daniel's posted a function called wspec.m that implements this spectral estimate like this. So now today, we're going to turn to estimating time varying signals, estimating the spectrum of time varying signals. So this is a microphone recording of a speech signal. Let me see if I can play that. [AUDIO PLAYBACK] - Hello. [END PLAYBACK] MICHALE FEE: All right. So that's just me saying hello in a robotic voice. OK. So that is the signal. That's basically voltage recorded on the output of a microphone. It's got some interesting structure in it, right? So first these little pulses here, you see this kind of periodic pulse. Those are called glottal pulses. Does anyone know what those are? What produces those? No? OK, so when you speak a voiced sound, your vocal cords are vibrating. You have two pieces of flexible tissue that are close to each other in your trachea. As air flows up through your trachea, the air pressure builds up and pushes the glottal folds apart. Air begins to flow rapidly through that open space. At high velocities, the velocity flowing through the constriction is higher than the velocity of air anywhere else in the trachea because it's flowing through a tiny little space. At high velocities, at constrictions where you have a high fluid flow, the pressure actually drops. And that pulls the vocal folds back together again. When they snap together, all the airflow stops, and you have a pulse of negative pressure above the glottis, right? Imagine you have airflow coming up. And all of a sudden, you pinch it off. There's a sudden drop in the pressure as that mass of air keeps flowing up, but there's nothing more coming up below. So you get a sharp drop in the pressure. 
Then the air pressure builds up again. The glottal folds open, velocity increases, and they snap shut again. And so that's what happens as you're talking, OK? And so that periodic signal right there, those pulses in pressure--the microphone is recording pressure, remember. So those pulses are due to your glottis snapping shut each time it closes during the cycle, during that oscillatory cycle. The period of those glottal pulses is about 10 milliseconds in men and about 5 milliseconds in women. OK. But you can see there's a lot of other structure, changes in this signal that go on through time. But let's start by just looking at the spectrum of that whole signal. Now what might we expect? So if you have periodic pulses at 10 millisecond period, what should the spectrum look like? If you have a train of pulses, let's say delta functions with 10 millisecond period, what would the spectrum of that look like? Anybody remember what the spectrum of a train of pulses looks like? Almost, yes. There would be. But there would be other things as well. What would a signal look like that just has a peak at 100 hertz? What is that? Has one peak at 100 hertz? Or let's say its [INAUDIBLE] Fourier transform would have a peak at 100 and at minus 100. That's just a cosine. That's not a train of pulses. What's the Fourier transform of a train of pulses? Those of you who are concentrating on this right now are going to be really glad on the midterm. What's the Fourier transform of a train of pulses? OK, let's go back to here, because there was a bit of a hint here at the beginning of lecture. What's the Fourier transform of a square wave? Any idea what happens if we make these pulses narrower and narrower? As the pulses get more and more narrow, these peaks get bigger and bigger. And as we go to a train of delta functions--the Fourier transform of a train of delta functions in time is just a train of delta functions in frequency. 
The spacing between the peaks in frequency is just 1 over the spacing between the peaks in time, right? Make sure you know that. OK, so now let's go back to our speech signal. These are almost like delta functions. Maybe not quite, but for now, let's pretend they are. If those are a train of delta functions spaced at 10 milliseconds, what is our spectrum going to look like? I just said it. What is it going to look like? Yep. Spaced by? AUDIENCE: One. MICHALE FEE: Which is? 100 hertz. Good. So here's the spectrum of that speech signal. What do you see? You see a train of delta functions separated by about 100 hertz, right? That's a kilohertz, that's 500 hertz, that's 100 hertz. So you get a train of delta functions separated by 100 hertz, OK? That's called a harmonic stack. OK. And the spectrum of a speech signal has a harmonic stack because the signal has these short little pulses of pressure in them. OK, what are these bumps here? Why is there a bump here, a bump here, and a bump here? Does anyone know that? What is it that shapes the sound as you speak? That makes an "ooh" sound different from an "ahh?" [INAUDIBLE] This is hello. Sorry, I'm having trouble with my pointer. That's hello. What is it that makes all these sounds different? So the sound, those pulses, are made down at the glottis. As those pulses propagate up from your glottis to your lips, they pass through a [AUDIO OUT] filter, which is your mouth. And the shape of that filter is controlled by the closure of your lips, by where your tongue is, where different parts of your tongue are closing the opening in your mouth. And all of those things produce filters that have peaks. And the vocal filter has three main peaks that move around as you change the shape of your mouth. And those are called formants, OK? OK. Now you can see that this temporal structure, this spectral structure, isn't constant in time. It changes--right--throughout this word. 
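You can verify the harmonic-stack claim directly: the Fourier transform of a pulse train with a 10 ms period has peaks every 100 Hz. The course works in Matlab; this is an illustrative Python check with an arbitrary 10 kHz sampling rate:

```python
import numpy as np

fs = 10000.0                 # 10 kHz sampling rate (illustrative)
n = int(fs)                  # one second of signal
x = np.zeros(n)
x[::100] = 1.0               # an impulse every 100 samples = 10 ms period

X = np.abs(np.fft.rfft(x))
f = np.fft.rfftfreq(n, 1.0 / fs)
peaks = f[X > 0.5 * X.max()]   # the harmonic stack
print(peaks[:5])               # peaks at 0, 100, 200, ... hertz
```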
So what we can do is we can take that signal, and we can compute the spectrum of little parts of it, OK? So we can take that signal and multiply it by a window here, a taper here, and get a little sample of the speech signal and calculate the spectrum of it just by Fourier transforming, OK? We can do the same thing. Shift it over a little bit and compute the spectrum of that signal, all right? So we're going to take a little piece of the signal that has width in time, capital T--OK--that's the width of the window. We're going to multiply it by a taper, compute the spectrum. And we're going to shift that window by a smaller amount, delta t, so that you have overlapping windows. Compute the spectrum of each one, and then stack all of those up next to each other. So now you've got a spectrum that's a function of time and frequency, OK? So each column is the spectrum of one little piece of the sound at one moment in time. Does that make sense? OK. And that's where this spectrogram comes from. Here in this spectrogram, you can see these horizontal striations are the harmonic stack produced by the glottal pulses. This is a really key way that people study the mechanisms of sound production in speech, and animal vocalizations, and all kinds of signals, more generally, OK? All right, any questions about that? All right. Now what's really cool is that you can actually focus on different things in a signal, OK? So for example, if I compute the spectrogram with a window that's really long, then I have high resolution in frequency, and the spectrogram looks like this. But if I compute the spectrogram with little windows in time that are very short, then my frequency resolution is very poor, but the temporal resolution is very high. And now you can see the spectrum. You can see these vertical striations. Those vertical striations correspond to the individual glottal pulses. 
And we can basically see the spectrum of each pulse coming through the vocal tract. Pretty cool, right? So how you compute the spectrum depends on what you're actually interested in. If you want to focus on the glottal pulses, for example, the pitch of the speech, you look with a long time window. If you want to focus on the formants--here you can see the formants very nicely--you would use a short time window. Any questions? So now I'm going to talk more about the kinds of tapers that you use to get the best possible spectral estimate. So a perfect taper, in a sense, would give you perfect temporal resolution. It would give you really fine temporal resolution. And it would give you really fine frequency resolution. But because there is a fundamental limit on the time-bandwidth product, you can't measure frequency infinitely well with an infinitely short sample of a signal. Imagine you have a sine wave, and you took, like, two samples of a sine wave. It would be really hard to figure out the frequency, whereas if you have many, many, many samples of a sine wave, you can figure out the frequency. So there's a fundamental limit there. So there's no such thing as a perfect taper. If I want to take a sample of my signal in time, if I have a sample that's limited in time, if it goes from one time to another time and is zero outside of that, then in frequency, it's spread out to infinity. And so all we can do is choose the trade-off. We can either have things look worse in time and better in frequency, or better in time and worse in frequency. So the other problem is that when we taper a signal, we're throwing away data here at the edges. But if you take a square window and you keep all the data within that square window, well, you've got all the data in that window. But as soon as you taper it, you're throwing away stuff at the edges. So you taper it to make it smooth and improve the spectral estimate, but you're throwing away data. 
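The long-window versus short-window trade-off is easy to see with `scipy.signal.spectrogram`. This is a hedged Python sketch, not the lecture's code; a steady 440 Hz tone stands in for the speech signal:

```python
import numpy as np
from scipy import signal

fs = 8000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440.0 * t)   # a steady tone as a stand-in signal

# Long window: fine frequency resolution, coarse time resolution.
f_long, t_long, s_long = signal.spectrogram(x, fs=fs, nperseg=1024)
# Short window: coarse frequency resolution, fine time resolution.
f_short, t_short, s_short = signal.spectrogram(x, fs=fs, nperseg=64)

print(f_long.size, f_short.size)   # far more frequency bins with the long window
print(t_long.size, t_short.size)   # far more time bins with the short window
```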
So you can actually compute the optimal taper. Here's how you do that. What we're going to do is we're going to think of this as what's called the spectral concentration problem. We're going to find a function w. This is a tapering function that is limited in time from some minus T/2 to plus T/2. So it's 0 outside of that. It concentrates the maximum amount of energy in its Fourier transform, in its power spectrum, within a window that has width 2W. So W is this [INAUDIBLE]. Does that make sense? We're going to find a function w that concentrates as much energy as possible in a square window. And of course, that's going to have the result that the energy in the side lobes is going to be as small as possible. And there are many different optimizations you can do in principle. But this particular optimization is about getting as much of the power as possible into a central lobe. Here's this function of time. We simply calculate the Fourier transform of w. We call that U of f. And now we just write down a parameter that says how much of the power in U is in the window from minus W to W, compared to how much power there is in U overall, over all frequencies. So if lambda is 1, then all of the power is between minus W and W. Does that make sense? So you can actually solve this optimization problem, maximize lambda, and what you find is that there's not just one function that gives very good concentration of the power into this band. There's actually a family of functions. There are actually k of these functions, where k is twice the bandwidth times the duration of the window, minus 1. So there is a family of k functions called Slepian functions for which lambda is very close to 1. They are also called discrete prolate spheroidal sequence functions, dpss. And that's the command that Matlab uses to find those functions: dpss. Here's what they look like. So these are five functions that give lambda close to 1 for a particular bandwidth and a particular time window. 
The n equals 1 function is a single peak. It looks a lot like a Gaussian, but it's not a Gaussian. What's fundamentally different between this function and a Gaussian? This function goes to 0 outside that time window, whereas a Gaussian goes on forever. The second Slepian in this family has a peak, a positive peak in the left half, a negative peak in the right. The third one has positive, negative, positive, and then goes to 0. And the higher order functions just have more wiggles. They all have the property that they go to 0 at the edges. And the other interesting property is that these functions are all orthogonal to each other. That means if you multiply this function times that function and integrate, you get 0. Multiply any two of these functions and integrate over the window minus T/2 to plus T/2, the integral [INAUDIBLE] What that means is that the spectral estimates you get by windowing your data with each of these functions separately are statistically independent. You actually have multiple different estimates of the spectrum from the same little piece of [AUDIO OUT] The other cool thing is that, remember, the problem with windowing our [AUDIO OUT] with one peak like this is we were throwing away data at the edges. Well, notice that the higher order Slepian functions have big peaks at the edges. And so they are actually measuring the spectrum of the parts of the signal that are at the edge of the window. Now notice that those functions start crashing into the edges. So you start getting sharp, sharp edges out here, which is why the higher order functions have worse ripples outside that central lobe. Any questions about that? It's a lot [AUDIO OUT] Just remember that for a given width of the window in time and width in frequency, there are multiple of these functions that put the maximum amount of power in this window 2W. So that's great. So good question. 
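SciPy exposes the same tapers as Matlab's dpss, and the two properties just described, concentration (lambda) near 1 and mutual orthogonality, are easy to confirm. A Python sketch with arbitrary values of N and NW:

```python
import numpy as np
from scipy.signal import windows

N = 512      # window length in samples (arbitrary choice)
NW = 2.5     # time-bandwidth product p
K = 4        # number of tapers, k = 2p - 1
tapers, ratios = windows.dpss(N, NW, Kmax=K, return_ratios=True)

# ratios are the concentrations (lambda): very close to 1 for these tapers.
print(ratios)
# The Slepians are mutually orthogonal (and unit-norm here), so the
# Gram matrix is the identity.
gram = tapers @ tapers.T
print(np.allclose(gram, np.eye(K)))   # -> True
```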
What would you do if you're trying to measure something and you measure it five different times? How would you get an estimate of what the actual number is? How would you get an error bar on how good your estimate is? Standard deviation of your estimates, right. And that's exactly what you do. So not only can you get a good estimate of the average spectrum by averaging all of these things together, but you can actually get an error bar. And that's really cool. So here's the procedure that you use. And this is what's in that little function wspec.m. So you select a time window of a particular width. How do you know what width to choose? That's part of it. The other thing is, if your signal is changing rapidly in time and you actually care about that change, you should choose--you're more interested in temporal resolution. If your signal is really constant, like it doesn't change very fast, then you can use bigger windows. So we're going to choose a time width. Then what you're going to do is you're going to select this parameter p, which is just the time-bandwidth product. And if you've already chosen T, what you're doing is you're just choosing the frequency resolution. Once you compute p and you know T, you just stuff those numbers into this Matlab function dpss, which sends back to you this set of functions here. It sends you back k of those functions, where once you've chosen p, k is just 2p minus 1. And then what do you do? You just take your little snippet of data. You multiply it by the first taper, compute the spectrum, compute the Fourier transform. And then take your little piece of data, multiply it by the second one, compute the Fourier transform, and the power spectrum. And then you're just going to average. You take your piece of data inside the window, Fourier transform it, take the square magnitude, and then average all those spectra together. You see, this is the Fourier transform right here. This sum is the Fourier transform. 
We take the square magnitude of that to get the spectral estimate of that particular sample. Then we're going to average that spectrum together for all the different windowing, tapering functions. Now you get multiple spectral estimates. You're going to average them together to get the mean. And you can also look at the variance to get the standard deviation. Questions? Let's stop there. That was a lot of stuff. Let's take a breath and [INAUDIBLE] to see whether we [AUDIO OUT] Questions? No. Don't worry about it. This is a representation of the Fourier transform. You sum over all the time samples. This, you will just do as a fast Fourier transform. So you'll take the data, multiply it by this taper function, which is the Slepian, and then do the Fourier transform, take the square magnitude. We just want to make sure that we've got the basic idea. So you've got a long piece of data. You're going to choose some time window, capital T. You're going to choose a bandwidth W, or this time-bandwidth product p. Send T and p to this dpss function. It will send you back a bunch of these dpss functions that fit in that window. Now you're going to take your piece of data. You're going to break it into little windows of that length, multiply them by each one of the Slepian functions. Do the Fourier transform of each one of those products, take the square magnitude of each one to get the spectrum, and then average all those spectra together. So now, what does p do? p chooses the bandwidth of the Slepian functions in that window. So if you have a window that's 100 milliseconds wide--so we're going to take our data and break it into little pieces that are 100 milliseconds long; each goes from minus 50 to plus 50 milliseconds. If you choose a window that has a narrow bandwidth, a small p, then the bandwidth is narrow. The bandwidth is narrow because the function is wide in time. Or you can choose a large bandwidth. What does that mean? It's a narrower function in time. 
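Putting those steps together, here is what that recipe might look like in Python. This is a sketch of the procedure described above, not the course's wspec.m, and the 100 Hz test signal is invented:

```python
import numpy as np
from scipy.signal import windows

def multitaper_spectrum(x, fs, NW=2.5):
    # Multitaper estimate for one data window: taper with each Slepian,
    # Fourier transform, square magnitude, then average across tapers.
    K = int(2 * NW - 1)                     # k = 2p - 1 tapers
    tapers = windows.dpss(len(x), NW, Kmax=K)
    specs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    # Mean across tapers is the estimate; the std gives an error bar.
    return f, specs.mean(axis=0), specs.std(axis=0)

fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)             # one 500 ms window of data
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 100.0 * t) + 0.1 * rng.standard_normal(t.size)
f, mean_spec, std_spec = multitaper_spectrum(x, fs)
print(f[np.argmax(mean_spec)])              # peak lands near 100 Hz
```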
Now if p is 5, you have a broader bandwidth. And that means that the window, the tapering function, is narrower in time. Look at the Fourier transform of each of two different tapering functions. You can see that if p equals 1.5, the tapering function is broad. But its Fourier transform, the kernel in frequency space, is narrower. Take the p equals 5 function, a broader bandwidth: it's narrower in time and broader in frequency. Does that make sense? p, for a given size time window, just tells you how many different samples we're going to take within that time window. So let me just go back to this example right here. So I took this speech signal that I just showed you that was recorded on the microphone. I chose a time window of 50 milliseconds. So I broke the speech signal down into little 50 millisecond chunks. I chose a bandwidth of 60 hertz. That corresponds to p equals 1.5 and k equals 2. That gives me back a bunch of these little functions. And I computed this spectrogram. For this spectrogram, I chose a shorter time window, eight milliseconds, and a bandwidth of 375 hertz, which also corresponds to p equals 1.5 and k equals 2. And if you compute the spectrogram with those parameters, you get this example right here. So in this case, I kept the same p, the same time-bandwidth product, but I made the time window shorter. So the best way to do this, when you're actually doing this practically, is just to take a signal and try some of these different things [INAUDIBLE] That's really the best way to do it. You can't--I don't recommend trying to think through beforehand too much exactly what it's going to look like if you choose these different values, when it's easier just to try different things and see what it looks like. Yes. AUDIENCE: What are you looking for? MICHALE FEE: Well, it depends on what you're trying to get out of the data. If you want to visualize formants, you can see that the formants are much clearer. 
These different windows give you a different view on the data. So just look through different windows and see what looks interesting in the results. That's the best way to do it. So I just want to say one more word about this time-bandwidth product. So the time-bandwidth product of any function is greater than 1. So you can make time shorter, but the bandwidth gets worse. The way that you can think about this is that you're sort of looking at your data through a window in time and frequency. What you want is to look with infinitely fine resolution in both time and frequency, but really you can't have infinite time and frequency resolution. You're going to be smearing your view of the data with something that has a minimum area, the time-bandwidth product, which has a minimum size of 1. You can either make time small and stretch the bandwidth out, or you can stretch out time and make the bandwidth narrower. But you can't squeeze both, because of this fundamental limit on the time-bandwidth product. This all depends on how you're measuring the time and the bandwidth. These are kind of funny shaped functions. So there are different ways of measuring what the bandwidth of a signal is or what the time width of a signal is. Now, the windows that you're looking in time and frequency with have the smallest time-bandwidth product. So notice that if the time-bandwidth product is small, close to 1, the number of tapers you get in this dpss, this family of functions, is just 1. If p is 1, then k is 2p minus 1, which is 1. So you only get one window. You only get one estimate of the spectrum. But you can also choose to look at your data with [AUDIO OUT] that have a worse time-bandwidth product. Why would you do that? Why would you ever look at your data with functions that have a worse time-bandwidth product? Well, notice that if the time-bandwidth product is 2, how many functions do you have? 
Why does that matter? Because now you have three independent estimates of what that spectrum is. So sometimes you would gladly choose to have worse resolution in time and frequency, because more independent estimates means a better estimate. So sometimes your signal might be changing very slowly. And then you can use a big time-bandwidth product. It doesn't matter. Sometimes your signal is changing very rapidly in time. And so you want to keep the time-bandwidth product small. Does that begin to [INAUDIBLE] bigger time-bandwidth products, and now you get even more independent estimates. Most typically, you choose p's that go from 1.5 upward in multiples of 0.5, because then you have an integer number of k's. But usually, you choose p equals 1.5 or higher, in multiples of 0.5. If you really care about temporal resolution and frequency resolution, you want that box that's smearing out your spectrum to be as small as possible. As small as possible means it has an area of 1. That's the minimum area it can have, but that only gives you one taper. So if you really care about both time and frequency resolution, then that's the trade-off you have to make. Or you can smear out more in time, maybe more in frequency, and choose a bigger time-bandwidth product, in which case you get more tapers and a better estimate of the spectrum. So this is state-of-the-art spectral estimation. It doesn't get better than this. If you do it like this, you're doing it the best possible way. It takes a bit of digesting. So let's spend the rest of the lecture today talking about filtering. So Matlab has a bunch of really powerful filtering tools. So here's an example of the kind of thing where we use filtering. So this is a [INAUDIBLE] finch song recorded in the lab. OK, so now I want you to just listen--so you were probably listening to the song, but now listen at very low frequencies. Tell me what you hear. 
Listen at very low frequen-- [AUDIO PLAYBACK] [FINCH CHIRPING] [END PLAYBACK] [HUMS] In the background, that's hum from the building's air conditioners, air handling. It all makes this low rumbling, which adds a lot of noise to the signal that can make it hard to see where the syllables are in the [AUDIO OUT] time series. Here are the syllables right here. And that's the background noise. But the background noise is at very low frequencies. So sometimes you want to just filter stuff like that away because we don't care about the air conditioner. We care about the bird's song. OK, so we can get rid of that by applying-- what kind of filter would we apply to this signal to get rid of these low frequencies? [INAUDIBLE] AUDIENCE: A high pass. MICHALE FEE: A high-pass filter, very good. OK, so let's put a high-pass filter on this. Now, in the past we've talked about using convolution to carry out a high-pass filtering function. But Matlab has all these very powerful tools. So I wanted to show you what those look like and how to use them. OK, so this is a little piece of code that implements a high-pass filter on that signal. Now, you can see that all of that low frequency stuff is [AUDIO OUT] You have a nice clean, silent background. And now you can see the syllables on top of that background. Here's the spectrogram. You can see that all of that low frequency stuff is gone. And this is a little bit of sample code here. I just want to point out a few things. You give it the Nyquist frequency, which is just the sampling rate divided by 2. I'll explain later what that means. You set a cutoff frequency. So you tell it to cut off below 500 hertz. You put the cutoff and Nyquist frequency together, you get a ratio of those two that's basically the fraction of the spectral width that you're going to cut off. And then you tell it to give you the parameters for a Butterworth filter. It's just one of the kinds of filters that you can use.
Tell it whether it's a high-pass or low-pass. Send that filter, those filter parameters, to this function called filtfilt. You give it these two parameters, B and A, and your data vector. And when you run that, that's what the result looks like, OK? Let me play that again for you after filtering. All that low frequency hum is gone. All right, so here's an example of what-- I mean, we would never actually do this in the lab. But this is what it would look like if you wanted to emphasize that low frequency stuff. Let's say that you're the air conditioner technician who comes and wants to figure out what's wrong with the air conditioner. And it turns out that the way it sounds really is helpful. So you now do a low-pass filter. And you're going to keep the low frequency part. Because all those annoying birds are making it hard for you to hear what's wrong with the air conditioner. OK, so here's-- Didn't quite get rid of the birds. But now you can hear the low frequency stuff much better. OK, all right, so now we just did that by, again, giving it the Nyquist. The cutoff, we're going to cut off above 2,000, pass below 2,000. We're going to tell it to use a Butterworth filter, now low-pass. And again, we just pass it the parameters and the data. And it sends us back the filtered data. OK, you can also do a band-pass. OK, so a band-pass does a high-pass and a low-pass together. Now you're filtering out everything above some number and below some number. And here we give it a cutoff with two numbers. So it's going to cut off everything below 4 kilohertz and everything above 5 kilohertz. Again, we use the Butterworth filter. You leave off the tag to get a band-pass filter. And here's what that sounds like. [AUDIO PLAYBACK] [BIRDS CHIRPING] [END PLAYBACK] And that's a band-pass filter. Questions? Yeah, so there are many different ways to do this kind of filtering. Daniel, do you know how filtfilt actually implements this?
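The Matlab recipe just walked through has a close Python/SciPy equivalent, since `scipy.signal` provides `butter` and `filtfilt` with the same roles. The sampling rate, cutoff, filter order, and toy signal below are my own illustrative choices:

```python
# Sketch of the high-pass recipe described above, in Python/SciPy.
# Sample rate, cutoff, order, and test signal are illustrative choices.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 40000.0           # sampling rate, Hz
nyquist = fs / 2.0     # Nyquist frequency = sampling rate / 2
cutoff = 500.0         # cut off everything below 500 Hz

# Normalized cutoff: the fraction of the Nyquist frequency to cut at.
b, a = butter(4, cutoff / nyquist, btype='highpass')

# Fake "song plus air-conditioner hum": a 3 kHz tone plus 60 Hz rumble.
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 3000 * t) + 5.0 * np.sin(2 * np.pi * 60 * t)

y = filtfilt(b, a, x)  # zero-phase filtering: forward, then backward
```

Passing `btype='lowpass'`, or a two-element cutoff with `btype='bandpass'`, gives the other variants described in the lecture.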
Because Matlab has a bunch of different filtering functions. And this is just one of them. [INAUDIBLE] how it's actually implemented [AUDIO OUT] Right, so there's a filter function, which actually does a convolution in one direction. And filtfilt does the convolution one direction and then the other direction. And what that does is [AUDIO OUT] the output centered with respect to the input, centered [AUDIO OUT] Anyway, there are different ways of doing it. And the nice thing about-- yeah? AUDIENCE: [INAUDIBLE] MICHALE FEE: Well, for the bird data, it doesn't necessarily make all that much sense, right, on the face of it? But there are applications where there's some signal at that particular band. So for example, let's say you had a speech signal and you wanted to find out when the formants cross a certain frequency. Let's say you wanted to find out if somebody could learn to speak [AUDIO OUT] if you blocked one of their formants whenever it comes through a particular frequency. OK, so let's say I have my second formant and every time it crosses 2 kilohertz I play a burst of noise. And I ask, can I understand if I've knocked out that particular formant? I don't know why you'd want to do that. But maybe it's fun, right? So I don't know, it might be kind of cool. So then you would run a band-pass filter right over the 2-kilohertz band. And now, you'd get a big signal whenever that formant passed through that band, right? And then you would send that to an amplifier and play a noise burst into the person's ear. All right, we do things like that with birds to find out if they can learn to shift the pitch of their song in response to errors. OK, so yes, they can. Yes-- AUDIENCE: [INAUDIBLE] MICHALE FEE: Formants are the peaks in the filter that's formed by your vocal tract by the [AUDIO OUT] air channel from your glottis to your lips. The location of those peaks changes [INAUDIBLE] So ahh, ooh, the difference between those is just the location of those formant peaks.
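The formant-triggered-noise idea just described, band-pass a narrow band around 2 kilohertz and detect when energy appears in that band, might be sketched like this in Python/SciPy. The tone, band edges, and threshold are hypothetical illustrations, not values from the lecture:

```python
# Sketch of the band-pass trigger idea described above: filter a narrow
# band around 2 kHz and detect when energy in that band appears.
# All signal parameters and thresholds here are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 40000.0
t = np.arange(0, 1.0, 1.0 / fs)

# Toy "formant": a 2 kHz tone present only in the middle of the signal.
x = np.sin(2 * np.pi * 2000 * t) * ((t > 0.4) & (t < 0.6))

# Band-pass from 1.9 to 2.1 kHz: a two-element cutoff gives a band-pass.
b, a = butter(2, [1900 / (fs / 2), 2100 / (fs / 2)], btype='bandpass')
band = filtfilt(b, a, x)

# Energy envelope in the band; "trigger" when it crosses a threshold.
envelope = np.abs(band)
triggered = envelope > 0.5
```

In the real experiment the trigger would drive the noise burst; here it simply marks the time window where the band contains energy.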
All those things just have formants at different locations. AUDIENCE: [INAUDIBLE] MICHALE FEE: So explain a little bit more what you mean by analog interference. AUDIENCE: [INAUDIBLE] MICHALE FEE: Oh, OK, like 60 hertz. OK, so that's a great question. So let's say that you're doing an experiment. And you [AUDIO OUT] contamination of your signal by 60 hertz noise from the outlet. OK, it's really better to spend the time to figure out how to get rid of that noise. But let's say that you [INAUDIBLE] advisor your data [AUDIO OUT] quite figured out how to get rid of the 60 hertz yet [AUDIO OUT] How would you get rid of the 60 hertz from your signal? You could make what's called a band-stop filter where you suppress frequencies within a particular band. Put that band-stop filter at 60 hertz. The thing is, it's very hard to make a very narrow band-stop filter. So we learned this in the last lecture. How would you get rid of a particular [AUDIO OUT] Yeah, so take the Fourier transform of your signal, that 60 hertz [AUDIO OUT] one particular value of the Fourier transform. And you can just set that [AUDIO OUT] to zero. Because the filtering in that case would be knocking down a whole band of [AUDIO OUT] frequencies. AUDIENCE: [INAUDIBLE] MICHALE FEE: Well, it's just that with filtfilt, like I said, there are many different ways of doing things. filtfilt won't do that for you. But once you know this stuff that we've been learning, you can go in and do stuff. You don't have to have some Matlab function to do it. You just know how it all works and you just write a program to do it. OK, that's pretty cool, right? All right-- oh, and here's the band-stop filter. [INAUDIBLE] that lag there stop, OK? OK, let's keep going. Oh, and there's a tool here that's part of Matlab. It's called a filter visualization tool, fvtool. You just run this and you can select different kinds of filters that have different kinds of roll-off in frequency, that have different properties in the time domain.
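The narrow-notch alternative described above, take the Fourier transform, zero out the 60-hertz value, and invert, can be sketched like this in Python. The sampling rate and toy signal are my own choices; note this works cleanly here because the record length puts 60 hertz exactly on an FFT bin:

```python
# Sketch of the narrow-notch alternative described above: zero the
# 60 Hz bin of the Fourier transform directly. Signal and rates
# are made-up illustrations.
import numpy as np

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)              # 2 seconds of data
x = np.sin(2 * np.pi * 7 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

X[np.isclose(freqs, 60.0)] = 0.0             # kill exactly the 60 Hz bin
cleaned = np.fft.irfft(X, n=len(x))
```

In general the 60-hertz energy spreads over a few neighboring bins unless the record contains an integer number of hum cycles, so in practice you might zero a small neighborhood of bins.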
It's kind of fun to play with. If you have to do filtering on some signal, just play around with this. Because there are a bunch of different kinds of filters that have different weird names like Butterworth and Chebyshev and a bunch of other things that have different properties. But you can actually just play around with this and design your own filter to meet your own [AUDIO OUT] OK, so I want to end by spending a little bit of time talking about some really cool things about the Fourier transform and talk about the Nyquist-Shannon theorem. This is really kind of mind-boggling. It's pretty cool. So all right, so remember that when you take the Fourier transform-- the fast Fourier transform of something-- [INAUDIBLE] take the Fourier transform of something analytically, the Fourier transform is defined continuously. At every value of f, there's a Fourier transform. But when we do fast Fourier transforms, we've discretized time and we've discretized frequency, right? So when we take the fast Fourier transform, we get an answer back where we have a value of the Fourier transform at a bunch of discrete frequencies. So frequency is discretized. And we have frequencies, little samples of the spectrum at different frequencies, that are separated by a little delta f. [INAUDIBLE] What does that mean? Remember when we were doing a Fourier series? What was it that we had to have to write down a Fourier series, where we can write down an approximation to a function as a sum of sine waves at multiples of a common frequency? What was it about the signal in time that allowed us to do that? It's periodic. We could only do that if the signal is periodic. So when we write down our fast Fourier transform of a signal, it's discretized in time and frequency. What that means is that it's periodic in time.
So when we pass a signal that we've sampled for some duration and the fast Fourier transform algorithm passes back a spectrum that's discretized in frequency, what that means is that you can think about that signal as being periodic in time, OK? Now, when you discretize the signal in time, you've taken samples of that signal in time separated by delta t. What does that tell you about the spectrum? So when we pass the FFT algorithm a signal that's discretized in time, it passes us back this thing here, right, with positive frequencies in the first half of the vector, the negative frequencies in the second half. It's really a piece-- it's one period of a periodic spectrum. [AUDIO OUT] right? Mathematically, if our signal is discretized in time, it means the spectrum is periodic. And the FFT algorithm is passing back one period. And then there's a circular shift to get this thing. Does that make sense? OK, now, because these are real functions, this piece here is exactly equal to that piece. It's symmetric. The magnitude of the spectrum is symmetric. So what does that mean? What that means, if our signal has some bandwidth-- if the highest frequency is less than some bandwidth B-- if the sampling rate is high enough, then you can see that the frequency components here don't interact with the frequency components here. You can see that they're separated. OK, one more thing. The period of the spectrum is 1 over delta t, [INAUDIBLE] which is equal to the sampling rate. So when we have a signal that's discretized in time, the spectrum is periodic and there are multiple copies of that spectrum, of this spectrum, at intervals of [AUDIO OUT] rate.
OK, so if the sampling rate is high enough, then the positive frequencies are well separated from the negative frequencies if the sampling rate is higher than twice the bandwidth [AUDIO OUT] If I sample the signal at a slower and slower rate but it's the same signal, you can see at some point that negative frequencies are going to start crashing into the positive frequencies. So you can see that you don't run into this problem as long as the sampling rate is greater than twice the highest frequency [INAUDIBLE] So what? So who cares? What's so bad about this? Well, it turns out that if you sample at a frequency higher than twice the bandwidth, the highest frequency in the signal, then you can do something really cool. You can perfectly reconstruct the original signal even though you've sampled it only discretely. Put an arbitrary signal in. You can sample it discretely. And as long as you've sampled it at more than twice the highest frequency in the original signal, you can perfectly reconstruct the original signal. Back to this. Here's our discretely sampled signal. There is the spectrum. It's periodic. Let's say that the sampling rate is more than twice the bandwidth. How would I reconstruct the original signal? But remember that the convolution theorem says that by multiplying in the frequency domain, I'm convolving in the time domain, OK? So remember that this piece right here was the spectrum of the original signal, right? As I sampled it in time, I added these [AUDIO OUT] copies at intervals of the sampling rate. If I want to get the original signal back, I can just put a square window around this, keep that, and throw away all the others. [INAUDIBLE] By sampling regularly, I've just added these other copies. But they're far enough away that I can just throw them out. I can set them to zero. Now, when I put a square window in the frequency domain, what am I doing in the time domain? Multiply by a square window in frequency, what am I doing in time?
[INAUDIBLE] So basically what I do is I take the original signal sampled regularly in time. And I just convolve it with what? What's the Fourier transform of a square pulse? [INAUDIBLE] If I could just convolve the time domain [INAUDIBLE] with a kernel, that's the Fourier transform of that square window. It's just the sinc function. And when I do that, I get back the original function. But it's actually easier to do. Rather than convolving with a sinc function, it's easier just to multiply in the frequency domain. So I can basically get back my sampled function at arbitrarily fine [AUDIO OUT] Here's how you actually do that. That process is called zero-padding. OK, so what you can do is you can take a function, Fourier transform it, get the spectrum. And what the Fourier transform hands us back is just this piece right here [INAUDIBLE] But what I can do is I can just move those other peaks away. So that's what my FFT sends back to me. Now what am I going to do, I'm just going to push that away and add zeros in the middle. Now, inverse Fourier transform, and [INAUDIBLE] So the sampling rate is just the number of frequency samples I have times delta f. And here I'm just adding a bunch of frequency samples that are zero. And my new delta t is just going to be 1 over that new sampling rate. Here's an example. This is a little bit of code that does it. Here I've taken a sine wave at 20 hertz. You can see its 50-millisecond period, sampled four times per cycle. I just run this little zero-padding algorithm. And you can see that it sends me back these red dots [INAUDIBLE] that more completely reconstruct the sine wave that I sampled. OK, but you can do that with any function as long as the highest frequency in your original signal is less than [AUDIO OUT] half the sampling rate. [INAUDIBLE] So zero-padding, so what I showed you here is that zero-padding in the frequency domain gives you higher sampling, faster sampling, in the time domain.
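The frequency-domain zero-padding procedure just described might be sketched like this in Python. The 20-hertz sine sampled four times per cycle mirrors the demo; the exact reconstruction relies on the sine landing on an FFT bin:

```python
# Sketch of zero-padding in the frequency domain, as described above:
# insert zeros between the positive- and negative-frequency halves of
# the FFT, then inverse transform to get a finer time sampling.
import numpy as np

fs = 80.0                       # 4 samples per cycle of a 20 Hz sine
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 20 * t)  # the 20 Hz sine from the demo

N = len(x)                      # 80 samples
X = np.fft.fft(x)

pad = 4                         # upsample by 4x
Xp = np.zeros(pad * N, dtype=complex)
Xp[:N // 2] = X[:N // 2]        # positive frequencies stay at the front
Xp[-N // 2:] = X[-N // 2:]      # negative frequencies move to the end

# Inverse transform; the factor "pad" preserves the amplitude, since
# numpy's ifft divides by the (now longer) transform length.
x_fine = pad * np.real(np.fft.ifft(Xp))
t_fine = np.arange(0, 1.0, 1.0 / (pad * fs))
```

The finely sampled output passes exactly through the original samples, and in between it follows the sinc-interpolated waveform.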
OK, and you can also do the same thing. You can also zero-pad in the time domain to give finer spacing in the frequency domain. FFT samples will be closer together in the frequency domain. OK, so here's how you do that. So you take a little piece of data. You multiply it by your DPSS taper. And then just add a bunch of zeros. And then take the Fourier transform of that longer piece with all those zeros added to it. And when you do that, what you're going to get back is an FFT that has the samples in frequency more finely spaced. Your delta f is going to be smaller. Now, that doesn't [AUDIO OUT] frequency resolution. There's no magic getting around the minimum time-bandwidth product. OK? But you have more samples in frequency. All right, any questions? [INAUDIBLE] We're going to be starting a new topic next time. We're done with spectral analysis.
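The time-domain version can be sketched the same way; numpy's FFT does the padding for you through its `n` argument. The snippet below skips the DPSS taper for brevity, and the signal is an illustrative choice:

```python
# Sketch of zero-padding in the time domain, as described above:
# append zeros to the data before the FFT, which makes the frequency
# samples more closely spaced without adding resolution.
import numpy as np

fs = 1000.0
x = np.sin(2 * np.pi * 100 * np.arange(256) / fs)   # short 100 Hz snippet

X_plain = np.fft.rfft(x)                 # delta f = fs / 256
X_padded = np.fft.rfft(x, n=8 * len(x))  # delta f = fs / 2048: 8x finer

df_plain = fs / len(x)
df_padded = fs / (8 * len(x))
```

The padded spectrum just interpolates the same underlying spectral shape; the main-lobe width, set by the 256-sample window, does not shrink.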
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 2: Resistor-Capacitor Circuit and Nernst Potential
MICHALE FEE: OK, good morning, everyone. OK, so today we are going to continue the process of building our equivalent circuit model of a neuron. This model was actually developed in the late '40s and early '50s by Alan Hodgkin and Andrew Huxley, who started working on the problem of understanding how neurons make action potentials. And so they studied the squid giant axon, which is actually a very cool preparation, because that axon is actually about a millimeter across, and so you can stick wires inside of it. And they did a bunch of very cool experiments to figure out how these different ionic conductances and these different components of the circuit work together to make an action potential. So that's what we're going to continue doing, we're going to essentially continue describing and motivating the different components of this circuit. So today, we're going to get through the process of introducing a voltage-measuring device, a current source, a capacitor, a conductance, and we're going to start introducing a battery, OK? OK. So here's what we want to accomplish today. So we want to understand, kind of at the simplest level, how neurons respond to injected currents, we want to understand how membrane capacitance and membrane resistance allow neurons to integrate their inputs over time, and to filter their inputs or smooth their inputs over time-- and that particular model is called a resistor-capacitor model or an RC model of a neuron. We're going to go through how to derive the differential equations that describe that model-- it's actually quite simple, but some of you may not have been through that before, so I want to go through it step by step so we can really understand where that comes from. And we're going to learn to basically look at a current-- a pattern of current injection, and we should be able to intuitively see how the voltage of that neuron responds.
And we're going to start working on where the batteries of a neuron actually come from, OK? OK. So-- all right. So-- all right. So we're going to basically talk about the following sort of thought experiment, OK? The following conceptual idea. We're going to take a neuron and we're going to put it in a bath of saline, OK? A saltwater solution that represents the extracellular solution that neurons-- extracellular solution in the brain. And we're going to put an electrode into that neuron so that we can inject current, and we're going to put another electrode into the neuron so that we can measure the voltage, and we're going to study how this neuron responds-- how the voltage of the neuron responds to current injections, OK? Now why is it that we want to actually do that? Why is that an interesting or important experiment to do? Anybody have any idea why we would want to actually measure voltage and current for a neuron in the brain? Yes? AUDIENCE: Be able to use the mathematical model [INAUDIBLE] provided for us? MICHALE FEE: OK, but it's more than just so that we can describe it mathematically, right? It's because these things-- something about voltage and current are actually relevant to how a neuron functions. Yes? AUDIENCE: Like the resistance inside? MICHALE FEE: Yeah. So that's an important quantity, but we're looking for something more fundamental, like why is it actually important that we understand how voltage changes when a neuron has current injected into it? Habiba? AUDIENCE: Equals [INAUDIBLE] different like ion channels, a different set of voltages [INAUDIBLE]. MICHALE FEE: Exactly. So ion channels are sensitive to voltage, and the way they function depends very critically on voltage. So many-- if not most-- ion channels are voltage sensitive and are controlled by voltage, OK? And that's exactly why.
So nearly every aspect of what neurons do in the brain as you're walking around looking at things and doing things is controlled by voltage, and that goes through the voltage sensitivity of ion channels, OK? But what is it that changes the voltage in a neuron? Yes? AUDIENCE: The action potential. MICHALE FEE: That's on the output side. Yes? AUDIENCE: Is it ion concentration? MICHALE FEE: Good. That's correct. I'm looking for something a little bit different. Habiba? AUDIENCE: Do you have pumps or [INAUDIBLE]. MICHALE FEE: Yeah, those are all good answers. Not quite what I'm looking for. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes. So the answer is that the voltage of a neuron changes because-- the reason current is important is because the reason voltage changes in a neuron is because other cells are injecting current into our neuron. Sensory inputs are injecting current into our neuron, OK? Everything that a neuron receives, all the information that a neuron receives from other neurons in the network and from the outside world comes from currents being injected into that neuron, OK? And so it's really important that we understand how the neuron transforms that current input from other cells and from the sensory periphery into voltage changes that then change the behavior of ion channels. Is that clear? That link of current inputs to voltage output is really crucial, and that's why we're doing this experiment, OK? OK, and that's this point right here. OK, so one of the first things we're going to see when we go through this analysis is that neurons can perform analog integration. They can perform numerical integration over time, OK? That's pretty cool. The voltage is the integral over time of the injected current, to first order. It's the simplest behavior of a neuron. So if you measure the voltage of a neuron and you turn the current on, the voltage of the neuron will ramp up, integrating that input over time, OK? Pretty cool.
So we're going to see how that happens, why that happens biophysically. OK. So let's come back to our neuron in the dish. Let me just explain a little bit how you would actually do this experiment. So these electrodes are little pieces of glass tubing. So you take a fine glass tube about a millimeter across, you heat it up in the middle over a flame, and you pull it apart when it melts in the middle, and it makes a very sharp point. You break off the fine little thread of glass that's left, and you have a tube that narrows down to a very sharp point, but it's still a tube, and you can literally just-- there are some cells, like in the old days people studied large neurons in snails, where the cells are a millimeter across, you can take an electrode and literally just by hand poke it into the cell. And then you fill that electrode with a salt solution, and then you put a wire in the back of that electrode, and you hook it up to an amplifier. Now we want to measure the voltage in the cell. Remember, voltage is always voltage difference. We're always measuring the difference between the voltage in one place and the voltage somewhere else. So this amplifier has two inputs. It's called a differential amplifier, and we're going to hook the electrode that's in the cell to the plus terminal, we're going to put a wire in the bath, hook it to the minus terminal, and this amplifier is measuring the difference between the voltage inside the cell and the voltage outside the cell, OK? Any questions about that? So we're going to take the other half of that piece of glass that we pulled, fill it with salt solution, stick it in the cell, and we're going to hook it up to a current source. Now our current source is basically just a battery, OK? But it's got some fancy electronics such that the current that flows is equal to whatever value you set, OK? And of course, remember, that voltage is in units of volts and current is charge per second.
Charge is coulombs, so coulombs per second, and that's equal to the unit of current, which is amperes. All right. Now let's take a closer look at our little spherical neuron, our little neuron. We've chopped all the dendrites and axons off, so it's just a little sphere, and you can basically model a neuron just like any other cell as a spherical shell of insulating material, OK? In this case, a lipid bilayer. This is a phospholipid bilayer. Phospholipids are just little fat molecules that have a polar head on one side-- that means they're soluble in water on this side, and they have a non-polar tail, so they don't like to be in contact with water, and the two polar tails go end to end-- sorry, the non-polar tails go end to end, the polar heads face out into the water. Does that make sense? And they are very closely packed together so that ions can't pass through that membrane, so it's insulating. It's very thin, it's only about 23 angstroms across, OK? An angstrom is about the size of a hydrogen atom. They're very thin, OK? OK, we have saline inside. What is saline? What is it in our model? Remember on Tuesday what-- AUDIENCE: A wire. MICHALE FEE: Good. It's a wire and we have saline outside, which is also a wire. So we have two wires separated by an insulator, what is that? That's a capacitor, because it's two conductors separated by an insulator, OK? So an electrical component that behaves like a capacitor-- like if you were to build one of those, you would take like a piece of aluminum foil, put a piece of paper on it, put another piece of aluminum foil next to it, and attach wires to that, and you would squeeze the stack of aluminum foil, paper, and aluminum foil together, and that becomes a capacitor, OK? And it has a symbol that looks like this electrically. So this is now our equivalent circuit of this model neuron, OK? It's very simple.
It's a capacitor with one wire here that represents the inside of the cell, another wire that represents the outside of the cell. We have a current source that connects the outside of the cell to the inside of the cell. When we turn on the current source, it takes charges from outside here and sticks them through the electrode and pumps them into the cell. Does that make sense? This is our-- this is sort of a simplified symbol for a voltage-measuring device. The voltage difference between the inside of the cell and the outside of the cell is what we're measuring here, and that difference is called the membrane potential. It's the voltage difference between the inside and the outside of the membrane, all right? Any questions about that? Yes? AUDIENCE: There's a narrow resistance for [INAUDIBLE]. MICHALE FEE: What's that? AUDIENCE: The resistance for-- MICHALE FEE: Yes, but we're going to do it one piece at a time. So we're going to start with a capacitor. The resistor will come in a few slides. Yes? AUDIENCE: So the [INAUDIBLE]. MICHALE FEE: Exactly. So we've simplified our neuron so that it's just an insulating shell, OK? No ion channels, no current anywhere else. If we want to inject current into this simple model neuron, we have to inject it through this electrode here, OK? So we're just going down to the very simplest case, because this is already kind of interesting enough to understand just by itself. Yes? AUDIENCE: So if the cell's acting as a capacitor, is there energy stored in the myelin? MICHALE FEE: The energy is stored in the electric field that crosses the bilayer, and I'll get to that in a second. Any other questions? OK, great questions. All right, so what happens when we inject current into our neuron? As I said, the current source is pulling charges from the outside and pumping them into the inside, all right? So what happens when this goes on? So what we're doing is we are injecting current-- let's say this is our capacitor.
There are charges, there are ions on the inside that are just up against the inside of the cell membrane. There are charges on the outside, OK? And when we inject a charge from the outside to the inside-- let's put one of those charges right here. And we're going to push it into this cell, when you inject a charge, you get an excess charge on the inside of the cell membrane, OK? And what does that do? You now have more positive charges inside than outside, like-charges repel-- so it pushes one of those charges away from the outside of the membrane. Does that make sense? OK, that's kind of interesting. We took a charge, we pushed it in, and a charge comes out. Right? We have a current flowing. We have charges coming in and charges leaving. We have a current flowing through an insulator. How is that possible? It's a capacitive current, OK? No charges are actually passing through the insulator, but it looks like you have a current flowing. That's called a capacitive current. And we represent that in our diagram by a current I sub C, capacitive current that flows through the capacitor. Pretty cool, right? You have a current flowing through an insulator. That's what a capacitor is. OK. Now notice that you have a charge imbalance. You have three positive charges here and only one positive charge here. So there is an excess of two positive charges on the inside. That's because we added a positive charge to the inside and took away a positive charge from the outside, so that leaves a charge imbalance of 2, OK? What do you get between positive charge and negative charge if you hold them next to each other? What is there in between? AUDIENCE: It's attraction. MICHALE FEE: Good, it's attraction, but what is it that causes that attraction? Remember yesterday, we talked about something that produces a force on a charge, what is it? AUDIENCE: Electric field. MICHALE FEE: Good.
So there's an electric field between the positive-- the excess positive charges here and the excess negative charges here, OK? That's an electric field, all right? And that electric field stores energy. How do you know there's energy in this system, though? What could you do to demonstrate that there is energy stored in that system? Any ideas? You have two plates, two metal plates, let's say, in the metal version of this. Separated by an insulator. What would happen if you pulled away the insulator? Those two things would do that again, but louder-- boom. What does that take to make that sound? Energy, OK? So there's energy stored in that electric field. So there's a charge imbalance, there's an electric field. What does an electric field over some distance correspond to? AUDIENCE: A voltage difference. MICHALE FEE: A voltage difference, OK? Now, there's a charge imbalance and a voltage difference, and they're proportional to each other. So there's a proportionality constant that's called the capacitance, all right? If you can put a lot of charge and have a small voltage difference, that's a big capacitor. Now you can get a big capacitor just by having a big area. You can see, you can have a lot of charges with a small voltage difference if you have big plates on your capacitor, OK? So the capacitance is actually proportional to the area of the plates, and it's inversely proportional to the distance between them. It's a very thin membrane, which means you can get a lot of capacitance in a tiny area, OK? That's pretty cool. All right, any questions? So charge is coulombs, and there are 6 times 10 to the 18th elementary charges in a coulomb-- electron or monovalent ion charges. Voltage is in units of volts, and the capacitance is in units of farads. Any questions? All right. So we have our relation between voltage difference and charge difference. And what we're going to do is we're going to calculate this capacitive current.
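As a rough worked example of these relations, Q = CV and capacitance proportional to area: the specific capacitance of about 1 microfarad per square centimeter used below is a standard textbook figure for lipid bilayers, and the cell size is an illustrative choice, not a number from this lecture:

```python
# Worked example of Q = C * V and C proportional to membrane area,
# using the standard textbook specific capacitance of ~1 uF/cm^2 for
# a lipid bilayer (an assumption, not a number from the lecture).
import math

c_specific = 1e-6 / 1e-4       # 1 uF/cm^2 expressed in F/m^2
radius = 10e-6                 # a 10-micron-radius spherical cell (illustrative)

area = 4 * math.pi * radius**2          # membrane area of the sphere, m^2
C = c_specific * area                   # total membrane capacitance, farads

dV = 0.1                                # a 100 mV change in membrane potential
Q = C * dV                              # charge needed, coulombs
n_ions = Q / 1.6e-19                    # number of monovalent charges
```

The thinness of the membrane is what makes the specific capacitance so large; even so, only a few million monovalent ions need to move to change the membrane potential by 100 millivolts on a cell this size.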
How do you think we would calculate the capacitive current? Well, the capacitive current is just the rate at which the charge imbalance is changing, right? Current is just charge per unit of time. OK? So we're going to calculate the capacitive current as the time rate of change of the charge-- and I've dropped the deltas here. So capacitive current is dQ/dt, all right? And remember that Q is just CV, so the capacitive current is just C dV/dt, and the Vm here represents the membrane potential, OK? So the capacitive current through a membrane is just the capacitance times the time rate of change of the membrane potential. Any questions? OK. Pretty straightforward. Now what we're going to do is we're going to relate the injected charge to the-- sorry, the injected current to the capacitive current. And what does Kirchhoff's current law tell us? It tells us that the amount of current going into this wire has to be equal to the amount of current leaving that wire. OK? So we can write that down as follows. The difference in sign here is because the electrode current is defined as positive inward, the capacitive current is defined as positive outward. OK? So you can see that we just calculated the capacitive current, it's C dV/dt, so we can just plug that in here, and now we see this very simple relation between the injected current and the voltage. And again, the current has units of amperes, which is coulombs per second. OK, so we have that. Now we have a differential equation that describes the relation between current and voltage, we can just integrate it to get the solution. So the membrane potential will just be some initial membrane potential at time 0 plus 1 over C integral over time of the injected current. Just integrate both sides. You get V here, you get integral of I there, and divide both sides by C. Any questions? It's either really confusing or really obvious. Yeah? Everybody OK? All right, good. Now what is this? What is the integral of current over time?
AUDIENCE: It's charge [INAUDIBLE] MICHALE FEE: Good. It's the amount of charge you injected between time 0 and time t, right? And what is the amount of charge you inject-- if I tell you that I injected an amount of charge delta Q, how much did I change the voltage? Delta V is delta Q over C, and that's exactly right. So the voltage is just the starting voltage plus delta voltage. Does that make sense? This integral here-- that part is just the amount of charge you injected, and dividing by C gives you the change in the voltage. So the voltage is just the starting voltage plus delta V, OK? Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: What's that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Oh. Because this equation here just came from here. That was our relation between charge imbalance and voltage difference. Yeah? If it's not clear, just please ask, thank you. OK. There we go. So what is the integral of a constant? It's just the constant times time, right? So our voltage is just some initial voltage plus the injected current over C times time. And so you can see where this comes from, right? When you turn the current on, the voltage just increases linearly over time with a slope that's given by the current divided by the capacitance. OK? All right, any questions? You guys are being very quiet. This is the point where I start feeling nervous, I went too fast. Yes? AUDIENCE: You continually draw the curve for a while-- MICHALE FEE: Yep. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes, it does. And it breaks at about a volt or so. Because the electric field gets so strong, it literally just rips the atoms apart in the molecules of the lipid bilayer. Yes? AUDIENCE: Why do you [INAUDIBLE] MICHALE FEE: Sorry, I shouldn't say-- it doesn't rip the atoms apart, it rips the molecules apart. You need much higher electric fields to do that. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Oh. Because we're integrating from time 0 to time t. We want to know the voltage at time t, OK?
We're starting at 0, we're integrating the current from time 0 to time t, which is where we're wanting to know the voltage, right? So you have to integrate the current from 0 to t. We can't use t in here. t is the endpoint. Yeah. Does that make sense? Good question, thank you. OK, everybody all right? I'm going to stand here until I hear one more question. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Oh, here. Because it's a current of value I0. Great question. Yes? AUDIENCE: So to maintain this constant current [INAUDIBLE] the amount of [INAUDIBLE] you're pumping it and-- MICHALE FEE: Yes. That's right. This-- OK, I should have maybe been more clear. This current source has a knob on it that I get to set. I get to like-- there would be an app now, and I'd pull out my current source app and I type in 10 milliamps, boom. And because there's a Bluetooth connection to this thing, it sets this thing to 10 milliamps, and it just keeps pumping 10 milliamps until you tell it to do something else, OK? Yes? AUDIENCE: [INAUDIBLE] make the right direction instead of it being constant current state that usually-- MICHALE FEE: Oh, OK. AUDIENCE: --that would create some weird kind of-- MICHALE FEE: Sure. Yeah. What would that be, actually? If you put in a linear ramp in current? It would be a parabolic voltage profile. Yeah, very good. That's exactly it. This voltage profile is literally just the numerical integral of this current profile. So all you have to do is look at this and integrate it in your head, and you can see what the voltage does, OK? Great. That's exactly right. So let's do another example. Let's put in a current pulse. So we start at zero current, we step it up to I0, we hold it there for tau, then we turn the current off. So let's start our neuron right here. What's going to happen? What's your name? I'm going to ask you to do this problem. What-- yeah. AUDIENCE: Sammy. MICHALE FEE: Sammy.
What's this voltage going to do? AUDIENCE: So I'd say constant. MICHALE FEE: Good. Because it's zero current. Then what's going to happen? AUDIENCE: And it's going to [INAUDIBLE] MICHALE FEE: Good. AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. AUDIENCE: And then the [INAUDIBLE] go back to constant at that point. MICHALE FEE: Awesome. That's it. It's that simple. OK? Good. All right. Now somebody brought up resistors. Who brought up ion conductances? Somebody mentioned that. So that's the next thing we're going to add. This is sort of the zero order model of a neuron. It's the simplest view, and it's often not so bad, OK? For short periods of time. But neurons actually have ion channels, all right? They allow current to flow through the membrane. So we're going to start today by analyzing the case of the simplest kind of ion channel, which is the kind of ion channel you get when you take a needle and poke a hole in the membrane, OK? It's called a leak, or a hole, all right? And we're going to analyze what our neuron does when you do that. So what we're going to find is that the ion channel-- a leak conductance-- can be represented in our model simply by a resistor, OK? And we're going to have our capacitive current, membrane capacitive current, and a membrane ionic current that's due to ions flowing through ion channels in the membrane, OK? And that current will be-- we're going to call that our leak resistance, and that current will be our leak current, OK? So now, Kirchhoff's current law tells us what? That the leak current plus the capacitive current has to equal the injected current, all right? We know the capacitive current is just C dV/dt, so we just plug that in, and now we have I leak plus C dV/dt equals the injected electrode current. That is called membrane ionic current, that is called membrane capacitive current, and that is our electrode current.
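Before adding the leak, the capacitor-only walkthrough above (the voltage is flat at zero current, ramps linearly during the pulse, then holds its value) can be sketched numerically. All values below are made-up illustration numbers:

```python
# Capacitor-only model: V(t) = V0 + (1/C) * integral of Ie dt.
C = 100e-12   # farads
dt = 1e-4     # seconds per time step
V = 0.0       # start at 0 volts
trace = []
for step in range(3000):                        # simulate 300 ms
    t = step * dt
    Ie = 10e-12 if 0.05 <= t < 0.15 else 0.0    # 10 pA pulse, 50-150 ms
    V += (Ie / C) * dt                          # forward-Euler: dV = (Ie/C) dt
    trace.append(V)
# Flat before the pulse, linear ramp during it (slope Ie/C = 0.1 V/s),
# then the voltage holds its final value after the current turns off.
```

Note that after the pulse ends the voltage stays where it is; nothing in this model pulls it back toward zero, which is exactly why the leak resistance is needed next.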
There's a sign convention in neuroscience, which is that membrane ionic currents that are outward from the inside of the cell to the outside of the cell membrane are positive in sign. Positive charges leaving the cell have a positive sign. It's just convention, it could have been the other way. But you have to choose something, so that's what-- I think it was Hodgkin and Huxley, actually, who decided that. Inward currents, positive charges entering the cell through the membrane going this way from extracellular to intracellular, are defined as negative. Electrode currents are the other way-- inward electrode current, into the cell, is positive. OK, so we're going to poke a hole in our membrane and we're going to model that ion channel using Ohm's law. So the leak current is just membrane potential divided by resistance, leak resistance. So what do we get if we plug this into our-- I leak plus I capacitive equals injected current? We get membrane potential Vm over RL plus C dV/dt equals injected current, OK? We multiply both sides by resistance, we get V plus RC-- that's why it's called an RC model-- dV/dt equals leak resistance times the injected current, all right? Now that's looking a little complicated, but we're going to simplify things now. Tau-- I'm sorry, RC is resistance times capacitance, and it turns out, that has units of time. And so we're going to call that tau-- got a little ahead of myself. That's called tau, we're going to make that substitution in a minute. But first we're going to calculate the steady state solution to that little equation, OK? So bear with me, hang on, it's all going to make sense in a minute. Does anyone know how to calculate the steady state solution of a differential equation? Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. What's your name? AUDIENCE: Rebecca. MICHALE FEE: Rebecca. So you said the derivative-- so let's do that. Set dV/dt equal to 0. And what do you find?
Sorry, we flashed the answer up there. What do you get if you set dV/dt equal to 0? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. So you inject some current, we're going to hold the current constant, let's say, OK? We've put some current in and we hold it constant. The voltage will change, and eventually things will settle down and the dV/dt will go to 0. At that point, we know the voltage. It's just RL times Ie. Voltage equals resistance times current-- what is that? AUDIENCE: Ohm's law. MICHALE FEE: Ohm's law. It's just, when we inject current, a bunch of stuff happens, and when the dust settles, the voltage difference is just the injected current times the resistance. Does that make sense? Yes? AUDIENCE: Where is [INAUDIBLE] MICHALE FEE: Well, right now we just took a needle and poked a hole in our cell. So-- AUDIENCE: How big the hole is? MICHALE FEE: Yep, exactly. It's how big the hole is. Now cells-- real cells do have leak channels. They're actually ion channels that leak kind of any ion that's in there. It's not very common. And you'll see why in a minute. Actually, that's not quite true. There are ion channels that look essentially like leaks, and it turns out that the ion channels of many neurotransmitter receptors, like glutamate and acetylcholine receptors, actually look like little leaks. They pass multiple ions, and that makes them look like leaks. OK. So in steady state, the membrane potential goes to RL times Ie, and we call that voltage something special, V infinity, because it's the voltage that the system reaches at time equals infinity, OK? Any questions about that? OK. So we're going to just rewrite this equation as Vm plus tau dV/dt equals V infinity. And that equation we're going to see over and over and over again in this class in many different contexts, all right? It's a first order linear differential equation and it's very powerful, so I want you to get used to it. So what does this mean?
So let's rewrite this equation a little bit. Let's move this term to the other side and divide both sides by tau, and here's what you get: dV/dt equals minus 1 over tau times V minus V infinity. OK? Now let's take a look at what the derivative dV/dt looks like as a function of voltage. Yes? AUDIENCE: So why couldn't we have said we didn't mean to [INAUDIBLE]. MICHALE FEE: V infinity is defined as RL times Ie. AUDIENCE: Oh, so that's a [INAUDIBLE].. MICHALE FEE: It's a definition. Sorry, I should have put like three lines there to indicate that it's the definition. AUDIENCE: OK. MICHALE FEE: Yeah. Sorry, that's a very important question. What's your name? AUDIENCE: Rishi. MICHALE FEE: Rishi. OK. I'm going to make an attempt to remember names. So is everyone clear about that? V infinity is defined as the resistance times this injected current. So when we inject current into our neuron, you're changing V infinity, OK? You're controlling it. Does that make sense? OK. So let's look at how the derivative changes as a function of voltage, it's very simple. Bear with me. All of this is going to crystallize in your mind in one beautiful construct very shortly. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yep. Resistance times capacitance has units of time, OK? And so we call it tau. Tau is just a constant, OK? So hang on, bear with me. The derivative is a function of voltage. And at V equals V infinity, the derivative is 0, right? That's the definition of V infinity, it's the voltage at which the derivative is 0. Yeah? If the voltage is less than V infinity, the derivative is positive, right? When the voltage is below V infinity, the derivative of voltage is positive. So what is the voltage doing? It's approaching V infinity. If the voltage is above V infinity, voltage greater than V infinity, the derivative is negative. So voltage does what?
AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes, but it's-- AUDIENCE: Approaches-- MICHALE FEE: It approaches V infinity. So no matter where voltage is, it's always approaching V infinity. If it's below V infinity, the slope is positive, and it approaches V infinity. If it's above V infinity, the slope is negative, and it approaches V infinity from above. Pretty cool, right? So V is always just relaxing toward V infinity. And how does it get there? Does it go linearly? Well, you can see that the slope-- the rate at which it approaches V infinity is proportional to how far away it is from V infinity. And so it doesn't just go vroom, boom, crash into V infinity, it kind of slowly approaches V infinity, OK? Anybody know what that function is called? AUDIENCE: Exponential. MICHALE FEE: It's an exponential, good. And it approaches with a timescale of tau. So if tau is small, it approaches quickly. If tau is long, it approaches slowly. You can see that if tau is big, the derivatives are small. If tau is small, the derivatives are bigger, all right? Any questions about that? Yes? AUDIENCE: So a tau usually is a-- MICHALE FEE: Sorry, say it again? AUDIENCE: --times equals to tau-- MICHALE FEE: Yes. AUDIENCE: --is V infinity plus 1 over e. MICHALE FEE: At time tau, the-- AUDIENCE: Times V0. MICHALE FEE: At time tau-- at time 0, the difference between V infinity-- sorry, V and V infinity is V0 minus V infinity. At time tau, that initial difference drops to about a third. AUDIENCE: OK. MICHALE FEE: To about 1 over e-- e is 2.7-something, OK? So in 1 tau, this voltage difference drops to about a third of its initial value. And in another tau, it drops to about a third of that, and it keeps going, OK? All right. So let's just write down the general solution. The general solution for the case where you have constant current, that voltage difference from the voltage at time t to V infinity is just equal to the initial voltage difference times e to the minus t over tau, OK?
So if t is equal to tau, then this is e to the minus 1, so the voltage difference will be about 1/3 of the original voltage difference. Is that clear? OK. So now let's see what this looks like in our neuron. We have our neuron, we have a current pulse, we have zero current, we turn the current on to I0 at this time, we hold the current constant, and we turn the current off at this time right here, OK? So what does the voltage do? Let's go step by step. The first thing is that voltage-- sorry-- the current controls what in that equation? The current controls V infinity. So we can plot V infinity immediately, because V infinity is just the resistance times the injected current. So what does V infinity look like? It's constant here, and then what happens to V infinity? AUDIENCE: Increases. MICHALE FEE: It increases, that's correct, but-- AUDIENCE: It'll just be [INAUDIBLE].. MICHALE FEE: Good. It will just be the resistance times the current. Resistance is a constant, so V infinity will just go up here, right? Good. It'll go up, and then it stays constant at R times I. And then at this point, the current goes back to 0, so V infinity is resistance times 0, so V infinity drops back to 0. Does that make sense? That's V infinity, that's not the voltage of the cell. That's the steady state voltage of the cell. So now what does the voltage of the cell actually do? So let's start our voltage here at 0. What happens? Good. What is it doing? AUDIENCE: Approaching V infinity. MICHALE FEE: Good. It approaches. So V at every point is relaxing toward V infinity exponentially, right? With some time constant, and that looks like this, all right? Now what happens here? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. Because V infinity suddenly changed to 0, and so V relaxes toward V infinity exponentially with some time constant. OK? Any questions? OK. Now this-- that is our RC model neuron, OK?
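The step-by-step walkthrough above can be sketched as a minimal forward-Euler simulation of the RC model, tau dV/dt + V = V infinity with V infinity = R times Ie. R, C, and the current pulse are assumed illustration values:

```python
# RC model neuron: V always relaxes exponentially toward V_infinity = R * Ie.
R = 100e6     # 100 megohms
C = 100e-12   # 100 picofarads
tau = R * C   # 10 ms
dt = 1e-5
V = 0.0
trace = []
for step in range(10000):                        # simulate 100 ms
    t = step * dt
    Ie = 100e-12 if 0.02 <= t < 0.07 else 0.0    # 100 pA pulse, 20-70 ms
    V_inf = R * Ie                               # steady-state target
    V += (dt / tau) * (V_inf - V)                # relax toward V_infinity
    trace.append(V)
# During the 5-tau-long pulse, V relaxes to within e^-5 (< 1%) of
# R * Ie = 10 mV; after the pulse it decays exponentially back toward 0.
```

The same loop reproduces both halves of the picture: the exponential rise when V infinity steps up, and the exponential decay when it steps back to zero.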
Resistance times capacitance-- we've got a resistor and a capacitor, and the solutions are just exponential decays toward some steady state solution. Now it turns out that an RC system, a first order linear system, acts like a filter, OK? So remember, our neuron that just has a capacitor is an integrator, it integrates over time. When you add a resistor, this thing-- it's kind of integrating here, but then it gets tired and stops integrating, OK? It relaxes to some steady state. So this actually looks like a filter. It takes time to respond to something. So that system responds well to things that are changing slowly in time, and it responds very weakly to things that are changing rapidly in time. So here's an example-- I put together this demonstration. In red is the injected current. So if you have long pulses of injected current-- the time constant of this neuron is about 10 milliseconds, and I think this is probably a-- what is that, a 50 or 100-millisecond pulse? 80-- AUDIENCE: Nanofarads. MICHALE FEE: Yeah. You can see that in blue is the voltage, it relaxes toward V infinity. And then the current goes off, it relaxes back. And you can see that the voltage is responding very well to the current injection. But now let's make really short pulses that are much shorter than tau. You can see that the voltage starts relaxing toward V infinity, but it doesn't get very far, and all of a sudden the current's turned off and it relaxes back. And so you can plot the peak voltage response as a function of the width of these pulses, and you can see that for long pulses, it responds very well, but for short pulses, it barely responds at all. And that's called a low pass filter, OK? It responds well to slowly-changing things, but barely responds to rapidly-changing things. So it's passing low frequencies, low pass filter, OK? All right. Any questions? That was a lot of stuff all at once. Yes? AUDIENCE: I'm just curious, like on what order is the capacitance, in nanofarads?
MICHALE FEE: OK. It's 1 microfarad per square centimeter, or 10 nanofarads per square millimeter, OK? We're going to get to that in a second, that's a great question. We're going to get to what the actual numbers look like for real neurons, OK? I think you had a question. AUDIENCE: [INAUDIBLE] MICHALE FEE: Sorry, say it again? AUDIENCE: Past [INAUDIBLE]? Is that [INAUDIBLE] reacts to [INAUDIBLE]? MICHALE FEE: What happens is it reacts to it, but because it's changing kind of linearly at these short times, it doesn't get very far. If the current stays on for a long time, you can see it has exactly the same profile here, but it just has time to reach V infinity. Here, it doesn't have time to reach V infinity, it just gets a little bit away from 0 and then it decays back. Yes? AUDIENCE: Are there other sorts of filters for non-responses like [INAUDIBLE]? MICHALE FEE: You can build different kinds of filters from circuits of neurons, but neurons themselves tend to be low pass filters. You can put in different kinds of ion channels that change the properties of neurons, but sort of to first order, you should think of neurons as low pass filters. Yeah? AUDIENCE: Can you show the [INAUDIBLE] MICHALE FEE: I wrote it like this-- you can write it as V equals V infinity plus this other stuff, but I wrote it like this because what you should really be seeing here is that the difference between the voltage and V infinity decays exponentially. So the distance you are from V infinity decays exponentially, OK? It makes it more obvious that you're decaying toward V infinity, OK? Yes? AUDIENCE: If you were to arrange like the physical properties of the neuron itself, you [INAUDIBLE] MICHALE FEE: Not-- not really. Not simply. You can put in certain ion channels that could make a neuron less responsive at low frequencies, OK?
So you can make them kind of responsive to some middle range of frequencies that won't respond to very high frequencies and won't respond much to very low frequencies, but for the most part, again, at this point, let's just think of them as low pass filters. We're going to start adding fancy stuff to our neuron that's going to make it more complicated. So don't get too hung up on this. All right, let me just make this point. This one right here-- V plus tau dV/dt equals V infinity-- appears everywhere. It's ubiquitous in physics, chemistry, biology; we'll be using it in multiple different contexts in different parts of the class, and in computation, OK? And even slightly more complicated versions of this, like the Michaelis-Menten equations in chemistry, you can kind of understand them in simple terms. If you have a handle on this, other slightly more complicated things become much more intuitive, OK? All right, so try to really make sure that you understand this equation and how we derived it, OK? OK, let's talk about the origin of this-- the timescale of a neuron. So the tau of a neuron-- of most neurons-- is about 10 milliseconds to 100 milliseconds, kind of in that range. And it comes from the values of resistance and capacitance of a neuron. So the resistance of a neuron is in the range of 100 million ohms, OK? And the capacitance is about 10 to the minus 10 farads, or about 100 picofarads. And you multiply those two things together and you get a time constant of about 10 milliseconds, OK? So what that means is if you inject current into a neuron, it takes about 10 to 100 milliseconds for it to fully respond to that step of current. The voltage will jump up and relax to the new V infinity in about 10 to 100 milliseconds, OK? So let's take a little bit closer look at what this resistance and capacitance look like in a neuron.
So we've described the relation between leak current and voltage as current equals voltage over resistance, but rather than using resistance to think about currents flowing through a membrane, it's much more useful, usually, to think about something called conductance, and conductance is just 1 over resistance, OK? So conductance, G-- and we use the symbol G for conductance-- is equal to 1 over resistance. So now we can write Ohm's law as I equals G times V. Resistance has units of ohms; conductance has units of inverse ohms, and siemens is the SI unit for conductance. So if we have conductances, if we have two-- let's say two ion channels in the membrane, they operate in parallel. Current flows through them separately, right? They're not in series, like it flows through one and then flows through the other, right? They are in parallel-- the current can flow through both like this, in parallel. And we can write down the current using Kirchhoff's law: the total current is just the sum of the current through those two separate conductances, right? Now we can just expand this in terms of the conductance of each one of those. So the total current is G1 times the voltage difference plus G2 times the voltage difference. You factor out the V, and the total current is just G1 plus G2 times V, so we can write down the total conductance as just G1 plus G2. Does that make sense? So conductances in parallel add together. So if we have a piece of membrane that's got some ion channels in it-- or holes-- and we add another piece of membrane that has the same density of ion channels, you have twice the holes, twice the current, and twice the conductance.
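A minimal sketch of the parallel-conductance rule above; the channel values are illustrative, not measured:

```python
# Conductances in parallel add: G_total = G1 + G2. Current follows
# Ohm's law written with conductance: I = G * V.
def total_conductance(*conductances_siemens):
    # Parallel conductances simply sum.
    return sum(conductances_siemens)

def membrane_current(conductance_siemens, voltage_volts):
    # Ohm's law: I = G * V.
    return conductance_siemens * voltage_volts

G = total_conductance(2e-9, 3e-9)   # two channels: 2 nS + 3 nS = 5 nS
I = membrane_current(G, 0.05)       # at 50 mV: 0.25 nA
# Doubling the membrane area doubles the conductance and the current:
I2 = membrane_current(total_conductance(G, G), 0.05)
```

This is the "twice the holes, twice the current" statement in code: doubling the conductance at a fixed voltage exactly doubles the current.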
So we can write the current as conductance times membrane potential, but we can rewrite that conductance as the area times the conductance per unit area, and that's called specific membrane conductance-- in this case, it's a leak, so we call it specific leak conductance, and it has units of conductance per area. We multiply that by the area and we get the total conductance. Any questions about this? No? So you can see that we can now plot the current through the membrane as a function of voltage. This is called an I-V curve-- current plotted as a function of voltage. You can see that the current is linear as a function of voltage, right? That's just Ohm's law. And you can see that for a low conductance, G is small, so the slope is small. For a high conductance, you get a lot of current for a little bit of voltage, and so the slope is steeper, OK? So if you plot current versus voltage, you get a curve, and the slope of that curve is just equal to the conductance, all right? OK. Now let's look at capacitance. The total current through these two capacitors in parallel is just the current through one capacitor plus the current through the other. We can write the current through each capacitor separately. I total equals C1 dV/dt plus C2 dV/dt. Factor out the dV/dt and you get that the total current is just C1 plus C2 times dV/dt. So the total capacitance is the sum of the capacitances. So if you have a patch of membrane and you measure the capacitance, if you put another one next to it, you'll get the sum of those two capacitances. So the capacitance also scales with area. So we can write down the total membrane capacitance as capacitance per unit area times the area of the cell, right? And the area of a cell-- if it's a sphere, it's 4 pi r squared, where r is the radius, OK? All right. And this C sub m is called the specific membrane capacitance, and it's 10 nanofarads per square millimeter, all right? OK, now, I want to show you something really cool.
We have a cell that has a membrane with some-- this cell has some time constant-- remember, 10 milliseconds. Now you might think, oh, the capacitance of the cell depends on how big it is, right? And so the time constant will change depending on how big the cell is, you might think. But let's actually calculate the time constant from this capacitance and this conductance, OK? So here we go. The time constant is RC, and R is just 1 over the conductance, right? So the time constant is capacitance divided by conductance-- total capacitance divided by total conductance. But you can rewrite this capacitance as capacitance per unit area times area, you can rewrite that conductance as conductance per unit area times the area of the cell, and the areas cancel. And so the time constant is just the capacitance per unit area of the membrane divided by the conductance per unit area of the membrane. And what that means is that the time constant of a cell has nothing to do with the size of the cell. The time constant is the membrane time constant, and it's a property only of the membrane. That's pretty cool, right? Any questions about that? Now in a more complicated neuron where you have a soma and dendrites and axons, different parts of the cell can have different conductance per unit area-- like more ion channels out here on the dendrite and maybe fewer on the soma. And so one part of a cell can have a different membrane time constant than some other part of the cell, OK? But again, it's a property of the membrane. Any questions about that? Yes? AUDIENCE: So in that case, like different time constants, do you have to like consider flow between different areas? MICHALE FEE: You sure do. Absolutely. That's one of the interesting things-- when different parts of cells have different properties, you have current flowing between them, but you have to understand this kind of basic stuff before you even get anywhere close to understanding a more complicated neuron, right? OK.
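The area-cancellation punchline above can be checked directly. The specific capacitance (10 nF per square millimeter) is from the lecture; the specific leak conductance below is an assumed illustration value chosen to give tau = 10 ms:

```python
# tau = (c_m * A) / (g_l * A) = c_m / g_l: the area cancels, so the time
# constant is a property of the membrane, not of cell size.
c_m = 10e-9   # specific capacitance, farads per mm^2 (from the lecture)
g_l = 1e-6    # specific leak conductance, siemens per mm^2 (assumed)

def membrane_time_constant(area_mm2):
    C_total = c_m * area_mm2   # capacitance scales with area
    G_total = g_l * area_mm2   # conductance scales with area
    return C_total / G_total   # area cancels in the ratio

tau_small = membrane_time_constant(0.01)   # tiny cell
tau_big = membrane_time_constant(1.0)      # 100x more membrane area
# tau_small and tau_big both equal c_m / g_l = 10 ms
```

Changing the area argument by any factor leaves the result unchanged, which is the whole point: only the per-unit-area membrane properties matter.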
So we're going to add a new component to our model. It's a battery, and it's going to solve one really fatal problem with this model. What's the problem with this model? Can anyone see-- I'm kind of showing it right here. What happens to this neuron if I turn the current off? It goes back to zero. And in order to get the voltage to go different from zero, I have to inject current through my current source. Without me, the experimenter, with an electrode in it injecting current, this neuron literally just sits at zero and stays there, all right? It's actually a good model of a dead neuron, OK? So in order to change that, we need to add a battery here, OK? And that battery is going to power this thing up, so now it can change its own voltage. And then things start getting really interesting. So how can these batteries allow a neuron to change its own voltage? Well, the way a neuron controls its own voltage is it has ion channels-- conductances-- that have little knobs on them that are controlled by the voltage. So these conductances are voltage-dependent, and the cell connects these batteries to its inside wire at different times in different ways. So let's say we want to make an action potential. So we have a battery that's got a positive voltage, we have a battery that's got a negative voltage, and we make an action potential by connecting the positive battery to the inside of the cell by turning on this conductance, and then we're going to connect the battery with a negative voltage to our cell, and we're going to do that one after the other. So watch this. We're going to connect the positive battery and the voltage is going to go up; we're going to turn off the positive battery, connect the negative battery, the voltage goes down; and then we're going to turn off both batteries, and the voltage just relaxes, OK? Cool, right? So now the neuron can control its own voltage. But before we do that, we need to put batteries in our neuron, OK? All right.
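The battery-switching story above can be sketched by toggling V infinity between a positive battery, a negative battery, and rest, and letting the membrane relax toward whichever target is connected. The battery voltages and timing below are made-up illustration numbers, not real sodium or potassium reversal potentials:

```python
# Cartoon of an action potential as sequential battery connections.
tau = 0.01    # 10 ms membrane time constant
dt = 1e-4
V = 0.0
trace = []
for step in range(3000):                 # simulate 300 ms
    t = step * dt
    if 0.05 <= t < 0.10:
        V_inf = 0.06      # positive battery connected: voltage swings up
    elif 0.10 <= t < 0.15:
        V_inf = -0.08     # negative battery connected: voltage swings down
    else:
        V_inf = 0.0       # both batteries off: voltage relaxes back
    V += (dt / tau) * (V_inf - V)        # always relax toward the target
    trace.append(V)
# The trace rises toward +60 mV, dips toward -80 mV, then decays back to 0.
```

This is only the switching logic; in the real mechanism, the "switches" are the voltage-dependent conductances themselves, which is what makes the dynamics interesting.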
So anybody know what the-- yes? AUDIENCE: [INAUDIBLE] What does that [INAUDIBLE] MICHALE FEE: Good, we're going to get to that. That was-- exact next question. What is it that makes a battery in a neuron? Yeah? AUDIENCE: Well I mean, like, you have like something, right? Even in like-- MICHALE FEE: Good. AUDIENCE: --concentration gradient-- MICHALE FEE: Good. AUDIENCE: --so that give us a [INAUDIBLE] MICHALE FEE: Here. You give the rest of the lecture. That's exactly right. OK? Concentration gradients. There's one more thing we need. Concentration gradients by themselves don't do it. We need ion channels that are permeable only to certain ions, OK? And that's what we're going to do now. So you need concentration gradients and ion-selective permeability, OK? So we're going to go through that. So let's take a beaker, fill it with water, and have a membrane dividing it into two. We're going to have an electrode on one side, we measure the voltage difference-- sorry, we have an electrode on both sides, we hook it up to our differential amplifier, and measure the voltage difference on the two sides, OK? Then we're going to buy some potassium chloride from Sigma, we're going to take a spoonful of it and dump it into this side of the beaker. Stir it up, and now you're going to have lots of potassium ions and chloride ions on this side of the beaker, right? Now we're going to take a needle and poke a hole in that membrane. That becomes a leak, a leak channel. It's a non-specific-- a non-selective pore that passes any ion, OK? So what's going to happen? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. The ions are going to diffuse. From where to where? Somebody else. AUDIENCE: To the lower concentration. MICHALE FEE: To the lower concentration, good. So some of those ions are going to diffuse from here to here. And we can plot the potassium-- let's focus on potassium now.
We're going to plot-- we can plot the potassium concentration on this side over time and on this side over time. By the way, this side of the beaker is going to represent the inside of the neuron, which has lots of potassium, and this is going to represent the outside of our neuron, which has very little potassium, OK? So if we plot the potassium concentration on this side, it's going to increase over time, and the concentration here will decrease, and eventually they'll meet in the middle somewhere. They'll become equal. And that's going to take a really long time, right? Because it takes a long time for half of this spoonful of potassium chloride to diffuse to the other side. Yeah? OK. Now let's get a different kind of needle, a very special needle. It's really small, and we poke a hole in the membrane that is only big enough to pass potassium ions but not chloride ions. So what's going to happen now? Yeah? Somebody-- yes? AUDIENCE: Half of it flows to the other side-- MICHALE FEE: Good. AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. So some potassium ions are going to diffuse through this pore and go to the other side. And what's going to happen is if we plot the potassium concentration-- over here it will increase a little bit, and the potassium concentration on this side will decrease a little bit, but then it will stop changing, and it will never come to equilibrium. It will also take a very short time for that equilibrium to happen, because just a few potassium ions need to go to the other side before it stops. So why does the concentration stop changing here? Well, it's because the potassium current from this side to this side goes to zero, and it goes to zero very quickly. So why is that? Well, one hint to the answer to that question comes if we look at the voltage difference between the two sides.
So what you see is that the voltage difference started at 0, and when we poked that hole, all of a sudden there was a rapidly-developing voltage difference across the two sides, OK? Why does that voltage go negative? Anybody? What happened when these positive charges started diffusing from this side to this side? AUDIENCE: Didn't you say it can bond to the positive and negative charge [INAUDIBLE] MICHALE FEE: Basically this is like a capacitor, right? And some charges diffused from here to here, some positive charges diffused from here to here, that charges up this side, and so the voltage is positive. We put positive charge-- more positive charges here, the voltage here goes up, OK? So if the voltage here is higher than the voltage here-- we're plotting V1 minus V2, the voltage here is lower than the voltage here, and so this is going negative, OK? And that voltage difference, that negative voltage here, positive voltage here, what does that do? It repels, it makes-- the positive side starts repelling which ions? It's repelling positive ions. So it keeps more potassium ions from diffusing through the hole. Does that make sense? Blank stares. Are we OK? OK, good. And it continues to drop until it reaches a constant voltage, and that's called the equilibrium potential, OK? The voltage changes until it comes to equilibrium, and that voltage difference is called the equilibrium potential. And that voltage difference is a battery, OK? And I'm going to-- we're going to explain a little bit-- I think it's in the next lecture, the one that's on tape, that explains how you actually can justify representing that as a battery. But right now, I'm going to show you how to calculate what that voltage difference actually is-- what's the size of the battery, how big is the battery, OK? So you can see that when positive ions diffuse to this side, this voltage goes up, you have a voltage gradient that corresponds to a field pointing in this direction.
And that electric field pushes against the ions-- remember, we talked about drift in an electric field, so when those ions are trying to diffuse across, that electric field is literally dragging them back to this side, right? So we have a current flowing this way from diffusion, and we have a current flowing that way from being dragged in the electric field. And so we can calculate that voltage difference because at equilibrium, the current flowing this way from diffusion has to equal the current flowing that way from drift in an electric field, OK? So one way to calculate this is we're going to calculate the current due to drift, the current due to diffusion, add those up, and that's equal to the total, and at equilibrium, that has to equal 0, right? So what I'm showing you now is just sort of the framework for how you would calculate this using this drift and diffusion equation. So we have Ohm's law that tells us how the voltage difference produces a current due to drift, and we have Fick's law that tells us how much current is due to diffusion, and you can set the sum of those two things to equal zero, all right? And so this is the way it would look. I don't expect you to follow anything on this slide just except to see that it can be done in this way, OK? So you don't even have to write this down, OK? So the drift current is proportional-- it's some constant times voltage. Fick's law is some constant times that concentration gradient, remember that. And we just set those two things equal and solve for delta V, and that's what you find. What you find is the delta V is just some constant times the log of the ratio of the concentrations on the inside and outside, OK? Now it turns out, there's a way of calculating this that's much simpler and more elegant, and I'm going to show you that calculation, OK? Yes? AUDIENCE: And so like the concentrations inside and outside, like at the [INAUDIBLE] beginning of the-- MICHALE FEE: Yes, at the beginning.
And the answer is that the concentrations don't change very much through this process, so you can even ignore that change, OK? All right, everybody got this? All right. So here's how we're going to calculate-- an alternative way of calculating this voltage difference, just really beautiful. We're going to use the Boltzmann equation. The Boltzmann equation tells the probability of a particle being in two states as a function of the energy difference between those two states. And one of those states is going to correspond to a particle being on the left side of the beaker or inside of our cell, and the other to that particle being outside of our cell or on the other side of the beaker. And those two, left side and right side, have different energies, OK? So our system has two states, a high-energy state and a low-energy state. The Boltzmann equation just says the probability of being in state 1 divided by the probability of being in state 2 is just e to the minus energy difference divided by kT. k is the Boltzmann constant, T is temperature-- this is the same kT that we talked about in the last lecture. Now, you can see if the temperature is very low, all those particles are-- if the temperature is 0, they're not being jostled, they're just sitting there quietly. They can't move, they can't get into state 1, they just sit in state 2, OK? So the probability of being in state 1 divided by the probability of being in state 2 is 0, OK? If kT is 0, this is a very big number, e to the minus big number is 0. Now let's say that we heat things up a bit so that now kT gets big. So kT actually gets approximately the same size as the energy difference between our two states. So now some of those particles can get jostled over into state 1. And we can write down-- we can just calculate-- you can see now that the probability of being in state 1 divided by the probability of being in state 2 is bigger than 0 now. You can actually just calculate it.
If the energy difference is just twice kT, then the energy difference, 2kT, divided by kT-- that ratio of probabilities is just e to the minus 2, OK? If the energy difference is bigger, then you can see, that probability ratio is smaller. The probability of being in state 1 goes to 0 if you increase that energy. If you make that energy difference very small compared to kT, you'd see that the probability of being in state 1 is about equal to the probability of being in state 2. Those particles just jostle back and forth between the two states. OK, so we're ready to do this. The ratio of probabilities is e to the minus energy difference over kT. What's the energy of a particle here versus here? Well, that's a charged particle, so the energy difference is just given by the voltage difference. So energy is charge times voltage, where Q is the charge of an ion. The ratio of probabilities is e to the minus Q times voltage difference over kT. Take the log of both sides, solve for the voltage difference, V in minus V out equals minus kT over Q times the log of the probability ratio, but the probability is just this concentration. And so we can write this as delta V-- the voltage difference is equal to kT over Q, which is 25 millivolts, times the log of the ratio of potassium concentration outside to potassium concentration inside. And that's exactly the same equation that we get if you do that much more complicated derivation based on balancing Fick's law and Ohm's law, OK? And this is the equilibrium potential here, not electric field. OK, so let's take a look at potassium concentrations in a real cell. This is actually from squid giant axon. 400 millimolar inside, 20 millimolar outside, so there's a lot of potassium inside of a cell, not very much outside. Plug those into our equation. kT over Q is 25 millivolts at room temperature.
The log of that concentration ratio is minus 3, so E_K is minus 75 millivolts. That means if we start with a lot of potassium inside of our cell and open up a potassium-selective channel, what happens? Potassium diffuses out through that channel, and the voltage goes to minus 75 millivolts. How do you know it's negative? Like, I can never remember whether this is concentration inside over outside, or outside over inside, I don't know. But the point is, you don't actually have to know, because you can just look at it and see what sign the answer is. If you have positive ions-- a high concentration of positive ions inside, they diffuse out, the voltage inside of the cell when positive ions leave is going to do what? It's going to go down, so that's why it's minus, OK? So-- the battery: in those video modules that I recorded for you, it's explained how you actually incorporate that into a battery in our circuit model. And so we've done all of these things, we've looked at how membrane capacitance and resistance allow neurons to integrate over time, we've learned how to write down the differential equations, we're now able to just look at a current input and figure out the voltage change, and we now understand where the batteries in a neuron come from.
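The equilibrium-potential calculation just described takes only a couple of lines. Here is a sketch in Python (the course itself uses MATLAB); the 25 mV value for kT/Q and the squid-axon concentrations are the numbers quoted in the lecture:

```python
import numpy as np

# Equilibrium (Nernst) potential for potassium:
#   delta V = (kT/Q) * ln([K]out / [K]in)
kT_over_Q = 25.0   # millivolts, at room temperature
K_in = 400.0       # millimolar, inside the squid giant axon
K_out = 20.0       # millimolar, outside

E_K = kT_over_Q * np.log(K_out / K_in)
# log(20/400) is about -3, so E_K is about -75 millivolts
```

The sign comes out negative automatically, matching the reasoning above: positive ions at high concentration inside diffuse out, so the inside of the cell goes negative.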
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 12: Spectral Analysis, Part 2
MICHALE FEE: OK, good morning, everyone. So today, we're going to continue with our plan for developing a powerful set of tools for analyzing the temporal structure of signals, in particular periodic structure in signals. And so this was the outline that we had for this series of three lectures. Last time, we covered Fourier series, complex Fourier series, and the Fourier transform, the discrete Fourier transform. And we started talking about the power spectrum. And in that section, we described how you can take any periodic function and write it as a sum of sinusoidal components. So even functions we can write down as a sum of cosines. And odd functions we can write down as a sum of sines. Today, we're going to continue with that. We're going to talk about the convolution theorem, noise and filtering, the Shannon-Nyquist sampling theorem, and spectral estimation. And next time, we're going to move on to spectrograms and an important idea of windowing and tapering, the time bandwidth product, and some more advanced filtering methods. So last time, I gave you this little piece of code that allows you to compute the discrete Fourier transform using this Matlab function FFT. And we talked about how in order to do this properly, you should first circularly shift the time series. The FFT algorithm is actually expecting the first half of the data in the second half of that data vector. Don't ask me why. But we just do a circular shift, run the FFT. And then circular shift again to get the negative frequencies in the first half of the vector. And then you can plot the Fourier transform of your function. This shows an example where we took a cosine as a function of time, at some frequency, here, 20 hertz. We compute the Fourier transform of that and plot that. So that's what this looks like. Here is a cosine at 20 hertz.
And you can see if you take the fast Fourier transform of that, what you see is the real part as a function of frequency. It has two peaks, one at plus 20 hertz, one at minus 20 hertz. And the imaginary part is 0. So we have two peaks. One produces a complex exponential that goes around the unit circle like this at 20 hertz. The other peak produces a complex exponential that goes around the other way at 20 hertz. The imaginary parts cancel and leave you with a real part that goes back and forth at 20 hertz, OK? So that's what those two peaks are doing. Here is the Fourier transform of a sine wave at 20 hertz. This is phase shifted: the cosine is a symmetric function, or an even function; the sine is an odd function. And you can see that in this case, the Fourier transform again has two peaks. In this case, the real part is 0. And the two peaks are in the imaginary part. The one at plus 20 hertz is minus i. And the one at minus 20 hertz is plus i. Now, the interesting thing here, the key thing, is that when you take the Fourier transform of a function, symmetric functions, even functions, are always real. The Fourier transform of even functions is always real. The even part of the function goes into the real part of the Fourier transform, and the odd part of the function goes into the imaginary part of the Fourier transform. OK? Now, we introduced the idea of a power spectrum, where we just take the Fourier transform and we take the square magnitude of that Fourier transform. And you can see that the power spectrum of the sine and cosine function is just a single peak at the frequency of the sine or cosine. OK? And you can see why that is, because the sine and cosine have a peak at plus 20 hertz. For the cosine, it's real. And for the sine, it's imaginary. But the square magnitude of both of those is 1 at that frequency. OK, any questions about that?
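The same check is easy to run numerically. This is a Python sketch of the MATLAB recipe described above (fft plus circular shifts; numpy's fftshift plays the same role). The sampling rate and duration are arbitrary choices, not values from the lecture:

```python
import numpy as np

fs = 1000                          # samples per second (arbitrary)
t = np.arange(0, 1, 1/fs)          # 1 second of signal
x = np.cos(2*np.pi*20*t)           # 20 Hz cosine

# fftshift does the circular shift that puts negative frequencies first
X = np.fft.fftshift(np.fft.fft(x)) / len(x)
f = np.fft.fftshift(np.fft.fftfreq(len(x), 1/fs))

# the real part has peaks of 1/2 at +20 and -20 Hz;
# the imaginary part is zero, because the cosine is an even function
```

Replacing the cosine with a sine moves the two peaks into the imaginary part, exactly as in the slides.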
Feel like I didn't say that quite as clearly as I could have? OK. So any questions about this? OK, let's take a look at another function that we've been talking about, a square wave. In this case, you can see that the square wave is symmetric, or even. And you can see that the Fourier transform of that is all real. The peaks are in the real part of the Fourier transform. You can see the imaginary part in red is 0 everywhere. And you can see that the Fourier transform of this has multiple peaks at intervals that are again equal to the frequency of this square wave. OK? If you look at the power spectrum of the square wave, you can see again it's got multiple peaks at regular intervals. One thing that you often find when you look at power spectra of functions is that some of the peaks are very low amplitude or very low power. So one of the things that we often do when we're plotting power spectra is to plot not power here but log power. OK? And so we plot power in log base 10. A difference of an order of magnitude in two peaks corresponds to a unit called a bel, b-e-l. So 1 bel corresponds to a factor of 10 difference in power. So you can see this peak here is about 1 bel lower than that peak, right? And a more commonly used unit is the decibel; there are 10 decibels per bel. So decibels are given by 10 times the log base 10 of the power, of the square magnitude of the Fourier transform. Does that make sense? Yes. AUDIENCE: So [INAUDIBLE] square magnitude [INAUDIBLE] so just like [INAUDIBLE] MICHALE FEE: No, so you take this square magnitude because-- OK, remember, last time we talked about power. So if you have an electrical signal, the power in the signal would be voltage squared over resistance. Power, when you refer to signals, is often kind of used synonymously with variance. And variance also goes as the square of the signal. Now, because the Fourier transform is a complex number, what we do is we don't just square it, but we take the squared magnitude.
So we're measuring the distance from the origin in the complex plane. OK? Good question. All right, any questions about this and what the meaning of decibels is? So if a signal had 10 times as much amplitude, the power would be how much larger? If you had 10 times as much amplitude, how much increased power would there be? 100 times. Which is how many bels? Log base 10 of 100 is 2. How many decibels? AUDIENCE: 22? MICHALE FEE: No, it's 10 decibels per bel. Deci just means a tenth of, right? Remember those units? So a factor of 10 in signal is a factor of 100 in power, which is 2 bels, which is 20 decibels. OK. All right, now I just want to show you one important thing about Fourier transforms. There's an interesting property about scaling in time and frequency. So if you have a signal like this that's periodic at about-- I don't know, it looks like-- OK, there it is-- about 5 hertz. If you look at the Fourier transform of that, you can see a series of peaks, because it's a periodic signal. Now, if you take that same function and you make it go faster-- so now, it's at about 10 hertz, instead of 5 hertz-- you can see that the Fourier transform is exactly the same. It's just scaled out. So the faster something moves in time, the more stretched out the frequencies are. Does that make sense? So if I show you any periodic function at one frequency and I show you the Fourier transform of it, you can immediately write down the Fourier transform of any scaled version of that function, because if this is the same function but at a higher frequency, you can write down the Fourier transform just by taking this Fourier transform and stretching it out by that same factor. OK? All right, so that was just a brief review of what we covered last time. And here's what we're going to cover today in a little more detail. So we're going to talk some more about the idea of Fourier transform pairs. These are functions where you have a function.
You take the Fourier transform of it. You get a different function. If you take the Fourier transform of that, you go back to the original function. OK, so there are pairs of functions that are essentially Fourier transforms of each other. OK? An example of that you saw here. A square wave like this has a Fourier transform that's this funny function, a set of peaks. If you take the Fourier transform of that, you would get this square wave. OK? We're going to talk about the convolution theorem, which is a really cool theorem. Convolution in the time domain looks like multiplication in the frequency domain. Multiplication in the time domain looks like convolution in the frequency domain. And it allows you to take a set of Fourier transform pairs that you know, that we'll learn, and figure out what the Fourier transform is of any function that's either a product or a convolution of those kind of base functions. It's a very powerful theorem. We're going to talk about the Fourier transform of Gaussian noise and the power spectrum of Gaussian noise. We'll talk about how to do spectral estimation. And we'll end up on the Shannon-Nyquist theorem and zero padding. And, if there's time at the end, I'll talk about a little trick for removing the line noise from signals. OK, so let's start with Fourier transform pairs. So one of the most important functions to know the Fourier transform of is a square pulse like this. So let's just take a function of time. It's 0 everywhere, but it's 1 if the time is within the interval minus delta t over 2 to plus delta t over 2, OK, so a square pulse like that. And let's just take the case where delta t is 100 milliseconds. The Fourier transform of a square pulse is a function called the sinc function. It looks a little bit messy. But it's basically a sine wave that is weighted so that it's big in the middle and it decreases as you move away from 0. And it decreases as 1/f. So this is frequency along this axis.
It's just-- imagine that you have a sine wave that gets smaller as you go away from the origin by an amount 1 over f. That's all it is. Now, really important concept here. Remember, we talked about how you can take a function of time-- so once you know that the Fourier transform of this square pulse of width 100 milliseconds is a sinc function, you know what the Fourier transform is of a square pulse that's longer. Right? What is it? Remember, if you just take a function in time and you stretch it out, the Fourier transform just does what? It compresses. It shrinks. And if you take this pulse and you make it narrower in time, then the Fourier transform just stretches out. So if we take that pulse and we make it narrower, 25 milliseconds, then you can see that the sinc function, it's the same sinc function, but it's just stretched out in the frequency domain. So you can see that here: if it's 100 milliseconds, the width of this is 12 hertz. The full width at half max of that peak is 12 hertz. If this is 4 times narrower, then this width will be 4 times wider. What happens if we have a pulse here that is 500 milliseconds long? So 5 times longer. What's the width of the sinc function here going to be? It'll be five times narrower than this, so a little over 2 hertz. OK? Does that make sense? So you should remember this Fourier transform pair, a square pulse and a sinc function. And there's a very important concept called the time bandwidth product. You can see that as you make the width in time narrower, the bandwidth in frequency gets bigger. And as you make the pulse in time longer, the bandwidth gets smaller. And it turns out that the product of the width in time and the width in frequency is just a constant. And for this square pulse and sinc function, that constant is 1.2. So there's a limit. If you make the square pulse smaller, the sinc function gets broader. All right, let's look at a different Fourier transform pair.
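The square-pulse/sinc pair and its time bandwidth behavior can be verified numerically by approximating the Fourier integral with a sum. A Python sketch (the grid sizes here are arbitrary choices; numpy's sinc(x) is sin(pi x)/(pi x)):

```python
import numpy as np

dt_pulse = 0.1                                  # 100 ms square pulse
t = np.linspace(-0.5, 0.5, 10001)               # time grid, 1 second
h = t[1] - t[0]                                 # grid spacing
x = (np.abs(t) <= dt_pulse/2).astype(float)     # the square pulse

f = np.linspace(-100, 100, 401)                 # frequencies, in Hz
# Riemann-sum approximation of the Fourier integral at each frequency
X = np.array([np.sum(x * np.exp(-2j*np.pi*fi*t)) * h for fi in f])

# the transform matches dt * sinc(f * dt): first zero at 1/dt = 10 Hz,
# and full width at half max of about 1.2/dt = 12 Hz
analytic = dt_pulse * np.sinc(f * dt_pulse)
```

Doubling `dt_pulse` compresses the sinc by a factor of 2 in frequency, which is the scaling property described above.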
It turns out that the Fourier transform of a Gaussian is just a Gaussian. So, here, this Gaussian pulse is 50 milliseconds long. The Fourier transform of that is a Gaussian pulse that's 20 hertz wide. If I make that Gaussian pulse in time narrower, then the Gaussian in frequency gets wider. And inversely, if I make the pulse in time wider, then the Gaussian in frequency space gets narrower. Yes. AUDIENCE: I just have a question [INAUDIBLE] MICHALE FEE: Yes. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yep. So I'm trying-- it's a little bit unclear here, but I'm measuring these widths at the half height. OK, and so you can see that for a Gaussian, this time bandwidth product, delta f times delta t, is just 1. So there's a time bandwidth product of 1. Who here has taken any quantum mechanics? Who here has heard of the Heisenberg uncertainty principle? Yeah. This is just the Heisenberg uncertainty principle. This is where the Heisenberg uncertainty principle comes from, because you can think of wave functions as just functions, like the functions in time we've been looking at. So the spatial localization of an object is given by the wave function, which is just some function of space. And the momentum of that particle can be computed as the Fourier transform of the wave function. And so if the particle is more localized in space, then if you compute the Fourier transform of that wave function, it's more dispersed in momentum. OK, so the uncertainty in momentum is larger. So this concept of time bandwidth product in the physical world is what gives us the Heisenberg uncertainty principle. It's very cool. Actually, before I go on, you can see that in this case, the Fourier transform of this function is the same function. Does anyone remember what function we saw whose Fourier transform is another version of that same function? We talked about it last time. AUDIENCE: Pulse train. MICHALE FEE: Pulse train, that's right.
So a train of delta functions has a Fourier transform that's just a train of delta functions. And the spacing in the frequency domain is just equal to 1 over the spacing in the time domain. So that's another Fourier transform pair that you should remember. All right? OK, convolution theorem. Imagine that we have three functions of time, y of t, like this one, y of t. We could calculate the Fourier transform of that. And that's capital Y of omega. And then we have some other function, x of t, and its Fourier transform, X of omega, and another function g of tau and its Fourier transform, capital G of omega. So remember, we can write down the convolution of this time series x with this kernel g as follows. So in this case, we're defining y as the convolution of g with x. So y of t equals this integral d tau g of tau x of t minus tau, integrating over all tau. So that's a convolution. The convolution theorem tells us that the Fourier transform of y is just the product of the Fourier transform of g and the Fourier transform of x. So that, you should remember. All right? I'm going to walk you through how you derive that. I don't expect you to be able to derive it. But the derivation is kind of cute, and I enjoyed it. So I thought I'd show you how it goes. So here's the definition of the convolution. What we're going to do is we're going to just take the Fourier transform of y. So here's how you calculate the Fourier transform of something. Capital Y of omega is just the integral over all time dt y of t e to the minus i omega t. So we're going to substitute this into here. Now, you can see that-- OK, so the first step is actually to reverse the order of integration. We're going to integrate over t first rather than tau. Then what we're going to do is we can move the g outside the integral over t, because it's just a function of tau. So we pull that out. So now, we have an integral dt x of t minus tau e to the minus i omega t.
And what we can do is do a little modification here. We're going to pull an e to the minus i omega tau out of here. So we have e to the minus i omega tau times e to the minus i omega t minus tau. So you can see that if you multiply those two things together, you just get back to that. Now, because we're integrating over t but this factor is a function of tau, we can pull it out of the integral. And now we have integral dt x of t minus tau e to the minus i omega t minus tau. And what do you think that is? What is that? What would it be if there were no tau there? If you just cross that out and that, what would that be? Anybody know? What is that? AUDIENCE: The Fourier transform. MICHALE FEE: This is just the Fourier transform of x. Right? And we're integrating from minus infinity to infinity. So does it matter if we're shifting the inside by tau? No, it doesn't change the answer. We're integrating from minus infinity to infinity. So shifting the inside of it by a small amount tau isn't going to do anything. So that's just the Fourier transform of x. Good? And what is this? The Fourier transform of g. So the Fourier transform of y is just the Fourier transform of g times the Fourier transform of x. All right, that's pretty cool. Kind of cute. But it's also really powerful. OK, so let me show you what you can do with that. First, let me just point out one thing: there is a convolution theorem that relates convolution in the time domain to multiplication in the frequency domain. You can do exactly the same derivation and show that convolution of two functions in the frequency domain is the same as multiplication in the time domain. So that's the convolution theorem. So let's see why that's so powerful. So I just showed you the Fourier transform of a Gaussian. So what we're going to do is we're going to calculate the Fourier transform of a Gaussian times a sine wave.
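The theorem just derived can be sanity-checked numerically in its discrete form, where the DFT of a circular convolution equals the product of the DFTs. A Python sketch with arbitrary random signals:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
x = rng.standard_normal(N)        # arbitrary signal
g = rng.standard_normal(N)        # arbitrary kernel

# circular convolution: y[t] = sum over tau of g[tau] * x[(t - tau) mod N]
y = np.array([np.dot(g, x[(ti - np.arange(N)) % N]) for ti in range(N)])

lhs = np.fft.fft(y)               # Fourier transform of the convolution
rhs = np.fft.fft(g) * np.fft.fft(x)   # product of the Fourier transforms
# lhs and rhs agree to floating-point precision
```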
So if you take a Gaussian, some window centered around 0 in time-- this is a function of time now, right? So there's a little Gaussian pulse in time. We're going to multiply that by this sine wave. And when you multiply those together, you get this little pulse of sine. OK? [WHISTLES] Sorry, constant frequency. Boy, that's harder to do than I thought. [WHISTLES] OK, just a little pulse of sine wave. So what's the Fourier transform of that? Well, we don't know, right? We didn't calculate it. But you can actually just figure it out very simply, because you know the Fourier transform of a Gaussian. What is that? A Gaussian. You know the Fourier transform of a sine wave. What is that? Yeah. Hold it up for me. What does it look like? Yes, sine wave, thank you, like this. OK. And so what do you know-- can you tell me right away what the Fourier transform of this is? You take the Fourier transform of that and convolve it with the Fourier transform of that. So let's do that. So there's the Fourier transform of this Gaussian. If this is 200 milliseconds wide, then how wide is this? AUDIENCE: [INAUDIBLE] MICHALE FEE: It's 1 over 200 milliseconds, which is what? 5 hertz, right? 1 over 0.2 is 5. The Fourier transform of this sine wave-- and I think I made it a cosine instead of a sine. Sorry, that's why I was going like this, and you were going like this. So I made it a cosine function. The Fourier transform of the cosine function has two peaks. This is 20 hertz. So one peak at 20, one at minus 20. And the Fourier transform of this is just the convolution of this with that. You take this Gaussian and you slide it over those two peaks. You essentially smooth this with that. Does that make sense? Cool. So you didn't actually have to stick this into Matlab and compute the Fourier transform of that. You can just know in your head that that's the product of a Gaussian and a sine wave, or cosine.
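That prediction is easy to confirm numerically: the spectrum of a Gaussian-windowed cosine is the cosine's two peaks smoothed by the Gaussian's transform. A Python sketch (the 50 ms width and 20 Hz rate here are illustrative choices, not the lecture's exact figures):

```python
import numpy as np

fs = 1000
t = np.arange(-0.5, 0.5, 1/fs)
sigma_t = 0.05                                    # 50 ms Gaussian window
x = np.exp(-t**2 / (2*sigma_t**2)) * np.cos(2*np.pi*20*t)

X = np.fft.fftshift(np.fft.fft(x))
f = np.fft.fftshift(np.fft.fftfreq(len(t), 1/fs))

# the energy sits in two Gaussian bumps centered at +20 and -20 Hz,
# with essentially nothing left at 0 Hz
peak_f = abs(f[np.argmax(np.abs(X))])
```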
And therefore, the Fourier transform of that is the convolution of a Gaussian with these two peaks. And there are many, many examples of interesting and useful functions in the time domain that you can intuitively understand the Fourier transform of just by having this idea. It's very powerful. Here's another example. We're going to calculate the Fourier transform of this square windowed cosine function. So it's a product of the square pulse with this cosine to give this. So what is the Fourier transform of this? What is that? So what is the Fourier transform of that? AUDIENCE: The sinc function. MICHALE FEE: It's the sinc function. It's that kind of wiggly, peaky thing. And the Fourier transform of that is just two peaks. And so the Fourier transform of this is just like two peaks-- yeah-- with wiggly stuff around them. That's exactly right. All right, any questions about that? OK. All right, change of topic. Let's talk about Gaussian noise, the Fourier transform of noise and the power spectrum of noise. And we're going to eventually bring all these things back together. OK? All right, so what is Gaussian noise? So first of all, Gaussian noise is a signal in which the value at each time is randomly sampled from a Gaussian distribution. So you can do that in Matlab. That's a very simple function. This returns a vector of length N, sampled from a normal distribution, with variance 1. So here's what that sounds like. [STATIC NOISE] Sounds noisy, right? OK, I just wanted to show you what the autocorrelation function of this looks like, which I think we saw before. So if you look at the distribution of all the samples, it just gives you a distribution that has the shape of a Gaussian. And the standard deviation of that Gaussian is 1. Now, what if you plot the correlation between the value of this function at time t and time t plus 1? Is there any relation between the value of this function at any time t and another time t plus 1?
So they're completely uncorrelated with each other. The value at time t is uncorrelated with the value at time t plus 1. So there's zero correlation between neighboring samples. What about the correlation of the signal at time t with the signal at time t? Well, that's perfectly correlated, obviously. So we can plot the correlation of this function with itself at different time lags. Remember, that was the autocorrelation function. And if we do that, you get a 1 at 0 lag and 0 at any other lag. So that's the autocorrelation function of Gaussian noise. All right, now, the Fourier transform of Gaussian noise is just Gaussian noise. It's another kind of interesting Fourier transform pair. And it's a Gaussian random distribution in both the real and the imaginary parts. So you can see that the blue and red-- the red here is the imaginary part-- are both just Gaussian noise. OK? All right, now what is the power spectrum? So we can take this thing-- and, remember, when we plot the power spectrum, we just plot the square magnitude of the positive frequencies. Why is that again? Why do we only have to plot the square magnitude of the positive frequencies? AUDIENCE: [INAUDIBLE] Gaussian, so they're all [INAUDIBLE] MICHALE FEE: Yep. So it turns out that the Fourier transform at a positive frequency is just the complex conjugate of the Fourier transform at the negative frequency. So the square magnitude is identical. So the power spectrum on this side is equal to the power spectrum on that side. So we just plot half of it. So that's what the power spectrum of this particular piece of signal looks like. The power spectrum of noise is very noisy. We're going to come back, and I'm going to show you that on average, if you take many different signals, many copies of this, and calculate the power spectrum and average them all together, it's going to be flat. But for any given piece of noisy signal, the power spectrum is very noisy.
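That flatness-on-average can be demonstrated directly: average the power spectra of many independent draws of unit-variance Gaussian noise. A Python sketch (the trial count and signal length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 128, 2000
S = np.zeros(N)
for _ in range(trials):
    x = rng.standard_normal(N)              # one draw of Gaussian noise
    S += np.abs(np.fft.fft(x))**2 / N       # one (very noisy) spectrum
S /= trials

# any single-trial spectrum fluctuates by roughly 100%, but the
# average is flat at the variance of the noise, which is 1 here
```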
Any questions about that? OK. All right, so now let's turn to spectral estimation. How do we estimate the spectrum of a signal? So let's say you have a signal, S of t. And you have a bunch of short measurements of that signal. You have some signal, let's say, from the brain, like you record some local field potential or some ECG or something like that. And you want to find the spectrum of that. Let's say you're interested in measuring the theta rhythm or some alpha rhythm or some other periodic signal in the brain. What you could do is you could have a bunch of independent measurements of that signal. Let's in this case call them four trials, a bunch of trials. What you can do is calculate the power spectrum, just like we did before, for each of those signals. So this is a little sample y of t, y1 of t, y2 of t. You can calculate the square magnitude of the Fourier transform of each one of those samples. And you can estimate the spectrum of those signals by just averaging together those separate, independent estimates. Does that make sense? So literally, we just do what we did here. We have a little bit of signal. We Fourier transform it, take the square magnitude. And now, you average together all of your different samples. Does that make sense? That's the simplest form of spectral estimation. It's like if you want to estimate the average height of a population of people. You take a bunch of different measurements. You randomly sample. You take a bunch of different measurements. And you average them together. That's all we're doing here. OK? Now, you can apply that same principle to long signals. What you do is you just take that signal and you break it into short pieces. And you compute the power spectrum in each one of those windows. And again, you average them together. Now, extracting that little piece of signal from this longer signal is essentially the same as multiplying that long signal by a square pulse. 0 everywhere, but 1 in here, 1 here.
0 everywhere else. Right? So that process of taking a long signal and extracting out one piece of it has a name. It's called windowing. Sort of like you're looking at a scene through this window, and that's all you can see. OK, so one way to estimate the spectrum of this signal is to take the signal in this window, compute the FFT of that, take its power spectrum. And then apply this window to the next piece. Apply this window to the next piece and compute the spectrum and average them all altogether. What's the problem with that? Why might that be a bad idea? Yeah. AUDIENCE: Could we [INAUDIBLE]. MICHALE FEE: Good. That's a very good example. But there's sort of a general principle that we just learned that you can apply to this problem. What happens to the Fourier transform of the signal when we multiply it by this square pulse? AUDIENCE: Convolving. MICHALE FEE: We're convolving the spectrum of the signal with a sinc function. And the sinc function is really ugly, right? It's got lots of wiggles. And so it turns out this process of windowing a piece of data with this square pulse actually does really horrible things to our spectral estimate. And we're going to spend a lot of time in the next lecture addressing how you solve that problem in a principled way and make a good estimate of the signal by breaking it up into little pieces. But instead of just taking a square window we do something called tapering. So instead of multiplying this signal by square pulses, we sample the signal by multiplying it by little things that look like little smooth functions, like maybe a Gaussian, or other functions that we'll talk about that do an even better job. OK? All right. OK, so that process is called tapering-- multiplying your data by a little taper that's smooth, unlike a square window. Computing spectral estimates from each one of those windowed and tapered pieces of data gives you a very good estimate of the spectra.
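The difference between a square window and a smooth taper shows up numerically. The sketch below compares the leakage of a rectangular window against a Hann taper-- one common smooth taper, used here only for illustration; the optimal tapers come later in the course-- for a cosine whose frequency falls between DFT bins. The frequencies and sizes are arbitrary choices.

```python
import cmath, math

# power in one DFT bin, computed directly (slow but self-contained)
def power_at(x, k):
    N = len(x)
    X = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return abs(X) ** 2

N = 64
f = 4.5  # cycles per window, deliberately *between* DFT bins
sig = [math.cos(2 * math.pi * f * n / N) for n in range(N)]

# rectangular window: use the samples as-is
rect = sig
# Hann taper: multiply by a smooth bump that goes to zero at the edges
hann = [s * 0.5 * (1 - math.cos(2 * math.pi * n / N)) for n, s in enumerate(sig)]

far = 20  # a bin far from the signal frequency
rect_leak = power_at(rect, far)
hann_leak = power_at(hann, far)
print(rect_leak, hann_leak)  # the tapered estimate leaks far less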
And we're going to come back to that, how to really do that right, on Thursday. OK? All right, let me point out why this method of spectral estimation is very powerful. So, remember, we talked about how you can see-- remember, we talked about if you have a noisy signal that has a little bit of underlying sine wave in it, we talked about in class, if you take the autocorrelation of that function, you get a delta function and then some little wiggles. So there are ways of pulling periodic signals, periodic structure out of noisy signals. But it turns out that this method of spectral estimation [AUDIO OUT] did the most powerful way to do it. I'm just going to show you one example. This blue function here is noise, plus a little bit of a sine wave at, I think it's 10 hertz. OK, yeah. Anyway, I didn't write down the frequency. But the blue function here is noise plus the red function. So you can see the red function is small. And it's buried in the noise, so that you can't see it. But when you do this process of spectral estimation that we're learning about, you can see that that signal buried in that noise is now very easily visible. So using these methods, you can pull tiny signals out of noise at a very bad signal to noise ratio, where the signal is really buried in the noise. So it's a very powerful method. And we're going to spend more time talking about how to do that properly. All right, so let me spend a little bit more time talking about the power spectrum of noise, so that we have a better sense of what that looks like. So remember, I told you if you take a sample of noise like this and you estimate the spectrum of it, you compute the power spectrum of one sample of noise, it's extremely noisy. Let's see, I'm just going to remind you what that looks like. That's the power spectrum of one sample of noise. In order to estimate what the spectrum of noise looks like, you have to take many examples of that and average them together. 
And when you do that, what you find is that the power spectrum of noise is a constant. It's flat. [AUDIO OUT] Gaussian noise. The power spectrum, really, you should think about it properly as a power spectral density. There is a certain amount of power at different frequencies in this signal. So there is some power at low frequency, some power at intermediate frequencies, some power at high frequencies. And for Gaussian noise, that power spectral density is flat. It's constant as a function of frequency. OK? And the units here have units of variance per unit frequency-- variance per frequency. OK? Or if it were an electrical signal going through a resistor, it would be power per unit frequency. So you can see that here the value here is 0.002. The bandwidth of this signal is 500 hertz. And so the variance is the variance per unit per unit frequency times the bandwidth. And that's 1. And we started with a Gaussian noise that has variance 1, that when we calculate the power spectrum of that we can correctly read out from the power spectrum how much variance there is per unit frequency in the signal. OK? All right, it's kind of a subtle point. I actually don't expect you to know this. I just wanted you to see it and hear it. So you know formally what it is that you're looking at when you look at a spectral estimate of a noisy signal. All right, let's talk about filtering in the frequency domain. So remember, we learned how to smooth a signal, how to filter a signal, either high pass or low pass, by convolving a signal with a kernel. So you remember that the kernel for a low pass was something like this. So when you convolve, that's the kernel for a low pass. And for a high pass, anybody remember what that looks like? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yep. So-- sorry, I should be a little more careful here not to mix up my axes with-- I'm going to remove that. So that's the kernel for a low-pass filter. 
The kernel for a high-pass filter is a delta function that reproduces the function. And then you subtract off a low-pass filtered version of the signal. OK? So that's the kernel for a high pass. OK, so this was how you filter a signal by convolving your signal with a function, with a linear kernel. We're going to talk now about how you do filtering in the frequency domain. So if filtering in the time domain is convolving your [AUDIO OUT] with a function, what is filtering in the frequency domain going to be? AUDIENCE: [INAUDIBLE] MICHALE FEE: Right. It's going to be multiplying the Fourier transform of your signal times what? AUDIENCE: The Fourier transform [INAUDIBLE] MICHALE FEE: The Fourier transform of things like that. All right, so let's do that. So this is what we just talked about. We introduced the idea before. This was actually a neural signal that has spikes up here and local field potentials down here. And we can extract the local field potentials by smoothing this [AUDIO OUT] by low pass filtering it, by convolving it with this kernel here. So this is what we just talked about. So if filtering in the time domain is convolving your data with a signal, then filtering in the frequency domain is multiplying the Fourier transform of a [AUDIO OUT] times the Fourier transform of the kernel. And you can see that what this does to the power spectrum is just what you would expect. The power spectrum of the filtered signal is just the power spectrum of your original signal times the power spectrum of the kernel. All right, so here's an example. So in blue is the original Gaussian noise. In green is the kernel that I'm smoothing it by, filtering it by. Convolving the blue with the green gives you the red signal. What kind of filter is that called again? High pass or low pass? AUDIENCE: Low pass. MICHALE FEE: Low pass, good. All right, so let me play you what those sound like. So here's the original Gaussian noise. [STATIC] Good. 
And here's the low pass Gaussian noise. [LOWER STATIC] It got rid of about the high frequency parts of the noise. OK, so here's the power spectrum of the original signal in blue. In order to get the power spectrum of the filtered signal in red, we're going to multiply that by the magnitude squared Fourier transform of this. What do you think that looks like? So this is a little Gaussian filter in time. What is the Fourier transform of that going to look like? AUDIENCE: [INAUDIBLE] MICHALE FEE: The Fourier transform of a Gaussian is a Gaussian. So the power spectrum of that signal is going to just be a Gaussian. Now, how would I plot it? It's peaked where? The Fourier transform of a Gaussian is peaked at 0. So it's going to be a Gaussian here centered at 0. We're only plotting the positive frequencies. So this, we're ignoring. And it's going to be like that, right? So that's the Fourier transform, squared magnitude Fourier transform of that Gaussian. It's just another Gaussian. And now if we multiply this power spectrum times that power spectrum, we get the power spectrum of our filtered signal. Does that makes sense? So convolving our original blue signal with this green Gaussian kernel smooths the signal. It gets rid of high frequencies. In the frequency domain, that's like multiplying the spectrum of the blue signal by a function that's 0 at high frequencies and 1 at low frequencies. Does that makes sense? So filtering in the frequency domain, low filtering in the frequency domain, means multiplying the power spectrum of your signal by a function that's low at high frequencies and big at low frequencies. So it passes the frequencies and suppresses the high frequencies. It's that simple. Any questions about that? Well, yes-- AUDIENCE: So why is it that like-- you need to like-- of I guess when you multiply in the frequency, could you theoretically multiply by anything and that would correspond to some other type of filter. 
So why don't we just like throw away high frequencies? Or something like multiply by a square in the frequency domain and correspond to some different filter we don't know. MICHALE FEE: Yeah. You can do that. You can take a signal like this, Fourier transform it, multiply it by a square window to suppress high frequencies. What is that equivalent to? What would be the corresponding temporal kernel that that would correspond to? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. It would be convulsing your function with a sinc function. It turns out that's-- the reason you wouldn't normally do that is that it mixes the signal across all time. The sinc function goes on to infinity. So the nice thing about this is when you smooth a signal with a Gaussian, you're not adding some of the signal here that were over here. Does that makes sense? Convolving with a sinc function kind of mixes things in time. So normally you would smooth by functions that are kind of local in time, local in frequency, but not having sharp edges. Does that makes sense? So we're going to talk about how to smooth things in frequency with signals with kernels that are optimal for that job. That's Thursday. What would a high-pass filter look like in the frequency domain? So high-pass filter would pass high frequencies and suppress low frequencies. Right? You've probably not heard of it, but, what would a band pass filter look like? It would just pass a band. So it'd be 0 here. It would be big somewhere in the middle and then go to 0 at higher frequencies. OK? Does that makes sense? Any questions about that? OK. Good. If we plot this on a log plot in decibels, you can see that on a log plot, a Gaussian, which is e to [AUDIO OUT] like f squared. On a log plot, that's minus f squared. Right? That's why on a log plot this would look like an inverted parabola. So that's the same plot here, but plotted on a log scale. Any questions about that? 
I want to tell you about a cool little theorem called the Wiener-Khinchin theorem that relates the power spectrum of a signal with the autocorrelation of a signal. So in blue, that's our original Gaussian noise. In red is our smooth Gaussian noise. If you look at the correlation of neighboring time points in the blue signal, you can see they're completely uncorrelated with each other. But what about neighboring time points on the smooth signal? Are they correlated with each other? If we look at for the red signal, y of i and y of i plus 1, what does that look like for the red signal? They become correlated with each other, right? Because each value of the smooth signal is some sum over the blue points. So neighboring points here will be similar to each other. That's what smoothness means. Neighboring time points are close to each other. So if you look at the correlation of neighboring time points in the smooth signal, it looks like this. It has a strong correlation. So if you look at the autocorrelation of the original Gaussian noise, it has a delta function at zero. The autocorrelation of the smoothed function has some width to it. And the width to that autocorrelation function tells you the time [AUDIO OUT] this signal was smoothed. Right? OK. Now, how does that relate to the power spectrum? So it turns out that the power spectrum of a signal, the magnitude squared of the Fourier transform, the power spectrum of the signal is just the Fourier transform of the autocorrelation. So what's the Fourier transform of a delta function? Anybody remember that? AUDIENCE: Constant. MICHALE FEE: It's a constant. And how about our smoothed? Our smooth signal has a power spectrum that's a Gaussian in this case. What's the transform of a Gaussian? AUDIENCE: [INAUDIBLE] MICHALE FEE: OK. So if you have a signal that you have some sense of what the power spectrum is you immediately know what the autocorrelation is. You just Fourier transform that and get the autocorrelation. 
What's the width of this in time? How would I get that from here? How are the width in time and frequency related to each other for-- AUDIENCE: [INAUDIBLE] MICHALE FEE: Right. The width of this in time is just 1 over the width of [AUDIO OUT] So you have to take the full width. Does that makes sense? OK. Wiener-Khinchin theorem, very cool. All right, let's talk about the Shannon-Nyquist theorem. Anybody heard of the Nyquist limit? Anybody heard of this? All right, so it's a very important theorem. Basically anybody who's acquiring signals in the lab needs to know the Shannon-Nyquist theorem. It's very important. All right, so remember that when we have discrete Fourier transforms, fast Fourier transforms, our frequencies are discretized and so is our time. But the discretization in frequency means that the function is periodic. Any signal that has discrete components and frequencies is periodic in time. Remember, we started with Fourier series. And we talked about how if you have a signal that's periodic in time, that you can write it down as a set of frequencies that are integer multiples of each other. Now, in these signals, time is sampled discretely at regular time intervals. So what does that tell us about the spectrum, the Fourier transform? AUDIENCE: It's periodic. MICHALE FEE: It's also periodic. OK, so discretely sampled in frequency at regular intervals means that the signal is periodic in time. Discretely sampled in time means that the Fourier transform is periodic. Now, we don't usually think about this. We've been taking these signals, sines and cosines and square pulses and Gaussian things, and I've been showing you discretely sampled versions of those signals. And I've been showing you the Fourier transforms of those signals. But I've only been showing you this little part of it. In fact, really be thinking that those discreetly sampled signals have a Fourier transform that's actually periodic. 
There's another copy of that spectrum sitting up here at 1 over the sampling rate and another copy sitting up here. Remember, this is like a train of delta functions. The Fourier transform of that is like another train of delta functions. So there are copies of this spectrum spaced every 1 over delta t. It's kind of a strange concept. So the separation between those copies of the spectra in the frequency domain are given by 1 over the sampling rate. Any questions about that? It's a little strange. But we'll push on because I think it's going to be more clear. So what this says is that if you want to properly sample this signal in time, you need these [AUDIO OUT] copies of its spectrum to be far away so they don't interfere with each other. So what that means is that you need the sampling rate to be high enough-- the higher the sampling rate is, the further these spectra are in time. Delta t is very small, which means 1 over delta t is very big. The sampling rate needs to be greater than twice the bandwidth of the signal. [AUDIO OUT] bandwidth B. So if the sampling rate is less than twice the bandwidth, what happens? That means delta t is too big. These copies of the spectrum are too close to [AUDIO OUT] and they overlap. That overlap is called aliasing-- a-l-i-a-s-i-n-g. OK? So you can see that if you sample a signal at too low a sampling rate and you look at the spectrum of the signal, you see that it has-- like you'll see this part of the spectrum, but you'll also see this other part of the spectrum kind of contaminating the top of your Fourier transform. Does that makes sense? OK. So let me just say it again. If your signal has some bandwidth B that in order to sample that signal properly, your sampling rate needs to be greater than twice that bandwidth, 1, 2. OK? All right, any questions about that? 
Actually, there was actually recently a paper where somebody claimed-- I think I told you about this last time-- there was a paper where somebody claimed to be able to get around this limit. And they were mercilessly treated in the responses to that paper. So don't make that mistake. Now, what's really cool is that if the sampling rate is greater than twice the bandwidth, something amazing happens. You can perfectly reconstruct the signal. Now that's an amazing claim. Right? You have a [AUDIO OUT] time. All right, it's wiggling around. What this is saying is that I can sample that signal at regular intervals and completely ignore what's happening between those samples, have no knowledge of what's happening between those samples. And I can perfectly reconstruct the signal I'm sampling at every time point, even though I didn't look there. So how do you do that? Basically, your sampled signal, you're regularly sampled signal, has this spectrum-- has this Fourier transform with repeated copies of the signal, repeated copies of the spectrum. So how would I recover the spectrum's original signal? Well, the spectrum of the original signal is just this piece right here. So all I do is in the frequency domain I take that part. I keep this, and I throw away all those. In other words, I multiply my Fourier transfer sampled signal in the frequency domain by a square pulse that's 1 here and 0 everywhere else. Does that makes sense? And when I inverse Fourier transform that I've completely recovered my original signal. What is multiplying this spectrum by the square wave in the frequency domain equivalent to in the time domain? AUDIENCE: So I was going to ask-- MICHALE FEE: Yeah. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. AUDIENCE: So why do you want to do that? MICHALE FEE: Yeah, so it's amazing, right? It's cool. So let me just what it is. And then we can marvel at how that could possibly be. 
Multiplying this spectrum by this square wave, throwing away all those other copies of the spectrum and keeping that one is multiplying by a square wave in the frequency domain, which is like doing what? AUDIENCE: Convolving. MICHALE FEE: Convolving the time domain sinc-- that regular train of samples, convolving that with a sinc function. So let me just say that. Here, we have a function that we've regularly sampled at these intervals. If we take that function, which is a bunch of delta functions here, here, here, here, just samples, and we can evolve that with a sinc function, we perfectly reconstruct the original signal. Pretty wild. So that's the Nyquist-Shannon theorem. What it says is that we can perfectly reconstruct the signal we've sampled as long as we sample it at a sampling rate that's greater than twice the bandwidth of the signal. OK? All right. OK, good. So there's this cute trick called zero-padding, where you don't perfectly reconstruct the original signal, but basically you can interpolate. So you can extract the values of the original signal times between where you actually sampled it. OK? And basically the trick is as follows. We take our sampled signal. We Fourier transform it. And what we do is we just add zeros. We pad that Fourier transform with zeros. OK? So we just take positive frequencies and the negative frequencies, and we just stick a bunch of zeros between and make it a longer vector. And then when we inverse Fourier transform this, you can see that you have a longer array. When you inverse transform, inverse Fourier transform, what you're going to have is your original samples back, plus a bunch of samples in between that interpolate, that are measures of the original signal at the times where you didn't measure it. So you can essentially increase the sampling rate of your signal after the fact. Pretty cool, right? Again, it requires that you've sampled at twice the bandwidth of the original signal. Yes. 
AUDIENCE: Like how do you know the bandwidth of the original signal if you don't have samples? MICHALE FEE: Good question. How might you do that? AUDIENCE: Can you like [INAUDIBLE] different sampling lengths to get [INAUDIBLE] MICHALE FEE: You could do that. From nearly all applications, you have a pretty good sense of what the frequencies are that you're interested in a signal. And then what you do is you have to put a filter between your experiment and your computer that's doing the sampling that guarantees that it's suppressed all the frequencies above some point. OK? And that kind of filter is called an anti-aliasing filter. So in that case, even if your signal had higher frequency components, the anti-aliasing filter cuts it off so that there's nothing at higher frequencies. Does that makes sense? Let me give you an example of aliasing. Let's say I had this signal like this. And I sample it here, here, here, here, here, here. I need to do regular intervals. So you can see that if I have a sine wave that is close in frequency to the sampling rate, you can see that when I sample the signal, I'm going to see something at the wrong frequency. That's an example of aliasing. OK? OK, so here's an example. We have a 20 hertz cosine wave. I've sampled it at 100 hertz. So I'm, you know, 5-- so what frequency would I have to sample this in order to reconstruct the cosine? I'd have to sample at least 40 hertz. Here, I'm sampling at 100. The delta t is 10 milliseconds. So those are the blue points. And now, if I do this zero-padding trick, I Fourier transform. I do zero-padding by a factor of 4. That means if I take the Fourier transform signal and I'm now making that vector 4 times as long by filling in zeros, then I inverse Fourier transform. You can see that the red points show the interpolated values of that function after zero-padding. OK? So it can be a very useful trick. 
You can also sample the signal in the time domain and then add a bunch of zeros to it before you Fourier transform. And that gives you finer samples in the frequency domain. OK? And I think that's-- so zero-padding in the time domain gives you finer spacing in the frequency domain. And I'll show you in more detail how to do this after we talk about tapering. And it's very simple code actually. Matlab has built into it the ability to do zero-padding right in the FFT function. OK, let's actually just stop there. I feel like we covered a lot of stuff today.
MIT_940_Introduction_to_Neural_Computation_Spring_2018
4_HodgkinHuxley_Model_Part_1_Intro_to_Neural_Computation.txt
MICHALE FEE: Today we're going to continue building our equivalent circuit model of a neuron. Again, this is the Hodgkin-Huxley model, and the model was really developed around explaining how neurons generate action potentials. There are two key ion channels that are associated with making spikes. There's a sodium channel that we model as a conductance in series with a battery, and there's a potassium conductance that we model the same way. And again, those two conductances cooperate to produce an action potential. And we saw, essentially, how those two conductances, together with their batteries-- the sodium battery is up at plus 50 or so millivolts. The potassium battery is down at minus 75 or so millivolts. And those two conductances then work to essentially connect the inside of the neuron to the plus battery and then to the minus battery to give you an action potential. And you may remember that we saw what that looks like. So here is plotting membrane potential in blue. Here, we turn on this sodium conductance. The voltage of the cell races up to about plus 55. Then we turn off the sodium. We turn on the potassium. The voltage goes down to minus 75 or so. And then we turn off both conductances, and the cell recovers. So you can see that that basically produces what looks like an action potential. That's the basis of action potential production. But in order to understand how these things turn on and off in the time course that they do, we need to understand a little bit more about how these sodium and potassium conductances work. And that's what we're going to focus on today. So let's just start building our model. So each one of those conductances with a battery is associated with a current. There's a current that goes through each of those conductances. 
The total current through the membrane, the ionic current through the membrane in the Hodgkin-Huxley model is a sum of three components, actually-- a sodium current, a potassium current-- those two we just talked about-- and a leak current that is just a fixed current. So the sodium and potassium currents are functions of time and voltage. The leak current, the leak conductance, is just fixed. And it has a battery of about minus 50 millivolts. And it just tends to keep the cell sort of hyperpolarized. And these two currents, these two conductances, do the job of making an action potential. So the total membrane current is just a sum of those three parts. And now we can just take that membrane current and plug it into this equation for the voltage as a function of current now and solve that differential equation to get the voltage to calculate how the voltage evolves in time in the presence of these membrane currents. All right, so you recall from the last lecture that you watched on video that these currents can be written down as a conductance. So let's just start here. This is the most similar one to the one that you saw in the previous lecture. The current is just a conductance times a driving potential. And we described how that equation can be summarized in electrical circuit components as a resistor, which is one over the conductance, times a driving potential, which is basically just the voltage drop across the conductance. Now, each of these conductances, each of these currents, is going to be written down by a very similar equation. So the potassium current is just the potassium conductance times the driving potential for potassium, which is just the membrane potential minus the Ek. And the sodium current is just the sodium conductance times the driving potential for sodium, which is the membrane potential minus the sodium battery. Any questions? Yes? AUDIENCE: How did the [INAUDIBLE].. MICHALE FEE: Yeah. 
So unless I've made a mistake, I've tried to put these-- remember that the potassium battery is minus, is negative, right? And the sodium battery is positive. So I've tried to show that by putting the batteries in the opposite direction, right? So the battery symbol has one side that's supposed to indicate positive voltage, and the other side is negative. So because they have the opposite sign, I've put them in backwards, in the opposite direction. Does that make sense? Don't worry about that too much. If I ask you to draw this, I don't really care that much which way these things go. I just want you to know that this one is negative on the inside and that the sodium is positive on the inside. This is the inside of the cell here, right? The sodium battery drives the inside of the cell toward positive voltage, because you have positive ions flowing into the cell. So you can see now that the membrane potential here depends on the membrane currents through this differential equation. But the membrane currents-- the sodium, potassium, and leak currents-- all depend on these conductances, right? And these conductances for the sodium and potassium are voltage dependent and time dependent. So you can see that the conductances depend on the voltage, right? So the membrane potential depends on current. Current depends on conductances, but the conductances depend on the membrane potential. So it goes around and around and around, right? So those things all depend on each other. And so what we are setting out to do is to write down the way those things depend on each other, the way you can think about that system evolving in time. So let me just show you what the plan is. So the plan is to write down an algorithm-- basically, a for loop-- that describes how the neuron generates an action potential. Let me just walk you through the steps of that, and then I'll get to your question. 
So we're going to start with some membrane potential V, and we're going to calculate that the voltage-dependent parameters of the sodium and potassium conductance using that membrane potential. And once we have those parameters, we can actually calculate the conductance for each of those, for the sodium and potassium. Once you know the conductance, you can get the currents. Once you have the currents, you can compute the total membrane current, just by adding them all together. Then, you can compute-- once you have all those currents, you just compute V infinity, which is just the current times the resistance, the effective resistance. Then we're going to integrate our first order linear differential equation to get a new voltage as a function of time and V infinity. And we're just going to go back and start again. The so that's the algorithm that a neuron uses to generate a spike. And that's what we're going to work out. Now, we've talked about these things over the last few lectures, how you can relate total current to V infinity and then integrate a first order linear differential equation, which is just relaxing exponentially toward V infinity. But now we're going to put these things in, figure out the voltage and time dependence of the sodium and potassium conductances. Yes? AUDIENCE: [INAUDIBLE]. MICHALE FEE: Yes. It's primarily potassium. It has a negative potential. But it's just constant, so we're not really going to pay much attention to it. So if the sodium and potassium currents are off, then the leak current still keeps the cell hyperpolarized. All right, any questions? That's the big picture. So here are our learning objectives. I'd like you to be able to draw that circuit diagram, not worrying about the long and short sides of the battery. We're going to talk about how we measure the properties of ion channels. That's called a voltage clamp. So I want you to be able to describe what that is. 
I'd like you to be able to plot the voltage and time dependence of the potassium current for today. The next lecture, we'll talk about the sodium current, so we'll add that to our list of things that we need to know. But for today, I'd like you to be able to plot the voltage and time dependence of the potassium current and conductance. And be able to explain, biophysically, where the time and voltage dependence of that potassium conductance comes from and be able to write it down in terms of quantities that are called the Hodgkin-Huxley gating variables. So that's the plan. All right, so let's come back to our circuit. Again, we have a sodium current that's sodium conductance times the sodium driving potential. The conductance is voltage and time dependent. The equilibrium potential for sodium, again, is plus 55 millivolts. The potassium current is just potassium conductance times the potassium driving potential. The driving potential is referenced to a battery, an equilibrium potential at minus 75. And the leak has a battery at minus 50 millivolts. So those are the parts we're going to use. We're going to describe now the experiments that Hodgkin and Huxley did to extract the parameters of the sodium and potassium conductances. Today, we're going to focus on the potassium conductances. And then on next Tuesday, I guess, we're going to do the sodium. All right, so the reason Hodgkin and Huxley studied-- so they studied these channels, the potassium and sodium channels, in the giant squid axon. Now, most of the axons in our brain are about a micron across. This axon is about a millimeter across. Action potentials propagate much faster in large axons, and this axon is involved in transmitting an action potential from the brain to the tail that drives an escape reflex. The squid squirts water out of sort of a chamber that has water in it. If the squid senses danger, it contracts muscles that squeeze water out of that, and it makes a jet.
And it squirts the squid forward away from danger. So that action potential has to propagate very quickly from the brain to the tail, and it does that through this enormous axon. That axon is so big you can now put multiple wires inside of it. You don't even need to pull these glass electrodes. You can just take little wires and stick them in-- chop out a little piece of the axon and stick wires inside of it and study it. Yes? AUDIENCE: So if the body of this squid is like a giant [INAUDIBLE] it has arms coming out of its head? MICHALE FEE: Yes. You eat that part, and you eat that part. Not that. You throw away the most interesting part. OK, any other questions? All right, so now we're setting out to measure these sodium and potassium conductances, OK? So how do we do that? So what we really want to do is to set-- we want to measure conductance, which is the relation between voltage and current. So what we'd like to do is to be able to set the voltage at a certain level and measure the current that flows through these channels. So you really want to plot the IV curve, right? You want to set the voltage, measure the current, and do that at a bunch of different voltages. And you recall that the conductance is basically just the slope of that curve, right, that line. So the job is set voltage, measure current, extract conductance. Now, the problem with that is that as soon as you set the voltage of the axon somewhere up here in an interesting range, the thing begins to spike. And then the voltage is no longer constant, right? So it becomes really hard to make measurements like this if you depolarize the cell a little bit to try to set the voltage and all of a sudden, it's [MIMICS BUZZING]. It's generating spikes. So what do you do? So the trick is to develop a device called a voltage clamp. This thing basically holds the-- so, look, if the action potential were really, really slow, then you could actually set the voltage.
You could change the current being injected into the cell by hand. You say, OK, I'm trying to set the voltage at zero. Oh, it got a little bit too high, so I turn the current down. Now the voltage has gone too low, so now I turn the current up. You could do it by hand if the action potential were super slow, if it took a minute to generate, right? But the action potential takes a millisecond. So that's just too fast for you to follow. So you just make a little electrical circuit that does that job for you. It uses feedback to set the voltage where you want. So you put a-- here's your cell. Here's your membrane resistance or conductance that you're trying to measure. You put an electrode in the cell. You put it into a little amplifier called an operational amplifier. And then on the other side of that amplifier, that differential amplifier, you put the command voltage that you're trying to set. So here's the way it works. Basically, this thing tries to set the membrane potential. It tries to make this value, this voltage, equal to that voltage. And it does that by feeding current back into the cell. Does that make sense? OK, so you use an operational amplifier. An op amp has two inputs-- a plus input, a minus input. The output is just a gain times the plus input minus-- the positive input minus the negative input. And the gain is really big. It's about a million. So if this input is a little bit above that input, this output is big and positive. If this input is less than that input, you can see this is negative. And so the output is big and negative. Any questions? So don't get confused here. That G is gain, not conductance, just for the next few slides. So how does this work? You can see that if the membrane potential is less than the command voltage, then the output voltage is positive and big. That drives current into the cell, which increases the membrane potential and makes it approach the command voltage.
If the membrane potential is larger than the command voltage, then this thing-- this is bigger than this. So this is negative. And that pulls current out of the neuron and decreases the membrane potential. And in both cases, the membrane potential is being pulled toward the command voltage. And you can show, if you just plug in these variables into a couple of equations, that as long as the gain is big enough, the membrane potential is forced to be very close to the command voltage. All right, so that's the voltage clamp. It drives whatever current is necessary to clamp the voltage of the cell at the command voltage. And then what we do is during an experiment, we step the command voltage around. The cell tries to spike. Those currents turn on. The cell tries to spike, but this thing keeps the voltage locked at whatever it is that you want the voltage to be. And then you measure the amount of current required. You just measure the amount of current flowing through this resistor here that's required to hold the cell at any voltage. All right, any questions? Voltage clamp-- very cool. Yes? AUDIENCE: Can you explain gain again? MICHALE FEE: Gain is just the multiplier here in this equation. So if there's a tiny difference between the two inputs, the output is bigger. That's what gain means, right? If the gain is a million, if there's a microvolt difference between the two inputs, the output will be about a volt. And it would be a plus or minus, depending on which of those two was more positive. OK, so now let's get to the actual voltage clamp experiment that Hodgkin and Huxley did. Here, we have two wires in our cell-- one to measure voltage and the other one to inject current. That's exactly what they did. There's one wire here. That's a little piece of axon. You can literally cut the squid open, find that axon. It's a big, white-looking tube about a millimeter across. Cut two pieces of it, take two wires, stick one in each end. 
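The feedback idea can be shown with a toy numerical simulation. The cell parameters, the electrode resistance, and the gain below are made-up round numbers, not anything from the squid experiments, and the amplifier is idealized (a real op amp would saturate); the point is only that high-gain feedback pins the membrane potential to the command voltage while you read off the injected current.

```python
def voltage_clamp(V_cmd=0.0, gain=1e5, T=1e-5, dt=1e-9):
    """Toy voltage clamp: the amplifier output gain*(V_cmd - Vm) drives
    current through an electrode resistance into a passive cell.
    All parameter values are invented, in SI units."""
    C = 1e-9      # membrane capacitance, farads
    g = 1e-7      # membrane conductance, siemens
    E = -0.070    # resting battery, volts
    Re = 1e7      # electrode resistance, ohms
    Vm = E        # start at rest
    for _ in range(int(T / dt)):
        V_out = gain * (V_cmd - Vm)   # idealized amplifier output
        I_inj = (V_out - Vm) / Re     # current injected through the electrode
        I_mem = g * (Vm - E)          # current leaving through the membrane
        Vm += dt * (I_inj - I_mem) / C
    return Vm, I_inj

Vm, I_inj = voltage_clamp(V_cmd=0.0)
# The clamp holds Vm within about a microvolt of the command, and the
# injected current you measure equals the membrane current g*(V_cmd - E).
```

Because the gain is large, any tiny error between Vm and the command produces a big corrective current, which is exactly the "turn the current up, turn the current down" procedure done automatically.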
I drew it like this, but if you did it that way, they'd probably short together. So then one of those measures the voltage. You set a command, and the other wire allows you to inject current inside the axon. And then you seal the ends with a little bit of Vaseline. Yes? AUDIENCE: Sorry, what is the command VC? MICHALE FEE: VC is the command voltage that you're trying to set the inside of the cell to. Remember, here, we're setting the-- VC is what you're controlling as the experimenter. You're setting the voltage with that command. So the voltage clamp then holds the inside of the cell at that command voltage, and then you're measuring the current with this device. There's a readout that tells you how much current it's putting into the cell, or it goes onto an oscilloscope, because it's time dependent. So let's do an experiment. Here's an example of an experiment. They hold the command at minus 65. And suddenly, they drop the command voltage to minus 130. What does the cell do? What is the current? Nothing. There's a little transient here, which is the amount of current it took to charge that capacitor up to minus 130 millivolts, and then nothing happens. All right, let's do another experiment. Now we're going to start our cell at minus 65 and suddenly jump the voltage up to zero. So we're going to depolarize our cell. And now something happens. We get a big pulse of current that's negative. What does negative mean? Anybody remember what negative current means by our definition? Negative means that there are positive ions going into this cell. So it's charging the cell up. This is membrane current now. So you just have to remember that definition. Negative membrane current means that positive charges are going into the cell, and that's depolarizing the cell. And then that negative current lasts for a few milliseconds, and then the current reverses sign and becomes positive and stays on. So what is that?
So the first thing that Hodgkin and Huxley did was they tried to figure out what causes that pattern of currents. So here's what they did. They had this idea that part of this might be due to sodium. And so what they did was they did an experiment where they replaced sodium outside the cell, outside the axon, with another ion. They kept the chloride, but they replaced the sodium with choline. So they used choline chloride, so it's a salt solution, but it has no sodium in it. And then they redid that experiment. And here's what they found. What they found was-- so here it is-- with sodium, and if they replace the sodium, they find that they get almost the same thing, except that initial negative pulse goes away. And so they hypothesized that that part is due to sodium. And so now you just subtract this from this to get the difference. And that is the sodium current. Does that make sense? And then one other thing. Through another set of experiments, they were able to show that this part that's left after you block or remove sodium is actually due to potassium. So this is the potassium current. That is a sodium current. So by doing different kinds of experiments-- one of the things that they didn't do initially, but were able to do later, was to take that little piece of axon, take a little miniature paint roller, roll the roller over the axon, squish its guts out, and then fill it up again with solutions that they control that have different ions in them. And so they're able to study-- just do multiple different kinds of experiments to be sure that this slow thing that turns on, this slow positive current is potassium, and this fast negative current is sodium. All right? So now what you can do is you can do this experiment at different voltages here, right? We want to measure how these currents depend on voltage, right? We can see here how they depend on time.
We can also see how they depend on voltage, by doing this experiment at different voltages. You start at some negative potential. You step the voltage up to minus 40, or you step it up to zero, or you step it up to 40. And now you can measure that potassium current as a function of time, or the sodium current as a function of time. All right? At different voltages. All right, so those look kind of weird, especially that one. That looks kind of scary, like what the heck is going on there? But it turns out that both of these things are actually pretty simple. Once we dig a little bit more into how these currents are produced, you're going to see that there's a very simple way to understand what's happening there. All right, any questions? Bear with me. So now what we want to do is we want to measure the voltage dependence of these things kind of separately from the time dependence. So what we're going to do is we're going to measure the peak potassium current, kind of this steady state potassium current as a function of voltage. So here's our IV curve that I promised that we were going to plot. Peak current as a function of voltage. You can see that it's approximately linear above minus 50 or so millivolts. The sodium current looks kind of weird. We're going to plot the peak sodium current as a function of voltage. And we see that the peak sodium current has this weird shape. It's sort of linear up here at positive voltages, and then it crashes down to zero at negative voltages. All right? Still kind of weird and scary. So let's see if we can understand where this comes from. I just replotted them here. So now, remember that we use this voltage clamp to measure current as a function of voltage. But what is it that we're really trying to understand? We're really trying to understand the conductances, those resistors, right? We're really trying to understand the voltage and time dependence of those conductances.
So we're trying to extract conductance as a function of voltage. And remember that the potassium current is just conductance times the driving potential for potassium. Sodium current is just sodium conductance times the sodium driving potential. So you could imagine extracting the conductance as current divided by the driving potential. This is what we're really trying to find. But we don't want to actually do this division, because the driving potential goes to zero at the place where the voltage is equal to the equilibrium potential. So we're going to solve this problem graphically. So here's the driving potential, right, V minus Ek. It goes to zero at Ek, right? And we want to find a conductance that makes this look like this. So if this is kind of a straight line, if the current as a function of voltage is kind of a straight line and this is a straight line, what does that tell us about the conductance in this region up here? It's constant. Excellent. Now, if the driving potential is very negative here but the current is zero, then what's the conductance? Driving potential is big and negative. Current is zero. What does that tell us about the conductance? It's zero. OK, so not so hard, right? We have conductance zero here and constant here. So can anyone just show me what that might look like? Good. It could be like a jump. It could be kind of smooth. And that's exactly what it looks like. So the conductance is zero here, which it has to be, because the current is zero, even though the driving potential is negative. And the conductance here is constant: the driving potential is linear, and the current is linear, so this needs to be constant, all right? So the conductance is very simple. It turns off at negative voltages and turns on and then stays on at higher voltages. Let's do that for sodium. This thing looks crazy, weird, right? But let's go through the same operation.
Here's our driving potential for sodium. Remember, it's got a reversal at plus 55. But it's a straight line, right? That's a battery and a resistor. So there's our driving potential. Now, this is linear. This is linear. So what does the conductance look like up here? Good. This is zero, but this is big and negative, so what does the conductance look like down here? Good. Starting to look pretty familiar, right? Boom. It's exactly the same. Both of these conductances are off at negative potentials, and they turn on at positive voltages and remain constant. Yes? AUDIENCE: [INAUDIBLE]. MICHALE FEE: Great. Great question. Because the sodium conductance turns on, and then it shuts off right away. And the shutting off is a different mechanism. So we're trying to figure out how to ignore that and just understand the voltage dependence of how it turns on. Does that make sense? And we'll get into that next Tuesday. But I'm showing both of these at the same time because they look similar at the level of the voltage dependence. In fact, the way they turn on is very similar. It's just that the sodium has some other weird thing that shuts it off after a few milliseconds, and we'll talk about that next time. Yes? AUDIENCE: [INAUDIBLE]. MICHALE FEE: No, it's not. But any non-linearity here, we're going to account for by the voltage dependence of the conductance. So great point. It's a subtlety that we have kind of imposed by this way of writing down the voltage dependence of the current. Any other questions? No? OK, pushing on. So this kind of gradual turning on, this sort of zero conductance down here and a constant conductance up here, that's called a sigmoidal voltage dependence. And it's the voltage dependence of activation. It's how these channels turn on. And as I said before, we're going to deal with the other properties of the sodium channel that turn it off later. So both have a sigmoidal voltage dependence of activation.
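The extraction can be mimicked numerically. Below I generate a fake peak-current curve from a known sigmoidal conductance (the maximal conductance, midpoint, and slope are invented), then recover the conductance by dividing by the driving potential everywhere except near the reversal potential, where both current and driving potential go to zero and the division blows up. That singularity is exactly why the lecture reads the conductance off the graph instead.

```python
import numpy as np

EK = -75.0                        # mV, potassium battery from the lecture
V = np.linspace(-100, 60, 321)    # command voltages, mV

# A made-up "true" sigmoidal conductance, just for illustration
g_true = 10.0 / (1 + np.exp(-(V + 30) / 8))   # mS/cm^2
I_peak = g_true * (V - EK)                    # the "measured" I-V curve

# Recover g = I / (V - EK), masking points too close to the reversal
mask = np.abs(V - EK) > 5.0
g_est = np.full_like(V, np.nan)
g_est[mask] = I_peak[mask] / (V[mask] - EK)
# Away from EK, g_est reproduces g_true: zero at negative voltages,
# turning on sigmoidally and saturating at positive voltages.
```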
And if you plot this, you can see that-- if you plot this on a log scale here-- log scale on conductance, linear here on potential. You can see that both of these curves, both the potassium and the sodium, have this very characteristic exponential turn-on followed by its saturation and constant conductance at higher voltages. All right, any questions? That's voltage dependence. Now we're going to turn to time dependence. So you can see that the time dependence-- so this driving potential, that's just constant. That just depends on the voltage and the reversal potential. And so we can separate things out-- any time dependence this has is due to the time dependence of the voltage. But in our voltage clamp experiments, the voltage is constant. So this thing is constant. So any time dependence of the current has to be due to time dependence of the conductance, right? And what that means is we can just look at the shape of this potassium current-- remember, this is the potassium current here. That time dependence is just due to time dependence of the conductance. Does that make sense? So what's happening is the potassium conductance is starting at zero. The moment you step the voltage up, that thing begins to grow gradually and then runs up to a constant potassium conductance in time. It starts off, ramps up, and then becomes constant, all right? So that's the time dependence. Sort of gracefully turns on. That process of turning on is called activation. The sodium conductance-- or current, the same thing. The sodium current as a function of time is just the sodium conductance as a function of time, times a constant. In our voltage clamp experiment, again, this voltage is constant. So the sodium conductance turns on. That's activation. But then it turns off, and that's called inactivation.
So the sodium conductance has two things going on-- one, activation, and the second is inactivation. And it turns out these are two separate biophysical mechanisms. And we're going to spend more time on this next week. So, notice something interesting. The sodium conductance turns on. You depolarize the cell. Sodium conductance turns on right away and then shuts off. The potassium conductance has a delay, and then it turns on. Does that look familiar? It looks an awful lot like this, right? Here's the sodium conductance. Turns on and then shuts off. And then the potassium conductance turns on with a delay. And that gives us an action potential. So you can see that when you use voltage clamp and dissect out the time dependence of the sodium and potassium conductances, it looks just like the thing we concocted earlier, just sort of our toy example for how to make an action potential. Pretty cool, right? OK, it's starting to come together piece by piece. So we're now going to dig in a little bit deeper into the biophysics of how you get these voltage and time dependencies. So we're going to derive the equation for the voltage dependence. Anybody want to take a crazy guess on how we're going to do that? Just a wild guess, how you might derive the voltage dependence of something? No? OK. We're going to use the Boltzmann equation. And we're going to derive the equations that describe the way those channels turn on, how those conductances turn on. All right? And once we do that, we're going to have a simple set of equations-- and not just equations. We're going to have a set of processes that we can think of as happening in a loop, in a for loop. That's our algorithm for an action potential. All right, so let's dive into single channels and see how they work. So, of course, currents result from ionic flow through ion channels. It's actually possible to record currents from single ion channels.
We can actually make a version of our voltage clamp that we can attach to a single ion channel. And the way you do that is-- so when you take this piece of glass and you pull it, instead of poking it through the cell, instead of making it really sharp and poking it through the cell, what you do is you make it a little bit blunter, so it's got kind of a rough end. And then you can fire-polish it-- you can hold the end of that electrode into a flame. Not quite a flame, actually. It's usually a filament that heats up hot. You hold the end of the electrode near this hot filament and it melts the tip into a nice, round-- it's still a tube, but the edges of the tube are nice and smooth. And now when you take that tube and you press it up against the cell-- actually, you attach a little plastic tube to the end of the glass, and you press that electrode up against the cell. And you actually literally suck on it with your mouth onto that tube. And it sucks the membrane up against that smooth end of the electrode. And it sticks. The lipids of the membrane actually seal themselves onto the end of the glass. So now no currents can flow out through these edges here, all right? And then you hook it up to a very sensitive current amplifier. And now you can control the voltage. You can actually just rip that off of the cell, so now there's no more cell. You just have an ion channel sitting there on a piece of membrane on the end of your glass. Now you can do a voltage clamp experiment and study the current-- the voltage dependence of the current through that ion channel. So here's what this looks like. Here's one experiment. We're going to start at minus 100. This is a potassium channel. You depolarize the potassium channel up to 50 millivolts, and you see that that current, through that single channel, starts flickering on and off. Here's another trial. Turns on, turns off, turns on, turns off. You can do that a bunch of times. You can see something interesting.
The current is either on or off-- it doesn't turn on gradually, doesn't change smoothly. It just flickers between on and off. That's a very important aspect of ion channels. But if you average all those trials together, you see that you get an average current that looks just like the current that Hodgkin-Huxley measured in the whole axon. How is that possible? AUDIENCE: [INAUDIBLE]. MICHALE FEE: Yeah. Good. So, basically, what we're doing is we're measuring one ion channel many times. But on a cell, you're measuring a bunch of ion channels, each of which is doing something like this. But they're happening all at the same time, and the current is being averaged. So here, we're averaging the current one at a time. And on a whole cell, we're just averaging a bunch of them at once. It's called ergodicity in physics. It's called the ensemble average. OK, you can do the same thing for sodium. You take your patch, a new patch electrode. Fire polish it. Push it up to a cell. Apply some suction. Glues on. This time, we had a sodium channel. And now you can see that the thing, again, flickers on, flickers off, flickers on, flickers off. But now, they all flicker on right at the beginning, and then they flicker off and stay off. And if you average all those different trials, you see an ensemble average sodium current that looks just like when you measure the sodium current on a whole axon, OK? But the key thing is that these channels have two states-- on and off-- and they flicker back and forth between those two states, conducting and non-conducting. So we can now write down-- we could start working with this idea that our ion channels are either open or closed. And we can think of a probability that the channel is open. And we can have a total number of channels. The number of open channels is just the number of channels you have times the probability that any one of them is open.
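The flickering and trial averaging can be simulated with a toy two-state channel. The opening and closing rates here are invented numbers; the point is only that averaging many all-or-none trials reproduces a smooth macroscopic time course that relaxes toward the steady-state open probability alpha / (alpha + beta).

```python
import numpy as np

def ensemble_average(alpha=0.5, beta=0.125, T=40.0, dt=0.1,
                     n_trials=1000, seed=0):
    """Simulate a two-state channel (closed <-> open) after a voltage step.
    alpha, beta are opening/closing rates per ms (made-up values)."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    open_frac = np.zeros(steps)
    for _ in range(n_trials):
        state = 0                        # start closed, as after a step from rest
        for t in range(steps):
            r = rng.random()
            if state == 0 and r < alpha * dt:
                state = 1                # flicker open
            elif state == 1 and r < beta * dt:
                state = 0                # flicker closed
            open_frac[t] += state
    return open_frac / n_trials

avg = ensemble_average()
# Each trial is an on/off square wave, but the average climbs smoothly
# toward alpha / (alpha + beta) = 0.8.
```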
If g is the conductance of one open channel, then we can write down the total potassium conductance as the probability that any given ion channel is open times the number of channels times the conductance of one open channel. Does that make sense? And now the claim here is that all of the interesting voltage and time dependence of these channels happens here. Obviously, the number of them isn't changing very rapidly. The conductance per channel is constant, per open channel. So the interesting stuff is in the probability that the channel is open. And if we want to get the current, we're just going to plug this conductance into here, OK? All right, so let's start with a potassium channel. Let's dig in a little bit deeper into what the potassium channel looks like. Potassium channel is formed by four identical subunits. They're produced separately by ribosomes. They form a homomer, a tetramer. And that tetramer has a hole that runs down the middle of it, which is where the ions flow. Each of these subunits has a voltage sensor that allows it to turn on and off. In order for the channel to be on, all four of those subunits have to be open. So each subunit has an open state and a closed state. And for the channel to be open, all four of them have to be in the open state. So if n is the probability that any one subunit is open-- I meant to make you guys answer this question before I showed the answer. But is it clear how if any one subunit has a probability of being open of n, then the probability that the whole channel is open is n to the four? This n is called a gating variable. I would like you to know that the probability that a potassium channel is open is n to the four. That's an important thing for you to remember. That assumes that those four subunits are independent. And in potassium channels, that's a very good approximation.
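The n-to-the-fourth claim is easy to check numerically: if each of four independent subunits is open with probability n, the fraction of channels with all four subunits open converges to n to the fourth. The simulation below assumes independence, as the lecture does.

```python
import numpy as np

def open_fraction(n=0.7, n_channels=200_000, seed=1):
    """Fraction of simulated channels whose four independent subunits
    are all open, each subunit open with probability n."""
    rng = np.random.default_rng(seed)
    subunits_open = rng.random((n_channels, 4)) < n   # each subunit open w.p. n
    channel_open = subunits_open.all(axis=1)          # channel needs all four
    return channel_open.mean()

p = open_fraction(n=0.7)
# p comes out close to 0.7**4 = 0.2401
```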
So we can now write down the conductance of our potassium channel-- something times n to the four, where that something is the number of channels times the conductance of one open channel. So we can now write down the current as the open conductance times n to the four times the driving potential. And that n is called the gating variable for the potassium conductance. All right, any questions? No? Yes? AUDIENCE: [INAUDIBLE]. MICHALE FEE: n absolutely does depend on voltage. Very good. That's where we're going next. But before we go on to that, I wanted to add one other thing, which I think is really cool. We're going to do the voltage dependence of a potassium channel using the Boltzmann equation. So here's the way you think about a potassium channel working. Here's a potassium channel. We're showing a cross-section. Here it is. Here's the membrane, the lipid bilayer. Here's our potassium channel, sitting in the membrane. And we're taking a cross-section through that tetramer that shows two subunits. And I'm showing the voltage sensor-- I'm showing the mechanism that opens and closes one of those subunits. This subunit will also have a voltage sensor and a gate that looks the same. So look, the voltage sensor-- how do you sense voltage? You sense voltage with charge, right? Voltage differences, I should say, you sense with a charge. Because voltage gradients are electric fields, and electric fields push on charges. So if we want to detect the voltage difference across this membrane, we put a charge in the membrane. When the voltage difference is zero, there's little force on those charges. Now, if we suddenly hyperpolarize the cell so it's very negative, now there's an electric field inside the membrane that points toward the inside of the cell, which pushes those charges toward the inside of the cell. And now you can just have a little mechanical linkage.
That's not really what it looks like, but there's some way that the amino acids and the protein are configured so that when those charges get pushed on, it closes a gate. And now the current can no longer flow through the ion channel. OK, so now we're going to derive how this-- we're going to see how to derive this voltage dependence from the Boltzmann equation. All right, so, everybody, this is just for fun. I don't expect you to know how to do this. I just want you to see it, because I personally get chills when I see this. It's really cool. But I'm not expecting you to be able to reproduce it, OK? So just watch. So, again, the Boltzmann equation says that the probability of being in two states, open or closed, depends on the energy difference between them. So we have an open state and a closed state. And when the voltage inside the cell is zero-- when the voltage difference between the inside and the outside of the cell is zero, we know that the channel likes to be open. So what that means is that the open state has a lower energy than the closed state, right? The channel likes to open when the cell is depolarized. That means the voltage inside and outside are close to each other, right? Open state has a lower energy than the closed state. Let's call that energy difference delta u. And it's close to kT, because when it's open, the channel kind of flickers back and forth between open and closed. Does that make sense? Now let's put on-- let's hyperpolarize the inside of our cell. So now the voltage inside is low. There's a voltage gradient, an electric field that is trying to push those charges in. Now, you can see that those charges here are sitting at a lower voltage. So in the closed state, those charges are down here at a lower potential. What does that mean for the energy of the closed state when the cell is hyperpolarized? It's lower.
The energy of the closed state is low because those charges are toward the inside of the cell, and the voltage is low. Now, what happens if the cell is hyperpolarized, but it happens to be in the open state? You can see those charges are closer this way. So you can see that the energy of-- these charges are still sitting in a voltage that's lower than outside. So that open state has a slightly lower energy. But you can see that the closed state still has a much lower energy than the open state, OK? And we can write down that energy difference as a gating charge times this voltage difference. So now let's just take-- here is an open state. It has an energy difference of a little amount, w. Open state is lower than closed by an amount w. When the voltage inside the cell is low, we've decreased the energy of the closed state by this amount-- gating charge times membrane potential. And now we have the energy difference between the open state and the closed state as a function of voltage. We have a simple equation that describes the energy difference between the open and closed state as a function of the membrane potential. And now we can just plug that into the Boltzmann equation and derive the probability of being open and closed. So we just plug that delta u into here, w minus gating charge times voltage. Now let's calculate the probability of being open. This gives us the ratio of open to closed. How do we calculate the probability of being open? Well, n is the probability of being open. That's just the probability of being open divided by open plus closed. What's the probability of being open plus the probability of being closed? Well, if it's in one or the other, then the sum of those has to be one, OK? And now divide both top and bottom by p open. The probability of being open is just 1 over 1 plus p closed over p open, which is just the inverse of this. And that's equal to that.
All right, that may have gone by a little bit too fast. And I wasn't very smooth on that. But you can see the idea, right? It's estimating how the energy difference between the open and closed state depends on the voltage of the cell, and it's just an energy difference. So it has to be a charge times a voltage, yeah? And that's right [AUDIO OUT] charge times a voltage. And now we're just doing a little bit of algebra to extract the probability of open from open divided by closed. And now if we just plug that into there, we get this. All right? So now let's see how that compares to the actual answer. Probability of open is just 1 over 1 plus this exponential. Here's what that data looked like. Remember, that was the data for the conductance as a function of voltage. Here's a fit to a functional form that looks like that. And here is the prediction from Boltzmann. You can see that it almost exactly fits. And you can actually extract, biophysically, what the gating charge is inside this tiny, little protein simply by fitting this to the data. Pretty cool, right? Yes? AUDIENCE: What is w? MICHALE FEE: It's the energy difference between the open and closed state when the voltage is zero. So you kind of have to fit that, too. If the voltage is zero, it's the energy difference between the open and closed state when the voltage is zero. And then you subtract from that the energy of the gating charge as a function of voltage inside the cell, OK? Yes? AUDIENCE: So is that the [INAUDIBLE]. MICHALE FEE: Yes, each has the sensor, and they all have to be open for the ion channel to be open. Yes? AUDIENCE: Do you not need to put it to the power of four? MICHALE FEE: No, because this is the probability that one subunit is open. But that's a good point. If you want to compare that to the-- so you're right. If you want to compare that to the conductance of the whole channel, then it has to be raised to the power of four. And that's been accounted for here. Good question. 
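The sigmoid that comes out of this derivation is easy to check numerically. Here is a minimal sketch using the lecture's form of the exponent, w minus gating charge times voltage, over kT; the particular values of w, q, and kT are made-up illustrative numbers, not fitted ones:

```python
import math

def p_open(V, w=2.0, q=4.0, kT=1.0):
    """Boltzmann open probability for one subunit.

    The exponent is (w - q*V)/kT: w is the open/closed energy offset
    at zero voltage, q is the gating charge. The result rises
    sigmoidally with the voltage V.
    """
    return 1.0 / (1.0 + math.exp((w - q * V) / kT))

print(p_open(-10.0))   # ~0: closed when strongly hyperpolarized
print(p_open(0.5))     # 0.5: half-activation where w - q*V = 0, i.e. V = w/q
print(p_open(10.0))    # ~1: open when strongly depolarized
```

Per the question above, this is the probability that one subunit is open; to compare with the whole channel's conductance you would raise it to the fourth power.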
Any other questions? The Boltzmann equation is pretty cool. If you know the mass of a nitrogen molecule and the acceleration due to gravity, what can you calculate with the Boltzmann equation? Any idea? The mass of a nitrogen molecule and the acceleration due to gravity. AUDIENCE: Pressure of nitrogen? The partial pressure of nitrogen? MICHALE FEE: Close. You can calculate the height of the atmosphere. You can do all kinds of really cool stuff with the Boltzmann equation. OK, there was another question here. No? So you can extract, actually, these quantities-- the gating charge and this energy difference in the zero voltage state. And the fit is very good. OK, so that's voltage dependence. I highlighted these slides that I don't expect you to be able to reproduce in blue, just to make it more clear for your review what you have to focus on. OK, let's look at the time dependence. The time dependence is pretty simple. It's going to just involve a linear first order differential equation. You guys are all super experts on that now, right? So we have an ion channel-- sorry, a subunit that's either open or closed, right? We have an open state, closed state. What we're going to do is-- so the way to think about this is the ion channel, the subunit, if the cell is polarized, is sitting in the closed state, right? When you depolarize the neuron, that changes the energy levels. [AUDIO OUT] Which way was it? I forget. The closed state has a lower energy. Now, when you depolarize the cell, the closed state suddenly has a much higher energy, and it's close to the open state. And so, at some point, that subunit will jump over to the open state, right? But that takes time. You change the energy levels, but it takes time for the system to jump into the open state. Why is that? Because that transition is caused by thermal fluctuations. And so you have to wait for one of those fluctuations to kick you over into the open state. 
So we're going to model those transitions between open and closed states with a simple rate equation that's voltage dependent. We have an open state, and we imagine that n is the probability of being in the open state. And we can equivalently think of it as if we have a population of subunits. And let's think of it more as the fraction. It's also equivalent to-- just whichever way you want to think about it, either works well. But you can also think of it as the probability of being in the open state, or the number of subunits that are in the open state in a population. You also have a closed state. So if n is the probability of being in the open state, [AUDIO OUT] of a closed state with probability 1 minus n, right? If you're in the closed state, you have some transition rate, probability per unit time, of going from the closed state to the open state. And if you're in the open state, you have some probability per unit time beta of going into the closed state. So those things have units of per second, probability per second. Yes? Rebecca, right? AUDIENCE: Yeah. What's the cause for the fluctuations? Just regular [INAUDIBLE]? MICHALE FEE: Just warmth. And these things are voltage dependent, remember? Those depend on the energy difference between that open and closed state. All right, let's develop our first order linear equation. It's going to be very simple. We have a closed state, an open state. The change in the number of open states is just going to be the number of closed states, the number of closed subunits that open, minus the number of open subunits that close. That makes sense? All right, that's simple enough. 
The change in the number of open subunits per unit time is going to be the number of closed subunits that there are times the probability that a closed subunit opens per unit time-- that's alpha-- minus-- remember, the number of closed subunits that open is the number of closed subunits times the probability per unit time that a closed subunit opens, all right? And the number of open subunits that close is just the number of open subunits times the probability that any one of them closes per unit time. Does that make sense? A lot of words, but the equation ends up being very simple. The change per unit time of n is just the number of closed subunits, one [AUDIO OUT], times the probability that those open per unit time alpha minus beta times n. Alpha times 1 minus n minus beta times n. Any questions about that? Alpha, beta are voltage dependent. So I've rewritten that equation. n is the probability that a subunit is open. Let's just rewrite this. Let's expand this. Alpha minus alpha times n minus beta times n. Factor out the n. So you have dn dt equals alpha minus alpha plus beta times n. Multiply both sides by 1 over alpha plus beta. What's the steady state of this-- the steady state solution of this equation? That is the steady state solution, right? If you set dn dt equal to zero, then n is equal to that. [AUDIO OUT] just n infinity. And what's that? Alpha and beta have units of per unit time. So what is one-- what units do 1 over alpha plus beta have? Time. So what might that be? AUDIENCE: Tau. MICHALE FEE: Tau. It's a time constant. So, after all of this, what we end up with is an equation that looks exactly like what we had for-- we have a first order linear differential equation exactly the same form as the equation we used to understand the way the voltage changes in a cell in response to current injection. So if we change n infinity, what is this thing going to do? What is n going to do? It's going to relax [AUDIO OUT] n infinity with a time constant tau. 
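As a sanity check on that algebra, here is a small Euler integration of dn/dt = alpha(1 - n) - beta n with alpha and beta held fixed, as they would be under voltage clamp; the rate values are arbitrary illustrative numbers. The simulated n should settle at n infinity = alpha/(alpha + beta):

```python
alpha, beta = 0.2, 0.05              # transition rates, per ms (illustrative)
n_inf = alpha / (alpha + beta)       # steady-state open probability = 0.8
tau = 1.0 / (alpha + beta)           # time constant = 4 ms

dt, n = 0.01, 0.0                    # start with all subunits closed
for _ in range(int(50.0 / dt)):      # 50 ms is many time constants
    # dn/dt = (closed fraction that opens) - (open fraction that closes)
    n += dt * (alpha * (1.0 - n) - beta * n)

print(round(n, 4), round(n_inf, 4))  # → 0.8 0.8
```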
In all of these things, the tau is tau sub n, because it's for the n gating variable. So that's why this has an n here. So n infinity and tau are voltage dependent, because they come from alpha and beta, which are voltage dependent. But we actually just derived the steady state voltage dependence of the potassium conductance, right, from the Boltzmann equation? What is n infinity for very negative voltages? Do you remember it, just approximately? Big, small, [AUDIO OUT]? What is the steady state? What's the probability that a potassium channel is open, that a subunit is open, at very negative voltages? Do you remember? Zero. It's off. For big voltage, n infinity has to be-- if we think of it as a probability, it's close to one. So n infinity goes from zero at negative voltages, sigmoidal activation up to one at high voltages. OK, so now let's look at how n changes as a function of time. So here's our [AUDIO OUT] potential. We're going to do a voltage clamp experiment. We're going to start at minus 80 millivolts and step up to zero. So what is n infinity going to do? n infinity is just a function of voltage, right? It's like those energy levels. They change immediately. So what is n infinity going to do? Good. It's going to start at close to zero, jump up to one, and then jump back to close to zero immediately following the voltage. But what is n going to do? Now let's plot n. n is going to start at zero. When you step the voltage up, n infinity will jump up, and n will relax exponentially to a high n infinity close to one, right? And then when we turn back, n infinity jumps back down to zero, and n relaxes exponentially back. Yes? AUDIENCE: On the n [INAUDIBLE], that's not relaxing [INAUDIBLE] to one. It's to n infinity, right, at the top? MICHALE FEE: Yes, but for a voltage around zero, n infinity is going to be close to one. Any questions? That is called activation. That's the activation. Now, this looks a little funny, right? This thing is turning on immediately. 
It doesn't have that nice sigmoidal shape that the potassium current had, or the potassium conductance. Why is that? What are we looking at here? We're plotting n. What is the potassium conductance or the current? How does that relate to n? n to the fourth. So what do we do to plot the potassium current or the potassium conductance? We just take this [AUDIO OUT], right? So what does that look like? This process of turning on-- the gating variable getting bigger-- is called activation. The gating variable getting smaller, n getting smaller, is called deactivation. So now let's plot this to the fourth. So this gating variable turns on gradually, but the conductance is proportional to n to the fourth. So let's plot that. So if we plot n to the fourth, you can see that that function now turns on smoothly in time. This is time now, right? So that gating variable n relaxes exponentially, but the conductance goes as the gating variable to the fourth. And so it has this nice, graceful turn-on, right? Because it's the gating [AUDIO OUT] exponential to the fourth looks exactly like this. In fact, that's how Hodgkin and Huxley figured out that it's n to the fourth, because they knew that if they assumed that it's an exponentially decaying gating variable, the only way they could fit the turn-on of the conductance was by raising it to the fourth power. If they raised it to the second power, it was still too sudden. If they raised it to the third power, it was still not quite right. But if they raised it to the fifth power, oops, it's too delayed. If they raised it to the fourth power, it exactly fits the shape of the conductance turning on. And so they inferred-- they didn't know about subunits. They just had a piece of axon, a piece of squid lying on a table in front of them. And they were able to figure out that there were four independent processes that turn on the potassium conductance. Pretty cool, right? That's what you get by doing things quantitatively. 
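You can see that graceful, delayed onset directly from the exponential solution. In this sketch n(t) relaxes from 0 toward n infinity = 1 with an arbitrary time constant, and raising it to the fourth power suppresses the early part of the rise:

```python
import math

tau = 1.0                                 # time constant (arbitrary units)
n = lambda t: 1.0 - math.exp(-t / tau)    # gating variable after a step, n_inf = 1

# Early in the voltage step, n itself has already risen appreciably,
# but n^4 is still tiny -- that's the sigmoidal turn-on of g_K:
t = 0.2 * tau
print(round(n(t), 3), round(n(t) ** 4, 4))   # → 0.181 0.0011
```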
So they could [AUDIO OUT] the shape of that potassium conductance turning on by this exponential gating variable raised to the fourth power. And from that, they were able to infer that it's four independent first order processes that combine to produce that activation. OK, the offset also fits if you raise it to the fourth power. They were able to measure the size of the potassium conductance to measure n infinity directly. So we derived it using the Boltzmann equation, but they measured it directly just by the size of the conductance. And you can also measure the time course. You don't need to worry about this. I'm not expecting you to know this. I'm just showing you just for fun. And you can extract these tau's just by measuring this exponential decay at different voltages, or measuring the inferred first order process. You can infer the time constant of the first order process on the onset and the offset to extract these tau's as a function of voltage. This is tau as a function of voltage. From these two quantities, you can actually extract alpha and beta. And so you can write down a simple algebraic expression for alpha and beta. And that's the way they actually wrote those things down. They wrote them down as alpha and beta, rather than n infinity and tau. Those are simple expressions for alpha and beta in units of per millisecond as a function of voltage in units of millivolts. I think-- yeah, it's millivolts. There it is right there. So you can actually just take those parameters and calculate n infinity and tau n and calculate the gating variable from that using that differential equation. So we have these nice expressions for what the steady state, n infinity, and tau n are. Now, why did we-- yes? AUDIENCE: [INAUDIBLE]? MICHALE FEE: Yes, they're per unit time. Yeah, so they have units of per millisecond. OK, so now let's come back to our picture. We have n infinity and tau n as a function of voltage. 
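For concreteness, here is one commonly used parameterization of alpha n and beta n, in per-millisecond units with voltage in millivolts, on the convention where rest sits near -65 mV; the lecture's exact constants may differ from these. From alpha and beta you get n infinity and tau n at any voltage:

```python
import math

def alpha_n(V):
    # Note the removable 0/0 singularity at exactly V = -55 mV;
    # avoid that exact value (or take the limit, 0.1) in real code.
    return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))

def beta_n(V):
    return 0.125 * math.exp(-(V + 65.0) / 80.0)

def n_inf(V):
    return alpha_n(V) / (alpha_n(V) + beta_n(V))

def tau_n(V):
    return 1.0 / (alpha_n(V) + beta_n(V))

# n_infinity rises sigmoidally from ~0 at hyperpolarized voltages toward 1:
print(round(n_inf(-100.0), 3), round(n_inf(-65.0), 3), round(n_inf(20.0), 3))
```

With these constants, n infinity at rest comes out near 0.32, and tau n is several milliseconds near rest but much shorter at depolarized voltages.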
Now we can just plug those into this differential equation and solve for n. Well, we already know what that does. n relaxes exponentially toward n infinity with a time constant tau. But you can integrate that numerically. You get the potassium conductance as n to the fourth, g times n to the fourth. You get the potassium current as g n to the fourth times the driving potential, or V minus Ek. And now let's come back to our algorithm for making an action potential. So we have the parts related to the potassium current. We still have to add the parts related to sodium, but it's going to look very similar. So here's the idea. We start with the membrane potential at time step t. We compute n infinity and tau n. We integrate dn dt one time step to get the next n. Plug n into our equation to get the potassium current. We then add that to all the other currents to get the total membrane current. We compute V infinity of the cell. We integrate dv dt one time step to get the next voltage. And you plug that in and calculate the next n infinity, all right? So we still have to add the sodium parts, but you can see we've gone through all of these steps for the potassium. And so we're just shy of having a full-blown algorithm for [AUDIO OUT] an action potential in a neuron. And not only do you understand all the little steps, but you understand the fundamental biophysics that leads to that voltage and time dependence. All right, so, again, what I'd like you to be able to do is to draw that circuit, the Hodgkin-Huxley model. I'd like you to be able to explain, at a basic level, what a voltage clamp is and how it works. I'd like you to be able to plot the voltage and time dependence of the potassium current-- remember, this sigmoidal activation of the potassium current-- and the conductance, voltage and time dependence. And be able to explain the time and voltage dependence of the potassium conductance in terms of the Hodgkin-Huxley gating variables. OK?
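The whole potassium-side algorithm fits in a few lines. This sketch uses hypothetical leak and potassium parameters (textbook-style conductances, reversal potentials, and rate functions, not necessarily the lecture's exact numbers) and steps the two coupled equations forward with Euler's method:

```python
import math

# Illustrative membrane parameters (hypothetical values):
C = 1.0                    # capacitance, uF/cm^2
g_L, E_L = 0.3, -54.4      # leak conductance (mS/cm^2) and reversal (mV)
g_K, E_K = 36.0, -77.0     # max potassium conductance and reversal
I_ext = 5.0                # injected current, uA/cm^2

def alpha_n(V):
    return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))

def beta_n(V):
    return 0.125 * math.exp(-(V + 65.0) / 80.0)

dt, V, n = 0.01, -65.0, 0.318      # start at rest, n near its resting value
for _ in range(int(100.0 / dt)):   # simulate 100 ms
    a, b = alpha_n(V), beta_n(V)
    n_inf, tau_n = a / (a + b), 1.0 / (a + b)
    n += dt * (n_inf - n) / tau_n             # 1) integrate the gating variable
    I_K = g_K * n**4 * (V - E_K)              # 2) potassium current
    I_L = g_L * (V - E_L)                     # 3) leak current
    V += dt * (I_ext - I_K - I_L) / C         # 4) integrate the membrane voltage

print(round(V, 1))   # settles a few millivolts above rest
```

Adding sodium means carrying two more gating variables (m and h) through exactly the same steps.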
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 16: Basis Sets
MICHALE FEE: OK, let's go ahead and get started. All right, so today, we're going to continue talking about feed-forward neural networks, and we're going to keep working on some interesting aspects of linear algebra-- matrix transformations. We're going to introduce a new idea from linear algebra, the idea of basis sets. We're going to describe some interesting and important properties of basis sets, such as linear independence. And then we're going to end with just a very simple formulation of how to change between different basis sets. So let me explain a little bit more, motivate a little bit more why we're doing these things. So as people, as animals, looking out at the world, we are looking at high-dimensional data. We have hundreds of millions of photoreceptors in our retina. Those data get compressed down into about a million nerve fibers that go through our optic nerve up to our brain. So it's a very high-dimensional data set. And then our brain unpacks that data and tries to make sense of it. And it does that by passing that data through layers of neural circuits that make transformations. And we've talked about how in going from one layer of neurons to another layer of neurons, there's a feed-forward projection that essentially does what looks like a matrix multiplication, OK? So that's one of the reasons why we're trying to understand what matrix multiplications do. Now, we talked about some of the matrix transformations that you can see when you do a matrix multiplication. And one of those was a rotation. Matrix multiplications can implement rotations. And rotations are very important for visualizing high-dimensional data. So this is from a website at Google research, where they've implemented different viewers for high-dimensional data, ways of taking high-dimensional data and reducing the dimensionality and then visualizing what that data looks like. 
And one of the most important ways that you visualize high-dimensional data is by rotating it and looking at it from different angles. And what you're doing when you do that is you take this high-dimensional data, you rotate it, and you project it into a plane, which is what you're seeing on the screen. And you can see that you get a lot out of looking at different projections and different rotations of data sets. Also, when you're zooming in on the data, that's another matrix transformation. You can stretch and compress and do all sorts of different things to data. Now, one of the cool things is that when we study the brain to try to figure out how it does this really cool process of rotating data through its transformations that are produced by neural networks, we record from lots of neurons. There's technology now where you can image from thousands, or even tens of thousands, of neurons simultaneously. And again, it's this really high-dimensional data set that we're looking at to try to figure out how the brain works. And so in order to analyze those data, we try to build programs or machines that act like the brain in order to understand the data that we collect from the brain. It's really cool. So it's kind of fun. As neuroscientists, we're trying to build a brain to analyze the data that we collect from the brain. All right, so the cool thing is that the math that we're looking at right now and the kinds of neural networks that we're looking at right now are exactly the kinds of math and neural networks that you use to explain the brain and to look at data in very powerful ways, all right? So that's what we're trying to do. So let's start by coming back to our two-layer feed-forward network and looking in a little bit more detail about what it does. OK, so I introduced the idea, this two-layer feed-forward network. We have an input layer that has a vector of firing rates, a firing rate that describes each of those input neurons, a vector of firing rates. 
And an output layer that also has a vector of firing rates. That, again, is a list of numbers that describes the firing rate of each neuron in the output layer. And the connections between these two layers are a bunch of synapses, synaptic weights, that we can use to transform the firing rates at the input layer into the firing rates at the output layer. So let's look in a little bit more detail now at what that collection of weights looks like. So we describe it as a matrix. That's called the weight matrix. The matrix has in it a number for the weight from each of the input neurons to each of the output neurons. The rows are a vector of weights onto each of the output neurons. And we'll see in a couple of slides that the columns are the set of weights from each input neuron to all the output neurons. A row of this weight matrix is a vector of weights onto one of the output neurons. All right, so we can compute the firing rates of the neurons in our output layer for the case of linear neurons in the output layer simply as a matrix product of this weight matrix times the vector of input firing rates. And that matrix multiplication gives us a vector that describes the firing rates of the output layer. So let me just go through what that looks like. If we define a column vector of firing rates of each of the output neurons, we can write that as the weight matrix times the column vector of the firing rates of the input layer. We can calculate the firing rate of the first neuron in the output layer as the dot product of that row of the weight matrix with that vector of firing rates, OK? And that gives us the firing rate. v1 is then the a equals 1 row of W, dotted with u. That is one particular way of thinking about how you're calculating the firing rates in the output layer. And it's called the dot product interpretation of matrix multiplication, all right? 
Now, there's a different sort of complementary way of thinking about what happens when you do this matrix product that's also important to understand, because it's a different way of thinking about what's going on. We can also think about the columns of this weight matrix. And we can think about the weight matrix as a collection of column vectors that we put together into matrix form. So in this particular network here, we can write down this weight matrix, all right? And you can see that this first input neuron connects to output neuron one, so there's a one there. The first input neuron connects to output neuron two, so there's a one there. The first input neuron does not connect to output neuron three, so there's a zero there, OK? All right. So the columns of the weight matrix represent the pattern of projections from one of the input neurons to all of the output neurons. All right, so let's just take a look at what would happen if only one of our input neurons was active and all the others were silent. So this neuron is active. What would the output vector look like? What would the pattern of firing rates look like for the output neurons in this case? Anybody? It's straightforward. It's not a trick question. [INAUDIBLE]? AUDIENCE: So-- MICHALE FEE: If this neuron is firing and these weights are all one or zero. AUDIENCE: The one neuron, a-- MICHALE FEE: Yes? This-- AUDIENCE: Yeah, [INAUDIBLE]. MICHALE FEE: --would fire, this would fire, and that would not fire, right? Good. So you can write that out as a matrix multiplication. So the firing rate vector, in this case, would be the dot product of this with this, this with this, and that with that. And what you would see is that the output firing rate vector would look like this first column of the weight matrix. So the output vector would look like 1, 1, 0 if only the first neuron were active. 
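Both ways of reading that multiplication give the same answer, and it's easy to check numerically. In this sketch the first column of W matches the network just described (input neuron one connects to outputs one and two but not three); the other two columns are made-up illustrative values:

```python
import numpy as np

W = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])          # columns: projections from each input neuron
u = np.array([1, 0, 0])            # only input neuron one is active

v = W @ u                          # output firing rates: [1, 1, 0]

# Dot-product view: each output rate is one row of W dotted with u.
v_rows = np.array([W[a, :] @ u for a in range(3)])

# Column view: v is a linear combination of the columns of W,
# weighted by the input firing rates.
v_cols = sum(u[i] * W[:, i] for i in range(3))

print(v, v_rows, v_cols)           # all three agree
```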
So you can think of the output firing rate vector as being a contribution from neuron one-- and that contribution from neuron one is simply the first column of the weight matrix-- plus a contribution from neuron two, which is given by the second column of the weight matrix, and a contribution from input neuron three, which is given by the third column of the weight matrix, OK? So you can think of the output firing rate vector as being a linear combination of a contribution from the first neuron, a contribution from the second neuron, and a contribution from the third neuron. Does that make sense? It's a different way of thinking about it. In the dot product interpretation, we're asking, what is the-- we're summing up all of the weights onto neuron one from those synapses. We're summing up all the weights onto neuron two from those synapses and summing up all the weights onto neuron three from those synapses. So we're doing it one output neuron at a time. In this other interpretation of this matrix multiplication, we're doing something different. We're asking, what is the contribution to the output from one of the input neurons? What is the contribution to the output from another input neuron? And what is the contribution to the output from yet another input neuron? Does that make sense? OK. All right, so we have a linear combination of contributions from each of those input neurons. And that's called the outer product interpretation. I'm not going to explain right now why it's called that, but that's how that's referred to. So the output pattern is a linear combination of contributions. OK, so let's take a look at the effect of some very simple feed-forward networks, OK? So let's just look at a few examples. So if we have a feed forward-- this is sort of the simplest feed-forward network. Each neuron in the input layer connects to one neuron in the output layer with a weight of one. So what is the weight matrix of this network? AUDIENCE: Identity. 
MICHALE FEE: It's the identity matrix. And so the firing rate of the output layer will be exactly the same as the firing rates in the input layer, OK? So there's the weight matrix, which is just the identity matrix. And the firing rate of the output layer is just the identity matrix times the firing rate of the input layer. And so that's equal to the input firing rate, OK? All right, let's take a slightly more complex network, and let's make each one of those weights independent. They're not all just equal to one, but they're scaled by some constant-- lambda 1, lambda 2, and lambda 3. The weight matrix looks like this. It's a diagonal matrix, where each of those weights is on the diagonal. And in that case, you can see that the output firing rate is just this diagonal matrix times the input firing rate. And you can see that the output firing rate is just the input firing rate where each component of the input firing rate is scaled by some constant. Pretty straightforward. Let's take a look at a case where the weight matrix now corresponds to a rotation matrix, OK? So we're going to let the weight matrix look like this rotation matrix that we talked about on Tuesday, where the diagonal elements are cosine of some rotation angle, and the off-diagonal elements are plus and minus sine of the rotation angle. So you can see that this weight matrix corresponds to this network, where the projection from input neuron one to output neuron one is cosine phi. Input neuron two to output neuron two is cosine phi. And then these cross-connections are a plus and minus sine phi. OK, so what does that do? So we can see that the output firing rate vector is just a product of this rotation matrix times the input firing rate vector. And you can write down each component like that. All right, so what does that do? So let's take a particular rotation angle. We're going to take a rotation angle of pi over 4, which is 45 degrees. That's what the weight matrix looks like. 
And we can do that multiplication to find that the output firing rate vector looks like-- one of the neurons has a firing rate that looks like the sum of the two input firing rates, and the other output neuron has a firing rate that looks like the difference between the two input firing rates. And if you look at what this looks like in the space of firing rates of the input layer and the output layer, we can see what happens, OK? So what we'll often do when we look at the behavior of neural networks is we'll make a plot of the firing rates of the different neurons in the network. And what we'll often do for simple feed-forward networks, and we'll also do this for recurrent networks, is we'll plot the input firing rates in the plane of u1 and u2. And then we can plot the output firing rates in the same plane. So, for example, if we have an input state that looks like u1 equals u2, it will be some point on this diagonal line. We can then plot the output firing rate on this plane, v1 versus v2. And what will the output firing rate look like? What will the firing rate of v1 look like in this case? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah, let's say this is one and one. So what will the firing rate of this neuron look like? [INAUDIBLE]? AUDIENCE: [INAUDIBLE] MICHALE FEE: What's that? AUDIENCE: [INAUDIBLE] MICHALE FEE: So the firing rate of v1 is just this quantity right here, right? So it's u1 plus u2, right? So it's like 1 plus 1 over root 2. So it will be big. What will the firing rate of neuron v2 look like? It'll be u2 minus u1, which is? AUDIENCE: Zero. MICHALE FEE: Zero. So it will be over here, right? So it will be that input rotated by 45 degrees. And input down here-- so the firing rate of the one will be the sum of those two. Those two inputs are both negative. So v1 for this input will be big and negative. And v2 will be the difference of u1 and u2, which for anything on this line is? AUDIENCE: Zero. MICHALE FEE: Zero. OK. 
And so that input will be rotated over to here. So you can think of it this way-- any input in this space of u1 and u2 will, in the output, just be rotated by, in this case, minus 45 degrees. So that's clockwise-- those are the minus rotations. So you can just predict the output firing rates simply by taking the input firing rates in this plane and rotating them by minus 45 degrees. All right, any questions about that? It's very simple. So this little neural network implements rotations of this input space. That's pretty cool. Why would you want a network to do rotations? Well, this solves exactly the problem that we were working on last time when we were talking about our perceptron, where we were trying to classify stimuli that could not be separated in one dimension, but rather, can be separated in two dimensions. So if we have different categories-- dogs and non-dogs-- that can be viewed along different dimensions-- how furry they are-- but can't be separated-- the two categories can't be separated from each other on the basis of just one dimension of observation. So in this case, what we want to do is take this space of inputs and rotate it into what we'll call a new basis set so that now we can take the firing rates of these output neurons and use those to separate these different categories from each other. Does that make sense? OK, so let me show you a few more examples of that. So this is one way to think about what we do when we do color vision, OK? So you know that we have different cones in our retina that are sensitive to different wavelengths. Most colors are combinations of those wavelengths. So if we look at the activity of, let's say, a cone that's sensitive to wavelength one and the activity in a cone that's sensitive to wavelength two, we might see-- and then we look around the world. We'll see a bunch of different objects or a bunch of different stimuli that activate those two different cones in different ratios. 
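The 45-degree example can be checked directly. This sketch builds the rotation weight matrix just described, with sum and difference outputs, and applies it to a point on the u1 = u2 diagonal:

```python
import numpy as np

phi = np.pi / 4                               # 45-degree rotation
W = np.array([[ np.cos(phi), np.sin(phi)],    # v1 = (u1 + u2)/sqrt(2)
              [-np.sin(phi), np.cos(phi)]])   # v2 = (u2 - u1)/sqrt(2)

u = np.array([1.0, 1.0])     # input on the u1 = u2 diagonal
v = W @ u
print(v)                     # v1 = sqrt(2) (big), v2 ≈ 0

# A point below the origin on the same diagonal rotates onto the
# negative v1 axis, again with v2 ≈ 0:
print(W @ np.array([-1.0, -1.0]))
```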
And you might imagine that this axis corresponds to, let's say, how much red there is in a stimulus. This axis corresponds to how much green there is in a stimulus. But let's say that you're in an environment where there's some cloud of contribution of red and green. So what would this direction correspond to in this cloud? This direction corresponds to more red and more green. What would that correspond to? AUDIENCE: Brown. MICHALE FEE: So what I'm trying to get at here is that the sum of those two is sort of the brightness of the object, right? Something that has little red and little green will look the same color as something that has more red and more green, right? But what's different about those two stimuli is that the one's brighter than the other. The second one is brighter than the first one. But this dimension corresponds to what? Differences in the ratio of those two colors, right? Sort of changes in the different [AUDIO OUT] wavelengths, and that corresponds to color. So if we can take this space of stimuli and rotate it such that one axis corresponds to the sum of the two colors and the other axis corresponds to the difference of the two colors, then this axis will tell you how bright it is, and this axis will tell you what the hue is, what the color is. Does that make sense? So there's a simple case where taking a rotation of an input space, of a set of sensors, will give you different information than you would get if you just had one of those stimuli. If you were to just look at the activity of the cone that's giving you a red signal, if one object has more activity in that cone, you don't know whether that other object is just brighter or if it's actually more red. Does that make sense? So doing a rotation gives us signals in single neurons that carry useful information. It can disambiguate different kinds of information. All right, so we can use that simple rotation matrix to perform that kind of separation. 
So brightness and color. Here's another example. I didn't get to talk about this in this class, but there are-- so barn owls, they can very exquisitely localize objects by sound. So they hunt, essentially, at night in the dark. They can hear a mouse scurrying around in the grass. They just listen to that sound, and they can tell exactly where it is, and then they dive down and catch the mouse. So how do they do that? Well, they use timing differences to tell which way the sound is coming from side to side, and they use intensity differences to tell which way the sound is coming from up and down. Now, how do you use intensity differences? Well, one of their ears, their right ear, is pointed slightly upwards. And their left ear is pointed slightly downwards. So when they hear a sound that's slightly louder in the right ear and slightly softer in the left ear, they know that it's coming from up above, right? And if it's the other way around, if it's slightly louder in the left ear and softer in the right ear, they know it's coming from below horizontal. And it's an extremely precise system, OK? So here's an example. So if they're sitting there listening to the intensity, the amplitude of the sound in the left ear and the amplitude of the sound in the right ear, some sounds will be up here with high amplitude in both ears. Some sounds will be over here, with more amplitude in the right ear and less amplitude in the left ear. What does this dimension correspond to? That dimension corresponds to? AUDIENCE: Proximity. MICHALE FEE: Proximity or, overall, the loudness of the sound, right? And what does this dimension correspond to? AUDIENCE: Direction. MICHALE FEE: The difference in intensity corresponds to the elevation of the sound relative to the horizontal. All right? 
So, in fact, what happens in the owl's brain is that these two signals undergo a rotation to produce activity in some neurons that's sensitive to the overall loudness and activity in other neurons that's sensitive to the difference between the intensity of the two sounds. It's a measure of the elevation of the sounds. All right, so this kind of rotation matrix is very useful for projecting stimuli into the right dimension so that they give useful signals. All right, so let's come back to our matrix transformations and look in a little bit more detail about what kinds of transformations you can do with matrices. So we talked about how matrices can do stretch, compression, rotation. And we're going to talk about a new kind of transformation that they can do. So you remember we talked about how a matrix multiplication implements a transformation from one set of vectors into another set of vectors? And the inverse of that matrix transforms back to the original set of vectors, OK? So you can make a transformation, and then you can undo that transformation by multiplying by the inverse of the matrix. OK, so we talked about different kinds of transformations that you can do. So if you take the identity matrix and you make a small perturbation to both of the diagonal elements, the same perturbation to both diagonal elements, you're basically taking a set of vectors and you're stretching them uniformly in all directions. If you make a perturbation to just one of the components of the identity matrix, you can take the data and stretch it in one direction or stretch it in the other direction. If you add something to the first component and subtract something from the second component, you can stretch in one direction and compress in another direction. We talked about reflections and inversions through the origin. These are all transformations that are produced by diagonal matrices. And the inverse of those diagonal matrices is just one over the diagonal elements. 
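As a small numerical illustration of that last point about diagonal matrices (a sketch with made-up numbers, using numpy rather than the Matlab used in the course):

```python
import numpy as np

# A diagonal matrix stretches or compresses along the coordinate axes,
# and its inverse is just 1 over each diagonal element.
L = np.diag([2.0, 0.5])              # stretch x by 2, compress y by 2
L_inv = np.diag([1 / 2.0, 1 / 0.5])  # inverse: 1 over each diagonal

v = np.array([1.0, 1.0])
w = L @ v
print(w)              # [2.0, 0.5]
print(L_inv @ w)      # back to [1.0, 1.0]
print(L_inv @ L)      # the identity matrix
```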
OK, we also talked about rotations that you can do with this rotation matrix. And then the inverse of the rotation matrix is, basically, you compute the inverse of a rotation matrix simply by computing the rotation matrix with a minus sign for this, using the negative of the rotation angle. And we also talked about how a rotation matrix-- for a rotation matrix, the inverse is also equal to the transpose. And the reason is that rotation matrices have this antisymmetry, where the off-diagonal elements have the opposite sign. One of the things we haven't talked about is-- so we talked about how this kind of matrix can produce a stretch along one dimension or a stretch along the other dimension of the vectors. But one really important kind of transformation that we need to understand is how you can produce stretches in an arbitrary direction, OK? So not just along the x-axis or along the y-axis, but along any arbitrary direction. And the reason we need to know how that works is because that formulation of how you write down a matrix to stretch data in any arbitrary direction is the basis of a lot of really important data analysis methods, including principal component analysis and other methods. So I'm going to walk you through how to think about making stretches in data in arbitrary dimensions. OK, so here's what we're going to walk through. Let's say we have a set of vectors. I just picked-- I don't know, what is that-- 20 or so random vectors. So I just called a random number generator 20 times and just picked 20 random vectors. And we're going to figure out how to write down a matrix that will transform that set of vectors into another set of vectors that stretched along some arbitrary axis. Does that make sense? So how do we do that? And remember, we know how to do two things. We know how to stretch a set of vectors along the x-axis. We know how to stretch vectors along the y-axis, and we know how to rotate a set of vectors. 
So we're just going to combine those two ingredients to produce this stretch in an arbitrary direction. So now I've given you the recipe-- or I've given you the ingredients. The recipe's pretty obvious, right? We're going to take this set of initial vectors. Good. Lina? AUDIENCE: You [INAUDIBLE]. That's it. MICHALE FEE: Bingo. That's it. OK, so we're going to take-- all right, so we're going to rotate this thing 45 degrees. We take this original set of vectors. We're going to-- OK, so first of all, the first thing we do when we want to take a set of points and stretch it along an arbitrary direction, we pick that angle that we want to stretch it on-- in this case, 45 degrees. And we write down a rotation matrix corresponding to that rotation, corresponding to that angle. So that's the first thing we do. So we've chosen 45 degrees as the angle we want to stretch on. So now we write down a rotation matrix for a 45-degree rotation. Then what we're going to do is we're going to take that set of points and we're going to rotate it by minus 45 degrees. So how do we do that? How do we take any one of those vectors x and rotate it by-- so this that rotation matrix is for plus 45. How do we rotate that vector by minus 45? AUDIENCE: [INAUDIBLE] multiply it by the [INAUDIBLE].. MICHALE FEE: Good. Say it. AUDIENCE: Multiply by the inverse of that. MICHALE FEE: Yeah, and what's the inverse of a-- AUDIENCE: Transpose. MICHALE FEE: Transpose. So we don't have to go to Matlab and use the inverse matrix in inversion. We can just do the transpose. OK, so we take that vector and we multiply it by transpose. So that does a minus 45-degree rotation of all of those points. And then what do we do? Lina, you said it. Stretch it. Stretch it along? AUDIENCE: The x-axis? MICHALE FEE: The x-axis, good. What does that matrix look like that does that? Just give me-- yup? AUDIENCE: 5, 0, 0, 1. MICHALE FEE: Awesome. That's it. So we're going to stretch using a stretch matrix. 
So I use phi for a rotation matrix, and I use lambda for a stretch matrix, a stretch matrix along x or y. Lambda is a diagonal matrix, which always just stretches or compresses along the x or y direction. And then what do we do? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. By multiplying by? By this. Excellent. That's all. So how do we write this down? So, remember, here, we're sort of marching through the recipe from left to right. When you write down matrices, you go the other way. So when you do matrix multiplication, you take your vector x and you multiply it on the left side by phi transpose. And then you take that and you multiply that on the left side by lambda. And then you take that. That now gives you these. And now to get the final answer here, you multiply again on the left side by phi. That's it. That's how you produce an arbitrary stretch-- a stretch or a compression of a data in an arbitrary direction, all right? You take the data, the vector. You multiply it by a rotation matrix transpose, multiply it by a stretch matrix, a diagonal matrix, and you multiply it by a rotation matrix. Rotate, stretch, unrotate. So let's actually do this for 45 degrees. So there's our rotation matrix-- 1, minus 1, 1, 1. The transpose is 1, 1, minus 1, 1. And here's our stretch matrix. In this case, it was stretched by a factor of two. So we multiply x by phi transpose, multiply by lambda, and then multiply by phi. So we can now write that down. If you just do those three matrix multiplications-- those two matrix multiplications, sorry, yes? One, two. Two matrix multiplications. You get a single matrix that when you multiply it by x implements this stretch. Any questions about that? You should ask me now if you don't understand, because I want you to be able to do this for an arbitrary-- so I'm going to give you some angle, and I'll tell you, construct a matrix that stretches data along a 30-degree axis by a factor of five. You should be able to write down that matrix. 
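That exercise can be written out directly in numpy (a sketch, not course-provided code): build the matrix that stretches data by a factor of five along a 30-degree axis as phi times lambda times phi transpose.

```python
import numpy as np

def stretch_matrix(theta_deg, factor):
    """Stretch 2D data by `factor` along the axis at angle theta_deg:
    rotate by -theta (phi.T), stretch along x (lambda), rotate back (phi)."""
    t = np.radians(theta_deg)
    phi = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    lam = np.diag([factor, 1.0])
    return phi @ lam @ phi.T

M = stretch_matrix(30, 5)
print(M)

# Sanity check: a vector along the 30-degree axis gets 5x longer,
# a vector perpendicular to that axis is unchanged.
u = np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
p = np.array([-np.sin(np.radians(30)), np.cos(np.radians(30))])
print(M @ u)   # 5 * u
print(M @ p)   # p, unchanged
```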
All right, so this is what you're going to do, and that's what that matrix will look like, something like that. Now, we can stretch these data along a 45-degree axis by some factor. It's a factor of two here. How do we go back? How do we undo that stretch? So how do you take the inverse of a product of a bunch of matrices like this? So the answer is very simple. If we want to take the inverse of a product of three matrices, what we do is we just-- it's, again, a product of three matrices. It's a product of the inverse of those three matrices, but you have to reverse the order. So if you want to find the inverse of matrix A times B times C, it's C inverse times B inverse times A inverse. And you can prove that that's the right term as follows. So ABC inverse times ABC should be the identity matrix, right? So let's replace this by this result here. So C inverse B inverse A inverse times ABC would be the identity matrix. And you can see that right here, A inverse times A is i. So you can get rid of that. B inverse times B is i. C inverse times C is i. So we just proved that that is the correct way of taking the inverse of a product of matrices, all right? So the inverse of this kind of matrix that stretches data along an arbitrary direction looks like this. It's phi transpose inverse lambda inverse phi inverse. So let's figure out what each one of those things is. So what is phi transpose inverse, where phi is a rotation matrix? AUDIENCE: Just phi. MICHALE FEE: Phi, good. And what is phi inverse? AUDIENCE: [INAUDIBLE] MICHALE FEE: [INAUDIBLE]. Good. And lambda inverse we'll get to in a second. So the inverse of this arbitrary rotated stretch matrix is just another rotated stretch matrix, right? Where the lambda now has-- lambda inverse is just given by the inverse of each of those diagonal elements. So it's super easy to find the inverse of one of these matrices that computes this stretch in an arbitrary direction. You just keep the same phi. 
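A quick numerical check of that claim (illustrative numpy; the angle and stretch factor are arbitrary choices):

```python
import numpy as np

# The inverse of a rotated stretch phi @ lam @ phi.T keeps the same phi
# and just inverts the diagonal elements of lam, consistent with
# (ABC)^-1 = C^-1 B^-1 A^-1 and phi^-1 = phi.T for a rotation matrix.
t = np.radians(45)
phi = np.array([[np.cos(t), -np.sin(t)],
                [np.sin(t),  np.cos(t)]])
lam = np.diag([2.0, 1.0])

M = phi @ lam @ phi.T
M_inv = phi @ np.diag([1 / 2.0, 1.0]) @ phi.T   # same phi, 1/diagonal

print(M @ M_inv)    # the identity matrix
```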
It's just phi times some diagonal matrix times phi transpose, but the diagonals are inverted. Does that make sense? All right, so let's write it out. We're going to undo this 45-degree stretch that we just did. We're going to do it by rotating, stretching by 1/2 instead of stretching by two. So you can see that compresses now along the x-axis. And then we rotate back, and we're back to our original data. Any questions about that? It's really easy, as long as you just think through what you're doing as you go through those steps, all right? Any questions about that? OK. Wow. All right. So you can actually just write those down and compute the single matrix that implements this compression along that 45-degree axis, OK? All right. So let me just show you one other example. And I'll show you something interesting that happens if you construct a matrix that instead of stretching along a 45-degree axis does compression along a 45-degree axis. So here's our original data. Let's take that data and rotate it by plus 45 degrees. Multiply by lambda; that compresses along the x-axis. And then rotate by minus 45 degrees. So here's an example where we can take data and compress it along an axis of minus 45 degrees, all right? So you can write this down. So we're going to say we're going to compress along a minus 45-degree axis. We write down phi of minus 45. Notice that when you do this compression or stretching, there are different ways you can do it, right? You can take the data. You can rotate it this way and then squish along this axis. Or you could rotate it this way and squish along this axis, right? So there are choices for how you do it. But in the end, you're going to end up with the same matrix that does all of those equivalent transformations. OK, so here we are. We're going to write this out. So we're writing down a matrix that produces this compression along a minus 45-degree axis. So there's phi of minus 45. There's lambda, a compression along the x-axis. 
So here, it's 0.2, 0, 0, 1. And here's the phi transpose. So you write all that out, and you get 0.6, 0.4, 0.4, 0.6. Let me show you one more. What happens if we accidentally take this data, we rotate it, and then we squish the data to zero? Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: It doesn't. You can do either one. Let me go back. Let me just go back to the very first one. So here, we rotated clockwise and then stretched along the x-axis and then unrotated. We could have taken these data, rotated counterclockwise, stretched along the y-axis, and then rotated back, right? Does that make sense? You'll still get the same answer. You'll still get the same answer for this matrix here. OK, now watch this. What happens if we take these data, we rotate them, and then we compress the data all the way to zero? So by compressing the data to a line, we're multiplying it by zero. We put a zero in this element of the stretch matrix, all right? And what happens? The data get compressed right to zero, OK? And then we can rotate back. So we've taken these data. We can write down a matrix that takes those data and squishes them to zero along some arbitrary direction. Now, can we take those data and go back to the original data? Can we write down a transformation that takes those and goes back to the original data? Why not? AUDIENCE: Lambda doesn't [INAUDIBLE].. MICHALE FEE: Say it again. AUDIENCE: Lambda doesn't [INAUDIBLE].. MICHALE FEE: Good. What's another way to think about that? AUDIENCE: We've lost [INAUDIBLE].. MICHALE FEE: You've lost that information. So in order to go back from here to the original data, you have to have information somewhere here that tells you how far out to stretch it again when you try to go back. But in this case, we've compressed everything to a line, and so there's no information about how to go back to the original data. And how do you know if you've done this? Well, you can take a look at this matrix that you created. So let's say somebody gave you this matrix. 
How would you tell whether you could go back to the original data? Any ideas? Abiba? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. You look at the determinant. So if you calculate the determinant of this matrix, the determinant is zero. And as soon as you see a zero determinant, you know right away that you can't go back. After you've made this transformation, you can't go back to the original data. And we're going to get into a little more detail about why that is and what that means. And the reason here is that the determinant of lambda is zero. The determinant of a product of matrices like this is the product of the determinants. And in this case, the determinant of the lambda matrix is zero, and so the determinant of the product is zero, OK? All right, so now let's talk about basis sets. All right, so we can think of vectors in abstract directions. So if I hold my arm out here and tell you this is a vector-- there's the origin. The vector's pointing in that direction. You don't need a coordinate system to know which way I'm pointing. I don't need to tell you my arm is pointing 80 centimeters in that direction and 40 centimeters in that direction and 10 centimeters in that direction, right? You don't need a coordinate system to know which way I'm pointing, right? But if I want to quantify that vector so that-- if you want to quantify that vector so that you can maybe tell somebody else precisely which direction I'm pointing, you need to write down those numbers, OK? So you can think of vectors in abstract directions, but if you want to actually quantify it or write it down, you need to choose a coordinate system. And so to do this, you choose a set of vectors, special vectors, called a basis set. And now we just say, here's a vector. How much is it pointing in that direction, that direction, and that direction? And that's called a basis set. 
So we can write down our vector now as a set of three numbers that simply tell us how far that vector is overlapped with three other vectors that form the basis set. So the standard way of doing this is to describe a vector as a component in the x direction, which is the vector 1, 0, 0, in the standard notation; a component in the y direction, which is 0, 1, 0; and a component in the z direction, 0, 0, 1. So we can write those vectors as standard basis vectors. The numbers x, y, and z here are called the coordinates of the vector. And the vectors e1, e2, and e3 are called the basis vectors. And this is how you would write that down for a three-dimensional vector, OK? Again, the little hat here denotes that those are unit vectors that have a length one. All right, so in order to describe an arbitrary vector in a space of n real numbers, Rn, the basis vectors each need to have n numbers. And in order to describe an arbitrary vector in that space, you need to have n basis vectors. You need to have-- in n dimensions, you need to have n basis vectors, and each one of those basis vectors has to have n numbers in it. So these vectors here-- 1, 0, 0; 0, 1, 0; and 0, 0, 1-- are called the standard basis. And each one of these vectors has one element that's one and the rest are zero. That's the standard basis. The standard basis has the property that any one of those vectors dotted into itself is one. That's because they're unit vectors. They have length one. So e sub i dot e sub i is the length squared of the i-th vector. And if the length is one, then the length squared is one. Each vector is orthogonal to all the other vectors. That means that e1 dot e2 is zero, and e1 dot e3 is zero, and e2 dot e3 is zero. You can write that down as e sub i dot e sub j equals zero for i not equal to j. You can write all of those properties down in one equation-- e sub i dot e sub j equals delta i j. Delta i j is what's called the Kronecker delta function. 
The Kronecker delta function is a one if i equals j and a zero if i is not equal to j, OK? So it's a very compact way of writing down this property that each vector is a unit vector and each vector is orthogonal to all the other vectors. And a set with that property is called an orthonormal basis set. All right, now, the standard basis is not the only basis-- sorry. I'm trying to do x, y, and z here. So if you have x, y, and z, that's not the only orthonormal basis set. Any basis set that is a rotation of those three vectors is also an orthonormal basis. Let's write down two other orthogonal unit vectors. We can write down our vector v in this other basis set as follows. We just take our vector v. We can plot the basis vectors in this other basis. And we can simply project v onto those other basis vectors. So we can project v onto f1, and we can project v onto f2. So we can write v as a sum of a vector in the direction of f1 and a vector in the direction of f2. You can write down this vector v in this different basis set as a vector with two components. This is two dimensional. This is R2. You can write it down as a two-component vector-- v dot f1 and v dot f2. So that's a simple intuition for what [AUDIO OUT] in two dimensions. We're going to develop the formalism for doing this in arbitrary dimensions, OK? And it's very simple. All right, these components here are called the coordinates of this vector in the basis f. All right, now, basis sets, or basis vectors, don't have to be orthogonal to each other, and they don't have to be normal. They don't have to be unit vectors. You can write down an arbitrary vector as a sum of components that aren't orthogonal to each other. So you can write down this vector v as a sum of a component here in the f1 direction and a component in the f2 direction, even if f1 and f2 are not orthogonal to each other and even if they're not unit vectors. 
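The orthonormal case above is easy to check numerically. In this sketch (the basis, an assumed example, is just the standard basis rotated by 30 degrees), the coordinates in the new basis are the dot products v dot f1 and v dot f2:

```python
import numpy as np

# An orthonormal basis: the standard basis rotated by 30 degrees.
t = np.radians(30)
f1 = np.array([np.cos(t), np.sin(t)])
f2 = np.array([-np.sin(t), np.cos(t)])

# fi . fj = delta_ij: unit length, mutually orthogonal.
print(f1 @ f1, f2 @ f2, f1 @ f2)

# Project v onto each basis vector to get its coordinates,
# then reconstruct v from those coordinates.
v = np.array([3.0, 5.0])
c1, c2 = v @ f1, v @ f2
print(c1 * f1 + c2 * f2)   # reconstructs [3.0, 5.0]
```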
So, again, v is expressed as a linear combination of a vector in the f1 direction and a vector in the f2 direction. OK, so let's take a vector and decompose it into an arbitrary basis set f1 and f2. So v equals c1 f1 plus c2 f2. The coefficients here are called the coordinates of the vector in this basis. And the vector v sub f-- these numbers, c1 and c2, when combined into this vector, is called the coordinate vector of v in the basis f1 and f2, OK? Does that make sense? Just some terminology. OK, so let's define this basis, f1 and f2. We just pick two vectors, an arbitrary two vectors. And I'll explain later that not all choices of vectors work, but most of them do. So here are two vectors that we can choose as a basis-- so 1, 3, which is sort of like this, and minus 2, 1 is kind of like that. And we're going to write down this vector v in this new basis. So we have a vector v that's 3, 5 in the standard basis, and we're going to rewrite it in this new basis, all right? So we're going to find the vector coordinates of v in the new basis. So we're going to do this as follows. We're going to write v as a linear combination of these two basis vectors. So c1 times f1-- 1, 3-- plus c2 times f2-- minus 2, 1-- is equal to 3, 5. Does that make sense? So what is that? That is just a system of equations, right? And what we're trying to do is solve for c1 and c2. That's it. So we already did this problem in the last lecture. So we have this system of equations. We can write this down in the following matrix notation. F times vf-- vf is just c1 and c2-- equals v. So there's F-- 1, 3; minus 2, 1. Those are our two basis vectors. Times c1 c2-- the vector c1, c2-- is equal to 3, 5. And we solve for vf. In other words, we solve for c1 and c2 simply by multiplying v by the inverse of this matrix F. So the coordinate vector in this new basis is just F inverse times the old vector. And what is F inverse? 
F is just the matrix that has the basis vectors as its columns, and we take its inverse. So the coordinates of this vector in this new basis set are given by F inverse times v. We can find the inverse of F. So if that's our F, we can calculate the inverse of that. Remember, you flip the diagonal elements. You multiply the off-diagonals by minus 1, and you divide by the determinant. So F inverse is this, times v is that, and v sub f is just 13/7 over minus 4/7. So that's just a different way of writing v. So there's v in the standard basis. There's v in this new basis, all right? And all you do to go from the standard basis to any arbitrary new basis is multiply the vector by F inverse. And when you're actually doing this in Matlab, this is really simple. You just write down a matrix F that has the basis vectors in the columns. You just use the matrix inverse function, and then you multiply that by the data vector. All right, so I'm just going to summarize again. In order to find the coordinate vector for v in this new basis, you construct a matrix F, whose columns are just the elements of the basis vectors. So if you have two basis vectors, it's a two-- remember, each of those basis vectors. In two dimensions, there are two basis vectors. Each has two numbers, so this is a 2 by 2 matrix. In n dimensions, you have n basis vectors. Each of the basis vectors has n numbers. And so this matrix F is an n by n matrix, all right? You know that you can write down v as this basis times v sub f. You solve for v sub f by multiplying both sides by F inverse, all right? That performs what's called a change of basis. Now, that only works if F has an inverse. So if you're going to choose a new basis to write down your vector, you have to be careful to pick one that has an inverse, all right? And I want to show you what it looks like when you pick a basis that doesn't have an inverse and what that means. All right, and that gets to the idea of linear independence. 
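The worked example above can be reproduced in a few lines (a numpy sketch of the same computation):

```python
import numpy as np

# Columns of F are the basis vectors f1 = (1, 3) and f2 = (-2, 1).
F = np.array([[1.0, -2.0],
              [3.0,  1.0]])
v = np.array([3.0, 5.0])   # v in the standard basis

v_f = np.linalg.inv(F) @ v
print(v_f)                 # [13/7, -4/7]
print(F @ v_f)             # back to [3.0, 5.0]
```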
All right, so, remember I said that if in n dimensions, in Rn, in order to have a basis in Rn, you have certain requirements? Not any vectors will work. So let's take a look at these vectors. Will those work to describe an-- will that basis set work to describe an arbitrary vector in three dimensions? No? Why not? AUDIENCE: [INAUDIBLE] vectors, so if you're [INAUDIBLE].. MICHALE FEE: Right. So the problem is in which coordinate, which axis? AUDIENCE: Z-axis. MICHALE FEE: The z-axis. You can see that you have zeros in all three of those vectors, OK? You can't describe any vector with this basis that has a non-zero component in the z direction. And the reason is that any linear combination of these three vectors will always lie in the xy plane. So you can't describe any vector here that has a non-zero z component, all right? So what we say is that this set of vectors doesn't span all of R3. It only spans the xy plane, which is what we call a subspace of R3, OK? OK, so let's take a look at these three vectors. The other thing to notice is that you can write any one of these vectors as a linear combination of the other two. So you can write f3 as a sum of f1 and f2. The sum of those two vectors is equal to that one. You can write f2 as f3 minus f1. So any of these vectors can be written as a linear combination of the others. And so that set of vectors is called linearly dependent. And any set of linearly dependent vectors cannot form a basis. And how do you know if a set of vectors that you choose for your basis is linearly dependent? Well, again, you just find the determinant of that matrix. And if it's zero, those vectors are linearly dependent. So what that corresponds to is you're taking your data and when you transform it into a new basis, if the determinant of that matrix F is zero, then what you're doing is you're taking those data and transforming them to a space where they're being collapsed. 
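Here is the determinant test in code (a sketch; the three linearly dependent vectors are an assumed example in the spirit of the ones on the slide):

```python
import numpy as np

# Three vectors that all lie in the xy-plane: f3 = f1 + f2,
# so the set is linearly dependent and cannot span R3.
f1 = np.array([1.0, 0.0, 0.0])
f2 = np.array([0.0, 1.0, 0.0])
f3 = f1 + f2

F = np.column_stack([f1, f2, f3])
print(np.linalg.det(F))    # 0.0: F has no inverse, so this is not a valid basis
```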
Let's say if you're in three dimensions, those data are being collapsed onto a plane or onto a line, OK? And that means you can't undo that transformation, all right? And the way to tell whether you've got that problem is looking at the determinant. All right, let me show you one other cool thing about the determinant. There's a very simple geometrical interpretation of what the determinant is, OK? All right, sorry. So if F maps your data onto a subspace, then the mapping is not reversible. OK, so what does the determinant correspond to? Let's say in two dimensions, if I have two orthogonal unit vectors, you can think of those vectors as kind of forming a square in this space. Or in three dimensions, if I have three orthogonal vectors, you can think of those vectors as defining a cube, OK? And if they're unit vectors, then they define a cube of volume one. Here, you have the square of area one. So let's think about this unit volume. If I transform those two vectors or those three vectors in 3D space by a matrix A, those vectors get rotated and transformed. They point in different directions, and they define-- it's no longer a cube, but they define some sort of rhombus, OK? You can ask, what is the volume of that rhombus? The volume of that rhombus is just the determinant of that matrix A. So now what happens if I have a cube in three-dimensional space and I multiply it by a matrix that transforms it into a rhombus that has zero volume? So let's say I have those three vectors. It transforms it into, let's say, a square. The volume of that square in three-dimensional space is zero. So what that means is I'm transforming my vectors into a space that has zero volume in the original dimensions, OK? So I'm transforming things from 3D into a 2D plane. And what that means is I've lost information, and I can't go back. OK, notice that with a rotation matrix, if I take this cube and I rotate it, it has exactly the same volume as it did before I rotated it. 
And so you can always tell when you have a rotation matrix, because the determinant of a rotation matrix is one. So if you take a matrix A and you find the determinant and you find that the determinant is one, you know that you have a pure rotation matrix. What does it mean if the determinant is minus one? What it means is you have a rotation, but that one of the axes is inverted, is flipped. There's a mirror in there. So you can tell if you have a pure rotation or if you have a rotation and one of the axes is flipped. Because in the pure rotation, the determinant is one. And in an impure rotation, you have a rotation and a mirror flip. All right, and I just want to make a couple more comments about change of basis, OK? All right, so let's choose a set of basis vectors for our new basis. Let's write those into a matrix F. It's going to be our matrix of basis vectors. If the determinant is not equal to zero, then these vectors, that set of vectors, are linearly independent. That means you cannot write one of those vectors as a linear combination of-- any one of those vectors as a linear combination of the others. Those vectors form a complete basis in that n dimensional space. The matrix F implements a change of basis, and you can go from the standard basis to F by multiplying your vector by F inverse to get the coordinate vector and your new basis. And you can go back from that rotated or transformed basis back to the coordinate basis by multiplying by F, OK? Multiply by F inverse transforms to the new basis. Multiplying by F transforms back. If that set of vectors is an orthonormal basis, then-- OK, so let's take this matrix F that has columns that are the new basis vectors. And let's say that those form an orthonormal basis. In that case, we can write down-- so, in any case, we can write down the transpose of this matrix, F transpose. And now the rows of that matrix are the basis vectors. 
Notice that if we multiply F transpose times F, we have basis vectors in rows here and columns here. So what is F transpose F for the case where these are unit vectors that are orthogonal to each other? What is that product? AUDIENCE: [INAUDIBLE] MICHALE FEE: It's what? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. Because F1 dot F1 is one. F1 dot F2 is zero. F2 dot F1 is zero, and F2 dot F2 is one. So that's equal to the identity matrix, right? So F transpose equals F inverse. If the inverse of a matrix is just its transpose, then that matrix is a rotation matrix. So F is just a rotation matrix. All right, now let's see what happens. So that means the inverse of F is just this F transpose. Let's do this coordinate-- let's [AUDIO OUT] change of basis for this case. So you can see that v sub f, the coordinate vector in the new basis, is F transpose v. Here's F transpose-- the basis vectors are in the rows-- times v. This is just v dot F1, v dot F2, right? So this shows how for an orthonormal basis, the transpose, which is the inverse of F-- taking the transpose of F times v is just taking the dot product of v with each of the basis vectors, OK? So that ties it back to what we were showing before about how to do this change of basis, OK? Just tying up those two ways of thinking about it. So, again, what we've been developing when we talk about change of basis are ways of rotating vectors, rotating sets of data, into different dimensions, into different basis sets so that we can look at data from different directions. That's all we're doing. And you can see that when you look at data from different directions, you can get-- some views of data, you have a lot of things overlapping, and you can't see them. But when you rotate those data, now, all of a sudden, you can see things clearly that used to be-- things get separated in some views, whereas in other views, things are kind of mixed up and covering each other, OK? 
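That equivalence is quick to verify numerically (illustrative numpy; the 25-degree angle is an arbitrary choice):

```python
import numpy as np

# Columns of F form an orthonormal basis, so F.T @ F = I and F^-1 = F.T.
t = np.radians(25)
F = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

print(F.T @ F)             # the identity matrix

v = np.array([2.0, 1.0])
v_f = F.T @ v              # change of basis by the transpose...
print(v_f)
print([v @ F[:, 0], v @ F[:, 1]])   # ...is just dot products with each basis vector
```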
And that's exactly what neural networks are doing when they're analyzing sensory stimuli. They're doing that kind of rotations and untangling the data to see what's there in that high-dimensional data, OK? All right, that's it.
MIT 9.40 Introduction to Neural Computation, Spring 2018
3. Resistor Capacitor Neuron Model (Intro to Neural Computation)
MICHALE FEE: Good morning, everybody. So we're going to continue today developing our model of a neuron. Again, this is called the "Equivalent Circuit Model," and it was developed by Alan Hodgkin and Andrew Huxley in the '40s and '50s. Let me just give a brief recap of what we've covered in the last couple of lectures. So we've been analyzing a neuron. We've been imagining an experiment in which we have a neuron in a dish filled with extracellular solution, which is a salt solution. We have an electrode in the cell that's measuring the voltage difference between the inside and the outside of the cell. So we have the electrode connected to an amplifier, a differential amplifier, and a wire in the bath connected to the other side of the differential amplifier. And we have a current source, which injects current into the cell. And we imagine that, as the experimenter, we have our hand on a knob that we can adjust the amount of current that's being injected into the cell. We've described the neuron basically as a capacitor, because it has a-- it's two conductors separated by an insulator. So we have a conductor, one conductor inside the cell, which is that conductive intracellular salt solution, and another conductor outside the cell, which is the conductive extracellular solution. And those two conductors are separated by an insulator, which is a phospholipid bilayer. We wrote down the equivalent circuit for this, which for this situation right here it has a voltage measuring device that represents the membrane potential as the difference between the intracellular voltage and the extracellular voltage. We have a current source here, and we've represented our neuron so far as a capacitor. We then introduce the idea of ion channels, or conductances, or pores in the membrane that allow ions to pass through the membrane, through these little pores. 
And we described the idea that there are-- we talked about the idea that there are many different kinds of ions, ion channels, that have different interesting properties. So we then extended our analysis of our neuron in the dish to include these ion channels. We began by putting-- by representing ion channels as a resistance that connects the intracellular space and the extracellular space. This resistor is in parallel with our capacitor. And we represented the current going through that resistance, which we called the "leak resistance," R sub L. We represented the current as "leak current," I sub L. We noted that we were going to model this leak resistance using Ohm's law. So we wrote down that current, that leak current, as the membrane potential, V, divided by the leak resistance. So this is just Ohm's law. And we also rewrote the quantity 1 over R, 1 over the resistance, as the leak conductance. So the leak current is just the leak conductance times the membrane potential. And we also introduced the idea of an I-V curve, which plots the amount of current flowing through our leak conductance here as a function of the membrane potential, this V. And for the case of a resistor, for the case where we're modeling our leak conductance with Ohm's law, the current, as a function of voltage, is a straight line going through the origin, whose slope is just the leak conductance. Any questions about that? Then we derived a simple equation for how the membrane potential evolves over time as a function of the injected current. And we did that using Kirchhoff's current law, which says that the sum of all the currents into a node has to equal the sum of currents out of a node, where a node here means a wire. And so the sum of these currents is equal to 0. There's a minus sign here, because this current, the electrode injected current, is defined as positive going into the cell, and these currents are defined as positive when they go out of the cell. 
We substituted for the leak current, the expression from Ohm's law. So into this part of the equation here, this term, we substitute VM over R sub L. And I apologize for the slight inconsistency in notation here. Sometimes I'm using VM, and sometimes I'm using V. I'll try to fix that, but those-- VM is the same as V in what you've seen so far. And we also have an expression here for the capacitive current. The capacitive current through this capacitor is just C, dV, dt. So we put those two expressions, make those two substitutions, and we have this expression that relates the voltage in the cell to the injected current. We rewrote this a little bit by multiplying through by the leak resistance, and we rewrote this again making a couple substitutions. We use tau in place of RC. So that becomes a time constant. And V infinity is this expression on the right-- R leak times the injected current. So the equation that we now have, the differential equation that we now have, for the dependence of the membrane potential on injected current looks like this-- V plus tau, dV, dt, is equal to V infinity. When we inject current into the cell, we're changing V infinity. And then the voltage evolves, as described by this differential equation. So we're going to plot a bunch of things, V infinity and the voltage in the cell, as we inject a pulse of current. So we start out with injected current equal to 0. We step up to I naught, hold a constant injected current I naught for some period of time, and then reset the current back to 0. So what does V infinity do as a function of time, anybody? What does that mean? AUDIENCE: It does the same things. MICHALE FEE: Good. It does the same thing, but multiplied by R sub L-- very good. And what does the membrane potential do? So let's start the membrane potential at 0. So starting here, what does the membrane potential do [INAUDIBLE]? Yes? What's your name? AUDIENCE: I'm Kate. MICHALE FEE: Kate. Yes, stays at 0. 
AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. How do we say-- how do we say it? Exponentially toward V infinity, very good. So it relaxes exponentially toward V infinity, which is here. And then what happens? AUDIENCE: And then it [INAUDIBLE].. MICHALE FEE: Very good. And it does-- it relaxes with a time constant of RC. And what that means is that the time it takes for it to relax to 1 over e of its original distance from V infinity is given by tau. Very good-- and you can solve this differential equation that was on the previous slide for periods where the injected current is constant, where V infinity is constant. And what you can see is that the voltage difference from the current V infinity relaxes exponentially to 0. It starts out at the initial voltage minus V infinity, and you multiply that difference times an exponential that decays to 0. And so the voltage difference between the voltage and V infinity decreases. So now we then introduce the idea that neurons have batteries. Where do the batteries of a neuron come from, anybody remember? Stacey, I remember you. Where do the batteries of a neuron come from? Good. That's what the battery produces. The battery produces a voltage difference. But biophysically, what causes that voltage difference? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good, ion-selective pores. Good, that's one component, and there's another important component. Somebody else want to answer? AUDIENCE: Ion concentration. MICHALE FEE: Good, excellent. So we have two components-- ion concentration gradients and ion-selective pores. 
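The exponential relaxation just described can be checked numerically. This is a minimal sketch with invented values for tau, V infinity, and the starting voltage (not taken from the lecture), comparing forward-Euler integration of the membrane equation against its closed-form solution:

```python
import math

# Hypothetical parameters, for illustration only:
tau = 10.0    # membrane time constant, ms
V_inf = 5.0   # steady-state voltage, mV
V0 = 0.0      # initial voltage, mV

def V_exact(t):
    # Closed-form solution of V + tau*dV/dt = V_inf with V(0) = V0.
    return V_inf + (V0 - V_inf) * math.exp(-t / tau)

# Forward-Euler integration of dV/dt = (V_inf - V) / tau.
dt, V, t = 0.001, V0, 0.0
while t < tau:
    V += dt * (V_inf - V) / tau
    t += dt

# After one time constant, the remaining distance to V_inf is 1/e
# of the original distance.
remaining = (V_inf - V_exact(tau)) / (V_inf - V0)
print(round(remaining, 4))  # 0.3679, i.e. 1/e
```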
So basically, if you have a high concentration of potassium inside the cell, when you open up a potassium selective ion channel, the potassium ions diffuse out of the cell, leaving an excess of negative charges inside the cell that causes the voltage inside the cell to go down until there's a sufficient voltage gradient that the drift of potassium ions back into the cell driven by the voltage gradient is equal to the diffusion rate of ions out of the cell produced by the concentration gradient. At equilibrium, at the equilibrium potential, there's a particular voltage at which there is no net current into-- potassium current into or out of the cell. And that's what we mean by equilibrium potential, otherwise known as the "Nernst potential." Last time we derived the Nernst potential using the Boltzmann equation. The Boltzmann equation gives us the ratio of the probability of finding an ion inside or outside of the cell, and that's equal to e to the minus energy difference of an ion inside and outside of the cell divided by KT. The energy difference is given by the charge times the voltage difference, the charge of the ion that we're considering times the voltage difference. So we took the log of both sides, solved for V in minus V out, and we found the voltage difference at equilibrium. This is at thermal equilibrium. That's what the Boltzmann equation tells us-- at thermal equilibrium. What's the ratio of probabilities? At thermal equilibrium, the voltage difference is 25 millivolts times the log, the natural log, of the ratio of concentrations, and that's E sub K. If you plug in the potassium concentrations here, that gives you the equilibrium potential for potassium or the Nernst potential for potassium. So real neurons here on Earth are kind of like caesium neurons on that-- cesium ions on that alien planet. The potassium concentration inside of a cell is around 400 millimolar, and I think this is for squid giant axon, which is a little bit alien. Yes? 
Go ahead. AUDIENCE: From the previous [INAUDIBLE].. MICHALE FEE: Yep. So if you think about the probability of finding a potassium ion inside of a cell and finding a potassium ion outside of the cell, the ratio of probabilities is going to be proportional to the ratio of concentrations. Does that make sense? So given these concentrations, a high concentration of positive potassium ions inside the cell, a low concentration outside the cell, we can take the ratio of those concentrations, take the natural log of that. That's about 3 log units. The sign is negative, because this ratio is smaller than 1. KT over Q for a monovalent ion is about 25 millivolts. So the Nernst potential is 25 millivolts times minus 3. So the equilibrium potential is about minus 75 millivolts. So let's now look at a different ion, sodium. So sodium has a very low concentration inside of a cell compared to outside. That ratio is about a factor of 10. The natural log of 10 is about 2 log units, right? So what's the equilibrium potential for sodium, anybody has a guess? [LAUGHS] AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. Plus or minus? AUDIENCE: Oh, plus. MICHALE FEE: Good. So it's about 2 log units times 25 millivolts, right? So it's plus 50 or so. Good. How about chloride? So this is interesting. Chloride is a negative ion. It has a high concentration outside the cell and a low concentration inside the cell. So when we open up a chloride channel, what happens to the chloride ions? Which way do they go, into the cell or out of the cell? Good. They're negative. So what is that going to do to the voltage inside the cell? We're negative, good. And the ratio of concentrations here is about 10. So log of 10. It's 2-ish. So what's that Nernst potential going to be? About minus 50, exactly. If you plug in the actual numbers, it's minus 60. Great. Good. Here's an interesting one-- calcium. 
Calcium is kept at an extremely low concentration inside of cells, because it's used as a signaling molecule. When calcium comes into a cell, it actually does important things. So the cell buffers calcium. It sequesters calcium into the endoplasmic reticulum and keeps the concentration in the cytoplasm very low. The concentration outside the cell is about 2 millimolar. The ratio is a pretty big number. So if you-- OK. Why is the coefficient here, KT over Q, why is that 12 millivolts here instead of 25? Excellent. So it's KT over 2 times the electron charge. So that's 12 millivolts, very good. And so the equilibrium potential for calcium is very positive. Great. Any questions about that? So these are the most important ions that we have to think about in terms of ionic conductances across the membrane. Any questions? Good. So let's go back to our neuron in the dish, and we're going to consider a neuron that has a potassium conductance and a high potassium concentration inside the cell. Remember, we can write down the magnitude of that conductance as G sub K. And now we're going to do an experiment. We're going to measure the voltage in our cell, the steady state voltage in our cell, as a function of the amount of current that is either being injected or passing through the membrane-- its steady state. Those two things are the same. So we're going to plot the potassium current through the membrane as a function of the voltage of the cell. So let's say that we inject 0 current. What's the steady state voltage in the cell going to be, anybody? We just went through this. Uh-oh, I have to-- oh, OK. I have a volunteer back here. Why 25 millivolts? And we will assume the channel is open. So what is the-- so your answer is that the-- if you inject 0 current, the voltage is going to be the Nernst potential for potassium. And that is correct, but what is the actual number? Good. It's going to be around negative 75 millivolts. 
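The Nernst potentials worked out above can be reproduced in a few lines of code. The concentrations below are approximate squid-axon-style numbers, used only for illustration:

```python
import math

kT_over_q = 25.0  # mV, for a monovalent ion near room temperature

def nernst(c_out, c_in, z=1):
    # E = (kT / z*q) * ln([out] / [in]), in millivolts.
    return (kT_over_q / z) * math.log(c_out / c_in)

# Approximate squid-axon concentrations (mM); illustrative values only.
E_K  = nernst(c_out=20,  c_in=400)        # about -75 mV
E_Na = nernst(c_out=440, c_in=50)         # about +54 mV
E_Cl = nernst(c_out=560, c_in=52, z=-1)   # about -59 mV
E_Ca = nernst(c_out=2,   c_in=1e-4, z=2)  # very positive, well above +100 mV

print(round(E_K), round(E_Na), round(E_Cl), round(E_Ca))
```

Note the divalent charge of calcium entering through z, which is why its prefactor is about 12 millivolts instead of 25.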
So if we inject 0 current, we know the voltage of our neuron is going to be around EK or minus 75 millivolts. Now, let's start injecting current into the cell until the voltage gets to 0. Now, are we going to be injecting positive current into this cell or negative current? What do we have to do, inject positive or negative current to make the voltage get up to 0? Positive, right, because the voltage inside is negative. So we need to inject positive charges. And how much current do we need to inject? If the conductance is g, how much current do we need to inject? What? V is the voltage inside of our cell, so you're getting there. You're really close. What's the answer? EK times G. So we're going to inject the current into our cell until the voltage gets to 0. And if we inject different amounts of current into the cell and measure what the voltage is, we get kind of a straight line. So for a potassium conductance, if you hold the voltage positive above the equilibrium potential, potassium ions are going to flow out through the membrane. Is that clear? Let's hold the voltage at 0. Which way are our ions flowing? So the ions are going to be flowing out. To hold the voltage at EK, what happens to the potassium current? It goes to 0, because you're now at the Nernst potential. And if you hold the voltage below, the inside of the cell is now negative relative to the Nernst potential, and current's going to flow in. So notice that the current actually reverses sign around EK. And so sometimes this voltage is called the "reversal potential." So you'll hear me sometimes refer to the reversal potential, and that's just the same as the Nernst potential or the equilibrium potential. So the equation for something that looks like this, a straight line that's offset from 0, is just this. So this expression right here is going to be our basic model for how we describe the current through an ion channel as a function of the voltage of the membrane. 
So current, the potassium current, is just the potassium conductance times the difference of the membrane potential from the reversal potential. So IK equals GK times V minus EK. We also have a circuit, a little simple equivalent circuit, that describes this relation, and this is what it looks like. We have a-- what we're going to do to include the effects of this ion-specific conductance in the presence of a concentration gradient is to take our conductor, our resistor here, our conductance, and put it in series with a battery. So that's the basic circuit element that describes this kind of I-V relation. Why is that? So let's break this down a little bit. Basically, what we're going to do is we're going to equate this membrane potential difference between the inside and the outside of the cell, this potential difference, to the sum of the voltage drops across these two elements. Is it 1.5 volts, the same as the battery in here? No. What is it? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes. That's the battery we've been talking about, exactly. So the voltage drop across this battery is EK. What's the voltage drop across this resistor, this potassium conductance? What does it depend on? What if the current is 0? What's the voltage drop across a resistor whose current is 0? 0. Ohm's law, right? What is it at arbitrary current? What's the voltage drop across a resistor as a function of current? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. Can somebody tell me what it is in terms of the quantities that we have? We're looking for a delta V across here. So VM, the membrane potential, has to just equal the sum of those two voltage drops, right? So VM equals EK plus IK over GK. And now let's just solve this for IK. So that's why this little circuit element describes our potassium conductance. That's the equation that goes with it. Any questions? No? All right, let's push on. This quantity right here, by the way, V minus EK, is called the driving potential. 
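The ohmic channel model IK = GK times (V minus EK), and its consistency with the series-battery picture, can be sketched directly. The conductance value here is made up for illustration:

```python
# Minimal sketch of the ohmic channel model I_K = g_K * (V - E_K).
# Parameter values are illustrative, not from the lecture.
g_K = 10.0   # nS
E_K = -75.0  # mV

def I_K(V):
    return g_K * (V - E_K)  # current in pA (nS * mV = pA)

# Current is zero at the reversal potential and changes sign around it:
print(I_K(-75.0))                    # 0.0 at E_K
print(I_K(0.0) > 0, I_K(-90.0) < 0)  # outward above E_K, inward below

# Consistency with the series-battery circuit: V_M = E_K + I_K / g_K.
V = -40.0
assert abs(E_K + I_K(V) / g_K - V) < 1e-9
```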
If V is equal to EK, then the driving potential is 0, and there's no current. The current through the channel is proportional to the driving potential. So let's just-- so there is our new circuit diagram for our cell that's a capacitor whose membrane has little leaks in it, where there's an ion-specific permeability of the pore and ion concentration gradient to produce a battery. There's our new circuit element. You can see that we're getting awfully close now to the whole equivalent circuit model that Hodgkin and Huxley wrote down. So let's just flesh out this equation here. What is I sub K? I sub K is just GK times V minus EK, very good. Remember that the resistance, RK is just 1 over GK, and tau, again, is RK times C. So let's massage this a little bit more. We're going to write this down as V plus tau dV, dt equals EK plus RK IE. So what is that? Any guess what that is? AUDIENCE: [INAUDIBLE] MICHALE FEE: Very good. Can everyone see why that's V infinity? Because if you set dV, dt equal to 0, V is equal to EK plus RK IE. If you inject a constant current at steady state, dV, dt equals 0, you can see that the injected current is just equal to the potassium current leaking out through the membrane. That all makes great sense. So that's V infinity. The voltage is just-- the differential equation is V plus tau dV, dt equals V infinity. It's exactly the same equation we had before. So you know that at every moment V is going to be doing what? Everybody together-- relaxing toward V infinity, right? Where that's our new V infinity. So we're going to do the same experiment again. Here is our pulse of injected current. V infinity is now starting at EK, jumps up to EK plus RK I naught and then goes back down. And the voltage of the cell is relaxing toward V infinity. So adding that just shifted this voltage trace down from 0. Remember, before this was sitting at 0, and now it's sitting at minus 75 millivolts. Any questions? 
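The shifted steady state, V infinity = EK + RK times IE, during a current pulse can be computed directly. The resistance and current values below are invented for illustration:

```python
# Steady-state voltage V_inf = E_K + R_K * I_e for a current pulse.
# Illustrative values, not taken from the lecture.
E_K = -75.0  # mV
R_K = 100.0  # megohms
I_e = 0.1    # nA  (megohm * nA = mV)

V_inf_rest  = E_K + R_K * 0.0  # before and after the pulse
V_inf_pulse = E_K + R_K * I_e  # during the pulse

print(V_inf_rest, V_inf_pulse)  # -75.0 -65.0
```

So the battery simply shifts the whole voltage trace down from zero to EK, and the pulse moves V infinity up by RK times I naught.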
So we're going to come back in a minute and finish fleshing out this model where we make those conductances dependent on time as a way of describing the spikes that a neuron produces. But before we do that, we're going to talk about a much simpler model of how a spiking neuron behaves. Basically, instead of writing down a detailed biophysical model of action potentials, which is where we're heading, we're just going to ask some simpler questions about how a spiking neuron behaves. So notice that action potentials are really important. They're the way neurons communicate with other neurons. They send a signal down their axon, release neurotransmitter on other neurons. But most of the time a neuron is not spiking. What is it doing? Well, could be. Other thoughts? Certainly sometimes it could be resting. Right. Well, and how does it do that? [LAUGHS] It's part of it. Anybody? Yes? Integrating its inputs, and it integrates those inputs, and eventually it spikes. So we're going to develop a model or look at a model now called, not surprisingly, "integrate and fire," that captures exactly that idea. That a neuron spends most of its time integrating its inputs, making a decision about when to spike, and then spiking, and then starting over again. So the other crucial piece of this is that for most types of neurons the spikes are really all the same. The details of the spike waveform don't carry extra information beyond the fact that there's a spike. We're going to treat our spikes as delta functions, just discrete events at a single time, and the spikes are going to occur when the voltage in the neuron reaches a particular membrane potential, called the "spike threshold." Now, that's a reasonable approximation for many neurons. It's not absolutely the case, but many neurons spike when the neuron reaches a particular voltage threshold. And that's captured in this model called the integrate and fire neuron. 
And what we're going to do is we're going to take our Hodgkin-Huxley model, and we're going to replace these sodium and potassium conductances that actually generate the spike. We're going to replace that with a very simplified model of a spike generator, and it's going to look like this. So basically, the idea is that the cell gets input from either an electrode or from a synaptic input, and it integrates that input until it reaches a voltage, called V threshold. And once it reaches a threshold, we simply reset the voltage back down to a lower value, called "V reset." And then we just-- if we want to, we can just draw a line at that time, and that's the spike. But what really happens to the voltage here is that once the voltage hits V threshold, it gets reset back down to this [INAUDIBLE].. Any questions? Now, that kind of behavior is not-- it's fairly common in neurons. So this is an example of the voltage in a cell in motor cortex of the songbird-- this is from a paper from Rich Mooney's lab back in 1992. And you can see that if you inject current, the voltage in this cell ramps up until it hits a threshold voltage. It makes a spike, which is very narrow in time, but then the voltage resets down to a lower voltage. And that process just repeats over and over. So now what we're going to do is we're going to calculate the rate at which a neuron fires as a function of how much current gets injected into the cell, how much input a cell gets. So we're going to start by considering the case where the cell has no leaks in its membrane, no conductances. So let's plot voltage as a function of time, so this is voltage as a function of time, when we inject a step of current into the cell. So what's going to happen? So the cell starts at some voltage, and the current turns on. What's going to happen? Some fresh [INAUDIBLE]. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Excellent. Like that? AUDIENCE: Yes. MICHALE FEE: Very good. Then what's going to happen? Yeah. 
Excellent-- like that. And then what's going to happen? AUDIENCE: [INAUDIBLE] MICHALE FEE: So we inject a constant current into our cell, and now our cell is going to generate spikes, action potentials, at regular intervals. And the interval between those spikes is going to be controlled by how long it takes that capacitor to charge up from the reset voltage to the threshold. So we can actually just very simply calculate the firing rate of this neuron as a function of how much current we inject into the cell. So the firing rate is just 1 over that interval between the spikes. So now what do we do? Anybody want to guess what the next step is? How do we figure out how long it takes? We know this distance. This is kind of like some third grade word problem here. Come on, somebody help me out. You cross a river. You're paddling one second. The river's 10 meters across. Anybody, anybody? How about right there, white shirt, what do you think? Yes. [LAUGHS] Well, we're trying to calculate the time it takes for the voltage to go from here to here. What would it depend on? Yes? Yeah. Well, what are you doing? Yeah, but what are you-- you're calculating what? AUDIENCE: Delta t. MICHALE FEE: Yeah, you're calculating delta t. And you're using what about this line? Slope, exactly. So if we knew the slope of this line, we could calculate how long it takes to go from here to here. So let's do that. So we have this equation, C, dV, dt equals I sub E. What's the slope of this line? Yeah, it's just dV, dt. Good. So it's dV, dt. That's delta V over delta t. Delta V is just this voltage difference, right? And delta t is what we're trying to calculate. So we just solve-- we just plug this into here, and we just solve for 1 over delta t. And that's just 1 over C delta V times the injected current. That's probably what you were saying, right? Any questions about that? So let's look at what that looks like. The firing rate is proportional to the injected current. It goes through 0. 
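The firing-rate formula for the leak-free integrate-and-fire neuron, f = IE divided by C times delta V, can be checked against a direct simulation. All parameter values below are made up for illustration:

```python
# Integrate-and-fire with no leak: C dV/dt = I_e, reset at threshold.
# Predicted firing rate: f = I_e / (C * (V_th - V_reset)).
# All parameters here are invented for illustration.
C = 1.0          # nF
V_reset = -70.0  # mV
V_th = -50.0     # mV
I_e = 2.0        # nA

dt, T = 0.01, 1000.0  # time step and duration, ms
V, spikes, t = V_reset, 0, 0.0
while t < T:
    V += dt * I_e / C  # dV/dt = I_e / C (no leak)
    if V >= V_th:      # threshold crossed: count a spike and reset
        spikes += 1
        V = V_reset
    t += dt

rate_sim = spikes / (T / 1000.0)                    # spikes per second
rate_pred = I_e / (C * (V_th - V_reset)) * 1000.0   # 1/ms -> 1/s
print(round(rate_sim), round(rate_pred))  # both approximately 100 Hz
```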
At 0 injected current, the voltage is constant and the neuron will never fire. But if we inject a tiny bit of current, the voltage will slowly ramp up, and it will eventually hit the threshold and then reset. So the firing rate is proportional to the injected current, with slope 1 over C delta V. If the capacitor is much bigger, what happens to the firing rate? It slows down, because as you inject current, the voltage increases more slowly because the capacitance is bigger. Now, let's add our leak conductance back in. So this leak conductance, we're going to think of it as a potassium conductance. So I'm going to call it "G leak," because I'm going to use something like that later in the Hodgkin-Huxley model, but you should think about it just as a potassium conductance. So what's going to happen? Let's go to our plot of voltage as a function of time during our injected current. Let's say that we start at V reset, our voltage at V reset. Or actually, let's start it at E leak. Now what happens, anybody? Yes? AUDIENCE: It relaxes [INAUDIBLE].. MICHALE FEE: Good. The voltage starts here, because there's 0 injected current. As soon as you inject current, V infinity jumps up to here, let's say. The voltage relaxes exponentially toward V infinity until it hits the threshold-- very good. Then what happens? Somebody else. Yeah? How about you in the gray shirt. AUDIENCE: It drops. MICHALE FEE: Great. So if you know the answer, raise your hand. We'll go much faster. So at this point, it will jump back down to where? Good. And then what? Anybody else? If you know the answer, raise your hand. Let's just do an exercise. Raise your hand, everybody who knows-- up high. Raise your hand up high. Now, one of you say it. [LAUGHTER] Everybody, say it. AUDIENCE: It's relaxes. MICHALE FEE: Great, music to my ears. It relaxes exponentially back toward V infinity until it hits threshold and it jumps back. It keeps doing it. Now, what happens here? The current turns off. What happens? 
Who knows the answer to that question? Raise your hands up high. Shout it out. AUDIENCE: It goes back to the-- AUDIENCE: It relaxes. MICHALE FEE: Excellent. It relaxes back to? AUDIENCE: [INAUDIBLE] MICHALE FEE: Why is it E leak, not V reset? When there's no current injected, E leak is V infinity. So that's why it relaxes back to E leak. Any questions? No? It's pretty simple, right? So it's pretty simple. And we can actually derive the expression now for this delta t, and therefore, we can derive the equation for the firing rate as a function of injected current. It's a little more complicated than the other one. I'm going to go through it in a little bit less detail, because the answer turns out to actually be pretty simple [AUDIO OUT]. So we're going to calculate the firing rate, which is just 1 over this delta t here. Before we actually get into the math, what happens if V infinity is down here? Who knows the answer to that question? Raise your hands. Shout it out. It won't spike, right? Good. So V infinity actually has to be above V threshold in order for that neuron to spike. Does that make sense? So something is different already, because before we found that for any current we injected the cell would eventually spike. But now what we see is when we have a leak, the V infinity actually has to be above V threshold. And so that means there's going to be some threshold current below which the neuron won't spike. That's pretty cool. We can see that right away. So I just want to define one quantity, called the "rheobase," and that is that current at which the neuron begins to spike. So if we start our cell at V reset, because let's say it just spiked, so it relaxes exponentially toward V infinity, but let's say that V infinity is right at threshold. So you can see that the time to reach the threshold is actually very long. If V infinity is equal to V threshold, it will never actually reach it. Even when V infinity is equal to V threshold, the firing rate is 0. 
Because if you inject just a tiny bit more current, now it'll begin to spike. We can calculate the injected current required to reach threshold. That's called the rheobase. We just set V infinity equal to V threshold, right? And we use our equation for-- that's V infinity, and we just set it equal to threshold. Now we just solve for the injected current, I sub E. That's the injected current required to make V infinity reach V threshold, and you can see that it's just G leak times V threshold minus E leak. And we call that I threshold. So here's the way it looks. The firing rate of this neuron is 0 for low currents. As you inject more current, V infinity increases, but the cell still can't reach threshold until you inject an amount of current, such that V infinity reaches V threshold, and then it begins to spike. And the firing rate increases rapidly and keeps going up. Does that make sense? So many neurons have that property of having a threshold current below which the cell won't spike. So now let's actually derive the equation for this firing rate as a function of injected current. So here's how we're going to do that. The cell just spiked, and we're going to calculate the amount of time before the cell spikes again. So we're going to start the voltage at V reset. We know that at some injected current above threshold the cell relaxes exponentially to V infinity. And we're just going to calculate how long it takes to reach threshold. So you know that that's an exponential, right? In fact, we wrote down the solution to that exponential a bunch of times. The difference from V to V infinity decreases exponentially. But we know a bunch of these values, right? We know that we're calculating these voltages when tau equals, sorry, when t equals delta t. And we know the initial voltage, that's V reset. And we know the voltage at time delta t, and that's just equal to V threshold. 
So we can just substitute those quantities into this equation. So now we have V threshold minus V infinity equals V reset minus V infinity times e to the minus delta t over tau. Does that make sense? Everyone see what I just did? We're just calculating this time that it takes the neuron to relax exponentially from V reset to V threshold. We know all these quantities, so we just stick them into this equation. And what do we solve for? Delta t, good. So we take the natural log of-- well, we divide through by V reset minus V infinity, take the natural log, and we solve for delta t. So delta t equals minus tau, natural log, V infinity minus V threshold over V infinity minus V reset. It's kind of messy, right? I don't know. The shape of that doesn't really leap to my mind. So what we're going to do is actually just simplify this expression in a limit, in a limit that the injected current is large. So what happens here when the injected current is large? What gets big? V infinity gets big, right? When you inject a lot of current, V infinity is very high. Does that make sense? And when V infinity is really big, this expression approaches what? 1. And what is the log of 1? AUDIENCE: 0. MICHALE FEE: 0. So this expression, this expression approaches 1, and this expression, the log of that, approaches 0. So what we can do is a linear approximation of this term right here. So here's what we're going to do. We're going to work in the limit that V infinity is much bigger than V reset or V threshold. We're going to use the approximation that log of 1 plus alpha is just alpha. So as this approaches 1, you can write it as 1 plus alpha. That whole thing there approximates to alpha. And when you do that and you solve for the firing rate, what you find is that the firing rate is just 1 over C delta V times the injected current minus the threshold current. Threshold current is just the rheobase that we calculated before. Well, what does that look like? 
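The interspike-interval formula just derived, delta t = minus tau times the natural log of (V infinity minus V threshold) over (V infinity minus V reset), can be cross-checked by stepping the exponential relaxation numerically. The parameter values here are invented for illustration:

```python
import math

# Interspike interval of the leaky integrate-and-fire neuron:
# delta_t = -tau * ln((V_inf - V_th) / (V_inf - V_reset)).
# Parameters are illustrative only.
tau = 10.0       # ms
V_reset = -70.0  # mV
V_th = -50.0     # mV
V_inf = -40.0    # mV (above threshold, so the cell fires)

dt_spike = -tau * math.log((V_inf - V_th) / (V_inf - V_reset))

# Cross-check by integrating the relaxation from V_reset to V_th:
t, V, dt = 0.0, V_reset, 0.001
while V < V_th:
    V += dt * (V_inf - V) / tau
    t += dt

print(round(dt_spike, 2), round(t, 2))  # both about 11 ms
```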
Well, what is the firing rate, first of all? I kind of simplified this a bit. What is the firing rate when the current is below I threshold? 0. And then if it's above I threshold-- so this expression that I wrote right here is true only if the injected current is greater than I threshold. And it's zero below that. So now, what does this look like? The firing rate is 0 until you hit threshold. And then what? It increases-- AUDIENCE: Linearly. MICHALE FEE: --linearly. So this equation is linear in the injected current. The slope is actually exactly the same as it was for the case where there was no leak. So the firing rate is 0. Once you hit the threshold, the firing rate of the neuron increases approximately linearly for large currents. And if you look down here, what you see is that the actual solution is that the firing rate jumps up at threshold and then tracks along the linear approximation. So the dashed line there is the actual solution. The solid line is the linear approximation. And that right there is really a very good model for a lot of neurons. Most neurons sort of saturate a little bit. Their firing rate kind of flattens out a little bit as you go to very high firing rates. But that's a pretty good approximation. Yes? AUDIENCE: What's [INAUDIBLE]? MICHALE FEE: Delta V is the difference from V reset to V threshold. Those are just parameters of the model. There was another question here. Skyler? AUDIENCE: I had the same question. MICHALE FEE: Same question? OK. Anything else? That's the integrate and fire neuron. That's probably the most commonly used model of neurons in neuroscience-- pretty simple. Or I should say, that's the most commonly used model of spiking neurons. So you can actually take neurons that behave like this, and assemble them into complex networks, and study how network interactions occur with a spiking model. And this model captures most of the interesting, important behavior of spiking neurons. Question, Danny? 
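The whole model can also be simulated directly with forward Euler integration. This is a minimal illustrative sketch; the function name `simulate_lif` and all parameter values are assumptions, not from the lecture:

```python
# Forward-Euler simulation of the integrate and fire neuron described above.
# C dV/dt = -g_leak*(V - E_leak) + Ie; spike and reset when V crosses threshold.
def simulate_lif(Ie, T=500.0, dt=0.1):
    C, g_leak = 100.0, 10.0                      # pF, nS (assumed values)
    E_leak, V_th, V_reset = -70.0, -50.0, -70.0  # mV (assumed values)
    V = E_leak
    spikes = 0
    for _ in range(int(T / dt)):
        V += dt * (-g_leak * (V - E_leak) + Ie) / C
        if V >= V_th:                            # threshold crossed:
            spikes += 1                          # count a spike and
            V = V_reset                          # reset the voltage
    return spikes
```

Below the 200 pA rheobase of these parameters the cell stays silent; above it, the spike count grows with the injected current.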
Any other questions? So let's come back to our Hodgkin-Huxley model, our equivalent circuit. So what we just described was a model neuron in which we replaced these sodium and potassium conductances that actually produce these action potentials with a very simple spike generator. But now what we're going to do is we're going to come back to our model, and we're going to flesh out the biophysical details that allow these two conductances right here to produce action potentials. Now, in fact, most of the time when we model networks of neurons, we simplify the spike generator to something like an integrate and fire spike generator. But the framework that Hodgkin and Huxley developed for describing time-dependent and voltage-dependent conductances is so powerful and so commonly used to describe conductances that it's really worth understanding that mathematical description, that physical description, of ionic conductances and how they depend on voltage and time. So that's what we're going to do next. So the first thing we do is we notice that in the Hodgkin-Huxley model we have three conductances. We have a leak conductance, which is very much like the leak conductance that we just used in the integrate and fire model. It has a reversal potential of around minus 50 millivolts, and it's just always on. It's just a constant conductance. We have these two other conductances, a sodium conductance and a potassium conductance, that are both time-dependent and voltage-dependent. Each one of those conductances has a current associated with it, currents flowing through ion channels. The total membrane current is just the sum of all of those currents. That's just definition. The total ionic membrane current is just the sum of contributions from sodium channels, potassium channels, and a leak. So the equation for our Hodgkin-Huxley model, again using Kirchhoff's current law, is that the sum of all the currents into these nodes has to equal 0. 
So the membrane ionic current plus the capacitive current equals the injected electrode current. Now, each one of these currents can be written down in the same form that we developed before for the potassium current. So the sodium current is just the sodium conductance times what? What is that? Driving potential, right? The driving potential for sodium. But the sodium conductance now is voltage- and time-dependent. And it's that voltage and time dependence that gives sodium channels the properties that they need to generate action potentials. It's just analogous to what we already did before for the potassium conductance. And here's the potassium current. It's just GK times the driving potential for potassium, and the potassium conductance is voltage- and time-dependent. Again, EK is minus 75, ENA is plus 55. And our leak current is just the leak conductance times the driving potential for the leak current. The difference is that, in this model now, the leak conductance is just constant. There's no time dependence, it's always there, and it's voltage-independent. Any questions about that? So the name of the game here in understanding how this thing works is to figure out where this time dependence and voltage dependence comes from. What's that? AUDIENCE: [INAUDIBLE] MICHALE FEE: They're very close. The potassium equilibrium potential is always very close to minus 75. In some neurons, it might be as low as minus 95, but it's always in that range. The sodium reversal potential is always plus 50-ish. Well, it's highly consistent across mammals, and the numbers for squid are pretty close to that as well. I think these are the numbers for squid that come from Hodgkin and Huxley. Questions? Now, how do these things generate an action potential? That's the next question. 
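The sum-of-currents bookkeeping can be written out as a sketch. Note the conductance arguments here are plain numbers standing in for the voltage- and time-dependent conductances the course derives next:

```python
# Total ionic membrane current as the sum of sodium, potassium, and leak terms,
# each written as conductance times driving potential.
E_na, E_k, E_leak = 55.0, -75.0, -50.0  # reversal potentials, mV (from the lecture)

def membrane_current(V, g_na, g_k, g_leak):
    i_na = g_na * (V - E_na)        # sodium current
    i_k = g_k * (V - E_k)           # potassium current
    i_leak = g_leak * (V - E_leak)  # leak current (constant conductance)
    return i_na + i_k + i_leak      # total ionic current
```

At V equal to E sub Na the sodium driving potential vanishes, so a pure sodium current is 0; below E sub Na the sodium current is inward (negative by this sign convention).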
Just in principle, how do you think about conductances like this generating action potentials? So let's say that the conductances here are 0. We set those to 0. Those little arrows mean variable or adjustable. So we can imagine that we have our hand on the knob, and let's just turn those both down to 0. So what's the voltage in the cell going to be? So what does the cell do? Well, if this is one of those moments where everybody knows the answer and they're just not saying it-- anybody? Skylar? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. So roughly, it's close to that. It's minus 50, in this case. Good. So if these conductances are 0, then the cell is going to be sitting at the voltage of this battery. There's 0 current, steady state, 0 current. The voltage drop across here is 0, because there is 0 current. And so the inside of the cell had better be sitting at the voltage of that battery, which is minus 50. So now, what happens if we suddenly turn on a conductance? What do I mean by "turn on a conductance"? We make the resistance really small, or we make the conductance really big. So what are we doing? Let's turn that conductance on as much as possible, which means we're setting that resistor to 0. Actually, if this resistance is 0, then the voltage inside has to be set to the voltage of that battery. Does that make sense? Well, if we have no conductance here, then the voltage inside the cell will relax toward E sub L. But if we now turn on this conductance, setting that resistor to 0, the voltage will jump up to the voltage of that battery. Now turn that resistor back up to some big value. What's the voltage going to do? Relax back to E sub L. So Daniel made this nice simulation of what happens. So here's what is going to happen. These conductances are going to start at 0, and then we're going to make this resistor small-- not 0, but small. 
Then we're going to make this resistor big, and we're going to make that resistor small. So watch what happens. So the conductances are going to be plotted along the bottom, and the voltage of the cell is going to be plotted as a function of time here. And the green and red show the reversal potentials for the sodium and potassium. There, the sodium conductance just got turned on. And what happened? The voltage of the cell jumps up toward ENA. Then we turn on the potassium conductance, and we turn off the sodium conductance, and the voltage gets dragged down to EK. Then when we turn off the potassium conductance, the voltage relaxes back up to E sub L. So look, conductances are just knobs that allow the cell to control its voltage-- an anthropomorphic way to put it. You can control the voltage of the cell just by setting the values of those resistors. If you make this resistor really small and that resistor big, the voltage jumps up toward ENA. Why is that? Because you're connecting the inside of the cell to that battery. Turn that off, and then turn on this conductance, setting that resistor to be very small, and now you're connecting the inside of your cell to this battery, which is negative, down here. So you can just control the voltage of the cell up and down just by twiddling these knobs. Another way to think about it is this. If this resistor is big, and if you set that resistor to be really small, then the voltage of that battery dominates. And over the timescale of any process that we're considering here, the ionic concentrations inside the cell and outside of the cell don't change. Does that make sense? Yes? AUDIENCE: So if we set both of them? MICHALE FEE: So you set both of these to 0? AUDIENCE: Not 0. [INAUDIBLE] MICHALE FEE: Yeah. AUDIENCE: [INAUDIBLE] MICHALE FEE: If the conductance is really high, that means setting the resistors really small. AUDIENCE: Exactly. MICHALE FEE: That's a great question. What happens? 
AUDIENCE: [INAUDIBLE] MICHALE FEE: Exactly. AUDIENCE: Isn't that like-- MICHALE FEE: That's exactly right. If you turn on both of these conductances, then the voltage goes somewhere in the middle. And the voltage that it goes to is actually-- you have to calculate it differently. The Nernst potential that we calculated is calculated only for the case where you have one ion. If you want to calculate the equilibrium potential when you have multiple pores open, then you have to use a different method of calculating. You have to actually calculate the currents flowing in each one of those channels, based on the permeabilities of the channels, and you get a different expression, called the Goldman-Hodgkin-Katz equation. But the bottom line is, if you open up both of those conductances, the voltage is somewhere in the middle. And if you want to get the exact right answer, you have to use the GHK equation. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Almost. We're not quite there yet. We're going to get there. But you can see what happens here is that these conductances are actually voltage-dependent. So for example, the sodium conductance turns on at higher voltages. When this conductance starts getting bigger, what happens to the voltage in the cell? It starts going up, right? But if this is voltage-dependent and it turns on at a higher voltage, what happens to the conductance? When the voltage gets a little bit bigger, what happens to the conductance, which does what to the voltage? Makes it go up even faster, right? And so you get this runaway process where this conductance turns on very quickly and the voltage jumps up. That's the essence of the action potential. But the essential picture that I wanted you to get from this slide is that these are just knobs. And when you turn the knobs, that controls the voltage in the cell. Each one of these is causing the cell to be dragged toward its reversal potential. 
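For the equivalent-circuit model with ohmic conductances (batteries plus resistors), the steady-state voltage works out to the conductance-weighted average of the reversal potentials; for real channels with different permeabilities you would use the GHK equation instead, as the lecture notes. A sketch:

```python
# Steady-state voltage of the equivalent circuit when several ohmic conductances
# are on at once: the conductance-weighted average of the reversal potentials.
# (Real channels need the GHK equation; this is the battery-and-resistor model.)
def steady_state_voltage(conductances, reversals):
    g_total = sum(conductances)
    return sum(g * E for g, E in zip(conductances, reversals)) / g_total

E_na, E_k = 55.0, -75.0
print(steady_state_voltage([1.0, 1.0], [E_na, E_k]))   # -10.0: right in the middle
print(steady_state_voltage([10.0, 1.0], [E_na, E_k]))  # dragged up close to E_na
```

With equal sodium and potassium conductances the voltage sits halfway between the two batteries; tilt the knobs and the voltage gets dragged toward the dominant reversal potential.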
So if this conductance, the sodium conductance, is big, the voltage in the cell gets dragged toward ENA. If the potassium conductance is big, the voltage in the cell gets dragged toward EK. And next time, we're going to go through the process of deriving the voltage dependence and the time dependence of those ion channels, the sodium and potassium ion channels, which explains how you get this action potential in a neuron. So that's next Tuesday.
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 11: Spectral Analysis, Part 1
MICHALE FEE: So who remembers what that's called? A spectrograph. Good. And it's a spectrogram of me. Good morning, class. OK. So it's a spectrogram of speech. And so we are going to continue today on the topic of understanding-- developing methods for understanding how to characterize and understand temporally structured signals. So that is the microphone recording of my voice saying "good morning, class." And then this is a spectrogram of that signal where at each moment in time, you can actually extract the spectral structure of that signal. And you can see that the information in speech signals is actually carried in parts of the signal in the way the power in the signal at different frequencies changes over time. And your ears detect these changes in frequency and translate that into information about what I'm saying. And so we're going to today start on a-- well, we sort of started last time, but we're really going to get going on this in the next three lectures. We're going to develop a powerful set of tools for characterizing and understanding the temporal structure of signals. So this is the game plan for the next three lectures. Today we're going to cover Fourier series-- complex Fourier series. We're going to extend that to the idea of the Fourier transform. And then so the Fourier transform is sort of a very general mathematical approach to understanding the temporal structure of signals. That's more of a-- you think of that more in terms of doing analytical calculations or sort of conceptual understanding of what Fourier decomposition is. But then there's a very concrete algorithm for characterizing spectral structure of signals called the fast Fourier transform. And that's one class of methods where you sample signals discretely in time and get back discrete power at discrete frequencies. So that's called a discrete Fourier transform. And most often that's used to compute the power spectrum of signals. And so that's what we're going to cover today. 
And then in the next lecture, we're going to cover a number of topics leading up to spectral estimation. We're going to start with the convolution theorem, which is a really powerful way of understanding the relationship between convolution in the time domain and multiplication in the frequency domain. And the convolution theorem is really powerful, allowing you to intuitively understand the spectral structure of different kinds of signals that you can build by convolving different basic elements. So if you understand the Fourier decomposition of a square pulse and a train of pulses or a Gaussian, you can basically, just by kind of thinking about it, figure out the spectral structure of a lot of different signals by combining those things sort of like LEGO blocks. It's super cool. We're going to talk about noise and filtering. We're going to talk about the Shannon-Nyquist sampling theorem, which tells you how fast you have to sample a signal in order to perfectly reconstruct it. It turns out it's really amazing. If you have a signal in time, you can sample that signal at regular intervals and perfectly reconstruct the signal if that signal doesn't have frequency components that are too high. And so that's captured in this Shannon-Nyquist sampling theorem. That turns out to actually be a topic of current debate. There was a paper published recently by somebody claiming to be able to get around the sampling theorem and record neural signals without satisfying the conditions that the Shannon-Nyquist sampling theorem requires. And Markus Meister wrote a scathing rebuttal to that, basically claiming that they're full of baloney. And so those folks who wrote that paper maybe should have taken this class. So anyway, you don't want to be on the wrong end of Markus Meister's blog post. So pay attention. So then we're going to get into spectral estimation. Then in the last lecture, we're going to talk about spectrograms. 
We're going to talk about how to compute spectrograms and understand really how to take the data and break it into samples, called windowing, and how to multiply those samples by what's called a taper to avoid contaminating the signal with lots of noise that's unnecessary. We're going to understand the idea of time-bandwidth product. How do you choose the width of that window to emphasize different parts of the data? And then we're going to end with some advanced filtering methods that are commonly used to control different frequency components of signals in your data. So a bunch of really powerful things. Here's what we're going to talk about today. We're going to continue with this Fourier series. We started with symmetric functions last time. We're going to finish that and then talk about antisymmetric functions. We're going to extend that to complex Fourier series, introduce the Fourier transform and the discrete Fourier transform and this algorithm, the fast Fourier transform, and then I'm going to show you how to compute a power spectrum. So just to make it clear, all of this stuff is basically going to be to teach you how to use one line of MATLAB. One function, FFT. Now, the problem is it's really easy to do this wrong. It's really easy to use this but not understand what you're doing and come up with the wrong answer. So all of these things that we're going to talk about today are just the kind of basics that you need to understand in order to use this very powerful function in MATLAB. All right. So let's get started. So last time, we introduced the idea of a Fourier series. We talked about the idea that you can approximate any periodic function. So here I'm just taking a square wave that alternates between positive and negative. It's periodic with a period capital T. So it's a function of time. It's an even function or a symmetric function, because you can see that it basically has mirror symmetry around the y-axis. 
It's even because even polynomials also have that property of being symmetric. We can approximate this periodic function of t as a sum of sine waves or cosine waves, in this case cosine waves. We can approximate that as a cosine wave of the same period and the same amplitude. So we can approximate it as a coefficient times cosine 2 pi f0 t, where f0 is just 1 over the period. So if the period is one second, then the frequency is 1 hertz, 1 over 1 second. We often use a different representation of frequency, which is usually called omega, which is angular frequency. And it's just 2 pi times this oscillation frequency. And it has units of radians per second. So we talked about the fact that you can approximate this periodic function as a sum of cosines. We talked about the idea that you only need to consider cosines that are integer multiples of omega 0, because those are the only cosine functions, the only functions, that are also periodic with period T. So a function cosine 3 omega 0 t is also periodic with period T. Does that make sense? So now we can approximate any periodic function, in this case any even or symmetric periodic function, as a sum of cosines of frequencies that are integer multiples of omega 0. And each one of those cosines will have a different coefficient. So here's an example where I'm approximating this square wave here as a sum of cosines of these different frequencies. And it turns out for a square wave, you only need the odd multiples of omega 0. So here's what this approximation looks like for the case where you only have a single cosine function. You can add another cosine of 3 omega 0 t, and you can see that that function starts getting a little bit more square. You can add a cosine 5 omega 0 t. 
And as you keep adding those things, again, with the correct coefficients in front of these, you can see that the function more and more closely approximates the square wave that we're trying to approximate. Yes, Habiba? AUDIENCE: Why do we only need the odd multiples? MICHALE FEE: In general, you need all the multiples. But for this particular function, you only need the odd ones. Here's another example. So in this case, we are summing together cosine functions to approximate a train of pulses. So the signal we're trying to approximate here just has a pulse every one unit of time, one period. And you can see that to approximate this, we can basically just sum up all the cosines of all frequencies n omega 0. And you can see that at time 0, all of those functions are positive. And so all of those positive contributions to that sum all add up. And as you add them up, you get a big peak. That's called constructive interference. So all those peaks add up. And you also get those peaks all adding up one period away, one period of cosine omega 0. You can see they add up again. So you can see this is a periodic function. In this time window between those peaks, you can see that you have positive peaks of some of those cosines, negative peaks, positive, negative. They just sort of all add up. They interfere with each other destructively to give you a 0 in the intervals between the peaks. Does that make sense? And so basically, by choosing the amplitude of these different cosine functions, you can basically build any arbitrary periodic function down here. Does that make sense? All right. There's one more element that we need to add here for our Fourier series for even functions. Anybody have any idea what that is? Notice here I've shifted this function up a little bit. So it's not centered at 0. A constant term. What's called a DC term. We basically take the average of that function. We add it here. That's called a DC term. 
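The odd-harmonic construction of the square wave can be sketched numerically. The coefficients 4/(pi n) with alternating sign are the standard square-wave Fourier coefficients; the function name is my own:

```python
import math

# Building a square wave from odd cosine harmonics (period T = 1, value +1
# near t = 0 and -1 near t = 0.5). Each added term makes the sum squarer.
def square_wave_approx(t, n_terms):
    w0 = 2.0 * math.pi                 # fundamental frequency for period T = 1
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1                  # only odd multiples of w0 contribute
        a_n = 4.0 / (math.pi * n) * (-1.0) ** k
        total += a_n * math.cos(n * w0 * t)
    return total
```

With one term you get a plain cosine; with a few hundred terms the sum sits within about a percent of plus or minus 1 away from the jumps.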
a 0 over 2 is essentially the average of the function we're trying to approximate. All right. Good. We can now write that as a sum. y even of t equals a0 over 2 plus a sum over all these different n's of a sub n, which is a coefficient, times cosine n omega 0 t. Omega 0 is 2 pi over this time interval here, the periodicity. All right. Good. How do we find those coefficients? So I just told you that the first coefficient, the a0 over 2, is just the average of our function over one time window from minus T over 2 to plus T over 2. It's just the integral of that function over one period divided by T. And that gives you the average, which is a0 over 2. All right, any questions about that? It's pretty straightforward. What about this next coefficient, a1? So the a1 coefficient is just the overlap of our function y of t with this cosine. We're just multiplying y of t times cosine of omega 0 t, integrating over time, and then multiplying by 2 over T. So that's the answer. And I'm going to explain why that is. That is just a correlation. It's just like asking-- let's say we had a neuron with a receptive field of cosine omega 0 t. We're asking how well does our signal overlap with that receptive field. Does that make sense? We're just correlating our signal with some basis function, with some receptive field. And we're asking how much overlap is there. The a2 coefficient is just the overlap of the function y with cosine 2 omega 0 t. And the a sub n coefficient is just the overlap with cosine n omega 0 t. Just like that. And you can see that this average that we took up here is just the generalization of this to the overlap of our function with cosine 0 omega 0 t. Cosine of 0 omega 0 t is just 1. And so this coefficient a0 just looks the same as this. It's just that in this case, it turns out to be the average of the function. Any questions about that? So that is, in general, how you calculate those coefficients. 
So you would literally just take your function, multiply it by a cosine of some frequency, integrate it, and that's the answer. Let's just take a look at what some of these coefficients are for some really simple functions. So there I've rewritten what each of those coefficients is as an integral. And now let's consider the following function. So let's say our function is 1. It's just a constant at 1. So you can see that this integral from minus t over 2 to t over 2, that integral is just t multiplied by 2 over t gives that coefficient as just 2. If our function is cosine omega 0 t, you can see that if you put a cosine in here, that averages to 0. If you put a cosine in here, you get cosine squared. The integral of cosine squared is just half of basically the full range. So that's just t over 2. When you multiply by 2 over t, you get 1. And the coefficient a2 for a function cosine omega t is 0, because the integral of cosine omega 0 t times cosine 2 omega 0 t is 0. All right. If we have a function cosine 2 omega 0 t, then these coefficients are 0, and that coefficient is 1. You can see that you have this interesting thing here. If your function is cosine omega 0 t, then the only coefficient that's non-zero is the one that you're overlapping with cosine omega 0 t. It's only this first coefficient that's non-zero. If your function has a frequency 2 omega 0 t, then the only coefficient that's non-zero is the a2. So what that means is that if the function has maximal overlap, if it overlaps one of those cosines, then it has 0 overlap with all the others. And we can say that set of cosine functions, cosine omega 0 t, cosine 2 omega 0 t, forms what's called an orthogonal basis set. We're going to spend a lot of time talking about basis set later, but I'm just going to throw this word out to you so that you've heard it when I come back to the idea of basis sets later. You're going to see this connection. 
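The projection formula for the a sub n coefficients can be checked with a simple midpoint-rule integral. This is an illustrative sketch, not the course's MATLAB code:

```python
import math

# Numerical check of the projection formula
#   a_n = (2/T) * integral over one period of y(t) * cos(n*w0*t) dt,
# done here with a midpoint-rule sum over [-T/2, T/2].
def cosine_coefficient(y, n, T=1.0, samples=10000):
    w0 = 2.0 * math.pi / T
    dt = T / samples
    total = 0.0
    for k in range(samples):
        t = -T / 2.0 + (k + 0.5) * dt          # midpoints of [-T/2, T/2]
        total += y(t) * math.cos(n * w0 * t) * dt
    return 2.0 / T * total

w0 = 2.0 * math.pi
a0 = cosine_coefficient(lambda t: 1.0, 0)               # ~2: twice the average
a1 = cosine_coefficient(lambda t: math.cos(w0 * t), 1)  # ~1: full overlap
a2 = cosine_coefficient(lambda t: math.cos(w0 * t), 2)  # ~0: orthogonal
```

This reproduces the table from the lecture: a constant gives a0 of 2, and cosine omega 0 t overlaps only the n equals 1 basis function, which is the orthogonality property.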
So the basic idea is that what we're doing is we're taking our signal, which is a vector. It's a set of points in time. We can think of that as a vector in a high dimensional space. And we're simply expressing it in a new basis set of cosines of different frequencies. So each of those functions, cosine n omega t, is like a vector in a basis set, and our signal is like a vector. And what we're doing when we're doing these projections is we're simply computing the projection of that vector, our signal, onto these different basis functions, these different basis vectors. So we're just finding the coefficients so that we can express our signal as a sum of a coefficient times a basis vector plus another coefficient times another basis vector and so on. So for example, in the simple standard basis where this vector is 0 1 and that vector is 1 0, you can write down an arbitrary vector as a coefficient times this plus another coefficient times that. That make sense? And how do we find those coefficients? We just take our vector and dot it onto each one of these basis vectors to get the coefficients x1 and x2. Does that make sense? You don't need to know this for this section, but we're going to come back to this. I would like you to eventually kind of combine these views of taking signals and looking at projections of those into new basis sets. And as you know, how you see things depends on how you're looking at them, the direction that you look at them. That's what we're doing. When we take a signal and we project it onto a function, we're taking a particular view of that function. So as you know, the view you have on something has a big impact on what you see. Right? So that's all we're doing: we're taking functions and finding the projection on which we can see something interesting. That's it. That's all spectral analysis is. And the particular views we're looking at are projections onto different periodic functions. Cosines of different frequencies. All right? 
And what you find is that for periodic signals, there are certain views where something magically pops out and you see what's there that you can't see when you just look at the time domain. All right. So we looked at even functions or symmetric functions. Now let's take a look at odd functions or antisymmetric functions. These are called odd because odd polynomials like x cubed look like this: if it's negative on one side, it's positive on the other side. Same here. If it's negative here, then it's positive there. So we can now write down Fourier series for odd or antisymmetric functions. What do you think we're going to use? Instead of cosines, we're going to use sines, because sines are antisymmetric around the origin. And we're still going to consider functions that are periodic with period T. We can take any antisymmetric function and approximate it as a sum of sine waves of frequency 2 pi over T or, again, omega 0, and integer multiples of that omega 0. All right. So again, our odd functions can be approximated as a sum of components, contributions of different frequencies, with a coefficient times sine of omega 0 t plus another coefficient times sine of 2 omega 0 t and so on. And we can write that as a sum that looks like this. So a sum over n from 1 to infinity of coefficient b sub n times sine of n omega 0 t. And why is there no DC term here? Good. Because an antisymmetric function can't have a DC offset. So for arbitrary functions, you can write down any arbitrary periodic function as the sum of a symmetric part and an antisymmetric part. So we can write down an arbitrary function as a sum of these cosines plus a sum of sine waves. So that's Fourier series. Any questions about that? So this is kind of messy. And it turns out that there's a much simpler way of writing out functions as sums of periodic functions. So rather than using cosines and sines, we're going to use complex exponentials. 
And that is what complex Fourier series does. All right, so let's do that. So you probably recall that you can write down a complex exponential e to the i omega t as cosine omega t plus i sine omega t. So e to the i omega t is just a generalization of sines and cosines. So the way to think about this is e to the i omega t is a complex number. If we plot it in a complex plane where we look at the real part along this axis, the imaginary part along that axis, e to the i omega t just lives on this circle. No matter what omega t is, e to the i omega t just sits on this circle in the complex plane, the unit circle in the complex plane. e to the i omega t is a function of time. It simply has a real part that looks like a cosine. So as you increase t, or the phase omega t, the real part just oscillates sinusoidally back and forth like this as a cosine. The imaginary part just oscillates back and forth as a sine. And when you put them together, something that goes back and forth this way as a cosine and up and down that way as a sine just traces out a circle. So e to the i omega t traces out a circle in this direction, and as time increases, it just goes around and around like this. e to the minus i omega t just goes the other way. That make sense? You've got the real part that's going like this, the imaginary part that's going like this. And you put those together and they just go around in a circle. So you can see it's a way of combining cosine and sine together in one function. So what we're going to do is we're going to rewrite our Fourier series: we're going to replace the sines and cosines with e to the i omega t and e to the minus i omega t. So we're just going to solve these two equations for cosine and sine, and we're going to take this and plug it into our Fourier series and see what we get. So let's do that. And remember, 1 over i is just minus i. 
So here's our Fourier series with our sum of cosines, our sum of sines. We're just going to replace those things with the e to the i omega t plus e to the minus i omega t and so on. So there we go. Just replacing that. There's a 1/2 there. And now we're just going to do some algebra. And we're going to collect the e to the i omega t's and e to the minus i omega t's together. And this is what it looks like. So you can see that what we're doing is collecting this into a bunch of terms that have e to the positive i n omega t here and e to the minus i n omega 0 t there. And now we have still a sum of three things. So it doesn't really look like we've really gotten anywhere. But notice something. If we just put the minus sign into the n, then we can combine these two into one sum. And this-- if n is 0, what is e to the i n omega t? It's just 1. So we can also write this as something times e to the i n omega t as long as n is 0. So that's what we do. Oh, and by the way, these coefficients here we can just rewrite as sums of those coefficients up there. Don't worry. This all looks really complicated. By the end, it's just boiled down to one simple thing. That's why we're doing this. We're simplifying things. So when you do this, when you rewrite this, this looks like a sum over n equals 0. This is a sum over positive n, n equals 1 to infinity. This is a sum over negative n, minus 1 to minus infinity. And all those combine into one sum. So now we can write down any function y of t as a sum over n equals minus infinity to infinity of a coefficient a sub n times e to the i n omega 0 t. So we went from having this complicated thing, this sum over our constant terms, sines, and cosines, and we boiled it down to a single sum. That's why these complex exponentials are useful. Because we don't have to carry around a bunch of different basis functions to describe an arbitrary signal y. But remember, this is just a mathematical trick to hide sines and cosines. 
A very powerful trick, but that's all it's doing. It's hiding sines and cosines. All right. So we've replaced our sums over cosines and sines with a sum of complex exponentials. All right. Remember what we're doing here. We're finding a new way of writing down functions. So what's cool about this, what's really interesting about this, is that for some functions-- so what we're doing is we have an arbitrary function y. And we're writing y down with just some numbers a sub n. And what's cool about this is that for some functions y, you can describe that function with just a few of these coefficients a. Does that make sense? So let me just show you an example of some functions that look really, really simple when you rewrite them using these coefficients a. So here's an example. So here's a function of n that has three numbers. a sub minus 1, at n equals minus 1, is 1/2, a sub 0 is 0, and a sub 1 is 1/2. And all the rest of them are 0. So really, we only have two non-zero entries in this sum. So what function is that? It's just a cosine. So we have-- let's write this out. y equals a sum over all of these things. 1/2 e to the minus i omega 0 t-- the first n is minus 1-- plus 1/2 e to the plus i omega 0 t. We're just writing out that sum. And-- sorry, I didn't tell you what that equation was. That's Euler's equation. The first term is just 1/2 times cosine omega 0 t minus i sine omega 0 t. The second is just 1/2 times cosine omega 0 t plus i sine omega 0 t. The sines cancel. And you're left with cosine omega 0 t. So here's this function of time that goes on infinitely. If you wanted to write down all the values of cosine omega 0 t, you'd have to write down an awful lot of numbers. And here we can write down that same function with two numbers. So this is a very compact view of that function of time. Here's another function. What function do you think that is? What would be the time domain equivalent of this set of Fourier coefficients? Good. So we're going to do the same thing. Just write it out.
All these components are 0 except for two of them: n equals minus 2 and n equals plus 2. You just put those in there as 1/2 e to the minus i 2 omega 0 t plus 1/2 e to the plus i 2 omega 0 t. And if you write that out, the sines cancel, and you have cosine 2 omega 0 t. Pretty simple, right? How about this one? Remember, the a's are complex numbers. The ones we were looking at here had two real numbers. Here's an example where the a's are imaginary. One is i over 2. One is minus i over 2. That's what the complex Fourier representation looks like of this function. We can just plug it in here. You solve that, and you can see that in this case the cosines cancel, because this is i over 2 cosine 2 omega 0 t minus i over 2 cosine 2 omega 0 t. Those cancel and you're left with sine 2 omega 0 t. So that is what the Fourier representation of sine 2 omega 0 t looks like. So functions that have higher frequencies will have non-zero elements that are further out in n. Any questions about that? So again, this set of functions e to the i n omega 0 t forms an orthonormal basis set over that [INAUDIBLE] over that interval. The a0 coefficient is just the projection of our function onto e to the 0-- n equals 0. And that's just the average. The a1 coefficient is just the projection of our function onto e to the minus i omega 0 t. And in general, the m-th coefficient is just the projection of our function onto e to the minus i m omega 0 t. And we can take those coefficients, plug them into this sum, and reconstruct an arbitrary periodic function, this y of t. So we have a way of taking a function and getting these complex Fourier coefficients. And we have a way of taking those coefficients and reconstructing our function. This is just a bunch of different views of the function y. And from all of those different views, we can reconstruct our function. That's all it is.
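The two worked examples above can be checked numerically. A sketch in plain Python (my own illustration, not lecture code): the coefficient pair (1/2, 1/2) at n equals plus and minus 2 rebuilds cosine 2 omega 0 t, and the pair (i/2, minus i/2) rebuilds sine 2 omega 0 t.

```python
import math
import cmath

omega0 = 2 * math.pi              # fundamental frequency; arbitrary for illustration

def reconstruct(coeffs, t):
    """Sum a_n * e^(i n omega0 t) over the given {n: a_n} coefficients."""
    return sum(a * cmath.exp(1j * n * omega0 * t) for n, a in coeffs.items())

cos_coeffs = {-2: 0.5, 2: 0.5}          # a_{-2} = a_2 = 1/2
sin_coeffs = {-2: 0.5j, 2: -0.5j}       # a_{-2} = i/2, a_2 = -i/2

for t in [0.0, 0.11, 0.37]:
    yc = reconstruct(cos_coeffs, t)
    ys = reconstruct(sin_coeffs, t)
    assert abs(yc - math.cos(2 * omega0 * t)) < 1e-12   # the sines cancel
    assert abs(ys - math.sin(2 * omega0 * t)) < 1e-12   # the cosines cancel
```

Two complex numbers are enough to describe either infinite function of time, which is exactly the compactness point made above.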
So in general, when we do Fourier decomposition in a computer, in MATLAB, on real signals that you've sampled in time, it's always done in this kind of discrete representation. You've got, in this case, discrete frequencies. When you sample signals, you've got functions that are discrete in time, and this becomes a sum. But before we go to the discrete case, I just want to show you what this looks like when we go to the case of arbitrary functions. Remember, this thing was about representing periodic functions. You can only represent periodic functions using these Fourier series. But before we go on to the Fourier transform algorithm and discrete Fourier transforms, I just want to show you what this looks like for the case of arbitrary functions. And I'm just showing this to you. I don't expect you to be able to reproduce any of this. But you should see what it looks like, for those of you who haven't seen it already. So what we're going to do is go from the case of periodic functions to non-periodic functions. And the simplest way to think about that is a periodic function does something here, and then it just does the same thing here, and then it just does the same thing here. So how do we go from this to an arbitrary function? Well, we're just going to let the period go to infinity. Does that make sense? That's actually pretty easy. We're going to let T go to infinity, which means the fundamental frequency omega 0, which is 2 pi over T, is going to go to 0. So our steps-- remember, in the Fourier series here, we had these steps in frequency as a function of n. Those steps, the different frequency bins, are just going to get infinitely close together. So that's what we're going to do now. We're just going to let those frequency steps go to 0. And in the discrete case, the frequency is just that number, m times omega 0. Well, omega 0 is going to 0, but the m's are getting really big.
So we can just change-- we're just going to call that omega. The omega 0's are going to 0, and the m's are getting infinitely big. So we can't really use m or n anymore to label our frequency steps. So we're just going to use omega. We used to label our Fourier coefficients with m. We can't use m anymore, because m is getting infinitely big. So we use this new label omega. So a sub m becomes a new variable, our Fourier transform, labeled by the frequency omega. And we're just going to basically make those replacements in here. So the coefficients become a function of frequency. That's just an integral. Remember, T is going to infinity now. So this has to go from minus infinity to infinity of our function y times our basis function. And instead of e to the i m omega 0 t, we just replace m omega 0 with omega. So e to the minus i omega t. And that is the Fourier transform. And we're just going to do the same replacement here, but instead of summing over n going from minus infinity to infinity, we have to write this as an integral as well. And so we can reconstruct our function y of t as an integral of our Fourier coefficients times e to the i omega t. See that? It's essentially the same thing. It's just that we're turning this sum into an integral. All right. So that's called a Fourier transform. And that's the inverse Fourier transform. This [INAUDIBLE] from a function to Fourier coefficients. And this takes your Fourier coefficients and goes back to your function. All right, good. Let me just show you a few simple examples. So let's start with the function y of t equals 1. So it's just a constant. What is the Fourier transform? So let's plug this into here. Does anyone know what that integral looks like-- the integral from minus infinity to infinity of e to the minus i omega t dt, integrating over time? It's a delta function. There's only one value of omega for which that function is not 0.
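In symbols, the limit just described gives the Fourier transform pair below. (Conventions differ on where the factor of 2 pi goes; this is one common choice, and the lecture slides may place it differently.)

```latex
% Fourier transform (analysis): the limit of the Fourier coefficients a_m
Y(\omega) = \int_{-\infty}^{\infty} y(t)\, e^{-i\omega t}\, dt

% Inverse Fourier transform (synthesis): the limit of the sum over n
y(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} Y(\omega)\, e^{i\omega t}\, d\omega
```

The first line takes a function to its Fourier coefficients; the second takes the coefficients back to the function, exactly as in the series case, with the sum turned into an integral.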
Remember-- so let's say omega equals, you know, 1 hertz. This is just a bunch of sines and cosines. So when you integrate over sines and cosines, you get 0. The integral over time of cosine is just 0. But if omega is 0, then what is e to the i omega t? e to the i 0 is 1. And so we're integrating over 1 times dt. And that becomes infinity at omega equals 0. And that's a delta function: it's 0 everywhere except at 0. So that becomes a delta function. The Fourier transform of a constant is a delta function. And that's a really good one to know. The Fourier transform of a constant is a delta function. That's called a Fourier transform pair. You have a function and another function, and one function is the Fourier transform of the other; that's called a Fourier transform pair. So the Fourier transform of a constant is just a delta function at 0. You can invert that. Let's just plug that delta function into here. If you integrate the delta function times e to the i omega t, you just get e to the i 0 t, which is just 1. So we can take the Fourier transform of 1, get a delta function, then inverse Fourier transform the delta function, and get back 1. How about this function right here? This function is e to the i omega 1 t. It's a sine wave and a cosine wave, a complex sine and cosine, at frequency omega 1. Anybody know what the Fourier transform of that is? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. So it's basically-- you can think of this-- that's the right answer. Rather than try to explain it, I'll just show you. So the Fourier transform of this is just a peak at omega 1. And we can inverse Fourier transform that and recover our original function. So those last few slides are more for the aficionados. You don't have to know that. We're going to spend time looking at the discrete versions of these things. How about this case?
This is a simple case where you have a function whose Fourier transform has a peak at omega 1 and a peak at minus omega 1. The inverse Fourier transform of that is just cosine omega 1 t. So a cosine function with frequency omega 1 has two peaks, one at frequency omega 1 and another one at frequency minus omega 1. So that looks a lot like the case we just talked about where we had this complex Fourier series. And we had a peak at n equals 2 and another peak at n equals minus 2. And the function that gave us that complex Fourier series was cosine 2 omega 0 t. So that's just what we have here. We have a peak in the spectrum at omega 1, a peak at minus omega 1, and that is the Fourier decomposition of a function that is cosine omega 1 t. So it's just like what we saw before for the case of the complex Fourier series. So that was the Fourier transform. And now let's talk about the discrete Fourier transform and the associated algorithm for computing it very quickly, called the Fast Fourier Transform, or FFT. So you can see that computing these Fourier transforms-- if you were to actually try to compute Fourier transforms by taking a function, multiplying it by these complex exponentials, like writing down the value of e to the i omega t at a bunch of different omegas and a bunch of different t's and then integrating that numerically-- that would take forever computationally. You'd have to compute that integral over time for every omega that you're interested in. But it turns out there's a really, really fast algorithm that you can use for the case where you've got functions that are sampled in time and you want to extract the frequencies of that signal at a discrete set of frequencies. I'm going to switch from using omega, which is very commonly used when you're talking about Fourier transforms, to just using f, the frequency. So omega is just 2 pi f.
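The constant-to-delta pair above also has a discrete analogue that is easy to verify before we get to the FFT: the discrete Fourier transform of a constant signal is zero in every frequency bin except the zero-frequency bin. A plain-Python sketch (a direct DFT written out by hand, not the fast algorithm):

```python
import cmath

N = 16
y = [1.0] * N                          # a constant signal

# direct discrete Fourier transform: Y_k = sum_n y_n * e^(-2*pi*i*k*n/N)
Y = [sum(y[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]

assert abs(Y[0] - N) < 1e-9            # all the weight sits at zero frequency
assert all(abs(Y[k]) < 1e-9 for k in range(1, N))
```

This is the discrete version of "the Fourier transform of a constant is a delta function."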
So here I'm just rewriting the Fourier transform and the inverse Fourier transform with f rather than omega. So we're going to start using f's now for the discrete Fourier transform case. And we're going to consider the case where we have signals that are sampled at regular intervals delta t. So here we have a function of time y of t. And we're going to sample that signal at regular intervals delta t. So this is delta t right here, that time interval there. So the sampling rate, the sampling frequency, is just 1 over delta t. So the way this works in the fast Fourier transform algorithm is you just take those samples and you put them into a vector in MATLAB, just some one-dimensional array. And we're going to imagine that our samples are acquired at different times. And let's [INAUDIBLE] minus time step 8, minus 7, minus 6. Time step 0 is in the middle, up to time step 7. And we're going to say that N is an even number. The fast Fourier transform works much, much faster when N is an even number, and it works even faster when N is a power of 2. So in this case, we have 16 samples. It's 2 to the 4. You should usually try to make the number of samples be a power of 2. So there is our function of time sampled at regular intervals delta t. So you can see that t min, the minimum time in this vector of sample points, is minus N over 2 times delta t. And the maximum time is N over 2 minus 1 times delta t. And that's what the MATLAB code would look like to generate an array of time values. Does that make sense? The FFT algorithm returns the Fourier components of that function of time. And it returns the Fourier components in a vector that has the negative frequencies on one side and the positive frequencies on the other side and the constant term here in the middle. The minimum frequency is minus N over 2 times delta f.
Oh, I should say it returns the Fourier components in steps of delta f, where delta f is the sampling rate divided by the number of time steps that you put into it. Don't panic. This is just reference. I'm showing you how you put the data in and how you get the data out. When you put data into the Fourier transform algorithm, the FFT algorithm, you put in data that's sampled at times, and you have to know what those times are. If you want to make a plot of the data, you need to make an array of time values. And they just go from a t min to a t max. And there's a little piece of MATLAB code that produces that array of times for you. What you get back from the Fourier transform, the FFT, is an array not of values of the function of time, but rather an array of Fourier coefficients. Just like when we did the complex Fourier series: we stuck in a function of time, and we got out a list of Fourier coefficients. Same thing here. We put in a function of time, and we get out Fourier coefficients. And the Fourier coefficients are complex numbers associated with different frequencies. So the middle coefficient that you get will be the coefficient for the constant term. This coefficient down here will be the coefficient for the minimum frequency, and that coefficient will be the coefficient for the maximum frequency, the most positive frequency. Does that make sense? AUDIENCE: [INAUDIBLE] MICHALE FEE: Ah, OK. So I was hoping to just kind of skip over that for now. But when you do a discrete Fourier transform, it turns out that the coefficient for the most negative frequency is always exactly the same as the coefficient for the most positive frequency. And so they're both given in one element of the array. You could replicate that up here, but it would be pointless. And the length of the array would have to be N plus 1. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: OK. Good.
That question always comes up, and it's a great question. What we're trying to do is to come up with a way of representing arbitrary functions. So we can represent symmetric functions by summing together a bunch of cosines. We can represent antisymmetric functions by summing together a bunch of sines. But if we want to represent an arbitrary function, we have to use both cosines and sines. So we have this trick, though, where instead of using sines and cosines, these two separate functions, we can use a single function, a complex exponential, to represent things that can be represented by sums of cosines as well as things that can be represented as sums of sines-- an arbitrary function. And the reason is because the complex exponential has both a cosine in it and a sine. So we can represent a cosine as a complex exponential with a positive frequency, meaning a complex number that goes around in this direction, plus another function where the complex number is going around in this direction. So positive frequencies mean that as time increases, this complex number is going around the unit circle in this direction. Negative frequencies mean the complex number's going around in this direction. And you can see that in order to represent a cosine, I need to have-- so let's see if I can do this. Here's my plus frequency going around this way. Here's my minus frequency going around this way. And you can see that I can represent a cosine if I can make the imaginary parts cancel. So I have one function that's going around like this, another function that's going around like this. If I add them, then the imaginary part cancels. The sine cancels. This plus this is cosine plus cosine, i sine minus i sine. So the sines cancel. And what I'm left with is a cosine. So I represent a cosine as a sum of a function that has a positive frequency and one that has a negative frequency.
You can see that the y component, the imaginary component, cancels, and all I'm left with is the cosine that's going across like this. So it's just a mathematical trick. Positive and negative frequencies are just a mathematical trick to make either the symmetric part of the function or the antisymmetric part of the function cancel. So I can just use these positive and negative frequencies to represent any function. If I only had positive frequencies, I wouldn't be able to represent arbitrary functions. So just one more little thing that you need to know about the FFT algorithm. So remember we talked about if you have a function of time, negative times are over here, positive times are over here. You can sample your function at different times and put those different samples into an array. Before you send this array of time samples to the FFT algorithm, you just need to swap the right half of the array with the left half of the array, using a function called circshift, which circularly shifts arrays. Don't worry about it. It's just that the guts of the FFT algorithm want to have the positive times in the first half and the negative times in the second half. So you just do this. Then you run the FFT function on this time-shifted array. And what it spits back is an array with the positive frequencies in the first half and the negative frequencies in the second half. And you can just swap those back using circshift again. That is your spectrum-- your spectral coefficients, your Fourier coefficients of this function. Just MATLAB guts. Don't worry about it. So here's a piece of code that computes the Fourier coefficients of a function. So the first thing we're going to do is define the number of points that we have in our array: 2048. It's a power of 2, so the FFT algorithm runs fast. We're going to write down a delta t. In this case, it's one millisecond. The sampling rate, the sampling frequency, is just 1 over delta t. So 1 kilohertz.
The array of times at which the function is sampled just goes from minus N over 2 times delta t to plus N over 2 minus 1 times delta t. I'm defining the frequency of a sine wave. And we're now getting the values of that cosine function at those different times. Does that make sense? We're taking that function of time and circularly shifting it by half the array. So that's the circularly shifted, swapped version of our function y. We stick that into the FFT function. It gives you back the Fourier coefficients. You just swap it again. And that is the spectrum, the Fourier transform [INAUDIBLE] signal. And now you can write down a vector of frequencies for each one of those Fourier coefficients. Each one of those elements in that vector-- remember, there are 2,048 of them-- is the Fourier coefficient of the function we put in at one of those different frequencies. Does that make sense? Now let's take a look at a few examples of what that looks like. So here is a function y of t. It's cosine 2 pi f 0 t, where f 0 is 20 hertz. So it's just a cosine wave. You run this code on it. And what you get back is an array of complex numbers. It has a real and imaginary part as a function of frequency. And for this cosine function it has two peaks, as promised. It has a peak at plus 20 hertz and a peak at minus 20 hertz. One of those peaks gives you an e to the i omega t that goes this way at 20 hertz. The other one gives you an e to the i omega t that goes this way at 20 hertz. And when you add them together, if I can do that, it gives you a cosine. It goes back and forth at 20 hertz. That one. Here's a sine wave: y equals sine 2 pi f 0 t, again at 20 hertz. You run that code on it. It gives you this. Notice that-- OK, sorry. Let me just point out one more thing. In this case, for the cosine, the peaks are in the real part of the transform. For the sine function, the real part is 0 and the peaks are in the imaginary part.
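The MATLAB pipeline just described can be mimicked in miniature with a direct DFT (plain Python for portability; in practice you would of course use an FFT routine, and the sampling parameters below are my own small example, not the 2048-point setup on the slides). With a sampling rate of 64 Hz, 64 samples, and a 20 Hz cosine, the transform comes out real, with peaks of height N/2 at plus and minus 20 Hz:

```python
import math
import cmath

N = 64
dt = 1.0 / 64                          # sampling interval -> fs = 64 Hz, delta f = 1 Hz
f0 = 20.0                              # cosine frequency in Hz

y = [math.cos(2 * math.pi * f0 * n * dt) for n in range(N)]

# direct DFT; bin k is frequency k*delta_f for k < N/2, and (k - N)*delta_f above that,
# so bin N - 20 is the -20 Hz component
Y = [sum(y[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

assert abs(Y[20] - N / 2) < 1e-6       # peak at +20 Hz, purely real
assert abs(Y[N - 20] - N / 2) < 1e-6   # peak at -20 Hz
assert all(abs(Y[k]) < 1e-6 for k in range(N) if k not in (20, N - 20))
```

Swapping `math.cos` for `math.sin` in the same code gives purely imaginary peaks instead (roughly minus i times N/2 at +20 Hz and plus i times N/2 at -20 Hz), matching the sine-wave slide.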
There's a plus i over 2 at minus 20 hertz, and a minus i over 2 at plus 20 hertz. And what that does is, when you multiply one coefficient times the e to the i omega t going this way and the other coefficient times the e to the i omega t going the other way, you can see that the real part now cancels. And what you're left with is the sine part that doesn't cancel. And that is this function, sine omega t. Any questions about that? That's kind of boring. We're going to put more interesting functions in here soon. But I just wanted you to see what this algorithm does on the things that we've been talking about all along. And it gives you exactly what you expect. Remember, we can write down any arbitrary function as a sum of sines and cosines. Which means we can write down any arbitrary function as a sum of these little peaks at different frequencies. Does that make sense? So all we have to do is find these coefficients, these peaks-- what values of these different peaks to stick in to reconstruct any arbitrary function. And we're going to do a lot of that in the next lecture. We're going to look at what different functions look like in the Fourier domain. We're going to do that for signals like a square pulse, for trains of pulses, for a Gaussian, for all kinds of different functions. And then using the convolution theorem, you can actually just predict in your own mind what different combinations of those functions will look like. So I want to end by talking about one other really critical concept called the power spectrum. And basically, usually what you do when you compute the Fourier transform of a function is to figure out what the power spectrum is. The simple answer is that all you do is square this thing-- take the magnitude squared. But I'm going to build up to that a little bit. So it's called the power spectrum. But first we need to understand what we mean by power.
So we're going to think about this in the context of a simple electrical circuit. Let's imagine that this function that we're computing the Fourier transform of-- imagine this function is voltage. That's where this idea comes from. Imagine that this function is the voltage that you've measured somewhere in a circuit. Let's say in this circuit right here. Or current, either way. So you can see that a sinusoidal voltage, some oscillatory cosine, drives current through this resistor. And when current flows through a resistor, it dissipates power. And the power dissipated in a resistor is just the current times the voltage drop across that resistor. Now, remember, Ohm's law tells you that current is just voltage divided by resistance. So this is v divided by r. So the power is just v squared divided by r. So if the voltage is just a sine wave at frequency omega, then v is some coefficient, some amplitude at that frequency, times cosine omega t. We can write that voltage using Euler's equation as that amplitude times 1/2 e to the minus i omega t plus 1/2 e to the plus i omega t. And let's calculate the power associated with that voltage. Well, we have to average over one cycle of that oscillation. So the average power is just given by the square magnitude of the Fourier transform [INAUDIBLE]. So let's just plug this into here. And what you see is that the power at a given frequency omega is just that coefficient magnitude squared over the resistance, times the average of the magnitude squared of 1/2 e to the minus i omega t plus 1/2 e to the plus i omega t. And that's just equal to 1 over r times that coefficient magnitude squared over 2. So you can see that the power dissipated by this sinusoidal voltage is just that magnitude squared over 2, divided by r. So we can calculate the power dissipated in any resistor simply by summing up the [INAUDIBLE] magnitude of those coefficients.
So let's look at that in a little more detail. So let's think for a moment about the energy that is dissipated by a signal. So energy is just the integral over time. Power is per unit time; total energy is just the integral of the power over time. So power is just equal to v squared over r. So I'm just going to substitute that in there. So the energy of a signal is just 1 over r times the integral of v squared over time. Now, there's an important theorem in complex analysis called Parseval's theorem that says that the integral of the square of the signal over time is just equal to the integral of the square magnitude of the coefficients over frequency. So what that's saying is that the total energy in the signal if you represent it in the time domain is just the same as the total energy in the signal if you look at it in the frequency domain. And what that's saying is that the sum of all the squared temporal components is just equal to the sum of all the squared frequency components. And what that means is that each of these frequency components contributes independently to the power in the signal. So you can think of the total energy in the signal as just the integral over all frequencies of this quantity here that we call the power spectrum. And so we'll often take a signal, calculate the Fourier components, and plot the square magnitude of the Fourier transform. And that's called the power spectrum. And I've already said the total variance of the signal in time is the same as the total variance of the signal in the frequency domain. So mathy people talk about the variance of a signal. The more engineering people talk about the power in the signal. But they're really talking about the same thing. So let's take this example that we just looked at and look at the power spectrum.
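Parseval's theorem can be checked numerically with a direct DFT (again a plain-Python sketch of my own; the test signal is arbitrary). In the discrete convention used here, with no normalization on the forward transform, the statement is that the sum of y sub n squared over time equals 1/N times the sum of the squared magnitudes of the Fourier coefficients:

```python
import math
import cmath

N = 32
# an arbitrary test signal: an offset plus a couple of sinusoids
y = [0.3 + math.cos(2 * math.pi * 3 * n / N) + 0.5 * math.sin(2 * math.pi * 7 * n / N)
     for n in range(N)]

# direct DFT
Y = [sum(y[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

energy_time = sum(v * v for v in y)               # total energy, time domain
energy_freq = sum(abs(c) ** 2 for c in Y) / N     # total energy, frequency domain

# Parseval: the two are the same number
assert abs(energy_time - energy_freq) < 1e-9
```

Each frequency bin contributes its own squared magnitude independently to the total, which is exactly why the squared magnitude per bin, the power spectrum, is a meaningful decomposition of the signal's power.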
So that's a cosine function. It has these two peaks. Let me just point out one more important thing. For real functions-- and in this class, we're only going to be talking about real functions of time-- the square magnitude of the Fourier transform is symmetric. So you can see here, if there is a peak in the positive frequencies, there's an equivalent peak in the negative frequencies. So when we plot the power spectrum of a signal, we always just plot the positive side. And so here's what that looks like. Here is the power spectrum of a cosine signal. And it's just a single peak. So in that case, it was a cosine at 20 hertz. You get a single peak at 20 hertz. What does it look like for a sine function? What is the power spectrum of a sine function? Remember, in that case, the real part was 0 and the imaginary part had a plus peak here and a minus peak here. What does the spectrum of that look like? Right. The square magnitude of i is just 1. So the power spectrum of a sine function looks exactly like this. Same as a cosine. Makes a lot of sense. Sine and cosine are exactly the same function; one's just shifted by a quarter period. So it has to have the same power spectrum. It has to have the same power. Let's take a look at a different function of time. Here's a train of delta functions. So we just have a bunch of peaks spaced regularly at some period. I think it was-- I forget the exact number here. But it's around 10-ish hertz. The Fourier transform of a train of delta functions is another train of delta functions. Pretty cool. The period in the time domain is delta t. The period in the Fourier domain is 1 over delta t. That's another really important Fourier transform pair for you to know. So the first Fourier transform pair that you need to know is that a constant in the time domain is a delta function in the frequency domain. A delta function in the time domain is a constant in the frequency domain.
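The delta-train pair can be confirmed in miniature with a direct DFT (a plain-Python sketch of my own, not lecture code): impulses every M samples in time transform to impulses every N/M bins in frequency, confirming the reciprocal relation between the two periods.

```python
import cmath

N, M = 16, 4                            # 16 samples, an impulse every 4 samples
y = [1.0 if n % M == 0 else 0.0 for n in range(N)]

# direct DFT
Y = [sum(y[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]

# non-zero bins are exactly the multiples of N/M = 4, each with height N/M
for k in range(N):
    if k % (N // M) == 0:
        assert abs(Y[k] - N / M) < 1e-9
    else:
        assert abs(Y[k]) < 1e-9
```

Shrinking the spacing M in time spreads the frequency-domain impulses further apart, which is the 1 over delta t relation stated above.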
A train of delta functions in the time domain is a train of delta functions in the frequency domain, and vice versa. And there's a very simple relation between the period in the time domain and the period in the frequency domain. What does the power spectrum of this train of delta functions look like? It's just the square magnitude of this. So it just looks like a bunch of peaks. So here's another function: a square wave. This is exactly the same function that we started with. If you look at the Fourier transform, the imaginary part is 0, and the real part has these peaks. So these are now the coefficients that you would put in front of your different cosines at different frequencies to represent the square wave. And what it looks like is a positive peak for the lowest frequency, a negative peak for the next harmonic, a positive peak, and gradually decreasing amplitudes. And one more point is that if you look at the Fourier transform of a higher-frequency square wave, you can see that those peaks move apart. So higher frequencies in the time domain-- stuff happening faster in the time domain-- are associated with things moving out to higher frequencies in the frequency domain. You can see that the same function at higher frequency has the same form of Fourier transform, but with the components spread out to higher frequencies. And the power spectrum looks like this. And one final point here is that when you look at power spectra, often signals can have frequency components that are very small, so that it's hard to see them on a linear scale. And so we actually plot them on a logarithmic scale-- in units of decibels. And I'll explain what those are next time. But that's another representation of the power spectrum of a signal. OK, so that's what we've covered. And we're going to continue with the game plan next time.
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 10: Time Series (Intro to Neural Computation)
MICHALE FEE: OK, so we're going to start a new topic today. We're going to spend the next three or so lectures talking about spectral analysis. And we're going to warm up to that topic today by talking about time series more generally. Now, one of the things that we're going to discuss in the context of this new topic-- so I'm going to spend a few minutes reviewing a little bit about receptive fields that we've talked about in the last lecture. And one of the really cool things that I find in developing tools for analyzing data is that there's a really big sense in which, when we developed tools to analyze data, we're actually developing tools that kind of look like what the brain does. So our brains basically learn to analyze sensory stimuli and extract information from those sensory stimuli. And so when we think about developing tools for analyzing data, we take a lot of inspiration from how neurons and brain circuits actually do the same thing. And a lot of the sort of formulation that we've developed for understanding how neurons respond to sensory inputs has a lot of similarity to the kind of things we do to analyze data. All right, so a brief review of mathematical models of receptive fields-- so the basic, most common model for thinking about how neurons respond to sensory stimuli is the linear/non-linear model. And again, the basic idea is that we have a sensory stimulus. In this case, this is the intensity of a visual field. So it's intensity as a function of position x and y-- let's say on the screen or on a retina. Then that stimulus goes through a filter. And the filter is basically a pattern of sensitivity of the neuron to the sensory input. And so in this case, I've represented this filter as a filter that's sensitive to a ring of light around a center of darkness. So this might be like an off neuron in the retina. So that filter acts on the stimulus. It filters some aspect of the stimulus, and develops a response to the stimulus. 
That response goes to what's called an output non-linearity, which typically looks something like this, where a very negative response produces no spiking of the neuron, no output of the neuron, whereas a large overlap of the stimulus with the filter, with the receptive field, produces a large spiking response. So a typical way this would look for a neuron is that if the filter response, L, is 0, the neuron might have some spontaneous firing rate, r0. And the firing rate of the neuron is modulated linearly around that spontaneous firing rate r by an amount proportional to the response of the filter. And then obviously if the response of the filter is very negative, then the firing rate of the neuron at some point reaches 0. And if the r0 plus L goes below 0, then the firing rate of the neuron can obviously not go negative. And so the firing rate of the neuron will just kind of sit at that floor of 0 firing rate. All right, so that is a response-- that is an output non-linearity. And then most neurons fire sort of randomly, at a rate corresponding to this firing rate that is the output of this output nonlinearity. And so what happens is a neuron generates spikes probabilistically at a rate corresponding to the output of this non-linear response function. OK, any questions about that? All right, so what we're going to do today is I'm going to take a little bit of a detour and talk about how we think about the randomness or the stochasticity of neuronal firing rates. OK, and I'll talk about the Poisson process. And then we're going to come back and think about filters more generally, and how we can analyze signals by applying filters of different types to them. OK, so I think this is basically what we covered last time. Again, the idea is that we can think of the response of a neuron as a spontaneous firing rate plus a filter acting on a stimulus input. In this case, the filter is a two-dimensional filter. 
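The rectified-linear output non-linearity just described can be sketched in a few lines of numpy; the function name, the spontaneous rate of 5 spikes/s, and the specific test values are illustrative choices, not anything from the lecture:

```python
import numpy as np

def ln_firing_rate(filter_response, r0=5.0):
    # Rectified-linear output non-linearity: the firing rate is the
    # spontaneous rate r0 plus the filter response L, floored at zero
    # so the rate can never go negative.
    return np.maximum(0.0, r0 + filter_response)

# At L = 0 the neuron fires at its spontaneous rate; a strongly negative
# filter response drives the rate to the floor of zero.
```

Spikes would then be drawn probabilistically at this rate, which is the Poisson picture developed below.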
So here I'm just fleshing out what this looks like here, for the case of a linear filter in the visual system, a spatial receptive field. So G is the spatial receptive field. i is the intensity as a function of position. And what we do is we multiply that spatial receptive field times the stimulus, and integrate over all the spatial dimensions x and y. In one dimension, we would have a spatial receptive field that looks like this. So this receptive field is sensitive to a positive brightness in the center, and a negative or a dark feature in the surrounding area. And again, the way we think about this is that the neuron is maximally responsive if the pattern of sensory input looks like the receptive field, is highly correlated with the receptive field. So if the receptive field has a positive central region surrounded by negative flanks, then that neuron is maximally responsive if the pattern of light looks like the receptive field. So if the light pattern has a bright spot surrounded by dark flanking regions, then we calculate this integral, what you find is that the positive parts-- the positive receptive field times the positive intensity or brightness multiplies to give you a positive contribution to the neuronal response. A negative component of the receptive field multiplies by a negative component of the intensity. And that gives you a positive contribution to the response. And so you can see that even though the receptive field has positive and negative parts, so does the intensity function have positive and negative parts. And when you multiply those together, you get a positive contribution to the response of the neuron everywhere. And so when you integrate that, you get a big response. In contrast, if the intensity profile looked like this-- it's very broad. So this looks like a bright spot surrounded by a dark ring. 
If, on the other hand, you have a large bright spot that completely overlaps this receptive field, then when you multiply these two functions together, this positive times positive will give you a positive here. But the negative part of the receptive field overlaps with a positive part of the intensity. And that gives you a negative contribution to the neuronal response. And when you integrate that, the positive here is canceled by the negative there, and you get a small response. All right, any questions about that? I think we covered that in a lot of detail last time. But again, the important point here is that this neuron is looking for a particular kind of pattern in the sensory input. And it responds when the sensory input has that pattern. It doesn't respond as well when the sensory input has a different pattern. And we have the same kind of situation for the sensitivity of neurons to temporal patterns. So we can write down the firing rate of a neuron as a function of time. It's just a spontaneous firing rate plus a filter acting on a time-dependent stimulus. So in this case, this filter will be looking for or sensitive to a particular temporal pattern. And as you recall, if we have a time-dependent stimulus-- let's say this is the intensity of a spot of light, that you can have a neuron that's responsive to a particular temporal pattern. Let's say a brief darkening of the stimulus followed by a pulse of bright high intensity, and then the neuron response after it sees that pattern in the stimulus. And the way we think about this mathematically is that what's happening is that the stimulus is being convolved with this linear temporal kernel. And the way we think about that is that the kernel is sliding across the stimulus. We're doing that same kind of overlap. We're multiplying the stimulus times the kernel, integrating over time, and asking, where in time does the stimulus have a strong overlap with the kernel? 
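The spatial-overlap computation just described -- a matched stimulus gives a big response, a broad bright spot gives a small one -- can be sketched numerically. The difference-of-Gaussians receptive field and all of the widths here are made-up parameters for illustration:

```python
import numpy as np

x = np.linspace(-3, 3, 601)
dx = x[1] - x[0]

# Center-surround receptive field: positive center, negative flanks
# (a difference of Gaussians, with widths chosen just for illustration).
G = np.exp(-x**2 / (2 * 0.5**2)) - 0.5 * np.exp(-x**2 / (2 * 1.5**2))

matched = G.copy()                     # stimulus shaped like the receptive field
broad = np.exp(-x**2 / (2 * 3.0**2))   # large bright spot covering the flanks too

# The response is the integral of receptive field times intensity.
L_matched = np.sum(G * matched) * dx
L_broad = np.sum(G * broad) * dx
```

The matched stimulus gives a large positive response because positive multiplies positive and negative multiplies negative everywhere; the broad spot drives the negative flanks and mostly cancels.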
And you can see that in this case, there's a strong overlap at this point. The stimulus looks like the kernel. The positive parts of the stimulus overlap with positive parts of the kernel. Negative parts of the stimulus overlap with negative parts of the kernel. So when you multiply that all together, you get a big positive response. And if you actually slide that across and do that integral as a function of time, you can see that this convolution has a peak at the point where that kernel overlaps with the stimulus. And right there is where the neuron would tend to produce a spike. All right, and so, I think near the end of the last lecture, we talked about integrating or putting together the spatial and temporal parts of a receptive field into sort of a larger concept of a spatio-temporal receptive field that combines both spatial and temporal information. All right, so here are the things that we're going to talk about today. We're going to again, take a little bit of a detour, and talk about spike trains being probabilistic. We'll talk about a Poisson process, which is the kind of random process that most people think about when you talk about spike trains of neurons. We're going to develop a couple of measures of spike train variability. So an important thing that neuroscientists often think about when you measure spike trains is how variable are they, how reproducible are they in responding to a stimulus. And a number of different statistical measures have been developed to quantify spike trains. And we're just going to describe those briefly. And I think you'll have a problem set problem that deals with those. And then I'm going to come back to kind of a broader discussion of convolution. I'll introduce two new metrics or methods for analyzing time series data, data that's a function of time. Those are cross-correlation and autocorrelation functions. 
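The temporal version of this overlap -- sliding a kernel along the stimulus and integrating the product at each position -- can be sketched the same way. The kernel shape (a dip followed by a bump, echoing the dark-then-bright example) and the embedding position are arbitrary:

```python
import numpy as np

# Temporal kernel the neuron is "looking for": a dip followed by a bump.
kernel = np.array([-1.0, -1.0, 2.0, 2.0])

# Stimulus: flat except for a copy of the kernel's pattern at time step 10.
stimulus = np.zeros(30)
stimulus[10:14] = kernel

# Slide the kernel along the stimulus and integrate the product at each
# position; np.correlate computes exactly this lagged overlap.
overlap = np.correlate(stimulus, kernel, mode='valid')
peak_time = int(np.argmax(overlap))    # where the stimulus matches the kernel
```

The overlap peaks at time step 10, right where the stimulus looks like the kernel -- which is where the neuron would tend to spike.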
And I'm going to relate those to the convolution that you've been using and we've been seeing in class. And then finally we're going to jump right into spectral analysis of time series, which is a way of pulling out periodic signals from data. And what you're going to see is that that method of pulling temporal structure out of signals looks a lot like the way we've been talking about how neurons have sensitivity to temporal structure in signals. OK, we're going to use that same idea of taking a signal and asking how much it overlaps with a linear kernel, with a filter. And we're going to talk about the kind of filters you use to detect periodic structure in a signal. And not surprisingly, those are going to be periodic filters. All right, so that's what we're going to talk about today. All right, so let's start with probabilistic spike trains. So the first thing that you discover when you record from neurons in the brain and you present a stimulus to the animal-- let's say you record from neurons in visual cortex or auditory cortex, and you present a stimulus for some period of time-- what you find is that the neurons respond. They respond with some temporal structure. But each time you present the stimulus, the response of the neuron is a little bit different. So what I'm showing here is a raster plot. So each row of this shows the spiking activity of a neuron during a presentation of this stimulus. The stimulus is a bunch of dots that move across the screen. And this is a part of the brain that is sensitive to movement of visual stimuli. And what you can see is that each time the stimulus is presented, the neuron generates spikes. Each row here is a different presentation of the stimulus. If you average across all of those rows, you can see that there is some repeatable structure. So the neuron tends to spike most often at certain times after the presentation of this stimulus.
But each time the stimulus is presented, the spikes don't occur in exactly the same place. So you have this sense that when you present the stimulus, there is some sort of underlying modulation of the firing rate of the neuron. But the response isn't exactly the same each time. There's some randomness about it. So we're going to talk a little bit about how you characterize that randomness. And the way that most people think about the random spiking of neurons is that there is a-- sorry about that. That mu was supposed to be up there. So let's go to a very simple case, where we turn on a stimulus. And instead of having a kind of a time-varying rate, let's imagine that the stimulus just comes on, and the neuron starts to spike at a constant average rate. And let's call that average rate mu. So what does that mean? What that means is that if you were to present this stimulus many, many times and look at where the spikes occur, there would be some uniform probability per unit of time that spikes would occur anywhere under that stimulus during the presentation of that stimulus. So let's break that time window up during the presentation of the stimulus into little tiny bins, delta-T. Now, if these spikes occur randomly, then they're generated independently of any other spikes, with an equal probability in each bin. And what that means is that if the bins are small enough, most of the bins will have zero spikes. And you can write down the probability that a spike occurs in any one bin is the number of spikes per unit time, which is the average firing rate, times the width of that bin in time. Does that make sense? The probability that you have a spike in any one of those very tiny bins is just going to be the spikes per unit of time times the width of the bin in time. Now, that's only true if delta-T is very small. Because if delta-T gets big, then you have some probability that you could have two or three spikes. 
And so this is only true in the case where delta-T is very small. So the probability that no spikes occur is 1 minus mu delta-T. And we can ask, how many spikes land in this interval T? And we're going to call that probability P. And it's the probability that n spikes land in this interval, T. And we can calculate that probability as follows. So that probability is just the product of three different things. It's the probability of having n bins with a spike. So that's mu delta-T to the n. It's n independent events, with probability mu delta-T, times the probability of having M minus n bins with no spike. So that's 1 minus mu delta-T to the M minus n. And we also have to multiply by the number of different ways that you can distribute those n spikes in M bins. And that's called M choose n. Yes. AUDIENCE: So when we pick delta-T, we still have to pick it big enough so that it's not less than how long the [INAUDIBLE], right? Because you can't have-- MICHALE FEE: Yeah, so, OK, good question. So just to clarify, we're kind of imagining that spikes are delta functions now. So in this case, we're setting aside the fact that spikes are produced by an influx of sodium and an outflux of potassium, and take about a millisecond. In general, spikes are, let's say, a millisecond across. And we're usually thinking of these bins as kind of approaching about a millisecond. But what we're about to do, actually, is take the limit where delta-T goes to 0. And in that case, you have to think of the spikes as delta functions. OK, so the probability that you have n spikes in this interval, T, is just the product of those things. It's the probability of having n bins with a spike times the number of different ways that you can put n spikes into M bins. So you multiply those things together, and you take the limit that delta-T goes to 0. And it's kind of a cute little derivation. I've put the full derivation at the end so that we don't have to go through it in class.
But it's kind of fun to look at anyway. Of course, as delta-T goes to 0, the number of bins goes to infinity, because the number of bins is just capital T divided by delta-T. So you can go through each of those terms and calculate what happens to them in the limit that delta-T goes to 0. And what you find is that the probability of having n spikes in that window T is just mu T to the n, divided by n factorial, times e to the minus mu T. What is mu T? mu T is the expected number of spikes in that interval-- just the number of spikes per unit time, times the length of the window. And that is the Poisson distribution. And it comes from a very simple assumption, which is just that spikes occur independently of each other, at a rate mu spikes per second, with some constant probability per unit time of having a spike in each little time bin. Now notice that if you have a rate-- there's some tiny probability per unit of time of having a spike there, probability of having a spike there, probability of having a spike there, and so on-- you're going to end up sometimes with one spike in the window, sometimes with two spikes, sometimes with three, sometimes four, sometimes five. If this window's really short, you're going to have more cases where you have zero or one spikes. If this window is really long, then it's going to be pretty rare to have just 0 or 1 spikes. And you're going to end up with, on average, 20 spikes, let's say. So you can see that, first of all, the number of spikes you get is random. And it depends on the size of the window. And the average number of spikes you get depends on the size of the window and the average firing rate of the neuron. OK, so let's just take a look at what this function looks like for different expected spike counts. So here's what that looks like. So we can calculate the expected number of spikes.
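The Poisson formula, and the small-bin picture it was derived from, can both be checked with a short simulation. The rate, window, and bin width here are made-up values:

```python
import math
import numpy as np

def poisson_pmf(n, rate, T):
    # P(n spikes in a window of length T); mu*T is the expected count.
    mu_T = rate * T
    return mu_T**n * math.exp(-mu_T) / math.factorial(n)

# Check against the small-bin picture the formula was derived from:
# many tiny bins, each with independent spike probability rate * delta_T.
rng = np.random.default_rng(0)
rate, T, dt = 20.0, 0.5, 1e-4          # 20 Hz, 500 ms window, 0.1 ms bins
n_bins = int(T / dt)
counts = (rng.random((10000, n_bins)) < rate * dt).sum(axis=1)
```

The mean of `counts` comes out close to mu*T = 10, and a histogram of `counts` matches `poisson_pmf` across n.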
And just to convince you, if we calculate the average number of spikes using that distribution, we just sum, over all possible numbers of spikes, n times the probability of having n spikes. And when you do that, what you find is that the average number is mu times T. So you can see that the firing rate, mu, is just the expected number of spikes divided by the width of the window. Does that make sense? That's pretty obvious. That's just the spike rate. And we're going to often use the variable r for that, for firing rate. OK. And here's what that looks like. You can see that if the firing rate is low enough or the window is short enough, such that the expected number of spikes is 1, you can see that, most of the time, you're going to get 0 or 1 spikes, and then occasionally 2, and very occasionally 3, and then almost never more than 4 or 5. If the expected number of spikes is 4, you can see that the mode of that-- you can see that that distribution moves to the right. You have a higher probability of getting 3 or 4 spikes. But again, there's still a distribution. So even if the average number of spikes is 4, you have quite a wide range of actual spike counts that you would get on any given trial. As the expected number of spikes gets bigger-- let's say, on average, 10-- does anyone know what this distribution starts turning into? AUDIENCE: Gaussian. MICHALE FEE: Gaussian, good. So what you can see is that you end up having a more symmetric distribution, where the peak is sitting at the expected number. In that case, the distribution becomes more symmetric, and in the limit of infinite expected number of spikes, becomes exactly a Gaussian distribution. All right, there are two measures that we use to characterize how variable spike trains are. Let me just go through them real quick. And I'll describe to you what we expect those to look like for spike trains that have a Poisson distribution.
So the first thing we can look at is the variance in the spike count. So remember, for a Poisson process, the average number of spikes in the interval is mu times T. We can calculate the variance in that, which is basically just some measure of the width of this distribution here. So the variance is just the average of the number of counts on a given trial minus the expected number, squared-- n minus average n, squared. And if you multiply that out, you get the average of n squared minus the average of n, squared. And it turns out, for the Poisson process, that that variance is also mu T. So the average spike count is mu T, and the variance in the spike count is also mu T. So there is a quantity called the Fano factor, which is just defined as the variance of the spike count divided by the average spike count. And for a Poisson process, the Fano factor is 1. And what you find is that for neurons in cortex and other parts of the brain, the Fano factor actually can be quite close to one. It's usually between 1 and 1 and 1/2 or so. So there's been a lot of discussion and interest in why it is that spike counts in the brain are actually so random, why neurons behave in such a way that their spikes occur essentially randomly at some rate. So it's an interesting topic of current research. OK, let me tell you about one other measure, called the interspike interval distribution. And basically the interspike interval distribution is the distribution of times between spikes. And I'm just going to show you what that looks like in the Poisson process, and then briefly describe what that looks like for real neurons. OK, so let's say we have a spike. OK, let's calculate the distribution of intervals between spikes. So let's say we have a spike at time T-sub-i. We're going to ask what is the probability that we have a spike some time, tau, later-- tau-sub-i later, within some little window, delta-T. So let's calculate that.
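The Fano-factor claim above -- variance equals mean, so the ratio is 1 for a Poisson process -- is easy to check numerically. This is a minimal sketch using numpy's Poisson sampler, with an arbitrary rate and window:

```python
import numpy as np

# Draw spike counts for many repeated "trials" of a Poisson neuron.
rng = np.random.default_rng(1)
rate, T = 15.0, 1.0                      # 15 Hz, 1 s counting window
counts = rng.poisson(rate * T, size=100000)

# Fano factor: variance of the spike count divided by the mean count.
fano = counts.var() / counts.mean()      # close to 1 for a Poisson process
```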
So tau-sub-i is the interspike interval between the i-plus-1 spike and the i-th spike. So the probability of having the next spike land in the interval between t of i plus 1 and t of i plus 1 plus delta-T, in this little window here, is going to be what? It's going to be the probability of having no spike in this interval, times the probability of having a spike in that little interval. So what's the probability of having no spike in that interval, tau? What distribution can we use to calculate that probability? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. So what is the Poisson distribution? It tells us the probability of having n spikes in an interval T-- capital T. So how would we use that to calculate the probability of having no spike in that interval? AUDIENCE: [INAUDIBLE]. MICHALE FEE: Exactly. We just use the Poisson distribution, and plug n equals 0 into it. So let's go to that. There's the Poisson distribution. What does this look like if we set n equals 0? So what is mu T to the zero? AUDIENCE: [INAUDIBLE] MICHALE FEE: 1. What is 0 factorial? AUDIENCE: 1. MICHALE FEE: 1. And so the probability of having zero spikes in a window T is just e to the minus mu T. All right, so let's plug that in-- good. So the probability of having no spikes in that interval is e to the minus mu T, or e to the minus rT, if we're using r for rate now. Now, what is the probability of having a spike in that little window right there? Any thoughts? AUDIENCE: [INAUDIBLE] MICHALE FEE: What's that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. You could do that. But we sort of derived the Poisson process by using the answer to this question already. AUDIENCE: Oh, r delta. MICHALE FEE: r delta-T. Good. OK, so the probability of having a spike in that little window is just r delta-T. OK. Yes. AUDIENCE: Stupid question-- I just missed the transition between mu and r. MICHALE FEE: Yeah, I just changed the name of the variable. If you look in the statistics literature, they most often use mu.
But when we're talking about firing rates of neurons, it's more convenient to use r. And so they're just the same. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes. AUDIENCE: Are we talking about the probability of having the next spike come [INAUDIBLE] given that there was no spike in the first interval? MICHALE FEE: Well, so remember, the probability of having a spike in any interval in a Poisson process is completely independent of what happens at any other time. AUDIENCE: So why are we calculating-- why did we do that [INAUDIBLE]? MICHALE FEE: OK, because in order for this to be the interval between one spike and the next spike, we needed to have zero spikes in the intervening interval. AUDIENCE: Oh, OK, so [INAUDIBLE]. OK. MICHALE FEE: Because if there had been another spike in here somewhere, this would not be our interspike interval. The interspike interval would be between here and wherever that spike occurred. And we'd just be back to calculating what's the probability of having no spike between there and there. OK, good. So that's the probability of having no spike from here to here, and having a spike in the next little delta-T. We can now calculate what's called the probability density. It's the probability per unit time of having interspike intervals of that duration. And to do that, we just calculate the probability divided by delta-T. And what you find is that the probability density of interspike intervals is just r times e to the minus r tau, where tau is the spike interval. And so this is what that looks like for a Poisson process. OK, so you can see that the highest probability is having very short intervals. And you have exponentially lower probabilities of having longer and longer intervals. Does that make sense? So it turns out that that's actually a lot like what interspike intervals of real neurons look like. They very often have this exponential tail. Now, what is completely unrealistic about this interspike interval distribution?
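The exponential interspike-interval density just derived can be simulated directly: for a Poisson process the intervals are independent exponential draws with mean 1/r. The rate and sample count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
r = 40.0                                  # firing rate, spikes per second

# Poisson process: ISIs are i.i.d. exponential, p(tau) = r * exp(-r*tau),
# with mean interval 1/r.
isis = rng.exponential(1.0 / r, size=50000)

# The spike times themselves are just the cumulative sum of the intervals.
spike_times = np.cumsum(isis)
```

The mean interval comes out near 1/r = 25 ms, and short intervals are the most common -- which is exactly the feature that turns out to be unrealistic for real neurons.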
What is it that can't be true about this? [INAUDIBLE] AUDIENCE: Intervals, like, which are huge. MICHALE FEE: Well, so it's actually-- the bigger problem is not on this end of the distribution. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. What happens here that's wrong? AUDIENCE: [INAUDIBLE] MICHALE FEE: Exactly. So what happens immediately after a neuron spikes? AUDIENCE: You can't have a spike. MICHALE FEE: You can't have another spike right away. Why is that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Because of the refractory period, which comes from-- AUDIENCE: [INAUDIBLE] MICHALE FEE: Hyperpolarization is one of them. Once you have the spike, the neuron is actually briefly hyperpolarized. You could imagine trying to re-polarize it very quickly, but then something else is a problem. AUDIENCE: [INAUDIBLE] MICHALE FEE: Sodium channel inactivation. So even if you were to repolarize the neuron very quickly, it still would have a hard time making a spike because of sodium channel inactivation. So what does this actually look like for a real neuron? What do you imagine it looks like? AUDIENCE: [INAUDIBLE] MICHALE FEE: Right, so immediately after a spike, there is zero probability of having another spike. So this starts at zero, climbs up here, and then decays exponentially. OK, so that's what most interspike interval distributions for real neurons actually look like. So this is probability density. This is tau. And this is the refractory period. In fact, when you record from neurons with an electrode in the brain, you get lots of spikes. One of the first things you should do when you set a threshold and find those spike times is compute the interspike interval distribution. Because if you're recording from a single neuron, that interspike interval distribution will have this refractory period. If you're recording from-- it's quite easy to get spikes from multiple neurons on the end of an electrode.
And what happens if you have multiple neurons on your electrode, you can think the spikes are coming from one neuron, but in fact they're coming from two neurons. And if you compute the interspike interval distribution, it won't have this dip here. So it's a really important tool to use to test whether your signal is clean. You'd be amazed at how few people actually do that. All right, I just want to introduce you to one important term. And that's called homogeneous versus inhomogeneous in the context of a Poisson process. What that means is that a homogeneous Poisson process has a constant rate, mu. Most neurons don't do that. Because in most neurons, there's information carried in the fluctuation of the firing rate. And so most neurons behave more like what's called an inhomogeneous Poisson process, where the rate is actually fluctuating in time. And the example we were looking at before shows you what an inhomogeneous Poisson process would look like. So let's see what's next. All right, so let's change topics to talk about convolution, cross-correlation, and auto-correlation functions. All right, so we've been using the notion of a convolution where we have a kernel that we multiply by a signal. We multiply that, and integrate over time, and then we slide that kernel across the signal, and ask, where does the signal have structure that looks like the kernel. And that gives you an output y of T. So we've used that to model the membrane potential of a neuron in response to synaptic input. So in that case, we had spikes coming from a presynaptic neuron that generate a response in the postsynaptic neuron. And we can write the response of the postsynaptic neuron as a convolution of that exponential with the input spike train. We've used it to model the response of neurons to a time-dependent stimulus. And I've described how you can use it to implement a low-pass filter or a high-pass filter.
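The unit-isolation check just described -- look for intervals shorter than the refractory period -- can be sketched as a simple function. The 2 ms threshold and the function name are assumptions for illustration:

```python
import numpy as np

def refractory_violation_fraction(spike_times, refractory=0.002):
    # Fraction of interspike intervals shorter than the refractory period
    # (2 ms assumed here). A well-isolated single unit should give ~0;
    # a mixture of two neurons on one electrode will not show the dip.
    isis = np.diff(np.sort(np.asarray(spike_times)))
    return float(np.mean(isis < refractory))
```

Running this on your sorted spike times right after thresholding is a cheap way to test whether your signal is clean.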
And we talked about doing that for extracting either low-frequency signals, to get the local field potential out of neurons, or doing a high-pass filter to get rid of the low-frequency signals so that you can see the spikes. OK, but most generally, convolution is-- you should think about it as allowing you to model how a system responds to an input where the response of the system is controlled by a linear filter. The system is sensitive to particular patterns in the input. Those patterns can be detected by a filter. So you apply that filter to the input, and you estimate how the system responds. And it's very broadly useful in engineering and biology and neuroscience to use convolutions to model how a system responds to its input. All right, so there is a different kind of function, called a cross-correlation function, that looks like this. And it looks very similar to a convolution, but it's used very differently. Now, you might think, OK, there's just a sign change. Here I have a T minus tau, and here I have a T plus tau. Is that the only difference between those? It's not. Because here I'm integrating over a d tau, and here I'm integrating over dT. Here I'm getting the response as a function of time. Here, what I'm doing is I'm kind of extracting the kernel. What I'm getting out of this is a kernel, k of tau, as a function of the lag tau. OK, so let me walk through how that works. So in this case, we have two signals, x and y. And what we're doing is we're taking those two signals and we're multiplying them by each other, x times y. And what we're doing is we're multiplying the signals and then integrating over the product. And then we shift one of those signals by a little amount, tau. And we repeat that process. So let's see what that looks like. So what do you see here? You see two different signals, x and y. They look kind of similar. You can see this one has a bump here, and a couple bumps there, and a bump there.
And you see something pretty similar to that here, right? Here's three big bumps. Here's three big bumps. So those signals are actually quite similar to each other, but they're not exactly the same. Right? You can see that these fast little fluctuations are different on those two signals. Now, what happens when we take this signal-- and you can see that there's this offset here. So what happens if we take x times y, we multiply them together, and we integrate the result? Then we shift y a little bit, by an amount, tau, and we multiply those two signals together, and we integrate. And we keep doing that. And now we're going to plot this k as a function of that little shift, tau, that we put in there. And here's what that looks like. So k is the cross-correlation-- sometimes called the lag cross-correlation-- of x and y. So that little circle is the symbol for lag cross-correlation. And you can see that, at a particular lag, like right here, the positive fluctuations in this signal are going to line up with the positive fluctuations in that signal. The negative fluctuations in this signal are going to line up with the negative fluctuations in that signal. And when you multiply them together, you're going to get the maximum positive contribution. Those signals are going to be maximally overlapped at a particular shift, tau. And when you plot that function, k of tau, you're going to have a peak at that lag that corresponds to where those signals are maximally overlapped. So a lag cross-correlation function is really useful for finding, let's say, the time at which two signals-- like if one signal is like a copy of the other one, but it's shifted in time, a lag cross-correlation function is really useful for finding the lag between those two functions. There's another context in which this lag cross-correlation function is really useful.
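Finding the lag between a signal and a delayed copy of itself is exactly what `np.correlate` does; here is a sketch with made-up signal length, lag, and noise level:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(500)
true_lag = 40
# y is a delayed, slightly noisy copy of x (np.roll wraps around, which
# is fine here since we only care about the main peak).
y = np.roll(x, true_lag) + 0.1 * rng.standard_normal(500)

# Lag cross-correlation: shift one signal, multiply, and sum, at every lag.
xcorr = np.correlate(y, x, mode='full')
lags = np.arange(-len(x) + 1, len(x))
best_lag = int(lags[np.argmax(xcorr)])   # lag of maximum overlap
```

This also illustrates the sign advice from the lecture: build test signals where you know which one comes first, and confirm which side of zero the peak lands on before trusting the convention.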
So when we did the spike-triggered average, when we took spikes from a neuron, and we-- at the time of those spikes, we extracted what the input was to the neuron, and we averaged those inputs. What we were really doing was we were doing a cross-correlation between the spike train of a neuron and the stimulus that drove that neuron, the stimulus that was input to that neuron. And that cross-correlation was just the kernel that described how you get from that input to the neuron to the response of the neuron. So spike-triggered average is actually sometimes called reverse correlation. And the reverse comes from the fact that you have to actually flip this over to get the kernel. So let's not worry about that. But it's sometimes referred to as the correlation between the spike train and the stimulus input. Yes. AUDIENCE: [INAUDIBLE] MICHALE FEE: So there's no convolution here. We're doing the lag cross-correlation, OK. We're just taking those two signals, and shifting one of them, multiplying, and integrating. Sorry. Ask your question. I just wanted to clarify, there's no convolution being done here. AUDIENCE: So [INAUDIBLE] is that y [INAUDIBLE] MICHALE FEE: Ah, OK, great. That's actually one of the things that's easiest to get mixed up about when you do lag cross-correlation. You have to be careful about which side this peak is on. And I personally can never keep it straight. So what I do is just make two test functions, and stick them in. And make sure that if I have one of my functions-- so in this case, here are two test functions. You can see that x is-- sorry, y is before x. And so this peak here corresponds to y happening before x. So you just have to make sure you know what the sign is. And I recommend doing that with just two functions that you know. It's easy to get it backwards. Yes. AUDIENCE: [INAUDIBLE] part of y equals [INAUDIBLE]. MICHALE FEE: What do you mean, first part? AUDIENCE: The very first [INAUDIBLE]. MICHALE FEE: Oh, these?
AUDIENCE: Yes. MICHALE FEE: I'm just taking copies of this, and placing them on top of each other, and shifting them. So you should ignore this little part right here. AUDIENCE: [INAUDIBLE] MICHALE FEE: Oh, yeah. So there are Matlab functions to do this cross-correlation. And it handles all of that stuff for you. Yes. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes. AUDIENCE: Like, is the x-axis tau? MICHALE FEE: The x-axis is tau. I should have labeled it on there. AUDIENCE: I just want to clarify whether they meet. So how it's kind of flat, and then this idea of the flat area that, when you multiply them and they're slightly off, this is generally not zero. But you know, they cancel each other out. MICHALE FEE: So you can see that if these things are shifted-- if they're kind of noisy, and they're shifted with respect to each other, you can see that the positive parts here might line up with the negative parts there. And the negative parts here line up with the positive parts there. And they're shifted randomly at any other time with respect to each other. On average, you're going to have some parts where it's positive times positive, and other parts where it's positive times negative. And it's all going to wash out and give you zero. Because those signals are uncorrelated with each other at different time lags. AUDIENCE: So only when they perfectly overlap and kind of exaggerate each other is when they'll be flat. MICHALE FEE: Exactly. It's only when they overlap that all of the positive parts here will line up with the positive parts here, and the negative parts here will line up with the negative parts. AUDIENCE: And since you're taking the integral together, it just makes a very large positive. MICHALE FEE: Exactly. All of those positive-times-positive terms add up with the negative-times-negative terms, which are also positive. And all of that adds up to give you that positive peak right there.
But when they're shifted, all of those magical alignments of positive with positive and negative with negative go away. And it just becomes, on average, just random. And random things have zero correlation with each other. So this is actually a really powerful way to discover the relation between two different signals. And in fact, it gives you a kernel that you can actually use to predict one signal from another signal. Now, if you were to take x and convolve it with k, you would get an estimate of y. We'll talk more about that later. Pretty cool, right? So they're mathematically very similar. They look similar. But they're used in different ways. So the way we think about convolution is-- what it's used for is to take an input signal x, and convolve it with a kernel, k, to get an output signal, y. And we usually think of x as being very long vectors, long signals. K here, kappa, is a kernel. It's just a short little thing in time. And you're going to convolve it with a long signal to get another long signal. Does that make sense? In cross-correlation, we have two signals, x and y. x and y are the inputs. And we cross-correlate them to extract a short signal that captures the temporal relation between x and y. All right, so x and y are long signals. K is a short vector. And in this case, we're convolving a long signal with a short kernel to get another long signal, which is the response of the system. In this case, we take, let's say, the input to a system and the output of this system, and do a cross-correlation to extract the kernel. All right, any questions about that? They're both super-powerful methods. Yes, [INAUDIBLE]. AUDIENCE: How does the cross-correlation-- mathematically, how does it give a short vector [INAUDIBLE]?? MICHALE FEE: You just do this for a small number of taus. Because usually signals that have some relation between them, that relation has a short time extent.
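Both directions can be sketched in a few lines (Python/NumPy as a stand-in for the MATLAB workflow; the kernel and all names here are made up for illustration): convolve a long white-noise input with a short kernel to get the output, then cross-correlate output against input to recover that kernel.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
x = rng.standard_normal(N)              # long white-noise input to the system

# The system's (made-up) short kernel: an exponential smoothing filter.
k_true = np.exp(-np.arange(20) / 5.0)

# Convolution direction: long input * short kernel -> long output.
y = np.convolve(x, k_true)[:N]

# Cross-correlation direction: long input and long output -> short kernel.
# Because white noise is uncorrelated with itself at nonzero lags, the
# output-vs-input cross-correlation at lag tau recovers k_true[tau]
# (dividing by N works here because the input has unit variance).
k_est = np.array([np.sum(y[tau:] * x[:N - tau]) for tau in range(20)]) / N
```

The recovered k_est matches k_true up to estimation noise that shrinks as the signals get longer, which is why this trick works well on long recordings.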
This signal here, in a real physical system, doesn't depend on what x was doing a month ago, or maybe a few seconds ago. These signals might be 10 or 20 seconds long, or an hour long, but this signal, y, doesn't depend on what x was doing an hour ago. It only depends on what x was doing maybe a few tens of milliseconds or a second ago. So that's why x and y can be long signals. K is a short vector that represents the kernel. Any more questions about that? OK, these are very powerful methods. We use them all the time. I talked about the relation to the spike-triggered average. The autocorrelation is nothing more than a cross-correlation of a signal with itself. So let's see how that's useful. So an autocorrelation is a good way to examine a temporal structure within a signal. So if we take a signal, x, and we calculate the cross-correlation of that signal with itself, here's what that looks like. So let's say we have a signal that looks like this. And this signal kind of has slowish fluctuations that are, let's say, on a 50 or 100 millisecond timescale. If we take that signal and we multiply it by itself with zero time lag, what do you think that will look like? So positive lines up with positive, negative lines up with negative. That thing should do what? It should be a maximum, right? Autocorrelations are always a maximum at zero lag. And then what we do is we're just going to take that signal and shift it a little bit. And we'll shift it a little bit more, do that product and integral, shift that product and integral. Now what's going to happen as we shift one of those signals sideways? And then multiply and integrate. Sammy, you got this one. AUDIENCE: Oh, at first, [INAUDIBLE] zero [INAUDIBLE] point where the plus and minuses cancel out. [INAUDIBLE] this. If you [INAUDIBLE] maybe [INAUDIBLE] where the pluses overlap the second time. MICHALE FEE: Yeah, so if you shift it enough, it's possible that it might overlap again somewhere else. 
What kind of signal would that happen for? AUDIENCE: Like, cyclical. MICHALE FEE: Yeah, a periodic signal. Exactly. So an autocorrelation first of all has a peak at zero lag. That peak drops off when these fluctuations here no longer line up. So the positive lines up with positive, negative lines up with negative. As you shift one of them, those positive and negatives no longer overlap with each other, and you start getting positives lining up with negatives. And so the autocorrelation drops off. And then it can actually go back and indicate-- it can go back up if there's periodic structure. So let's look at what this one looks like. So in this case, it kind of looked like there might be periodic structure, but there wasn't, because this was just low-pass-filtered noise. And you can see that if you compute the-- there's that Matlab function, xcorr. So you use it to calculate the autocorrelation. You can see that that autocorrelation is peaked at zero lag, drops off, and the lag at which it drops off depends on how smoothly varying that function is. If the function varies-- if it's changing very slowly, the autocorrelation is going to be very wide. Because you have to shift it a lot in order to get the positive peaks and the negative peaks misaligned from each other. Now, if you have a fast signal like this, with fast fluctuations, and you do the autocorrelation, what's going to happen? So first of all, at zero lag, it's going to be peaked. And then what's going to happen? It's going to drop off more quickly. So that's exactly what happens. So here's the autocorrelation of this slow function. Here's the autocorrelation of this fast function. And you can see both of these are just noise. But I've smoothed this one. I've low-pass-filtered this with kind of a 50-millisecond-wide kernel to give very slowly-varying structure. This one I smoothed with a very narrow kernel to leave fast fluctuations.
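A minimal sketch of that comparison (Python/NumPy as a stand-in for MATLAB's xcorr; the boxcar smoothing widths are just illustrative): smooth the same noise with a wide and a narrow kernel, and compare the widths of the resulting autocorrelations.

```python
import numpy as np

rng = np.random.default_rng(2)
noise = rng.standard_normal(20000)

# The same noise smoothed with a wide (50-sample) and a narrow (5-sample)
# boxcar kernel -- a stand-in for the low-pass filtering described above.
slow = np.convolve(noise, np.ones(50) / 50, mode='same')
fast = np.convolve(noise, np.ones(5) / 5, mode='same')

def autocorr(x, max_lag):
    """Autocorrelation for lags 0..max_lag-1, normalized to 1 at zero lag."""
    x = x - x.mean()
    c = np.array([np.sum(x[:len(x) - tau] * x[tau:]) for tau in range(max_lag)])
    return c / c[0]

ac_slow = autocorr(slow, 100)
ac_fast = autocorr(fast, 100)
# ac_slow stays correlated out to much longer lags than ac_fast:
# the autocorrelation width tracks the smoothing width.
```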
And you can see that the autocorrelation shows that the width of the smoothing of this signal is very narrow. And the width of the smoothing for this signal was broad. In fact what you can see is that this signal looks like noise that's been convolved with this, and this signal looks like noise that's been convolved with that. And here is actually a demonstration of what Sammy was just talking about. If you take a look at this signal, so the autocorrelation can be a very powerful method. It's actually not that powerful. It's a method for extracting periodic structure. And we're going to turn now, very shortly, to a method that really is very powerful for extracting periodic structure. But I just want to show you how this method can be used. And so if you look at that signal right there, it looks like noise. But there's actually structure embedded in there that we can see if we do the autocorrelation of this signal. And here's what that looks like. So again, autocorrelation has peaked. The peak is very narrow, because that's a very noisy-looking signal. But you can see that, buried under there, is this periodic fluctuation. What that says is that if I take a copy of that signal and shift it with respect to itself every 100 milliseconds, something in there starts lining up again. And that's why you have these little peaks in the autocorrelation function. And what do you think that is buried in there? The sine wave. So this data is random, with a normal distribution, plus 0.1 times cosine that gives you a 10 hertz wiggle. So you can't see that in the data. But if you do the autocorrelation, all of a sudden you can see that it's buried in there. All right, so cross-correlation is a way to extract the temporal relation between different signals. Autocorrelation is a way to use the same method essentially to extract the temporal relation between a signal and itself at different times. 
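You can reproduce the buried-sine demonstration directly (a Python/NumPy sketch; the noise-plus-0.1-times-cosine construction matches the example described above, everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000                               # sample rate, Hz
t = np.arange(1_000_000) / fs
# Unit-variance noise plus a 0.1-amplitude 10 Hz wiggle, as described above.
# The wiggle is invisible in the raw trace.
x = rng.standard_normal(len(t)) + 0.1 * np.cos(2 * np.pi * 10 * t)

max_lag = 151
ac = np.array([np.sum(x[:len(x) - tau] * x[tau:]) for tau in range(max_lag)])
ac = ac / ac[0]

# Away from the sharp zero-lag peak, ac oscillates with the sine's period:
# small positive bumps every 100 samples (100 ms), troughs in between.
```

Because the sine's contribution to the autocorrelation is only 0.1 squared over 2, you need a fairly long recording before those 100-ms bumps climb out of the estimation noise, which is why the example uses so many samples.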
And that method, it's actually quite commonly used to extract structure in spike trains. But there are much more powerful methods for extracting periodic structure. And that's what we're going to start talking about now. OK, any questions? Let's start on the topic of spectral analysis, which is the right way to pull out the periodic structure of signals. Here's the spectrogram that I was actually trying to show you last time. So what is a spectrogram? So if we have a sound-- we record a sound with a microphone, microphones pick up pressure fluctuations. So in this case, this is a bird singing. The vocal cords are vibrating with air flowing through them that produce very large pressure fluctuations in the vocal tract. And those are transmitted out through the beak, into the air. And that produces pressure fluctuations that propagate through the air at about a foot per millisecond. They reach your ear, and they vibrate your eardrum. And if you have a microphone there, you can actually record those pressure fluctuations. And if you look at it, it just looks like fast wiggles in the pressure. But somehow your ear is able to magically transform that into this neural representation of what that sound is. And what your ear is actually doing is it's doing a spectral analysis of the sound. And then your brain is doing a bunch of processing that helps you identify what that thing is that's making that sound. So this is a spectral analysis of this bit of birdsong, canary song. And what it is, it does very much what your eardrum does-- or what your cochlea does. Sorry. It calculates how much power there is at different frequencies of the sound and at different times. So what this says is that there's a lot of power in this sound at 5 kilohertz at this time, but at this time, there's a lot of power at 3 kilohertz or 2 kilohertz and so on. So it's a graphical way of describing what the sound is. And here's what that looks like.
[DESCENDING WHISTLING BIRDSONG] [FULL-THROATED CHIRP] [RAPID CHIRPS] [SIREN-LIKE CHIRPS] [STUDENTS LAUGH] So you can see visually what's happening. Even though if you were to look at those patterns on an oscilloscope or printed out on the computer, it would just be a bunch of wiggles. You wouldn't be able to see any of that. Here's another example. This is a baby bird babbling. And here's an example of what those signals actually look like-- just a bunch of wiggles. It's almost completely uninterpretable. But again, your brain does this spectral analysis. And here I'm showing a spectrogram. Again, this is frequency versus time. It's much more cluttered visually. And it's a little bit harder to interpret. But your brain actually does a pretty good job figuring out-- [RANDOM SQUEAKY CHIRPING] --what's going on there. We can also do spectral analysis of neural signals. So one of the really cool things that happens in the cortex is that neural circuits produce oscillations. And they produce oscillations at different frequencies at different times. If you close your eyes-- everybody just close your eyes for a second. As soon as you close your eyes, the back of your cortex starts generating a big 10 hertz oscillation. It's wild. You just close your eyes, and it starts going-- [MAKES OSCILLATING SOUND WITH MOUTH] --at 10 hertz. This is a rhythm that's produced by the hippocampus. So whenever you start walking, your hippocampus starts generating a 10 hertz rhythm. When you stop and you're thinking about something or eating something, it stops. As soon as you start walking-- [MAKES OSCILLATING SOUND WITH MOUTH] --10 hertz rhythm. And you can see that rhythm often in neural signals. Here you can see this 10 Hertz rhythm. It has a period of about 100 milliseconds. But on top of that, there are much faster rhythms. It's not always so obvious in the brain what the rhythms are. And you need spectral analysis techniques to help pull out that structure. 
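Before diving into the theory, here is a bare-bones sketch of what a spectrogram computation actually does (Python/NumPy; MATLAB and SciPy both provide a full-featured spectrogram function, and this simplified version, with its made-up test signal, is only meant to show the idea): slice the signal into short overlapping windows, Fourier-transform each one, and stack the power as columns.

```python
import numpy as np

def simple_spectrogram(x, fs, nperseg=512):
    """Power at each (frequency, time) bin via windowed FFTs of short segments.
    A bare-bones sketch of what standard spectrogram routines compute."""
    hop = nperseg // 2                       # 50% overlap between segments
    window = np.hanning(nperseg)
    starts = range(0, len(x) - nperseg + 1, hop)
    S = np.array([np.abs(np.fft.rfft(window * x[s:s + nperseg])) ** 2
                  for s in starts]).T        # rows: frequency, columns: time
    f = np.fft.rfftfreq(nperseg, d=1 / fs)   # frequency axis, Hz
    tt = np.array([(s + nperseg / 2) / fs for s in starts])  # bin centers, s
    return f, tt, S

# Toy "LFP": noise, with a 10 Hz oscillation present only from t = 3 s to 7 s.
fs = 1000
t = np.arange(10 * fs) / fs
x = np.random.default_rng(4).standard_normal(len(t))
x[3000:7000] += 2.0 * np.sin(2 * np.pi * 10 * t[3000:7000])

f, tt, S = simple_spectrogram(x, fs)
i10 = np.argmin(np.abs(f - 10))             # frequency bin nearest 10 Hz
```

In the resulting S, the row near 10 Hz lights up only in the time bins between 3 and 7 seconds, even though the oscillation is hard to see in the raw trace.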
You can see here, here is the frequency as a function of time. I haven't labeled the axis here. But this bright band right here corresponds to this 10 hertz oscillation. So we're going to take a little bit of a detour into how to actually carry out state-of-the-art spectral analysis like this to allow you to detect very subtle, small signals in neural signals, or sound signals, or any kind of signal that you're interested in studying. Jasmine. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah, OK, so this is called a color map. I didn't put that on the side here. But basically dark here means no power. And light blue to green is more power. Yellow to red is even more. Same here. So red is most power. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes. So it's how much energy there is at different frequencies in the signal as a function of time. So you're going to know how to do this. Don't worry. You're going to be world experts at how to do this right. Yes. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes. AUDIENCE: This is very similar to [INAUDIBLE].. MICHALE FEE: Yes. I'm sorry. I should have been clear-- this is recording from an electrode in the hippocampus. And these oscillations here are the local field potentials. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes. That's exactly right. Thank you. I should have been more clear about that. OK, all right, so in order to understand how to really do this, you have to understand Fourier decomposition. And once you understand that, the rest is pretty easy. So I'm going to take you very slowly-- and it may feel a little grueling at first. But if you understand this, the rest is simple. So let's just get started. All right, so Fourier series-- it's a way of decomposing periodic signals by applying filters to them. We're going to take a signal that's a function of time, and we're going to make different receptive fields. We're going to make neurons that are sensitive to different frequencies.
And we're going to apply those different filters to extract what different frequencies there are in that signal. So we're going to take this periodic signal as a function of time. It's a square wave. It has a period capital T. And what we're going to do is we're going to approximate that square wave as a sum of a bunch of different sine waves. So the first thing we can do is approximate it as a cosine. You can see the square wave has a peak there. It has a valley there. So the sine wave approximation is going to have a peak there and a valley there. We're going to approximate it as a cosine wave of the same period and the same amplitude. Now, we might say, OK, that's not a very good approximation. What could we add to it to make a better approximation? AUDIENCE: [INAUDIBLE] MICHALE FEE: Another cosine wave, which is what we're going to do in just a second. Because apparently there's more I wanted to tell you about this. So we're going to approximate this as a cosine wave that has a coefficient in front of it. We're going to approximate this as a cosine that has some amplitude a1, and it has some frequency, f0. So a cosine wave with a frequency f0 is this-- cosine of 2 pi f0 t, where f0 is 1 over the period. And you can also write-- so f0 is the oscillation frequency. It's cycles per second. There is an important quantity called the angular frequency, which is just 2 pi times f0, or 2 pi over T. And the reason is because if we write this as cosine of omega-0 t, then this thing just gives us 2 pi change in the phase over this interval T, which means that the cosine comes back to where it was. So if we want to make a better approximation, we can add more sine waves or cosine waves. And it turns out that we can approximate any periodic signal by adding more cosine waves that are multiples of this frequency, omega-0. Why is that? Why can we get away with just adding more cosines that are integer multiples of this frequency, omega-0? Any idea?
Why is it not important to consider 1.5 times omega-0, or 2.7? AUDIENCE: Does it have something to do with [INAUDIBLE]? MICHALE FEE: It's close, but too complicated. AUDIENCE: [INAUDIBLE] sum just over the period of their natural wave, right? So if you add non-integer [INAUDIBLE], you lose that [INAUDIBLE]. MICHALE FEE: Right. This signal that we're trying to model is periodic, with frequency omega-0. And any integer multiple of omega-0 is also periodic at frequency omega-0. So notice that this signal that's cosine 2 omega-0 t is still periodic over this interval, T. Does that make sense? So any cosine of an integer multiple of omega-0 is also periodic with period T. OK? And those are the only frequencies that are periodic with period T. So we can restrict ourselves to including only frequencies that are integer multiples of omega-0, because those are the only frequencies that are also periodic with period T. We can write down-- we can approximate this square wave as a sum of cosines of different frequencies. And the frequencies that we consider are just the integer multiples of omega-0. Any questions about that? That's almost like the crux of it. There's just some more math to do. So here's why this works. So here's the square wave we're trying to approximate. We can approximate that square wave as a cosine. And then there's the approximation. Now, if we add a component that's some constant times cosine 3 omega-0 t, you can see that those two peaks there kind of square out those round edges of that cosine. And you can see it starts looking a little bit more square. And if you add a constant times cosine 5 omega-0 t, it gets even more square. And you keep adding more of these things until it almost looks just like a square wave. Here's another function. We're going to approximate a bunch of pulses that have a period of 1 by adding up a bunch of cosines. So here's the first approximation-- cosine omega-0 t. So there's your first approximation to this train of pulses.
Now we add to that a constant times cosine 2 omega-0 t. See that peak gets a little sharper. Add a constant times cosine 3 omega-0 t. And you keep adding more and more terms, and it gets sharper and sharper, and looks more like a bunch of pulses. Why is that? It's because all of those different cosines add up constructively right here at 0. And so those things all add up. And this peak stays, because the peak of those cosines is always at 0. Over here, though, right next door, all of those, you can see that that cosine is canceling that one. It's canceling that one. Those two are canceling. You can see all these peaks here are canceling each other. And so that goes to 0 in there. Now of course all these cosines are periodic with a period T. So if you go one T over, all of those peaks add up again and interfere constructively. So that's it. It's basically a way of taking any periodic signal and figuring out a bunch of cosines such that the parts you want to keep add up constructively and the parts that aren't there in your signal add up destructively. And there's very simple sort of mathematical tools-- basically it's a correlation-- to extract the coefficients that go in front of each one of these cosines. And then one more thing-- we use cosines to model or to approximate functions that are symmetric around the origin. Because the cosine function is symmetric. Other functions are anti-symmetric. They'll look more like a sine wave. They'll be negative here, and positive there. And we use sines to model those. And then the one last trick we need is we can model arbitrary functions by combining sines and cosines into complex exponentials. And we'll talk about that next time. And once we do that, then you can basically model not just periodic signals, but arbitrary signals. And then you're all set to analyze any kind of periodic signal in arbitrary signals. So it's a powerful way of extracting periodic structure from any signal. So we'll continue that next time.
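The square-wave construction described above can be written out in a few lines (a Python/NumPy sketch rather than the course's MATLAB; for a square wave only the odd harmonics survive, with the classic 4-over-pi coefficients):

```python
import numpy as np

T = 1.0                        # period of the square wave
w0 = 2 * np.pi / T             # fundamental angular frequency, 2*pi/T
t = np.linspace(-T, T, 2001)

# Square wave with even symmetry: +1 on the middle half of each period,
# -1 elsewhere (so only cosines are needed in the series).
square = np.where(np.abs(((t / T + 0.5) % 1.0) - 0.5) < 0.25, 1.0, -1.0)

# For this square wave only odd harmonics survive, with coefficients
# a_k = (4 / pi) * (-1)^k / (2k + 1): the classic Fourier series.
approx = np.zeros_like(t)
for k in range(50):
    n = 2 * k + 1
    approx += (4 / np.pi) * ((-1) ** k / n) * np.cos(n * w0 * t)

# With 50 terms, approx matches the square wave closely away from the jumps
# (the ripples right at the edges are the Gibbs phenomenon).
```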
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 1: Course Overview and Ionic Currents (Intro to Neural Computation)
MICHALE FEE: OK. So let's go ahead and get started. So what is neural computation? So neuroscience used to be a very descriptive field where you would describe the different kinds of neurons. Who here has seen the famous pictures-- the old pictures of the golgi-stained neurons, all those different types of neurons, describing what things look like in the brain and what parts of the brain are important for what kinds of behavior based on lesion studies. It used to be extremely descriptive. But things are changing in neuroscience, and have changed dramatically over the past few decades. Really, neuroscience now is about understanding the brain, how the brain works, how the brain produces behavior. And really trying to develop engineering-level descriptions of brain systems and brain circuits and neurons and ion channels and all the components of neurons that make the brain work. And so, for example, the level of description that my lab works at and that I'm most excited about is understanding how neural circuits-- how neurons are put together to make neural circuits that implement behaviors, or to produce let's say object recognition. So this is a figure from Jim DiCarlo, who is our department head. Basically a circuit-level description of how the brain goes from a visual stimulus to a recognition of what that object is in the stimulus. Now at the same time that there's been a big push toward understanding or generating an engineering level descriptions of brains and circuits and components, neurons, there's also been tremendous advances in the technologies that we can use to record neurons. So there are now imaging systems and microscopes that can image thousands of neurons simultaneously. This is an example of a movie recorded in an awake baby mouse that's basically dreaming. And let me just show you what this looks like. So this is a mouse that has a fluorescent protein that's sensitive to neural activity. 
And so when neurons in a part of the brain become active they become fluorescent and light up. And so here's a top surface of the mouse's brain. And you can see this spontaneous activity flickering around as this mouse is just dreaming and thinking about whatever it's thinking about. So one of the key challenges is to take images like this that represent the activity of thousands of neurons or millions of neurons and figure out how to relate that to the circuit models that are being developed. So here's another example. So there are these new probes where these are basically silicon probes that have thousands of little sensors on them and a computer here that basically reads out the pattern of activity. These are called neuropixels. So those are basically electrodes that can, again, record from thousands of neurons simultaneously. And they're quite long and can record throughout the whole brain, essentially, all at once. So the key now is you have these very high dimensional data set. How do you relate that to the circuit models that you're developing? And so one of the key challenges in neuroscience is to take very large data sets that look like this that just look like a mess and figure out what's going on underneath of there. It turns out that people are discovering that while you might be recording from tens of thousands of neurons and it looks really messy that there's some underlying very simple structure underneath of there. But you can't see it when you just look at big collections of neurons like this. So the challenge here is really to figure out how to not only just make those models, but test them by taking data and relating the patterns of activity that you see in these very high dimensional data sets, do dimensionality reduction-- compress that data down into a simple representation-- and then relate it to those models that you developed. 
One of the things we're going to try to do in this class is to apply these techniques of making models of neurons and circuits together with mathematical tools for analyzing data in the context of looking at animal behaviors. So for example, in my lab we study how songbirds sing, how they learn to produce their vocalizations. Songbirds learn by imitating their parents. They listen to their parents. [BIRDS SINGING] Here, hold on. I'm going to skip ahead. How do I do that? [BIRDS SINGING] [INAUDIBLE] bring up the-- I was hoping I'd be able to skip ahead. So this is just a setup showing how we can record from neurons in birds while they're singing and figure out how those circuits work to produce the song. This is a little micro-drive that we built. It's motorized so that we can move these electrodes around independently in the brain and record from neurons without the animal knowing that we're moving the electrodes around and looking for neurons. So songbirds are really cool. They listen to their parents. They store a memory of what their parents sing. And then they begin babbling. And they practice over and over again until they can learn a good copy of their song. So here's a bird that's singing with the micro-drive on its head. And you can hear the neuron in the background. [STATIC SOUNDS] Sorry, it's not over the loudspeaker here. But can everyone hear that? So we can record from neurons while the bird is singing. [BIRDS SINGING] Look at the activity in this network and try to figure out how that network actually works to produce the song. And also we can record in very young birds and figure out how the song is actually learned. And there's an example of a neuron generating action potentials, which is the basic unit of communication in the brain. [BIRDS SINGING] And we try to build circuit models and figure out how that thing actually works to produce and learn this song. 
So these computational approaches that I'm talking about are not just important for dissecting brain circuits related to behavior. The same kinds of approaches, the same kind of dimensionality reduction techniques we're going to learn are also useful in molecular genetic studies, like taking transcriptional profiling and doing clustering and looking at the different patterns that are there. It's also useful for molecular studies. Also, these ideas are very powerful in studying cognition. So if you look at the work that Josh Tenenbaum does and Josh McDermott, who developed mathematical models of how our minds work, how we learn to think about things, those are also very model-based and very quantitative. So the kinds of tools we're going to learn in this class are very broadly applicable. They're also increasingly important in medicine. So at some point we're going to take a little bit of a detour to look at a particular disease that's caused by a defect in an ion channel. And it turns out you can understand exactly how that defect in that ion channel relates to the phenotype of the disease. And you can do that by creating a mathematical model of how a neuron behaves when it has an ion channel that has this defect in it. So it's very cool. And once you model it, you can really understand why that happens. So here are some of the course goals. So we're going to start by working on basic biophysics of neurons and networks and other principles underlying brain and cognitive functions. We're going to develop mathematical techniques to analyze those models and to analyze the behavioral data and neural data that you would take to study those brain circuits. And along the way, we're going to become proficient at using MATLAB to do these things. So how many of you have experience with MATLAB? OK, great. And not? So anybody who doesn't have experience with MATLAB, we're going to really make an effort to bring you up to speed very quickly. 
Daniel has actually just created a very nice MATLAB cheat sheet that's just amazing. So there will be lots of help with programming. So let me just mention some of the topics that we'll be covering. So we'll be talking about the equivalent circuit model of neurons. So let me just explain how this is broken down. So these are topics that we'll be covering. And these are the mathematical tools that go along with those topics that we'll be learning about in parallel. So we'll be studying neuronal biophysics. And we'll be doing some differential equations along the way for that, just first-order linear differential equations, nothing to be scared of. We'll talk about neuronal responses to stimuli and tuning curves. And along the way, we'll be learning about spike sorting, peristimulus time histograms, and ways of analyzing firing patterns. We'll talk about neural coding and receptive fields. And we'll learn about correlation and convolution for that topic. We'll talk about feed forward networks and perceptrons. And then we're going to start bringing in a lot of linear algebra, which is really fun. It's really powerful. And that linear algebra sets the stage for then doing dimensionality reduction on data, and principal component analysis, and singular value decomposition, and other things. We'll then take an additional extension of neural networks from feed forward networks. We'll figure out how to make them talk back to themselves so they can start doing things like remember things and make decisions. And that involves more linear algebra, eigenvalues. And then I'm not sure we're going to get to sensory integration and Bayes' rule. So by the end of the class, there are some important skills that you'll have. You'll be able to think about a neuron very clearly and how its components work together to give that neuron its properties. And how neurons themselves can connect together to give a neural circuit its properties.
You'll be able to write MATLAB programs that simulate those models. You'll be able to analyze data using MATLAB. You'll be able to visualize high dimensional data sets. And one of my goals in this class is that you guys should be able to go into any lab in the department and do cool things that even the graduate students may not know how to do. And so you can do really great stuff as a UROP. So one of the most important things about this class is problem sets because that's where you're going to get the hands-on experience to do that data analysis and write programs and analyze the data. Please install MATLAB. It's really important, if you don't already have it. We use live scripts for problem set submissions. And Daniel made some nice examples on Stellar. And of course the guidelines for Pset submissions are also on Stellar. OK, that's it. Any questions about that? No? All right, good. So let's go ahead and get started then with the first topic. OK. So the first thing we're going to do is we're going to build a model of a neuron. This model is very particular. It uses electrical components to describe the neuron. Now that may not be surprising since a neuron is basically an electrical device. It has components that are sensitive to voltages, that generate currents, that control currents. And so we're going to build our model using electrical circuit components. And one of the nice things about doing that is that every electrical circuit component, like a resistor or a capacitor, has a very well-defined mathematical relation between the current and the voltage, the current that flows through that device and the voltage across the terminals of that device. So you can write down very precisely, mathematically, what each of those components does.
So then you can take all those components and construct a set of equations-- or, in general, a set of differential equations-- that allows you to basically evolve that circuit over time and plot, let's say, the voltage on the inside of the cell as a function of time. And you can see that that model neuron can actually very precisely replicate many of the properties of neurons. Now neurons are actually really complicated. And this is the real reason why we need to write down a model. So there are many different kinds of neurons. Each type of neuron has a different pattern of genes that are expressed. So this is a cluster diagram of neuron types based on transcriptional profiling of the RNA of, I think, about 13,000 neurons that were extracted from a part of the brain. You do a transcriptional profiling. It gives you a map of all the different genes that are expressed in each neuron. And then you can cluster them and you can see that this particular part of the brain, which is in the hypothalamus, expresses all of these different cell types. Now what are those different genes? Many of those different genes are actually different ion channels. And there are hundreds of different kinds of ion channels that control the flow of current across the membrane of the neuron. So this is just a diagram showing different potassium ion channels, different calcium ion channels. You can see they have families and different subtypes. And all of those different ion channels have different timescales on which the current varies as a function of voltage change. They have different voltage ranges that they're sensitive to. They have different inactivation. So many ion channels, when you turn them on, they stay on. But other ion channels, they turn on and then they slowly decay away. The current slowly decays away. And that's called inactivation. And all these different ion channels have different combinations of those properties.
And it's really hard to predict how this neuron will behave with a different kind of ion channel here. It's super hard to just look at the properties of an ion channel and see how that's going to work in a neuron, because you have all these different parts that are working together. And so it's really important to be able to write down a mathematical model. If you have a neuron that has a different kind of ion channel, you can actually predict how the neuron's going to behave. Now that's just the ion channel components. Neurons also have complex morphologies. This is a Purkinje cell in the cerebellum. They have these very densely elaborated dendrites. Other neurons have very long dendrites with just a few branches. Other neurons have very short stubby dendrites. And each of those different morphological patterns also affects how a neuron responds to its inputs, because now a neuron can have inputs out here at the end of the dendrite or up close to the soma. And all of those, the spatial structure, also affects how a neuron responds. And those produce very different firing patterns. So some neurons, if you put in a constant current, they just fire regularly. So it turns out we can really understand why all these different things happen if we build a model like this. So let me just point out a couple of other interesting things about this model. Different parts of this circuit actually do different cool things. So neurons have not just one power supply. They've got multiple power supplies to power up different parts of the circuit that do different things. Neurons have capacitances that allow a neuron to accumulate charge over time and act as an integrator. If you combine a capacitor with a resistor, that circuit now looks like a filter. It smooths its past inputs over time. And these two components here, this sodium current and this potassium current, make a spike generator that generates an action potential that then talks to other neurons.
And you put that whole thing together, and that thing can act like an oscillator. It can act like a coincidence detector. It can do all kinds of different cool things. And all that stuff is understandable if you just write down a simple model like this. Any questions? So what we're going to do is we're going to just start describing this network. We're going to build it up one piece at a time. And we're going to start with a capacitance. But before we get to the capacitor, we need to do one more thing. We need to do one thing first, which is figure out what the wires are in the brain. For an electrical circuit, you need to have wires. So what are the wires in the brain? What do wires do in a circuit? They carry current. So what are the wires in a neuron? AUDIENCE: Axons? MICHALE FEE: What's that? AUDIENCE: Axons? MICHALE FEE: Axons. So axons carry information. They carry a spike that travels down the axon and goes to other neurons. But there is even a simpler answer than that. Yes? AUDIENCE: Ion channels? MICHALE FEE: Ion channels are these resistors here. But what is it that connects all those components to each other? AUDIENCE: Intracellular and extracellular. MICHALE FEE: Excellent. It's the intracellular and extracellular solution. And so what we're going to do today is to understand how the intracellular and extracellular solution acts as a wire in our neuron. And it's not quite as simple as a piece of metal. It's a bit more complicated. There are different ways you can get current flow in intracellular and extracellular solution. So we're going to go through that and we're going to analyze that in some detail. So in the brain, the wires are the intracellular and extracellular salt solutions. And you get current flow that results from the movement of ions in that aqueous solution. So the solution consists of ions. Like in the extracellular, it's mostly sodium ions and chloride ions that are dissolved in water. Water is a polar solvent. 
That means that the oxygen, which is slightly negatively charged, is attracted toward positive ions. And the intracellular and extracellular space are filled with salt solution at a concentration of about 100 millimolar. And that corresponds to having one of these ions about every 25 angstroms apart. So at those concentrations, there are a lot of ions floating around. And those ions can move under different conditions to produce currents. So currents flow in the brain through two primary mechanisms: diffusion, which is caused by variations in concentration, and drift of particles in an electric field. So if you take a beaker filled with salt solution and put two metal electrodes in it, you produce an electric field that causes these ions to drift-- and that's another source of current that we're going to look at today. So here are our learning objectives for today. We're going to understand how the timescales of diffusion relate to the length scales. That's a really interesting story. That's very important. We're going to understand how concentration gradients lead to currents. That's known as Fick's First Law. And we're going to understand how charges drift in an electric field in a way that leads to current, and the mathematical relation between currents and voltage differences. And this is called Ohm's Law in the brain. And we're going to learn about the concept of resistivity. So the first thing we need to talk about, if we're going to talk about diffusion, is thermal energy. So every particle in the world is being jostled by other particles that are crashing into it. And at thermal equilibrium, every degree of freedom-- every way that a particle can move: forward and backward, left and right, up and down, or rotations about any axis-- comes to equilibrium at a particular energy that's proportional to temperature.
In other words, if a particle is moving in this direction in equilibrium, it will have a kinetic energy in that direction that's proportional to the temperature. And that temperature is in units of kelvin relative to absolute zero. And the proportionality constant is the Boltzmann constant, which has units of joules per kelvin. So when you multiply the Boltzmann constant k times temperature, what you find is that every degree of freedom will come to equilibrium at an energy of about 4 times 10 to the minus 21 joules at room temperature. At zero temperature, you can see that every degree of freedom has zero energy. And so nothing is moving. Nothing's rotating, nothing's moving in any direction. Everything's perfectly still. So let's calculate how fast particles move at thermal equilibrium at room temperature. So you may remember from your first physics class that the kinetic energy of a particle is proportional to the velocity squared, 1/2 mv squared. So at thermal equilibrium, the average kinetic energy in one direction, 1/2 m times the average velocity squared, is just 1/2 kT. That makes sense? Now we can calculate how fast a particle is moving-- for example, a sodium ion. So you can see that the average velocity squared is just kT over m. We just divide both sides by m. So the average velocity squared is kT over m. The mass of a sodium ion is this. So the average velocity squared is 10 to the 5 meters squared per second squared. Just take the square root of that, and you get the average velocity is 320 meters per second. So that means that the air molecules, which have a similar mass to a sodium ion, are whizzing around at 300 meters per second. So that would cross this room in a few hundredths of a second. But of course, that's not what happens. Particles don't just go whizzing along at 300 meters per second. What happens to them? AUDIENCE: Bump into each other. MICHALE FEE: Into each other. They're all crashing into each other constantly.
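That back-of-the-envelope number is easy to check. Here's a quick sketch in Python (the course itself uses MATLAB); the sodium mass is my rough figure of 23 atomic mass units:

```python
import math

k_B = 1.38e-23        # Boltzmann constant, joules per kelvin
T = 300.0             # room temperature, kelvin
m_Na = 23 * 1.66e-27  # rough mass of a sodium ion, kg (23 atomic mass units)

# Equipartition: (1/2) m <v^2> = (1/2) k_B T  =>  <v^2> = k_B T / m
v_sq = k_B * T / m_Na
v = math.sqrt(v_sq)

print(f"kT at room temperature: {k_B * T:.1e} J")  # ~4e-21 J
print(f"thermal velocity: {v:.0f} m/s")            # a few hundred m/s
```

Plugging in the numbers reproduces the lecture's values: kT of about 4 times 10 to the minus 21 joules, and a thermal velocity in the ballpark of 300 meters per second.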
So in solution, a particle collides with a water molecule about 10 to the 13 times per second-- 10 to the minus 13 seconds between collisions. So that means the particle is moving a little bit, crashing, moving in a different direction, crashing, moving in a different direction and crashing. So if you follow one particle, it's just jumping around, it's diffusing. So what does that look like? Daniel made a little video that shows this to scale. This is position in microns. And time is in real-time. So this video shows in real-time what the motion of a particle might look like. At each point, it's moving, colliding, and moving off in some random direction. You can actually see this. If you look at a very small particle-- who was it, Daniel, who did that experiment looking at pollen? It's Brownian motion-- Brown. AUDIENCE: Yup. MICHALE FEE: What was his first name? Brown. Brownian motion. Have you heard of Brownian motion? So somebody named Brown was looking at pollen particles in water and noticing that they jump around, just like this. And he hypothesized that they were being jostled around by the water. Any questions? So what can we say about this? There's something really interesting about diffusion that's very non-intuitive at first. Diffusion has a really strange aspect to it: the distance that a particle can diffuse depends very much on the time that you allow. And it's not just a simple relation. So let's just look at this. So let's ask how much time it takes for an ion to diffuse a short distance, like across the soma of a neuron. So an ion can diffuse across the soma of a neuron in about a 20th of a second. How about down a dendrite? So let's start our ion in the cell body. And ask, how long does it take an ion to reach the end of a dendrite, which can be about a millimeter away. It can take about 10 minutes on average. That's how long it will take an ion to get that far away from its starting point. So you can see, a 20th of a second here.
And here it's like 500 seconds. About 10 minutes. How long does it take an ion, starting at the cell body, to diffuse all the way down-- so you know there are neurons in your body that start in your spinal cord and go all the way down to your feet. So motor neurons in your spinal cord can have very long axons. So how long does it take an ion to get from the soma all the way down to the end of an axon, a long axon? Somebody just take a guess. It's a 20th of a second here, 10 minutes here. Anybody want to guess? An hour, yup. 10 years. OK. Why is that? That's crazy, right? How is that possible? And that's an ion. So a cell body is making proteins and all kinds of stuff that have to get down to build synapses at the other end of that axon. And proteins diffuse a heck of a lot slower than ions do. So basically a cell body could make stuff for the axon, and it would never get there in your entire lifetime. And that's why cells have to actually make little trains. They literally make little trains. They package up stuff and put it on the train and it just marches down the axon until it gets to the end. And this is the reason why. So what we're going to do is I'm going to just walk you through a very simple derivation of why this is true and how to think about this. So here's what we're going to do. So normally things diffuse in three dimensions, right? But it's just much harder to analyze things in three dimensions. So you can get basically the right answer just by analyzing how things diffuse in one dimension. So Daniel made this little video to show you what this looks like. This is, I think, 100 particles all lined up near zero. And we're going to turn on the video. We're going to let them all start diffusing at one moment. So you can just watch what happens to all these different particles. So you can see that some particles end up over here on the left. Other particles end up over here on the right. You can see that the distribution of particles spreads out.
And so we're going to figure out why that is, why that happens. So the first thing I just want to tell you is that the distribution of particles, if they all start at zero, and they diffuse in 1D away from zero, the distribution that you get is Gaussian. And the basic reason is that, let's start at the center, and on every time step they have a probability of 1/2 of going to the right and 1/2 of going to the left. And so basically there are many more combinations of ways a particle can do some lefts and do some rights and end up back where it started. It's very unlikely that the particle will do a whole bunch of going right all in a row. And so that's why the density and the distribution is very low down here. And so you end up with something that's just a Gaussian distribution. So let's analyze this in a little more detail. So we're going to just make a very simple model of particles stepping to the right or to the left. We're going to consider a particle that is moving left or right at a fixed velocity vx for some time tau before a collision. And we're going to imagine that each time the particle collides it resets its velocity randomly, either to the left or to the right. So on every time step, half the particles will step right by a distance delta, which is the velocity times the time tau. And the other half of the particles will step left by that same distance. So they're going either to the left or to the right by a distance delta. So if we start with n particles and all of them start at position 0 at time 0, then we can write down the position of every particle at time step n, the i-th particle at time step n. And we're going to assume that each particle is independent, each doing their own thing, ignoring each other. So now you can see that you can write down the position of the particle at time step n is just the position of the particle at the previous time step, plus or minus this little delta. Any questions about that? 
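The random-walk model just described is easy to simulate and check: the mean stays at zero while the spread grows, one delta squared of variance per step. A sketch in Python/NumPy (the course uses MATLAB; the particle and step counts here are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 10_000, 1_000
delta = 1.0  # step size

# Each particle independently steps +delta or -delta with probability 1/2.
steps = rng.choice([-delta, delta], size=(n_particles, n_steps))
x = steps.cumsum(axis=1)  # position of every particle at every time step

print(x[:, -1].mean())  # stays near 0: the center of the distribution doesn't move
print(x[:, -1].var())   # grows linearly: close to n_steps * delta**2 = 1000
```

Checking the variance at an intermediate time step (say, step 500) gives roughly half the final value, which is exactly the linear growth derived below.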
So please, if you ever just haven't followed one step that I do, just let me know. I'm happy to explain it again. I often am watching somebody explaining something really simple, and my brain is just in some funny state and I just don't get it. So it's totally fine if you want me to explain something again. You don't have to be embarrassed because it happens to me all the time. So now what we can do is use this expression to compute how that distribution of particles evolves over time, time step by time step. All right, so let's calculate what the average position of the ensemble is. So these brackets mean average. So the bracket with an i means that I'm averaging this quantity over the particles i. And so it's just the sum of positions for every particle, divided by the number of particles. That's the average position. So again, the position of the i-th particle at time step n is just the position of that particle at the previous time step, plus or minus delta. We just plug that into there, into there. And now we calculate the sum. But we have two terms. We have this term and that term. Let's break them up into two separate sums. So this is equal to the sum over the previous positions, plus the sum over how much the change was from one time step to the next. Does that make sense? But what is this sum? We're summing over all the particles, how much they changed from the previous time step to this time step. Well, half of them moved to the right and half of them to the left. So that sum is just zero. So you can see that the average position of the particles at this time step is just equal to the average position of the particles at the previous time step. And what that means is that the center of the distribution hasn't changed. If you start all the particles at zero, they diffuse around. The average position is still zero. Yes? AUDIENCE: [INAUDIBLE] bracket [INAUDIBLE]. MICHALE FEE: Yes. So this here is just this.
So this bracket means I'm averaging this quantity over i. So you can see that's what I'm doing here. I'm summing over i and dividing by the number of particles. AUDIENCE: And what is i? MICHALE FEE: i is the particle number. So if we have 10 particles, i goes from 1 to 10. Thank you. So that's a little boring. But we used a trick here that we're going to use now to actually calculate the interesting thing, which is on average how far the particles get from where they started. So what we're going to do is not calculate the average position of all the particles. We're going to calculate the average absolute distance from where they started. Does that make sense? We're going to ask, on average, how far did they get from where they started, which was zero. So absolute values, nobody likes. They're hard to deal with. But this is essentially the same as calculating the square root of the average square-- the same as calculating the variance. Does that make sense? So what we're going to do is we're going to calculate the variance of that distribution. And the square root of that variance is just the standard deviation, which is just how wide it is, which is just how far on average the particles got from where they started. Does that make sense? So let's push on. We're going to calculate the average square distance. Then we're just going to take the square root of that at the end. So the average of the position squared, we're going to plug this into here. So we're going to square it. So the position of the particle squared is just this quantity squared. Let's expand it out. So we have this term squared, plus twice this term times that term, plus that term squared. And now we take the average. So the average position squared at this time step n is the average position squared at the previous time step plus some other stuff. And let's take a look at what that other stuff is. What is this?
This is plus or minus 2 times delta, which is the size of the step, times x. So what is that average? Half of these are positive and half of these are negative. So the average is zero. And this quantity is the average of delta squared. Well, delta squared is always positive, right? So what does this say? What this says is that the variance at this time step is just the variance at the previous time step plus a constant. So let's analyze that. What this says is that at each time step, the variance grows by some constant. Delta is a distance. Delta squared has the units of variance for a distribution over distance. So if the variance at time step 0 is 0, that means they're all lined up at the origin. One time step later, the variance will be delta squared. The next time step, it will be two delta squared. The next time step, dot, dot, dot. Until at some time step n, it will be n times delta squared. So you see what's happening? The variance of this distribution is growing linearly. We can change from time steps to continuous time. So the step number is just time divided by tau, which is some interval in time like the interval between collisions. And so you can see that the variance is just growing linearly in time, where the variance is just 2 times d times T, where d is what we call the diffusion coefficient. It's just length squared divided by time. Why is that? Because as time grows, the variance grows linearly. So if we want to take time and multiply it by something that gives us variance, it has to be variance per unit time. And variance, for something that's a distribution of position, has to have position squared. Yes? AUDIENCE: But do we like [INAUDIBLE], like that? MICHALE FEE: It's built into the definition of the diffusion constant, OK? Any questions about that? And now here's the answer. So the variance is growing linearly in time.
What that means is that the standard deviation, the average distance from the starting point, is growing as the square root of time. And that's the key thing I want you to remember: the distance that a particle diffuses from its starting point on average grows as the square root of time. So for a small molecule, a typical small molecule, the diffusion constant is 10 to the minus 5 centimeters squared per second. And so now we can just plug in some distances and times and see how long it takes this particle to diffuse some distance. So let's do that. Let's plug in a length of 10 microns. That was our soma, our cell body. It's 10 to the minus 3 centimeters. Time is that squared, length squared. So it's 10 to the minus 6 centimeters squared divided by the diffusion constant. 2 times the diffusion constant, 2 times 10 to the minus 5 centimeters squared per second. You can see centimeters squared cancel. That leaves us with time: 50 milliseconds. Now let's put in one millimeter. That was the length of our dendrite. So that's 10 to the minus 1 centimeter. So we plug that into our equation for time. Time is just L squared-- I forgot to actually write that down. Here's the equation that I'm solving. So what this equation at the bottom here is saying is some distance is equal to the square root of 2dT. And I'm just saying L squared is equal to 2dT. And I'm solving for T, L squared over 2d. That's the equation I'm solving. I'm giving you a length and I'm calculating how long it takes. So if you put in 10 to the minus 1 here, you get 10 to the minus 2 divided by 2 times 10 to the minus 5, which is 500 seconds-- about 10 minutes. And now if you ask how long does it take to go a meter, that's 10 to the 2 centimeters. That's 10 to the 4 divided by 2 times 10 to the minus 5. Somebody over here figured it out right away. About 5 times 10 to the 8 seconds, which is about 10 years. A year is pi times 10 to the 7 seconds, by the way. Plus or minus a few percent. Any questions about that? Cool, right?
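All three of those numbers drop out of the same one-line formula, t = L squared over 2D. A small sketch (Python here rather than the course's MATLAB), using the small-molecule diffusion constant from the slide:

```python
D = 1e-5  # diffusion constant of a typical small molecule, cm^2 / s

def diffusion_time(length_cm):
    """Average time to diffuse a distance L: solve L = sqrt(2*D*t) for t."""
    return length_cm ** 2 / (2 * D)

t_soma = diffusion_time(1e-3)      # 10 microns: ~0.05 s
t_dendrite = diffusion_time(1e-1)  # 1 mm: ~500 s, about 10 minutes
t_axon = diffusion_time(1e2)       # 1 m: ~5e8 s, on the order of 10 years
print(t_soma, t_dendrite, t_axon / 3.15e7)  # last value converted to years
```

Because time goes as length squared, making the distance 100,000 times longer (soma to long axon) makes the diffusion time 10 billion times longer.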
So neurons, cells, and biology have to go to extraordinary lengths to overcome this craziness of diffusion, which explains a lot of the structure you see in cells. So you can see that diffusion causes the movement of ions from places where they're concentrated to places where there aren't so many ions. So let's take a slightly more detailed look at that idea. So what I'm going to tell you about now is called Fick's First Law. And the idea is that diffusion produces a net flow of particles from regions of high concentration to regions of lower concentration. And the flux of particles is proportional to the concentration gradient. Now this is just really obvious, right? If you have a box with more particles on the left side than on the right side, then you're going to have particles diffusing from left to right, and you're going to have particles diffusing from right to left. But because there are more of them over here, there are just going to be more particles going this way than there are that way. Does that make sense? Let's say each particle here might have a 50% chance of diffusing here or staying here or diffusing somewhere else. Particles here also have equal probability of going either way. But just because there are more of them here, there's going to be more particles going that way. You can just calculate the number of particles going this way minus the number of particles going that way. And that gives you the net number of particles going to the right. But what does that look like? You have the number here minus the number some distance away. And what if you were to divide that by the distance? What would that look like? Good. It looks like a derivative. So if you calculate the flux, it's minus the diffusion constant times 1 over delta, the separation between these boxes, times the concentration here minus the concentration there. And that is just a derivative. And that's Fick's First Law.
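The discrete box picture above can be sketched directly: the flux between neighboring boxes is proportional to the concentration difference, and if you let it run, the gradient relaxes away. A Python/NumPy sketch (the course uses MATLAB) in arbitrary units:

```python
import numpy as np

D, dx, dt = 1.0, 1.0, 0.1  # diffusion constant, box spacing, time step (arbitrary units)
c = np.array([100.0, 80.0, 60.0, 40.0, 20.0, 0.0])  # concentration in each box

# Fick's First Law, discrete form: flux between adjacent boxes is J = -D * dC/dx
flux = -D * np.diff(c) / dx
print(flux)  # uniform gradient gives a uniform rightward flux of 20 everywhere

# Let the profile evolve: each interface moves particles down its local gradient
for _ in range(5000):
    J = -D * np.diff(c) / dx
    c[:-1] -= J * dt / dx  # particles leaving each box to the right
    c[1:] += J * dt / dx   # the same particles arriving in the next box
print(c)  # the gradient has relaxed away: every box near the mean, 50
```

Note that the total number of particles is conserved at every step; only the gradient, and therefore the flux, decays away.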
I have a few slides at the end of the lecture that do this derivation more completely. So please take a look at that if you have time. So now this is really an important concept. This Fick's First Law, the fact that concentration gradients produce a flow of ions, of particles, is so fundamental to how neurons work. And we're going to be building on that over the course of the next couple of lectures. So imagine that you have a cell that has a lot of potassium ions inside and very few potassium ions outside. Now you can see that you're going to have potassium ions diffusing from here. Sorry, and I forgot to say, let's say that your cell has a hole in it. So you're going to have potassium ions diffusing from inside to outside through the hole. You also have some potassium ions out here. And some of those might diffuse in. But there are just so many more potassium ions inside than outside concentration-wise that the probability of one going out through the hole is just much higher than the probability of a potassium ion going back into the cell. So here I'm just zooming in on that channel, on that pore through the membrane. Lots of potassium ions here. On average, there's going to be a net flow of potassium out through that hole. And we can plot the concentration gradient through the hole. And you can see it's high here, it decreases, and it's low outside. And so there's a net flow that's proportional to the steepness of the concentration profile. So that's true, you get a net flow, even if each particle is diffusing independently. They don't know anything about each other. And yet that concentration gradient produces a current. Eventually, all concentration gradients go away. Why is that? Because potassium ions will flow from the inside of the cell to the outside of the cell until they're at the same concentration. And then you'll have just as many flowing back inside as you have flowing outside. So eventually that would happen to all of our cells. Why doesn't that happen?
AUDIENCE: [INAUDIBLE] because they're alive. MICHALE FEE: Well, that's exactly the right answer, but there are a few intermediate steps. If you were to not be alive anymore, the potassium ions would just diffuse out. And that would be the end. But what happens is there are other proteins in the membrane that take those potassium ions from here and pump them back inside and maintain the concentration gradient. But that costs energy. Those proteins use ATP. And that ATP comes from eating. But eventually all concentration gradients go away. So that is how we get current flow from concentration gradients. Now the next topic has to do with the diffusion of ions in the presence of voltage differences, in the presence of voltage gradients. The bottom line here that I want you to know, that I want you to understand, is that current flow in neurons obeys Ohm's Law. Now what does that mean? Let's imagine that we have a resistor. Let's say across a membrane or in the intracellular or extracellular space of a neuron. The current flow through that resistive medium is proportional to the voltage difference. So that's Ohm's Law. The current is proportional to the voltage difference across the two terminals, the two sides of the resistor. And the proportionality constant is 1 over the resistance. So here current has units of amperes. The voltage difference has units of volts. And the resistance has units of ohms. Any questions about that? So let's go through-- let's develop this idea a little bit more and understand why it is that a voltage difference produces a current that's proportional to voltage. So let's go back to our little beaker filled with salt solution. There are ions in here dissolved in the water. We have two metal plates. We've put a battery between the two metal plates that holds those two plates at some fixed voltage difference delta v. And we're going to ask what happens. So let's zoom in here. There is one plate that's at one potential.
There's another plate at another potential. There's some voltage difference between those that's delta v. The two plates are separated by a distance L. And that voltage difference produces an electric field that points from the high voltage region to the low voltage region. So an electric field produces a force on a charge-- we have lots of charges in here-- that's proportional to the charge and the electric field. So what is that force going to do? That force is just going to drag that particle through the liquid, through the water. Why is that? If this were a vacuum in here and we put a charge there between the metal plates and we put a battery across, what would that particle do? It would move. But what would this force do to that particle? AUDIENCE: [INTERPOSING VOICES] MICHALE FEE: Exactly. So what would the velocity do? AUDIENCE: Increase. MICHALE FEE: It would just increase linearly. So the particle would start moving. And it would start moving slowly and it'd go-- poof-- crash into the plate. But that's not what happens here. Why is that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Because there's stuff in the way. And so it accelerates, and it gets hit by a water molecule. And it gets pushed off in some direction. And then it accelerates in this direction, gets hit again. But it's constantly being accelerated in one direction before it collides. And so here's what happens. So it's diffusing around. But on each step, it has a little bit of acceleration in this direction, in the direction of the electric field. And so, using the same kind of analysis that we used in calculating the distribution, the change in mean and variance, you can show that the mean of a distribution of positive particles that starts at zero shifts linearly in time in the direction of the electric field. And you can just think about that as the electric field reaches in, grabs that charged particle, and pulls it in this direction against viscous drag.
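You can see both effects, diffusion plus drift, in a biased random walk: add a small constant push per step, as if from an electric field. The mean then moves linearly in time while the spread still grows as the square root of time. A Python/NumPy sketch (the course uses MATLAB; the bias and counts here are made-up parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps = 2_000, 2_000
bias = 0.05  # small constant drift per step, standing in for the field's pull

# Each step is a random +/-1 (the collisions) plus the bias (the field).
x = (rng.choice([-1.0, 1.0], size=(n, steps)) + bias).cumsum(axis=1)

t1, t2 = 500, 2000  # compare the distribution at two times, 4x apart
mean_ratio = x[:, t2 - 1].mean() / x[:, t1 - 1].mean()
std_ratio = x[:, t2 - 1].std() / x[:, t1 - 1].std()
print(mean_ratio)  # close to 4: drift distance grows linearly in time
print(std_ratio)   # close to 2: spread still grows as sqrt(time)
```

Quadrupling the elapsed time quadruples the mean displacement but only doubles the spread, which is the drift-versus-diffusion distinction in one simulation.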
So now a force produces a constant velocity, not a constant acceleration. And that velocity is called the drift velocity. So the force is proportional to the drift velocity. What is that little f there? Anybody know what that is? AUDIENCE: Frictional coefficient. MICHALE FEE: It's the coefficient of friction of that particle. And Einstein cleverly noticed that the coefficient of friction of a particle being dragged through a liquid is related to what? Any guess? The diffusion coefficient of that particle. Is that cool? That just gives me chills. The frictional coefficient is just kT over the diffusion constant. So if you actually just go through that same analysis of calculating the mean of the distribution, what you find is that the mean moves linearly in time. But it's also very intuitive. If you're in a swimming pool, you put your hand in the water, and you push your hand with a constant force. What happens? Well, let me flip it around. You move your hand through the water at a constant velocity. What does the force feel like? The force is constant, right? So flip it the other way around. If the force is constant, then you're going to get a constant velocity. Yes? AUDIENCE: So side question, but you can also look at that like a terminal velocity problem? MICHALE FEE: Exactly. It's exactly the same thing. So the drift velocity is proportional to the force, with proportionality constant 1 over the coefficient of friction, which is now d over kT. And what is this force proportional to? Anybody remember? The force was proportional to the electric field. And so let's calculate the current. So I'm going to argue that the current is proportional to the drift velocity times the area. Now why is that? So if I have an electric field, it makes these particles, all the particles in this area here, drift at a constant velocity in this direction. So there is a certain amount of current that's flowing in this area right here. Does that make sense?
Now if my electrodes are big and I also have electric field up here, then that electric field is causing current to flow up here too. And if there's electric field up here, then there will be current flowing up here too. And so you can see that the amount of current that's flowing between the electrodes is proportional to the drift velocity and the cross-sectional area between the two electrodes. Yes? So that's really important. Now we figured out that the drift velocity is proportional to the electric field. So the current is proportional to the electric field times the area. And the electric field is just the voltage difference divided by the spacing between the electrodes. And so the current is proportional to voltage times area divided by length. So we have a proportionality. Current is proportional to voltage times area divided by length. And now let's plug in what that proportionality constant is. This is now like Ohm's Law, right? We're saying the current is proportional to voltage difference. The proportionality constant here is 1 over something called resistivity. The inverse of resistivity is known as conductivity. But we're going to use resistivity. So this is just Ohm's Law. It says current is proportional to voltage difference. Let's rewrite that a little bit so that it looks more like Ohm's Law. Current is proportional to voltage difference. And that thingy right there should have units of what? 1 over ohms. Right? So that is 1 over resistance. Let's just write down what the resistance is. Resistance is just resistivity times length divided by area. So let's just stop and take a breath and think about why this makes sense. Resistance is how much resistance there is to flow at a given voltage, right? So what happens if we make our area really small? What happens to the resistance? AUDIENCE: [INAUDIBLE] really big. MICHALE FEE: The resistance gets big. The amount of current gets small because there's less area that the electric field is in.
And so the current goes down. That means the resistance is big. If we make our plates really big, the resistance gets smaller. What happens if we pull our plates further apart? What happens to the resistance? AUDIENCE: [INAUDIBLE] further apart. MICHALE FEE: Good. If the plates are further apart, L is bigger, and resistance is bigger. But conceptually, what's going on? Physically, what's going on? The plates are further apart, so what happens? AUDIENCE: [INAUDIBLE] MICHALE FEE: Right. The voltage difference is the same, but the distance is bigger. And so the electric field, which is voltage per distance, is smaller. And that smaller electric field produces a smaller drift velocity. And that's why the resistance goes up. Cool, right? OK. Now, let's talk for a minute about resistivity. So resistivity in the brain is really, really lousy. The wires of the brain are just awful. So if you look at the resistivity for copper, which is the wire that's used in electronics, the resistivity is 1.6 microohms times centimeters. What that means is if I took a block of copper, a centimeter on a side, and I put electrodes on the sides of it, and I measured the resistance, it would be 1.6 microohms. That means I could run an amp through that thing with 1.6 microvolts. Now the resistivity of the brain is 60 ohms centimeters. That means a centimeter block of saline solution, intracellular or extracellular solution, has a resistance of 60 ohms instead of 1.6 microohms. It's more than a million times worse. And what that means is that when you try to send current through brain, the voltage just drops. You need huge voltage drops to produce tiny currents. That's why the brain has invented things-- axons-- because the wires are so bad that you can't send a signal from one part of the brain to another part of the brain through the wire. You have to invent this special gimmick called an action potential to send a signal more than a few microns away.
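The comparison above can be checked directly with R = rho * L / A for a one-centimeter cube, using the two resistivities quoted in the lecture:

```python
# Resistance of a 1 cm cube of material: R = rho * L / A.
rho_copper = 1.6e-6   # ohm-centimeters (copper, as quoted)
rho_saline = 60.0     # ohm-centimeters (extracellular solution, as quoted)
L = 1.0               # cm
A = 1.0               # cm^2

R_copper = rho_copper * L / A   # 1.6 microohms
R_saline = rho_saline * L / A   # 60 ohms

print(R_saline / R_copper)      # tens of millions of times worse
```

The ratio comes out to 3.75e7, which is why "more than a million times worse" is, if anything, an understatement.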
It's pretty cool, right? That's why it's so interesting to understand the basic physics of something, the basic mechanisms by which something works, because most of what you see is a hack to compensate for weird physics, right? Yes? AUDIENCE: Does this [INAUDIBLE]? MICHALE FEE: This high resistivity-- you're asking what causes that high resistivity. It basically has to do with things like the mean-free path of the particle. So in a metal, particles can go further effectively before they collide. So the resistivity is lower. AUDIENCE: Is that slope [INAUDIBLE]? MICHALE FEE: It's a little bit different inside the cell because there's more gunk inside of a cell than there is outside of a cell. And so the resistivity is a little bit worse. It's 2,000 ohms centimeters, or 1,000 or 2,000 inside the cell and more like 60 outside. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes, once you're outside the cell, it's basically the same everywhere. OK? So that's it. So here's what we learned about today. We understood the relation between the timescale of diffusion and length scales. And we learned that the distance that a particle can diffuse grows only as the square root of time. We understood how concentration gradients lead to currents. And we talked about Fick's First Law that says that concentration differences lead to particle flux. The flux is proportional to the gradient or the derivative of the concentration. And we also talked about how the drift of charged particles in an electric field leads to currents, and how the voltage-current relation obeys Ohm's Law. And we also talked about the concept of resistivity and how the resistivity in the brain is really high and makes the wires in the brain really bad. So that's all I have. I will take any questions. Yes, Daniel? AUDIENCE: I just wanted to introduce David. MICHALE FEE: OK. Our other TA is here. Any questions? Great. So we will see you-- when is the first [AUDIO OUT]? Is that-- AUDIENCE: Tomorrow. MICHALE FEE: Tomorrow.
So I will see you Thursday.
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 7: Synapses
MICHALE FEE: OK, so let's go ahead and get started. OK, so in the last lecture, we talked about how the inputs to neurons actually come into a cell mostly on the dendrite, which is this extended arborization of cylinders of cell membrane that give a very large surface area that allows many, many synapses to contact onto a neuron, many more than would be possible if all of those synapses were trying to connect to this neuron on its soma. Today, we are going to follow up on that general picture of how neurons receive inputs. And today, we're going to focus on the question of how synapses work. So we're going to start by looking at a simple model of synapses. And we're going to end by understanding how synapses on different parts of the neuron can actually do quite different things. So here's our list of learning objectives for today. So we're going to learn how to add a synapse to an equivalent circuit model. And we're going to describe a simple model of how that actually generates voltage changes in a neuron and what those synaptic inputs actually do. We're going to describe a mathematical process called convolution. That's going to allow us to extend the idea of how a neuron responds to a single input spike from a presynaptic neuron to how a neuron responds to multiple spikes coming from a presynaptic neuron. So we're going to introduce this idea of convolution, which I'm sure many of you have heard of before. But it's going to play an increasingly important role in the class. And so we're going to introduce it here. We're going to talk about the idea of synaptic saturation, which is the idea that a single synaptic input can generate a small response in a neuron. You would think that as you generate more and more synaptic inputs to a neuron, the response of the postsynaptic neuron might just keep increasing. But, in fact, the response of a neuron to its inputs saturates at some level.
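The convolution idea previewed here -- the response to a spike train is the single-spike response slid along and summed at every spike time -- can be sketched in a few lines. The kernel shape and spike times below are invented for illustration; the hand-rolled function matches what a library routine like numpy.convolve computes.

```python
# Discrete convolution: response to a spike train = single-spike
# response (kernel) convolved with the spike train.
def convolve(signal, kernel):
    """Plain discrete convolution (same result as numpy.convolve)."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

spikes = [0, 1, 0, 0, 1, 1, 0, 0]   # presynaptic spike train (assumed)
kernel = [1.0, 0.6, 0.36, 0.22]     # decaying single-spike response (assumed)

psp = convolve(spikes, kernel)
# Where two spikes arrive close together, their responses overlap and add.
print(psp)
```

Note how the two adjacent spikes produce a peak larger than any single-spike response, which is exactly the summation that convolution captures.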
And that process of saturation actually has very important consequences for how neurons respond to their inputs. And then finally, we're going to end with a fun story about the different functions of somatic and dendritic inhibition. And we're going to tell that story in the context of a crayfish behavior. All right, so let's start with chemical synapses. There are also electrical synapses, which we're not going to talk about. And that's basically where two neurons can actually contact each other. There are actually proteins that form little holes between the neurons. And so they're just directly electrically connected with each other. That's called an electrical synapse, or a gap junction. We're not going to talk about that today. We're going to focus on chemical synapses. So this is the structure of a typical excitatory synapse from a presynaptic neuron. This is the axon of a presynaptic neuron onto the dendrites of a postsynaptic neuron. Postsynaptic dendrites often have these specializations called spines, which are just little mushroom-like protrusions of the cell membrane of the dendrite onto which presynaptic neurons can form a synapse. So this is called the presynaptic component or terminal. That's the postsynaptic component or postsynaptic terminal. On the presynaptic side, there are very small synaptic vesicles, about 30 to 40 nanometers in diameter, that sort of form a cloud or a cluster just on the inside surface of this synaptic junction. The synapses are typically about a half a micron across. And the synaptic cleft is very small. It's about 20 nanometers. So this is not quite to scale. This should be maybe a little bit closer here. All right, notice that that is very small. That synapse is really tiny. It's about the wavelength-- its size is equal to 1 wavelength of green light. So it's a tiny structure. There's a lot going on inside that little thing, though.
And here we're going to walk through the sequence of events that just describes how a presynaptic action potential leads to depolarization in a postsynaptic neuron. So we have an action potential that propagates down. That's a pulse of depolarizing voltage. When it reaches the synaptic terminal, that pulse of depolarizing voltage is about plus 50 millivolts. That activates voltage-gated calcium channels that turn on, just the same way that we've described voltage-activated sodium channels and potassium channels. That allows calcium ions to flow into the presynaptic terminal. That calcium flows in and binds to presynaptic proteins that dock these vesicles onto the membrane facing the synaptic cleft. That causes those vesicles to fuse with the membrane. They open up and release their neurotransmitters. So all of those vesicles are filled with neurotransmitter. The vesicles are coated with actual pumps that take neurotransmitter from inside of the cell and pump it into the vesicles. So then calcium flows in. Vesicle fuses, releases neurotransmitter into the cleft. Neurotransmitter then diffuses in the cleft. You could actually calculate how long it takes to get from one side to the other now, I think. It's not very long. Ligand-gated ion channels are basically like the kinds of ion channels we've already been discussing. But instead of being gated by voltage, they're gated by the binding of a neurotransmitter to a binding site on the outside of the protein. That produces a conformational change that opens the pore to the flow of ions. Now, neurotransmitter binds to these, opens the pore. You have now positive ions that flow into the cell, because the cell is hyperpolarized. So it has a low voltage. Positive ions flow into the cell. That corresponds to an increase in the synaptic conductance. What does that flow of positive ions into the cell do? It depolarizes the cell.
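Taking up the aside above about calculating the crossing time: using the diffusion scaling from the previous lecture, the time to diffuse a distance x is roughly t ~ x^2 / (2D). The diffusion constant below is an assumed order-of-magnitude value for a small transmitter molecule, not a number from the lecture.

```python
# Back-of-the-envelope: time for transmitter to diffuse across the
# ~20 nm synaptic cleft, using t ~ x^2 / (2D).
x = 20e-9           # cleft width, meters (from the lecture)
D = 3e-10           # diffusion constant, m^2/s (~0.3 um^2/ms, assumed)

t = x**2 / (2 * D)  # seconds
print(t)            # well under a microsecond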
You have synaptic current flowing in that then-- I forgot to put it on there-- depolarizes the cell. Any questions about that? OK, let's talk a little bit about some of the interesting numbers, like how many synapses there are, how many cells, how many dendrites in a little piece of neural tissue. It's pretty staggering actually. So the synapses are small. But what's really amazing is that in a cubic millimeter of cortical tissue, there are a billion synapses. And if you think about what that means, there is a synapse on a grid, on a lattice, every 1.1 microns. So they're sort of some fraction of a micron big. And there's one of them every micron. Most of your brain is filled with synapses. There are 4.1 kilometers of axon in that same cubic millimeter and 500 meters of dendrite. A typical cortical cell receives about 10,000 synapses. Each cell has about 4 millimeters of dendrite and 4 centimeters of axon. And there are about 10 to the 5 neurons per cubic millimeter in the mouse cortex. And in your entire brain, there are about 10 to the 11 neurons, which is roughly the same as the number of stars in our galaxy. And we're going to figure out how it works. OK, so let's come back to this. So let's start by adding a synapse to an equivalent circuit model and understanding how that model works. So let's start with an ionotropic receptor. So ionotropic receptors are neurotransmitter receptors that also form an ion channel. There are other kinds of neurotransmitter receptors where a neurotransmitter binds, and that sends a chemical signal that opens up a different kind of ion channel. Those are called metabotropic neurotransmitter receptors. We're going to focus today on ionotropic receptors. So a neurotransmitter binds. That binding opens a gate. And that allows a current to flow. So these guys, Magleby and Stevens, did an experiment to understand how that conductance-- so when that ion channel opens, it turns on a conductance.
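A quick sanity check on the density numbers just quoted: a billion synapses in a cubic millimeter means one synapse per billionth of a cubic millimeter, and the side of that little cube of volume is the average spacing.

```python
# One synapse per (1 mm^3 / 10^9): side length of that volume element.
n_synapses = 1e9
volume_mm3 = 1.0

volume_per_synapse = volume_mm3 / n_synapses      # mm^3 per synapse
spacing_mm = volume_per_synapse ** (1.0 / 3.0)    # cube-root gives the spacing
spacing_um = spacing_mm * 1000

print(spacing_um)   # ~1 micron, matching the "every ~1.1 microns" figure
```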
And you can directly measure that conductance by doing a voltage clamp experiment. So here's the experiment they did. They took a muscle fiber from a frog. They set up a voltage clamp on it, so an electrode to measure the voltage, and another electrode to inject current. You hold the voltage at different levels. And then what they did was they stimulated electrically the motor axon, the axon of the motor neuron that innervates the muscle. So that then activates this neuromuscular junction that opens acetylcholine receptors and produces a current as a function of time after the synapse is activated. Any questions about the setup? We're simply holding the cell at different voltages, activating the synapse, and measuring how much current flows through the synapse. Yes. AUDIENCE: So the current flows through the ion channels in the muscle fibers? MICHALE FEE: The current is flowing through these ion channels here in the synapse. Now, remember, there are all sorts of sodium channels and potassium channels and all those things. But do those do anything? AUDIENCE: Not really. MICHALE FEE: We're just holding the voltage-- that's why the voltage clamp is so important, because if you were to do this experiment without a voltage clamp, you would stimulate and the muscle would contract. And it would rip the electrodes out of the muscle fiber. So when you voltage clamp it, it holds the cell at a constant potential, so that the cell can't spike when the current flows in through the synapse. Yes. AUDIENCE: On the graph, would the shock [INAUDIBLE] MICHALE FEE: Yes, the shock-- AUDIENCE: The difference is like in positive and negative-- MICHALE FEE: Yeah, I'm going to explain this. I'm just setting it up right now. OK, any questions about the setup? Good questions. One step ahead of me. All right, so now, let's look at what actually happens. So what you can see is that the current that goes through these ion channels is different depending on the voltage that you hold the cell at.
So if you hold the cell at a negative potential-- so here you can see, the voltage is like minus 120 here. And what happens is after you shock, you see this large, inward current. Remember, inward current-- negative current-- corresponds to positive ions going into the cell. So after you activate the motor axon, you get a large inward current that lasts a couple milliseconds. That corresponds to current going into the cell that would depolarize the cell and activate it, right? But as you raise the voltage, you can see that the current gets smaller. And at some point, the current actually goes to zero. And it goes to zero when you are holding the membrane potential close to zero. And as you hold the membrane potential more positive, you can see that the current actually goes the other way. So what we can do is we can now plot that peak current as a function of the holding potential. So we're measuring current through an ion channel at different voltages. So what are we going to plot next? Just like we did for the sodium channel, for the potassium channel, we're going to-- what are we going to plot? What kind of plot? AUDIENCE: I-V. MICHALE FEE: An I-V curve. Excellent. Let's do that. You can see it's actually linear. So the current is negative when you hold the cell negative. The current's positive when you hold the cell above zero. And it crosses at zero. What do we call that place where it crosses zero? What does that tell us? When this ion channel is open-- AUDIENCE: Reversal potential. MICHALE FEE: So reversal potential or the equilibrium potential, that's right. That's kind of weird, right? An equilibrium potential that's zero. What kind of channel has an equilibrium potential at zero? Remember, sodium was very negative, like minus 80-- sorry, potassium was very negative. AUDIENCE: [INAUDIBLE] MICHALE FEE: Excellent. It's something that passes both potassium and sodium. It's like a hole.
So this ion channel is basically like opening a hole that passes positive ions in both directions. Potassium goes out, sodium comes in. Yes. AUDIENCE: But that is only like [INAUDIBLE] before the-- MICHALE FEE: Notice that we're plotting that the current as a function of all voltages, and it crosses at zero. There's zero current at zero voltage, which happens when you have just a non-selective pore. OK, what does that look like? An I-V curve that looks like that, what is that? There's a name for that. You use it when you build a circuit. You might have some transistors and some capacitors and some-- how about some resistors? It's just a resistor. That's what the I-V curve of a resistor looks like. And if we were to put it in series with a battery, what would the voltage of the battery be? Zero. Remember, if we put it in series with a battery, it produces an offset. So it's just our same equation-- the current is just a conductance times a voltage. It's just Ohm's law. That's 1 over resistance. And it's V minus e synaptic. And e synaptic is just 0. So that's called the driving potential. That's the conductance. Can anyone take a guess at what the conductance looks like as a function of time? Somebody just hold your hand up, and-- I see two answers that are very close. Lena. AUDIENCE: Would it be like that? MICHALE FEE: Start over here. Like go this way. What would the conductance do as a function of time to make the current look like this? What would it be here? What would the conductance be here? Remember, the voltage is being held at some value. The current is zero. So the conductance must be? AUDIENCE: Zero. MICHALE FEE: Zero. And how about here? It should be some big conductance. And then what about here? AUDIENCE: Zero. MICHALE FEE: Zero. So what is that-- excellent, it just looks like that. The conductance just turns on and then turns off. Anybody want to take a guess at why that might be? Shock. What happens here? Why does this turn on? 
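The linear I-V relation just described is I = g * (V - E_syn), with E_syn = 0 for this non-selective channel. Here is a minimal sketch of that relation; the conductance value is illustrative, not from the experiment.

```python
# Synaptic current through an open channel: Ohm's law with an offset,
# I = g * (V - E_syn). Units chosen so nS * mV = pA.
def synaptic_current(V_mV, g_nS=10.0, E_syn_mV=0.0):
    """Current in picoamps at holding potential V_mV."""
    return g_nS * (V_mV - E_syn_mV)

for V in (-120, -60, 0, 60):
    print(V, synaptic_current(V))
# Inward (negative) current below the reversal potential, zero at it,
# outward (positive) current above it: a straight line through zero.
```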
Because neurotransmitter binds to the receptor, opens the channel. And then what happens? The neurotransmitter falls off. And the neurotransmitter receptor closes. That's it. We're going to do a little bit of mathematical modeling of that. But it's going to be pretty simple. All right? OK. So there is our equivalent circuit. This thing right here, conductance times a driving potential, electrically is just that. You remember that, right? That's the same way we modeled the current of the sodium channel or the potassium channel. AUDIENCE: So the graphical [INAUDIBLE] because it's linear, why would that [INAUDIBLE] MICHALE FEE: The conductance constant? AUDIENCE: Yeah. MICHALE FEE: OK. Because look at the current. So, remember, these different voltages here just correspond to these voltages. Here, V minus E syn. So E syn is what? The synaptic reversal potential is just zero, right? So for each one of these experiments, this driving potential is just constant. It's just given by this holding potential in the voltage clamp experiment. So in order to turn something that looks like this into something that looks like this, you have to multiply it by something that looks like that. Does that make sense? For each one of those experiments, this term is constant. And so to get a current that looks like this, the conductance has to look like that. Does that make sense? AUDIENCE: I guess she's asking about the upper-- MICHALE FEE: Oh, sorry, did I misunderstand the question? Ask it again. AUDIENCE: Sorry. I was talking about that one. MICHALE FEE: Oh. AUDIENCE: But the explanation did make sense. MICHALE FEE: Did it help? But it was the wrong explanation. OK, so ask your question again. AUDIENCE: Yes, so I was trying to relate it to what we were doing earlier with a [? coefficient ?] [INAUDIBLE] MICHALE FEE: Yeah. AUDIENCE: And it had an I-V curve that looked something like that curve-- MICHALE FEE: Oh, it like had some funny shape like this.
AUDIENCE: Yeah, it looked like [INAUDIBLE] at zero. And then it was like-- MICHALE FEE: Yeah, so remember, this is as a function of time. And this is exactly what the sodium conductance looked like as a function of time. It turns on. And then it turns off. For the sodium conductance, this turning on happened with a voltage step. And the turning off happened because of the inactivation gate. In this case, it's different. What's turning this thing on is neurotransmitter binding. And what turns it off is not inactivation. It's the fact that the neurotransmitter falls off. But it has the same time dependence. It's just different mechanisms. And then for this, was there a question still about this? AUDIENCE: No. [INAUDIBLE] using that with the-- MICHALE FEE: Oh, with the tie-- it was, OK-- and you understand why this doesn't look like this, right? AUDIENCE: Yeah. MICHALE FEE: The reason that happened is because it looked like this for most of it, but it went back down to 0 here as a function of voltage. The voltage dependence shuts the [AUDIO OUT] off down here; this one doesn't have that voltage dependence. It's cool how all this stuff ties together, right? It's sort of the same stuff we learned for Hodgkin-Huxley; it just applies to this case here. All right, so there's the circuit. Here's the model that we described before. There's a simple soma with a capacitance and some leak conductance. And now that is how we would model attaching a synapse, any kind of a synapse, onto a soma. And if we wanted the soma to be able to spike, we would add some voltage-dependent potassium and some voltage-dependent sodium. All right, any questions? Yeah. AUDIENCE: I probably should have asked this a long time ago and not [INAUDIBLE] circuit. Do you know whether to have the big line of the battery-- MICHALE FEE: Oh, yeah, so that's not a dumb question at all. The answer is don't worry about it.
Like there's a convention that the big one is the plus side. And I'm not even 100% sure I've been perfectly consistent in all my slides. The long line is supposed to be the positive side of the battery. AUDIENCE: [INAUDIBLE] MICHALE FEE: Just don't worry. Just make one big and one small. And don't-- just make it a battery symbol. And I don't care if it's the right way. OK. OK? You don't want to make it too much like a capacitor, because if they're the same length, then it looks like a capacitor. And if you're worried, just draw an arrow and write battery. OK? All right, good. OK, so now, let's step back from our voltage clamp experiment and attach this synapse to a real neuron, like this thing, the one that can't spike. It's just a leaky soma that's hyperpolarized. And now, what's the voltage in the cell going to do when we activate that synapse? What is the voltage here? We're turning on this conductance. What that means is we're making the resistance get really small. So what is the voltage inside the cell going to do? It's going to approach something. What's it going to approach? We have this circuit. We have a battery and a resistor. Let's make that resistor really, really big. That connects the battery between the outside and inside of our neuron. And now we make the resistor really small all of a sudden. What's going to happen to the voltage in here? AUDIENCE: What is G I? MICHALE FEE: Sorry, just some other conductance. And let's just imagine that it's like a potassium conductance that's kind of holding the cell hyperpolarized. But I don't want you to focus on this right now. What I want you to focus on is what would happen here when I turn on that synapse, when I make the resistor really small, when I make the conductance really big. What's going to happen to the voltage inside the cell? It's going to approach something. It's going to be dragged toward something. It's going to be dragged toward the voltage of that battery.
We're hooking that battery up to the inside of our neuron. Does that make sense? OK, so that's what I'm going to show you now. So if we have an excitatory synapse-- so what I'm going to show you is what happens when we activate a glutamatergic excitatory synapse on a cell. We're going to record the voltage in the cell. And we're going to activate that synapse. And what you see is that this is what you would see, for example, for the muscle fiber. You activate the synapse, and you see that the voltage of the cell-- if the cell is hyperpolarized, the voltage of the cell goes up. If you hold the cell at a higher voltage, the voltage also goes up, but a little bit less. If you hold the cell at zero, you can see you activate that synapse, but the cell is already at the potential of the battery. And so there's no current and no change in the voltage. If you hold the cell at a positive voltage, and you activate the synapse, again, you're connecting the cell to a battery that has 0 volts. And so the voltage goes down. So here's what I want to convey here, that when you activate a synapse, it forces the cell, forces the voltage in the cell, to approach the voltage of the reversal potential of the synapse, the voltage of that battery. Is that clear? Yes. AUDIENCE: So [INAUDIBLE] MICHALE FEE: It increases the conductance. Current flows into this cell. And it flows into the cell in a direction so that the voltage approaches the equilibrium potential. Yes. AUDIENCE: [INAUDIBLE] MICHALE FEE: What's that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Ah, yes, good. In fact, that's a great way to phrase it, because that kind of experiment here where we're measuring the voltage inside the cell is called a current clamp experiment. So there's voltage clamp, where you force the voltage to be constant by varying the current. Here, we're holding the current constant, clamping the current, and measuring the voltage.
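The "battery dragging the voltage" picture can be sketched as a tiny simulation: a leaky soma with a brief synaptic conductance pulse, integrated with forward Euler. All parameter values (conductances, time constants, pulse timing) are made up for illustration; the equation is the leaky-soma-plus-synapse circuit described above.

```python
# Leaky soma with a synaptic conductance pulse:
#   C dV/dt = -g_leak * (V - E_leak) - g_syn(t) * (V - E_syn)
def simulate(V0, E_syn=0.0, E_leak=-70.0, g_leak=1.0, g_syn_on=50.0,
             C=1.0, dt=0.01, t_on=1.0, t_off=3.0, t_end=6.0):
    """Forward-Euler voltage trace; synapse is on between t_on and t_off."""
    V, t, trace = V0, 0.0, []
    while t < t_end:
        g_syn = g_syn_on if t_on <= t < t_off else 0.0
        V += (-g_leak * (V - E_leak) - g_syn * (V - E_syn)) * dt / C
        trace.append(V)
        t += dt
    return trace

trace = simulate(V0=-70.0)
# While the synapse is on, the big synaptic conductance drags V up close
# to E_syn = 0 mV; after it turns off, the leak pulls V back toward -70.
print(max(trace), trace[-1])
```

Starting the same simulation above the reversal potential instead (V0 positive) would show the synapse pulling the voltage down toward E_syn, which is exactly the point of the family of traces in the lecture.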
AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah, let's just go back to the setup here, this experiment here. You don't set it up like this. You set it up like we had on the first day of class, where you just have a current source connected here and a voltmeter attached to this electrode. I didn't use that word. But that was the very first experiment we did. I think on the second day of class, we had a cell with an electrode to inject current, an electrode to measure voltage. That was a current clamp experiment, because we're holding the current at some constant value. OK? Here, we're holding the voltage at some constant value and measuring current. All right, so the idea is really simple: when you have a synapse, the synapse has a reversal potential. When you activate the synapse, the cell is dragged toward the reversal potential. Here, the reversal potential was zero. So when I activate the synapse, the voltage goes toward the reversal potential. Notice-- yes. AUDIENCE: Oh, just a very basic question. So is [INAUDIBLE] the top half is like the synaptic face of the circuit and the bottom half is like-- MICHALE FEE: Are you talking about like here versus here? Yeah, so I've put that pink box around the synapse. AUDIENCE: OK. MICHALE FEE: That's the circuit that corresponds to having a synapse attached to the cell. And this is the circuit that we developed several weeks ago. AUDIENCE: OK. But it's acting like the presynaptic-- like a cell or-- MICHALE FEE: Oh, no, the presynaptic cell is not part of this picture. The presynaptic cell is kind of like spritzing neurotransmitter onto this thing, which increases the conductance, which is the same as reducing the size of a resistor. Does that make sense? Yes. AUDIENCE: So is the reversal potential always going to be zero or like just like in this specific example? MICHALE FEE: Great question. So the reversal potential is different for different kinds of synapses.
You can see that this synapse, this kind of synapse here, if I'm hyperpolarized, it pushes the voltage of the cell up. What kind of synapse do you think that is? An excitatory synapse. But notice something really cool. An excitatory synapse doesn't always push the voltage of the cell up. If the cell is sitting up here, it can push the voltage of the cell down. OK? Yes. AUDIENCE: Why doesn't it just stay up? MICHALE FEE: Why? Great. So great question. You can probably answer that already. AUDIENCE: Well, it's some-- MICHALE FEE: Yeah. Yeah. I'm hearing a bunch of good answers. So what happens is we're stimulating the synapse here, releases neurotransmitter. The conductance goes up. Current flows in, depolarizes the cell, neurotransmitter unbinds. Current stops. And this other thing brings the cell back down to its starting point. Yeah, this thing is kind of holding the cell at some hyperpolarized potential. AUDIENCE: So even though they've lost a current. MICHALE FEE: The current is turning on and then turning off. Remember, the current-- here it is right here-- the current is turning on as the conductance turns on. And then when the conductance goes to zero, the current goes to zero. AUDIENCE: Oh, I thought it was a big current clamp. MICHALE FEE: No. The experiment we're doing is injecting constant current into the cell. And that's how we hold the cell at these different sort of kind of average voltages. OK, maybe I should say-- yes. AUDIENCE: So I just [INAUDIBLE] MICHALE FEE: Makes it do this? AUDIENCE: Well, I don't understand how that changes. I just always want positive ions to flow in. MICHALE FEE: Yeah. So positive ions flowing into the cell raise the voltage in the cell. AUDIENCE: But I don't understand if the inside of the cell is already positive, why adding more positive-- MICHALE FEE: Oh. OK, great question. You're talking about up here. OK, so let me just back up and explain one thing that maybe I didn't explain very well.
In this experiment, we're injecting current through our current electrode to just hold the cell at different voltages while we stimulate the synapse. Does that make sense? Down here, let's say, we're not injecting any current. And then we inject a little bit of current to kind of hold the cell up here. And we activate the synapse and measure changes from there. Does that make sense? AUDIENCE: So when it's a current clamp, it just leaves the input the same? MICHALE FEE: Exactly. You're just turning a knob and saying I'm going to put in 1 nanoamp. And that's just going to kind of hold the cell up here. And then I activate the synapse and ask how does the voltage change. OK? Oh. AUDIENCE: That's not my question. MICHALE FEE: I know. AUDIENCE: My question-- MICHALE FEE: But I realized-- your question is why does this go down? AUDIENCE: Like I thought that's-- MICHALE FEE: Yeah. Exactly. Exactly. Isn't that cool? Excitatory synapses are not always inject-- so what happens up here? You're holding the cell up here. And now you activate the synapse. And the voltage actually goes down. And the answer is that if you're holding this cell up here positive, more positive than here, when you turn on that conductance, the current goes the other way. AUDIENCE: Passed the reversal potential. MICHALE FEE: So you've passed the reversal potential. You're holding the cell up here. And so [AUDIO OUT] turn on the synapse, the current flows the other direction. You have a positive current, which is positive ions flowing out, which lowers the voltage of the cell. So the way to think about this is that when you have a synapse, you turn the synapse on, it doesn't matter where the voltage is, it's always driving it toward the reversal potential. And if you start above the reversal potential, the voltage will go down. If you start below the reversal potential, the voltage will go up. It's like not what you learned in 9.01, right? AUDIENCE: No. MICHALE FEE: This is-- yeah.
But this is how it really works. OK? Excitatory synapses, the reason you think of excitatory synapses as always pushing the voltage up is because most of the time, the cell is sitting [AUDIO OUT] here. But this is going to become much more important and obvious when we're talking about inhibitory synapses. So let's talk about inhibitory synapses. So here's a model with the synaptic reversal at zero. All excitatory synapses have their reversal potential around zero. The neuromuscular junction, the glutamatergic synapse, they're all basically non-specific pores that have a reversal potential of zero. Inhibitory synapses are different. So excitatory-- the reason we call it an excitatory synapse is because that reversal potential is above the threshold for the neuron to spike. And so when you activate the synapse, you're pushing the voltage of the cell always toward a voltage that's above the spiking threshold. That's why it's called an excitatory synapse. And those are called excitatory postsynaptic potentials, or EPSPs. That little bump is an EPSP. All right, now, inhibitory synapses look really different. And now the effect that I'm talking about is important. With an inhibitory synapse, the reversal potential is around minus 75. Remember, most inhibitory synapses are chloride channels. And chloride, do you remember the lecture about the equilibrium potentials? The reversal potential for chloride channels is around minus 75. And that's why the synaptic reversal potential is minus 75. And now, you can see that if you hold the cell-- so here's where a cell normally sits. You activate the synapse. The voltage goes down. All right. And why does it go down? Because it's pulled toward the equilibrium potential for the chloride channel, chloride ion. Now, notice that-- OK, so as the cell is more and more depolarized, you can see that it's more strongly pulled toward [AUDIO OUT] the voltage change is bigger.
If you hold the cell at minus 75, there's no voltage change at all from activating an inhibitory synapse. And if you hyperpolarize the cell even more, you can see that when you activate an inhibitory synapse the voltage of the cell actually goes up. So inhibitory synapses don't always make the potential of the cell go down. In fact, sometimes they can make the cell go up. In fact, what's really cool is in juvenile animals, there's more chloride inside of a cell than there is in an adult. And so the reversal potential of the chloride channels is actually up here. And chloride inhibitory synapses can actually make neurons spike. You can see where this thing sits. It just depends on the concentration of chloride ions. Most of the time inhibitory synapses have a reversal potential that's minus 75. And we call that inhibitory because the reversal potential of the synapse is less than the spiking threshold. AUDIENCE: What [INAUDIBLE] reversal potential? MICHALE FEE: Just the-- OK, you know the answer to that question. You tell me. So what are the two things-- yeah, go ahead. AUDIENCE: The type of ion. MICHALE FEE: Good. The type of ion. And one more thing. There are two things we need to have a battery in a neuron. What are they? Anybody know? AUDIENCE: [INAUDIBLE] MICHALE FEE: And? AUDIENCE: [INAUDIBLE] MICHALE FEE: Ion selective permeability. So the reversal potential depends on what ion that channel is selective for and the concentrations of that ion inside and outside the cell. So for an inhibitory synapse, there are two types. There are chloride channels that have a reversal potential of minus 75. And there are also potassium channels that serve an inhibitory function that can be activated by synapses. And they have a reversal potential more like minus 80. AUDIENCE: [INAUDIBLE] develop [INAUDIBLE] the chloride channel is not changing.
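The point that the chloride reversal potential is set by the inside and outside concentrations can be checked with the Nernst equation from the earlier equilibrium-potential lecture. A Python sketch, with made-up but physiological chloride concentrations (the values and RT/F ≈ 26.7 mV near body temperature are assumptions, not numbers from the lecture):

```python
import math

def nernst(z, c_out, c_in, rt_over_f=26.7):
    """Nernst equilibrium potential in mV for an ion of valence z,
    given outside and inside concentrations (same units each)."""
    return (rt_over_f / z) * math.log(c_out / c_in)

# Chloride has valence z = -1. Illustrative concentrations in mM:
print(round(nernst(-1, 120.0, 7.2)))   # adult-like: about -75 mV
print(round(nernst(-1, 120.0, 30.0)))  # juvenile-like, more internal Cl-: about -37 mV
```

The second line shows the developmental effect mentioned in the lecture: with more chloride inside the cell, the reversal potential moves up, so a "chloride synapse" can become depolarizing.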
So what the ion channel-- MICHALE FEE: It's the ion concen-- AUDIENCE: Change so the-- MICHALE FEE: The concentration that's different. OK? Cool. You see this kind of stuff all the time. Inhibitory postsynaptic potentials are often upward going. Somebody will be super impressed with you if you look at a trace like this and you say, is that an EPSP or an IPSP? Because you don't know it just by looking at it. Most people would assume it's an EPSP. Yeah. AUDIENCE: Was [INAUDIBLE] to cause like a spike? MICHALE FEE: Well, so what do you think? It's inhibitory if the reversal potential [INAUDIBLE] in the threshold. So no matter how strong that inhibitory synapse is, can it ever cause a spike if the reversal potential is less than the threshold? All right, let's go on. Any questions about this? It's so fundamental. Yeah. AUDIENCE: [INAUDIBLE] potential [INAUDIBLE] MICHALE FEE: So it's normally around minus 60 minus [AUDIO OUT] minus 75. OK? OK, let's go on. All right, let me just talk a little bit more about this conductance. So if you do a single channel patch experiment, you can take an electrode. Remember, I showed you what it looks like if you take an electrode and you stick it on a single sodium channel or a single potassium channel. Well, you can do the same thing. You can stick it on a neurotransmitter receptor. You can flow neurotransmitter over the receptor. And what you see is that when you put the neurotransmitter on, just like sodium and potassium channels, it flickers between an open state and a closed state. So it has two states-- open state and closed state. You can do the same kind of modeling of it with a kinetic rate equation with two states. You can write down the probability that the channel is in the open and closed states. That's going to be a function of what? Like neurotransmitter concentration and time, right?
And you can write down the total synaptic conductance as the conductance of a single open neurotransmitter receptor times the number of neurotransmitter receptors times the probability that any one of them is open. And so now, let's think about this probability as a function of time. So we can model the neurotransmitter receptors and write down the probability that it's open. The probability that it's closed, then, has to be 1 minus P. Alpha and beta are rate constants. They have units of 1 over time. And what controls the rate at which the channels open? Good. So alpha will depend on the concentration of neurotransmitter. That controls the rate at which closed channels open. And how about open to closed? It's not the concentration of the neurotransmitter. It'll be something else. AUDIENCE: [INAUDIBLE] MICHALE FEE: Exactly. So there will be basically some rate constant for the neurotransmitter unbinding. OK? All right, so let's do that. So here's that model. This is a simplified version of the Magleby-Stevens model. And it looks like this. There's a closed and an open state. The open state corresponds to the closed neurotransmitter receptor R binding to an unbound neurotransmitter molecule, forming a bound receptor complex. There's usually another step-- the way this is usually modeled in the Magleby-Stevens model is that the bound receptor complex is closed and then it opens in another transition. But we're just going to keep it simple and do it this way. So we have a closed state: an unbound neurotransmitter receptor binds to an unbound neurotransmitter molecule and forms our bound receptor complex that is then open. So that's P. And that's 1 minus P. And we can just write down the rate equation the same way that we did to analyze the time dependence of the sodium channel or the potassium channel. Isn't that amazing?
All that stuff we learned for Hodgkin-Huxley, you can use all the same machinery here. OK? OK, so that's a simple model. And you can take a simplification of it. We're going to assume that the binding is very fast, that the alpha is very fast. So that when you put a pulse of neurotransmitter concentration onto the synapse, the probability of being open-- we're going to assume that this is super fast, so that the rate at which you go from unbound to bound is very high. And now, the neurotransmitter goes away. And you can see zero concentration here. Let's say you bind, the probability of being open gets big. And then the neurotransmitter goes away. You can see that this first term goes to 0, because the neurotransmitter concentration is zero. You can see that dP/dt is just minus beta P. And that's just an exponential decay. So the model is: binding neurotransmitter opens the neurotransmitter receptor. Neurotransmitter goes away. And then there's an exponential decay in the probability that you're in the open state. Yes. AUDIENCE: So like in a real synapse, the neurotransmitter wouldn't like go away, right? MICHALE FEE: What's that? Oh, yeah, the neurotransmitter goes away. I forgot to say that. It's a super important point. There's one more step that I forgot to include. Habib is nodding at me like, yeah, you forgot that. What happens is the neurotransmitter goes into the cleft. And it gets taken up by neurotransmitter-- what are they called again, Habib? AUDIENCE: Reuptake. MICHALE FEE: Reuptake. Thank you. The neurotransmitter gets bound by receptors on glia and the presynaptic terminal and gets pumped out of the synaptic cleft. So that's what makes this go away, in addition to diffusion. OK. AUDIENCE: [INAUDIBLE] like the time dependence of the concentration-- MICHALE FEE: So this is-- in the full model you do. And, in fact, what this really looks like is this kind of turns on more slowly and then goes away with an exponential.
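The two-state kinetic model just described, dP/dt = alpha·[NT]·(1 − P) − beta·P, can be integrated with the same Euler method used for the Hodgkin-Huxley gating variables. This is a sketch; alpha, beta, the time step, and the pulse duration are made-up illustrative values, not measured rate constants:

```python
def simulate(alpha=50.0, beta=2.0, dt=0.001, t_pulse=0.005, t_end=2.0):
    """Euler integration of dP/dt = alpha*[NT]*(1-P) - beta*P with a
    brief square pulse of neurotransmitter at the start."""
    p, trace = 0.0, []
    for i in range(int(t_end / dt)):
        nt = 1.0 if i * dt < t_pulse else 0.0   # transmitter pulse, then gone
        p += dt * (alpha * nt * (1.0 - p) - beta * p)
        trace.append(p)
    return trace

trace = simulate()
peak = max(trace)
# After the pulse, [NT] = 0, so dP/dt = -beta*P: pure exponential decay
# with time constant tau = 1/beta = 0.5 s. One tau after the pulse, the
# open probability should have fallen to about exp(-1) ~ 0.37 of the peak.
p_at_tau = trace[int((0.005 + 0.5) / 0.001)]
print(peak, p_at_tau / peak)
```

The printed ratio lands near 0.37, confirming that once the transmitter is cleared the open probability relaxes exponentially at the unbinding rate beta.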
But I'm just kind of walking you through this super simplified model. That's my mental model for how a synapse works. OK? People who actually work on synapses would probably laugh at me, but that's kind of how I think of it. OK? Lena. AUDIENCE: Is that N the binding site? MICHALE FEE: Oh, yeah, the N, the N is really cool. What do you think it means? AUDIENCE: Binding sites? MICHALE FEE: Yes, it's the number of binding sites. So you can see that if the receptor requires two neurotransmitter molecules to bind before it opens, then you have a concentration squared. And that has really important consequences on the way it works, because what it does is it makes the receptor very sensitive to high concentrations of neurotransmitter. So they don't respond to the little leakage of some residual neurotransmitter left around that the reuptake systems haven't dealt with yet. But when a vesicle releases, then you have neurotransmitter at a very high concentration. And this thing makes it only sensitive to the peak of the neurotransmitter concentration. OK. Yes. AUDIENCE: With the probably [INAUDIBLE] number of [INAUDIBLE] MICHALE FEE: Here. In this phase, does this N matter? Is that your question? AUDIENCE: Yeah. MICHALE FEE: Yeah, well, so you know the answer to that question. Here, you're asking if this N matters? What's the concentration here? AUDIENCE: Oh, it's 0. MICHALE FEE: So what's 0 to the N? AUDIENCE: [INAUDIBLE] MICHALE FEE: 0. OK, next, so we just did all of that. I better speed up. Let's talk about convolution. So this is what happens when a single action potential comes down the axon. The presynaptic terminal releases neurotransmitter. Boom, you get a pulse of probability of the receptor being open. That, you recall, is just proportional-- yikes, where'd it go. That probability-- you just multiply it by some constants to get the conductance. So that is what the conductance looks like. OK? All right, now, so there's our input spike.
Our input neuron, our presynaptic neuron generates a spike. And that produces a response, which is a conductance that looks like a pulse and an exponential decay. OK? Now, what do you think would happen if [AUDIO OUT] a bunch of spikes like this? Good. Just does it each time. OK? And we're ignoring fancy effects like paired pulse depression or facilitation or things like that. OK? There are all kinds of interesting things that synapses can do where there's one pulse and then there's some residual calcium left in the presynaptic terminal. And that makes the next pulse produce an even bigger response. Or sometimes one pulse will use up all the vesicles that are bound. And so the next pulse will come in and there won't be enough vesicles and this will be smaller. We're just going to ignore all of those complications right now. And we're just going to think about how to model a postsynaptic response given a presynaptic input. And we're going to use what's called a linear model. We're going to use convolution. And we're going to call this single event right here, we're going to call that the impulse response. The input is an impulse. And the response is the impulse response. And this is also called a linear kernel. The response to multiple inputs can be modeled as a convolution. It looks a little messy. But I'm going to explain how to just visualize it very simply. All right? So here's how you think about it. Notice that the response of the system is just this operation where we're multiplying the kernel times the stimulus and integrating over time. OK? So what we're going to do is-- the way to think about this is to take the kernel K, flip it [AUDIO OUT] wards and plot it there. OK? Now notice that what this thing does is it says I'm going to integrate the product of the kernel and the stimulus-- notice I'm integrating over tau. But this [AUDIO OUT] tau and that has a minus tau. 
So what I'm doing is I'm multiplying at a particular-- and notice that it's shifted by time t. OK? So what I'm going to do is I'm going to put the kernel at time t. And I'm going to integrate the product of the kernel and the stimulus. So what do I get if I multiply this times this? AUDIENCE: 0. MICHALE FEE: 0. And so at that time, I write down a zero. OK? Now, I'm going to change t. And I'm going to shift this over and multiply them together. What do I get? Good. Another 0. Now let's shift it again. I'm just increasing t here. I'm not integrating over t. I'm integrating over tau. So I just shift this a little bit. Good. And now when I multiply them together, what do I get? AUDIENCE: [INAUDIBLE] MICHALE FEE: Like that [AUDIO OUT] good. I just get the area under that curve, and integrate. And I get a point here. Now, let's shift it again. Good. Now, multiply them together. What do I get? Slightly smaller integral. And I plot that there. OK? Shift it again. Integrate. Plot another point. Shift it again. Integrate. And if I keep shifting, the product is 0 and the integral is 0. OK, so you can see that I can take this kernel, convolve it with a stimulus that's an impulse. And I get the kernel. I get the impulse response. Does that make sense? So you can see what I'm doing here is I'm taking the stimulus at time t. And as I integrate over tau, I'm going backwards here, and I'm going forwards here. So starting from here, I'm integrating like this. I'm multiplying these like this. And then integrating. OK? And that's why you flip it over. OK? So it's very easy to picture what it does. So when somebody shows you a linear kernel and asks you to convolve it with a stimulus, the first thing you do is you just mentally flip it backwards and slide it over this [AUDIO OUT] and integrate the product at each different position. OK? Now, you can see that when you do that, when you convolve that kernel with a pulse, you just recover the impulse response. OK?
So now, let's convolve linear [AUDIO OUT] I'm plotting it flipped backwards now-- with a single pulse. So Daniel made these little demos for us. And the resulting conductance is here. OK? Now, that was really obvious and easy, right? We didn't need to have a Matlab program simulate that for us, right? But what about this? Here, we have a spike train. There's the postsynaptic response from one spike. Now, to get the postsynaptic response from a bunch of spikes, we can just convolve the kernel with this spike train. And let's see what happens. Boom. Response from the first spike. Boom. Boom. Boom, boom, boom. OK? And that is actually really, really close to what it looks like when a train of pulses comes into a neuron. OK? What happens is the first spike produces a conductance response. When you have two spikes close together, the conductance from the first spike has not decayed yet when the second spike hits. And so that comes in and adds to it. And it adds linearly. So if this is halfway down, you add a full impulse response on top of it. And now the previous one and the current one are decaying back to zero. And you can add as many of those on top of each other as you want. And it turns out this is super easy to do in Matlab. You're going to learn how to do this. This idea is so fundamental. This idea of convolution, we're going to use to describe receptive fields. We're going to use it to describe filtering when we start getting to processing data. It's incredibly useful and very powerful. OK? Yes. AUDIENCE: [INAUDIBLE] MICHALE FEE: This linear kernel reflects that response, the conductance response, of the postsynaptic neuron to a single spike, single presynaptic spike. AUDIENCE: [INAUDIBLE] MICHALE FEE: Because if you have a bunch of single spikes like this, the answer is really obvious, right? It's just one of those, one of those little exponentials for every spike that comes in. But it's less obvious when you have a complex spike train like this. 
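The in-class demo was done in Matlab; the same exercise can be sketched in plain Python. The idea is to convolve an exponential kernel with a spike train and check the two facts from the lecture: overlapping responses add linearly, and an isolated spike just recovers the impulse response. The time step, decay constant, and spike times below are made up:

```python
import math

dt, tau = 0.001, 0.010                                   # 1 ms steps, 10 ms decay
kernel = [math.exp(-i * dt / tau) for i in range(100)]   # 100 ms impulse response

spikes = [0.0] * 300                                     # 300 ms of presynaptic input
for i in (50, 55, 60, 200):
    spikes[i] = 1.0                                      # three close spikes, one isolated

# Convolution: each output sample is the kernel-weighted sum of past spikes.
g = [sum(kernel[j] * spikes[n - j] for j in range(min(n + 1, len(kernel))))
     for n in range(len(spikes))]

print(max(g) > max(kernel))      # overlapping responses add, exceeding one kernel
print(abs(g[200] - 1.0) < 1e-9)  # the isolated spike reproduces the kernel peak
```

The peak conductance lands at the third of the closely spaced spikes, because the undecayed tails of the first two are still riding underneath it, exactly the summation effect described above.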
AUDIENCE: [INAUDIBLE] on top of that. So it's just [INAUDIBLE] MICHALE FEE: Yeah, the linear convolution tells you how the postsynaptic neuron is going to respond to a train of action potentials that are overlapped, where the response to the first spike has not gone away by the time the second spike arrives. AUDIENCE: So because the kernel can have like different shapes. MICHALE FEE: Yeah, the kernel can have different shapes. I've just chosen a particularly simple one, because actually an exponentially decaying kernel turns out to be really important. It's actually a low pass filter. AUDIENCE: So be it's a linear [INAUDIBLE] MICHALE FEE: Yes. If you use the convolution, the thing you're using as the kernel is always a linear kernel; because the convolution is a linear function, you call the thing that you're convolving a linear kernel. It's just terminology. AUDIENCE: OK. MICHALE FEE: Just focus on what it does and how it works, rather than the names. Just call it the impulse response if that's easier. OK? All right, let's push on. I want to introduce the idea of synaptic saturation. And I'm getting really worried about the crayfish. OK, synaptic saturation. OK, so remember, we introduced the idea of a two-compartment model. So last time we talked about a model in which you have a soma and a dendrite. And you simplify the dendrite just by writing it down as another little piece of membrane or another little cellular compartment that's connected to the soma through a resistor. And now, you can write a model for the dendritic compartment that looks just like a capacitor and a conductance with a reversal potential. And you have the same kind of model-- capacitor, conductance, reversal potential, battery-- for the soma. And, of course, those two compartments are connected to each [AUDIO OUT] resistor that represents the axial resistance of the piece of dendrite that's really connecting. OK? So that's called a two-compartment model.
And what we're going to do is just think briefly about how to think about what this looks like when you add a synapse to the dendrite. OK? And what we're going to study is how the voltage in the dendrite changes as a function of the amount of excitatory conductance that you add. So we're going to start by-- we're doing steady state. So we don't need to worry about our capacitors. So we can actually just unsolder them and take them out of our circuit. And we're going to study the voltage response in the dendrite. So we're going to also throw away our soma and just ask, how does the dendrite respond to this synaptic input as a function of the amount of excitatory conductance? And what I'm going to show you is just that the voltage change in the dendrite with zero conductance, of course-- it's sitting there at the potassium reversal potential, or E leak. And as you add conductance, it corresponds to making that conductance bigger, making that resistor smaller. You're basically attaching the battery to the inside of the cell. And you can see that what happens is as you add more and more conductance, as you put more and more neurotransmitter onto that receptor, or have more and more neurotransmitter receptors, the voltage response goes up and then saturates. And it's really obvious why that happens, right? Once this resistor gets small enough, meaning you've added enough conductance, the inside of the cell is connected to the battery. And the voltage inside the cell just can't go any higher. It is forced to E synapse, the reversal potential of the synapse. And it can't go any higher. And that's why no matter how much excitatory conductance you add, the voltage inside the dendrite cannot go above E synapse. And that's called synaptic saturation. And I was going to show you the derivation of this. It's just very simple. You just write down Kirchhoff's current law, substitute the equations for synaptic current and leak current, and solve for voltage as a function of G synapse.
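The steady-state solution behind this saturation curve follows directly from Kirchhoff's current law: V = (gL·EL + gs·Es)/(gL + gs), a conductance-weighted average of the two batteries. A small Python sketch, with arbitrary conductance units and the leak reversal assumed at −75 mV:

```python
def v_steady(g_syn, g_leak=1.0, e_leak=-75.0, e_syn=0.0):
    """Steady-state membrane voltage: the conductance-weighted average
    of the leak battery and the synaptic battery."""
    return (g_leak * e_leak + g_syn * e_syn) / (g_leak + g_syn)

for gs in (0.0, 0.5, 1.0, 10.0, 100.0):
    print(gs, round(v_steady(gs), 1))
# small g_syn: the depolarization grows roughly linearly with g_syn;
# large g_syn: the voltage saturates toward E_syn and can never exceed it.
```

The two limits in the lecture fall out immediately: for g_syn much smaller than g_leak the response is approximately linear in g_syn, and for large g_syn the voltage approaches E_syn, no matter how much conductance you add.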
And you can write down an approximation for the case of G synapse much smaller than G leak. And what you find is it's linear. And so there's a linear part [AUDIO OUT] voltage change at small conductances. And you can write down an approximation at high synaptic conductance. And you show that it approaches-- the voltage approaches E synapse. OK, so I'm not going to go through the math. But it's there. You don't have to be able to derive it yourself. But what I want you to understand is that for small synaptic conductance, the voltage responds linearly. But for high synaptic conductance, it saturates. OK? All right, and now, I want to tell you a story about inhibition. And the basic story is that we can add inhibition to-- so in real neurons in the brain, inhibition sometimes connects to dendrites. And sometimes inhibitory synapses connect to somata. And they're actually different kinds of inhibitory neurons that preferentially connect onto dendrites and others that preferentially connect onto the somata. And it turns out there's a really interesting story about how that inhibition has a different effect whether it's connected to the dendrite or connected to the soma, right? So you see what I've done here. I've got a dendrite that has an excitatory synapse. That's here. And it has an inhibitory synapse. Or-- so we can analyze this case-- or we consider the case where the excitatory synapse is still on the dendrite, but the inhibitory synapse comes onto the soma. And it turns out those two things do something very interesting. And this was first shown in the crayfish. The crayfish is a really cool model system, because it has very stereotyped behaviors. And one of its interesting stereotyped behaviors is its escape reflexes. It has three different kinds of escape reflexes that involve different what are called command neurons that get sensory input and drive motor output. And one of these particular neurons is where this effect about inhibition was first shown. 
It's called the LG neuron. There are LG neurons and MG neurons that drive two different kinds of escape reflexes. So here are the two different kinds of escape reflexes. The medial giant neuron drives the MG escape, which is if you touch the crayfish on its nose, it flicks its tail and goes backwards. The LG escape is when you touch the crayfish on its tail, and it flicks its tail in a way that makes it go forward. OK? So let's look at what those behaviors look like. We're going to post this video. This is from the Journal of Visualized Experiments. And there's a nice-- it's actually really nice-- [VIDEO PLAYBACK] - The recordings of neural and muscular field potentials, electronic recordings from a pair of bath electrodes are synchronized with high speed video recordings and display-- MICHALE FEE: So this video shows you actually how to set up a tank with electrodes so you can record the signals from these neurons in a crayfish while it's behaving. So you should watch the video. [END PLAYBACK] MICHALE FEE: But I'm going to show you-- I'm going to show you what-- [VIDEO PLAYBACK] - Here is a look at a series of single, high speed video frames and corresponding electric field recordings for an escape tail flip in response to-- MICHALE FEE: Can you hear that? - A stimulus delivered to the head or tail of a juvenile crayfish. [END PLAYBACK] MICHALE FEE: So that was the MG response. And he puts two electrodes into the tank. And you can actually measure signals in the tank from that neuron firing. [VIDEO PLAYBACK] - The giant neuron and the phasic deflection that follows enables non-ambiguous identification of the tail flip as mediated by giant neuron activity. The backward movement shown in the video traces determines the identity of the activated neural circuit. Here is a tail flip mediated by the-- MICHALE FEE: So here, you touch him on the tail-- - Tactile stimulus was applied to the tail.
Upward and forward motion-- MICHALE FEE: You can see it's a different movement that makes him-- - Synchronized electronic trace, displaying the giant spike and the large phasic initial deflection determines the identity of the activated neural circuit. This video demonstrates the-- MICHALE FEE: Here's a different escape reflex that doesn't involve either of those two neurons. Here he spends a little more time thinking before he figures out what to do. - A giant spike and consists of much-- [END PLAYBACK] MICHALE FEE: OK. All right, so let me just-- I probably won't get very far in explaining the inhibition part, but you can at least understand a little bit more about this behavior. So what's really cool is that inhibition is used to regulate these two behaviors. So first let me just say that that LG neuron, that lateral giant neuron, is known as a command neuron. And that is because if you activate that neuron, if you just depolarize that one neuron, it activates that entire escape reflex. And if you hyperpolarize that neuron, inhibit that neuron, you completely suppress the escape reflex. And now, what's really interesting is that that neuron has inhibitory inputs that control the probability that the animal will elicit this escape reflex. And that kind of modulation of that behavior is really interesting. And it has some interesting subtleties to it. So first of all, if the animal is touched on the nose and elicits an MG response, the tail flips and the animal goes backwards, right? But now, when it's going backwards, if it bumps into something on his backside, you don't want to have that immediately trigger an LG escape every time he bumps into something. He's like boom, boom, boom, right? That would be terrible. So what happens is that when the MG neuron fires and initiates that backwards movement, it sends a signal that inhibits the LG response. OK?
OK, other cool things, when the animal is restrained, when you pick the animal up, if he doesn't get away from you with his first escape attempt and you hold him, you can touch him on the nose and it won't elicit an escape reflex. So when the animal is held, the response probability goes way down. Maybe he's like holding off until he feels like your grip loosen a little bit and then he'll try. OK? So there's no point in wasting escape attempts. They're very energetically expensive, wasting escape attempts when you're being restrained. Another interesting modulation is that the LG escape response is suppressed while the animal is eating. This is threshold. This is how much you have to poke in the nose or in the tail for him to escape when he's just wandering around. But then while he's eating, he's getting food. And so the threshold for eliciting that escape reflex goes up. He's like, sorry, I'm eating. Leave me alone. OK? It's not just being hungry, right? Because if he's hungry and searching for food, there's no increased threshold. So it's really because he's eating. He's found a food source. He doesn't want to leave it. So there's a higher-- he won't leave until there's like more danger. So all this different kinds of modulation and inhibition of the behavior is controlled by inhibitory [AUDIO OUT] projecting onto this LG neuron. So there are two kinds of escape modulation. One is absolute. When the animal is engaged in an escape reflex, it is impossible to activate-- when the animal is engaged in an MG escape, it's impossible to activate the LG escape. No matter how much he gets poked in the tail, he will not initiate an LG. So this kind of modulation is absolute. There's this other kind of modulation, where the likelihood of escape is just reduced, but the animal is still able to initiate escape if the danger is high enough, if the stimulus is high enough. 
And that's really the crux of the difference: some kinds of suppression of a behavior are absolute. No matter how strong the stimulus is, you don't want to allow that neuron to spike. In the other case, it's I'm just going to turn down the probability that I generate [AUDIO OUT] because I'm eating. But if it looks dangerous enough, I'm still going to go. I'm just going to modulate, gently, the probability. And it turns out that's the crux of the difference. So it turns out the LG neuron has two sites on it where you get sources of inhibition. There are a bunch of inhibitory inputs on the proximal dendrite near the spike initiation zone, which is right here. And that's recurrent inhibition, because it's coming from the other escape neuron. So it's recurrent within the motor system. And the other inhibitory synapses come out here on the dendrite. And they're [AUDIO OUT] higher brain areas. And it's called tonic inhibition for historical reasons. Now, the previous hypothesis was that those inputs here allowed inhibition to control different branches of the input. But it turns out the answer is very simple. So let me just summarize this. So you have one [AUDIO OUT] input, inhibitory input, that's coming from the other escape neuron that lands right there on the part of this neuron that initiates spikes. The other input that suppresses the response during feeding is coming far out on the dendrite at the same location where those excitatory inputs, the sensory inputs, are coming in. So that's what the circuit looks like. So the input that absolutely suppresses the response of this neuron is coming right on the soma, right where the spike is generated. And the ones that tonically [AUDIO OUT] sort of adjust the probability of spiking are coming out here where the excitatory inputs are. And so they developed a simple equivalent circuit model where they have a dendrite out here. Out here on the dendrite, you have an excitatory input.
And you can have recurrent input on the soma. And there's another model. So in one of these models, they're modeling the proximal inhibition. In the other model, they put the inhibitory synapse out here on the dendrite. And they ask, how are those two different sources of inhibition different? What do they do differently to the neuron? And they just did this model. They analyzed it mathematically, exactly like the way I just showed you. That very simple calculation works. You just use Kirchhoff's current law. And you write down the voltage response to the excitatory synapse as a function of the strength of these two different kinds of inhibition. And here's what you find. The proximal inhibition suppresses the response to the excitatory input by the same fraction no matter how strong the excitation is. So if you put in strong inhibition, you can always cause that excitatory input to be suppressed to zero. So inhibitory input proximally, at the soma, can suppress the response of a neuron to an excitatory input, no matter how strong the excitatory input is. On the other hand, with inhibition out on the dendrite, for any given amount of inhibition, there's always an excitation that's strong enough to allow the response to get through. So this shows the amount of suppression as a function of excitatory input. And you can see that no matter how strong the inhibition, there's always an amount of excitation that will overcome the inhibition. And you can also do that analysis in a much more complicated model. And you get exactly the same results. But what's really cool is that you can understand this just from this very simple two-compartment model. And I wish I had a little bit more time to go through those models and explain why that works. But basically, proximal inhibition is absolute. You can always make the inhibition win. Distal inhibition kind of gently varies the effect of the excitatory input.
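The proximal-versus-distal contrast can be reproduced with the two-compartment steady state described above. This is a sketch, not the published analysis: the conductance values are made up, the inhibitory reversal is taken to sit at the resting potential, and `soma_depol` is a name invented here. Solving Kirchhoff's current law in both compartments gives the somatic depolarization:

```python
def soma_depol(g_e, g_i=0.0, site="proximal",
               g_ls=1.0, g_ld=1.0, g_c=1.0, delta_e=75.0):
    """Steady-state somatic depolarization (from rest) of a two-compartment
    cell: excitation g_e on the dendrite; inhibition g_i (reversal at rest)
    placed either on the soma ('proximal') or on the dendrite ('distal')."""
    gi_s = g_i if site == "proximal" else 0.0
    gi_d = g_i if site == "distal" else 0.0
    k = g_c / (g_ls + g_c + gi_s)        # soma follows the dendrite by factor k
    dvd = g_e * delta_e / (g_ld + gi_d + g_c * (1.0 - k) + g_e)
    return k * dvd                       # somatic deviation from rest

for g_e in (1.0, 100.0):
    no_inh = soma_depol(g_e)
    prox = soma_depol(g_e, g_i=5.0, site="proximal") / no_inh
    dist = soma_depol(g_e, g_i=5.0, site="distal") / no_inh
    print(g_e, round(prox, 2), round(dist, 2))
```

Running this shows the lecture's result: the proximal suppression ratio stays nearly constant as excitation grows from weak to very strong, while the distal suppression ratio climbs toward 1, meaning strong enough excitation effectively overcomes dendritic inhibition but never overcomes somatic inhibition.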
All right, so that's what we covered-- synapses, a model, convolution, synaptic saturation, and the different functions of distal and proximal inhibition.
MIT 9.40 Introduction to Neural Computation, Spring 2018. Lecture 9: Receptive Fields.
MICHALE FEE: So today, we're going to introduce a new topic, which is related to the idea of tuning curves, and that is the notion of receptive fields. So most of you have probably been, at least those of you who've taken 9.01 or 9.00 maybe, have been exposed to the idea of what a receptive field is. The idea is basically that in sensory systems neurons receive input from the sensory periphery, and neurons generally have some kind of sensory stimulus that causes them to spike. And so one of the classic examples of how to find receptive fields comes from the work of Hubel and Wiesel. So I'll show you some movies made from early experiments of Hubel and Wiesel where they are recording in the visual cortex of the cat. So they place a fine metal electrode into primary visual cortex, and they present visual stimuli. So then they anesthetize the cat so the cat can't move. They open the eye, and the cat's now looking at a screen that looks like this, where they play a visual stimulus. And they actually did this with essentially a slide projector that they could put a card in front of that had a little hole in it, for example, that allowed a spot of light to project onto the screen. And then they can move that spot of light around while they record from neurons in visual cortex and present different visual stimuli to the retina. So here's what one of those movies looks like. So you're hearing the action potentials of a neuron in visual cortex. So you can see the neuron generates lots of spikes when you turn a spot of light on a particular part of the visual field. So they will basically play around with spots of light or bars of light and see where the neuron spikes, and then they would draw on the screen-- I think they're going to draw in a moment here-- what they would call the receptive field of the neuron. So you can see that this neuron responds with high firing rate when you turn on a stimulus in that small region there.
Notice that the cell also responds when you turn off a stimulus that is right in the area surrounding that small receptive field. So the neuron has parts of its receptive field that respond with increased firing when you apply light, and they also have parts of the receptive field that respond with higher firing rate when you remove light. That was actually a cell, I should have said, that's in the thalamus that projects to visual cortex. So that was a thalamic neuron. Here's what a neuron in cortex might look like. So they started recording in the thalamus. They saw that those neurons responded to spots of light in small parts of the visual field. They were actually recording from neurons in the visual cortex. They got kind of-- they couldn't really figure out what the neurons were doing, and they pulled the slide out of the projector, which made an edge of light moving across the visual field. And the neuron they were recording from at that moment responded robustly when they pulled the slide out. And they realized, oh, maybe it's an edge that the neuron is responding to. And so then they started doing experiments with bars of light. Here's an example. So you can see the neuron responds when you turn a light on in this area here. But it responds when you turn light off in this area here. And so you can see they're marking a symbol for positive responses, positive responses to light on here, and negative responses or increased firing when you turn the light off. So there's different parts of the receptive field that have positive and negative components. But you can see that the general picture here is that the process of finding receptive fields at this early stage was kind of random. You just tried different things and hoped to make the neurons spike. And we're going to come back to this idea of finding receptive fields by trying random things, but in a more systematic way, at the end of the lecture today. So here's what we're going to be talking about.
So you can see that Hubel and Wiesel were able to describe that receptive field by finding positive and negative parts and writing symbols down on a screen. We're going to take a more mathematical approach and think about what that means in a quantitative model of how neurons respond to stimuli. And the basic model that we'll be talking about is called an LN model, linear/nonlinear model. And we're going to describe neural responses as a linear filter that acts on the sensory stimulus followed by a nonlinear function that just says neurons can only fire at positive rates. So we're going to have our neurons spike when that filter output is positive, but not when the filter output is negative. And we're going to describe spatial receptive fields as a correlation of the receptive field with the stimulus. And we're also going to talk about the idea of temporal receptive fields, which will be a convolution of a temporal receptive field with the stimulus. So the firing rate will be a convolution of a receptive field with the temporal structure of the stimulus. We're going to then turn to the-- combine these things into the concept of a spatial temporal receptive field that simultaneously describes the spatial sensitivity and the temporal sensitivity of a neuron, as an STRF, as it's called. And we'll talk about the concept of separability. And finally, we're going to talk about the idea of using random noise to try to drive neurons, to drive activity in neurons, and using what's called a spike-triggered average to extract the stimulus that makes a neuron spike. And we're going to use that to compute-- we're going to see how to use that to compute a spatial temporal receptive field in the visual system or a spectral temporal receptive field in the auditory system. So let's start with this. What are spatial and temporal receptive fields? 
So we just saw how you can think of a region of the visual space that makes a neuron spike when you turn light on or makes a neuron spike when you turn light off. And at the simplest level, you can think of that in the visual system as just a part of the visual field that a neuron will respond to. So if you flash a light over here, the neuron might respond. If you flash a light over there, it won't respond. So there's this region of the visual field where neurons respond, but it's more than just a region. There's actually a pattern of features within that area that will make a neuron spike, and other patterns will keep the neuron from spiking. And so we can think of a neuron as having some spatial filter that has positive parts, and I'll use green throughout my lecture today for positive parts of a receptive field, and negative parts. And this is a classic organization of receptive fields, let's say, in the retina or in the thalamus, where you have an excitatory central part of a receptive field and an inhibitory surround of the receptive field. So we can think of this as a filter that acts on the sensory input. And the better the stimulus overlaps with that filter, the more the neuron will spike. So let's formalize this a little bit into a model. So let's say we have some visual stimulus that is an intensity as a function of position x and y. We have some filter, G, that filters that stimulus. So we put the stimulus into this filter. This filter, in this case, just looks like an excitatory center with an inhibitory surround. That filter has an output, L, which is the response of the filter. Then we have some nonlinearity. So we take the response of the filter, L, we add it to some spontaneous firing rate, and we take the positive part of that sum and call that our firing rate. So that would be a typical output nonlinearity.
It's called "a threshold nonlinearity," where if the sum of the filter output and the spontaneous firing rate is greater than 0, then that corresponds to the firing rate of the neuron. So in this case, you can see that as a function of L-- I should have labeled that axis L-- as a function of L, when L is 0, you can see the neuron has some spontaneous firing rate, R naught. And if L is positive, the rate goes up. If L is negative, the rate goes down until the rate hits 0. And then once the neuron stops firing, it can't go negative. So the neuron firing rate stays at 0. And then once you have this firing rate of the neuron, what is it that actually determines whether the neuron will spike? So in most models like this, there is a probabilistic spike generator that is a function of the rate output of this nonlinear output. It's basically a random process that generates spikes at a rate corresponding to this R. And in the next lecture, we're going to come back and talk a lot more about what spike trains look like, how to characterize their randomness, and what different kinds of random processes you actually see in neurons. A very common one is the Poisson process, where there's an equal probability per unit time of a neuron generating a spike, and that probability is controlled by the firing rate. We'll come back to that and discuss it more. Any questions? Yes, [INAUDIBLE] AUDIENCE: Is something biologically [INAUDIBLE] something like if the overlap [INAUDIBLE] it's just it's more excitatory? MICHALE FEE: Yeah. So we're going to come back in, I think, a couple lectures where we're going to talk about exactly how you would build a filter like this in a simple feed forward network. So at the simplest level, you can just imagine you have a sensory periphery that has neurons in it that detect, let's say, light at different positions. Those neurons send axons that then impinge on, let's say, the neuron that we're modeling right here. 
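The output stage just described, a threshold nonlinearity followed by a Poisson spike generator, can be sketched in a few lines. The spontaneous rate, bin width, and test values below are illustrative, not numbers from the lecture.

```python
import numpy as np

# Sketch of the LN-model output stage: rate = [L + r0]_+, then a
# Poisson spike generator driven by that rate. Values are illustrative.
rng = np.random.default_rng(0)

def firing_rate(L, r0=5.0):
    """Threshold nonlinearity: spontaneous rate r0 plus filter output L,
    rectified at zero (spikes/s)."""
    return np.maximum(np.asarray(L, dtype=float) + r0, 0.0)

def poisson_spikes(rate, dt=0.001):
    """Poisson process: in each bin of width dt, P(spike) ~ rate * dt."""
    rate = np.asarray(rate, dtype=float)
    return rng.random(rate.shape) < rate * dt

print(firing_rate([-10.0, 0.0, 10.0]))  # negative filter output rectifies to zero
```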
And the pattern of those projections, both excitatory and inhibitory projections from the periphery, would give you this linear filter. And then this nonlinearity would be a property of this neuron that we're modeling. Yes, Jasmine? AUDIENCE: [INAUDIBLE] MICHALE FEE: Great, exactly. So it's going to turn out that we're going to treat this as a linear filter. The output of this filter will be calculated for a spatial receptive field as the correlation of this filter with the stimulus. But in the time domain, when we calculate a temporal receptive field, we're going to use a convolution. And we'll get to that in a minute. That's the very next thing. Great question. Anything else? So that's called an LN model. I should have put that on the slide-- linear/nonlinear model. So let's describe that mathematically. So let's say we have a two-dimensional receptive field. We're going to call that G of x and y. So remember, we had intensity as a function of x and y. There's our stimulus input. And we're going to ask, how well does that stimulus overlap with this receptive field? And we're going to describe the receptive field as a function on this space, x and y. And our linear model is going to be how well the stimulus matches or overlaps with the receptive field. And we do that just by multiplying the receptive field times the stimulus and integrating over x and y. x and y, just think of it as a position on the retina. So let's look at this in one dimension. So remember, this was a receptive field that has a positive central region and an inhibitory surround. So if we just take a slice through that and plot G as a function of x, you can see that there is a positive central lobe and an inhibitory surround, inhibitory side lobes. That's a very, very common receptive field early in the visual system, in the retina and in the lateral geniculate nucleus. So in one dimension, we just take this receptive field, G, multiply it by the stimulus pattern, and integrate over position.
That is L. That's the output of the linear filter. We're going to add that to a spontaneous firing rate, and that gives us the firing rate of our neuron. And you can see that that's like-- that this product, an integral over x, is just like a correlation-- G of i times intensity of i summed over i. So let's walk through what that looks like. So here's G of x, the receptive field. Let's say that's our intensity profile. So we're going to have a bright spot of light surrounded by a darker side lobe. So the way to think about this is, in visual neuroscience experiments, usually the background is kind of gray. And you'll have bright spots, like here, and dark spots, like there. And the rest will just be gray. So that's how you get positive and negative intensities here, because they're relative to some kind of gray background. And so now we can just multiply those two together. And you can see that when you multiply positive times positive, you get positive. And when you multiply negative times negative, you get positive. And when you integrate over position x, you get a big number. You get some positive number. So that stimulus would make a neuron with this receptive field likely to spike. Let's consider this case. Now, instead of a small spot of light centered over the excitatory lobe of the receptive field, you have a broad spot of light that covers both the excitatory and inhibitory lobes of the receptive field. What's going to happen? Yeah? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. You're going to get a positive times a positive here, and a negative times a positive. And so you're going to get negative contribution from the side lobes, and those things can exactly cancel out when you integrate over position. And so you can get a very small response. If you have-- I'm not going to go through this example, but it's pretty obvious.
If you were to have light here, and then dark here, and then light there, you would have only these negative side lobes activated. You would have no contribution from this excitatory lobe, and the integral would actually be negative. And so the firing rate of the neuron would go down. Do you remember seeing those different points in that first movie where you saw the donut of light turning on? The neuron kind of shuts off when you turn that light on. Yes? AUDIENCE: So I'm just curious what we're assigning 0 [INAUDIBLE]. MICHALE FEE: Yeah. So 0 is just this gray background. It's some intermediate level of light intensity-- pretty straightforward. So that's a spatial receptive field right there. We refer to this correlation process, this linear filter as linear, because you can see that if you put in half the light intensity, let's say, you get half the product. And when you integrate, you get half the neural response. If you take the stimulus and you cut it in half so that you only apply light and dark to half the receptive field, then you'll also get half the response of the neuron. Because this will contribute to the integral and this won't, and so you'll get a neural response that's half as big. So in this model, the response varies linearly with this overlap of the receptive field and the intensity. So that's where the term linear comes from. Any question about that? Correlation is a linear operation. So the next thing we're going to talk about is temporal receptive fields. So we just talked about spatial receptive fields. Neurons are also very sensitive to how things vary in time. So we're going to take the same concept. Instead of a stimulus that's a function of position on the retina, let's say, we're going to take a stimulus that's a function of time. And we're going to operate on that temporal stimulus with a filter that [INAUDIBLE] a temporal sensitivity.
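Before moving on to time, the spatial multiply-and-integrate picture above, including the linearity property, can be checked numerically. The difference-of-Gaussians shape and all the widths here are illustrative stand-ins for the center-surround filter on the slide, balanced so the excitatory and inhibitory areas cancel.

```python
import numpy as np

# Illustrative 1D center-surround receptive field: difference of Gaussians,
# with the surround scaled so total excitatory and inhibitory area cancel.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
G = np.exp(-x**2 / (2 * 0.5**2)) - (1/3) * np.exp(-x**2 / (2 * 1.5**2))

def L(stimulus):
    """Overlap of receptive field and stimulus: integral of G(x) * I(x) dx."""
    return np.sum(G * stimulus) * dx

narrow = np.exp(-x**2 / (2 * 0.5**2))  # bright spot on the excitatory center
broad = np.ones_like(x)                # wide spot covering both lobes

print(L(narrow))        # strongly positive: drives the neuron
print(L(broad))         # near zero: center and surround cancel
print(L(0.5 * narrow))  # half the intensity gives exactly half the response
```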
We're going to get the output of that filter, add it to a spontaneous firing rate, and we're going to get a time-dependent firing rate. So let me just show you what this looks like. So let's say that you have a stimulus that's fluctuating in time. So imagine that you have a neuron that has a spatial receptive field that's just a blob, just a positive bump. And you shine light on it, and the intensity of that light varies. And this is the intensity that you apply to that spatial receptive field as a function of time. And again, 0 is some kind of average gray level. And so you can go dark or bright around that. So now, neurons generally have receptive fields in time, and this is what a typical receptive field might look like, a temporal receptive field might look like, for a neuron. Neurons are often particularly driven by stimuli that go dark briefly and then go bright very suddenly, and that causes a neuron to spike. So we can imagine that temporal receptive field, and sliding it across the stimulus, and measuring the overlap of that temporal receptive field with the stimulus at each time. Does that make sense? And so you can see that most of the time that negative bump and positive bump are just going to be overlapping with just lots of random wiggles. But let's say that the stimulus has a negative part and then a positive part, dark, then bright. You can see that at this point that filter will have a strong overlap with the stimulus. Why? Because the negative overlaps with the negative, and that product is positive. Positive overlaps with positive, that product is positive. And so when you integrate over time you get a peak. Does that make sense? This is, what I'm plotting here, is the product of these two functions at each different time step as I slide this temporal receptive field over the stimulus. Does that make sense? Any questions about that? I'm going to go through that.
We're going to work on that idea a little bit more, but if you have any questions, now's a good time. AUDIENCE: [INAUDIBLE] MICHALE FEE: So you're asking, why is it correlation in the spatial domain? Because-- well, let me answer that question after we define what this is mathematically. So what is this mathematically? AUDIENCE: A convolution. MICHALE FEE: It's a convolution. It's exactly what we talked-- it's a lot like what we talked about when we talked about synapses. In that case, we had some delta functions here corresponding to spikes coming in, and the synaptic response was like some decaying exponential. And we slid that over the stimulus. In this case, we have this fluctuating sensory input, this light intensity, and we're sliding that linear response of the neuron over and measuring the overlap as a function of time, and that's a convolution. Mathematically, what we're doing is we're taking this linear kernel, this linear filter, sliding it over the stimulus, using this variable t, and we're integrating over this variable tau. So we have a kernel, D, multiplied by the stimulus at different time shifts. We integrate over tau, and that's the output of our temporal receptive field. That's the linear output of our receptive field. And we add that to the spontaneous firing rate, and that gives us a time-dependent firing rate of the neuron. Yes? AUDIENCE: So is tau how much we [INAUDIBLE]? MICHALE FEE: Great question. t is the location of this kernel as we're sliding it along. Tau is the variable that we're integrating over after we multiply them. Does that make sense? So we're going to pick a t, place the kernel down at that time, multiply this. And remember, this is 0 everywhere outside of here. And so we're going to multiply the stimulus by this kernel. It's going to be-- that product is going to be 0 everywhere except right in here. You're going to get a positive bump when you multiply these two, a positive bump when you multiply those two.
And the integral over tau gives us this positive peak here. If we picked a slightly different t so that this thing was lined up with this positive peak here, then you'd see that you'd have positive times negative. That gives you a negative. When you integrate over tau, that gives you this negative peak here. Does that make sense? So let's just go back to the math. So you can see that we're integrating over tau, but we're sliding the relative position of D and S with this variable t. Yes? AUDIENCE: Is that the [INAUDIBLE]? MICHALE FEE: Yes. S is the stimulus. AUDIENCE: Oh. And D is the kernel? MICHALE FEE: D is the linear kernel. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: So nature chooses the shape of the kernel for us. So that is the receptive field of neurons. Now, I just made this up to demonstrate what this process looks like. But in real life, this is the property of a neuron, and we're going to figure out how to extract this property from neurons using a technique called spike-triggered average, which we'll get to later. But for now, what I'm trying to convey is, if we knew this temporal receptive field of a neuron, then we could predict the firing rate of the neuron to a time-varying stimulus. That was a very important question. Does everyone understand that? Because it's one of those cases where once you see it, it's pretty obvious, but sometimes I don't explain it well enough. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yes. So I've already flipped it, and sometimes you'll see. So this is all going this way-- positive tau. I've flipped it for you already. Sometimes you'll see it plotted the other way with tau going positive to the right, but I've plotted it this way already. Any questions? Oh, and so that was actually the very next question. You might normally-- you might sometimes see temporal receptive fields plotted this way with positive tau going to the right. And kind of meant-- I always just flip it back over.
Because in this view, you see that what the neuron responds to is dark followed by light, and then right there is when you have a peak spiking probability. Peak firing rate happens right here. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: So we're going to get to that. But typically, neurons in the retina-- I'll show you an example in the retina. A typical time scale here might be tens to 100 milliseconds, so pretty fast. So that's called the temporal kernel or the temporal receptive field. And again, it's linear in the sense that if you, for example, had a stimulus intensity that just had this positive bump without the negative bump, then the response would be lower just by the ratio of areas. So if you got rid of this big negative bump here, then the response would be, I don't know, a third as big. It would be linear in the area. Let's push on. So now, let's extend this. So we've been talking about spatial receptive fields and temporal receptive fields. But in reality, you can combine those things together into a single concept, called a "spatial temporal receptive field," and that's usually referred to as an STRF. If you're working in the auditory system, STRF, it's the same acronym, but it just means spectral temporal receptive field, because it's sensitive to the spectral content of the sounds, not the spatial structure of the visual stimulus. So in general, when you have a visual stimulus, it actually depends on x- and y-coordinates in the retina and time. So just I of x and y, which would be like a still image presented to you. I of x, y, and t is-- any movie can be written like that. Your favorite movie is just some function of I of x, y, and t. And so we're going to now present to our retina, and we're going to simplify this by considering just one spatial dimension. So we're going to take your favorite movie and just collapse it into intensity as a function of position. It's probably not nearly as interesting, but it's much easier to analyze. 
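Before moving to the full spatial temporal receptive field, the sliding-kernel convolution just described can be sketched with np.convolve. The biphasic kernel below is an illustrative dark-then-bright shape and the time constants are made up, not the curve drawn on the slide.

```python
import numpy as np

# Illustrative biphasic temporal kernel D(tau): positive for the recent past
# (bright just before the spike), negative earlier (dark before that).
dt = 0.001                                   # 1 ms bins
tau = np.arange(0.0, 0.2, dt)                # kernel spans the last 200 ms
D = np.sin(2 * np.pi * tau / 0.2) * np.exp(-tau / 0.05)

t = np.arange(0.0, 2.0, dt)
s = np.zeros_like(t)
s[(t >= 0.9) & (t < 1.0)] = -1.0   # dark for 100 ms...
s[(t >= 1.0) & (t < 1.1)] = +1.0   # ...then bright: the preferred feature

# L(t) = integral of D(tau) * s(t - tau) dtau -- a convolution
L_t = np.convolve(s, D)[:len(t)] * dt
rate = np.maximum(5.0 + 100.0 * L_t, 0.0)  # add spontaneous rate, rectify

print(t[np.argmax(rate)])   # peak firing lands just after the dark-to-bright edge
```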
So we're going to write the firing rate as a function of time as a spontaneous firing rate plus a filter, D, which is a spatial temporal receptive field acting on that intensity. And you can see that we're doing stuff in here that looks like a convolution integrating over tau, and we're also doing stuff that looks like a correlation when we integrate over x. So there's the convolution integrating over tau. What I've done is I've pulled out the D tau, because we can consider-- I've just written this as two separate integrals. So we have an integral over tau that looks like a convolution. And we have an integral over x that looks like a correlation. So what does separability mean? So separability is just a particularly-- if a receptive field is separable, it means that you can write down a spatial receptive field and a temporal receptive field separately. And that looks like this. So imagine that you have a spatial temporal receptive field, D, that's a function of position and time. But you can see that you can just write it as a product of the spatial part and the temporal part. So here, you have a temporal receptive field that looks like this, a positive lobe here and a negative lobe there, a spatial receptive field that looks like this, just a positive lobe. And if you multiply this function of x by this function of t, you can see that you get a function of x and t that looks like this, where at any position the function of time just looks like this-- scaled. And at any time, the spatial receptive field just looks like this. Does that make sense? Other receptive fields are not separable. You can see that you can't write this receptive field as a product of a temporal receptive field and a spatial receptive field. Does that make sense? Is that clear why that is? So basically, you can see that if you take a slice here at a particular position, you can see that the temporal pattern here looks very different than the temporal pattern here.
And so you can't write this simply as a product of a spatial and a temporal receptive field-- separable, inseparable. So let's take a look at what happens when you have a separable receptive field. Things kind of become very simple. We can now write our spatial temporal receptive field as a spatial receptive field, which is a function of position, times a temporal receptive field that's a function of time. And when you put that into this integral, what you find is that you can pull that spatial part of the receptive field out of the temporal integral. So basically, the way you think about this is that you find the correlation of the spatial receptive field with the stimulus, and that gives you a time-dependent stimulus, a stimulus that's just a function of time. Then you can convolve the temporal receptive field with that time-dependent stimulus. So you can really just treat it as two separate processes, which can be kind of convenient just for thinking about how a neuron will respond to different stimuli. So let's just think about, develop some intuition about, how neurons with a particular receptive field will respond to a particular stimulus. So here's what I've done. I've taken a spatial temporal receptive field here. This is a function of position and time, and we're going to figure out how that neuron responds to this stimulus. So this stimulus is also a function of space and time. It's one-dimensional in space. So what does this look like? This looks like a bar of light that extends from position 2, let's say, 2 millimeters to 4 millimeters on our screen. And it turns on at time point 1, stays on, and turns off at time point 6. Let's say 1 second to 6 seconds. Does that make sense? So just imagine we have a 1D screen, just a bar, and we turn on light that's a bar between 2 and 4. So we turn on a bar of light. We turn it on at time 1, and we turn it off at time 6. It's just a very simple case. 
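Before walking through the flashed-bar example, the separability claim just made (correlate with the spatial part first, then convolve with the temporal part) can be verified numerically against brute-force filtering with the full two-dimensional kernel. All the shapes and sizes here are made up for illustration.

```python
import numpy as np

# A separable STRF is the outer product of a spatial RF and a temporal RF,
# and filtering with it factors into two 1-D steps. Illustrative shapes only.
nx, nt = 21, 50
x = np.linspace(-2, 2, nx)
Ds = np.exp(-x**2 / 0.5)                           # spatial part: one positive lobe
tau = np.arange(nt) * 0.01
Dt = np.sin(2 * np.pi * tau / 0.5) * np.exp(-tau / 0.1)  # temporal part: biphasic

D = np.outer(Ds, Dt)   # D(x, tau) = Ds(x) * Dt(tau): separable by construction

rng = np.random.default_rng(1)
stim = rng.standard_normal((nx, 500))   # stimulus I(x, t): a 1-D "movie"

# Factored filtering: spatial correlation first, then temporal convolution.
s_of_t = Ds @ stim                                  # sum over x at each time
L_factored = np.convolve(s_of_t, Dt)[:stim.shape[1]]

# Direct filtering with the full 2-D kernel gives the same answer.
L_direct = np.zeros(stim.shape[1])
for m in range(nt):
    shifted = np.zeros_like(stim)
    shifted[:, m:] = stim[:, :stim.shape[1] - m]    # stimulus delayed by m bins
    L_direct += np.sum(D[:, m][:, None] * shifted, axis=0)

print(np.allclose(L_factored, L_direct))  # True
```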
We flash a light on at a particular position, and then we turn it off. So let's see how this neuron responds. So what we're going to do is we're going to slide-- remember, in the 1D case where we had the temporal receptive field, we just slid it across the stimulus. So we're going to do the same thing here. We're going to take that spatial temporal receptive field, and we're going to slide it across the stimulus. And we're going to integrate, we're going to take the product, and we're going to integrate. And the integral plus the spontaneous rate is going to be the firing rate of our neuron. So what is the integral right there? The product is-- AUDIENCE: 0. MICHALE FEE: --0, everywhere. When you integrate, it's 0. So we're going to add a spontaneous firing rate, which will be right there. So that will be our firing rate. Now, let's slide the stimulus a little further. That means that this time we're asking, what is the firing rate of that neuron? So what is it going to look like? AUDIENCE: Go up a bit. MICHALE FEE: It's going to go up a little bit, because we have a positive part of the receptive field. Green is positive in our pictures here. It's going to overlap with this bar of light, because that neuron is sensitive to light between, let's say, 1 and 4, positions 1 and 4 on the screen. And so the light is falling within the positive part of that receptive field, and so the neuron's going to increase its firing rate. So now what's going to happen? AUDIENCE: [INAUDIBLE] MICHALE FEE: It's going to cancel. You're going to get a positive contribution to the firing rate here-- whoops-- and a negative contribution here. And those two are going to add up. You're going to multiply that times that. That gives you a plus. That times that gives you-- sorry. That times the light that's shining on it is negative. Add those up, and it's going to cancel. And the firing rate's going to go back to baseline.
Now, the light in this receptive field, we're continuing to slide it in time over our stimulus. What happens here? AUDIENCE: Same thing. MICHALE FEE: Same, good. How about here? It's going to go-- AUDIENCE: Down. MICHALE FEE: --down. It's going to dip down, that's right. And then? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. By 0, you mean the spontaneous firing. Yeah, exactly. Cool. Yes? AUDIENCE: [INAUDIBLE] the rate of response because the slope of the line [INAUDIBLE]? MICHALE FEE: So you should think about this thing sliding over the stimulus in real time. So if these are units of seconds, then this thing is sliding across the stimulus at 1 second per second sliding across. And so that is firing rate as a function of time in those units. Does that make sense? AUDIENCE: Yeah. But why [INAUDIBLE]? MICHALE FEE: Oh, OK. Like why doesn't this go up to here? So what's the answer to that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. So how would I make that steeper? How would I make that go up to here? AUDIENCE: [INAUDIBLE] the light. MICHALE FEE: What's that? AUDIENCE: You'd turn the light up. MICHALE FEE: Yeah, you'd turn the light up, that's right. This is the receptive field of a neuron, so we generally can't control that. So if we wanted to make this neuron respond more, we'd turn the light up to a higher intensity. Any other questions? So neurons often have more complex receptive fields. So here's an example. What is this going to do as we slide this across the stimulus? What's that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. It's not going to-- the neuron isn't going to respond at all. Because as soon as it overlaps, it has a positive contribution. The light activates these lobes of the receptive field, but inhibits these lobes of the receptive field. And the net result is that when you integrate over the product, you're going to get 0. Does anyone have any idea what kind of stimulus might make this neuron respond? This is a very special kind of receptive field. 
Yes, [INAUDIBLE] AUDIENCE: The light goes from [INAUDIBLE] MICHALE FEE: Yeah. What is that called? AUDIENCE: I'm not sure. MICHALE FEE: It's called a stimulus that moves. AUDIENCE: [INAUDIBLE] MICHALE FEE: Moves-- a moving stimulus. Good. So that's a receptive field that responds to a moving stimulus. So let's take a look at that. So here we go. Anybody want to take a guess at what this stimulus will do to this neuron? Can you visualize sliding it across? AUDIENCE: [INAUDIBLE] MICHALE FEE: And then what? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. You can see that this-- so let's describe what this [AUDIO OUT]. So we've turned a bar of light on here between 0 and 2, and then we slide it up over the course of a few seconds. So we've turned a spot of light on, and then we move it up-- off. So it's a spot of light that turns on, moves, and then disappears. So let's walk through it. So there's a little bit of overlap there, so the neuron's firing rate is going to start going up. But then as it goes further, this light is now activating those inhibitory lobes, which is going to have a negative contribution. So when you take the product, you're going to get lots of negatives there, very little contribution from the positive lobes, and so the firing rate's going to go down. And what happens is it goes down, and once the firing rate hits 0, it can't go any more negative. So the firing rate is just going to sit at 0 until this stimulus moves out of the temporal receptive field of this neuron. And then what? AUDIENCE: Back up. MICHALE FEE: It's going to go back up to the spontaneous rate. So what kind of stimulus will activate this neuron? A stimulus that moves from top down. So let's take a look at that. So here's our stimulus. You see that it's going to just hit that inhibitory lobe, go down a little bit. And then the excitatory lobes of the receptive field are going to overlap with the stimulus.
You're going to get a big positive peak, and then the stimulus will move out of the receptive field, and the firing rate will go back down to baseline. Any questions about that? So that's very common, in both the visual system and in the auditory system, to have neurons that are responsive to moving stimuli. What does moving stimulus mean in the auditory system? AUDIENCE: Changing pitch. MICHALE FEE: Right, changing pitch. So [WHISTLE], like that. That activated a gazillion neurons in your brain that are sensitive to upward-going pitches-- neurons that have structure like this. Isn't that crazy? [WHISTLE] I can control all the neurons in your brain-- [WHISTLE] [LAUGHS] --at least the ones that respond to whistles. So now that we've seen mathematically how to think about what a receptive field is and how it interacts with a sensory stimulus, how do you actually discover what the receptive field of a neuron is? That turns out to actually be a very challenging problem. So in very early parts of the visual and the auditory system, like in the retina and the LGN, and as far as, let's say, V1 in visual cortex, it's been possible to find receptive fields of neurons by basically just randomly flashing bars and dots of light and just hoping to get lucky and find what the response is. It turns out that that's generally a very-- it can be a very time-consuming process. And so people have worked out ways of discovering the receptive fields of neurons in a much more systematic way. And that's what we're going to talk about next-- the idea of a spike-triggered average. So here's the idea. We're going to take a stimulus, and we're going to basically-- we're basically just going to make noise, just a very noisy stimulus. So we're going to take, let's say, an intensity, a light, a spot of light, and we're going to fluctuate the intensity of that light very rapidly. And we're going to do that basically with a computer.
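Before moving on, the rectified-convolution picture described above -- correlate the temporal receptive field with the stimulus, add the spontaneous rate, and clip the result at zero because a firing rate cannot go negative -- can be sketched numerically. The biphasic kernel shape and the rate values here are made-up illustration numbers, not data from the lecture:

```python
import numpy as np

def linear_nonlinear_rate(stimulus, kernel, r0=5.0, dt=1.0):
    """Firing rate of a linear-nonlinear neuron: convolve the temporal
    kernel with the stimulus, add the spontaneous rate r0, and rectify
    at zero (a firing rate cannot be negative)."""
    drive = np.convolve(stimulus, kernel, mode="full")[:len(stimulus)] * dt
    return np.maximum(r0 + drive, 0.0)

# Hypothetical biphasic kernel: inhibition followed by excitation,
# like the motion-sensitive receptive field discussed above.
kernel = np.concatenate([-np.ones(5), np.ones(5)])

stimulus = np.zeros(50)
stimulus[20:25] = 1.0  # brief flash of light

rate = linear_nonlinear_rate(stimulus, kernel)
# The rate dips from the spontaneous level, pins at 0 (it can't go
# negative), then rebounds above baseline as the excitatory lobe
# overlaps the stimulus.
print(rate.min(), rate.max())  # → 0.0 10.0
```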
We just take a computer, make a random number generator, hook that up to, let's say, a light source that we can control the brightness of with a voltage. And then have the computer generate-- put out that random number sequence, control the light level, and then play that to our, let's say, our visual neuron. And that neuron is going to spike. And now, what we can do is take the times of those spikes and go back and figure out basically what made the neuron fire post-hoc. So if we do that here, you can basically take the spike times. Now, you know that whatever made the neuron spike happened before the spike. It didn't happen after the spike. So you can basically ignore whatever happened after the spike and just consider the stimulus that came in prior to the spike. So we're just going to take a little block of the stimulus prior to the spike, and we're going to do that for every spike that the neuron generates, and we're going to pile those up and take an average-- a spike-triggered average. That's it. And that is going to be-- what's really cool is that you can show that under some conditions that spike-triggered average is actually just the receptive field of the neuron. It's the linear receptive field of the neuron. So let's think-- and you can write that down as follows. We're going to have a stimulus. We're going to write down the times at which all these spikes occur, t sub i, or the times in the stimulus at which the spikes occur. We're going to take the stimulus at those times minus some tau, and we're going to average them over all the spikes, all the N spikes, that we've measured: K(tau) = (1/N) sum over i of s(t_i - tau). And that K of tau is going to be the spike-triggered average, and in many cases, it's actually the linear kernel. Now, let's think for a moment about what the conditions are. What kind of stimulus do you have to use in order to get the spike-triggered average to actually be the linear kernel of the neuron, that receptive field of the neuron? Any guesses? Let me give you a hint.
What happens if I take a stimulus that varies very slowly? So instead of having these wiggles, it just goes like this. It has very slow, random wiggles. Will that be a good stimulus for extracting the receptive field? Why is that? Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. So I think what you're saying is that that stimulus is very slow, and it doesn't actually have the fast fluctuations in it that makes the neuron spike. If the stimulus varies very slowly, then it-- see, this neuron likes to have this very fast wiggle, this negative followed by a positive. But if the stimulus you put in just varies slowly, then that stimulus doesn't actually have the kind of signal that's needed to activate this neuron. Yes? AUDIENCE: [INAUDIBLE] the stimulus [INAUDIBLE] smaller than tau? MICHALE FEE: Well, tau is just the variable that describes the temporal receptive field. But I think what you're saying is that the stimulus varies more slowly than the temporal structure in the receptive field, in the temporal receptive field. That's right. Tau is just this variable that we define the receptive field on. Yes, [INAUDIBLE] AUDIENCE: So when we add up an average, are we actually adding up from everything before the spike, everything before the spike? MICHALE FEE: So great question. So how far back do you think you would need to average? AUDIENCE: [INAUDIBLE] MICHALE FEE: Maybe. I mean, in principle, you could have spikes happening very fast, and you could have signal that affects the response of a neuron from even before the last spike. But in general, what do you think the answer to that question is? Brainstorm some more ideas. So let's say that you were recording in the retina, and you knew that neurons tend to respond to visual stimuli only for-- that temporal receptive fields in the retina never extend back more than 100 milliseconds. Then how would you choose that window? You would just choose this window to be 100 milliseconds, and that would be it. 
If you're recording in a brain area that you really have no idea, then you have to actually try different things. So you can try a window that goes back 100 milliseconds. And if when you do the spike-triggered average, it hasn't gone to 0 yet, then you need to take more window. So you can figure it out. You can create a short window, and it only takes-- like you change one number in your Matlab code, and hit Run again, and do it again. It's pretty simple. Yes? AUDIENCE: So when you've got like [INAUDIBLE].. MICHALE FEE: Yes. AUDIENCE: Wouldn't that depend [INAUDIBLE] what kind of filter [INAUDIBLE] MICHALE FEE: Yeah. So you're saying that the stimulus that you choose actually depends on the kinds of filters that the neurons are, right? Actually, the right answer, it depends. AUDIENCE: And so we have-- [INAUDIBLE] MICHALE FEE: Yeah. So generally, the statement is that the stimulus you use has to have fluctuations in it that are faster than the fluctuations in the kernel that you're trying to measure. And so most people choose what's called a "white noise stimulus." And white noise stimulus comes from the idea that when you take the spectrum-- and we're going to get into spectra next week. But when you look at the spectrum of the stimulus, and you take the Fourier transform of it and look at how much power there is as a function of frequency, the spectrum is flat. And just like in colors, white light has a flat spectrum. And so the term evolved to a noise that has a flat spectrum white noise. And that's what people generally refer to when they do spike-triggered averages. They use noise that has a flat spectrum. And so you'll often refer to people saying that they've used a white noise stimulus to extract a spike-triggered average. Now, of course, you can't ever make a noise that truly has a flat spectrum. You have to-- you can only make things fluctuate as fast as your experimental setup can make them fluctuate. So things eventually fall off. 
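As a quick sketch of what "white noise has a flat spectrum" means, here is made-up Gaussian noise whose power, averaged within a low-frequency band and a high-frequency band, comes out nearly the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
noise = rng.standard_normal(n)  # Gaussian white noise stimulus

# Power spectrum: magnitude-squared of the Fourier transform,
# normalized by the number of samples.
power = np.abs(np.fft.rfft(noise)) ** 2 / n

# Average the power in a low-frequency band and a high-frequency band;
# for white noise the two should be nearly equal -- a flat spectrum --
# both close to the noise variance (about 1.0 here).
low = power[1:1000].mean()
high = power[-1000:].mean()
print(low, high)
```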
Fortunately, neurons tend to have receptive fields that only have fluctuations that are on the scale of 10 milliseconds, or maybe a millisecond in extreme cases. Maybe in the auditory system you might get a little fluctuation for a millisecond, and that's generally pretty slow for an experimental setup. So you choose a white noise stimulus, where white noise means it's got fluctuations faster than the fastest fluctuations in the temporal receptive field, and that for real neurons, even in early sensory areas, tends to be on the scale of millisecond fluctuations. And in higher brain areas, they would have even slower fluctuations. Now, let me just say one word about the spike-triggered average. It works really well in lower sensory areas. But once you get up out of primary sensory areas, this method doesn't work anymore. Neurons don't actually have simple receptive fields. And this method starts not working so well outside of primary sensory areas. But before I get too much into the limitations of this, let me just show you some examples where it works really beautifully well. So this is-- I'll show you some slides that I got from Marcus Meister, who studies-- he's at Caltech. He used to be at Harvard, and he studies the retina. And so he developed this setup for extracting receptive fields of retinal neurons. And here's the idea. So here's a piece of retina, that's a representation of the circuitry in the cells within a piece of retina. You extract the retina, and you place it on a dish, a special dish, that has electrodes embedded, metal electrodes embedded, in the glass, sort of on the surface of the glass. You take the retina out. You press it down onto the glass. So now the electrodes are sensing the spiking activity of these neurons down here in the retinal ganglion cell layer. These are the photoreceptors up here.
And then what he does is he has a computer monitor that's generating random patterns of visual stimuli, and you project that using a lens down onto the photoreceptors of the retina. And those neurons now make lots of spikes, and you can extract those spikes using the methods that you saw in the video from Tuesday. So here's what those signals look like. This just shows the signal on four different electrodes that happen to be right near each other, like four adjacent electrodes on this electrode array. And you can see that you get spike wave forms on all these different electrodes. You can see that you get-- that you see what looks like lots of cells on these four electrodes. One really interesting thing to note is that these electrodes are actually placed close enough together that multiple electrodes detect the spike signal from a single cell. So you can see right here, here's a spike. It exactly lines up with the spike on this other electrode. So here is a spike on one electrode that lines up with a spike on another electrode. You can see there's a little blip there and a little blip there. All of those spikes are actually from a single cell whose electrical activity is picked up on four adjacent electrodes. David? AUDIENCE: Is this the raw data? MICHALE FEE: Yeah. This is the raw voltage data coming out of those electrodes. And you can see here's a different cell right here. You can see that this cell has a peak of voltage fluctuation on electrode two. You see a little blip there, and a little blip there, and nothing there. And here is yet another cell that has a big peak on electrode three, essentially nothing, maybe a small bump there. So you can actually extract many different cells by looking at the patterns of activity that appear on nearby electrodes. And it turns out that this multi-electrode array system is actually very powerful for extracting many different cells, the spiking activity of many different cells, out of a piece of tissue.
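As a toy illustration of pulling spike times out of a raw voltage trace like the ones just described, here is a simple threshold-crossing detector on made-up data. Real spike sorting uses the waveform shapes across adjacent electrodes to separate cells; this sketch only captures the first step of detecting events:

```python
import numpy as np

def detect_spikes(voltage, threshold, refractory=30):
    """Return sample indices where the voltage crosses threshold from
    below, skipping a refractory window of samples so that one spike
    isn't counted twice. (A toy stand-in for real spike sorting.)"""
    crossings = np.flatnonzero(
        (voltage[1:] > threshold) & (voltage[:-1] <= threshold))
    spikes, last = [], -refractory
    for i in crossings:
        if i - last >= refractory:
            spikes.append(i)
            last = i
    return np.array(spikes)

# Synthetic trace: baseline noise plus two large spike-like deflections.
rng = np.random.default_rng(1)
v = 0.1 * rng.standard_normal(1000)
v[200:205] += 5.0
v[700:705] += 5.0

print(detect_spikes(v, threshold=2.0))  # indices near 200 and 700
```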
So what you can do is you put this through what's called a "spike-sorting algorithm," which uses these different spike wave forms on these different electrodes to pull out a spike train. And the spike train is now going to be a delta function for each different neuron that you've identified in this data set. So even though different neurons appear on these different electrodes, you're eventually going to extract this now so that you have a spike train for one neuron, a spike train for another neuron, a spike train for a third neuron, and so on. And then you can plot the firing rate of those different neurons. This is actually a histogram, a peristimulus histogram, of the activity of a bunch of different neurons to a movie being played to this piece of retina in the dish. And that's just literally a movie from a forest with trees swaying around in the breeze. So you have all these different neurons. And you can see that each neuron responds to a different feature of that movie. And that's because each neuron has a receptive field that's in a slightly different location, has a slightly different spatial and temporal receptive field, that allows it to pick out different features of the visual stimulus. And there are about a million of these neurons in the retina. Actually, I should be careful about the geometry: the photoreceptors are actually on the back side of the retina, the ganglion cells are on the front, and light goes through the ganglion cells to get to the photoreceptors. And there are a million of those retinal ganglion cells that then project up through the optic nerve to the thalamus. So how do we figure out what each of those neurons is actually responding to in this movie?
So what we can do is-- you could imagine doing a spike-triggered average of these neurons to the movie that's playing the trees swaying in the breeze. But why would you not want to do that? Why would that be a bad idea? What is it that we just decided is the best kind of stimulus to use to extract receptive fields? This is a highly structured stimulus that's got particular patterns in both space and in time. So it's really not an optimal stimulus for finding the receptive fields of neurons. What we want to do is to make a very noisy stimulus that we can play, and that's what they did. So then they make this, what they call in the visual system, a "random flicker stimulus." So it's basically a movie where you randomly choose the pixel values, in R, G, and B-- red, green, and blue-- for each pixel at each time step. And here's what that looks like. So now, you play that movie to the retina, and you record the spike trains. So there's the neurons spiking. And now what you do is you-- because this is now a two-dimensional stimulus, what you do is you have to collect the samples of the movie at a bunch of time steps prior to the neurons spiking. Does that make sense? Now, you do that for each spike that occurs. And now, you average those all together to produce a little movie of what happened on average before each spike of the neuron. And here's what that looks like. So this is for two different neurons. So what is that? So this is time across the top. So it starts at minus half a second. So what did that look like? What was it that made that neuron spike? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah, a dark spot. So that neuron was excited by a spot of light, sorry, by a stimulus that looked like a darkness right in that location right there. So that neuron is essentially being inhibited by light at that location.
And when the light at that location goes away, boom, the neuron is released from inhibition and spikes. Here's another cell. So that neuron responded to a spot of light right there in that location. And that's because that neuron gets excitatory input from bipolar cells that are located in the retina at that location. And those bipolar cells respond to input from the photoreceptors at that location. That's called an on cell. That's called an off cell. So there are many different kinds of neurons in the retina. There's something like-- I forget the latest count-- 40 or 50 different types of retinal ganglion cells that have very specific responses to visual stimuli. So now let's break that down into a spatial and temporal receptive field. Most-- I probably shouldn't say most-- but many retinal ganglion cells are separable in the sense that they have a spatial receptive field and a temporal receptive field that are just a product of each other. The STRF is a product of a spatial and temporal component. So here you can see as a function of time before the spike. So this is the stimulus at the time of the spike. This neuron responds with a spike to a spot of light that happened about 150 milliseconds earlier. And here's what that stimulus looked like as a function of space on the retina. So that's the spatial receptive field. Sorry, that's the spatial temporal receptive field-- a spatial stimulus as a function of time. You can write that as a product of a spatial receptive field and a temporal receptive field. So here's what the spatial receptive field looks like, and here's the temporal receptive field. You can see that this neuron, just like the example that we talked about earlier, this neuron likes to respond when there's a darkening in the central area, followed by a bright spot. You can see that little bit of darkening right here. So the response when this goes dark and then bright. So that's the visual system. 
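The separability idea just mentioned -- the spatio-temporal receptive field is a product of a spatial part and a temporal part -- can be sketched as an outer product. The Gaussian spatial profile and dip-then-peak temporal profile below are hypothetical shapes chosen to resemble the retinal example, and separability shows up as the STRF matrix having rank 1:

```python
import numpy as np

# Hypothetical spatial receptive field: an excitatory center along a
# 1-D spatial axis.
x = np.linspace(-1, 1, 21)
spatial = np.exp(-x**2 / 0.1)

# Hypothetical temporal receptive field: a dip followed by a peak
# (dark-then-bright), decaying back toward zero.
t = np.arange(20)
temporal = -np.sin(2 * np.pi * t / 20) * np.exp(-t / 10)

# A separable STRF is the outer product of the two components:
# STRF(x, tau) = spatial(x) * temporal(tau).
strf = np.outer(spatial, temporal)

# Separability means the matrix has rank 1: the second singular value
# of a truly separable STRF is (numerically) zero.
s = np.linalg.svd(strf, compute_uv=False)
print(strf.shape, s[0], s[1])
```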
Let's take a look at the auditory system-- you can use this same method for finding receptive fields in the auditory system. So we're going to talk briefly. We're going to come back to spectral analysis and spectral processing of signals in a couple lectures, but let me just introduce some of the basic ideas. So we're going to talk about the idea of a spectral representation of a sound. So this is a microphone signal of-- let me see if you can guess what it is-- of a creature. There are parts of this stimulus that have high frequency. So this is a microphone signal. It fluctuates due to fluctuations in air pressure when you hear something. Parts of that signal have high-frequency fluctuations. Parts of that signal have low-frequency fluctuations. You can compute a Fourier transform-- which we'll talk about more later-- of the stimulus as a function of time and see what the spectral components are. So this is a spectrogram of the sound that I just played. You can see frequency as a function of time, and the intensity, or in this case the darkness on the plot, shows you how much energy there is at a particular frequency at a particular time, so frequency as a function of time. And now, neurons respond to stimuli like this. It's a canary song. And neurons respond to different sounds. And so you can discover what sounds activate neurons by doing the same trick. So I'll show you. This is from a paper from Michael Merzenich's lab. This was worked on by Christophe deCharms, who was a post-doc in the Merzenich lab. And basically, what you can do is-- OK. So this is for calculating a visual receptive field. For calculating an auditory receptive field, what you can do is you can basically play noisy stimuli in auditory space. So what you can do is present random patterns of tones. So this is frequency, and this is time. And so what you can do is you can make little chords of tones that last, let's say, 20 milliseconds.
And then you make a different random combination of tones, and then a different random combination of tones. And this sounds like a very scrambled, noisy stimulus. And you play this to the animal while you're recording a neuron in auditory cortex. The neuron spikes. And then what you can do is just do exactly the same trick. You can look at the stimulus that occurred before each spike, pile up those columns. There's a little spectro-temporal pattern of stimuli that the bird-- in this case, a monkey-- heard right before that neuron spiked. And you can do the same thing. You can take that little snapshot of that sound and average them together. And here are the kinds of things you see. So that's a spectro-temporal receptive field. You can see I plotted it. It's plotted in a way that this is the stimulus that occurs with the spike. So this is like the spike-triggered average plotted already flipped. And you can see that-- how would you describe what this neuron responds to? How would you describe that? So this is frequency-- sorry, this is frequency in kilohertz. And that's time in milliseconds. So what do you think this neuron responds to? AUDIENCE: [INAUDIBLE] MICHALE FEE: What's that? AUDIENCE: It responds [INAUDIBLE]. MICHALE FEE: It responds maximally, actually, to a very short tone at 4 kilohertz. You see how it kind of has some inhibition there? See how it's kind of darker right there? So this neuron actually will respond better to a tone that only lasts 20 milliseconds than it will to a tone that lasts a long time. So this is a neuron that responds to a short tone pulse. What happens if we play a stimulus to this neuron that's broad? That instead of just being a tone, [WHISTLE], is broad, like [STATIC]? The noise that's at 4 kilohertz here will tend to excite the neuron, but the noise that's over here at 5 kilohertz or 3 kilohertz will tend to inhibit the neuron.
So the best response, the best stimulus to make this neuron respond, is a pure tone at 4 kilohertz that lasts about 20-ish milliseconds. How about this? Let's take a look at this neuron right here. What about that neuron? What does that neuron want to respond to? What does it like to hear? I'm anthropomorphizing shamelessly. You're not supposed to do that. What kind of stimulus drives this neuron? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good, a downward sweep, tone sweep, that goes from about 4 kilohertz to 3 kilohertz. In how long? AUDIENCE: 100 [INAUDIBLE]. MICHALE FEE: In about 100 milliseconds, that's right. Maybe 50 would do, [WHISTLE],, like that. How about this? It's kind of messy, right? So you can see that neurons have receptive fields that can be very complex in space, or in this case, in frequency and time. They are very selective to particular patterns in the stimulus. So we talked about a mathematical version of receptive fields, which are essentially describing patterns of sensory inputs that make a neuron spike. And we've talked about a very specific model, called a linear/nonlinear model, that describes how neurons respond to stimuli or become selective to stimuli. We've talked about a spatial receptive field, described the response of a neuron as a correlation between the spatial receptive field and the stimulus. Temporal receptive fields, where we've used convolution to predict the response of a neuron to a temporal-- to a stimulus. We've talked about the idea of a spatio- or spectro-temporal receptive field, and we've talked about how to use a spike-triggered average to extract the spectro-temporal or spatio-temporal receptive field of a neuron using white noise or random stimuli.
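The pipeline summarized above can be sketched end to end: generate white noise, pass it through a model neuron with a known temporal kernel and a threshold nonlinearity, and then recover the kernel with the spike-triggered average K(tau) = (1/N) * sum_i s(t_i - tau). The kernel shape and threshold below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200_000
stim = rng.standard_normal(T)  # white noise stimulus

# Ground-truth temporal kernel (hypothetical biphasic shape).
taus = np.arange(20)
kernel = np.exp(-taus / 5.0) * np.sin(2 * np.pi * taus / 20)

# Model neuron: linear drive followed by a threshold nonlinearity.
drive = np.convolve(stim, kernel, mode="full")[:T]
spike_times = np.flatnonzero(drive > 2.0)
spike_times = spike_times[spike_times >= len(kernel)]

# Spike-triggered average: K(tau) = mean over spikes of s(t_i - tau).
sta = np.zeros(len(kernel))
for ti in spike_times:
    sta += stim[ti - len(kernel) + 1: ti + 1][::-1]
sta /= len(spike_times)

# For a Gaussian white noise stimulus, the STA is proportional to the
# true kernel, so the two should be almost perfectly correlated.
corr = np.corrcoef(sta, kernel)[0, 1]
print(len(spike_times), corr)
```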
MIT 9.40 Introduction to Neural Computation, Spring 2018. Lecture 19: Neural Integrators.
MICHALE FEE: Today we're going to continue talking about the topic of neural-- recurrent neural networks. And last time, we talked about recurrent neural networks that give gain and suppression in different directions of the neural network space. Today we're going to talk about the topic of neural integrators. And neural integrators are currently an important topic in neuroscience because they are basically one of the most important models of short-term memory. So let me just say a few words about what short-term memory is. So and to do that, I'll just contrast it with long-term memory. So short-term memory is memory that just lasts a short period of time on the order of seconds to maybe a few tens of seconds at most, whereas long-term memories are on the order of hours, or days, or even up to an entire lifetime of the animal. A short-term memory has a small capacity, so just a few items at a time you can keep in short-term memory. The typical number would be something like seven, the classic number, sort of seven plus or minus two. You might have heard this, so just about the length of a phone number that you can remember between the time you look it up in the-- well, you know, we all have phone numbers on speed dial now, so we don't even remember phone numbers anymore. But in the old days, you would have to look it up in the phone book and remember it long enough to type it in. OK, whereas long-term memories have very large capacity, basically everything that you remember about all the work in your classes that you remember, of course, for your entire life, not just until the final exam. Short-term memories are thought to have an underlying biophysical mechanism that is the persistent firing of neurons in a particular population of neurons that's responsible for holding that memory, whereas the biophysical mechanism of long-term memories is thought to be physical changes in the neurons and primarily in the synapses that connect neurons in a population. 
So let me just show you a typical short-term memory task that's been used to study neural activity in the brain that's involved in short-term memory. So this is a task that has been studied in nonhuman primates. So the monkey sits in a chair, stares at the screen. There is a set of spots on the screen and a fixation point in the middle, so the monkey stares at the fixation point. One of those cues turns on, so one of those spots will change color. The monkey has to maintain fixation at that spot. The cue turns off then. So now the animal has to remember which cue was turned on. And then some delayed period later, which can be-- it's typically between three to six or maybe 10 seconds, the animal-- the fixation cue goes away, and that tells the animal that it's time to then look at the cued location. And so in this interval between the time when the cue turns off and the animal has to look at the location of that cue, the animal has to remember the direction in which that cue was activated, or it has to remember the location of that cue. Now, if you record from neurons in parts of the prefrontal cortex during this task, what you find is that the neural activity is fairly quiet during the precue and the cue period and then ramps up. The firing rate ramps up very quickly and maintains a persistent activity during this delay period. And then as soon as the animal makes a saccade to the remembered location, then that neural activity goes away because the task is over and the animal doesn't have to remember that location anymore. So that persistent activity right there is thought to be the neural basis of the maintenance of that short-term memory. And you can see that the activity of this neuron carries information about which of those cues was actually on. 
So this particular neuron is most active when it was the cue in the upper-left corner of the screen that was active, and that neuron shows no change in activity during the memory period-- the delay period-- when the cued location was down and to the right. So this neuron carries information about which cue is actually being remembered. And of course, there are different neurons in this population of-- in this part of prefrontal cortex. And each one of those neurons will have a different preferred direction. And so by looking at a population of neurons during the delay period, you could figure out-- and the monkey's brain can remember-- which of those cues was illuminated. OK, so the idea of short-term memory is that you can have a stimulus that is active briefly. And then for some period of time after that stimulus turns on, there is neural activity that turns on during the presentation of that stimulus and then stays on. It persists for tens of seconds after the stimulus actually turns off. So that's one notion of short-term memory and how neural activity is involved in producing that memory. And the basic idea here is that that stimulus is in some way integrated by the circuit, and that produces a step in the response. And once that stimulus goes away, then that-- the integral of that stimulus persists for a long time. All right, now, short-term memory and neural integrators are also thought to be involved in a different kind of behavior. And that is the kind of behavior where you actually need to accumulate information over time. OK, so sometimes when you look at a stimulus, the stimulus can be very noisy. And if you just look at it for a very brief period of time, it can be hard to figure out what's going on in that stimulus. But if you stare at it for a while, you gradually get a better and better sense of what's going on in that stimulus.
And so during that period of time when you're looking at the stimulus, you're accumulating information about what's going on in that stimulus. And so there's a whole field of neuroscience that relates to this issue of accumulating evidence during decision-making. OK, so let me show you an example of what that looks like. So here's a different kind of task. Here's what it looks like for a monkey doing this task. The monkey fixates at a point. Two targets come up on the screen. The monkey at the end of the task will have to saccade to one or the other of those targets depending on a particular stimulus. And a kind of stimulus that's often used in tasks like this is what's called a "random dot motion stimulus." So you have dots that appear on the screen. Most of them are just moving randomly, but a small number of them move consistently in one direction. So for example, a small number of these dots move coherently to the right. And if the motion stimulus is more to the right, then the monkey has to then-- once that motion stimulus goes away, the monkey has to make a saccade to the right-hand target. Now, this task can be very difficult if a small fraction of the dots are moving coherently one way or the other. And so what you can see is that the percentage correct is near chance when the motion strength or the percent coherence, the fraction of the dots that are moving coherently, is very small. There's almost a-- there's a 50% chance of getting the right answer. But as the motion strength increases, you can see that the monkey's performance gets better and better. And not only does the performance get better, but the reaction time actually gets smaller. So I'll show-- I found a movie of what this looks like. So this is from another lab that set this up in rats. So here's what this looks like. So the rat is poking its nose in a center port. There's the rat. There's a screen. There's a center port right in front of it that the rat pokes its nose in to initiate a trial. 
And depending on whether the coherent motion is moving to the right or left, the rat has to get food reward from one or the other port to the left or right. So here's what that looks like. [VIDEO PLAYBACK] [BEEP] [CLINK] [BEEP] [CLINK] [BEEP] [CLINK] [CLINK] [BEEP] [CLINK] So this is a fairly high-coherence motion stimulus, so it's pretty easy to see. And you can see the animal is performing nearly perfectly. It's getting the right-- it's making the right choice nearly every time. But for lower-coherence stimuli, it becomes much harder, and the animal gets a significant fraction of them wrong. [END PLAYBACK] OK, all right, I thought that was kind of amusing. Now, if you record in the brain in-- also in parts of frontal cortex, what you find is that there are neurons. And this is data from the monkey again, and this is from the lab of Michael Shadlen, who's now at Columbia. And what you find is that during the presentation of the stimulus here, you can see that there are neurons whose activity ramps up over time as the animal is watching the stimulus. And so what you can see here is that these different traces, so for example, the green trace and the blue trace here, show what the neurons are doing when the stimulus is very weak. And the yellow trace shows what the neurons do when-- or this particular neuron does when the stimulus is very strong. And so there's this notion that these neurons are integrating the evidence about which way this-- these random dots are going until that activity reaches some sort of threshold. And so this is what those neurons look like when you line their firing rate up to the time of the saccade. And you can see that all of those different trajectories of neural activity ramp up until they reach a threshold, at which point the animal makes its choice about looking left or right. And so the idea is that these neurons are integrating the evidence until they reach a bound, and then the animal makes a decision.
The weaker the coherence, the more slowly the evidence accumulates, and the longer it takes for that neural activity to reach the threshold. And so, therefore, the reaction time is longer. So it's a very powerful model of evidence accumulation during a decision-making task. Here's another interesting behavior that potentially involves neural integration. So this is navigation by path integration in a species of desert ant. So these animals do something really cool. So they leave their nest, and they forage for food. And while they're foraging for food, it's very hot, so they run around and look for food. And as soon as they find food, they head straight home. And if you look at their trajectory from the time they leave the food, they immediately head along a vector that points them straight back to their nest. And so it suggests that these animals are actually integrating their path. Look, the animal's doing all sorts of loop-dee-doos, and it's going all sorts of different directions. You'd think it would get lost. How does it represent in its brain the knowledge of which direction is actually back to the nest? One possibility is that it uses external cues to figure this out, like it sees little sand dunes on the horizon or something. You can actually rule out that it's using sensory information: after the point where it finds food, you pick it up, and you transport it to a different spot. And the animal heads off in a direction that's exactly the direction that would have taken it back to the nest had it been in the original location before you moved it. So the idea is that somehow it's integrating its distance and direction over time, doing vector integration. OK, so lots of interesting bits of evidence that the brain does integration for different kinds of interesting behaviors.
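The ramp-to-threshold idea can be sketched as a simple drift-diffusion simulation, a standard abstraction of bounded evidence accumulation. All of the parameter values here (drift gain, noise level, threshold) are illustrative choices, not values from the experiments described:

```python
import random

def reaction_time(coherence, threshold=1.0, drift_gain=0.1, noise=0.1,
                  max_steps=100000, seed=0):
    """Accumulate noisy evidence until it hits +threshold (rightward choice)
    or -threshold (leftward choice). Returns (choice, time steps taken)."""
    rng = random.Random(seed)
    evidence = 0.0
    for t in range(1, max_steps + 1):
        # each step: mean drift proportional to motion coherence, plus noise
        evidence += drift_gain * coherence + rng.gauss(0.0, noise)
        if evidence >= threshold:
            return +1, t
        if evidence <= -threshold:
            return -1, t
    return 0, max_steps

# Rightward motion at weak vs. strong coherence, many trials each
weak   = [reaction_time(0.05, seed=s) for s in range(200)]
strong = [reaction_time(0.5,  seed=s) for s in range(200)]
```

With the same noise level, stronger coherence makes the mean drift larger, so the accumulator hits the bound sooner (shorter reaction times) and hits the correct bound more often (higher accuracy), matching the psychometric and chronometric behavior described above.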
So today I'm going to show you another behavior that is thought to involve integration. It's a simple sensory motor behavior where it's been possible to study in detail the circuitry that's involved in the neural control of that motor behavior. And the behavior is basically the control of eye position. This was largely work done in David Tank's lab in collaboration with his theoretical collaborators, Mark Goldman and Sebastian Seung. OK, so let me just show you this little movie. [VIDEO PLAYBACK] OK, so these are goldfish. Goldfish have an ocular motor control system that's very similar to that in mammals and in us. You can see that they move their eyes around. They actually make saccades. And if you zoom in on their eye and watch what their eyes do, you can see that they make discrete jumps in the position of the eye. And between those discrete jumps, the eyes are held in a fixed position. OK, now if you were to anesthetize the eye muscles, the eye would always just sort of spring back to some neutral location. The eye muscles are sort of like springs. And in the absence of any activation of those muscles, the eyes just relax to a neutral position. So when the eye moves and is maintained at a particular position, something has to hold that muscle at a particular tension in order to hold the eye at that position. [END PLAYBACK] So there are a set of muscles that control eye position, and there's a whole set of neural circuits that control the tension in those muscles. And in these experiments, the researchers just focused on the control system for horizontal eye movements, so movement of the eye from a more lateral position to a more medial position, or rotation. OK, so horizontal eye position.
And so if you record the position of the eye and look at it-- this is sort of a cartoon representation of what you would see-- you see that the eye stays stable at a particular angle for a while and then makes a jump, stays stable, makes a jump, and stays stable. These are called "fixations," and these are called "saccades." And if you record from motor neurons that innervate these muscles, so these are motor neurons in the abducens nucleus, you can see that the firing rate is low when the eyes are more medial, when the eyes are more forward. And that firing rate is high when the eye is in a more lateral position, because these are motor neurons that activate the muscle that pulls the eye more lateral. Notice that there is a brief burst of activity here at the time when the eye makes a saccade in the more lateral direction. And there's a brief suppression of activity here when the eye makes a saccade to a more medial position. Those saccades are driven by a set of neurons called "saccade burst generator neurons." And you can see that those neurons generate a burst of activity prior to each one of these saccades. There are a set of neurons that activate saccades in the lateral direction, and there are other neurons that activate saccades in the medial direction. And what you see is that these saccade burst neurons generate activity that's very highly correlated with eye velocity. So here you can see a recording from one of these burst generator neurons: it generates a burst of spikes that goes up to about 400 hertz and lasts about 100 milliseconds during the saccade. And if you plot eye velocity along with the firing rate of these burst generator neurons, you can see that those are very similar to each other. So these neurons generate a burst that drives a change in the velocity of the eye.
OK, so we have neurons whose activity is proportional to position, and we have neurons whose activity is proportional to velocity. How do we get from velocity to position? So the idea is that you have burst saccade generator neurons that project to these neurons that project to the muscles. You have to have something in between. If you have neurons that encode velocity and you have neurons that encode position, you need something to connect those, to go from velocity to position. How do you get from velocity to position? If I have a trace of velocity, can you calculate the position by doing what? AUDIENCE: Integrating. MICHALE FEE: By integrating. So the idea is that you have a set of neurons here-- in fact, there's a part of the brain, and in the goldfish it's called "area one"-- that take that burst generator input, integrate it to produce a position signal, and that then controls eye position. All right, so if you record from one of these integrator neurons while you're watching eye position, here's what that looks like. [VIDEO PLAYBACK] And so here the animal's looking more lateral. The goldfish's mouth is up here, so that's more lateral. That's moving more medial there, more lateral, more-- [END PLAYBACK] OK, so this neuron that we were just watching was recorded in this area, area one. Those neurons project to the motor neurons that actually innervate the muscles to control eye position. And they receive inputs from these burst generator neurons. OK, so if you look at the activity of one of these integrator neurons, that's a spike train during a series of saccades and fixations as a function of time. This trace shows the average firing rate of that neuron. This is just smoothed over time, so you're just averaging the firing rate in some window. You can see that the firing rate jumps up during these saccades and then maintains a stable, persistent firing rate.
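The velocity-to-position step is just a running sum. Here is a minimal numerical sketch, with made-up numbers: a 100-millisecond burst of velocity command, integrated over time, produces a step in position that persists after the burst ends:

```python
dt = 0.001  # seconds per time step

# velocity command: zero, then a 100 ms burst at 400 deg/s, then zero again
velocity = [0.0] * 100 + [400.0] * 100 + [0.0] * 300

position = []
p = 0.0
for v in velocity:
    p += v * dt          # numerical integration of velocity
    position.append(p)

# position steps up by 400 deg/s * 0.1 s = 40 degrees and then holds there,
# even though the velocity command has returned to zero
```

This is exactly the computation the integrator neurons are proposed to do: a transient burst in, a persistent step out.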
So the way to think about this is that this persistent firing right here is maintaining a memory, a short-term memory of where the eye is, and that sends an output that puts the eye at that position. OK, and so just like we described, we can think of these saccade burst generator neurons as sending an input to an integrator that then produces a step in the position, and then the burst generator input is zero during the [INAUDIBLE]. So the integrator doesn't change when the input is zero. And then there's effectively a negative input that produces a decrement in the eye position. OK, we started talking last time about a neural model that could produce this kind of integration. And I'll just walk through the logic of that again. So our basic model of a neuron is a neuron that has a synaptic input. If we put in a brief synaptic input, remember we described how our firing rate model of a neuron will take that input, integrate it briefly, and then the activity, the firing rate of that neuron, will decay away. So we can write down an equation for this single neuron: tau dv/dt is equal to minus v, that's due to this intrinsic decay, plus an input. And that input is synaptic input. But what we want is a system where, when we put in a brief input, we get persistent activity instead of decaying activity. And I should just remind you that we think of this intrinsic decay, this intrinsic leak, as having a time constant of order 100 milliseconds. And I should have pointed out that in this system here, these neurons have a persistence of order tens of seconds. So even in the dark, the goldfish is making saccades in different directions. And when it makes a saccade, that eye position can stay stable for many seconds. And you can do this in humans: you can ask a person to saccade in the dark and try to hold their eyes steady at a given position, and a person will be able to saccade to a position.
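The single-neuron equation tau dv/dt = -v + input can be simulated directly with an Euler step. With tau = 100 ms, a brief input pulse produces activity that decays away within a few hundred milliseconds, which is exactly the problem the recurrent network has to solve. The pulse timing and amplitude here are illustrative:

```python
tau = 0.1   # intrinsic time constant, 100 ms
dt = 0.001  # 1 ms time step

v = 0.0
trace = []
for step in range(1000):
    inp = 1.0 if step < 50 else 0.0   # brief 50 ms input pulse
    v += dt / tau * (-v + inp)        # Euler step of tau dv/dt = -v + input
    trace.append(v)

# after the pulse ends, v decays by roughly a factor of e every 100 ms,
# so by the end of the second the activity is essentially gone
```

So a single leaky neuron forgets its input on the 100-millisecond timescale; getting tens of seconds of persistence requires the recurrent feedback discussed next.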
Just you can imagine closing your eyes and saccading to a position. Humans can hold that eye position for about 10 or 20 seconds. So that's sort of the time constant of this integrator in the primate, and that's also consistent with nonhuman primate experiments. OK, so this has a very long time constant. But we want a neural model that can produce that very long time constant of this persistent activity that maintains eye position. All right, but the intrinsic time constant of neurons is about 100 milliseconds. So how do we get from a single neuron that has a time constant of 100 milliseconds to a neural integrator that can have a time constant of tens of seconds? All right, one way to do that is by making a network that has recurrent connections. And you remember that the simplest kind of recurrent network is a neuron that has an autapse. But more generally, we'll have neurons that connect to other neurons. Those other neurons connect to other neurons. And there are feedback loops. This neuron connects to that neuron. That neuron connects back, and so on. And so the activity of this neuron can go to other neurons, and then come back, and excite that neuron again, and maintain the activity of that neuron. So we developed a method for analyzing that kind of network by writing down a recurrent weight matrix, a recurrent connection matrix that describes the connections to a neuron A in this network from all the other neurons in the network: input to neuron A from neuron A prime. And now we can write down a differential equation for the activity of one of these neurons: tau dv/dt is minus v, that produces this intrinsic decay, plus the synaptic input from all the other neurons in the network, summed up over all the other neurons, plus this external burst input. So how do we make a neural network that looks like an integrator? How do we do that?
If we want the firing rate of our neuron to behave like an integrator of its input, what do we have to do to this equation to make this neuron look like an integrator? So what do we have to do? To make this neuron look like an integrator, it would just be tau dv/dt equals burst input. Right? So in order to make this network into an integrator, we have to make sure that these two terms sum to zero. In other words, the feedback from other neurons in the network back to our neuron has to exactly balance the intrinsic leak of that neuron. Does that make sense? OK, so let's do that. And when you do that, this is zero. The sum of those two terms is zero. And now the derivative of the activity of our neuron is just equal to the input. So our neuron now integrates the input. So now the firing rate of our neuron, v, is equal to 1 over tau times the integral of the burst input. So we talked last time about how you analyze recurrent neural networks. We start with a recurrent weight matrix. So again, these Ms describe the recurrent weights within that network. We talked about how, if M is a symmetric connection matrix, then we can rewrite the connection matrix as a rotation matrix times a diagonal matrix times the inverse rotation matrix, so phi transpose lambda phi, where, again, lambda is a diagonal matrix and phi is a rotation matrix. In this case, the case of a two-neuron network, this rotation matrix has as its columns the two basis vectors that we can now use to rewrite the firing rates of this network in terms of modes of the network. So we can multiply the firing rate vector of this network times phi transpose to get the firing rates of different modes of that network. And what we're doing is essentially rewriting this recurrent network as a set of independent modes, independent neurons, if you will, with recurrent connectivity only within each mode.
So we're rewriting that network as a set of only autapses. And the diagonal elements of this matrix are just the strengths of the recurrent connections within those modes. All right, so for a network to behave as an integrator, most of the eigenvalues should be less than 1, but one eigenvalue should be 1. And in that case, one mode of the network becomes an integrating mode, and all of the other modes of the network have the property that their activity decays away very, very rapidly. So I'm going to go through this in more detail and show you examples. But for a network to behave as an integrator, you want one integrating mode, one eigenvalue equal to 1, and all of the other eigenvalues much less than 1. So if you do that, then you have one mode with the following equation that describes its activity. Let's say it's mode 1 that has the eigenvalue of 1. So tau dc1/dt equals minus c1, that's the intrinsic decay of that mode, plus lambda 1 c1 plus the burst input. And if lambda 1 is equal to 1, then those two terms cancel. Then the feedback balances the leak, and that mode becomes an integrating mode. So when you have a burst input, the activity in that mode increases. It steps up to some new value. And then between the burst inputs, the activity of that mode obeys the following differential equation. There's no more burst input between the bursts. dc1/dt is just equal to lambda minus 1, over tau, times c1. And if lambda is equal to 1, then dc1/dt equals zero, and the activity is constant. Does that make sense? Any questions about that? Yes, Rebecca. AUDIENCE: OK, so why does it [INAUDIBLE] need to balance [INAUDIBLE] MICHALE FEE: Yes, that's exactly right. If this is not true-- what happens if lambda is less than 1? If lambda is less than 1, then this quantity is negative. So if lambda is 0.5, let's say, then this is minus 0.5 over tau. So dc/dt is some negative constant times c.
Which means if c is positive, then dc/dt is negative, and c is decaying. Does that make sense? If lambda is bigger than 1, then this constant is positive. So if c is positive, then dc/dt is positive, and c continues to grow. So it's only when lambda equals 1 that dc/dt is zero between the burst inputs. OK, so let's look at a really simple model where we have two neurons. There's no autapse recurrence here, but it's easy to add that. And let's say that the weights between these two neurons are 1. So we can write down the weight matrix. It's just 0, 1; 1, 0, because the diagonals are 0. OK, 0, 1; 1, 0. The eigenvalue equation looks like this. You know that because the diagonal elements are equal to each other and the off-diagonal elements are equal to each other, it's a symmetric matrix, and so the eigenvectors are always what? AUDIENCE: [INAUDIBLE] MICHALE FEE: 45 degrees, OK, so 1, 1 and minus 1, 1. So if we look in this state space of v1 versus v2, the two modes of the network are in the 1, 1 direction and the 1, minus 1 direction. What are the eigenvalues of this network? OK, so for a matrix like this with equal diagonals and equal off-diagonals, the eigenvalues are just the diagonal element plus or minus the off-diagonal element. I'll just give you a hint: this is going to be very similar to a problem that you'll have on the final. So if you have any questions, feel free to ask me. OK? OK, so the eigenvalues are plus or minus 1. They're 1 and minus 1. And it turns out for this case, it's easy to show that the eigenvalue for this mode is 1, and the eigenvalue for this mode is minus 1. And you can see it. It's pretty intuitive. This network likes to be active such that both of these neurons are on together. When that neuron's on, it activates that neuron. When that neuron's on, it activates that neuron. And so this network really likes it when both of those neurons are active.
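You can confirm the eigenvalues and eigenvectors of this two-neuron weight matrix numerically:

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # symmetric weight matrix, zero diagonal

# eigh is the eigensolver for symmetric matrices;
# it returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(M)

# eigenvalues are -1 and +1; the eigenvalue-1 mode is the (1, 1) direction,
# where both neurons are active together, and the eigenvalue minus-1 mode
# is the (1, -1) direction
```

The eigenvalue-1 eigenvector has both components with the same sign, matching the intuition that this network "likes" both neurons active together.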
And that's the amplifying direction of this network. And the eigenvalue is such that the amplification in that direction is large enough that it turns that into an integrating mode. All right, so I'll show you what that looks like. So the eigenvalues again are 1 and minus 1. If you just do that matrix multiplication, you'll see that that's true: lambda is 1, and lambda is minus 1. You can just read this off: this first eigenvector here is the eigenvector for the first mode, and this eigenvalue is the eigenvalue for that mode. So here's what this looks like. So this mode is the integrating mode. This mode is a decaying mode, because the eigenvalue is much less than 1. And what that means is that no matter where we start this network, the activity will decay rapidly toward this line. Does that make sense? No matter where you start the network, activity in this direction will decay. Any state of this network that's away from this line corresponds to activity of this mode, and activity of that mode decays away very rapidly. So no matter where you start, the activity will decay to this diagonal line. So let me just ask one more question. So if we put an input in this direction, what will the network do? So let's turn on an input in this direction and leave it on. What does the network do? Rebecca? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. So we're going to turn it on and leave it on first. The answer you gave is the answer to my next question. The answer is, when you put that input on and you turn it off, then the activity goes back to zero. That's exactly right. But when you turn the input in this direction on, the state will move in this direction and reach a steady state. When you turn the input off, it will decay away back to zero. If we put an input in this direction, what happens? AUDIENCE: It just keeps going on. MICHALE FEE: It just keeps integrating. And then we turn the input off. What happens?
AUDIENCE: It [INAUDIBLE] MICHALE FEE: It stops, and it stays. Because the network activity in this direction is integrating any input that has a projection in this direction. Yes. AUDIENCE: So [INAUDIBLE] steady state [INAUDIBLE] to F1, so if anything that has any component in the F1 direction will either grow or [INAUDIBLE] over 90 degrees [INAUDIBLE] F1? MICHALE FEE: Yep. AUDIENCE: Would it [INAUDIBLE] MICHALE FEE: Like here? So if you put an input in this direction, what is the component of that input in the integrating direction? If we put an input like this, what-- it has zero component in the integrating direction, and so nothing gets integrated. So you put that input. The network responds. You take the input away, and it goes right back to zero. If you put an input in this direction, all of that input is in this direction, and so that input just gets integrated by the network. OK? What happens if you put an input in this direction? Then it has a little bit of-- it has some projection in this direction and some projection in this direction. The network will respond to the input in this direction. But as soon as that input goes away, that will decay away. This, the projection in this direction, will continue to be integrated as long as the input is there. So let me show you what that looks like. So I'm going to show you what happens when you put an input vertically. What that means, input in this direction means that we have an input to H1. Input to this neuron is 0, but the input to that neuron is 1. That corresponds to H0 being-- H1 direction being 0, and the H2 direction being 1 that has a projection in this direction and this direction. And here's what the network does. OK, sorry. I forgot which way it was going. So you can see that the network is responding to the input in the H1 direction. But as soon as that input goes away, the activity of the network in this direction goes away as soon as the input goes away. 
But it's integrating the projection in this direction. So you can see it continues to integrate. And then you put an input in the opposite direction, and it integrates until the input goes away, and it stops there. OK, let me play that again. Does everyone get a sense for what's going on? So now we have an input that has a projection in the minus F1 direction, and so the network is just integrating that negative number. OK, is that clear? OK, all right, so that's a neural integrator. It's that simple. It has one mode that has an eigenvalue of 1, and all of its other modes have small or negative eigenvalues. OK, so notice that no matter where you start, as long as there's no input, that network just relaxes to this line, to a state along that line. So that line is what we call an "attractor" of the network. The state of the network is attracted to that line. Once the state is sitting on that line, it will stay there. So that kind of attractor is called a "line attractor." That distinguishes it from other kinds of attractors that we'll talk about in the next lecture, where there are particular points in the state space that are attractors: no matter where you start the network around that point, the state evolves toward that one point. OK, so the line of the line attractor corresponds to the direction of the integrator mode, of the [INAUDIBLE] mode. So we can kind of see this attractor in action. If we record from two neurons in this integrator network of the goldfish during this task, if you will, where the [INAUDIBLE] saccades to different directions, here's what that looks like. So again, we've got two neurons recorded simultaneously, and we're following the [INAUDIBLE] rate [INAUDIBLE] versus [INAUDIBLE]. And Marvin the Martian here is indicating which way the goldfish is looking [INAUDIBLE]. OK, any questions about that? So the hypothesis is that-- well, I should mention something I didn't say before.
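The whole two-neuron integrator can be simulated in a few lines. With the weights above, the (1, 1) mode has eigenvalue 1, so input along (1, 1) gets integrated and persists after the input is removed, while input along (1, -1) decays away. This is a minimal sketch with illustrative time constants and input amplitudes:

```python
import numpy as np

tau, dt = 0.1, 0.001                       # 100 ms intrinsic time constant
M = np.array([[0.0, 1.0], [1.0, 0.0]])     # recurrent weights, eigenvalues +1, -1

def run(h_direction, steps_on=200, steps_off=800):
    """Euler-integrate tau dv/dt = -v + M v + h: input on, then input off."""
    v = np.zeros(2)
    for step in range(steps_on + steps_off):
        h = h_direction if step < steps_on else np.zeros(2)
        v = v + dt / tau * (-v + M @ v + h)
    return v

v_integrating = run(np.array([1.0, 1.0]))    # input along the eigenvalue-1 mode
v_decaying    = run(np.array([1.0, -1.0]))   # input along the eigenvalue -1 mode

# v_integrating ends up at (2, 2) and stays there after the input turns off;
# v_decaying relaxes back to (0, 0) once the input is gone
```

For the (1, 1) input, the leak and the feedback cancel exactly, so the network just accumulates the input and then holds the result: the line-attractor behavior described above.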
There are a couple hundred neurons in that nucleus, in area one, that connect to each other, that contact each other. What's not really known yet-- it's a little hard to prove, but people are working on it-- is whether the connections between those neurons have the right synaptic strengths to actually give you an eigenvalue of 1 in that network. So it's still kind of an open question whether this model is exactly correct in describing how that network works. But Tank and others in the field are working on testing that hypothesis. You can see that one of the challenges of this model for this persistent activity is that, in order for this network to maintain persistent activity, the feedback from these other neurons back to this neuron has to exactly match the intrinsic decay of that neuron. If that feedback is too weak, lambda is slightly less than 1, and what happens is that neural activity will decay away rather than being persistent. And if the feedback is too strong, that neural activity will run away, and it will grow exponentially. So you can actually see evidence of these two pathological cases in neural integrators. So let's see what that kind of mismatch of the feedback would look like in the behavior. If you have a perfect integrator, you'll get saccades, and the eye position between saccades will be exactly flat. The eye position will be constant, which means the derivative of eye position will be zero between the saccades. And it will be zero no matter what position the animal is holding its eyes at. So we can plot the derivative of eye position as a function of eye position, and that should be zero everywhere if the integrator is perfect. Now, what happens if the integrator is leaky? Now you can see that, in this case, the eye is constantly rolling back toward zero.
But if the eye is already at zero, then the derivative should be close to zero. If the eye position is very positive, you can see that this leaky integrator corresponds to the derivative being negative. So if E is positive, then the derivative is negative. If E is negative, then the derivative is positive. And that corresponds to a situation like this: positive eye position corresponds to negative derivative. And you can see that the equation for the activity of this mode, which then translates into eye position, is just e to the minus a constant times t. If you have an unstable integrator, if lambda is greater than 1, then positive eye positions will produce a positive derivative, and you get runaway exponential growth of the eye position. And that corresponds to a situation like this: positive eye position, positive derivative; negative eye position, negative derivative. And the equation for that situation is e to the plus a constant times t. All right, so you can actually produce a leaky integrator in the circuit by injecting a little bit of local anesthetic into part of that nucleus. And what would that do? You can see that if you inject lidocaine or some other inactivator of neurons into part of that network, it would reduce the feedback connections onto the remaining neurons. And so lambda becomes less than 1, and that produces a leaky integrator when you do that manipulation. So this experiment is consistent with the idea that feedback within that network is required to produce that stable, persistent activity. Now, you can actually find cases where there are deficits in the ocular motor system that are associated with unstable integration. And this is called congenital nystagmus. So this is a human patient with this condition. And the person is being told to try to fixate at a particular position.
But you can see that what happens is their eyes sort of run away to the edges, to the extremes of the range of eye position. They can fixate briefly, but then the integrator kind of runs away, and their eyes run to the edges. And one hypothesis for what's going on there is that the ocular motor integrator is actually in an unstable configuration, that the feedback is too strong. So exactly how precisely do you need to set that feedback in order to produce a perfect integrator? You can see that getting a perfect integrator requires that lambda minus 1 is equal to 0, so lambda is equal to 1. But if lambda is slightly different from 1, we can actually estimate what the time constant of the integrator would be. You can see that the time constant is really tau over lambda minus 1. So given the intrinsic time constant tau, you can actually estimate how close lambda has to be to 1 to get a 30-second time constant, OK? And that turns out to be extremely close to 1. In order to go from a 100-millisecond time constant to a 30-second time constant, you need to set lambda equal to 1 with a precision of one part in 300, or, if the neural time constant is even shorter, maybe even one part in 3,000. So this is actually one of the major criticisms of this model: it can be hard to imagine how you would actually set the feedback in a recurrent network so precisely as to get time constants on the order of 30 seconds. Does anybody have any ideas how you might actually do that? What would happen? Let's imagine what would happen. We make saccades constantly, several saccades per second, not including the little microsaccades that we make all the time. But when we make a saccade, what happens to the image on the retina? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah, so if we make a saccade this way, the image on the retina looks like the world is going whoosh, like this.
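The precision argument can be made concrete. With an effective time constant tau_eff = tau / |lambda - 1|, you can compute how close lambda must be to 1 to stretch a 100-millisecond intrinsic time constant into a 30-second integrator (the specific numbers match the lecture's example):

```python
tau = 0.1         # intrinsic neuronal time constant, 100 ms
tau_eff = 30.0    # desired integrator time constant, 30 s

# tau_eff = tau / |lambda - 1|  implies  |lambda - 1| = tau / tau_eff
mismatch = tau / tau_eff
lam = 1 - mismatch    # the slightly-leaky case, for example

# lambda has to sit within about 1 part in 300 of exactly 1
```

If the intrinsic time constant were 10 ms instead of 100 ms, the same calculation gives one part in 3,000, which is why the fine-tuning problem is taken seriously.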
And as soon as it stops-- if our integrator is perfect, when the saccade ends, our eyes are at a certain position. What happens to the image on the retina? If our eyes make a saccade, and stop, and stay at a certain position, and the velocity is zero, then what happens to the image on the retina? It becomes stationary. But if we had a problem with our integrator-- let's say that our integrator was unstable. So we make a saccade in this direction, but our integrator's unstable, so the eyes keep going. Then what would the image on the retina look like? We would have motion of the image across the retina during the saccade, and then, if our eyes kept drifting, the image would keep going. If we had a leaky integrator and we make a saccade, the image of the world would go whoosh, and then it would start relaxing back as the eyes drift back to zero. So the idea is that when we're walking around making saccades, we have immediate feedback about whether our integrator is working or not. And so, OK, I'm going to skip this. So the idea is that we can use that sensory feedback, which is called "retinal slip," or image slip, to give feedback about whether the integrator is leaky or unstable, and use that feedback to change lambda. So if we make a saccade this way, the image is going to go like this. And now if that image starts slipping back, what does that mean we want to do? What do we need to do to our integrator, to the synapses in our recurrent network, if after we make a saccade, the image starts slipping back in the direction that it came from? We need to strengthen it. That means we have a leaky integrator. We would need to strengthen those connections within the integrator network, make them more excitatory. And if we make a saccade this way, and the world goes like this and then the image continues to move, it would mean our integrator is unstable. The excitatory connections are too strong.
And so we would have a measurement of image slip that would tell us to weaken those connections. There's a lot of evidence that this kind of circuitry exists in the brain and that it involves the cerebellum. David Tank and his colleagues set out to test whether this kind of image slip actually controls the recurrent connections, controls the state of the integrator-- whether you can use image slip to control whether the integrator network is unstable or leaky. Rebecca. AUDIENCE: [INAUDIBLE] is the [INAUDIBLE] between slip and overcompensation with [INAUDIBLE] versus unstable integrator, the direction of [INAUDIBLE] MICHALE FEE: Yes, exactly. So if we make a saccade this way, the image on the retina is going to, whoosh, suddenly go this way. But then, in the unstable case, the eyes will keep going, which means the image will keep going this way. So you'll have-- I don't know what sign you want to call that, but here they did a sign flip. Here's the case of decay: dE/dt is less than zero. That means that the eyes are drifting back, which means that after you make a saccade, the image goes this way, and then it starts sliding back. AUDIENCE: So it'll return to-- MICHALE FEE: Return, yeah. So if dE/dt is negative, that means it's leaky, and the image slip will be positive. And then you use that positive image slip to increase the weight of the synapses. So you change the synaptic weights in your network by an amount that's proportional to the negative of the derivative of eye position, which is read out as image slip. OK, is that clear? OK, so they actually did this experiment. They took a goldfish, head-fixed it, and put it in this arena. You put a little coil on the fish's eye. This is a standard procedure for measuring eye position in primates, for example.
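Here is a toy version of that retinal-slip learning rule: after each simulated fixation, measure how far the eye drifted (which the animal would read out as image slip) and adjust lambda in the opposite direction. Everything here is an illustrative abstraction of the idea, not a model of the actual cerebellar circuit; starting from a leaky integrator, the rule converges toward lambda = 1:

```python
def fixation_drift(lam, e0=1.0, tau=0.1, dt=0.001, steps=500):
    """Simulate eye position during one fixation with integrator eigenvalue lam.
    Returns total drift of eye position (seen behaviorally as image slip)."""
    e = e0
    for _ in range(steps):
        e += dt * (lam - 1.0) / tau * e   # dE/dt = (lambda - 1)/tau * E
    return e - e0

lam = 0.97               # start with a leaky integrator
learning_rate = 0.05
for _ in range(200):
    slip = fixation_drift(lam)
    lam -= learning_rate * slip   # leaky (drift < 0): strengthen the feedback

# lam converges toward 1, and the drift per fixation shrinks toward zero
```

The same rule with the opposite sign of slip would weaken the feedback in the unstable case, which is exactly the logic of the sign flip discussed above.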
So you can put a little coil on the eye, and you surround the fish with oscillating magnetic fields. So you have a big coil outside the fish on this side, another coil on this side, a coil on the top and bottom, and a coil on front and back. And now you run AC current through those coils. And now by measuring how much voltage fluctuation you get in this coil, you can tell what the orientation of that coil is. Does that make sense? So now you can read that out here and get a very accurate measurement of eye position. And so now when the fish makes a saccade, you can read out which direction the saccade was. And immediately after the saccade, you can move this spot, so there's like a disco ball up there that's on a motor that produces spots on the inside of the planetarium. Say the fish makes a saccade in this direction. What you do is you make the spots drift back, drift in the direction as though the eyes were sliding back, as though the integrator were leaky. Does that make sense? So you can fool the fish's ocular motor system into thinking that its integrator is leaky. And what do you think happens? After about 10 minutes of that, you then turn all the lights off. And now the fish's integrator is unstable. So here's what that looks like. There's the spots on the inside. There's the disco ball. That's an overview picture showing the search coils for the eye position measurement system. And here's the control. That's what the fish-- the eye position looks like as a function of time. So you have saccade, fixation, saccade, fixation. That right there, anybody know what that is? That's the fish blinking. So it blinks. OK, here they did it the other way. So they give the feedback as if the network is unstable, and you can make the network leaky.
If you give feedback as if the network is leaky, so it makes a saccade, and now you drift the spots in the direction as if the eye were sliding back to neutral position, and now you can make the network unstable. So it makes a saccade, and the eyes continue to move in the direction of the saccade. Saccade, and it runs away. Any questions about that? So that learning circuit, that circuit that implements that change in the synaptic weights of the integrator circuit, actually involves the cerebellum. There's a whole cerebellar circuit that's involved in learning various parameters of the ocular motor control system that produces these plastic changes. OK, so that's-- are there any questions? Because that's it. So I'll give you a little summary. So the goldfishes do integrals. There's an integrator network in the brain that takes burst inputs that drive saccades. And the integrator integrates those bursts and produces persistent changes in the activity of these integrator neurons that then drive the eyes to different positions and maintain that eye position. So we've described a neural mechanism, which is this recurrent network, a recurrent network that has one eigenvalue that's 1 that produces an integrating mode, and all the other eigenvalues are less than 1 or negative. The model is not very robust if you have to somehow hand-tune all of those [INAUDIBLE] to get a lambda of 1. But there is a mechanism that uses retinal slip to tell whether that eigenvalue is set correctly in the brain and feeds back to adjust that eigenvalue to produce the proper lambda, the proper eigenvalue in that circuit, so that it functions as an integrator, using visual feedback. And I just want to mention again, so I actually got most of these slides from Mark Goldman when he and I actually used to teach an early version of this course. We used to give lectures in each other's courses, and this was his lecture. He later moved to-- he was at Wellesley.
So we would go back and forth and give these lectures. But he moved to Davis. So now I'm giving his lecture myself. And the theoretical work was done by Sebastian Seung and Mark Goldman. The experimental work was done in David Tank's lab in collaboration with Bob Baker at NYU. OK, and so next time, we're going to-- so today we talked about short-term memory using neural networks as integrators to accumulate information and to perform-- to generate line attractors that can produce a short-term memory of continuously graded variables like eye position. Next time, we're going to talk about using recurrent networks that have eigenvalues greater than 1 as a way of storing short-term discrete memories. And those kinds of networks are called Hopfield networks, and that's what we're going to talk about next time. OK, thank you.
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 14: Rate Models and Perceptrons
MICHALE FEE: So for the next few lectures, we're going to be looking at developing methods of studying the computational properties of networks of neurons. This is the outline for the next few lectures. Today we are going to introduce a method of studying networks called a rate model, where we basically replace spike trains with firing rates in order to develop simple mathematical descriptions of neural networks. And we're going to start by introducing that technique to the problem of studying feed-forward neural networks. And we'll introduce the idea of perceptrons as a method of developing networks that can classify their inputs. Then in the next lecture, we're going to turn to largely describing mathematical tools based on matrix operations and the idea of basis sets. Matrix operations are very important for studying neural networks. But they're also a fundamental tool for analyzing data and doing things like reducing the dimensionality of high dimensional data sets, including methods such as principal components analysis. So it's a very powerful set of methods that apply both to studying the brain and to analyzing the data that we get when we study the brain. And then finally we'll turn to a few lectures that focus on recurrent neural networks. These are networks where the neurons connect to each other densely in a recurrent way, meaning a neuron will connect to another neuron. And that neuron will connect back to the first neuron. And networks that have that property have very interesting computational abilities. And we're going to study that in the context of line attractors and short-term memory and Hopfield networks. So for today, the plan is to develop the rate model. We're going to show how we can build receptive fields with feed-forward networks that we've described with the rate model.
We're going to take a little detour and describe vector notation and vector algebra, which is very important for these models, and also for building up to the matrix methods that we'll talk about in the next lecture. Again, we'll talk about neural networks for classification and introduce the idea of a perceptron. So that's for today. So I've already talked about most of this. Why is it that we want to develop a simplified mathematical model of neurons that we can study analytically? Well, the reason is that we can really develop our intuition about how networks work. And that intuition applies not just to the very simplified mathematical model that we're developing, but also applies more broadly to real networks with real neurons that actually generate spikes and interact with each other by the more complex biophysical mechanisms that are going on in the brain. So a good example of this is how we simplified the detailed spiking neurons of the Hodgkin-Huxley model and approximated that as an integrate-and-fire model, which captures a lot of the properties of real neurons. Simplifies it enough to develop an intuition, but captures a lot of the important properties of real neural circuits. All right, so let's start by developing the basic idea of a rate model. Let's start with two neurons. We have an input neuron and an output neuron. The input neuron has some firing rate given by u. And the output neuron has some firing rate given by v. So we're going to essentially ignore the times of the spikes and describe the inputs and outputs of this network just with firing rates. You can think of the rate as just having units of spikes per second, for example. Those neurons, the input neuron and the output neuron, are connected to each other by a synapse.
And we're going to replace all of the complex structure of synapses, vesicle release, neurotransmitter receptors, long-term depression and paired spike facilitation and depression, all that stuff we're just going to ignore. And we're going to replace that synapse with a synaptic weight w. Just to give you the simplest intuition of how a rate model works, there are models where we can just treat the firing rate of the output neuron, for example, as linear in its input. And we can simplify this even to the point where we can describe the firing rate of the output neuron as the synaptic weight w times the firing rate of the input neuron. So that's just to give you a flavor of where we're heading. And I'm going to justify how we can do this and/or why we can do this. And then we're going to build this up from the case of one input neuron and one output neuron to the case where we can have many input neurons and many output neurons. So how do we justify going from spikes to firing rates? So remember that the response of a real output neuron, a real neuron, to a single spike at its input, is some change in the postsynaptic conductance that follows an input spike. And in our model of a synapse, we described that the input spike produces a transient increase in the synaptic conductance. And that synaptic conductance we modeled as a simple step increase in the conductance followed by an exponential decay as the neurotransmitter gradually unbinds from the neurotransmitter receptors. So we have a transient change in the synaptic conductance. That's just a maximum conductance times an exponential decay. Now remember that we wrote down the postsynaptic-- we can write down the postsynaptic current that results from this synaptic input as the synaptic conductance times v minus e synapse, the synaptic reversal potential. In moving forward in this model, we're not going to worry about synaptic saturation. 
So we're just going to imagine that the synaptic current is just proportional to the synaptic conductance. So now we can write the conductance as just some weight times a kernel that is just some kernel of unit area. So what we've done here is we've just taken the synaptic current and we've written it as a constant, a synaptic weight, times an exponentially decaying kernel of area 1. So now if we have a train of spikes at the input instead of a single spike, we can write down that train of spikes, the spike train, as a sum of delta functions where the spike times are t sub i. And if you want to plot the synaptic current as a function of time, you would just take that spike train input and do what with that linear kernel? We would convolve it, right? So we would take that spike train, convolve it with that little exponential kernel. And that would give us the synaptic current that results from that spike train. So let's think for a moment about what this quantity is right here. What is k, this k, which is a little kernel that has a step and then an exponential decay? What do you get when you convolve that kind of smooth kernel with this spike train here? What does that look like? We did that at one point in class when we were talking about how you would estimate something from a spike train. What is that? What is that quantity right there? It's sort of a smoothed version of a spike train, which is how you would calculate what, Habiba? AUDIENCE: Is it a window for the spike train? MICHALE FEE: Yeah. It's windowed, but what is it that you are calculating when you take a spike train and you convolve it with some smooth window? AUDIENCE: Low-pass window? MICHALE FEE: It's like a low-pass version of the spike train. And remember in the lecture on firing rates, we talked about how that's a good way to get a time-dependent estimate of the firing rate of a neuron. We take the spike train and just convolve it with a smooth window.
And if the area of that smooth window is 1, then what we're doing is we're estimating the firing rate of the neuron as a function of time. Does that make sense? Yes? AUDIENCE: So k is just a kernel? MICHALE FEE: k is just a smooth kernel that happens to have this exponential shape. AUDIENCE: Is it like [INAUDIBLE] MICHALE FEE: Well, that's our model for how a synapse-- basically, what I'm saying is that when you take a spike train and put it through a synapse, what comes out the other end is a smoothed version of the spike train. AUDIENCE: OK. MICHALE FEE: That's all this is saying. AUDIENCE: OK. [INAUDIBLE] they have this area or quantity? MICHALE FEE: Yep. You remember that if k has an area 1, then when you convolve that kernel with the spike train, you get a number that has units of spikes per second. And that quantity is an estimate of the local firing rate of the neuron. Does that make sense? So basically, we can take this spike train, and by convolving it with a smooth window, we can estimate the number of spikes per second in that window. So what do we have here? We have that the current is just a constant times an estimate of the firing rate at that time. If k is a kernel, a smooth kernel with an area normalized to 1, then this quantity is just an estimate of the firing rate. So let's take a look at that. So here I have just made a sample spike train with a bunch of spikes that look like they're increasing in firing rate and decreasing in firing rate. If we take that spike train and convolve it with this kernel, you can see that you get this sort of broad bump that looks like it gets higher in the middle where the firing rate is higher. And it's lower at the edges where the firing rate is lower. So the point is that you can take a spike train and put it into a neuron. The response of the neuron is a smooth low-pass version of the rate of this input spike train.
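The smoothing just described can be sketched numerically. This is only an illustration of the idea, with made-up time constants: a regular 50 Hz spike train convolved with a unit-area exponential kernel gives back an estimate of about 50 spikes per second.

```python
import numpy as np

dt = 0.001                       # 1 ms time bins
tau = 0.020                      # 20 ms decay, an illustrative value
t = np.arange(0.0, 0.2, dt)
kernel = np.exp(-t / tau)        # step up, then exponential decay
kernel /= kernel.sum() * dt      # normalize to unit area

spikes = np.zeros(1000)          # 1 second of time bins
spikes[::20] = 1.0               # one spike every 20 ms -> 50 Hz

# convolving the spike train with the unit-area kernel gives an
# estimate of the firing rate in spikes per second
rate = np.convolve(spikes, kernel)[:len(spikes)]
```

Averaging the steady-state portion of `rate` recovers roughly 50 spikes per second, the rate of the input train.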
And so you can think about writing down the input to this neuron as a weight times the firing rate of the input. So that was a way of writing down the input to this output neuron from the input neuron, the current input. Now what is the firing rate of the output neuron in response to that current injection? So that's what we're going to ask next. And you can remember that when we talked about the integrate-and-fire model, we saw that neurons, in the approximation of large inputs, have a firing rate as a function of current that looks like this. It's zero for input currents below the threshold current. For input currents that aren't large enough to drive the neuron to threshold, the neuron doesn't spike at all. And then above some threshold, the neuron fires approximately linearly at higher input currents. So the way that we think about this is that the input neuron is spiking at some rate. It goes through a synapse. That synapse smooths the input and produces some current in the postsynaptic neuron that's approximately proportional to the firing rate of the input neuron. And the output neuron has some output firing rate that's some function of the input current. So we can write down the firing rate of our output neuron, v. It's just equal to some function of the input current, which is just some function of w times the firing rate of the input neuron. And that right there is the basic equation of the rate model. The output firing rate is some function of a weight times the firing rate of the input neuron. And everything else about the rate model is just that different rate models have different numbers of input neurons, where we have more than one contribution to the input current. They can have many output neurons. They can have different FI curves for the output neurons. Some of them are non-linear like this. Some of them are linear. And we're going to come back and talk about the function of different FI curves and why different FI curves are useful. Any questions about this?
That's the basic idea. All right, good. So let's take one particularly simple version of the rate model called a linear rate model. And the linear rate model has a particular FI curve. That FI curve says that the firing rate of the neuron is linear in the input current. Now why is this a really weird model of a neuron? What's fundamentally non-biological about this? AUDIENCE: Negative firing rate. MICHALE FEE: I'm hearing a bunch of right answers at the same time. AUDIENCE: Negative firing rate. MICHALE FEE: This neuron is allowed to fire at a negative firing rate if the input current is negative. That's a pretty crazy thing to do. Why do you think we would want to do that? AUDIENCE: [INAUDIBLE]? MICHALE FEE: Well, no actually we do. So you can have inhibitory inputs that produce outward currents that hyperpolarize the neuron. Any thoughts about that? It turns out that as soon as your output neurons have this kind of FI curve, a linear FI curve, then the math becomes super simple. You can write down very complex networks of neurons with a bunch of linear differential equations. And it becomes very easy to write down what the solution is to how a network behaves as a function of its inputs. And we're going to spend a lot of time working with network models that have linear FI curves because you can develop a lot of intuition about how networks behave by using models like this. As soon as you have non-linear models like the threshold-linear one, you can't solve the behavior of the network analytically. You have to do everything on the computer. And it becomes very hard to derive general solutions for how things behave. So we're going to use this model a lot. And in this case again, for the case of this two-neuron network where we have one output neuron that receives a synaptic input from an input neuron, the firing rate of the output neuron is just w, the synaptic weight, times the firing rate of the input neuron.
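The two FI curves just discussed can be written out side by side. A rough sketch; the threshold, gain, and weight values here are invented for illustration:

```python
import numpy as np

def F_threshold_linear(I, I_thresh=5.0, gain=2.0):
    # zero below the threshold current, linear above it
    return gain * np.maximum(I - I_thresh, 0.0)

def F_linear(I):
    # the linear rate model: firing rate equals input current,
    # even when that means a (non-biological) negative rate
    return I

w = 0.5                                  # synaptic weight
u = np.array([0.0, 10.0, 20.0, 30.0])    # input firing rates
v_threshold = F_threshold_linear(w * u)  # silent until w*u exceeds 5
v_linear = F_linear(w * u)               # always proportional to w*u
```

The threshold-linear neuron stays silent for the weaker inputs, while the linear neuron responds proportionally everywhere, which is exactly what makes the linear model analytically tractable.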
And we're going to come back to non-linear neurons because that non-linearity actually does really important things. And we're going to talk about what that does. So now let's look at the case where our output neuron has not just one input but actually many inputs from a bunch of input neurons. So here we have what we call an input layer, a layer of neurons in the input layer. Each one of those neurons has a firing rate-- u1, u2, u3, u4, u5. Each of those neurons sends a synapse onto our output neuron. Each one of those synapses has a synaptic weight. This weight is w1. And that's w2, w3, w4, and w5. Now you can see that the total input, the total current, to this output neuron is just going to be a sum of the inputs from each of the input neurons. So the total synaptic current into this neuron is w1 times u1, plus w2 times u2, plus w3 times u3, plus all the rest. So the response of our linear neuron, the firing rate of our linear neuron, is just a sum over all of those inputs. So again, in this case, we're going to say that the total input current to this neuron is the sum over this. But then because this is a linear neuron, the firing rate is just equal to that current input. Does that make sense? So you can see that this description of the firing rate of the output neuron is a sum over all of those contributions. It turns out that this actually can be written in a much more compact way in vector notation. What does that look like? Does anyone know in vector notation what that looks like? AUDIENCE: Dot product. MICHALE FEE: That's a dot product. That's right. So in general, it's much easier to write these responses in vector notation. And so I'm just going to walk you through some basics of vector notation for those of you who might need a few minutes of reminder.
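The weighted sum just described can be written both ways, as an explicit sum and as a dot product. The weights and rates here are made-up numbers for illustration:

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0, -1.0, 0.5])     # synaptic weights w1..w5
u = np.array([10.0, 20.0, 30.0, 20.0, 10.0])  # input firing rates u1..u5

# explicit sum: w1*u1 + w2*u2 + ... + w5*u5
v_explicit = sum(w[i] * u[i] for i in range(len(w)))

# the same response in vector notation
v_dot = float(w @ u)
```

Both give the same output firing rate, which is why the dot-product notation is worth the small detour into vector algebra.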
Actually, before we get to the vector notation, I just want to describe how we can use a simple network like this to build a receptive field. So you remember that when we were talking about receptive fields of neurons, we described how a neuron can have a maximal response to a particular pattern of input. So let's say we have a neuron that's sensitive to visual inputs. And as a function of one dimension, let's say along the retina, this neuron has a big response if light comes in its central field, and some inhibitory response if light comes in outside of that central lobe. Well, it turns out that a very simple way to build neurons that have receptive fields like this, for example, is to have an input layer that projects to this neuron, with a pattern of synaptic inputs that corresponds to that pattern in the receptive field. So let's say these are neurons in the retina, let's say retinal ganglion cells, and this neuron is in the thalamus. We can build a thalamic neuron that has a center-surround receptive field like this by having, let's say, this neuron have a strong positive excitatory synaptic weight onto our output neuron. So you can see that if you have light here that corresponds to this neuron having a high firing rate, that neuron is very effective at driving the output neuron. And so the output neuron has a positive component of its receptive field right there in the middle. Now if this neuron here, which is in this part of the retina, has a negative weight onto the output neuron, then light coming in here driving this neuron will inhibit the output neuron. So if you have a pattern of weights that looks like this, 0, minus 1, 2, minus 1, 0, then this neuron will have a receptive field that looks like that as a function of its inputs. So that's a one-dimensional example. And you can see that you write down the output here as a weighted sum of each one of those inputs.
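The 0, minus 1, 2, minus 1, 0 weight pattern from this example can be checked directly. A small sketch with made-up stimulus patterns:

```python
import numpy as np

# the one-dimensional center-surround weights from the lecture
w = np.array([0.0, -1.0, 2.0, -1.0, 0.0])

center = np.array([0.0, 0.0, 1.0, 0.0, 0.0])    # light in the center
surround = np.array([0.0, 1.0, 0.0, 0.0, 0.0])  # light in the surround
uniform = np.ones(5)                            # full-field light

r_center = float(w @ center)      # strong excitation
r_surround = float(w @ surround)  # inhibition
r_uniform = float(w @ uniform)    # center and surround cancel
```

Light in the center excites the output neuron, light in the surround inhibits it, and because the weights sum to zero, uniform illumination produces no response at all.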
This also works for two-dimensional receptive fields. For example, if we have input from the retina that looks like this, where we have-- I guess this was excitatory here in the center, inhibitory around, you can make a neuron that has a two-dimensional receptive field like this by having inputs to this neuron from all of those different regions of the visual field that have different weights, positive in the center. So neurons in the center have positive synaptic weights onto the output neuron. And neurons around the edges have negative synaptic weights. So we can build any receptive field we want into a neuron by just putting in the right set of synaptic weights. Yes? AUDIENCE: So would you rule out [INAUDIBLE] MICHALE FEE: So in real life, I assume you mean in the brain? AUDIENCE: Yeah. MICHALE FEE: So in the brain, we don't really know how these weights are built. So one idea is that there are rules that control the development of these circuits, let's say, connections of bipolar cells in the retina to retinal ganglion cells, that control how these weights are determined to be positive or negative. Negative weights are implemented by bipolar cells connected to amacrine cells, which are inhibitory, and then connect to the retinal ganglion cell. So there's a whole circuit that gets built in the retina that controls whether these weights are positive or negative. And those can be programmed by genetic developmental programs. They can also be controlled by experience with visual stimuli. So there's a lot we don't understand about how these weights are controlled or set up or programmed. But the way we think about how receptive fields of these neurons emerge is by controlling the weights of those synaptic inputs. That's the message here-- that receptive fields emerge from the pattern of weights from an input layer onto an output layer. AUDIENCE: [INAUDIBLE] how many [INAUDIBLE] MICHALE FEE: If you're going to build a model, let's say, of the retina.
So it just depends on how realistic you want it to be. If you wanted to make a model of a retinal ganglion cell, you could try to build a model that has as many bipolar neurons as are actually in the receptive field of that retinal ganglion cell. Or you could make a simplified model that only has 10 or 100 neurons. It depends on what you want to study. All right, any other questions? And again, even for these more complex models, you can still write down a simple rate model formulation of the firing rate of the output neuron. It's just a weighted sum of the input firing rates. So each neuron in the input layer fires at some rate. It has a weight w. To get the contribution of this neuron to the firing rate of the output neuron, you just take that input firing rate times the synaptic weight, and add that up then for all the input layer neurons. So as I said, we've been describing the response of our linear neuron as this weighted sum. And that's a little bit cumbersome to carry around. So we're going to start using vector notation and matrix notation to describe networks. It's just much more compact. So we're going to take a little detour and talk about vectors. So a vector is just a collection of numbers. The number of numbers is called the dimensionality of the vector. If a vector has only two numbers, then we can just plot that vector in a plane. So for a 2D vector, if that vector has two components, x1 and x2, then we can plot that vector in that space of x1 and x2, and put the origin at zero. In this case, the vector has two vector components or elements, x1 and x2. And in two dimensions, we describe that space as R2, the space of two real numbers. We can write down that vector as a row in row vector notation. So x is x1, x2. We can write it as a column vector, x1, x2, organized on top of each other, like this. Vector sums are very simple. So if you have two vectors, x and y, you can write down the sum of x and y as x plus y. That's called the resultant.
x plus y can be written like this in column vector notation. You can see that the sum of x and y is just an element-by-element sum of the vector elements. It's called element-by-element addition. Let's look at vector products. So there are multiple ways of taking the product of two vectors. There's an element-by-element product, an inner product, and an outer product that we'll cover in later lectures. And also, something called the cross product that's very common in physics. But I have not yet seen the application of a cross product to neuroscience. If anybody can find one of those, I'll give extra credit. The element-by-element product is called a Hadamard product. So x times y is just the element-by-element product of the elements in the two vectors. In Matlab, that element-by-element product you compute by x dot star y. The inner product or dot product looks like this. So if we have two column vectors, the dot product of x and y is the sum of the element-by-element products. So x dot y is just x1 times y1 plus x2 times y2, and so on, plus xn times yn. And that's that sum that we saw earlier in our feed-forward network. OK. So notice that the dot product is a scalar. It's a single number. It's no longer a vector. Dot products have some nice properties. They're commutative, so x dot y is equal to y dot x. They're distributive, so that a vector w dotted into the sum of two vectors is just the sum of the two separate dot products. So w dot x plus y is just w dot x plus w dot y. And it's also linear. So a x dot y is equal to a times the quantity x dot y. So if you have vectors x and y dotted into each other, and you make one of those vectors twice as long, then the dot product is just twice as big. A little bit more about inner products. So we can also write down the inner product in matrix notation. So x dot y is a matrix product of a row vector and a column vector. You remember how to multiply two matrices. You multiply the elements of each row times the elements of each column.
So you can see that this in matrix notation is just the dot product of those two vectors. In matrix notation, this is a 1 by n matrix. This is an n by 1. So 1 row by n columns, times n rows by 1 column. And that is equal to a 1 by 1 matrix, which is just a scalar. All right, in Matlab, let me just show you how to write down these components. So in this case, x is a column vector, a 3 by 1 column vector. y is a 3 by 1 column vector. You can define those vectors like this. And z is x transpose times y. And so that's how you can write down the dot product of two vectors. What is the dot product of a vector with itself? It's the square magnitude of the vector. So the square root of x dot x is just the norm, or magnitude, of the vector. And you can think about this as being analogous to the Pythagorean theorem. The length of the vector is just the square root of the sum of the squares of all of its components. So a unit vector is a vector that has length 1. So a unit vector by definition has a magnitude of 1, which means its dot product with itself is 1. We can turn any vector into a unit vector by just taking that vector and dividing by its norm. I'm going to always use this notation with this little caret symbol to represent a unit vector. So if you see a vector with that little hat on it, that means it's a unit vector. You can express any vector as a product of a scalar, a length, times a unit vector in that direction. We can find the projection or component of any vector in the direction of this unit vector as follows. So if we have a unit vector x, we can find the projection of a vector y onto that unit vector x. How do we do that? We just find the normal projection of that vector. That distance right there is called the scalar projection of y onto x.
If you write down the length of the vector y, the norm of the vector y, and the angle between y and x, then the dot product y dot x is just equal to the magnitude of y times the cosine of the angle between the two vectors. Just simple trigonometry. We can also define what's called the vector projection of y onto x as follows. So we just draw that same picture. So we can find the projection of y onto x and draw that as a vector. And that's just the scalar projection of y onto x times a unit vector in the x direction. So x actually is a unit vector in this example. So this vector projection of y onto x is just defined as y dot x times x. Any questions about that? I'm guessing most of you have seen all of this stuff already. But we're going to be using these things a lot. So I just want to make sure that we're all on the same page. And that's just a scalar times a unit vector. Let me just give you a little bit of intuition about dot products here. So a dot product is related to the cosine of the angle between two vectors, as we talked about before. The dot product is just the magnitude of x times the magnitude of y times the cosine of the angle between them. So the cosine of the angle between two vectors is just the dot product divided by the product of the magnitudes of the two vectors. So if x and y are unit vectors, the cosine of the angle between them is just the dot product of the unit vectors. So again, if x and y are unit vectors, then that dot product is just the cosine of the angle. Orthogonality. So two vectors are orthogonal, are perpendicular, if and only if their dot product is 0. So if we have two vectors x and y, they are orthogonal if the angle between them is 90 degrees. x dot y is just proportional to the cosine of the angle. The cosine of 90 degrees is zero. So if two vectors are orthogonal, then their dot product will be zero. If their dot product is zero, then they're orthogonal to each other.
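The norms, projections, and cosines from this part of the lecture can be collected in a few lines, with arbitrary example vectors:

```python
import numpy as np

y = np.array([3.0, 4.0])
x_hat = np.array([1.0, 0.0])        # a unit vector

norm_y = float(np.sqrt(y @ y))      # magnitude of y
scalar_proj = float(y @ x_hat)      # scalar projection of y onto x_hat
vector_proj = scalar_proj * x_hat   # (y . x_hat) x_hat
cos_angle = scalar_proj / norm_y    # cosine of the angle between them

perp = np.array([0.0, 1.0])         # perpendicular to x_hat
dot_perp = float(x_hat @ perp)      # orthogonal vectors: dot product is 0
```

For y = (3, 4), the norm is 5, the projection onto the first axis is 3, and the cosine of the angle is 3/5, which is the trigonometry from the lecture in concrete numbers.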
And using the notation we just developed, the vector projection of y along x is the zero vector if those two vectors are orthogonal. There is an intuition one can develop about the relation between the dot product and correlation. So the dot product is related to the statistical correlation between the elements of those two vectors. So if you have vectors x and y, you can write down the cosine of the angle between those two vectors, again, as x dot y over the product of the norms. And if you write that out as sums, you can see that this is just the sum of the element-by-element products-- that's the dot product-- divided by the norm of x and the norm of y. And if you have taken a statistics class, you will recognize that as just the Pearson correlation of a set of numbers x and a set of numbers y (strictly speaking, for zero-mean x and y). The dot product is closely related to the correlation between two sets of numbers. One other thing that I want to point out, coming back to the idea of using this feed-forward network as a way of building a receptive field: you can see that the response of a neuron in this model is just the dot product of the stimulus vector u-- the vector of input firing rates represents the stimulus-- with the weight vector w. So the firing rate of the output neuron is just w dot u. So you can see that what this means is that the firing rate of the output neuron will be high if there is a high degree of overlap between the pattern of the input and the pattern of synaptic weights from the input layer to the output neuron. We can see that w dot u is big when w and u are parallel, are highly correlated, which means a neuron fires a lot when the stimulus matches the pattern of those synaptic weights. So you can see that for a given amount of power in the stimulus-- the power is just the square magnitude of u-- the stimulus that has the best overlap with the receptive field, where the cosine of that angle is 1, produces the largest response.
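The connection to the Pearson correlation can be checked numerically. A sketch with arbitrary data; note the mean subtraction, which is what makes the cosine match the Pearson coefficient exactly:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 4.0, 3.0])

xc = x - x.mean()    # zero-mean versions of the data
yc = y - y.mean()

# cosine of the angle between the mean-subtracted vectors
cosine = float((xc @ yc) / (np.sqrt(xc @ xc) * np.sqrt(yc @ yc)))

# the standard Pearson correlation coefficient
pearson = float(np.corrcoef(x, y)[0, 1])
```

The two numbers agree to machine precision: the Pearson correlation is just the dot-product cosine computed on mean-subtracted data.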
And so we now have actually a definition of the optimal stimulus of a neuron in terms of the pattern of synaptic weights. In other words, the optimal stimulus is one that's essentially proportional to the weight matrix. Any questions so far? All right, so now let's turn to the question of how we use neural networks to do some interesting computation. So classification is a very important computation that neural networks do in the brain and actually in the application of neural networks for technology. So what does classification mean? How does a neural circuit decide how to categorize a particular input-- let's say, something that looks like you might eat it? How do the neural circuits in our brain decide whether that thing that we're seeing is something edible or something that will make us sick, based on past experience? If we see something that looks like an animal or a dog, how do we know whether that's a friendly puppy or a wolf? So these are classification problems. And feed forward circuits actually can be very good at classification. In fact, recent advances in training neural networks have actually resulted in feed forward neural networks that approach human performance in terms of their ability to make decisions like this. All right. So basically, a feed forward circuit that does classification like this typically has an input layer. It has a bunch of inputs that represent a sensory stimulus. And a bunch of output neurons that represent different categorizations of that input stimulus. So you can have a retinal input here. Going to other layers of a network. And then at the end of that, you can have a neuron that starts firing when that input was a dog, or another neuron that starts firing when that input was a cat, or something else. Now in general, classification networks that have one input layer and one output layer can't do this problem.
You can't take a visual input and have connections to another layer of neurons that just light up when the picture that the network is seeing is a dog. Another neuron lights up when it's a cat. Generally, there are many layers of neurons in between. But today, we're going to talk about a very simplified version of the classification problem and build up to the sorts of networks that can actually do those more complex problems. So I just want to point out that obviously our brains are very good at recognizing things. We do this all the time. There are hundreds of objects in every visual scene. And we're able to recognize every one of those objects. But it turns out that there are individual neurons-- so in this case, I alluded to the idea that there are individual neurons in this network that light up when the sensory input is a dog or light up when the input is an elephant. And it turns out that that's actually true in the brain. So there have recently been studies where it's been possible to record in parts of the human brain in patients that are undergoing brain surgery for the treatment of epilepsy or tumors or things like that, where you have to go in and find parts of the brain that are defective, find parts of the brain that are healthy. So when you do a surgery, you can be very careful to just do surgery on the damaged parts of the brain and not impact parts of the brain that are healthy. So there are cases now, more and more commonly, where neuroscientists can work with neurosurgeons to actually record from neurons in the brain in these patients who are in preparation for surgery. And so it's been possible to record from neurons in the brain. This was a study from Itzhak Fried's lab at UCLA. And this shows recording in the right anterior hippocampus. And what this lab did was to find neurons. So these were electrodes implanted in the brain.
And then they basically take these patients and they show them thousands of pictures and look at how their brains respond to different visual inputs. So let me just show you what you're looking at. These are just different pictures of celebrities. There's Luke Skywalker, Mother Teresa, and some others. This paper is getting old enough that you may not recognize most of these people. But if you record from neurons in the brain, you can see that-- so what do you see here? I think that's Oprah. The image is flashed up on the screen for about a second. You record this neuron spiking. Here you see a couple spikes. Here's when the image was actually presented. And here's where the image was turned off. You can see different trials. So this neuron actually had a little bit of a response right there shortly after the stimulus was turned on. But you can see there's not that much response in these neurons. But when they flashed a different stimulus-- anybody know who that is? That's Halle Berry. Look at this neuron. Every time you show this picture, that neuron fires off a couple spikes very precisely. If you look at the histogram, these are histograms underneath showing as a function of time relative to the onset of the stimulus, you could see that this neuron very reliably spikes. There's a different picture of Halle Berry. Neuron spikes. Different picture, neuron spikes. Another picture, neuron spikes. A line drawing of Halle Berry, the neuron spikes. Catwoman, the neuron spikes. The text, Halle Berry, the neuron spikes. It's amazing. So this group got a lot of press for this because they also found Jennifer Aniston neurons. They found other celebrities. This is like some celebrity part of the brain. No, it's actually a part of the brain where you have neurons that have very sparse responses to a wide range of things. But they're extremely specific to particular people or categories or objects. 
And it actually is consistent with this old notion of what's called the grandmother cell. So back before people were able to record in the human brain like this, there was speculation that there might be neurons in the brain that are so specific for particular things, that there might be one neuron in your brain that responds when you see your grandmother. And so it turns out it's actually true. There are neurons in your brain that respond very specifically to particular concepts or people or things. So the question of how these kinds of neurons acquire their responses is really cool and interesting. So that leads us to the idea of perceptrons. A perceptron is the simplest notion of how you can have a neuron that detects a particular thing and responds when it sees it and doesn't respond when it doesn't. So let's start with the simplest notion of a perceptron. So how do we make a neuron that fires when it sees something-- let's say a dog-- and doesn't fire when there is no dog? So in order to think about this a little bit more, we can begin thinking about this in the case where we have a single input neuron and a single output neuron. So if we have a single input neuron, then what comes in has to be-- it can't be an image, right? An image is a high dimensional thing that has many thousands of pixels. So you can't write that down as a simple model with a single input neuron and a single output neuron. So you need to do this classification problem in one dimension. So we can imagine that we have an input neuron that comes from, let's say, some set of numbers-- I'll make up a story here-- some set of neurons that measure the dogginess of an input. So let's say that we have a single input that fires like crazy when it sees this cute little guy here. And fires at a negative rate when it sees that thing, which doesn't look much like a dog. So we have a single input that's a measure of dogginess.
And now let's say that we take this dogginess detector and we point it around the world. And we walk around outside with our dogginess detector and we make a bunch of measurements. So we're going to see something that looks like this. We're going to see a lot of measurements, a lot of observations down here that are close to zero dogginess. And we're going to see a bump of things up here that correspond to dogs. Whenever we point our dogginess detector at a dog, it's going to give us a measurement up here. And we're going to get a bunch of those. And those things correspond to dogs. So we need to build a network that fires when the input is up here and doesn't fire when the input is down there. So how do we do that? So the central feature of classification is this notion of binariness, of decision-making. That it fires when you see a dog and doesn't fire when you don't see a dog. So there exists a classification boundary in this stimulus space. You can imagine that there's some points along this dimension above which you'll say that that input is a dog, below which you say that it isn't. And we can imagine that that classification boundary is right here. It's a particular number. It's a particular value of our dogginess detector, above which we're going to call it a dog, and below which we're going to call it something else. How do we make this neuron respond by firing when there's a dog and not firing when there's no dog? Can we use a linear neuron? Can we use one of our linear neurons that we just talked about before? We can't do that because a linear neuron will always fire more the bigger the input is. And it will fire less if the dogginess is 0. And it will even fire more negatively if the dogginess input is negative. So a linear neuron is terrible for actually making any decisions. Linear neurons always go, ah, well, maybe that's a dog. Not really. There's no decisions. So in order to have a decision, we need to have a particular kind of neuron. 
And that kind of neuron uses something very natural. In biophysics, it's the spike threshold of neurons. Neurons only fire when the input is above some threshold, generally. There are neurons that are tonically active. But let's not worry about those. So many neurons only fire when the input is above some threshold. So for decision-making and classification, a commonly used kind of neuron takes this idea to an extreme. So for perceptrons, we're going to use a simplified model of a neuron that's particularly good at making decisions. There's no if, ands, or buts about it. It's either off or on. It's called a binary unit. And a binary unit uses what's called a step function for its FI curve. That step function is 0-- the output is 0 if the input is zero or below. And the output is 1 if the input is above 0. We can use that step function to create a neuron that responds when the input is above any threshold we want. So we can write down the output firing rate is this function, a step function-- that function of a quantity that's given by w times u, the synaptic weight times the input firing rate, minus that threshold. So you can see if w times u, which is the input synaptic current, if that synaptic current is above theta, then this argument to this function is greater than 0, then the neuron spikes. If this argument is negative, then the neuron doesn't spike. So by changing theta, we can put that decision boundary anywhere we want. Does that make sense? Usually the way we do this is we pick a theta. We say our neuron has a theta of 1. And then we do everything else-- we do everything else we're going to do with this network with a theta. So what I'm going to talk about today are just two cases. Where theta is a fixed number that's non-zero, or theta that's a fixed number that is equal to 0. So we're going to talk about those two cases. So the neuron fires when the input w u is greater than theta. And it doesn't fire when it's less. 
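This binary unit is easy to sketch in code (the weight and input values below are made up for illustration):

```python
import numpy as np

def step(x):
    """Step-function F-I curve: output is 1 if the input is above 0, else 0."""
    return np.where(x > 0, 1, 0)

def binary_neuron(u, w, theta):
    """Binary unit: fires iff the synaptic input w*u exceeds the threshold theta."""
    return step(w * u - theta)

theta = 1.0                        # fixed threshold, as in the lecture
w = 0.5                            # hypothetical weight
u = np.array([0.0, 1.0, 3.0])      # made-up input firing rates

rates = binary_neuron(u, w, theta)  # fires only where u > theta / w = 2
```

With theta fixed at 1 and w = 0.5, only the input u = 3 exceeds the decision boundary theta / w = 2, so only that input makes the unit fire.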
So now the output neuron fires whenever the input neuron has a firing rate greater than this decision boundary. So the decision boundary, the u threshold, is equal to theta divided by w. Does that make sense? The neuron fires when u is greater than theta divided by w. So the way this network learns to fire when that u is above this classification boundary is simply by changing the weight. Does that make sense? So we're going to learn the weight such that this network fires whenever the input says there's a dog. And it doesn't fire whenever the input says there's no dog. So let's see what happens when w is really small. If w is really small, then what happens is all of these-- remember, this is the input. That's the dogginess detector. If w is really small, then all these inputs get collapsed to a small input current into our output neuron. Does that make sense? So all those different inputs, dogs and non-dogs, get multiplied by a small number. And all those inputs are close to 0. And if all those inputs are close to 0, they're all below the threshold for making this neuron spike. So this network is not good for detecting dogs because it never fires, whether the input is a dog or a non-dog. Now what happens if w is too big? If w is really big, then this range of dogginess values gets multiplied by a big number. And you can see that a bunch of non-dogs make the neuron fire. Does that make sense? So now this one fires for dogs plus doggie-ish looking things, which, I don't know, maybe it'll fire when it sees a cat. That's terrible. So you have to choose w to make this classification network function properly. Does that make sense? And if you choose w just right, then that classification boundary lands right on the threshold of the neuron. And now the neuron spikes whenever there is a dog. And it doesn't spike whenever there's not a dog. So what's the message here?
The message is we can have a neuron that has this binary threshold. And what we can do is simply by changing the weight, we can make that threshold land anywhere on this space of inputs. And we can actually use the error to set the weight. So let's say that we made errors here. We classify dogs as non-dogs because the neuron didn't fire. You can see that this was the case when w was too small. So if you classify dogs as non-dogs, then you need to make w bigger. And if you classify non-dogs as dogs, you need to make w smaller. And by measuring what kind of errors you make, you can actually fix the weights to get to the right answer. So this is a method called supervised learning where you set w randomly. You take a guess. And then you look at the mistakes you make. And you use those mistakes to fix the w. In other words, you just look at the world and you say, oh, that's a dog. And then your mom says, no, that's not a dog, that's something else. And you adjust your weights. I think that was the example I just gave. You're going to make that w smaller. In another case, you'll make the other kind of mistake and you'll fix the weights. So this is called a perceptron. And the way you learn the weights in a perceptron is you just classify things and you figure out what kind of mistake you made and you use that to adjust the weights. So that's the basic idea of a perceptron and perceptron learning. And there's a lot of mathematical formalism that goes into how that learning happens. And we're going to get to that in more detail in the next lecture. But before we do that, I want to go from having a one-dimensional case. So here we had a one-dimensional network that was just operating on dogginess. And then we have a single neuron that says, was that a dog or not. But in general, you're not classifying things based on one input. Like for example when you have to identify a dog, you have a whole image of something. And you have to classify that based on an image. 
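Here is a minimal sketch of that supervised learning idea (the data, the learning rate, and the initial weight are all made up; the formal perceptron learning rule is developed in the next lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up 1-D "dogginess" data: non-dogs cluster near 0, dogs near 5
non_dogs = rng.normal(0.0, 0.5, size=50)
dogs = rng.normal(5.0, 0.5, size=50)
u = np.concatenate([non_dogs, dogs])
labels = np.concatenate([np.zeros(50), np.ones(50)])  # 1 = dog

theta = 1.0    # fixed threshold
w = 0.01       # initial guess, too small: the network calls every dog a non-dog
eta = 0.1      # learning rate (a common choice, not specified in the lecture)

for _ in range(20):                 # a few passes over the data
    for ui, yi in zip(u, labels):
        guess = 1 if w * ui - theta > 0 else 0
        # Dog called non-dog -> increase w; non-dog called dog -> decrease w
        w += eta * (yi - guess) * ui

predictions = np.where(w * u - theta > 0, 1, 0)
accuracy = np.mean(predictions == labels)
```

Starting from a too-small w, the first misclassified dog drives w upward until the decision boundary theta / w lands in the gap between the two clusters, after which the updates stop.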
So let's go from the one-dimensional case to a two-dimensional case. So the classification isn't done on one dimension, but it's based on many different features. So let's say that we have two features, furriness and bad breath. That dog doesn't really look like it has bad breath, but mine does. So you can have two different features, furriness and bad breath. And dogs are generally, let's say, up here. Now you can have other animals. This guy is definitely not furry. So he's down here somewhere. And you can have this guy up here. He's definitely furry. So you have these two dimensions and a bunch of observations in those two dimensions. And you can see that, in this case, you can't actually apply that one-dimensional decision-making circuit to discriminate dogs from these other animals. Why is that? Because if I apply my one-dimensional perceptron to this problem, you can see that I could put a boundary here and it will misclassify some of these non-furry animals as dogs. Or I could put my classifier here and it will misclassify some of these cats as dogs. So how would I separate dogs from these other animals if I had this two-dimensional space? What would I do? How would I put a classification boundary? If this doesn't work and this doesn't work, what would I do? You could put a boundary right there. So in this little toy problem, that would perfectly separate dogs from all these non-dogs. So how do we do that? Well, what we want is some way of projecting these inputs onto some other direction so that we can put a classification boundary right there. And it turns out there's a very simple network that does that. It looks like this. We take each one of those detectors, a furriness detector and a bad breath detector, and we have those two inputs. We have those inputs synapse onto our output neuron with some weight w1 and some weight w2, and we calculate the firing rate of this neuron.
Now we have this problem of how do we place this decision boundary correctly. What's the answer? Well, in the one-dimensional example, what is it that we learned? What was it that we were actually changing? We were taking guesses. And if we were right or wrong, we did what? We changed the weight. And that's exactly what we do here. We're going to learn to change these weights to put that boundary in the right place. If we just take a random guess for these weights, that line is just going to be in some random position. But we can learn to place that line exactly in the right place to separate dogs from non-dogs. So let's just think a little bit more about how that decision boundary looks as a function of the weights. So let's look at this case where we have two inputs. So now you can see that the input to this neuron is w.u. So now if we use our binary neuron with a threshold, we can see that the firing rate of this output neuron is this step function acting on this input, w.u minus theta. So now what does that look like? The decision boundary is where this quantity equals 0. When this input is greater than 0, the neuron fires. When this input is less than 0, it doesn't fire. So what does that look like? So you can see the decision boundary is when w.u minus theta equals 0. Does anyone know what that is? Remember, u is our input space. That's what we're asking, where is this decision boundary in the input space. w is some weights that are fixed right now, but we're gradually going to change them later. So what is that an equation for? It's a line. That's an equation for a line. If u is our input, you can see w.u equals theta. That's an equation for a line in the space of u. The slope and position of that line are controlled by the weights w and the threshold theta. So you can see this is w1 u1 plus w2 u2 equals theta. In the space of u1 and u2, that's just a line. So let's look at the case where theta equals 0.
You can see that if you have this input space, u1 and u2, if you take a particular input u and dot it into w-- so let's just pick a w in some random direction-- the neuron fires when the projection of u along w is positive. So you can see here, the projection of u along w is positive. So in this case, for this u, the neuron will fire. So any u that has a positive projection along w will make the neuron spike. So you can see that all of these inputs will make the neuron spike. All of these inputs will make the neuron not spike. Does that make sense? So you can see that the decision boundary, this boundary between the inputs that make the neuron spike and the inputs that don't make the neuron spike, is a line that's orthogonal to w. Does that make sense? Because you can see that any u, any input, along this line will have zero projection, will be orthogonal to w. Will have zero projection. And that's going to correspond to that decision boundary. So let's just look at a couple of cases. So here is a set of points that correspond to our non-dogs. Here is a set of points that correspond to our dogs. You can see that if you have a w in this direction, that produces a decision boundary that nicely separates the dogs from the non-dogs. So what is that w? That w is 1, comma, 0. And we're going to consider the case where theta is 0. Let's look at this case here. So you can see that here are all the dogs. Here are all the non-dogs. You can see that if you drew a line in this direction, that would be a good decision boundary for that classification problem. You can see that a w corresponding to solving that problem is 1, comma, minus 1, and theta equals 0. Let's look at the case where theta is not 0. So here we have w.u minus theta. When theta is not 0, then the decision boundary is w.u equals some non-zero theta. That's also a line. It's an equation for a line. When theta is 0, that decision boundary goes through the origin.
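A quick numerical check of that picture, using the lecture's second example w = (1, -1) with theta = 0 (the test points themselves are made up):

```python
import numpy as np

def fires(u, w, theta=0.0):
    """Binary neuron: spikes iff w . u > theta."""
    return np.dot(w, u) > theta

# w = (1, -1), theta = 0: the decision boundary w . u = 0 is the line
# u2 = u1, which is orthogonal to w.
w = np.array([1.0, -1.0])

dog = np.array([3.0, 1.0])       # below the line u2 = u1: positive projection on w
non_dog = np.array([1.0, 3.0])   # above the line: negative projection on w

# A point on the boundary itself has zero projection along w
boundary_point = np.array([2.0, 2.0])
proj = np.dot(boundary_point, w)
```

The dog point fires the neuron, the non-dog point does not, and the boundary point has exactly zero projection along w, confirming that the boundary is the line orthogonal to w.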
When theta is not 0, the decision boundary is offset from the origin. So we saw that when theta is 0, that network only works if the decision boundary goes through the origin. In general, though, we can put the decision boundary anywhere we want by having this non-zero theta. So here's an example. Here are a set of points that are the dogs. Here are a set of points that are the non-dogs. If we wanted to design a network that separates the dogs from the non-dogs, we could just draw a line that cleanly separates the green from the red dots. And now we can calculate the w that gives us that decision boundary. How do we do that? So the decision boundary is w.u equals theta. Let's say that we want to calculate this weight vector w1 and w2. And let's just say that our neuron has a threshold of 1. So we can see that we have two points on the decision boundary. We have one point here, a, comma, 0, right there. We have another point here, 0, comma, b. And we can calculate the decision boundary using ua.w equals theta, ub.w equals theta. That's two equations and two unknowns, w1 and w2. So if I gave you a set of points and I said calculate a weight for this perceptron that will separate one set of points from another set of points, and I give you a theta for the output neuron, all you have to do is draw a line that separates them, and then solve those two equations to get w1 and w2 for that network. It's very easy to do this in two dimensions. You can just draw a line and calculate the w that corresponds to that decision boundary. Any questions about that? If you have questions, you should ask, because that's going to be a problem you ought to be able to solve. So you can see in two dimensions you can just look at the data, decide where's the decision boundary, draw a line, and calculate the weights w. But in higher dimensions, it's a really hard problem. In high dimensions, first of all, remember in high dimensions you've got images.
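That two-equations, two-unknowns computation is easy to sketch in code (a and b are made-up axis intercepts for the separating line):

```python
import numpy as np

# Suppose the separating line crosses the axes at (a, 0) and (0, b),
# and the output neuron has threshold theta = 1, as in the lecture.
a, b = 4.0, 2.0
theta = 1.0

# Two equations in two unknowns:  ua . w = theta  and  ub . w = theta
U = np.array([[a, 0.0],
              [0.0, b]])
w = np.linalg.solve(U, np.array([theta, theta]))  # -> w = (theta/a, theta/b)

# Check: both boundary points produce exactly threshold input
check_a = np.dot([a, 0.0], w)
check_b = np.dot([0.0, b], w)
```

Because the two boundary points sit on the axes here, the solution reduces to w1 = theta/a and w2 = theta/b, but `np.linalg.solve` works just as well for any two points on the chosen line.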
Each pixel in that image is a different dimension in the classification problem. So how do you write down a set of weights? So imagine that's an image, that's an image. And you want to find a set of weights so that this neuron fires when you have the dog, but doesn't fire when you have the cat. That's a really hard problem. You can't look at those things and decide what that w should be. So there's a way of taking inputs and taking the answer, like a 1 for a dog and a 0 for non-dogs, and actually finding a set of weights that will properly classify those inputs. And that's called the perceptron learning rule. And we're going to talk about that in the next lecture. So that's what we did today. And we're going to continue working on developing methods for understanding neural networks next time.
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 5: Hodgkin-Huxley Model, Part 2
MICHALE FEE: Today we're going to continue developing our equivalent circuit model, the Hodgkin-Huxley model of a neuron. And we're still focusing on the mechanism that generates spikes. As you recall, there are two conductances, ion conductances, that lead to action potential generation. There is a sodium conductance that is connected to a sodium battery that has a high equilibrium potential. There is a potassium conductance that is connected to a potassium battery that has a negative equilibrium potential, and those two conductances together have voltage and time dependence that lead to the generation of a positive going, followed by a negative going, fluctuation in the voltage that is the action potential. And as you recall, the way that happens is, there is a time dependence to these conductances so that when the sodium conductance turns on, this resistor gets really small, and basically connects the inside of the cell to that positive battery. When the sodium conductance turns off and the potassium conductance turns on, we're disconnecting the sodium battery and connecting the potassium battery, which has a negative voltage. And the voltage of the cell, then, is driven toward the negative potassium equilibrium potential. So last time we worked out the voltage and time dependence of the potassium conductance. Today, we're going to focus on the, sorry, focus here on the sodium conductance and explain various aspects of the voltage and time dependence of the sodium conductance. And then once we do that, we're going to turn in the second half of the lecture to a really beautiful, simple model of a disease related to a defect in the sodium channel. And it's an example of how we can use modeling to test and elaborate on hypotheses about how defects in a circuit, or in an ion channel, can lead to very complex phenotypes in a whole animal.
So as you recall, our Hodgkin-Huxley model has three conductances and a capacitor that represents the capacitance of the membrane. The total membrane ionic current is just a sum of the sodium current, the potassium current, and this voltage independent, time independent, fixed leak current. So the equation for the membrane potential, the differential equation for the membrane potential in the Hodgkin-Huxley model, is just a simple first order linear differential equation that relates the membrane current and the membrane potential. So last time we described a set of experiments that were done by Hodgkin and Huxley to study the voltage and time dependence of these conductances in the squid giant axon. And as you remember, this axon is very large. It's 1 millimeter in diameter, which makes it very easy to put wires into it, and change the voltage, and measure the currents, and so on. So the experiment they did was a voltage clamp experiment, where you can hyperpolarize and depolarize the cell. There's a very fast feedback system that allows you to set a command voltage, and this operational amplifier injects however much current is needed to hold the cell at whatever membrane potential you command. And the typical experiment that they would do would be to hyperpolarize or depolarize the cell to fixed membrane potentials and measure how much current passes through the membrane during and after that transient change in the command voltage. So if you take a squid giant axon, you start at minus 65 millivolts, and you hyperpolarize the cell, not much happens. And that's because all of those currents are already off when the cell is hyperpolarized at minus 60 or at low voltages.
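As a sketch of that membrane equation, here is a forward-Euler integration with the conductances frozen at made-up constant values (ignoring the voltage and time dependence that the rest of the lecture develops). With fixed conductances, the membrane potential just relaxes to the conductance-weighted average of the battery potentials:

```python
import numpy as np

# Frozen (constant) conductances -- a sketch only; in the real HH model
# g_Na and g_K depend on voltage and time. Values below are made up.
C = 1.0                               # membrane capacitance, arbitrary units
g_Na, g_K, g_L = 0.05, 1.0, 0.1
E_Na, E_K, E_L = 50.0, -80.0, -54.0   # battery potentials in mV

def dVdt(V):
    # C dV/dt = -(I_Na + I_K + I_L); each current is g * (V - E)
    I_ion = g_Na * (V - E_Na) + g_K * (V - E_K) + g_L * (V - E_L)
    return -I_ion / C

# Forward-Euler integration from a typical resting potential
V, dt = -65.0, 0.01
for _ in range(100000):
    V += dt * dVdt(V)

# Steady state: conductance-weighted average of the battery potentials
V_inf = (g_Na * E_Na + g_K * E_K + g_L * E_L) / (g_Na + g_K + g_L)
```

Spiking only happens because g_Na and g_K change in time; this frozen-conductance version shows the underlying first-order relaxation that the full model modulates.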
On the other hand, if you start at minus 65 millivolts and depolarize the cell up to 0 millivolts, all of a sudden you see a very large transient current that first goes negative, which corresponds to positive charges going into the cell, followed by a positive current that's associated with positive charges leaving the cell. And last time we talked about how we can dissect these two phases of the current, this negative phase and this positive phase, into two different ionic conductances. They did that experiment by replacing the sodium in the extracellular solution that the axon was sitting in with a solution that has no sodium in it. They replaced that with choline chloride. So choline is a positive ion-- it has a positive charge-- and chloride, of course, has a negative charge. And so you can replace the sodium chloride with choline chloride. And now, when you depolarize your cell, you can see that that negative part is gone. And the only current you see is this positive-- this kind of slowly ramping up positive current. And they identified that as being due to potassium ions. And if you subtract the current curve without sodium from the current curve with sodium, the difference is obviously due to sodium. And so if you plot the difference between those two curves, you can see that the sodium current turns on very rapidly and then decays very rapidly, that that transient sodium current happens very quickly, almost before the potassium current even gets started. And we talked about how that fast sodium current, followed by a slower potassium current, is exactly the profile, that we showed here, that generates a depolarizing change in the voltage followed by a hyperpolarizing change in the voltage that looks like an action potential. So now, let's just review quickly how we took these current curves, and from those, extracted the conductance of the sodium and potassium ion channels as a function of voltage and time.
So what we did was we looked at the case where we do our voltage clamp experiment to different voltages. We start hyperpolarized. We step up to minus 40 and measure this potassium current. We step up to 0, and you see this larger potassium current. If you step from minus 40 to 40, you see an even larger potassium current. And you can plot this peak current, or the steady state current, as a function of voltage. That gives you an I-V curve, and we'll look at that in a second. If you do the same thing for the sodium currents, you see something different that's initially very confusing. If you step from minus 80 to minus 40, you see a small sodium current. If you make a larger voltage step up to 0, you see this bigger sodium current. But then if you step up from minus 80 millivolts to 40 millivolts, now you see you just have a tiny little sodium current. Anybody remember why that would be? Why is it that you would see only a very tiny sodium current, if you step up to 40 millivolts? What is the equilibrium potential for sodium? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good, good. So what would the sodium current be if I had stepped this voltage up exactly to 50 millivolts? AUDIENCE: 0. MICHALE FEE: It'd be 0. So this is pretty close to 50 millivolts, which is why the sodium current is actually pretty small. So now, let's plot the peak current as a function of voltage for potassium and the peak current here as a function of voltage for sodium. That's what that looks like. So you can see that the potassium current is 0 for these voltages down here and grows. It actually stays at 0 for even more negative voltages. The sodium current, on the other hand, has this very kind of funny shape. It's linear up here around high voltages, around the sodium equilibrium or reversal potential, and then it drops to 0. The sodium current stays at 0 for negative voltages. And you recall that we used this to think about what the conductance must be.
So let me just walk you through that logic again. So remember that the current is just a conductance times the driving potential. The driving potential is positive when you're above the equilibrium potential, and it's negative when you're below. So this term here is a straight line. It's linear in voltage, and it goes through 0 when V is equal to EK. So there is the driving potential for potassium as a function of voltage. Now, you can see clearly that the conductance as a function of voltage has some voltage dependence, because this doesn't look like this. So the difference between this and this is captured by this voltage-dependent conductance. And does anyone remember what that conductance, that GK as a function of V, looks like? AUDIENCE: Sigmoidal. MICHALE FEE: Yeah, sigmoidal. And what is it down here? It's 0. So the way that you can get a 0 current, even with a very negative driving potential, is if the conductance is 0. You can see that the current is linear up here, and the driving potential is linear up here. So the conductance has to be constant. And so we have a conductance that has to be 0 down here and a constant non-zero value up here. Yes? AUDIENCE: So why is the potassium curve 0 when it's more negative than EK? Why doesn't it go in the other direction? MICHALE FEE: Why doesn't this curve do something else? So what is it that you're-- AUDIENCE: Like why doesn't it-- why doesn't it-- MICHALE FEE: Why doesn't it keep going? AUDIENCE: Yeah. Why is there like a [INAUDIBLE]? MICHALE FEE: Ah. Because-- OK. That's a great question. So maybe you can answer it. How would I change the conductance curve to make this look more like this? I could do something very simple to the voltage dependence of the potassium conductance to actually make it look like that. What would I do? The reason this goes to 0 and stays at 0 is because the voltage dependence of the conductance turns it off before the driving potential can go negative.
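A sketch of this logic in code, including the thought experiment in the question above (the sigmoid parameters here are illustrative, not Hodgkin and Huxley's fitted values):

```python
import numpy as np

def g_K(V, V_half=-20.0, k=5.0, g_max=1.0):
    """Sigmoidal voltage dependence of the conductance (made-up parameters)."""
    return g_max / (1.0 + np.exp(-(V - V_half) / k))

E_K = -80.0                         # potassium equilibrium potential, mV
V = np.linspace(-100.0, 60.0, 500)
I_K = g_K(V) * (V - E_K)            # current = conductance x driving potential

# With the sigmoid centered well above E_K, the conductance shuts off before
# the driving potential changes sign, so the current never dips appreciably
# below zero and is linear at high voltages.
min_I = I_K.min()

# Shift the conductance curve to more negative voltages (the thought
# experiment from the question): now the current dips below zero before
# the conductance turns off.
I_shifted = g_K(V, V_half=-90.0) * (V - E_K)
min_I_shifted = I_shifted.min()
```

This reproduces the shape of the measured potassium I-V curve: zero below E_K, linear far above it, and it shows why shifting the activation curve leftward would produce a negative lobe in the current.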
So what would I do to the conductance to make this current dip below 0 before it comes back, any suggestions? AUDIENCE: Translate it. MICHALE FEE: Yeah, which way? AUDIENCE: This way. MICHALE FEE: Good, exactly. So if I took this curve and I shifted it that way, if I made the potassium conductance turn off at a more negative potential, then this would go down before it got turned off by the conductance. Does that make sense? Great question. Any other questions? So the answer is, the reason this doesn't go negative is because the voltage dependence of the potassium conductance turns off the conductance before or on the positive side of the equilibrium potential of potassium. Yes? AUDIENCE: Can you explain again why the [INAUDIBLE]?? MICHALE FEE: So if G were constant, if G had no voltage dependence and it was just a constant, what would this current look like? What would it look like? If this G were just a constant, not dependent on voltage? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. It would look just like this, right? So the reason this curve shuts off and goes to 0 is that the conductance goes to 0 down here, and it's constant up here. Does that make sense? And that curve just looks like that. It's 0 down here and constant up here. Good question. Any other? There was another hand up here. Yeah? AUDIENCE: I was wondering about notation. So it's GK of V. It's not like GK times V, right? MICHALE FEE: No. It's GK as a function of V. Yeah, that's-- the notation is sometimes a little bit confusing. You kind of have to read it out from the context. Any other questions? So now you can see why this curve looks the way it does. So now, let's plot the driving potential, V minus Ena. That's this curve right here. It's Ohm's law, but it has a battery that offsets it, so that it gives 0 current when V is equal to Ena, which is positive. So that's why that curve looks like that. And what is it that makes the sodium current go to 0 down here? It must be that the what? 
What about the conductance? AUDIENCE: Turns off. MICHALE FEE: Good. The conductance, the sodium conductance, has to turn off down here. And what about up here? This is linear. This is linear, so the sodium conductance has to be what up here? Constant, good. So you can see that the sodium conductance has exactly the same shape as the potassium conductance. It's not exactly at the same voltage, but it's close. Good. So now you can see where this kind of weird shape of these sodium and potassium currents comes from. It's actually very simple. It's just a resistor in series with a battery that gives you this driving potential offset from 0, and that's multiplied by this voltage-dependent conductance. Now, the time dependence of the conductance is entirely due-- sorry. The time dependence of the current, that ramping up current that turns on and then stays constant for the potassium, is entirely due to the time dependence of the potassium conductance. So the potassium conductance just turns on. That process of the conductance turning on is called activation. Same for the sodium-- the sodium conductance turns on quickly. That's called activation. The sodium conductance turns on very fast, and the potassium conductance turns on slowly. Now, we talked about how the voltage gates work in our voltage-dependent ion channel. And the idea is that you have some gating charges that are literally charged residues, charged amino acids, in the protein. When the membrane potential is very negative, when the cell is at rest, you can see that there's a large electric field pointing that way inside the membrane, and that pushes the charges, pushes those gating charges, toward the inside of the cell, and that closes the gate. When you depolarize the cell, this membrane potential goes closer to 0, the electric field drops, and those gating charges are no longer being pushed into the cell. And they relax back, and the gate opens. 
So that is the basic, sort of a cartoon, picture of the mechanism by which voltage-dependent ion channels acquire that voltage dependence. So remember, we talked about how we can model that time dependence. We can model that opened and closed state of the ion channel as two states, an open state and a closed state, where there is a probability, n, of being in the open state, and a probability of 1 minus n of being in the closed state. Remember, this was for one subunit. For the potassium channel, there are four subunits, and all of them have to be open. And we wrote down a differential equation for that gating variable, n. There is an n infinity, a steady state, that's a function of voltage. And remember, for the potassium, n infinity is small down here and increases as a function of voltage to get close to 1 at voltages above minus 50, or somewhere between minus 50 and 0 millivolts. And the n infinity of that gating variable, n, goes from being very small to being large. Now, so that's potassium. We went through that last time. And now let's talk about sodium. Sodium looks exactly the same. The sodium conductance can be modeled as having two states, an open state and a closed state. Remember, we did a patch recording on a single sodium channel. You could see that it flickers back and forth between open and closed. So we can model that process in exactly the same way that we did for the potassium conductance. We have an open state, a closed state, a probability, m, of being in the open state. So m is our gating variable for-- our activation gating variable for the sodium conductance. Probability of being in a closed state is 1 minus m. There is that same kind of differential equation for the m gating variable, and an m infinity that has a voltage dependence that looks very much like the voltage dependence of n infinity. So so far, the sodium and potassium conductances look very similar. 
They both have the same kind of activation gating variable, the same simple model for how to turn on and turn off, same differential equation, same gating variable that has this sigmoidal dependence on voltage. Any questions about that? So you remember the way we thought about the time dependence of these is we simply integrate this differential equation over time. It's a first order linear differential equation, and you can think about the n, the gating variable, as always relaxing exponentially toward whatever n infinity is at that moment. And n infinity is a function of voltage, and any time dependence it gets comes from changes in the voltage. So we're going to simplify things and just consider piecewise constant changes in the voltage. So let's do a simple experiment. We're going to hyperpolarize the voltage to minus 80. What is n infinity going to be, big or small? Remember what n infinity looks like is a function of voltage? AUDIENCE: Small. MICHALE FEE: Good. So at hyperpolarized voltages, n infinity is going to be small, and so is m infinity. Those ion channels are closed at hyperpolarized voltages. So the gating variables that represent what the probability is of being open, those gating variables are small when the voltage is negative, very negative. So then we're going to step the voltage up. And what is n infinity going to do? AUDIENCE: [INAUDIBLE] MICHALE FEE: Anybody want to just draw for me what it's going to do in the air? It starts out small. So is it going to ramp up slowly? Is it going to jump up? Is it going to wiggle around? What's it going to do? So why is it-- so I have several different answers. I have some people saying that it's going to ramp up. I'm asking about m infinity now, not n. So how many people say it's going to jump up suddenly? OK, good. That's what it's going to do. It's going to start out at a small value and jump up to a larger value when you depolarize the cell. And then what is n going to do? 
AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. n is going to start at some initial condition and relax exponentially toward n infinity. And then when you turn the voltage back down, n infinity is going to go from this large value back down to a small value, and n is going to relax exponentially to that smaller value of n infinity. Any questions about that? We saw that last time. Now, what is the conductance going to do? How does the conductance depend on n, anybody remember, for potassium? How many subunits are there in a potassium channel? AUDIENCE: Four. MICHALE FEE: Four. And so if the probability that each one is open is n, and there are four independent, what's the probability that they're all going to be open? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. And so the conductance is going to turn on as this relaxing exponential to the fourth. And it's going to have that kind of gradual ramping up. Good. It looks exactly the same for sodium. So if you start hyperpolarized, you depolarize the cell, that m infinity is going to start small, it's going to jump up to a high value. m is going to start small, and it's going to relax exponentially toward that higher value of m infinity. Now, anybody want to guess at what the sodium conductance will look like? It's going to be some function of m, right? It turns out that it's m cubed. And the reason is that even though there are four things that have to all be open, they're not independent of each other. And so the exponent is not m to the fourth, it's m cubed. And Hodgkin and Huxley figured that out simply by plotting these relaxing exponentials to different powers. I imagine them saying, oh, the potassium is 4. Let's take m to the 4. But it didn't fit. So they tried some others, and they found that m cubed fits. So that's it. Now, the problem with this model is what? What is the problem with this model? It's that when you depolarize the cell, the potassium current turns on. The potassium conductance turns on, but then what happens? 
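The behavior just described — n infinity jumping up instantly at the voltage step while n relaxes exponentially toward it, and the conductance going as n to the fourth — can be sketched in a few lines. The step values and time constant below are illustrative, not Hodgkin-Huxley fits:

```python
import math

def relax(x0, x_inf, tau, t):
    """First-order relaxation: x(t) = x_inf + (x0 - x_inf) * exp(-t/tau)."""
    return x_inf + (x0 - x_inf) * math.exp(-t / tau)

# Illustrative values: after a depolarizing step, n_inf jumps from
# 0.1 to 0.9, and n relaxes toward it with a 2 ms time constant.
n0, n_inf, tau_n = 0.1, 0.9, 2.0

for t in [0.0, 1.0, 2.0, 5.0, 10.0]:
    n = relax(n0, n_inf, tau_n, t)
    g_K = n ** 4  # four independent subunits must all be open
    print(f"t = {t:5.1f} ms   n = {n:.3f}   n^4 = {g_K:.3f}")
```

Raising to the fourth power is what gives the conductance its delayed, ramping turn-on: when n is still small, n to the fourth is much smaller still, so the conductance lags behind the gating variable.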
What is-- sorry. The sodium turns on. What happens? It doesn't do this. It doesn't turn on and stay on, right? The potassium, when you depolarize, turns on and stays on, just like that model. But the sodium does something else. What does it do? AUDIENCE: [INAUDIBLE] MICHALE FEE: What's that? AUDIENCE: It's a voltage clamp. MICHALE FEE: This is voltage clamp, so it's we're controlling the voltage. m is already a maximum here, so it can't shoot up anymore, right? Anybody remember what sodium does that's really weird? AUDIENCE: Deactivation. MICHALE FEE: It inactivates. So the current turns on, a conductance turns on, but it doesn't stay on. It turns off, and that's what we're going to talk about next. And once we have that, we've got the whole Hodgkin-Huxley model. And that'll set us up for this really interesting sodium channel defect that we're going to talk about. So that process there of shutting off is called inactivation. This process of n turning on is called activation. n turning off is called deactivation. m turning on is called activation. m turning off is called deactivation. But this other thing has a different name. It's called inactivation. It's kind of a little tricky terminology. So the potassium-- the probability of the sodium current being-- the sodium channel being open actually goes like m cubed times some other gating variable that describes how this turns off. And so there's another gating variable, called h. It's called the inactivation gating variable for sodium. And so now we're going to figure out how to think about h and how to describe it mathematically. You probably wouldn't be surprised to hear that it's just another first order linear differential equation-- activation gating variable, m, inactivation gating variable, h. So how do we think about inactivation? Inactivation is literally just a little loop of goo or snot on the inside of the sodium channel, and it's charged. 
And when the sodium channel opens, it just falls in and plugs up the pore. That's it. So when the membrane potential is very negative, the inside of the cell is negative, there's an electric field pointing this way, and the inactivation particle is slightly positively charged. And that pushes it, keeps it out of the way. It turns out that that's a real thing. It turns out it's just a loop of amino acids on the inside of the ion channel. Hodgkin and Huxley, of course, they didn't have the structure of the sodium channel, but they actually predicted the existence of this thing that they called the inactivation particle. When you depolarize the cell, when the membrane potential inside the cell goes more positive, that positive charge is no longer actively kept out of the pore. And so it falls in and blocks the pore. And that prevents ions from flowing through the ion channel. So how would you model this? There's an open state and a closed state with energy levels. How would you want to do that? AUDIENCE: Use the Boltzmann distribution. MICHALE FEE: Yeah, you could use the Boltzmann distribution to compute the voltage dependence. I haven't done that, but I'm sure it would work pretty well. How would you model the time dependence? So let me ask you this. If there is a gating variable-- let's start with this. If there is a gating variable, h, that we're going to use to describe this thing getting open and closed, what is the voltage dependence of h infinity going to look like? When the voltage is very negative, what is h doing? You think it's big or small? Here's the equation-- m cubed h. So when the-- yeah, right. h has to start out high and go small in order to explain this thing turning off. Does that make sense? So what we're going to do is we're going to model this again with two states, an open state and a closed state. h is the probability that this inactivation particle is in the open state. 
It turns out that there's only one of these particles, and so that explains why it's just times h, not times h to some power. And we have a differential equation that describes how h changes as a function of time in a way that depends on h infinity. And Aditu, why don't you draw what h infinity probably looks like as a function of voltage. AUDIENCE: High. MICHALE FEE: Yeah. It just starts high and goes down. How do we actually measure that? Let me show you an experiment how you'd measure that. So first, let me just show you this. Before you depolarize the cell, h starts out high, because h infinity is high. And then when you depolarize the cell, h infinity gets small, and h just relaxes exponentially toward the new smaller h infinity. And what's really cool is that the tail, this inactivation, the way that conductance or the current turns off, is just a single exponential. It just falls like e to the minus t over some time constant. It's just given by this first order linear differential equation. Good. This h getting smaller is called inactivation. Anybody want to take a guess at what this is called? AUDIENCE: Deinactivation. MICHALE FEE: Deinactivation, good. So there's activation and deactivation. There's inactivation and deinactivation. Those are different things. Just remember activation, which is easy, right? It's just things turning on. And then there's the same process that undoes the turning on. That's deactivation. And there's inactivation, which is a separate particle. And it has a process of blocking and unblocking. So it's inactivation, deinactivation. Any questions about that? Yes? AUDIENCE: If there is any activation, does that mean it's already charged up? So what does deactivation mean? MICHALE FEE: Yeah. So when-- here. Let's just go back to this picture here. When the cell is hyperpolarized, the thing is hanging out outside not getting in the way. When you depolarize the cell, that electric field is not pushing it out anymore, and it falls in. 
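Putting activation and inactivation together, the m cubed h product explains the transient shape of the sodium conductance: m turns on fast, h turns off slowly, and the product rises and then falls. Here's a sketch with illustrative (not fitted) steady-state values and time constants:

```python
import math

def relax(x0, x_inf, tau, t):
    """First-order relaxation toward the steady-state value."""
    return x_inf + (x0 - x_inf) * math.exp(-t / tau)

# Illustrative values after a depolarizing step: m activates quickly
# (tau ~0.2 ms), h inactivates slowly (tau ~1.5 ms).
m0, m_inf, tau_m = 0.05, 0.95, 0.2
h0, h_inf, tau_h = 0.6, 0.05, 1.5

peak_t, peak_g = 0.0, 0.0
for i in range(501):
    t = i * 0.01  # time in ms after the step
    m = relax(m0, m_inf, tau_m, t)
    h = relax(h0, h_inf, tau_h, t)
    g = m ** 3 * h  # sodium open probability: m cubed times h
    if g > peak_g:
        peak_t, peak_g = t, g
print(f"peak sodium conductance at t = {peak_t:.2f} ms, g = {peak_g:.3f}")
```

The conductance peaks a fraction of a millisecond after the step and then decays back toward a small value — the transient sodium current seen in the voltage clamp traces.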
But when you hyperpolarize the cell again, that electric field turns back on. And what is it going to do? It pushes the particle back out to the other state, to the open state. Any other questions? Pretty simple, right? Kind of very machine-like. And then what we're going to talk about soon is how this thing sometimes doesn't work, this thing. There are genetic mutations that turn out to be fairly common actually, where this doesn't reliably block the pore. And we're going to see what happens. First order linear differential equation. Exponential relaxation toward new h infinity. We can actually measure this h infinity as a function of voltage by doing the following experiment. What we do is we hold the cell hyperpolarized. We can then step the cell up to different membrane potentials-- very low or very high. And then what we do is we jump the membrane potential up to turn on the activation gating variable. And now we can see-- what you see is, that depending on where you held the voltage before you did this big voltage step, you get sodium currents of different size. And you can guess that if you hold the voltage very negative and then turn it on, that activation gating variable for all those ion channels is [AUDIO OUT]. And when you turn on the sodium-- turn on the gating variable, you're going to get a big current, right? If you hold the cell for a while here at a higher voltage, most of those sodium channels are going to have that inactivation gate already closed. And so now when you step the voltage up, turn on m, you're going to get a much smaller current. And so if you just plot the current size as a function of this holding potential, you can see that h is big for low voltages and goes to 0 for higher voltages. And what this means is that when a cell spikes, that voltage goes up, and h starts falling, and the sodium channels-- many of the sodium channels in the cell becomes inactivated-- become inactivated. Yes? 
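The holding-potential experiment just described is how the steady-state inactivation curve is measured: hold long enough for h to settle at h infinity of the holding voltage, then step up and record the peak sodium current, which is proportional to that h infinity. Here's a sketch using a Boltzmann-shaped h infinity; the half-inactivation voltage and slope are made-up illustrative numbers, not fits from the lecture:

```python
import math

def h_inf(v, v_half=-65.0, k=7.0):
    """Steady-state inactivation: high at negative voltages, falls to 0.
    v_half and k are illustrative Boltzmann parameters, not fitted."""
    return 1.0 / (1.0 + math.exp((v - v_half) / k))

# Peak test-pulse current is proportional to h_inf at the holding voltage.
for v_hold in [-100, -80, -65, -50, -30]:
    print(f"V_hold = {v_hold:5d} mV   relative peak I_Na = {h_inf(v_hold):.3f}")
```

Note that at a typical resting potential this curve is already well below 1, which matches the observation that most cells at rest have a substantial fraction of their sodium channels inactivated.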
AUDIENCE: The membrane potential on the x-axis, is that the difference in the-- is that [AUDIO OUT] or is that the [INAUDIBLE]? MICHALE FEE: That's the absolute voltage during this holding. That's right. You can actually see at rest most cells actually have a substantial fraction of the sodium channels already inactivated. So here's the plan. We now have a full description of the potassium and the sodium conductances as a function of voltage and time. So we're going to put it all together and make a full quantitative description of the Hodgkin-Huxley model. Our probability of the sodium current, sodium channel, being open is m cubed h. I just want to mention that this m cubed h assumes one thing about the gating variable and the inactivation variable. The mechanism for activation and the mechanism for inactivation assume what about them? AUDIENCE: They're independent. MICHALE FEE: They're independent. And it turns out that that's not quite true. It's one of the very few things that Hodgkin and Huxley didn't get spot-on. So it's not exactly independent, but it's really not bad either. So it's a pretty-- it's still a pretty good model. We can write down the sodium conductance as just the conductance of the sodium channel when it's all the way open times m cubed h. Yes? AUDIENCE: So do we know what the inactivation particle is? MICHALE FEE: Yeah. We're going to see in a second. I'll show you exactly what it looks like and where these mutations are that have this effect on inactivation. So we can write down the conductance, and we can write down the current. The current is just the open conductance times m cubed h times the driving potential. And that's our sodium current. Yes? AUDIENCE: For the [INAUDIBLE], I'm not showing there is [INAUDIBLE] like a number, like sodium channel or something. It doesn't have it. MICHALE FEE: Yeah, that's right. It's one, one single protein, but it has these transmembrane alpha-helices that act-- are multiple voltage sensors. 
And they act somewhat independently, but still a little bit cooperatively, and that's where this m cubed comes from. But you're right. The potassium channel actually has four separate subunits that form a tetramer. The sodium channel is different in that it's all one big protein. That's right. AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. You should really think of this-- I mean the n and the m were both-- it was empirically discovered that one goes as n to the fourth, and the other one goes as m cubed. And it turns out for potassium it has a really beautiful relation to the structure. For sodium, it's a little bit messier. And I'm sure there are people who actually understand more about why it's exactly m cubed, but I'm not one of those people. So I'm going to refer you to the literature. And I'm happy-- maybe I can find a good reference for that. So now that we have the sodium conductance and the sodium current, let's put this all together. So here's how we're going to now-- here's the algorithm for generating an action potential. And we introduced this last time, but let's just flesh it out for the full story. So given an initial voltage, compute n infinity, tau n, m infinity, tau m, and h infinity and tau h, as a function of that voltage. Those are just those algebraic expressions that give you the alpha n and beta n for each of those things-- one for potassium, one for sodium, the m, and one for the h for sodium. So we're going to calculate all of those. Steady state gating variables as a function of voltage, we're going to start from our initial condition of n, m, and h, and integrate that differential equation one time step using-- it's going to relax exponentially toward n infinity. We're going to plug that n, m, and h into our equations for the potassium current, sodium current, and leak, which doesn't have those gating variables. So the potassium current is G times n to the fourth times the driving potential. The sodium current is G times m cubed h times the driving potential. 
We're going to add all of those currents together to give the total membrane current. That membrane current is going to give us a V infinity for our cell. Remember, the V infinity is just the current times the effective resistance. So we can use that to also calculate the membrane time constant. And then we integrate the voltage one time step. Go back and recompute those n, m, and h infinities. And then we just keep cycling through this. When you do that, and you plot the voltage, you get an action potential. Now, you can do that in a hundredth of a second in MATLAB. Hodgkin and Huxley were doing this on their slide rules, and they got 2/3 of the way through an action potential and said, let's just publish. [LAUGHTER] So here's what that looks like. Here's V as a function of time for when you implement that loop in MATLAB. So you can see what you did. So this is the injected current through the electrode, and you can see it starts to depolarize the cell a little bit. And at some point, what happens is-- this is just a copy over here so you can line things up-- when you inject current, the cell starts to depolarize. And you can see that m starts to grow. The sodium current is starting to turn on. And at some point, m gets big enough that it's turning on a substantial amount of sodium current into the cell. And what does that do? It depolarizes the cell more, which causes m to grow faster, which causes more current, which depolarizes the cell faster. And it just runs away-- bam-- until you reach essentially the equilibrium potential of sodium. And then what does the sodium current do? The sodium current actually stops even though the channel's open. Then what happens is, during that whole time, in this depolarized state, n has been growing-- the potassium channel is starting to open, the potassium conductance turns on, and that starts hyperpolarizing the cell. During that whole time, the inactivation gate-- this cell is very depolarized, very positive. 
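The loop just described can be written out in a few dozen lines. The lecture's own code is in MATLAB; here is a rough Python sketch of the same forward-Euler cycle. The rate constants and parameter values follow the standard textbook squid-axon fits (voltages shifted so rest is near minus 65 mV) — they are my assumptions, not numbers taken from the slides:

```python
import math

# Standard textbook Hodgkin-Huxley squid-axon parameters (assumed here).
C = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

def vtrap(x, y):
    """x / (1 - exp(-x/y)), with the x -> 0 singularity handled."""
    if abs(x / y) < 1e-6:
        return y * (1.0 + x / (2.0 * y))
    return x / (1.0 - math.exp(-x / y))

def rates(v):
    """Opening (alpha) and closing (beta) rates for n, m, h, in 1/ms."""
    a_n = 0.01 * vtrap(v + 55.0, 10.0)
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    a_m = 0.1 * vtrap(v + 40.0, 10.0)
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    return a_n, b_n, a_m, b_m, a_h, b_h

def simulate(i_inj=10.0, t_max=20.0, dt=0.01):
    v = -65.0
    # initialize each gating variable at its steady state at rest
    a_n, b_n, a_m, b_m, a_h, b_h = rates(v)
    n, m, h = a_n / (a_n + b_n), a_m / (a_m + b_m), a_h / (a_h + b_h)
    vs = []
    for _ in range(int(t_max / dt)):
        a_n, b_n, a_m, b_m, a_h, b_h = rates(v)
        # step 1: integrate the gating variables one time step
        n += dt * (a_n * (1.0 - n) - b_n * n)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        # step 2: total membrane current from K, Na, and leak
        i_mem = (g_K * n**4 * (v - E_K)
                 + g_Na * m**3 * h * (v - E_Na)
                 + g_L * (v - E_L))
        # step 3: integrate the voltage one time step, then cycle back
        v += dt * (i_inj - i_mem) / C
        vs.append(v)
    return vs

vs = simulate()
print(f"peak voltage = {max(vs):.1f} mV")
```

With 10 microamps per square centimeter injected, this loop produces action potentials whose peaks rise above 0 mV; with no injected current, the voltage just sits near the minus 65 mV resting potential.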
That little bit of goo falls in, h drops. That shuts off the sodium conductance. Potassium conductance finishes bringing the cell back. Beautiful, right? Yes? AUDIENCE: Is h just voltage-dependent, or is it also time-dependent? MICHALE FEE: Time-dependent in exactly the same way that n and m are time-dependent. There is a-- h infinity changes as a sum-- as a function of voltage. And then h relaxes exponentially toward h infinity. Any questions about that? So for the problem set, you'll have code for this, and you can play around with this and try different things. And then there's a particular problem that Daniel and I cooked up for you for this. I'll basically show you what that looks like. Here's the crux of it. If you inject a little bit of current into the Hodgkin-Huxley neuron, you get a spike. And then if you wait a few milliseconds and inject another current pulse, what happens? You don't get a spike. Can anybody guess why that is? AUDIENCE: h is still inactivated. MICHALE FEE: Yeah. That thing is still stuck in there and hasn't had time to fall out yet. And if you plot h, you can see that it hasn't recovered back to the state it was at the beginning. So that's called a "refractory period." So cells don't like to spike two times in a row too close together. Yes? AUDIENCE: So what things like [INAUDIBLE] h at which it's a spike? MICHALE FEE: Yeah. So you want to just like-- what would be the intuitive answer? So there's not a hard cutoff, right? If h is right here, it will be much harder. You'd have to inject a lot more current to make it spike. If h is recovered to here, then it would take a little bit less current to make it spike. So basically, there's a gradual decrease in the amount of current it would take to make the neuron spike again. So there's no one answer. So let's take a look at what happens when sodium channels go bad. [VIDEO PLAYBACK] [MUSIC PLAYING] - Most of the animals on this petting farm, on Maui, Hawaii, are sweet, but nothing too unusual. 
And then there are the goats-- Myotonic goats, to be specific-- more commonly known as stiff-legged goats, wooden-leg goats, nervous goats, fainting goats. Fainting goats are indigenous to North America. But that name is a bit of a misnomer, because they never lose consciousness when they keel over. If they're startled, a genetic condition causes their muscles to lock up. But it only lasts a few moments, and then they're back on their feet. Now, until the next time they're spooked. [END PLAYBACK] MICHALE FEE: So these fainting goats have a particular mutation in their sodium channel. Now, it turns out that the sodium channels that are in your brain that control action potentials are a different gene than the sodium channels that are in your muscles that produce muscle contractions. So you can have a mutation in the skeletal isoform of the sodium channel that produces these muscular effects without having any effect on brain function. But that same mutation in the brain isoform of the sodium channel is lethal. So this is actually a condition that exists in humans. It's called-- there are actually a whole set of these, what are called "sodium channel myotonias." One of them is called hyperkalemic periodic paralysis. And this just shows a different-- this is a different phenotype of one of these sodium channel defects. So the goats became very stiff and fell over. It turns out there's a different phenotype that looks like this. So basically, it causes extreme weakness. The muscles are completely paralyzed. They can't contract anymore, and it seems like that would be a completely different effect-- what would cause muscles to just go rigid and a very similar thing would cause paralysis-- and it turns out that actually those two things have very similar cause. That hyperkalemic-- kalemic refers to potassium. And so this condition is very sensitive to potassium levels. At high potassium levels, it's much worse than at low potassium levels. 
So there can be an attack of weakness or paralysis, and then just a few minutes later somebody's all better, and that paralysis goes away. So to understand what's going on in this condition, we need to take a look at how muscle fibers actually work. So let's take a little detour in that. So basically, let's start here with the action potential that drives muscle twitches. So the way this works is that it an action potential will propagate down an axon toward the neuromuscular junction. That action potential will cause the release of neurotransmitter that then causes current to flow into the muscle fiber. That current flowing into the muscle fiber depolarizes it, turns on sodium channels, and that causes an action potential in the muscle fiber that looks very much like the action potential that we just saw for a neuron for the squid giant axon. Now, there is this famous problem, called the "excitation contraction coupling problem," which is, how does an action potential here on the surface of a muscle fiber get down into the myofibril and cause a contraction of the muscle? So we'll get to that question, but let me just describe what these things are. So the myofibrils-- the myofibril is this little element inside of the muscle fiber itself. And these are bundles of thick fibers and thin fibers that essentially-- here, I think it's on the next slide. So let me just finish the story about how the action potential gets inside. So the action potential propagates through these little structures called transverse tubules. These are little tubes that go from the surface of the muscle fiber down into the muscle cell. They're like axons. But instead of going out from the cell body, they go into the cell body. That's pretty cool. This thing is huge. This muscle fiber is about 100 microns across. So in order for that signal to get into the myofibril to cause contraction, it actually has to propagate down an axon that goes into the muscle fiber. 
So that action potential propagates down into the t-tubules that's a voltage pulse that opens up voltage-dependent calcium channels that activate calcium release in something called the sarcoplasmic reticulum. So you may remember that in neurons the endoplasmic reticulum sequesters calcium. In a muscle fiber, the sarcoplasmic reticulum does the same thing. It's sequesters calcium. But when this voltage pulse comes down the t-tubule, it opens voltage-dependent calcium channels, which cause the release of calcium, which then activates calcium-dependent calcium release through another set of channels, and it basically floods the myofibrils with calcium. And that triggers the contraction. And here's how that works. Within these myofibrils are bundles of thick filaments, which are myosin and thin filaments, which are actin. The thick filaments are these structures right here. The actin are filaments, thin filaments, that intercalate between the myosin thick filaments. The myosin thick filaments are covered with these myosin molecules that stick out. The myosin heads that are like little feet reach out. And if they bind to the actin, then these things basically grab the actin and start walking along. And they pull the actin. They pull this actin filament this way. The ones over here walk this direction and pull this actin filament that way, and that causes these two end plates to pull, sorry, these two, what are called "z disks," to pull together. And the thing shortens. Does that make sense? And then when the contraction stops, these little feet stop walking. They relax, and those actin filaments now can relax and retract. Pretty cool, right? So how does the calcium connect to that? So the calcium goes in, floods this myofibril. The calcium goes in and binds to these little molecules, called troponin, that are sitting in grooves of the actin filaments. 
And when the calcium binds to troponin, it moves out of the way and opens up the binding site for these myosin heads to grab onto the actin filament. They grab on and they pull. And as soon as they pull, an ATP comes off. These things open up, ATP binds, boom. They pull again. So they just walk along with one ATP per cycle. Then when the calcium-- what happens is that the calcium starts being sequestered back into the sarcoplasmic reticulum that unbinds from the troponin. The troponin falls back into the groove, and the myosin heads can no longer connect to the actin. And that's the end of the muscle twitch. Pretty amazing, right? So what goes wrong when sodium channels are inactivated? And that's what we're going to talk about next-- when sodium channels fail to inactivate. So here's what the sodium channel looks like. There are these clusters of transmembrane alpha-helices. These things together, these four things together, form the pore. And there's a loop between them here that produces the inactivation. And you can see, if you look at the sites of different mutations of the sodium channel that produce defective inactivation, they tend to be clustered in these cytoplasmic loops of the sodium channel. So myotonia and the periodic paralysis that we just saw in those movies are caused by these different sets of mutations on those loops. And again, for these myotonias, these mutations are in the skeletal isoform of the sodium channel. So now, what do those mutations actually do to this? So now, let's take a look at-- let's do a patch clamp experiment, where we take muscle fiber from a wild-type. So you can just take a muscle biopsy-- extract a little pinch of muscle. You can culture it in a dish. And you can do that for wild-type, normal human muscle fibers. And you can do it for muscle fibers from a person with this particular mutation of this sodium channel. 
And you can see that, just like for the sodium channels in neurons, depolarizing produces brief openings that are aligned with the time when you do the depolarization step. And then there are no more openings. The sodium channels turn on, and then that gating variable, that inactivation gate, shuts off the pores, and there are no more openings. But in the muscle fiber that has this mutation, you can see that you get this burst of openings right at the time of depolarization, but you keep getting openings at later times. And if you plot the average current over many trials, you can see in normal fibers there's this very brief pulse of opening, and in these muscle fibers with a mutation, there is a constant extended high probability of that sodium channel turning on, opening up. And that's what causes all the problems right there. In these conditions, that only represents about a 2%, a 0.02 probability, of turning on at a time when a normal muscle fiber would be inactivated. So you can actually study these things in more detail. So this shows a set of experiments that were done in rat fast twitch muscle. This shows a control, and this shows a muscle fiber that's been treated with a toxin that comes from a sea anemone, which uses this toxin to help catch prey. And it turns out, what that toxin does is it mimics the effect of this defective inactivation of the sodium channel. So you can see that applying this toxin also produces these extended openings, or failures to inactivate. If you take that toxin and apply it to a muscle fiber, you see something really interesting. You take a muscle fiber. You can hook it up to-- tie a string to one end, and tie a string to the other end, and kind of pull it tight a little bit, and measure the force that that muscle fiber is exerting. So you can measure force as a function of time. 
If you stimulate that muscle fiber with a little electrical shock, you can elicit what's called a muscle twitch. And in the presence of this ATXII toxin, you can see that that twitch is very extended in time. Is there a question? Did I see a hand? No. So what's going on? So you can now record from this muscle fiber when it's been treated with this toxin that produces what's called a myotonic run. And you can see that a normal muscle fiber produces a single or maybe two action potentials when you depolarize it. That's what a muscle fiber normally does. But when you treat it with this ATXII, it generates many action potentials. Now, why would that be? Does that make sense? We're going to explore why that is. We're going to look at a particular model for how the failure of the sodium channel to inactivate produces these myotonic runs. What's really crazy is that after you turn off that current injection that activates the muscle fiber, the muscle fiber keeps spiking. That continued spiking corresponds to continued contraction of the muscle. So you can trigger a normal muscle to generate some action potentials, and that produces a very brief twitch. But in these muscles with this mutated sodium channel-- in this case it's with the toxin, but the same thing happens in the muscle fibers with the mutated sodium channel-- it produces continued contraction of the muscle. And that's what was happening to the goats. Their muscles contracted, and then they didn't relax. And so they were stiff like this, and then they fell over. Now, that's called a myotonic run. It's really interesting and was a big clue to what the mechanism is that produces this. If you take these muscle fibers and you put them into a solution that doesn't have the right osmolarity-- too many ions, too high an osmolarity, or too low an osmolarity, like pure water, for example-- that produces what's called an osmotic shock. 
And what it does is it breaks all the t-tubules from the membrane. So it doesn't break the membrane, but it disconnects all the t-tubules from the membrane. Now, what happens is you see the myotonic run goes away. So something about the t-tubules is causing this myotonic run. So there's a really beautiful set of papers from David Corey and a person named Cannon, who proposed a hypothesis for why this actually happens, and I'll walk you through the hypothesis right now. So here's the idea. So when you have an input from a motor neuron onto the muscle fiber, you get synaptic input to the muscle fiber. So this is the motor neuron synapse. That's the muscle fiber. So you should think about this as being a very long cell here, and here's a t-tubule that's represented by a channel coming in from the surface. So this is a cross-section of the muscle fiber. So the idea is that that current injection causes an action potential, which causes sodium to flow into the cell. And on the hyperpolarizing phase of the action potential, potassium goes out of the cell to bring the cell back down to a negative voltage. Now, that action potential propagates into the t-tubule, which means you're going to have sodium flowing into the cell and potassium flowing out of the cell. But out of the cell means into the t-tubule, right? So what normally happens is, after an action potential, you're left with an excess of potassium in the t-tubule. So what do you think is going to happen, anybody? Think back to the first lecture. Yeah? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah, there's going to be some pumping going on here. But actually, most of the potassium gets out of the t-tubule by a different mechanism. It gets out by diffusion. So these extra potassium ions diffuse out through that t-tubule back into the extracellular space. Now, can we estimate how long it takes that to happen? Any idea how we would do that? Anybody want to take a guess? 
Does anyone remember how long it takes an ion to diffuse across, let's say, a cell body, 10 microns? Kind of a few tens of milliseconds, right, 50 milliseconds? This thing is about 25 microns long. And so it will be maybe four times that. So that timescale we can calculate by just using our equation for the relation between time and distance for diffusion, and you find that that's about 300 to 400 milliseconds. So that's how long it takes those potassium ions to diffuse out of the t-tubule. Now, what happens when we have a sodium channel that isn't inactivating? What happens is you're going to get a lot more spikes. You're going to get a lot more spikes generated, because this sodium current turns on, but now it's not properly inactivating. And so you're going to get extra spikes. And those failures to inactivate produce extra spikes, and extra spikes means you're going to have a lot more potassium going into the t-tubule. So what is all that-- and remember, we now have 300 or 400 milliseconds before that potassium can get out of the t-tubule by diffusion. So what's going to happen when you have all that extra potassium in the t-tubule? What's it going to do? Yeah, [INAUDIBLE]? AUDIENCE: It corrects the muscle fiber [INAUDIBLE]. MICHALE FEE: Yeah. So remember, the equilibrium potential, the negative equilibrium potential of the muscle fiber, which is normally, like in any cell, down around minus 80, that negative potential is caused because there's so much more potassium inside the cell than outside the cell. And so the potassium ions are normally kind of leaking out of a cell, and that keeps the membrane potential low. But now, if you-- remember, this is outside the cell. So you have now, suddenly, a very high concentration of potassium ions outside the cell. And what do they do? They push their way back in. They start diffusing back in, which does what to the cell? You now have potassium ions going the wrong way, which does what? I think you already gave the answer. 
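The 300 to 400 millisecond estimate above can be checked with the one-dimensional diffusion relation t ≈ L²/(2D). A small caveat: the diffusion coefficient below is not given in the lecture; it is inferred from the lecture's own rule of thumb that diffusing across a 10-micron cell body takes roughly 50 ms.

```python
# Back-of-envelope diffusion time, t ≈ L^2 / (2*D).
# D is inferred from the lecture's rule of thumb (10 microns in ~50 ms);
# the exact value of D is an assumption, not a number from the lecture.
L_cell = 10e-6                     # m, cell-body scale from the lecture
t_cell = 0.05                      # s, ~50 ms quoted in the lecture
D = L_cell**2 / (2 * t_cell)       # => 1e-9 m^2/s

L_tubule = 25e-6                   # m, t-tubule length
t_tubule = L_tubule**2 / (2 * D)   # diffusion time out of the t-tubule

# about 300 ms, consistent with the 300-400 ms quoted in the lecture
print(f"D = {D:.1e} m^2/s, t_tubule = {t_tubule*1e3:.1f} ms")
```

Note that the answer scales quadratically with length, which is why a 25-micron tubule takes not 2.5x but roughly 6x longer than a 10-micron cell body.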
Say it again. AUDIENCE: Depolarizes it. MICHALE FEE: Depolarizes the cell. Puts potassium back in, and it depolarizes the cell. And what is that going to do? AUDIENCE: Cause more spikes. MICHALE FEE: Cause more spikes, which is going to do what? Push more potassium into the t-tubule. It's a runaway instability. So that's kind of a cool hypothesis, right? You could imagine all sorts of experiments to test this. Like you could put a little thing in there to measure potassium concentration in the t-tubule. Well, that's only a few microns across. How do you test this hypothesis? How would you-- it's a great idea. But how do you know if it even makes any sense when you put it all together, any suggestions? Yeah? AUDIENCE: [INAUDIBLE] the potassium. MICHALE FEE: Yep. So it's already known that at low potassium this problem is less severe. The disease is even named after that observation-- hyperkalemic periodic paralysis. Any other suggestions? What are we here for? What is this class? Introduction to neural computation, right? So what can we do? This is a word model, right? You could put it all together, and when you model it, it might make no sense whatsoever-- there might be something wrong with the word model. Neuroscience is full of word models. The only way to know if a word model makes any sense is to actually write down some equations and see if it works the way you think it is going to work. See if your word model translates into math. And so that's what David Corey and Cannon did. They took this picture, and they developed a model for what that looks like. And it started with just the Hodgkin-Huxley model. Here's Hodgkin-Huxley. That's what we've been using all along. They added another little compartment that represents the conductances and the batteries associated with the membrane in the t-tubule. And notice, there's an EK here. What does EK depend on? AUDIENCE: [INAUDIBLE] MICHALE FEE: Say it again. 
EK depends on-- AUDIENCE: [INAUDIBLE] MICHALE FEE: Of-- AUDIENCE: Potassium. AUDIENCE: Potassium. MICHALE FEE: Of potassium ions, and potassium ions are changing. So let's actually-- so this part you already know. That's just Hodgkin and Huxley with a few extra resistors attached to the side of it. What about the potassium part? Let's just flesh out that model a little bit more to see how spiking activity would lead to changes in potassium, how that change in potassium would change the battery, and how that would feedback and change the spiking activity. So let's do that. So we're going to imagine that we are going to model our potassium conductance in here. So we're going to write down a variable that's the potassium concentration inside the t-tubule. And what is going to affect that potassium concentration? What are the sources of potassium? What are the sinks of potassium, anybody? Well, one is just diffusion. So we can model that, and that looks an awful lot, actually, like Fick's first law. So the change in potassium concentration as a function of time has a contribution from the difference between the potassium concentration inside and outside. That rate of change through diffusion is proportional to the difference in concentration inside and outside divided by that time constant that we've just calculated. Now, what-- so that's how potassium leaves. That's one way that potassium leaves. So the potassium gets into the t-tubule at a rate that's just proportional to the potassium current. The rate of change of the potassium concentration is proportional to the potassium current. And the potassium current-- so let's just flesh this out a little bit more. This, we already calculated. This is the conductance times the driving potential. But that current, we have to do a little bit of changes of units to get current into the right units for a change in potassium concentration as a function of time. 
So current is coulombs per second, and here we have moles per liter per second. So we need to divide by two things. We need the volume of the t-tubule, and we need Faraday's constant, which is just coulombs per mole. That's a well-known number that you can just look up. Multiply those two things together, you get the contribution of the potassium current to the rate of change of potassium concentration. The potassium current is just conductance times driving potential. Notice the EK is a function of potassium concentration. I haven't written it in here, but that's just the Nernst potential. And so we have a differential equation for the potassium concentration as a function of time. It's a function of the potassium concentration voltage and equilibrium potential. And now, we just take that and add it to the code that we already have for Hodgkin and Huxley. And here's what you get. So here's a normal muscle fiber. You get a single action potential. What they did was they modeled-- they made some fraction of those ion channels fail to inactivate. And here's what happens to the model when you make 2% sodium channels fail to inactivate. You see that you get this large number of action potentials, because the sodium channels are not inactivating properly. And when you turn the current off, you get this high potassium concentration in the t-tubule that's now causing additional spikes. That is continued contraction of the muscle. That is this myotonia. The model is exhibiting myotonia. How do you explain periodic paralysis? That's totally different, right? Now the muscle just goes completely limp. How do you do that? Any thoughts about this? What do you think would happen if we made a slightly larger fraction of the sodium channels fail to inactivate? Here's what happens. You get more and more action potentials. And at some point, what happens is the voltage just locks up. The sodium channels go into a different state where the system is no longer oscillating. 
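Two pieces of the derivation above can be made concrete with a quick sketch: the unit conversion from a potassium current to a rate of concentration change (divide by Faraday's constant and the tubule volume), and how sensitive the Nernst potential E_K is to potassium accumulating in the t-tubule. The current and volume used here are illustrative assumptions, not values from the lecture.

```python
import math

# (1) I_K -> d[K]/dt: divide the current by Faraday's constant and the
#     t-tubule volume. Both numbers below are assumed for illustration.
F = 96485.0             # C/mol, Faraday's constant
I_K = 1e-12             # A, hypothetical K+ current into the tubule
vol = 3e-17             # L, rough t-tubule volume (assumed geometry)
dKdt = I_K / (F * vol)  # mol/L per second
print(f"d[K]/dt = {dKdt:.3f} M/s = {dKdt*1e3:.0f} mM/s")

# (2) E_K = (RT/F) ln([K]_out / [K]_in). "Out" here is the tubule lumen,
#     so E_K tracks the tubule K+ concentration. Doubling it from 4 to
#     8 mM depolarizes E_K by (RT/F) ln 2, about 18.5 mV at body temp.
RT_F = 0.0267           # V, RT/F at ~37 C
K_in = 140.0            # mM, intracellular K+
EK_rest = RT_F * math.log(4.0 / K_in) * 1e3   # mV
EK_high = RT_F * math.log(8.0 / K_in) * 1e3   # mV
print(f"E_K: {EK_rest:.1f} mV -> {EK_high:.1f} mV "
      f"({EK_high - EK_rest:+.1f} mV depolarizing)")
```

The second calculation is the heart of the instability: even a modest rise in tubule potassium shifts E_K tens of millivolts in the depolarizing direction.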
It's just fixed at a high voltage. It's called depolarization block, and it's what happens when there's no longer enough-- there aren't enough sodium channels active to give you spiking, but there are enough non-inactivated sodium channels to just hold the voltage high. And this muscle fiber is no longer able to contract, and it's completely flaccid. It's completely loose. And so this is the hyperkalemic periodic paralysis. So you get both of these really interesting phenotypes in this disease just depending on one little parameter, which is what fraction of these sodium channels are failing to inactivate. And so you can see, you get this very complex phenotype from a simple mutation of an ion channel. And in order to understand really how it's behaving, you have to do modeling like this. It's the way you understand a system and how it works. Until you do this, you don't really understand it. So I'll leave it there. Thank you.
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 20: Hopfield Networks
MICHALE FEE: Today, we're going to finish up with recurrent neural networks. So as you remember, we've been talking about the case where we have a layer of neurons in which we have recurrent connections between neurons in the output layer of our network. And we've been developing the mathematical tools to describe the behavior of these networks and describe how they respond to their inputs. And we've been talking about the different kinds of computations that recurrent neural networks can perform. So you may recall that we started talking about-- we introduced the math or the concept of how to study recurrent neural networks by looking at the simplest recurrent network that has a single-- it's a single neuron with a recurrent connection called an autapse. A recurrent connection has a strength lambda. And we can write down-- let's see. So we can write down the equation for this, the response of this neuron, without a recurrent connection as tau dv dt equals minus v. The minus v is essentially a leak term, so that if you put input into the neuron, the response of the neuron jumps up and then decays exponentially in response to an input, h. If we have a recurrent connection lambda, then there's an additional input to the neuron that's proportional to the firing rate of the neuron. We can rewrite that equation now as tau dv dt equals minus quantity one minus lambda times v plus the input. And the behavior of this simple recurrent neural network depends strongly on the value of this coefficient one minus lambda. And we've talked about three different cases. We've talked about case where lambda is less than one, where lambda is equal to one-- in which case, this coefficient is zero-- and when lambda is greater than one. So let's look at those three cases again for this equation. So when lambda is less than one, you can see that this quantity right here, this coefficient in front of the v is negative. 
And what that means is that the firing rate of this neuron relaxes exponentially toward some v infinity. And then when the input goes away, the firing rate decays exponentially towards zero. OK, so in the case where lambda is equal to one, you can see that this coefficient is zero. And now you can see that the derivative of the firing rate of the neuron is just equal to the input. What that means is that the firing rate of the neuron essentially integrates the input. And you can see, if you put a step input into this neuron with this recurrent connection of lambda equal one, that the response of the neuron simply ramps up linearly, which corresponds to integrating that step input. And then when the input is turned off and goes back to zero, you can see that the firing rate of the neuron stays constant. And that's because the leak is exactly balanced by this excitatory recurrent input from the neuron onto itself. So you can see that for the case of lambda equals one, there's persistent activity after you put an input into this neuron. And we talked about how this forms a short-term memory that can be used for a bunch of different things. It's a short-term memory of a scalar, or a continuous quantity, like eye position. Or we talked about short-term memory integration being used for path integration or for accumulating evidence over long exposure to a noisy stimulus. So today, we're going to focus on networks where this lambda is greater than one. And in that case, you can see that the differential equation looks like this. So if lambda is greater than one, then the quantity inside the parentheses here is negative. But that's multiplied by a minus one. So the coefficient in front of the v is positive. So if v itself is a positive number, then dv dt is also positive. 
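The three regimes just described can be seen directly by Euler-integrating the autapse equation tau dv/dt = -(1 - lambda) v + h. The time constant, step amplitude, and timing below are illustrative assumptions, not the lecture's values.

```python
import numpy as np

# Euler simulation of the autapse: tau dv/dt = -(1 - lambda)*v + h,
# for the three regimes: leaky (lambda < 1), integrator (lambda = 1),
# and runaway (lambda > 1). All parameter values are assumed.
tau, dt = 0.010, 0.0001                 # 10 ms time constant, 0.1 ms steps
t = np.arange(0, 0.2, dt)               # 200 ms of simulated time
h = np.where((t > 0.02) & (t < 0.1), 1.0, 0.0)   # step input, 20-100 ms

def run_autapse(lam):
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        dv = (-(1.0 - lam) * v[i-1] + h[i-1]) / tau
        v[i] = v[i-1] + dv * dt
    return v

v_leak = run_autapse(0.5)   # relaxes during input, decays back to 0 after
v_int  = run_autapse(1.0)   # integrates: ramps up, then holds its value
v_run  = run_autapse(1.5)   # grows exponentially, even after input ends

print(v_leak[-1], v_int[-1], v_run[-1])
```

The lambda = 1 trace ends near 8 (the 80 ms step integrated with gain 1/tau), the lambda < 1 trace ends near zero, and the lambda > 1 trace is enormous: only the integrator holds a finite memory without a nonlinearity.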
So if v is positive and dv dt is positive, then what that means is that the firing rate of that neuron is growing and, in this case, is growing exponentially. So that when you put an input in, the response of the neuron grows exponentially. But when you turn the input off, the firing rate of the neuron continues to grow exponentially, which is a little bit crazy. You know that neurons in the brain, of course, don't have firing rates that just keep growing exponentially. So we're going to solve that problem by using nonlinearities in the F-I curve of neurons. But the key point here is that this kind of network actually remembers that there was an input, as opposed to this kind of network, where, when the input goes away, the activity of the network just decays back to zero. This kind of network has no memory that there was an input long ago in the past. Whereas, this kind of network remembers that there was an input. And so that kind of property when lambda is greater than one is useful for storing memories. So we're going to expand on that idea. In particular, we're going to use that theme to build networks that have attractors, that have stable states that they can go to, that depend on prior inputs, but also can be used to store long-term memories. All right? We're going to see how that kind of network can also be used to produce a winner-take-all network that is sensitive to which of two inputs is stronger and stores a memory of which input was stronger. It ends up in one state when, let's say, input one is stronger than input two, and it lands in a different state when input two is stronger than input one. We're going to then describe a particular model, called a Hopfield model, for how attractor networks can store long-term memories. We're going to introduce the idea of an energy landscape, which is a property of networks that have symmetric connections, of which the Hopfield model is an example. 
And then we're going to end by talking about how many memories such a network can actually store, known as the capacity problem. OK, so let's start with recurrent networks with lambda greater than one. So let's start with our autapse. Let's put lambda equal to 2. And again, you can see that if we rewrite this equation with lambda greater than one, we can write tau dv dt equals lambda minus one times v plus h. You can see that the value of zero, at the firing rate of zero, is an unstable fixed point of the network. Why is that? Because at v equals zero, with no input, dv dt equals zero. So what that means is that if the firing rate is exactly zero, that's a fixed point of the system. But if v deviates very slightly from zero, v becomes very slightly positive, then dv dt is positive, and the firing rate of the neuron starts running away. So what you can see is if you start the firing rate at zero and have the input at zero, then dv dt is zero, and the network will stay at zero firing rate. But if you put in a very slight, a very small input, then dv dt goes positive, and the network activity runs away. Now, let's put in an input of the opposite sign. So now let's start with v equals zero and put in a very tiny negative input. What's the network going to do? So tau dv dt equals v plus h. So if h is very slightly negative and v is zero, then dv dt will be negative, and the network will run away in the negative direction. So this network actually can produce two memories. It can produce a memory that a preceding input was positive, or it can store a memory that a preceding input was negative. So it has two configurations after you've put in an input that is positive or negative, right? It can produce a positive output or a negative output that's persistent for a long time. Yes? AUDIENCE: Is the [INAUDIBLE] of a negative firing rate [INAUDIBLE]? MICHALE FEE: Yeah. 
So you can basically reformulate everything that we've been talking about for neurons that can't have negative firing rates. But in this case, we've been working with linear neurons. And it seems like the negative firing rates are pretty non-physical, non-intuitive. But it's a pretty standard way to do the mathematical analysis for neurons like this, is to treat them as linear. But you can sort of reformulate all of these networks in a way that doesn't have that non-physical property. So for now, let's just bear with this slightly uncomfortable situation of having neurons that have negative firing rates. Generally, we're going to associate negative firing rates with inhibition, OK? But don't worry about that here. All right, so we're going to solve this problem that these neurons have firing rates that are kind of running away exponentially by adding a nonlinear activation function. So a typical nonlinear activation function that you might use for networks of the type we've been considering is a symmetric F-I curve, where if the input is positive and small, the firing rate of the neuron grows linearly, until you reach a point where it saturates. And larger inputs don't produce any larger firing rate of the neuron. So most neurons actually have kind of a saturating F-I curve like this-- the Hodgkin-Huxley neurons begin to saturate. Why is that? Because the sodium channels begin to inactivate, and there's a minimum time between spikes-- a fastest rate at which the neuron can spike-- because of sodium channel inactivation. And then on the minus side, if the input is small and negative, then the firing rate of the neuron goes negative linearly for a while and then saturates at some value. And we typically have the neuron saturating between one and minus one. 
So now, if you start your neuron at zero firing rate and you put in a little positive input, what's the neuron going to do? Any guesses? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah. It's going to start running up exponentially, but then it's going to saturate up here. And so the firing rate will run up and sit at one. And if we put in a negative input, a small negative input, then the neuron-- then this little recurrent network will go negative and saturate at minus one, OK? So you can see that this network actually has one unstable fixed point, where if it sits exactly at zero, it will stay at zero, until you give a little bit of input in either direction. And then the network will run up and sit at another fixed point here of one. If you put in a big negative input, you can drive it to another fixed point. And these two are stable fixed points, because once they're in that state, if you give little perturbations to the network, it will deviate a little bit from that value. If you give a small negative input, you can cause this to decrease a little bit. But then when the input goes away, it will relax back. So this is an unstable fixed point, and these are two stable fixed points. Now, we're going to come back to this in more detail later. But we often think about networks like this as sort of like a ball on a hill. So you can imagine that you can describe this network using what's called an energy landscape, where if you start this system at some point on this sort of valley-shaped hill, all right, the network sort of-- it's like a ball that rolls downhill. So if you start the network exactly at the peak, the ball will sit there. But if you give it a little bit of a nudge, it will roll downhill toward one of these stable points, OK? If you start it slightly on the other side, it will roll this way, OK? And those stable fixed points are called attractors. 
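The bistability just described can be sketched by adding a saturating nonlinearity to the lambda = 2 autapse. One caveat: the lecture's F-I curve is piecewise-linear with saturation at plus and minus one, while this sketch uses tanh as a smooth stand-in, so the stable attractors land near plus or minus 0.96 rather than exactly at plus or minus one.

```python
import numpy as np

# Bistable autapse sketch: tau dv/dt = -v + tanh(lambda*v + h).
# tanh is an assumed stand-in for the symmetric saturating F-I curve.
tau, lam, dt = 0.010, 2.0, 0.0001

def settle(v0, h=0.0, T=0.3):
    """Euler-integrate from initial rate v0 with constant input h."""
    v = v0
    for _ in range(int(T / dt)):
        v += dt * (-v + np.tanh(lam * v + h)) / tau
    return v

print(settle(0.0))     # balanced exactly on the unstable fixed point: stays 0
print(settle(0.01))    # a tiny positive nudge -> settles near +0.96
print(settle(-0.01))   # a tiny negative nudge -> settles near -0.96
```

This is the ball-on-the-hill picture in code: the state at zero is a fixed point, but any perturbation rolls it down into one of the two attractors.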
And this particular network has two attractors-- one with a firing rate of one and one at a firing rate of minus one. Yes, Appolonia? AUDIENCE: The stable fixed points of the top graph, where'd you say they were? MICHALE FEE: The stable fixed point is here, because once you-- if the system is in this state, you can give slight perturbations and the system returns to that fixed point. This is an unstable fixed point, because if you start the system there and give it a little nudge in either direction, the state runs away. Does that make sense? AUDIENCE: Yeah. MICHALE FEE: Any questions about that? Yes? AUDIENCE: How is the shape of the curve [INAUDIBLE] points determined based on like-- MICHALE FEE: So I'm going to get-- I'm going to come back to how you actually calculate this energy landscape more formally. There's a very precise mathematical definition of how you define this energy landscape. All right, so this was all for the case of one neuron, all right? So now let's extend it to the case of multiple neurons. So let's just take two neurons with an autapse. One of these autapses has a strength of two, and the other autapse has a strength of minus two. So this one is recurrent and excitatory. This one is recurrent and inhibitory. So now what we're going to do is we can plot the state of the network. Now, instead of being the state of the network in one dimension, v, we're now going to have v1 and v2. So the state of the system is going to be a point in a plane given by v1 and v2. So now, by looking at this network, you can see immediately that this particular neuron, this neuron with a firing rate of v2, looks like the kind of network that we've already studied, right? It has a stable fixed point at zero. And this network has two stable fixed points-- one at one and the other one at minus one. So you can see that this system will also have two stable fixed points-- one there and one there, right? 
Because if I take the input away, this neuron is either going to one or minus one, and this neuron is going to go to zero. So there's one and minus one on the v1 axis. And those two states have zero firing rate on the v2 axis. Is that clear? So now what's going to happen if we made this autapse have a strength of two? Anybody want to take a guess? AUDIENCE: That's, like, four attractors? MICHALE FEE: Right. Why is that? AUDIENCE: Because that will also have stable fixed points at [INAUDIBLE]. MICHALE FEE: Right. So this one will have stable fixed points at one and minus one. This will also have stable fixed points at one and minus one, right? And the system can be in any one of four states-- 0, 0. Sorry, 1, 1; minus 1, minus 1; 1 minus 1; and minus 1, 1. That's right. All right, so I just want to make one other point here, which is that no matter where you start the system for this network, it's going to evolve towards one of these stable fixed points, unless I started it exactly right there at zero. That will be another fixed point, but that's an unstable fixed point. OK, so this system will-- no matter where I start the state of that system, other than that exact point right there, the network will evolve toward one of those two attractors. That's why they're called attractors, because they attract the state of the system toward one of those two points. Yes? AUDIENCE: So are the attractors determined by the nonlinear activation function? MICHALE FEE: They are. So if this non-linear activation function saturated at two and minus two, then these two points would be up here at two and minus two. So you could see that this network has two eigenvalues, right? If we think of it as a linear network, this network has two eigenvalues. The connection matrix is given by a diagonal matrix with a two and a minus two along the diagonals, right? So let's take a look at this kind of network. 
Now, instead of an autapse network, we have recurrent connections of strength minus 2 and minus 2. So what does that weight matrix look like? AUDIENCE: 0, minus 2; minus 2, 0. MICHALE FEE: 0, minus 2; minus 2, 0, right? Well, what are the eigenvalues of this network? Anybody remember that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Right. It's a plus b and a minus b. And so the eigenvalues of this network are 0 plus negative 2 and 0 minus negative 2. So it's 2 and minus 2, right? So this network here will have exactly the same eigenvalues as this network. But what's going to be different? What are the eigenvectors? AUDIENCE: The 45. MICHALE FEE: The 45 degrees. So the eigenvectors of this network are the x- and y-axes. The eigenvectors of this network are the 45-degree lines. So anybody want to take a guess as to what the stable states of this-- it's just this network rotated by 45 degrees, right? So those are now the attractors of this network, right? And that makes sense, right? This neuron can be positive, but that's going to be strongly driving this neuron negative. But if this neuron is negative, that's going to be strongly driving this neuron positive, right? And so this network will want to sit out here on this line in this direction or in this direction. And because of the saturation-- if there were no saturation, if this were a linear network, the activity of this neuron would just be running exponentially up these 45-degree lines. But because of the saturation, it gets stuck here at 1, minus 1. Or rather, minus 1, 1 or 1, minus 1. Any questions about that? Yeah, Jasmine? AUDIENCE: So the two fixed points right now, like it's [INAUDIBLE]? MICHALE FEE: Yeah. It'll be one in this direction and one in that direction. AUDIENCE: So why [INAUDIBLE]? MICHALE FEE: Because this neuron is saturated. Because the saturation is acting at the level of the individual neurons. AUDIENCE: OK. MICHALE FEE: So each neuron will go up to its own saturation point. OK? All right. 
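The eigenvalue claims above are easy to verify numerically: the cross-inhibition matrix [[0, -2], [-2, 0]] has the same eigenvalues, plus 2 and minus 2, as the two-autapse network, but its eigenvectors lie along the 45-degree diagonals.

```python
import numpy as np

# Verify the eigenstructure of the cross-inhibition weight matrix.
M = np.array([[0.0, -2.0],
              [-2.0, 0.0]])

# The matrix is symmetric, so use eigh (returns ascending eigenvalues,
# eigenvectors as columns).
evals, evecs = np.linalg.eigh(M)
print(evals)    # eigenvalues -2 and +2, same as the two-autapse network
print(evecs)    # columns are unit vectors along the +/-45 degree diagonals

# The rule quoted in the lecture: for [[a, b], [b, a]] the eigenvalues
# are a + b and a - b.
a, b = 0.0, -2.0
print(a + b, a - b)    # -2.0  2.0
```

Every component of each eigenvector has magnitude 1/sqrt(2), which is exactly the statement that the eigenvectors point along the 45-degree lines.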
So this kind of network is actually pretty cool. This network can implement decision-making. It can decide, for example, whether one input is bigger than the other, all right? So if we have an input-- so let's start our network right here at this unstable fixed point, all right? We've carefully balanced the ball on top of the hill, and it just sits there. And now let's put an input that is in this direction h, so that it's slightly pointing to the right of this diagonal line. So what's going to happen? It's going to kick the state of the network up in this direction, right? But we've already discussed how if the network state is anywhere on either side of that line, it will evolve toward the fixed point. If the h is on the other side, it will kick the network off the unstable fixed point into this part of the state space. And then the network will evolve toward this fixed point, OK? These half planes here, this region here, is called the attractor basin for this attractor. And on this side, it's called the attractor basin for that attractor, OK? And you can see that this network will be very sensitive to whichever input, h1 or h2, is slightly larger. So let me show you what that looks like in this little movie. So we're going to start with our network exactly at the zero point. And we're going to give an input in this direction. And you can see that we've kicked the network slightly this way. And now the network evolves toward the fixed point, and it stays there. Now if we give a big input this way, we can push the network over, push it to the other side of this dividing line between the two basins of attraction, and now the network sits here at this fixed point. We can kick it again with another input and push it back. So it's kind of like a flip-flop, right? It's pretty cool. It detects which input was larger, pushes the network into an attractor that then remembers which input was larger for, basically, as long as the network-- as long as you allow the network to sit there. OK?
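The winner-take-all flip-flop just described can be sketched with a small Euler-integration script. The weights, time step, and clipping nonlinearity below are illustrative choices, not the lecture's actual demo code:

```python
# Two mutually inhibitory neurons with a saturating activation, relaxed under a
# constant input (h1, h2). Whichever input is slightly larger pushes the state
# into that attractor's basin, and the network then holds the decision.

def clip(x):
    """Saturating activation, clipped to [-1, 1]."""
    return max(-1.0, min(1.0, x))

def run(h1, h2, steps=2000, dt=0.01):
    """Euler-integrate the rate dynamics; return the final (rounded) state."""
    w = -2.0                  # mutual inhibition strength
    v1 = v2 = 0.0             # start balanced on the unstable fixed point
    for _ in range(steps):
        v1 += dt * (-v1 + clip(w * v2 + h1))
        v2 += dt * (-v2 + clip(w * v1 + h2))
    return round(v1), round(v2)

# A small bias toward either input decides which attractor the network lands in:
print(run(0.1, 0.0))   # input favors neuron 1 -> state (1, -1)
print(run(0.0, 0.1))   # input favors neuron 2 -> state (-1, 1)
```

Note that with no input at all the state would just sit at the balanced point; any small asymmetry in h breaks the tie, which is what makes this usable as a decision circuit.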
All right, any questions about that? Yes, Rebecca? AUDIENCE: Sorry. So the basin is just like each side of that [INAUDIBLE]?? MICHALE FEE: That's right. That's the basin of attraction for this attractor. If you start the network anywhere in this half plane, the network will evolve toward that attractor. And you can use that as a winner-take-all decision-making network by starting the network right there at zero. And small kicks in either direction will cause the network to relax into one of these attractors and maintain that memory. Now let's talk about sort of a formal implementation of a system for producing memories, long-term memories, all right? And that's called a Hopfield model. And the Hopfield model is actually one of the best current models for understanding how memory systems like the hippocampus work. So the basic idea is that we have neurons in the hippocampus, in particular in the CA3 region of the hippocampus, that have very prominent-- a lot of recurrent connectivity between those neurons, all right? And so you have input from entorhinal cortex and from the dentate gyrus that sort of serve as the stimuli that come into that network and form-- and burn memories into that part of the network by changing the synaptic weights within that network. [INAUDIBLE] that some time later, when similar inputs come in, they can reactivate the memory in the hippocampus. And you recognize and remember that pattern of stimuli. All right, so we're going to-- actually, so an example of how this looks when you record neurons in the hippocampus, it looks like this. So here's a mouse or a rat with electrodes in its hippocampus. If you put it in a little arena like this, it will run around and explore for a while. You can record where the rat is in that arena [AUDIO OUT] from neurons. And measure when the neurons spike and look at how the firing rate of those neurons relates to the position of the animal. 
So the black trace here shows all of the locations where the rat was when it was running around the maze, and the red dot shows where one of these neurons in CA3 of the hippocampus generated a spike, where the rat was when that neuron generates a spike. And those are shown with red dots here. And you can see that this neuron generates spiking when the animal is in a particular restricted region of the cage, of its environment. And different neurons show different localized regions. So these regions are called place fields, because they are the places in the environment where that neuron spikes. Different neurons have different place fields. You can actually record from many of these neurons-- and looking at the pattern of neurons that are spiking, you can actually figure out where the rat was or is at any given moment, just by looking at which of these neurons is spiking. That's pretty obvious, right? If this neuron is spiking and this neuron isn't, all these other neurons, then the animal is going to be-- you know that the animal is somewhere in that location right there. All right, so in a sense, the activity of these neurons reflects the animal remembering, or sort of remembering, that it's in a particular location. It's in a cage. It looks at the walls of the environment. It sees a little-- they use colored cards on the wall to give the animal cues as to where it is. So they look around and they say, oh, yeah, I'm here. In my environment, there's a red card there and a yellow card there, and that's where I am right now. So that's the way you think about these hippocampal place fields as being like a memory. On top of that, this part of the hippocampus is necessary for the actual formation of memories in a broader sense-- not just spatial locations, but more generally in terms of life events, right? For humans, the hippocampus is an essential part of the brain for storing memories. All right, so let's come back to this idea of our recurrent network.
And what we're going to do is we're going to start adding more and more neurons to our recurrent network. All right, so here's what the attractor looked like for the case where we have one eigenvalue in the system that's greater than one, another one that's less than one. If we now make both of these neurons have recurrent connections that are stronger than one, now we're going to have four attractors, right? Each one of these has two stable fixed points-- a one and minus one. So here, for these two states, v1 is one. And for these two states, v1 is negative 1. For these two states, v2 is 1, and these two states, v2 is negative one, all right? So you can see every time we add another neuron or another neuron to our network that has an autapse, every time we add another neuron with another eigenvalue, we add more possible states of the network, OK? So if we have one neuron with an autapse eigenvalue greater than one, we have two states. If we have two, we have four states. If we have three of those, we have eight states. So you can see that if we have n of these neurons with recurrent excitation with a lambda of greater than one, we have 2 to the n possible states that that system can be in, OK? So I don't know exactly how many neurons are in CA3. It has to be several million, maybe 10 million. We don't know the exact number. But 2 to that is a lot of possible states, right? So the problem is that-- so let's think about how this thing acts as a memory. So it turns out that this little device that we've built here is actually a lot like a computer memory. It's like a register, where we can write a value. So we can write in here a 1, minus 1, 1. And as long as we leave that network alone, it will store that value. Or we can write a 1, 1, 1, and it will store that value. But that's not really what we mean when we talk about memories, right? We have a memory of meeting somebody for lunch yesterday, right?
That is a particular configuration of sensory inputs that we experienced. So the other way to think about this is this kind of network is just a short-term memory. We can program in some values-- 1, 1, 1. But if we were to turn the activity of these neurons off, we'd erase the memory, right? How do we build into this network a long-term memory, something that we can turn all these neurons off and then the network sort of goes back into the remembered state? You do that by building connections between these neurons, such that only some of these possible states are actually stable states, all right? So let me give you an example of this. So if you have a whole bunch of neurons-- n neurons. You've got 2 to the n possible states that that network can sit in. What we want is for only some of those to actually be stable states of the system. So, for example, when we wake up in the morning and we see the dresser or maybe the nightstand next to the bed, we want to remember that's our bedroom. We want that to be a particular configuration of inputs that we recall, right? So what you want is you want a set of neurons that have particular states that the system evolves toward that are stable states of the system. So the way you do that is you take this network with recurrent autapses and you build cross-connections between them that make only particular ones of those possible states actual stable states of the system. We want to restrict the number of stable states in the system. So take a look at this network here. So here we have two neurons. You know that if you had autapses between these-- of these neurons to themselves, there would be four possible stable states. But if we now build excitatory cross-connections between those neurons, two of those states actually are no longer stable states. They become unstable. And only these two remain stable states of this system, remain attractors.
If we put inhibitory connections between those neurons, then we can make these two states the attractors of the system, OK? All right. Does that make sense? All right, so let's actually flesh out the mathematics of how you take a network of neurons and program it to have particular states that are attractors of the system, all right? So we've been using this kind of dynamical equation. We're going to simplify that. We're going to follow the construction that John Hopfield used when he analyzed these recurrent networks. And instead of writing down a continuous update so that we update the-- in the formulation we've been using, we update the firing rate of our neuron using this differential equation. We're going to simplify it by just writing down the state of the network at time t plus 1. That's a function of the state of the network of the previous time step. So we're going to discretize time. We're going to say v, the state of the network, the firing rates of all the neurons at time t plus 1, is just a function of a weight matrix that connects all the neurons times the state of the system, times the firing rate vector. And then this can also have an input into it, all right? All right. And here, I'm just writing out exactly what that matrix multiplication looks like. The state of the i-th neuron after we update the state of the network is just a sum over all of the different inputs coming from all of the other neurons, all the j other neurons. And we're going to simplify our neuronal activation function to just make it into a binary threshold neuron. So if the input is positive, then the firing rate of the neuron will be positive. If the input is negative, the firing rate of the neuron will be negative. All right? And that's the sign function. It's 1 if x is greater than 0 and minus 1 if x is less than or equal to 0. All right, so the goal is to build a network that can store any memory we want, any pattern we want, and turn that into a stable state.
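A minimal sketch of this discrete-time update in code (pure Python; the matrix is the mutual-inhibition example from earlier, and the zero-maps-to-minus-one convention follows the definition just given):

```python
# One synchronous Hopfield-style update: v(t+1) = sign(M v(t) + h), with a
# binary +-1 threshold for each neuron.

def sign(x):
    """Lecture's convention: +1 if x > 0, else -1 (zero maps to -1)."""
    return 1 if x > 0 else -1

def update(M, v, h=None):
    """Update every neuron: v_i <- sign(sum_j M_ij v_j + h_i)."""
    n = len(v)
    if h is None:
        h = [0.0] * n
    return [sign(sum(M[i][j] * v[j] for j in range(n)) + h[i])
            for i in range(n)]

# Example: under the mutual-inhibition matrix, the anti-correlated state is
# unchanged by the update, while the correlated state is not.
M = [[0, -2], [-2, 0]]
print(update(M, [1, -1]))   # -> [1, -1], a fixed point
print(update(M, [1, 1]))    # -> [-1, -1], so (1, 1) is not a fixed point
```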
So we're going to build a network that will evolve toward a particular pattern that we want. And xi is just a pattern of ones and minus ones that describes that memory that we're building into the network, all right? So xi is just a one or minus one for every neuron in the network. So xi i is one or minus one for the i-th neuron. Now, we want xi to be an attractor, right? We want to build a network such that xi is an attractor. And what that means is that-- what does building a network mean? When we say build a network, what are we actually doing? What is it here that we're actually trying to decide? AUDIENCE: The synaptic weights. MICHALE FEE: Yeah, which is? AUDIENCE: Like the matrix M. MICHALE FEE: The M, right. So when I say build a network that does this, I mean choose a set of M's that has this property. So what we want is we want to find a weight matrix M such that if the network is in a stable state, is in this desired state, that when we multiply that state times the matrix M and we take the sign of that sum, you're going to get the same state back. In other words, you start the network in this state, it's going to end up in the same state. That's what it means to have an attractor, OK? That's what it means to say that it's a stable state. OK, so we're going to try a particular matrix. And I'm going to describe what this actually looks like in more detail. But the matrix that programs a pattern xi into the network as an attractor is this weight matrix right here. So if we have a pattern xi, our weight matrix is some constant times the outer product of that pattern with itself. I'm going to explain what that means. What that means is that if neuron i and neuron j are both active in this pattern, both have a firing rate of one, then those two neurons are going to be connected to each other, right? They're going to have a connection between them that has a value of one, or alpha.
If one of those neurons has a firing rate of one and the other neuron has a firing rate of zero, then what weight do we want between them? If one of them has a firing rate of one and the other has a firing rate of minus one, the strength of the connection we want between them is minus one. So if one neuron is active and another neuron is active, we want them to excite each other to maintain that as a stable state. If one neuron is plus and the other one is minus, we want them to inhibit each other, because that will make that configuration stable. OK, notice that's a symmetric matrix. So let's actually take our dynamical equation that says how we go from the state at time t to the state of time t plus 1 and put in this weight matrix and see whether this pattern xi is actually a stable state. So let's do that. Let's take this M and stick it in there, substitute it in. Notice this is a sum over j, so we can pull the xi i out. And now, you see that v at t plus 1 is this. And it's the sign of a times xi i times the sum over j of xi j times xi j. Now, what is that? Any idea what that is? So the elements of xi are what? They're just ones or minus ones. So xi j times xi j has to be? AUDIENCE: One. MICHALE FEE: One. And we're summing over n neurons. So this sum has to have a value N. So you can see that the state at time t plus 1-- if we start the network in this stored state, it's just this-- sign of a N xi. But a is positive. N is just a positive integer, number of neurons. So this equals xi. So if we have this weight matrix, we start the network in that stored state, the state at the next time step will be the same state. So it's a stable fixed point. All right, so let's just go through an example. That is the prescription for programming a memory into a Hopfield network, OK? And notice that it's just-- it's essentially a Hebbian learning rule.
So the way you do this is you activate the neurons with a particular pattern, and any two neurons that are active together form a positive excitatory connection between them. Any two neurons where one is positive and the other is negative form a symmetric inhibitory connection, all right? All right, so let's take a particular example. Let's make a three-neuron network that stores a pattern 1, 1, minus 1. And again, the notation here is xi, xi transpose. That's an outer product, just like you use to compute the covariance matrix of a data matrix. So there's a pattern we're going to program in. The weight matrix is xi, xi transpose, so it's 1, 1, minus 1 times 1, 1, minus 1. You can see that's going to give you this matrix here, all right? So that element there is 1 times 1. That element there. So here are two neurons. These two neurons storing this pattern, these two neurons-- sorry, this neuron has a firing rate of minus one. So the connection between that neuron and itself is a one, right? It's just the product of that times that. All right, any questions about how we got this weight matrix? I think it's pretty straightforward. So is that a stable point? Let's just multiply it out. We take this vector and multiply it by this matrix. There's our stored pattern. There's our matrix that stores that pattern. And we're just going to multiply this out. You can see that 1 times 1 plus 1 times 1 plus minus 1 times minus 1 is 3. You just do that for each of the neurons. Take the sign of that. And you can see that that's just 1, 1, minus 1. So 1, 1, minus 1 is a stable fixed point. Now let's see if it's actually an attractor. So when a state is an attractor, what that means is if we start the network at a state that's a little bit different from that and advance the network one time step, it will converge toward the attractor. So into our network that stores this pattern 1, 1, minus 1, let's put in a different pattern and see what happens.
So we're going to take that weight matrix, multiply it by this initial state, multiply it out, and you can see that the next state is going to be the sign of 3, 3, minus 3. And one time step advanced, the network is now in the state that we've programmed in. Does that make sense? So that state is a stable fixed point and it's an attractor. I'm just going to go through this very quickly. I'm just going to prove that xi is an attractor of the network if we write down the network as the outer product of this. The matrix elements are the outer product of the stored state, OK? So what we're going to do is we're going to calculate the total input onto the i-th neuron if we start from an arbitrary state, v. So k is the input to all the neurons, right? And it's just that matrix times the initial state. So v j is the firing rate of the j-th neuron, and k is just M times v. That's the pattern of inputs to all of our neurons. So what is that? k equals-- we're just going to put this weight matrix into this equation, all right? We can pull the xi i outside of the sum, because it doesn't depend on j. The sum is over j. Now let's just write out this sum, OK? Now, you can see that if you start out with an initial state that has some number of neurons that have the correct sign that are already overlapping with the memorized state and some number of neurons in that initial state don't overlap with the memorized state, we can write out this sum as two terms. We can write it as a sum over some of the neurons that are already in the correct state and a sum over neurons that are not in the correct state. So if these neurons in that initial state have the right sign, that means these two have the same sign. And so the sum over xi j vj for neurons where v has the right sign is just the number of neurons that has the correct sign. And this sum over incorrect neurons means these neurons have the opposite sign of the desired memory. And so those will be one, and those will be minus one.
Or those will be minus one, and those will be one. And so this will be minus the number of incorrect neurons. So you can see that the input to the neuron will have the right sign if the number of correct neurons is more than the number of incorrect neurons, all right? So what that means is that if you program a pattern into this network and then I drive an input into the network, where most of the inputs drive-- if the input drives most of the neurons with the right sign, then the inputs will cause the network to evolve toward the memorized pattern in the next time step. OK, so let me say that again, because I felt like that didn't come out very clearly. We program a pattern into our network. If we start the network at some-- let's say at zero. And then we put in a pattern into the network such that just the majority of the neurons are activated in a way that looks like the stored pattern, then in the next time step, all of the neurons will have this stored pattern. So let me show you what that looks like. Let me actually go ahead and show you-- OK, so here's an example of that. So you can use Hopfield networks to store many different kinds of things, including images, all right? So this is a network where each pixel is being represented by a neuron in a Hopfield network. And a particular image was stored in that network by setting up the pattern of synaptic weights just using that xi, xi transpose learning rule for the weight matrix M, OK? Now, what you can do is you can initialize that network from a random initial condition. And then let the network evolve over time, all right? And what you see is that the network converges toward the pattern that was stored in the synaptic weights, OK? Does that make sense? Got that? So, basically, as long as that initial pattern has some overlap with the stored pattern, the network will evolve toward the stored pattern.
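The three-neuron example can be checked end to end in a few lines. This is an illustrative sketch, assuming the outer-product storage rule and the sign-threshold update defined above:

```python
# Build M = xi xi^T for the pattern (1, 1, -1), then check that the pattern is
# a fixed point and that a one-bit-corrupted state recovers it in one update.

def sign(x):
    return 1 if x > 0 else -1

def outer(xi):
    """Hopfield storage rule for one pattern: M_ij = xi_i * xi_j."""
    return [[a * b for b in xi] for a in xi]

def update(M, v):
    """Synchronous sign update: v_i <- sign(sum_j M_ij v_j)."""
    n = len(v)
    return [sign(sum(M[i][j] * v[j] for j in range(n))) for i in range(n)]

xi = [1, 1, -1]
M = outer(xi)
print(M)                        # [[1, 1, -1], [1, 1, -1], [-1, -1, 1]]
print(update(M, xi) == xi)      # True: the stored pattern is a fixed point
print(update(M, [1, -1, -1]))   # [1, 1, -1]: a corrupted cue falls back in
```

One thing worth noticing: the inverted pattern (minus 1, minus 1, 1) is automatically a fixed point too, since flipping every sign leaves the update equation unchanged.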
All right, so let me define a little bit better what we mean by the energy landscape and how it's actually defined. OK, so you remember that if we start our network in a particular pattern v, the recurrent connections will drive inputs into all the neurons in the network. And those inputs will then determine the pattern of activity at the next time step. So if we have a state of the network v, the inputs to the network, to all the neurons in the network, from the currently active neurons is given by the connection matrix times v. So we can just write that out as a sum like this. So you define the energy of the network as the dot product-- basically, the amount of overlap-- between the current state of the network and the inputs to all of the neurons that drive the activity in the next step, OK? And the energy is minus that, OK? So what that means is if the network is in a state that has a big overlap with the pattern of inputs to all the other neurons, then the energy will be very negative, right? And remember, the system likes to evolve toward low energies. In physics, you have a ball on a hill. It rolls downhill, right, to lower gravitational energies. So you start the ball anywhere on the hill, and it will roll downhill. So these networks do the same thing. They evolve downward on this energy surface. They evolve towards states that have a high overlap with the inputs that drive the next state. Does that make sense? So if you're in a state where the pattern right now has a high overlap with what the pattern is going to be in the next time step, then you're in an attractor, right? OK, so it looks like that. So this energy is just negative of the overlap of the current state of the network with the pattern of inputs to all the neurons. Yes, Rebecca? AUDIENCE: So [INAUDIBLE] to say [INAUDIBLE] with the weight matrix, since that's sort of the goal of the next time step, and it will evolve towards the matrix [INAUDIBLE]? MICHALE FEE: Yeah.
So the only difference is that the state of the network is this vector, right? And the weight matrix tells us how that state will drive input into all the other neurons. And so if you're in a state that drives a pattern of inputs to all the neurons that looks exactly like the current state, then you're going to stay in that state, right? And so the energy is just defined as that dot product, the overlap of the current state, or the state that you're calculating the energy of, and the inputs to the network in the next time step. All right, so let me show you what that looks like. And so the energy is lowest, current state has a high overlap with the synaptic drive to the next step. So let's just take a look at this particular network here. I've rewritten this dot product as-- so k is just M times v. This dot product can just be written as v transpose times Mv. So that's the energy. Let's take a look at this matrix, this network here-- 0, minus 2, minus 2, 0. So it's this mutually inhibitory network. You know that that inhibitory network has attractors that are here at minus 1, 1 and 1, minus 1. So let's actually calculate the energy. So you can actually take these states-- 1, minus 1-- multiply it by that M, and then take the dot product with 1, minus 1. And do that for each one of those states and write down the energy. You can see that the energy here is minus 1. The energy here is minus 1, and the energy here is 0. So if you start the network here, at an energy zero, it's going to roll downhill to this state. Or it can roll downhill to this state, depending on the initial condition, OK? So you can also think about the energy as a function of firing rates continuously. You can calculate that energy, not just for these points on this grid. And what you see is that there's basically-- in high dimensions, there are sort of valleys that describe the attractor basin of these different attractors, all right? 
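Here is a sketch of that energy computation, E(v) = minus one half v dot (M v), for the mutual-inhibition network. The absolute values depend on the weight scale used on the slides, so the thing to check is the ordering: the two attractor states sit lower than the origin, and the anti-attractor state (1, 1) sits higher:

```python
# Hopfield energy E(v) = -(1/2) v . (M v), evaluated at a few states of the
# mutual-inhibition network M = [[0, -2], [-2, 0]].

def energy(M, v):
    n = len(v)
    k = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]  # k = M v
    return -0.5 * sum(vi * ki for vi, ki in zip(v, k))

M = [[0, -2], [-2, 0]]
for v in [(1, -1), (-1, 1), (0, 0), (1, 1)]:
    print(v, energy(M, list(v)))
```

With these weights the two attractors come out at minus 2, the origin at 0, and (1, 1) at plus 2, so starting at the origin the state "rolls downhill" into one of the two valleys, exactly as in the ball-on-a-hill picture.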
And if you project that energy along an axis like this, you can see that you sort of-- let's say, take a slice through this energy function. You can see that this looks just like the energy surface, the energy function, that we described before for the 1D attractor, the single neuron with two attractors, right? This corresponds to a valley and a valley and a peak between them. And then the energy gets big outside of that. Any questions about that? Yes, [INAUDIBLE]. AUDIENCE: [INAUDIBLE] vector 1/2 because-- in this case, right? MICHALE FEE: That's the general definition, minus 1/2 v dot k. It actually doesn't really-- this 1/2 doesn't really matter. It actually comes out of the derivative of something, as I recall. But a scaling factor doesn't matter. The network always evolves toward a minimum of the energy. And so this 1/2 could be anything. All right, so the point is that starting the network anywhere with a sensory input, the system will evolve toward the nearest memory, OK? And I already showed you this. OK, so now, a very interesting question is, how many memories can you actually store in a network? And there's a very simple way of calculating the capacity of the Hopfield network. And I'm just going to show you the outlines of it. And that actually gives us some insight into what kinds of memories you can store. Basically, the idea is that when you store memories in a network, you want the different memories to be as uncorrelated with each other as possible. You don't want to try to store memories that are very similar to each other. And you'll see why in a second when we look at the math. So let's say that we want to store multiple memories in our network. So instead of just storing one pattern, xi, we want to store a bunch of different patterns xi. And so let's say we're going to store P different patterns. So we have an index variable mu that addresses each of the different patterns we want to store.
So we're going to store p patterns, indexed zero to p minus 1. So what we do, the way we do that is we compute the contribution to the weight matrix from each of those different patterns. So we calculate a weight matrix using the outer product for each of the patterns we want to store in the network, all right? And then we add all of those together. We're going to essentially sort of average together the network that we would make for each pattern separately. Does that make sense? So there is the equation for the weight matrix that stores p different patterns in our memory, in our network. And that's how we got this kind of network here, where we store multiple memories, all right? So let me just show you an example of what happens when you do that. So I found these nice videos online. So here is a representation of a network that stores a five by five array of pixels. And this network was trained on these three different patterns. And what this little demo shows is that if you start the network from different configurations here and then evolve the network-- you start running it. That means you run the dynamic update for each neuron one at a time, and you can see how this system evolves over time. So this is a little GUI-based thing. You can flip the state and then run it. And you can see that if you change those, now it-- I think he was trying to make it look like that. But when you run it, it actually evolved toward this one. He's going to really make it look like that. And you can see it evolves toward that one. All right, any questions about that? You can see it stored three separate memories. You've given an input, and the network evolves toward whatever memory was closest to the input. So that's called a content-addressable memory. You can actually recall a memory-- not by pointing to an address, like you do in a computer, but by putting in something that looks a little bit like the memory.
And then the system evolves right to the memory that was closest to the input. So it's also called an auto-associative memory. It automatically associates with the nearest-- with a pattern that's nearest to the input. So here's another example. It's just kind of more of the same. This is a network similar to this. Instead of black and white, it's red and purple, but it's got a lot more pixels. And you'll see the three different images that are stored in there-- so a face, a world, and a penguin. So then what they're doing here is they add noise. And then you run the network, and it [AUDIO OUT] one of the patterns that you stored in it. So here's the penguin. Add noise. Add a little bit of noise. Here, he's coloring it in, I guess, to make it. And then you run the network, and it remembers the [AUDIO OUT]. OK, so that's interesting. So he ran it. He or she ran the network. And you see that it kind of recovered a face, but there's some penguin head stuck on top. So what goes wrong there? Something bad happened, right? The network was trained with a face, a globe, and a penguin. And you run it most of the time, and it works. And then, suddenly, you run it, and it recovers a face with a penguin head sticking out of it. What happened? So we'll explain what happens. What happened was that this network was trained in a way that has what's called a spurious attractor. And that often happens when you train a network with too many memories, when you exceed the capacity of the network to store memories. So let me show you what actually goes wrong mathematically there. All right, so we're going to do the same analysis we did before. We're going to take a matrix. We're going to build a network that stores multiple memories. This was the matrix to build one memory. Let's see what I did here. So in order for-- Yeah. Sorry. This was the matrix for multiple memories. We're summing mu. I just didn't write the mu equals 0 to p minus 1.
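The summed outer-product rule and the iterative recall it supports can be sketched like this. The patterns and the synchronous update scheme below are illustrative simplifications (Hopfield's original analysis updates one neuron at a time):

```python
# Store several patterns with M = sum_mu xi^mu (xi^mu)^T, then recall from a
# noisy cue by iterating the sign update until the state stops changing.

def sign(x):
    return 1 if x > 0 else -1

def store(patterns):
    """Summed outer-product (Hebbian) weight matrix over all patterns."""
    n = len(patterns[0])
    M = [[0] * n for _ in range(n)]
    for xi in patterns:
        for i in range(n):
            for j in range(n):
                M[i][j] += xi[i] * xi[j]
    return M

def recall(M, v, max_iters=20):
    """Iterate the synchronous sign update until a fixed point (or give up)."""
    n = len(v)
    for _ in range(max_iters):
        new = [sign(sum(M[i][j] * v[j] for j in range(n))) for i in range(n)]
        if new == v:
            break
        v = new
    return v

patterns = [[1, 1, 1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1]]
M = store(patterns)
# A cue with one flipped bit falls into the nearest stored memory:
print(recall(M, [1, 1, -1, -1, -1, -1]))   # -> [1, 1, 1, -1, -1, -1]
```

This is the content-addressable behavior from the demos: you hand the network something that merely resembles a stored pattern, and the dynamics complete it.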
So we're going to program p different memories by summing up this outer product for all the different patterns that we're wanting to store, all right? We're going to ask whether one of those-- under what conditions is one of those patterns, the xi 0, actually a stable state of the network? So we're going to build a network with multiple patterns stored, and we're just going to ask a simple question. Under what conditions is xi 0 going to evolve to xi 0? And if xi 0 evolves toward xi 0, or stays at xi 0, then it's a stable point. All right, so let's do that. We're going to take that weight matrix, and we're going to plug in our multiple memory weight matrix, all right? You can see that we can pull out the xi i out of this sum over j. And the next step is we're going to separate this into a sum over mu equals zero and a separate sum for mu not equal to 0, all right? So this is a sum over all the mu's, but we're going to pull out the mu zero term as a separate sum over j. Is that clear? Anyway, this is just for fun. You don't have to reproduce this, so don't worry. So we're going to pull out the mu equals zero term. And what does that look like? It's xi i0, sum over j of xi j0, xi j0. So what is that? That's just N, right, the number of neurons. We're summing over j equals 1 to N, number of neurons. I should add those limits here. So you can see that that's N. So this is just the sign of N xi i0 plus a bunch of other stuff. So you can see right away that if all of this other stuff is really small, then this is a fixed point. Because if all this stuff is small, the system will evolve toward the sign of xi [INAUDIBLE], which is just xi i0. So let's take a look at all of this stuff and see what can go wrong to make this not small. All right, so let's zoom in on this particular term right here. So what is this? This is sum over j, xi mu j, xi 0 j. So what is that? Anybody know what that is? It's a vector operation. What is that?
AUDIENCE: The dot product between one image and image zero. MICHALE FEE: Exactly. It's a dot product between the image that we're asking is it a stable fixed point and all the other images in the network. Sorry, and the mu-th image. So what this is saying is that if our image is orthogonal to all the other images in the network that we've tried to store, then this thing is zero. So this is referred to as crosstalk between the stored memories. So if our pattern, xi 0, is orthogonal to all the other patterns, then it will be a fixed point. So the capacity of the network, the crosstalk-- the capacity of the network depends on how much overlap there is between our stored pattern and all the other patterns in the network, all right? So if all the memories are orthogonal, if all the patterns are orthogonal, then they're all stable attractors. But if one of those memories, xi 1-- let's take xi 1-- is close to xi 0, then xi 0 dot xi 1-- the two patterns are very similar-- then the dot product is going to be N, right? And when you plug that in, if that's N, then you can see that this becomes xi 1 i, right? So what happens is that these other memories that are similar to our memorized pattern-- then when you compute that sum, some of these terms get big enough so that the memory in the next step is not that stored memory. It's a combination. All right? So that's what limits the capacity of the network. So you can't actually choose all your memories to be orthogonal. But a pretty good way of making memories nearly orthogonal is to store them as random patterns. So a lot of the thinking that goes into how you would build a network that stores a lot of patterns is to take your memories and sort of convert them in a way that makes them maximally orthogonal to each other. You can use things like lateral inhibition to orthogonalize different inputs.
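The decomposition worked through above can be written compactly. Using the notation from the lecture, with patterns ξ⁰ through ξᵖ of N neurons each:

```latex
\sum_{j=1}^{N} W_{ij}\,\xi_j^{0}
  = \sum_{\mu=0}^{p} \xi_i^{\mu} \sum_{j=1}^{N} \xi_j^{\mu}\,\xi_j^{0}
  = N\,\xi_i^{0} \;+\; \underbrace{\sum_{\mu \neq 0} \xi_i^{\mu}\,
      \bigl(\boldsymbol{\xi}^{\mu} \cdot \boldsymbol{\xi}^{0}\bigr)}_{\text{crosstalk}}
```

If every dot product ξ^μ · ξ⁰ is small compared to N, then sign(Σ_j W_ij ξ_j⁰) = ξ_i⁰ and ξ⁰ is a stable fixed point; if some ξ¹ is nearly equal to ξ⁰, its dot product approaches N and the crosstalk pulls the update toward a mixture of stored patterns.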
So once you make your patterns sort of noisy, then it turns out you can actually calculate that if the values of xi sort of look like random numbers, that you can store up to about 15% of the number of neurons worth of memories in your network. So if I have 100 neurons in my network, I should be able to store about 15 different states in that network before they start to interfere with each other, before you have a sufficiently high probability that two of those memories are next to each other. And as soon as that happens, then you start getting crosstalk between those memories that causes the state of the system to evolve in a way that doesn't recall one of your stored memories, all right? And what that looks like in the energy landscape is when you build a network with, let's say, five memories, there will be five minima in the network that sort of have equal low values of energy. But when you start sticking too many memories in your network, you end up with what are called spurious attractors, sort of local minima that aren't at the-- that don't correspond to one of the stored memories. And so as the system evolves, it can be going downhill and get stuck in one of those minima that look like a combination of two of the stored memories. And that's what went wrong here with the guy with the penguin sticking out of his head. Who knows? Maybe that's what happens when you look at something and you're confused about what you're seeing. We don't know if that's actually what happens, but it would be an interesting thing to test. Any questions? All right, so that's-- so you can see that these are long-term memories. These don't depend on activity in the network to store, right? Those are programmed into the synaptic connections between the neurons. So you can shut off all the activity. And if you just put in a pattern of input that reminds you of something, the network will recover the full memory for you.
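The whole story, Hebbian outer-product weights, recall from a noisy cue, and the roughly 15% capacity limit, can be sketched in a few lines. This is a toy illustration with random ±1 patterns, not the face/globe/penguin images from the slides; the network size, seed, and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # number of neurons

def make_network(p):
    """Store p random +/-1 patterns using the Hebbian outer-product rule."""
    pats = rng.choice([-1, 1], size=(p, N))
    W = pats.T @ pats          # sum of outer products xi^mu xi^mu^T
    np.fill_diagonal(W, 0)     # no self-connections
    return W, pats

def recall(W, state, steps=20):
    """Iterate x -> sign(Wx) until the state stops changing."""
    for _ in range(steps):
        new = np.sign(W @ state)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Well below capacity: a cue with 10% of its pixels flipped falls back
# into the stored memory.
W, pats = make_network(3)
noisy = pats[0].copy()
noisy[rng.choice(N, size=N // 10, replace=False)] *= -1
print("recalled stored pattern:", np.array_equal(recall(W, noisy), pats[0]))

# As p approaches the ~0.138*N theoretical capacity for random patterns,
# crosstalk grows and stored memories stop being exact fixed points.
for p in (5, 20, 60):
    W, pats = make_network(p)
    stable = sum(np.array_equal(np.sign(W @ x), x) for x in pats)
    print(f"p={p}: {stable}/{p} stored patterns are exact fixed points")
```

Running this, the fraction of exactly stable patterns degrades as p grows, which is the same effect that produced the face with the penguin head.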
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 6: Dendrites
MICHALE FEE: So today, we're going to start a new topic. We're going to be talking about the propagation of signals in dendrites and axons. So the model that we've considered so far is just a soma. We basically had a kind of a spherical shell of insulator that we've been modeling that has different kinds of ion channels in it that allow the cell to do things like generate an action potential. So the reason that we've been doing that is because in most vertebrate neurons, the soma is the site in the neuron at which the decision to make an action potential is made. So all kinds of inputs come in, and then the soma integrates those inputs, accumulates charge, reaches some spiking threshold, and then generates an action potential. And so that's where the decision is about whether a neuron is going to spike or not. Now, in real neurons, relatively few of the inputs actually come onto the soma. Most of the synaptic inputs, most of the inputs arrive onto the dendrites, which are these branching cylinders of cell membrane. And most of the synapses actually form onto the dendrite at some distance from the soma. There are synapses that form onto the soma. But the vast majority of synapses form onto these dendrites. And sometimes, those synapses can be as far away as 1 or 2 millimeters for very large neurons in cortex. So there's a population of neurons in deep layer V, which some of you may have heard about, that have dendrites that reach all the way up into layer I. And those cells can be-- those dendrites can be as long as a couple of millimeters. So we really have to think about what this means, how signals get from out here in the dendrite down to the soma. And that's what we're going to talk about today. So the most important thing that we're going to do is to simplify this-- by the way, anybody know what kind of cell this is? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. It's a Purkinje cell.
And it was one of the-- this is one of the cells that Ramon Cajal drew back in the late 1800s. So the most important thing we're going to do is to simplify this very complex dendritic arborization. And we're going to basically think of it as a single cylinder. Now mathematically, there are reasons why this is not actually unreasonable. You can write down-- if you analyze the structure of dendritic trees, there is something about the way the ratio of the diameters of the different dendrites as they converge to form thicker branches as you get closer and closer to the soma that mathematically makes this not a bad approximation for an extended dendritic arbor like this. So we're going to think about the problem of having a synapse out here on this cylindrical approximation to a dendrite. And we're going to imagine that we're measuring the voltage down here at the soma or at different positions along the dendrite. And we're going to ask, how does synaptic input out here on this cylinder affect the membrane potential in the dendrite and down here at the soma? And the basic conceptual picture that you should have is that those signals propagate some distance down the dendrite toward the soma but gradually leak out. And there's a very simple kind of intuitive picture, which is that you can think of the dendrite as a leaky pipe or a leaky hose. So imagine you took a piece of garden hose and you poked holes in the side of it so that they're kind of close together. And when you hook this up to the water faucet and turn the water on, some of that water flows down the hose. But some of it also leaks out through the holes that you drilled. And you can see that eventually the water is all going to leak out through the sides, and it's not going to go all the way down to the other end to get to your hydrangeas or whatever it is that you're watering. And so you can see that that signal isn't going to get very far if the holes you drilled are big enough.
And the general kind of analogy here is that current is like the flow of the water. Electrical current here is like water current flowing down the pipe. And voltage is like pressure. So the higher the pressure here, the higher the current flow you'll get. And we're going to develop an electrical circuit model for a dendrite like this that's going to look like a set of resistors going down the axis of the dendrite and a set of resistors that go across the membrane. And you can see that each little piece of membrane here, a little piece of the dendrite is going to look like a resistor divider, where you have a resistor along the axial direction and a resistance across the membrane. And as you make a longer and longer piece of dendrite, you're going to get additional voltage dividers. Each voltage divider divides the voltage by some constant factor. And as you stack those things up, the voltage drops by some constant factor per unit length of the dendrite. And so you can see-- anybody want to just take a guess of what kind of functional form that would give you if you divide the voltage by some constant factor each unit length of the dendrite? AUDIENCE: Exponential. MICHALE FEE: Exponential. That's right. And that's where this exponential falloff comes from. So today, we're going to do the following things. And we're going to basically draw a circuit diagram, an electrical equivalent circuit of a piece of dendrite. And I would like you to be able to make that drawing if you're asked to. We're going to be able to plot the voltage in a piece of dendrite as a function of distance for the case of a dendrite that has leaky walls and for the case of a dendrite that has non-leaky walls. And we're going to describe the concept of a length constant, which I'll tell you right now is just the distance over which the voltage falls by a factor of 1 over e. So it's some length over which the voltage falls by 1 over e.
We're going to go over how that length constant depends on the radius of the dendrite. It's a function of the size. And also, we're going to describe the concept of an electrotonic length. And then finally, we're going to go to some sort of extreme simplifications, even beyond taking that very complex dendrite, simplifying it as a cylinder. We're going to go to an even simpler case where we can just treat the cell as a soma connected to a resistor connected by a resistor to a separate compartment. And that's sort of the most extreme simplification of a dendrite. But, in fact, it's an extremely powerful one from which you can get a lot of intuition about how signals are integrated in dendrites. So we're going to analyze a piece of dendrite using a technique called finite element analysis. We're going to imagine-- we're going to approximate our piece of dendrite as a cylinder of constant radius a, an axial dimension that we're going to label x. We're going to break up this cylinder into little slices. So imagine we just took a little knife, and we cut little slices of this dendrite. And they're going to be very small slices. And we're going to model each one of those slices with a separate little circuit. And then we're going to connect them together. And we're going to let the length of that slice be delta x. And then eventually, we're going to let delta x go to 0. We're going to get some differential equations that describe that relationship between the voltage and the current in this piece of dendrite. So let's start with a model for the inside of this cylinder. So remember, in a cell, we had the inside of the cell modeled by a wire. In a dendrite, we can't just use a wire. And the reason is that current is going to flow along the inside of the dendrite. It's going to flow, and it's going to experience voltage drops. So we have to actually model the resistance of the inside of the dendrite. 
And we're going to model the resistance between each one of those slices with a resistor of value little r. We're going to model the outside of the axon or the dendrite as a wire. And the reason we're going to put resistors inside and just a wire outside is because the resistance-- remember the axon or dendrite is very small. In the brain, dendrites might be about 2 microns across. So the current is constrained to a very small space. When currents then flow outside, they're flowing in a much larger volume, and so the effective resistance is much smaller. And we're going to essentially ignore that resistance and treat the outside as just a wire. Now we have to model the membrane. Anybody want to take a guess how we're going to model the membrane? AUDIENCE: [INAUDIBLE] MICHALE FEE: What's that? I heard two correct answers. What did you say, Jasmine? AUDIENCE: The capacitor. MICHALE FEE: Capacitor. And? AUDIENCE: [INAUDIBLE] MICHALE FEE: Excellent. Whoops. I wasn't quite there. Let's put that up. Good. So we're going to have a capacitance. We're going to imagine that this membrane might have an ion-selective ion channel with some conductance G sub l and an equilibrium or reversal potential E sub l. Now coming back to these terms here, we're going to write down the voltage in each one of our little slices of the dendrite. So let's do that. Let's just pick one of them as V, the voltage at position x and time t. The voltage in the next slice over is going to be V at x plus delta x of t. And the voltage in this slice over here is V of x minus delta x and t. So now, we can also write down the current that goes axially through that piece of-- that slice of our dendrite. We're going to call that I of x and t. And we can write down also the current in every other time-- in every other slice of the dendrite, I of x minus delta x and t. And we're going to model this piece of membrane in each one of those slices as well. Any questions about that?
That's the basic setup. That's the basic finite element model of a dendrite. No questions? Now we also have to model the current through the membrane. That's going to be I sub m, m for membrane. And it's going to be a current per unit length of the dendrite. We're going to imagine that there's current flowing from the inside to the outside through the membrane. And there's going to be some current per unit length of the dendrite. And we can also imagine that we have current being injected, let's say, through a synapse or through an electrode that we can also model as coming in at any position x. And this is, again, current per unit length times delta x. Does that make sense? So the first thing we're going to do is we're going to write down the relation between V in each node and the current going through that node. So let's do that. We're going to use Ohm's law. So the voltage difference between here and here is just going to be the current times that resistance. Does that make sense? We're just going to use Ohm's law-- very simple. So V of x and t minus V of x plus delta x, t is just equal to little r times that current. And now we're going to rewrite this. Let's divide both sides of this equation by delta x. So you see 1 over delta x V of x minus V of x plus delta x is equal to little r over delta x times the current. And can anyone tell me what that thing is as delta x goes to 0? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. It's the derivative of-- it's the spatial derivative of the voltage. That's just the definition of derivative when delta x goes to 0. So let's write that out. Notice that it's the negative of the derivative because the derivative would have V of x plus delta x minus V of x. So it's a negative of the derivative. So negative dv/dx is equal to some resistance times the current. And notice that this capital R sub a is called the axial resistance per unit length. It's this resistance per unit length of the dendrite. 
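In symbols, the step just described is (restating the lecture's notation, with Δx the slice length):

```latex
V(x,t) - V(x+\Delta x,\, t) = r\, I(x,t)
\;\;\xrightarrow{\;\Delta x \to 0\;}\;\;
-\frac{\partial V}{\partial x} = R_a\, I(x,t),
\qquad R_a \equiv \frac{r}{\Delta x}
```

so the per-slice resistance r becomes the axial resistance per unit length R_a in the limit of infinitely thin slices.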
Now notice that if you pass current down that dendrite, the voltage drop is going to keep increasing. The resistance is going to keep increasing the longer that piece of dendrite is. So you can think about resistance in a piece of dendrite more appropriately as resistance per unit length. So there's Ohm's law-- minus dv/dx equals axial resistance per unit length times the current. Any questions? And notice that according to this, current flow to the right-- positive I is defined as current to the right here-- produces a negative gradient in the voltage. So the voltage is high on this side and low on that side. So the slope is negative. So now let's take this, and let's analyze this for some simple cases where we have no membrane current. So we're going to just ignore those. And we're just going to include these axial resistances. And we're going to analyze what this equation tells us about the voltage inside of the dendrite. Does that make sense? So let's do that. So if we take that equation, we can write down the current at, let's say, these two different nodes-- I of x minus delta x and I of x. And because there are no membrane currents, you can see that those two currents have to be equal to each other. Kirchhoff's Current Law says that the current into this node has to equal the current out of that node. So if there are no membrane currents, there's nothing leaking out here, then those two currents have to equal each other. And we can call that I0. So now, dv/dx is minus axial resistance times I0. And what does that tell us about how the voltage changes in a piece of dendrite if there's no membrane current, if there's no leaky membrane? There's no leakage in the membrane. If dv/dx is a constant, what does it tell us? AUDIENCE: [INAUDIBLE]. MICHALE FEE: Yeah. But decreases how? What functional form? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. It changes linearly. So if there are no membrane conductances, then the membrane potential changes linearly.
So you can see that the voltage as a function of position-- sorry, I forgot to label that voltage-- just changes linearly from some initial voltage to some final voltage over some length l. We're considering a case of a piece of dendrite of length l. Yes? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yep. So I just rewrote this equation. Sorry, I just rewrote this equation moving the minus sign to that side. Yep. Good. Now you can see that the delta V, the voltage difference from the left side to the right side, is just the total resistance times the current-- just Ohm's law again. And the total resistance is the axial resistance per unit length times the length. Really simple. Voltage changes linearly if you don't have any membrane conductances, and you can just write down the relation between the voltage difference on the two sides and the current. So, in general, let's think a little bit more about this problem of what you need to write down the solution to this equation. It's a very simple equation. If you integrate this over x, you can see that the voltage as a function of position is some initial voltage minus a resistance times the current times x. And that, again, just looks like this. That's where that solution came from. It's just integrating this over x. And you can see that in order to write down the solution to this equation, we need a couple of things. We need to either know the voltages at the beginning and end, or we need to know the current. We need to know some combination of those three things. So let's write down the voltage here. Let's call it V0. Let's write down the voltage there, V sub l, and plug those in. And you can see that-- there's V0. There's V sub l. You can see that if you know any two of those quantities-- V0, V sub l, or I0-- you can calculate the third. So if you know V0 and Vl, you can calculate the current. If you know V0 and the current, you can calculate V sub l. That is the concept of boundary conditions.
You can write down the voltages or the currents at some positions on the dendrite and figure out the total solution to the voltage as a function of position. Does that make sense? If you don't know some of those quantities, you can't write down the solution to the equation. It's just the simple idea that when you integrate a differential equation, you need to have an initial condition in order to actually solve the equation. Any questions about that? So let's think about a couple of different kinds of boundary conditions that you might encounter. So this boundary condition right here-- so let's say that we inject an amount of current I0 into a piece of dendrite. And we take that piece of dendrite and we inject current on one end, and we cut the other end so that it's open. What does that produce at the other end? So we have a wire that describes the inside of the dendrite. We have a wire that describes the outside of the dendrite. And if you cut the end of the dendrite off so that they're-- it's leaky-- so it's an open end-- what does that look like electrically? Like what's the word for-- like those two wires are touching each other. What's that called? It's a short. If you cut the end of a dendrite off, you've created a short circuit. The inside is connected to the outside. So that's called an open end boundary condition. And what can you say about the voltage at this end? If the outside is shorted to the inside, what can you say about the voltage inside the dendrite at that end? AUDIENCE: [INAUDIBLE] MICHALE FEE: It's 0. Good. So we have injected current. We have V0, the voltage at this end. And we know, if we have an open end, that the voltage here is 0. Now we can write it down. We know that the initial voltage is V0. The voltage at position L is 0. And now you can-- you know that the current here is equal to the current there, and you can write down the equation and solve for V0. So V0 is just the resistance, the total resistance of the dendrite times the injected current.
And that Rin is known as the input impedance. It's just the resistance of the dendrite. It tells you how much voltage change you will get if you inject a given amount of current. All right. Any questions about that? Let's consider another case. Rather than having an open end, let's leave the end of the dendrite closed so that it's sealed closed. So we're going to consider a piece of dendrite that, one end, we're injecting current in, and the other end is closed. So what do you think that's going to look like? It's called a closed end. What does that look like here? It's an open circuit. Those two wires are not connected to each other. There's no current path between them. Let's say we define the voltage here as V0. What can you say-- well, what you can say about the current there is that the current is 0, because it's an open circuit. There's no current flowing. And so the current flowing through this at this end is 0. Does that make sense? So what can you say about the current everywhere? AUDIENCE: 0. MICHALE FEE: It's 0. And what can you say about the voltage everywhere? It's V0. Exactly. So the voltage everywhere becomes V0. And the input impedance? Anybody want to guess what the input impedance is? How much-- what's the ratio of the voltage at this end and the current at this end? AUDIENCE: Infinite? MICHALE FEE: It's infinite. That's right. So we're just trying to build some intuition about how voltage looks as a function of distance for one special case, which is a piece of dendrite of some finite length for which you have no membrane currents. And you can see that the voltage profile you get is linear, and the slope of it depends on the boundary conditions, depends on whether the piece of dendrite has a sealed end, whether it's open. All right.
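These two boundary cases are simple enough to compute directly. A quick sketch, using arbitrary illustrative values for the resistance, length, and current (not numbers from the lecture):

```python
# Input impedance for the two boundary conditions, with no membrane leak.
# All values here are arbitrary illustrative numbers, not measured ones.
Ra = 2.0e6    # axial resistance per unit length (ohm/m)
L = 1.0e-3    # length of the dendrite (m)
I0 = 1.0e-6   # current injected at x = 0 (A)

# Open (cut) end: short circuit at x = L, so V(L) = 0 and the voltage
# falls linearly from V0. Input impedance is just the total axial resistance.
Rin_open = Ra * L
V0_open = Rin_open * I0
print("open end:  Rin =", Rin_open, "ohm,  V0 =", V0_open, "V")

# Closed (sealed) end: open circuit at x = L, so no steady axial current
# can flow; the voltage is V0 everywhere and the input impedance is infinite.
Rin_closed = float("inf")
print("closed end: Rin =", Rin_closed)
```

With these numbers the open-end input impedance is 2 kilohms, so a microamp of injected current raises V0 by 2 millivolts.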
So now we're going to come back to the case where we have membrane currents, and we're going to derive the general solution to the voltage in a piece of dendrite for the case where we have membrane capacitance and membrane currents. All right. And I don't expect you to be able to reproduce this, but we're going to derive what's called the cable equation, which is the general mathematical description, the most general mathematical description for the voltage in a cylindrical tube, of which-- that's what dendrites look like. So we're going to write down that differential equation, and I want you to just see what it looks like and where it comes from, but I don't expect you to be able to derive it. All right. So let's come back to this simple model that we started with. We're going to put our model for the membrane back in. Remember, that's a capacitor and a conductance in parallel. We're going to-- we can write down the membrane current, and we're going to have an injected current per unit length. So Kirchhoff's current law tells us the sum of all of those currents into each node has to be 0. So let's just write down-- let's just write down an equation that sums together all of those and sets them to 0. So the membrane current leaking out minus that injected current coming in. They have opposite signs because one is defined as positive going into the dendrite, and the other one is defined as positive going out. So those two, the membrane currents, plus the current going out this way minus the current coming in that way is 0. So we're going to do the same trick we did last time. We're going to divide by delta x. So, again, membrane current per unit length times the length of this finite element. We're going to divide by delta x. So this thing right here, i membrane minus i electrode, I guess, equals minus 1 over delta x I of x minus I of x minus delta x. So what is this? You've seen something like that before. It's just a derivative.
First derivative of I with respect to position. So now what you see is that the membrane current minus the injected current is just the first derivative of I. So hang in there. We're going to substitute that with something that depends on voltage. So how do we do that? We're going to take Ohm's law. There's Ohm's law. Let's take the derivative of that with respect to position. So now we get the second derivative of voltage with respect to position is just equal to minus Ra times the first derivative of current. And you can see we can just take this and substitute it there. So here's what we get, that the second derivative of voltage with respect to position is just equal to the membrane or injected current coming into the dendrite at any position. So the curvature of the voltage, how curved it is, just depends on what's coming in through the membrane. Remember, in the case where we had no membrane current and no injected current, the curvature was 0, d2V dx squared is 0, which, if the curvature is 0, then what do you have? A straight line. Now, we're going to plug in the right equation for our membrane current. What is that? That we know. It's just a sum of two terms. What is it? It's the sum of-- remember, this is going to be the same as our soma model. What was that? We had two terms. What were they? The current through the membrane in the model, in the Hodgkin-Huxley model is? What's that? AUDIENCE: [INAUDIBLE]. MICHALE FEE: Good. It's a capacitive current and a membrane ionic current. So let's just plug that in. We're just going to substitute into here the current through the capacitor and the current through this conductance. That's just C dV/dt plus G times V minus EL. It's a capacitive part and a resistive part. Now, the capacitance is a little funny. It's capacitance per unit length times the length of the element plus-- and the conductance is conductance per unit length times the length of our finite element.
Capacitance per unit length and ionic conductance per unit length. And we're going to plug that into there. We're first going to notice that this E leak is just an offset, so we can just ignore it. We can just set it to 0. We can always add it back later if we want. We divide both sides by the membrane conductance per unit length to get this equation. And that's called the cable equation. It's got a term with the second derivative of voltage with respect to position, and it's got a term that's the first derivative of voltage with respect to time. That's because of the capacitor. And then it's got a term that just depends on the voltage itself. Now, that's the most general equation. It describes how the voltage changes in a dendrite if you inject a pulse of current, how that current will propagate down the dendrite or down an axon. We're going to take a simplifying case. Next, we're going to study the case just of the steady state solution to this. But I want you to see this and to see how it was derived just using finite element analysis, deriving Ohm's law in a one-dimensional continuous medium. And by plugging in the equation for the membrane that includes the capacitive and resistive parts, you can derive this full equation for how the voltage changes in a piece of dendrite. Now, there are a couple of interesting constants here that are important-- lambda and tau. So lambda has units of length. Notice that all of the terms here have units of voltage. So this is voltage per distance squared. So in order to have the right units, you have to multiply by something that's distance squared. This is voltage per unit time, so you have to multiply by something that has units of time. So that is the length constant right there, and that is a time constant. And the length constant is defined as the square root of 1 over the membrane conductance times the axial resistance. That's the conductance through the membrane, and this is the axial resistance down the dendrite.
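Putting the pieces together, the cable equation and its two constants look like this (with E_L set to 0, i_e the injected current per unit length, and g_m, c_m, r_a the membrane conductance, membrane capacitance, and axial resistance per unit length):

```latex
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  \;=\; \tau\,\frac{\partial V}{\partial t} \;+\; V \;-\; \frac{i_e(x,t)}{g_m},
\qquad
\lambda = \sqrt{\frac{1}{g_m\, r_a}},
\qquad
\tau = \frac{c_m}{g_m}
```

Every term on both sides has units of voltage, which is how the two constants λ (length) and τ (time) are identified.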
So this is conductance per unit length, and this is resistance per unit length. And when you multiply those things together, you get two per unit length down in the denominator. So when you put those in the numerator, you get length squared. And then you take the square root, and that gives you units of length. The time constant is just the capacitance per unit length divided by the conductance per unit length. And that is the membrane time constant, and that's exactly the same as the membrane time constant that we had for our cell. It's a property of the membrane, not the geometry. So any questions about that? It was-- it's a lot. I just wanted you to see it. Yes, [INAUDIBLE]. AUDIENCE: Like, two slides ago [INAUDIBLE] MICHALE FEE: This one, or-- AUDIENCE: One more slide [INAUDIBLE].. MICHALE FEE: Yes, here. AUDIENCE: So when you plug that in for the derivative of V, were we not assuming that there was no membrane [INAUDIBLE]?? MICHALE FEE: No. That equation is still correct. AUDIENCE: OK. MICHALE FEE: It's-- voltage is the derivative with respect to position as a function of the axial current. AUDIENCE: OK. MICHALE FEE: OK? Remember, going back up to here, notice that when we derive this equation right here, we didn't even have to include these membrane. They don't change anything. It's just Ohm's law. It's the voltage here minus the voltage there has to equal the current flowing through that resistor. Doesn't matter what other currents-- whether current is flowing in other directions here. AUDIENCE: OK. MICHALE FEE: Does that make sense? The current through that resistor is just given by the voltage difference on either side of it. That's Ohm's law. So now we're going to take a simple example. We're going to solve that equation for the case of steady state. How are we going to take the steady state? How are we going to find the steady state version of this equation? Any idea? AUDIENCE: [INAUDIBLE]. MICHALE FEE: Good. 
We just set dV dt to 0, and we're left with this equals that. So we're going to take a piece of our cable, and we're going to imagine that we take a piece of dendrite that's infinitely long in either direction. And somewhere here in the middle of it, we're going to inject-- we're going to put an electrode, and we're going to inject current at one position. So it's injecting current at position 0. How many of you have heard of a delta function, a Dirac delta function? OK. So we're going to define the current as a function of position as just a current times a Dirac delta function of x, that just says that all the current is going in at position 0, and no current is going in anywhere else. So the Dirac delta function is just-- it's a peaky thing that is very narrow and very tall, such that when you integrate over it, you get 1. So we're going to go to the steady state solution. And now let's write down that. So there's the steady state cable equation. And we're going to inject current at a single point. So that's what it looks like. Does anyone know the solution to this? Notice, what this says is we have a function that's proportional to its own second derivative. Anybody know? There's only one function that does this. It's an exponential. That's right. So the solution to this equation is an exponential. V of position is V0, some voltage in the middle, times e to the minus absolute value of x over lambda. Why do I have an absolute value? What is the voltage going to look like if I inject current right here? You're going to have current flowing. Where's the current going to go? If I inject current into the middle of a piece of dendrite, is it all going to go this way? No. What's it going to do? It's going to go both ways. And the current-- the voltage is going to be high here, and it's going to fall as you go in both directions. That's why we have an absolute value here.
So the voltage is going to start at some V0 that depends on how much current we're injecting, and it's going to drop exponentially on both sides. And notice what's right here. The lambda tells us the 1 over e point, how far away the 1 over e point of the voltage is. What that means is that the voltage is going to fall to 1 over e of V0 at a distance lambda from the side at which the current is injected. Does that make sense? That is the steady state space constant. It has units of length. It's how far away do you have to go so that the voltage falls to 1 over-- falls to 1 over 2.7 of the initial voltage. Any questions? It's pretty simple. We took an unusually complicated route to get there, but that's the-- the nice thing about that is you've seen the most general solution to how a cable-- a dendrite will behave when you inject current into it. So now we can calculate the current as a function of position. Any idea how to do that? What-- if you know voltage, what do you use to calculate current? Which law? AUDIENCE: Ohm's. MICHALE FEE: Ohm's law. Anybody remember what Ohm's law looks like here? AUDIENCE: [INAUDIBLE]. MICHALE FEE: Yes. And we have to do something else. The-- remember, the current is what? Ohm's law in a continuous medium, the current is just going to be what of the voltage, the blank of the voltage? AUDIENCE: Derivative? MICHALE FEE: The derivative of the voltage. So we're just going to take this and take the derivative. That's it. dV dx is just equal to minus R times I. So the current is proportional to the derivative of this. What's the derivative of an exponential? Just another exponential. So there we go. The current, and then there's some-- you have to bring the lambda down when you take the derivative. So the current is now just minus 1 over the axial resistance per unit length times minus V0 over lambda-- lambda comes down when you take the derivative-- times e to the minus x over lambda. 
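The steady-state solution just described can be checked numerically. This is a minimal sketch; the values of V0 and lambda are arbitrary illustrative numbers, not from the lecture:

```python
import numpy as np

# Steady-state voltage in an infinite cable with current injected at x = 0:
# V(x) = V0 * exp(-|x| / lam), decaying exponentially in both directions.
V0 = 10.0    # mV, illustrative peak voltage at the injection site
lam = 1.0    # mm, space constant (illustrative)

def V(x):
    """Steady-state membrane voltage at position x (mm)."""
    return V0 * np.exp(-np.abs(x) / lam)

print(V(0.0))          # V0 at the injection site
print(V(lam) / V0)     # ~0.368 = 1/e at a distance lambda
print(V(-lam) / V0)    # same on the other side, by symmetry
```

The absolute value in the exponent is what makes the decay symmetric about the injection site, matching the picture of current flowing out in both directions.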
Notice, the current is to the right on this side, so the current is positive; it's flowing to the left on that side, so the current is negative. So to do this properly, you'd have-- this is the solution on the right side. You'd have to write another version of this for the current on the left side, but I haven't put that in there. And, again, the current starts out at I0, and drops exponentially, and it falls to 1 over e at a distance lambda. Why is that? Because the current is leaking out through the holes in our garden hose. So as you go further down, less and less of the current is still going down the dendrite. I don't expect you to be able to derive this, but, again, just know where it comes from. Comes from Ohm's law. So I want to show you one really cool thing about the space constant. It has a really important dependence on the size of the dendrite. And we're going to learn something really interesting about why the brain has action potentials. So let's take a closer look at the space constant, and how you calculate it, and how it depends on the size, this diameter, this radius of the dendrite. So we're going to take a little cylinder of dendrite of radius a and length little l. G sub m is the membrane conductance per unit length. Let's just derive what that would look like. The total membrane conductance of this little cylinder of dendrite, little cylinder of cell membrane, is just the surface area of that cylinder times the conductance per unit area. Remember, this is the same idea that we've talked about when we were talking about the area of our soma. We have a conductance per unit area that just depends on the number of ion channels and how open they are on that piece of membrane. So the total conductance is just going to be the conductance per unit area times the area. And the area of that cylinder is 2 pi a-- that gives us the circumference-- times the length, 2 pi al.
And the conductance per unit length is just that total conductance divided by the length. So it's 2 pi a times g sub l, the conductance per unit area. So that's membrane conductance per unit length. The axial resistance per unit length along this piece, this little cylinder of dendrite, we can calculate in a similar way. The total axial resistance along that dendrite is-- can be calculated using this equation that we developed on the very first day, the resistance of a wire in the brain, the resistance of a chunk of extracellular or intracellular solution. The resistance is just the resistivity times the length divided by the area. The longer-- for a given medium of some resistivity, the longer you have to run your current through, the bigger the resistance is going to be. And the bigger the area, the lower the resistance is going to be. So that total resistance is-- it has units of ohm-millimeters. So it's the resistivity times l divided by A. In intracellular space, that's around 2,000 ohm-millimeters. And the cross-sectional area is just pi times a squared. So now we can calculate the axial resistance per unit length. That's the total resistance divided by l. So that's just resistivity divided by A, which is resistivity divided by pi radius squared, just the cross-sectional area, and that has units of ohms per millimeter. So now we can calculate the steady state space constant. Conductance per unit length and axial resistance for unit length-- the space constant is just 1 over the product of those two, square root. We're just going to notice that that's siemens per millimeter, ohms per millimeter, inverse ohms. So those cancel, and you're left with millimeter squared, square root, which is just millimeters. So, again, that has the right units, units of length. But now let's plug these two things into this equation for the space constant and calculate how it depends on a. So let's do that. 
Actually, the first thing I wanted to do is just show you what a typical lambda is for a piece of dendrite. So let's do that. Conductance per area is around 5 times 10 to the minus 7 siemens per square millimeter, typically. So the conductance per unit length of a dendrite is 6 nanosiemens per millimeter. You don't have to remember that. We're just calculating the length constant. Axial resistance is-- plugging in the numbers for a piece of dendrite that's about 2 microns in radius, the axial resistance per unit length is about 160 megaohms per millimeter. And so when you plug those two things in to calculate lambda, you find that lambda for a typical piece of dendrite is about a millimeter. So that's a number that I would hope that you would remember. That's a typical space constant. So if you inject a signal into a piece of dendrite, it's gone-- it's mostly gone or about 2/3 gone in a millimeter. And that's how you can have dendrites that are up in the range of close to a millimeter, and they still are able to conduct a signal from synaptic inputs out onto the dendrite down to the soma. So a millimeter is a typical length scale for how far signals propagate. So now let's plug in the expressions that we derived for conductance per unit length and axial resistance per unit length into this equation for the space constant. And what you find is that the space constant is a divided by 2 times the resistivity times the membrane conductance per unit area, all to the 1/2 power. It goes as the square root. The space constant, the length, goes as the square root of the radius. And notice that the space constant gets bigger as you increase the size of the dendrite. As you make a dendrite bigger, what happens is the resistance down the middle gets smaller. And so the current can go further down the dendrite before it leaks out. Does that make sense? But the resistance [AUDIO OUT] is dropping as the square of the radius, but the surface area is only increasing linearly.
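The numbers just quoted can be reproduced directly from the two expressions derived above. This sketch assumes the conductance per area is in siemens per square millimeter and the resistivity in ohm-millimeters, consistent with the units used earlier in the lecture:

```python
import math

# Parameters from the lecture (assumed units: S/mm^2, mm, ohm*mm)
g_l = 5e-7      # membrane conductance per unit area, S/mm^2
a = 0.002       # dendrite radius: 2 microns = 0.002 mm
rho = 2000.0    # intracellular resistivity, ohm*mm

# Membrane conductance per unit length: circumference times g_l
G_m = 2 * math.pi * a * g_l          # S/mm, works out to about 6 nS/mm

# Axial resistance per unit length: resistivity over cross-sectional area
R_a = rho / (math.pi * a**2)         # ohm/mm, works out to about 1.6e8 ohm/mm

# Space constant: lambda = sqrt(1 / (G_m * R_a)) = sqrt(a / (2 * rho * g_l))
lam = math.sqrt(1.0 / (G_m * R_a))   # mm
print(G_m, R_a, lam)                 # lam comes out to about 1 mm
```

Note that the intermediate factors of pi and a cancel so that lambda depends only on a / (2 * rho * g_l), which is where the square-root-of-radius scaling comes from.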
And so the resistance down the middle is dropping as the square. The conductance out the side is growing more slowly. And so the signal can propagate further the bigger the dendrite is. So that's why-- it's very closely related to why the squid giant axon is big. Because the current has more access to propagate down the axon the bigger the cylinder is. But there are limits to this. So you know that, in our brains, neurons need to be able to send signals from one side of our head to the other side of our head, which is about how big? How far is that? Not in Homer, but in [AUDIO OUT] seen the cartoon with little-- OK, never mind. How big across is the brain? How many millimeters, about? Yes. Order of magnitude, let's call it 100. So a piece of dendrite 2 microns in radius has a length constant of a millimeter. How-- what diameter dendrite would we need if we needed to send a signal across the brain passively through a piece of-- a cylindrical piece of dendrite like this? So lambda scales as the square root of the radius. A 2-micron radius gives you 1 millimeter. Now you want to go to-- you want to go 100 times further. How-- by what factor larger does the radius have to be? AUDIENCE: [INAUDIBLE]. MICHALE FEE: 10,000. Good. And so how big does our 2-micron radius piece of dendrite have to be to send a signal 100 millimeters? 10,000 times 2 microns, what is that? Anybody? AUDIENCE: [INAUDIBLE]. MICHALE FEE: 2 centimeters. So if you want to make a piece of dendrite that sends a signal from one side of your brain to the other 100 millimeters away, you need 2 centimeters. Actually, that's the radius. It needs to be 4 centimeters across. Doesn't work, does it? So you can make things-- you can make signals propagate further by making dendrites bigger, but it only goes as the square root. It's like diffusion. It's only-- it increases very slowly.
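The scaling argument above is a one-liner: since lambda goes as the square root of the radius, scaling lambda by a factor k requires scaling the radius by k squared. A quick sketch with the lecture's numbers:

```python
# lambda scales as sqrt(a): lambda / lambda0 = sqrt(a / a0)
lam0 = 1.0      # mm: space constant of the reference dendrite
a0 = 0.002      # mm: its radius (2 microns)

lam_target = 100.0              # mm: roughly one side of the brain to the other
k = lam_target / lam0           # need lambda 100 times larger
a_needed = a0 * k**2            # radius must grow by k^2 = 10,000

print(a_needed)                 # 20 mm radius, i.e. a 4 cm diameter dendrite
```

A 4-centimeter-thick "dendrite" is obviously not an option, which is exactly the lecture's point about why long-range signaling uses actively propagated action potentials instead of passive spread.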
So in order to get a signal from one side of your brain to the other with the same kind of membrane, your dendrite would have to be 4 centimeters in diameter. So that's why the brain doesn't use passive propagation of signals to get from one place to the other. It uses action potentials that actively propagate down axons. Pretty cool, right? All right. So I want to just introduce you to the concept of electrotonic length. And the idea is very simple. If we have a piece of dendrite that has some physical length l, you can see that that length l might be very good at conducting signals to the soma if what? If-- what aspects of that dendrite would make it very good at conducting signals to the soma? AUDIENCE: [INAUDIBLE]. MICHALE FEE: So it's big. Or what else? AUDIENCE: Short. MICHALE FEE: It's got a fixed physical length l, so let's think of something else. AUDIENCE: [INAUDIBLE]. MICHALE FEE: Less leaky. Right. OK. So depending on the properties of that dendrite, that piece of dendrite of physical length l might be very good at sending signals to the soma, or it might be very bad if it's really thin, really leaky. So we have to compare the physical length to the space constant. So in this case, there's very little decay. The signal is able to propagate from the site of the synapse to the soma. In this case, a slightly smaller piece of dendrite might have a shorter lambda, and so there would be more decay by the time you get to the soma. And in this case, the lambda is really short, and so the signal really decays away before you get to the soma. So people often refer to a quantity called the electrotonic length, which is simply the ratio of the physical length to the space constant. So you can see that in this case, the physical length is about the same as lambda, and so the electrotonic length is 1. In this case, the physical length is twice as long as lambda, and so the electrotonic length of that piece of dendrite is 2. And in this case, it's 4.
And you can see that the amount of signal it gets from this end to that end will go like what? How will it depend on the electrotonic length? It will depend something like e to the minus L. So a piece of dendrite that has a low electrotonic length means that the synapse out here at the other end of it is effectively very close to the soma. It's very effective at transmitting that signal. If the electrotonic length is large, it's telling you that some input out here at the end of it is very far away. The signal can't propagate to the soma. And the amount of signal that gets to the soma goes as e to the minus L, e to the minus [AUDIO OUT]. So if I told you that a signal is at the end of a piece of dendrite that has electrotonic length 2, how much of that signal arrives at the soma? The answer is e to the minus 2, about 13%. So I want to tell you a little bit more about the way people model complex dendrites in-- sort of in real life. So most of the time, we're not integrating or solving the cable equation. The cable equation is really most powerful in terms of giving intuition about how cables respond. So you can write down exact solutions to things like pulses of current input at some position, how the voltage propagates down the dendrite, the functional form of the voltage as a function of distance. But when you actually want to sort of model a neuron, you're not usually integrating the cable equation. And so people do different approximations to a very complex dendritic structure like this. And one common way that that's done is called a multi-compartment model. So, basically, what you can do is you can model the soma with this capacitor-resistor combination. And then you can model the connection to another part of the dendrite through a resistor to another sort of finite element slice, but we're not going to let the slices go to 0 length. We're just going to model them as, like, chunks of dendrite that are going to be modeled by a compartment like this.
And then that can branch to connect to other parts of the dendrite, and that can branch to connect to other parts of the model that model other pieces [AUDIO OUT] So you can basically take something like this and make it arbitrarily complicated and arbitrarily close to a representation of the physical structure of a real dendrite. And so there are labs that do this, that take a picture of a neuron like this and break it up into little chunks, and model each one of those little chunks, and model the branching structure of the real dendrite. And you can put in real ionic conductances of different types out here in this model. And you get a gazillion differential equations. And you can [AUDIO OUT] those differential equations and actually compute, sort of predict the behavior of a complex piece of dendrite like this. Now, that's not my favorite way of doing modeling. Any idea why that would be-- why there could be a better way of modeling a complex dendrite? I mean, what's the-- one of the problems here is that, in a sense, your model gets to be as complicated as the real thing. So it would be-- it's a great way to simulate some behavior, but it's not a great way of getting an intuition about how something works. So people take simplified versions of this, and they can take this very complex model and simplify it even more by doing something like this. So you take a soma and a dendrite. You can basically just break off the dendrite into a separate piece and connect it to the soma through a resistor. Now, we can simplify this even more by just turning it into another little module, a little compartment, that's kind of like the soma. It just has a capacitor, and a membrane resistance, and whatever ion channels in it you want. And it's a dendritic compartment that's connected to the somatic compartment through a resistor. And if you write that down, it just looks like this. 
So you have a somatic compartment that has a somatic membrane capacitance, somatic membrane conductances, a somatic voltage. You have a dendritic compartment that has all the same things-- dendritic membrane capacitance, conductances, and voltage, and they're just connected through a coupling resistor. It turns out that that very simple model can explain a lot of complicated things about neurons. So there are some really beautiful studies showing that this kind of model can really explain very diverse kinds of electrophysiological [AUDIO OUT] neurons. So you can take, for example, a simple model of a layer 2/3 pyramidal cell that has a simple, compact dendrite. And you can write down a model like this where you have different conduct [AUDIO OUT] dendrite. You have Hodgkin-Huxley conductances in the soma. You connect them through this resistor. And now, basically, what you can do is you can model that spiking behavior. And what you find is that if you have the same conductances in the dendrite and in the soma but you simply increase the area, the total area of this compartment, just increase the total capacitance and conductances, that you can see that-- and that would model a layer 5 neuron that has one of these very large dendrites-- you can see that the spiking behavior of that neuron just totally changes. And that's exactly what the spiking behavior of layer 5 neurons looks like. And so you could imagine building a very complicated thousand-compartment model to simulate this, but you wouldn't really understand much more about why it behaves that way. Whereas [AUDIO OUT] a simple two-compartment model and analyze it, and really understand what are the properties of a neuron that give this kind of behavior as opposed to some other kind of behavior. It's very similar to the approach that David Corey took in modeling the effect of the T tubules on muscle fiber spiking in the case of sodium-- failures of the sodium channel to inactivate. 
That was also a two-compartment model. So you can get a lot of intuition about the properties of neurons [AUDIO OUT] simple extensions of an additional compartment onto the soma. And, next time, on Thursday, we're going to extend a model like this to include a model of a [AUDIO OUT] So let me just remind you of what we learned about today. So you should be able to draw a circuit diagram of a dendrite, just that kind of finite element picture, with maybe three or four elements on it. Be able to plot the voltage in a dendrite as a function of distance in steady state for leaky and non-leaky dendrites, and understand the concept of a length constant. Know how the length constant depends on dendritic radius. You should understand the idea of an electrotonic length and be able to say how much a signal will decay for a dendrite of a given electrotonic length. And be able to draw the circuit diagram of a two-compartment model. And we're going to spend more time on that on Thursday.
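The two-compartment circuit in the summary can be sketched as a pair of coupled differential equations integrated with forward Euler. This is a deliberately minimal passive version: the capacitances, leak conductances, and coupling conductance below are illustrative placeholder values, and the active (Hodgkin-Huxley-style) conductances mentioned in the lecture are replaced by simple leaks:

```python
# Two passive compartments (soma, dendrite) coupled by a resistor.
# C dV/dt = -g_leak * (V - E_L) + I_coupling + I_ext
C_s, C_d = 0.1, 0.2      # nF: somatic and dendritic capacitances (illustrative)
g_s, g_d = 0.01, 0.02    # uS: leak conductances (illustrative)
E_L = -70.0              # mV: leak reversal potential
g_c = 0.005              # uS: coupling conductance, 1 / (coupling resistance)

dt = 0.1                 # ms: forward-Euler time step
V_s, V_d = E_L, E_L      # start both compartments at rest
I_ext = 0.1              # nA: current injected into the soma

for _ in range(20000):   # integrate 2 seconds, long enough to reach steady state
    I_c = g_c * (V_d - V_s)                        # current flowing dendrite -> soma
    V_s += dt * (-g_s * (V_s - E_L) + I_c + I_ext) / C_s
    V_d += dt * (-g_d * (V_d - E_L) - I_c) / C_d

print(V_s, V_d)   # soma depolarizes the most; the dendrite follows partway
```

The soma ends up a few millivolts above rest and the dendrite partway in between, which matches the intuition that the coupling resistor lets the dendritic compartment track an attenuated copy of the somatic voltage.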
MIT 9.40 Introduction to Neural Computation, Spring 2018. Lecture 15: Matrix Operations, Intro to Neural Computation.
MICHALE FEE: OK. All right, let's go ahead and get started. OK, so we're going to continue talking about the topic of neural networks. Last time, we introduced a new framework for thinking about neural network interactions, using a rate model to describe the interactions of neurons and develop a mathematical framework for how to combine collections of neurons to study their behavior. So, last time, we introduced the notion of a perceptron as a way of building a neural network that can classify its inputs. And we started talking about the notion of a perceptron learning rule, and we're going to flesh that idea out in more detail today. We're going to then talk about the idea of using networks to perform logic with neurons. We're going to talk about the idea of linear separability and invariance. Then we're going to introduce more complex feed-forward networks, where instead of having a single output neuron, we have multiple output neurons. Then we're going to turn to a more fully developed view of the math that we use to describe neural networks, and matrix operations become extremely important in neural network theory. And then, finally, we're going to turn to some of the kinds of transformations that are performed by matrix multiplication and by the kinds of-- by feed-forward neural networks. OK, so we've been considering a kind of neural network called a rate model that uses firing rates rather than spike trains. So we introduced the idea that we have an output neuron with firing rate v that receives input from an input neuron that has firing rate u. The input neuron synapses onto the output neuron with a synapse of weight w. And we described how we can think of the input neuron producing a synaptic input into the output neuron that has a magnitude of the firing rate times the strength of the synaptic connection. So the input to the output neuron here is w times u. 
And then we talked about how we can convert that input current, let's say, into our output neuron into a firing rate of the output neuron through some function f, which is what's called the F-I curve of the neuron that relates the input to the firing rate of the neuron. And we talked about several different kinds of F-I firing rate versus input functions that can be useful. We then extended our network from a single input neuron synapsing onto a single output neuron by having multiple input neurons. Again, the output neuron has a firing rate, and our input neurons have a vector of firing rates now-- u1, u2, u3, u4, and so on-- that we can combine together into a vector, u. Each one of those input neurons has a synaptic strength w onto our output neuron. So we have a vector of synaptic strengths. And now we can write down the input current to our output neuron as a sum of the contributions from each of those input neurons-- so w1, u1 plus w2, u2, plus w3, u3, and so on. So we can now write the input current to our output neuron as a sum of contributions that we can then write as a dot product-- w dot u. OK, any questions about that? And so, in general, we have the firing rate of our output neuron is just this F-I function, this input-output function of our output neuron acting on the total input, which is w dot u. And then we talked about different kinds of functions that are useful computationally for this function f. So in the context of the integrate and fire neuron, we talked about F-I curves that are zero below some threshold and then are linear above that threshold current. We talked last time about a binary threshold known that has zero firing rate below some threshold and then steps up abruptly to a constant output firing rate one. And then we also introduced, last time, the notion of a linear neuron, whose firing rate is just proportional to the input current and has positive and negative firing rates. 
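The rate-model setup just reviewed, an output rate v = F(w . u), can be sketched with the three F-I functions mentioned (threshold-linear, binary threshold, and linear). The weights, inputs, and threshold below are illustrative numbers, not from the lecture:

```python
import numpy as np

def fi_threshold_linear(I, theta=0.0, gain=1.0):
    """Zero below threshold, linear above (like an integrate-and-fire F-I curve)."""
    return gain * np.maximum(I - theta, 0.0)

def fi_binary(I, theta=0.0):
    """Binary threshold unit: rate 0 below theta, rate 1 above."""
    return np.where(I > theta, 1.0, 0.0)

def fi_linear(I):
    """Linear unit: rate proportional to input (can go negative)."""
    return I

# One output neuron driven by a vector of input rates u through weights w.
w = np.array([0.5, -0.2, 1.0])   # illustrative synaptic weights
u = np.array([2.0, 1.0, 0.5])    # illustrative input firing rates
I = np.dot(w, u)                  # total synaptic input, w . u = 1.3

print(fi_threshold_linear(I, theta=1.0))  # 0.3
print(fi_binary(I, theta=1.0))            # 1.0
print(fi_linear(I))                       # 1.3
```

Swapping the function F changes the computation the same circuit performs, which is why the choice of F-I curve (decision-making binary unit versus analytically convenient linear unit) matters so much in what follows.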
And we talked about the idea that although it's biophysically implausible to have neurons that have negative firing rates, that this is a particularly useful simplification of neurons. Because we can just use linear algebra to describe the properties of networks of linear neurons. And we can do some really interesting things with that kind of mathematical simplification. We're going to get to some of that today. And that allows you to really build an intuition for what neural networks can do. OK, so let's come back to what a perceptron is and introduce this perceptron learning rule. So we talked about the idea that a perceptron carries out a classification of its inputs that represent different features. So we talked about classifying animals into dogs and non-dogs based on two features of animals. We talked about the fact that you can't make that classification between dogs and non-dogs just on the basis of one of those features, because these two categories overlap in this feature and in this feature. And so in order to properly separate those categories, you need a decision boundary that's actually a combination of those two features. And we talked about how you can implement that using a simple network, called a perceptron, that has an output neuron and two input neurons. Each one of those input neurons represents the magnitude of those two different features for each object that you're trying to classify. So u1 here and u2 are the dimensions on which we're performing this classification. And so we talked about the fact that that decision boundary between those two classifications is determined by this weight vector w. And then we used a binary threshold neuron for making the actual decision. Binary threshold neurons are great for making decisions, because unlike a linear neuron-- so a linear neuron just responds more if its input is larger, and it responds less if its input is smaller.
Binary threshold neurons have a very clear threshold below which the neuron doesn't spike and above which the neuron does spike. So, in this case, this network, this output neuron here, will fire, will have a firing rate of one, for any input that's on this side of the decision boundary and will have a firing rate of zero for any input that's on this side of the decision boundary, OK? All right, so we talked about how we can, in two dimensions, just write down a decision boundary that will separate, let's say, green objects from red objects. So you can see that if you sat down and you looked at this drawing of green dots and red dots, that it would be very simple to just look at that picture and see that if you put a decision boundary right there, that you would be able to separate the green dots from the red dots. How would you actually calculate the weight vector that that corresponds to in a perceptron? Well, it's very simple. You can just look at where that decision boundary crosses the axes-- so you can see here, that decision boundary crosses the u1 axis at a value of a, crosses the u2 axis at, I should say, a value of b. And then we can use those numbers to actually calculate the w. So, remember, u is the input space. w is a weight vector that we're trying to calculate in order to place the decision boundary at that point. Is that clear what we're trying to do here? OK, so we can calculate that weight vector. We assume that theta is just some number. Let's just call it one. We have an equation for a line-- w dot u equals theta. That's the equation for that decision boundary. We have two knowns, the two points on the decision boundary that we can just read off by eye. And we have two unknowns-- the synaptic weights, w1 and w2. And so we have two equations-- ua dot w equals theta, ub dot w equals theta. And we can just solve for w1 and w2, and that's what you get, OK? So the weight vector that gives you that decision boundary is 1 over a and 1 over b, OK?
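The intercept calculation just described can be checked in a few lines. With theta set to 1, a boundary w . u = theta crossing the axes at u1 = a and u2 = b gives w = (1/a, 1/b); the specific intercepts below are illustrative:

```python
import numpy as np

theta = 1.0
a, b = 4.0, 2.0            # illustrative axis intercepts of the decision boundary

# Solve the two equations  w . (a, 0) = theta  and  w . (0, b) = theta
A = np.array([[a, 0.0],
              [0.0, b]])
w = np.linalg.solve(A, np.array([theta, theta]))
print(w)                   # [1/a, 1/b] = [0.25, 0.5]

# Any point on the boundary satisfies w . u = theta.
on_boundary = np.array([2.0, 1.0])   # midpoint of the segment from (a, 0) to (0, b)
print(np.dot(w, on_boundary))        # 1.0 = theta
```

Points with w . u > theta fall on the "firing" side of the boundary and points with w . u < theta on the silent side, which is exactly what the binary threshold output neuron computes.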
Those are the two weights. Any questions about that? OK. So in two dimensions, that's very easy to do, right? You can just look at that cloud of points, decide where to draw a line that best separates the two categories that you're interested in separating. But in higher dimensions, that's really hard. So in high dimensions, for example, we're trying to separate images, for example. So we can have a bunch of images of dogs, a bunch of images of cats. Each pixel in that image corresponds to a different input to our classification unit. And now how do you decide what all of those weights should be from all of those different pixels onto our output neuron that separates images of one class from images of another class? So there's just no way to do that by eye in high dimensions. So you need an algorithm that helps you choose that set of weights that allows you to separate different classes-- you know, a bunch of images of one class from a bunch of images of another class. And so we're going to introduce a method called the perceptron learning rule that is a category of learning rules called supervised learning rules that allow you to take a bunch of objects that you know-- so if you have a bunch of pictures of dogs, you know that they're dogs. If you have a bunch of pictures of cats, you know they're cats. So you label those images. You feed those inputs, those images, into your network, and you tell the network what the answer was. And through an iterative process, it finds all of the weights that optimally separate those two different categories. So that's called the perceptron learning rule. So let me just set up how that actually works. So you have a bunch of observations of the input. 
So in this case, I'm drawing these in two dimensions, but you should think about each one of these dots as being, let's say, an image of a dog in very high dimensions, where instead of just u1 and u2, you have u1 through u1000, where each one of those is the value of a different pixel in your image. So you have a bunch of images. Each one of those corresponds to an image of a dog. Each one of those corresponds to an image of a cat. And we have a whole bunch of different observations or images of those different categories. Any questions about that? All right, so we have n of those observations. And for each one of those observations, we say that the input is equal to one of those observations for one iteration of this learning process, OK? And so with each observation, we're told whether this input corresponds to one category or another, so a dog or a non-dog. And our output, we're asking-- we want to choose this set of weights such that the output of our network is equal to some known value. So t sub i, where if it's a dog, then the answer is one for yes. If it's a non-dog, the answer is zero for no, that's not a dog. And we have n of those answers. We have n images and labels that tell us what category that image belongs to. So for all of these, t equals one. For all of these, t equals zero. And we want to find a set of weights such that when we take the dot product of that weight vector with each one of those observations minus theta that we get an answer that is equal to t for each observation. Does that make sense? So how do we do that? All right, so each observation, we have two things-- the input and the desired output. And that gives us information that we can use to construct this weight vector. So, again, that's called supervised learning. And we're going to use an update rule, or a learning rule, that allows us to change the weight vector as a result of each estimate, depending on whether we got the answer right or not. So how do we do this?
What we're going to do is we're going to start with a random set of weights, w1 and w2, OK? And we're going to put in an input. So there's a space of inputs. We're going to start with some random weight, and I started with some random vector in this direction. You can see that that gives you a classification boundary here. And you can see that that classification boundary is not very good for separating the green dots from the red dots. Why? Because it will assign a one to everything on this side of that decision boundary and a zero to everything on that side. But you can see that that does not correspond to the assignment of green and red to each of those dots, OK? So how do we update that w in order to get the right answer? So what we're going to do is we're going to put in one of these inputs on each iteration and ask whether the network got the answer right or not. So we're going to put in one of those inputs. So let's pick that input right there. We're going to put that into our network. And we see that the answer we get from the network is one, because it's on the positive side of the decision boundary. And so one was the right answer in this case. So what do we do? We don't do anything. We say the change in weight is going to be zero if we already get the right answer. So if we got lucky and our initial weight vector was in the right direction, so our perceptron already classified the input correctly, then the weight vector is never going to change, because it was already the right answer. OK, so let's put in another input-- a red input. You can see that the correct answer is a zero. The network gave us a zero, because it's on the negative side of the decision boundary. And so, again, delta w is zero. But let's put in another input now such that we get the wrong answer. So let's put in this input right here. So you can see that the answer here, the correct answer is one, but the network is going to give us a zero.
So what do we do to update that weight vector? So if the output is not equal to the correct answer, then we're wrong. So now we update w. And the perceptron learning rule is very simple. We introduce a change in w that looks like this. It's a little change, scaled by eta, the learning rate. It's generally going to be smaller than one. So we're going to put in a small change in w that's in the direction of the input that was wrong if the correct answer is a one. We're going to make a small change to w in the opposite direction of that input if the correct answer was zero. Does that make sense? So we're going to change w in a way that depends on what the input was and what the correct answer was. So let's walk through this. So we put in an input here. The correct answer is a one, and we got the answer wrong. The network gave us a zero, but the correct answer is a one. So we're in this region here. The answer was incorrect, so we're going to update w. The correct answer was a one, so we're going to change w in the direction of that input. So that input is there. So we're going to add a little bit to w in this direction. So if we add that little bit of vector to the w, it's going to move the w vector in this direction, right? So let's do that. So there's our new w. Our new w is the old w plus delta w, which is in the direction of this incorrectly classified input. So there's our new decision boundary, all right? And let's put in another input-- let's say this one right here. You can see that this input is also incorrectly classified, because the correct answer is a zero. It's a red dot. But since it's on the positive side of the decision boundary, the network classifies it as a one. OK, good. So the network classified it as a one and the correct answer was a zero, so we were wrong. So we're going to update w, and we're going to update it in the opposite direction of the input if the correct answer was zero, which is the case.
So we're going to update w. And that's the input xi. Minus xi is in this direction. So we're going to update w in that direction. So we're going to add those two vectors to get our new w. And when we do that, that's what we get. There's our new w. There's our new decision boundary. And you can see that that decision boundary is now perfectly oriented to separate the red and the green dots. So that's Rosenblatt's perceptron learning rule. Yes, Rebecca? AUDIENCE: How do you change the learning rate? Because what if it's too big? You'll sort of get not helpful [INAUDIBLE].. MICHALE FEE: Yeah, that's right. So if the learning rate were too big, you could see this first correction. So let's say that we corrected w but made a correction that was too far in this direction. So now the new w would point up here. And that would give us, again, the wrong answer. What happens, generally, is that if your learning rate is too high, then your weight vector bounces around. It oscillates around. So it'll jump too far this way, and then it'll get an error over here, and it'll jump too far that way. And then you'll get an error over there, and it'll just keep bouncing back and forth. So the process of choosing learning rates can be a little tricky. Basically, the answer is to start small and increase it until it breaks. OK, any questions about that? So you can see it's a very simple algorithm that provides a way of changing w that is guaranteed to converge to a boundary separating these two classes of inputs, as long as such a boundary exists. All right, so let's go a little bit further into single layer binary networks and see what they can do. So these kinds of networks are very good for actually implementing logic operations. So you can see that-- let's say that we have a perceptron that looks like this. Let's give it a threshold of 0.5 and give it a weight vector that's 1 and 1. So you can see that this perceptron gives an answer of zero.
The output neuron has zero firing rate for an input of zero. But any input that's on the other side of the decision boundary produces an output firing rate of one. What that means is that if the input, (u1, u2), is (1, 0), then the output neuron will fire. If the input is (0, 1), the output neuron will fire. And if the input is (1, 1), the output neuron will fire. So, basically, any input above some threshold will make the output neuron fire. So this perceptron implements an OR gate. If it's input a or input b, the output neuron spikes, as long as those inputs are above some threshold value. So that's very much like a logical OR gate. Now let's see if we can implement an AND gate. So it turns out that implementing an AND gate is almost exactly like an OR gate. We just need-- what would we change about this network to implement an AND gate? AUDIENCE: A larger [INAUDIBLE]. MICHALE FEE: What's that? AUDIENCE: A larger theta? MICHALE FEE: Yeah, a larger theta. So all we have to do is move this line up to here. And now one of those inputs is not enough to make the output neuron fire. The other input is not enough to make the output neuron fire. Only when you have both. So that implements an AND gate. We just increase the threshold a little bit. Does that make sense? So we just increase the threshold here to 1.5. And now when either input is on at a value of one, that's not enough to make the output neuron fire. If this input's on, it's not enough. If that input's on, it's not enough. Only when both inputs are on do you get enough input to this output neuron to make it have a non-zero firing rate, to get it above threshold. Now, there's another very common logic operation that cannot be solved by a simple perceptron. That's called an exclusive OR, where we want this network to fire only if input a is on or input b is on, but not both. Why is it that that can't be solved by the kind of perceptron that we've been describing?
Anybody have some intuition about that? AUDIENCE: I mean, it's obviously [INAUDIBLE] separable. MICHALE FEE: Yeah, that's right. The keyword there is separable. If you look at this set of dots, there's no single line, there's no single boundary that separates all the red dots from all the green dots, OK? And so that set of inputs is called non-separable. And sets of inputs that are not separable cannot be classified correctly by a simple perceptron of the type we've been talking about. So how do you solve that problem? So this is a set of inputs that's non-separable. You can see that you can solve this problem now if you have two separate perceptrons. So watch this. We can build one perceptron that has a positive output when this input is on. We can have a separate perceptron that is active when that input is on. And then what would we do, if we had one neuron that's active when this input is on and another neuron that's active when that input is on? We would OR them together, that's right. So this is what's known as a multi-layer perceptron. We have two inputs, one that represents activity in a, another that represents activity in b. And we have one neuron in what's called the intermediate layer of our perceptron that has a weight vector of 1, minus 1. What that means is this neuron will be active if input a is on but not input b. This other neuron has a different weight vector-- minus 1, 1. This neuron will be active if input b is on but not input a. And the output neuron implements an OR operation that will be active when this intermediate neuron is on or that intermediate neuron is on, OK? And so that network altogether implements this exclusive OR function. Does that make sense? Any questions about that? So this problem of separability is extremely important in classifying inputs in general.
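Both ideas from this section, the perceptron learning rule and the multi-layer solution to exclusive OR, can be sketched in a few lines. The lecture's own snippets are in MATLAB; this is a NumPy sketch, where the hidden-layer weight vectors (1, -1) and (-1, 1) and the thresholds come from the discussion above, but the learning rate, epoch count, and random initialization are chosen arbitrarily.

```python
import numpy as np

def perceptron_output(w, theta, u):
    """Binary threshold unit: fires (1) if w . u exceeds threshold theta."""
    return 1 if np.dot(w, u) - theta > 0 else 0

def train_perceptron(inputs, targets, theta, eta=0.1, n_epochs=100):
    """Rosenblatt's rule: for each misclassified input, nudge w toward the
    input if the correct label is 1, and away from it if the label is 0."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(inputs.shape[1])  # random initial weights
    for _ in range(n_epochs):
        for u, t in zip(inputs, targets):
            y = perceptron_output(w, theta, u)
            if y != t:  # wrong answer: update w by +/- eta * u
                w = w + eta * u if t == 1 else w - eta * u
    return w

# The four binary input patterns and the OR labels
U = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t_or = np.array([0, 1, 1, 1])
w_or = train_perceptron(U, t_or, theta=0.5)
print([perceptron_output(w_or, 0.5, u) for u in U])  # [0, 1, 1, 1]

# XOR is not linearly separable, so no single perceptron works. The
# two-layer perceptron from the lecture: one hidden unit for "a and not b",
# one for "b and not a", then an OR unit on top.
def xor_net(u):
    h1 = perceptron_output(np.array([1., -1.]), 0.5, u)   # a AND NOT b
    h2 = perceptron_output(np.array([-1., 1.]), 0.5, u)   # b AND NOT a
    return perceptron_output(np.array([1., 1.]), 0.5, np.array([h1, h2]))

print([xor_net(u) for u in U])  # [0, 1, 1, 0]
```

Note that the training loop only settles on correct weights for OR because those labels are linearly separable; running the same loop on XOR labels never converges, which is exactly the separability problem described above.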
So if you think about classifying an image, like a number or a letter, you can see that in high-dimensional space, images that are all threes, let's say, are all very similar to each other. But they're actually not linearly separable in this space. And that's because in the high-dimensional space they exist on what's called a manifold, OK? They're like all lined up on some sheet, OK? So this is an example of rotations, and you can see that all these different threes kind of sit along a manifold in this high-dimensional space that is separate from all the other numbers. So all those threes are related to each other by what's called an invariant transformation, OK? Now, how would we separate those images of threes from all the other numbers or letters? How would we do that? Well, we could imagine building a multi-layer perceptron. So here, I'm showing that there's no single line that separates the threes on this manifold from all the other digits over here. We can solve that problem by implementing a multi-layer perceptron in which one of those perceptrons detects these objects, another perceptron detects those objects, and then we can OR those all together. So that's a kind of network that can now detect all of these threes and separate them from non-threes. Does that make sense? So we can think of objects that we recognize, like this three, as things we recognize even under different rotations or transformations or scale changes. You can also think of the problem of separating images of dogs and cats as solving this same problem: the space of dog images somehow lives on a manifold in the high-dimensional space of inputs that we can distinguish from the set of images of cats, which lives on some other manifold in this high-dimensional space. So it turns out that you need more than just a single-layer perceptron. You need more than just a two-layer perceptron.
In general, the kinds of networks that are good for separating different kinds of images, like dogs and cats and cars and houses and faces, look more like this. So this is work from Jim DiCarlo's lab, where they found evidence that networks in the brain that do image classification-- for example, in the visual pathway-- look a lot like very deep neural networks, where you have the retina on the left side here sending inputs to another layer in the thalamus, sending inputs to v1, to v2, to v4, and so on, up to IT. And we can think of this as being, essentially, many stacked layers of perceptrons that sort of unravel these manifolds in this high-dimensional space to allow neurons here at the very end to separate dogs from cats from buildings from faces. And there are learning rules that can be used to train networks like this by putting in a bunch of different images of people and other different categories that you might want to separate. And then each one of those images has a label, just like in our perceptron learning rule. And we can use the image and the correct label-- face or dog-- and train that network by projecting that information into these intermediate layers to train that network to properly classify those different stimuli, OK? This is, basically, the kind of technology that's currently being used in AI. It's being used to train driverless cars. All kinds of technological advances are based on this kind of technology. Any questions about that? Aditi? AUDIENCE: So in actual neurons, I assume it's not linear, right? MICHALE FEE: Yes. These are all nonlinear neurons. They're more like these binary threshold units than they are like linear neurons. That's right. AUDIENCE: But then-- because right now, I imagine that the models we make have to have way more perceptron units. MICHALE FEE: Yes. AUDIENCE: We use our simplified [INAUDIBLE]..
But then our brain is sometimes-- I mean, it's at, like, a much faster level, like way faster, right? So you think it'd be like-- if we examine what functions neurons might be using, in a way that would let us reduce the number of units needed? Because right now, for example, [INAUDIBLE] be a bunch of lines. But maybe in the brain, there's some other function it's using, which is smoother. MICHALE FEE: Yeah. OK, so let me just make sure I understand. You're not talking about the F-I curve of the neurons? Is that correct? You're talking about the way that you figure out these weights. Is that what you're asking about? AUDIENCE: No. I'm asking if we use a more accurate F-I curve, we'll need less units. MICHALE FEE: OK, so that's a good question. I don't actually know the answer to the question of how the specific choice of F-I curve affects the performance of this. The big problem that people are trying to figure out in terms of how these are trained is the challenge that in order to train these networks, you actually need thousands and thousands, maybe millions, of examples of different objects here and the answer here. So you have to put in many thousands of example images and the answer in order to train these networks. And that's not the way people actually learn. We don't walk around the world when we're one-year-old and our mother saying, dog, cat, person, house. You know, it would be... in order to give a person as many labeled examples as you need to give these networks, you would just be doing nothing, but your parents would be pointing things out to you and telling you one-word answers of what those are. Instead, what happens is we just observe the world and figure out kind of categories based on other sorts of learning rules that are unsupervised. We figure out, oh, that's a kind of thing, and then mom says, that's a dog. And then we know that that category is a dog. And we sometimes make mistakes, right? Like a kid might look at a bear and say, dog. 
And then dad says, no, no, that's not a dog, son. So the learning by which people train their networks to do classification of inputs is quite different from the way these deep neural networks work. And that's a very important and active area of research. Yes? AUDIENCE: Is the fact that [INAUDIBLE] use unsupervised learning, as well, to train a computer to recognize an image of a turtle as a gun, but humans can't do that [INAUDIBLE].. MICHALE FEE: Recognize a turtle if what? AUDIENCE: Like I saw this thing where it was like at MIT, they used an AI. They manipulated pixels in images and convinced the computer that it was something that it was not actually. MICHALE FEE: I see. Yeah. AUDIENCE: So like you would see a picture of a turtle, but the computer would get that picture and say it was, like, a machine gun. MICHALE FEE: Just by manipulating a few pixels and kind of screwing with its mind. AUDIENCE: Yes. So it's [INAUDIBLE]. MICHALE FEE: Yeah. Well, people can be tricked by different things. The answer is, yes, it's related to that. The problem is after you do this training, we actually don't really understand what's going on in the guts of this network. It's very hard to look at the inside of this network after it's trained and understand what it's doing. And so we don't know the answer why it is that you can fool one of these networks by changing a few pixels. Something goes wrong in here, and we don't know what it is. It may very well have to do with the way it's trained, rather than building categories in an unsupervised way, which could be much more generalizable. So good question. I don't really know the answer. Yes? AUDIENCE: Sorry, can you explain what you mean [INAUDIBLE] the neural network needs an answer? They're not categorized and then tell the user dogs? MICHALE FEE: Yeah, so no, in order to train one of these networks, you have to give it a data set, a labeled data set. So a set of images that already has the answer that was labeled by a person. 
AUDIENCE: So you can't just give it a set of photos of puppies and snakes and it'll categorize them into two groups? MICHALE FEE: No, nobody knows how to do that. People are working on that, but it's not known yet. Yes, Jasmine? AUDIENCE: [INAUDIBLE] but I see [INAUDIBLE] I can't separate them and like adding an additional feature to raise it to a higher dimensional space, where it's separable? MICHALE FEE: Sorry, I didn't quite understand. Can you say it again? AUDIENCE: I think I remember reading somewhere about how when the scenes are nonlinearly separable-- MICHALE FEE: Yes. AUDIENCE: --you can add in another feature to [INAUDIBLE].. MICHALE FEE: Yeah, yeah. So let me show you an example of that. So coming back to the exclusive OR. So one thing that you can do, you can see that the reason this is linearly inseparable-- it's not linearly separable-- is because all these points are in a plane. So there's no line that separates them. But one way, one sort of trick you can do, is to add noise to this. So that now, some of these points move. You can add another dimension. So now let's say that we add noise, and we just, by chance, happen to move the green dots this way and the red dots, well, that way. And now there's a plane that will separate the red dots from the green dots. So that's advanced beyond the scope of what we're talking about here. But yes, there are tricks that you can play to get around this exclusive OR problem, this linear separability problem, OK? All right, great question. All right, let's push on. So let's talk about more general two-layer feed-forward networks. So this is referred to as a two-layer network-- an input layer and an output layer. And in this case, we had a single input neuron and a single output neuron. We generalized that to having multiple input neurons and one output neuron. 
We saw that we can write down the input current to this output neuron as w, the vector of weights, dotted into the vector of input firing rates to give us an expression for the firing rate of the output neuron. And now we can generalize that further to the case of multiple output neurons. So we have multiple input neurons, multiple output neurons. You can see that we have a vector of firing rates of the input neurons and a vector of firing rates of the output neurons. So we used to just have one of these output neurons, and now we've got a whole bunch of them. And so we have to write down a vector of firing rates in the output layer. And now we can write down the firing rate of our output neurons as follows. So the firing rate of this neuron here is going to be a dot product of the vector of weights onto it. So the firing rate of output neuron one is the vector of weights onto that first output neuron dotted into the vector of input firing rates. And the same for the next output neuron. The firing rate of output neuron two is the dot product of the vector of weights onto output neuron two and the vector of input firing rates. Same for neuron three. And we can write that down as follows. So the firing rate of the a-th output neuron is the weight vector onto the a-th output neuron dotted into the input firing rate vector, OK? And we can write that down as follows, where we've now introduced a new thing here, which is a matrix of weights. So it's called the weight matrix. And it essentially is a matrix of all of these synaptic weights from the input layer onto the output layer. And now if we have linear neurons, we can write down the firing rates of the output neurons. The firing rate vector of the output neurons is just this weight matrix times the vector of input firing rates. So now, we've rewritten this problem of finding the vector of output firing rates as a matrix multiplication.
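For linear output neurons, this is literally a one-line computation. A NumPy sketch (the lecture's snippets are in MATLAB), where the weight matrix and input rate vector are arbitrary made-up values, just to show the convention that row a holds the weights onto output neuron a:

```python
import numpy as np

# Hypothetical weight matrix: 3 output neurons, 4 input neurons.
# Row a holds the weights onto output neuron a (post, pre convention).
M = np.array([[0.5, 0.0, 1.0, 0.2],
              [0.1, 0.3, 0.0, 0.0],
              [0.0, 1.0, 0.5, 0.1]])

u = np.array([1.0, 2.0, 0.0, 1.0])  # input firing-rate vector

# For linear neurons, the output firing-rate vector is just M times u:
v = M @ u

# Each output rate is the dot product of one row of M with u:
assert np.allclose(v, [M[a] @ u for a in range(3)])
print(v)  # [0.7 0.7 2.1]
```

The `@` operator does the matrix-vector product, so the whole feed-forward layer is one multiplication, exactly as in the text.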
And we're going to spend some time talking about what that means and what that does. So our feed-forward network implements a matrix multiplication. All right, so let's take a closer look at what this weight matrix looks like. So we have a weight matrix w sub a comma b that looks like this. So we have four input neurons and four output neurons. We have a weight for each input neuron onto each output neuron. The columns here correspond to different input neurons. The rows correspond to different output neurons. Remember, for a matrix, the elements are listed as w sub a, b, where a is the output neuron and b is the input neuron. And so it's w postsynaptic, presynaptic-- post, pre. Rows, columns. So the rows are the different output neurons. The columns are the different input neurons. So it can be a little tricky to remember. I just remember that a matrix is labeled by rows and columns, and weight matrices are postsynaptic, presynaptic-- post, pre. AUDIENCE: [INAUDIBLE] comment of [INAUDIBLE]? MICHALE FEE: I think that's standard. I'm pretty sure that's very standard. If you find any exceptions, let me know. OK, we can think of each row of this matrix as being the vector of weights onto one output neuron. That row is a vector of weights onto that output neuron-- that row, that output neuron; that row, that output neuron. Does that make sense? All right, so let's flesh out this matrix multiplication. The vector of output firing rates, we're going to write as a column vector, where the first number is this firing rate. That number is that firing rate. That number represents that firing rate, OK? That's equal to this weight matrix times the vector of input firing rates, again, written as a column vector. And in order to calculate the firing rate of the first output neuron, we take the dot product of the first row of the weight matrix and the column vector of input firing rates. And that gives us this first firing rate, OK?
To get the second firing rate, we take the dot product of the second row of weights with the vector of input firing rates, and that gives us this second firing rate. Any questions about that? Just a brief reminder of matrix multiplication. All right, no questions? All right, so let's take a step back and go quickly through some basic matrix algebra. I know most of you have probably seen this, but many haven't, so we're just going to go through it. All right, so vectors you can think of as a collection of numbers that you write down. So let's say that you are making a measurement of two different things-- let's say temperature and humidity. So you can write down a vector that represents those two quantities. Matrices you can think of as collections of vectors. So let's say we take those two measurements at three different times. So now we have a vector one, a vector two, and a vector three that measure those two quantities at three different times, all right? So we can now write all of those measurements down as a matrix, where we collect each one of those vectors as a column in our matrix, like that. Any questions about that? And there's a bit of MATLAB code that calculates this matrix by writing three different column vectors and then concatenating them into a matrix. All right, and you can see that in this matrix, the columns are just the original vectors, and the rows you can think of as a time series of our first measurement, let's say temperature. So that's temperature as a function of time. This is temperature and humidity at one time. Does that make sense? All right, so, again, we can write down this matrix. Remember, this is the first measurement at time two, the first measurement at time three. We have two rows and three columns. We can also write down what's known as the transpose of a matrix that just flips the rows and columns. So we can write the transpose, which is indicated by this capital superscript T.
And here, we're just flipping the rows and columns. So the first row of this matrix becomes the first column of the transposed matrix. So we have three rows and two columns. I'm just defining some terms now. That set of elements is the diagonal, the matrix diagonal, and a diagonal matrix is a matrix where the off-diagonal elements are zero. A symmetric matrix has the property that the transpose of that matrix is equal to the matrix itself, OK? That is only possible, of course, if the matrix has the same number of rows and columns, if it's what's called a square matrix. Let me just remind you, in general, about matrix multiplication. We can write down the product of two matrices. And we do that multiplication by taking the dot product of each row in the first matrix with each column in the second matrix. So here's the product of matrix A and matrix B. So there's the product. If matrix A is m by k-- m rows by k columns-- and matrix B has k rows by n columns, then the product of those two matrices will have m rows and n columns. And you can see that in order for matrix multiplication to work, the number of columns of the first matrix must equal the number of rows in the second matrix. You can see that this k has to be the same for both matrices. Does that make sense? So, again, in order to compute this element right here, we take the dot product of the first row of A and the first column of B. That's just 1 times 4, is 4. Plus negative 2 times 7 is minus 14. Plus 0 times minus 1 is 0. Add those up and you get minus 10. So you get this number. Then you take the dot product of this row with that column, and so on. Notice, A times B is not equal to B times A. In fact, in cases of rectangular matrices, matrices that aren't square, you often can't even do the multiplication in the other order. Mathematically, it doesn't make sense.
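These shape rules and the worked entry can be checked directly. A NumPy sketch (standing in for the lecture's MATLAB): the first row of A and first column of B are the values from the example above, and everything else, including the temperature and humidity numbers, is made up for illustration:

```python
import numpy as np

# Data matrix built from three column-vector measurements (temperature,
# humidity), mirroring the concatenation in the lecture's MATLAB snippet.
# The measurement values here are invented.
x1, x2, x3 = np.array([20., 0.4]), np.array([22., 0.5]), np.array([19., 0.6])
X = np.column_stack([x1, x2, x3])   # 2 rows (quantities) x 3 cols (times)
assert X.shape == (2, 3) and X.T.shape == (3, 2)  # transpose flips rows/cols

# Product of an (m x k) and a (k x n) matrix is (m x n).
# First row of A and first column of B match the lecture's example;
# the remaining entries are arbitrary.
A = np.array([[1., -2., 0.],
              [2.,  1., 3.]])       # 2 x 3
B = np.array([[ 4., 1.],
              [ 7., 0.],
              [-1., 2.]])           # 3 x 2
C = A @ B                           # 2 x 2

# Entry (0, 0) is the dot product of A's first row with B's first column:
assert C[0, 0] == 1*4 + (-2)*7 + 0*(-1)   # 4 - 14 + 0 = -10

# Multiplication is not commutative: A @ B is 2x2, but B @ A is 3x3.
assert (A @ B).shape != (B @ A).shape
```

For truly incompatible shapes (columns of the first not matching rows of the second), NumPy raises an error, which is the "mathematically, it doesn't make sense" case.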
So let's say that we have a matrix of vectors, and we want to take the dot product of each one of those vectors x with some other vector v. So let's just write that down. The way to do that is to take the transpose of v, which turns a column vector into a row vector. And we can now multiply that by our data matrix x, taking the dot product of v with each column of x. And that gives us the answer. So this vector, v transpose, is a one-by-two matrix. The data matrix is a two-by-three matrix. The product of those is a one-by-three matrix. Any questions about that? OK. Notice that the result of this multiplication here is a row vector, y. We can do this a different way. We can also compute this as y equals x transpose v. So here, we've taken the transpose of the data matrix times this column vector v. And again, we take the dot product of this with this, and that with that. And now we get a column vector that has the same entries that we had over here. All right, so I'm just showing you different ways that you can manipulate a vector and a matrix to compute the dot product of vectors within a data matrix with other vectors that you're interested in. All right, the identity matrix. So when you're multiplying numbers together, the number one has the special property that you can multiply any real number by one and get the same number back. You have the same kind of element in matrices. So is there a matrix that when multiplied by A gives you A? And the answer is yes. It's called the identity matrix. It's given by the symbol I, usually. A times I equals A. What does that matrix look like? The identity matrix looks like this. It's a square matrix that has ones along the diagonal and zeros everywhere else.
So you can see here that if you take an arbitrary vector x, multiplied by the identity matrix, you can see that this product is x1, x2 dotted into 1, 0, which gives you x1. And x1, x2 dotted into 0, 1 gives you x2. And so the answer looks like that, which is just x. So the identity matrix times an arbitrary vector x gives you x back. Another very useful application of linear algebra tools is to solve systems of equations. So let me show you what that looks like. So let's say we want to solve a simple equation, ax equals c. So, in this case, how do you solve for x? Well, you're just going to divide both sides by a, right? So if you divide both sides by a, you get that x equals 1 over a times c. So it turns out that there is a matrix equivalent of that, that allows you to solve systems of equations. So if you have a pair of equations-- x minus 2y equals 3 and 3x plus y equals 5-- you can write this down as a matrix equation, where you have a matrix 1, minus 2, 3, 1, which corresponds to the coefficients of x and y in these equations, times a vector (x, y), equal to another vector (3, 5). So you can write this down as Ax equals c-- that's kind of nice-- where this matrix A is given by these coefficients and this vector c is given by these terms on the right side of the equation. Now, how do we solve this? Well, can we just divide both sides of that matrix equation, that vector equation, by A? So division is not really defined for matrices, but we can use another trick. We can multiply both sides of this equation by something that makes the A go away. And so that magical thing is called the inverse of A. So we take the inverse of matrix A, denoted by A with this superscript minus 1. And that's the standard notation for identifying the inverse. It has the property that A inverse times A equals the identity matrix. So you can sort of think of A inverse as the identity matrix over A.
Anyway, don't really think of it like that. So to solve this system of equations Ax equals c, we multiply both sides by that A inverse matrix. And so that looks like this-- A inverse A times x equals A inverse c. A inverse A is just what? The identity matrix. So the identity matrix times x equals A inverse c. And we just saw before that the identity matrix times x is just x. All right, so there's the solution to this system of equations. All right, any questions about that? So how do you find the inverse of a matrix? What is this A inverse? How do you get it in real life? So in real life, what you usually do is just use the matrix inverse function in MATLAB, because for any matrices other than a two-by-two, it's really annoying to compute a matrix inverse. But for a two-by-two matrix, it's actually pretty easy. You can almost just get the answer by looking at the matrix and writing down the inverse. It looks like this. The inverse of a two-by-two square matrix is just given by a slight reordering of the entries of that matrix, divided by what's called the determinant of A. What you do is you flip the a and the d, and then you multiply the off-diagonal elements by minus 1. Now, what is this determinant? The determinant is given by a times d minus b times c. And you can prove that that actually is the inverse, because if we take this and multiply it by A, what you find when you multiply that out is that that's just equal to the identity matrix. So a matrix has an inverse if and only if the determinant is not equal to zero. If the determinant is equal to zero, you can see that this thing blows up, and there's no inverse. We're going to spend a little bit of time later talking about what it means when a matrix has an inverse and what the determinant actually corresponds to in a matrix multiplication context. If the determinant is equal to zero, we say that that matrix is singular.
And in that case, you can't actually find an inverse, and you can't solve this system of equations. All right, so let's actually go through this example. So here's our equation, Ax equals c. We're going to use the same matrix we had before and the same c. The determinant is just the product of those minus the product of those, so 1 minus negative 6. So the determinant is 7. So there is an inverse of this matrix. And we can just write that down as follows. Again, we've flipped those two and multiplied those by minus 1. So we can solve for x just by taking that inverse times c, A inverse times c. And if you multiply that out, there's the answer, x. It's just a vector. That's it. That's how you solve a system of equations, all right? Any questions about that? So this process of using matrices and their inverses to solve systems of equations is a very important concept that we're going to use over and over again. All right, let's turn to the topic of matrix transformations. All right, so you can see from this problem of solving this system of equations that that matrix A transformed a vector x into a vector c, OK? So we have this vector x, which was (13/7, minus 4/7). When we multiplied that by A, we got another vector, c. And the matrix A inverse transforms this vector c back into vector x, right? So we can take that vector c, multiply it by A inverse, and get back to x. Does that make sense? So, in general, a matrix A maps a set of vectors in this whole space. So if you have a two-by-two matrix, it maps a set of vectors in R2 onto a different set of vectors in R2. So you can take any vector here-- a vector from the origin to here-- multiply that vector by A, and it gives you a different vector. And if you multiply that other vector by A inverse, you go back to the original vector.
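The worked example is easy to verify numerically. This NumPy sketch builds the two-by-two inverse by hand, exactly as described (swap a and d, negate the off-diagonal entries, divide by the determinant ad minus bc), rather than calling a library inverse:

```python
import numpy as np

# The system from the lecture: x - 2y = 3 and 3x + y = 5, written as A x = c.
A = np.array([[1., -2.],
              [3.,  1.]])
c = np.array([3., 5.])

# Determinant ad - bc; nonzero, so A is invertible (not singular).
det = A[0, 0]*A[1, 1] - A[0, 1]*A[1, 0]
assert det == 7.0

# 2x2 inverse: swap the diagonal entries a and d, negate b and c, divide by det.
A_inv = np.array([[ A[1, 1], -A[0, 1]],
                  [-A[1, 0],  A[0, 0]]]) / det

assert np.allclose(A_inv @ A, np.eye(2))   # A inverse times A is the identity
x = A_inv @ c                              # the solution of the system
assert np.allclose(x, [13/7, -4/7])
assert np.allclose(A @ x, c)               # and A maps x back onto c
```

In practice you would just call `np.linalg.inv(A)` (the analog of MATLAB's `inv`), but writing the formula out confirms both the inverse and the round trip between x and c described in the text.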
So this matrix A implements some kind of transformation on this space of real numbers into a different space of real numbers, OK? And you can only do this inverse if the determinant of A is not equal to zero. So I just want to show you what different kinds of matrix transformations look like. So let's start with the simplest matrix transformation-- the identity matrix. So if we take a vector x, multiply it by the identity matrix, you get another vector y, which is equal to x. So what we're going to do is we're going to kind of riff off of a theme here, and we're going to take slight perturbations of the identity matrix and see what that new matrix does to a set of input vectors, OK? So let me show you how we're going to do that. We're going to take the identity matrix 1, 0, 0, 1. And we're going to add a little perturbation to the diagonal elements. And we're going to see what that does to a set of input vectors. So let me show you what we're doing here. We have each one of these red dots. So what I did was I generated a bunch of random numbers in a 2D space. So this is a 2D space. And I just randomly selected a bunch of numbers, a bunch of points on that plane. And each one of those is an input vector x. And then I multiplied that vector times this slightly perturbed identity matrix. And then I get a bunch of output vectors y. Input vectors x are the red dots. The output vectors y are the other end of this blue line. Does that make sense? So for every vector x, multiplying it by this matrix gives me another vector that's over here. Does that make sense? So you can see that what this matrix does is it takes this space, this cloud of points, and stretches them equally in all directions. So it takes any vector and just makes it longer, stretches it out. No matter which direction it's pointing, it just makes that vector slightly longer. And here's that little bit of code that I used to generate those vectors. OK, so let's take another example.
Let's say that we take the identity matrix and we just add a little perturbation to one element of the identity matrix, OK? So what does that do? It stretches the vectors out in the x direction, but it doesn't do anything to the y direction. So for a vector with a component in the x direction, the x component gets increased by a factor of 1 plus delta. The components of each of these vectors in the y direction don't change, all right? So we're going to take this cloud of points, and we're going to stretch it in the x direction. What about this matrix here? What's that going to do? AUDIENCE: Stretch it in the y direction. MICHALE FEE: Good. It's going to stretch it out in the y direction. Good. So that's kind of cute. And you can see that this earlier matrix that we looked at right here stretches in the x direction and stretches in the y direction. And that's why that cloud of vectors just stretched out equally in all directions. What about this? What is that going to do? AUDIENCE: It would stretch in the x direction and compress in the y direction MICHALE FEE: Right. This perturbation here is making this component, the x component, larger. This perturbation here-- and delta here is small. It's less than one. Here, it's making the y component smaller. And so what that looks like is the y component of each one of these vectors gets smaller. The x component gets larger. And so we're squeezing in one direction and stretching in the other direction. Imagine we took a block of sponge and we grabbed it and stretched it out, and it gets skinny in this direction and stretches out in that direction. All right, that's kind of cool. What is this going to do? Here, I'm not making a small perturbation of this, but I'm flipping the sign of one of those. What happens there? What is that going to do? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. What do we call that? There's a term for it. What do you-- yeah, it's called a mirror reflection.
So every point that's on this side of the origin gets reflected over to this side of the origin. And every point that's over here-- sorry, of this axis. Every point that's on this side of the y-axis gets reflected over to this side. So that's called a mirror reflection. What is this? What is that going to do? Abiba? AUDIENCE: Reflect it [INAUDIBLE]. MICHALE FEE: Right. It's going to reflect it through the origin, like this. So every point that's over here, on one side of the origin, is going to reflect through to the other side. That's pretty neat. Inversion through the origin, OK? So we have symmetric perturbations in the x and y components of the identity matrix. We have a stretch transformation that stretches along one axis, but not the other. Stretch along the other axis, the y-axis, but not the x-axis. Stretch along x and compression along y. Mirror reflection through the y-axis. Inversion through the origin. These are examples of diagonal matrices, OK? So the only thing we've done so far-- we've gotten all these really cool transformations, but the only thing we've done so far is change these two diagonal elements. So there's a lot more crazy stuff to happen if we start messing with the other components. Oh, and I should mention that we can invert any one of these transformations that we just did by finding the inverse of this matrix. The inverse of a diagonal matrix is very simple to calculate. It's just one over those diagonal elements. All right, how about this? What is that going to do? Anybody? When you take a vector and you multiply it by that, what's going to happen? This part is going to give you the original vector back. This part is going to take a little bit of the y component and add it to the x component. So what does that do? That produces what's known as a shear. So for points up here, we're going to take a little bit of the y component and add it to the x component. So if something has a big y component, it's going to be shifted in x.
If something has a negative y component, it's going to shift this way in x. If something has a positive y component, it's going to shift this way in x. And it's going to produce what's called a shear. So we're pushing these points this way, pushing those points this way. Shear is very important in things like the flow of liquid. So when you have liquid flowing over a surface, you have forces, frictional forces on the liquid down here that prevent it from moving. Liquid up here moves more quickly, and it produces a shear in the pattern of velocity profiles. OK, that's pretty cool. What about this? It's going to just produce a shear along the other direction. That's right. So now vectors that have a large x component acquire a negative projection in y. OK, what does this look like? It's pretty cool. We're going to get some shear in this direction, get some shear in this direction. What's it going to do? AUDIENCE: [INAUDIBLE] MICHALE FEE: Good. Good guess. That's exactly right, it produces a rotation. Not exactly a rotation, but very close. So that's how you actually produce a rotation. So notice, for small angles theta, these are close to one, so it's close to an identity matrix. These are close to zero, but this is negative and this is positive, or the other way around. So if we have diagonals close to one and the off-diagonals one positive and one negative, then that produces a rotation. That, formally, is a rotation matrix. Yes? AUDIENCE: On the previous slide, is there a reason you chose to represent the delta on the x-axis as negative? MICHALE FEE: No. It goes either way. So if you have a rotation angle that's positive, then this is negative and this is positive. If your rotation angle is the other sign, then this is positive and this is negative. So, for example, if we want to produce a 45-degree rotation, then we have 1, 1, minus 1, 1. And of course, all those things have a square root of 2, 1 over square root of 2, in them.
And so that looks like this. So if you have, let's say, theta equals 10 degrees, we can produce a 10-degree rotation of all the vectors. If theta is 25 degrees, you can see that the rotation is further. Theta 45, that's this case right here. You can see that you get a 45-degree rotation of all of those vectors around the origin. And if theta is 90 degrees, you can see that, OK? Pretty cool, right? OK, what is the inverse of this rotation matrix? So if we have a rotation-- oh, and I just want to point out one more thing. In this formulation of the rotation matrix, positive angles correspond to rotating counterclockwise. Negative angles correspond to rotation in the clockwise direction, OK? So there's a big hint. What is the inverse of our rotation matrix? If we have a rotation of 10 degrees this way, what is the inverse of that? AUDIENCE: [INAUDIBLE] MICHALE FEE: Right. AUDIENCE: [INAUDIBLE] MICHALE FEE: That's right. Remember, matrix multiplication implements a transformation. The inverse of that transformation just takes you back where you were. So if you have a rotation matrix that implemented a 20-degree rotation in the plus direction, then the inverse of that is a 20-degree rotation in the minus direction. So the inverse of this matrix you can get just by putting in a minus sign into the theta. And you can see that cosine of minus theta is just cosine of theta. But sine of minus theta is negative sine of theta. So the inverse of this matrix is just this. You change the sign of those off-diagonals, which just makes the shear go in the opposite direction, right? OK, so a rotation by angle plus theta followed by a rotation of angle minus theta puts everything back where it was. So the rotation matrix phi of minus theta times phi of theta is equal to the identity matrix. So those two are inverses of each other. And notice that the inverse of this rotation matrix is also just the transpose of the rotation matrix.
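Those last two facts are easy to sanity-check in a few lines. This is a minimal sketch of the rotation matrix described above; the 20-degree angle is just an example value:

```python
import numpy as np

def rotation(theta_deg):
    """2D rotation matrix; positive angles rotate counterclockwise."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

R = rotation(20.0)

# Rotating by +20 degrees and then -20 degrees returns every vector home,
# so the rotation by minus theta is the inverse of the rotation by theta...
assert np.allclose(rotation(-20.0) @ R, np.eye(2))

# ...and that inverse is also just the transpose of the rotation matrix.
assert np.allclose(np.linalg.inv(R), R.T)

v = np.array([1.0, 0.0])
print(R @ v)   # the unit x vector rotated 20 degrees counterclockwise
```

Changing the sign of theta flips only the sine (off-diagonal) terms, which is exactly why the inverse equals the transpose here.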
All right, so what you can see is that these different cool transformations that these matrix multiplications can do are just examples of what our feed-forward network can do. Because the feed-forward network just implements matrix multiplication. So this feed-forward network takes a set of vectors, a set of input vectors, and transforms them into a set of output vectors, all right? And you can understand what that transformation does just by understanding the different kinds of transformations you can get from matrix multiplication. All right, we'll continue next time.
MIT 9.40 Introduction to Neural Computation, Spring 2018. Lecture 18: Recurrent Networks.
MICHALE FEE: All right, let's go ahead and get started. So we're starting a new topic today. This is actually one of my favorite lectures, one of my favorite subjects in computational neuroscience. All right, so brief recap of what we've been doing. So we've been working on circuit models of neural networks. And we've been working on what we call a rate model, in which we replaced all the spikes of a neuron with, essentially, a single number that characterizes the rate at which a neuron fires. We introduced a simple network in which we have an input neuron and an output neuron with a synaptic connection of weight w between them. And that synaptic connection leads to a synaptic input that's proportional to w times the firing rate of the input neuron. And then we talked about how we can characterize the output, the firing rate of the output neuron, as some nonlinear function of the total input to this output neuron. We've talked about different F-I curves. We've talked about having what's called a binary threshold unit, which has zero firing below some threshold. And then actually, there are different versions of the binary threshold unit. Sometimes the firing rate is zero for inputs below the threshold. And in other models, we use a minus 1. And then a constant firing rate of one above that threshold. And we also talked about linear neurons, where we can write down the firing rate of the output neuron just as a weighted sum of the inputs. And remember that these neurons are kind of special in that they can have negative firing rates, which is not really biophysically plausible, but mathematically, it's very convenient to have neurons like this. So we took this simple model and we expanded it to the case where we have many input neurons and many output neurons. So now we have a vector of input firing rates, u, and a vector of output firing rates, v.
And for the case of linear neurons, we talked about how you can write down the vector of firing rates of the output neuron simply as a matrix product of a weight matrix times the vector of input firing rates. And we talked about how this can produce transformations of this vector of input firing rates. So in this high-dimensional space of inputs, we can imagine stretching that input vector along different directions to amplify certain directions that may be more important than others. We talked about how you can do that, stretch in arbitrary directions, not just along the axes. And we talked about how that vector of-- that, sorry, matrix of weights can produce a rotation. So we can have some set of inputs where, let's say, we have clusters of different input values corresponding to different things. And you can rotate that to put certain features in particular output neurons. So now you can discriminate one class of objects from another class of objects by looking at just one dimension and not the whole high-dimensional space. So today, we're going to look at a new kind of network called a recurrent neural network, where not only do we have inputs to our output neurons from an input layer, but we also have connections between the neurons in the output layer. So these neurons in a recurrent network talk to each other. And that imbues some really cool properties onto these networks. So we're going to develop the math and describe how these things work to develop an intuition for how recurrent networks respond to their inputs. We're going to get into some of the computations that recurrent networks can do. They can act as amplifiers in particular directions. They can act as integrators, so they can accumulate information over time. They can generate sequences. They can act as short-term memories of either continuous variables or discrete variables. It's a very powerful kind of circuit architecture. 
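The feed-forward building block in that recap, output rates as a weight matrix times input rates, can be sketched in a couple of lines. The weight matrix here is a made-up 45-degree rotation, standing in for the rotation-of-input-clusters idea described above:

```python
import numpy as np

# A linear feed-forward layer is just v = W u: the weight matrix W
# transforms the vector of input firing rates u into output rates v.
# W here is a hypothetical 45-degree rotation (the weights are made up).
W = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)

u = np.array([1.0, 0.0])   # input firing-rate vector
v = W @ u                  # output firing-rate vector
print(v)                   # the input pattern, rotated by 45 degrees
```

With a rotation like this, input clusters that differ along a diagonal direction end up differing along a single output neuron's axis, which is the discrimination trick mentioned in the recap.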
And on top of that, in order to describe these mathematically, we're going to use all of the linear algebra tools that we've been developing so far. So, hopefully, a bunch of things will kind of connect together. OK, so mathematical description of recurrent networks. We're going to talk about dynamics in these recurrent networks, and we're going to start with the very simplest kind of recurrent network called an autapse network. Then we're going to extend that to the general case of recurrent connectivity. And then we're going to talk about how recurrent networks store memories. So we'll start talking about specific circuit models for storing short-term memories. And I'll touch on recurrent networks for decision-making. And this will kind of lead into the last few lectures of the class, where we get into specific cases of how networks can store memories. OK, mathematical description. All right, so the first thing that we need to do is-- the really cool thing about recurrent networks is that their activity can evolve over time. So we need to talk about dynamics, all right? The feed-forward networks that we've been talking about, we just put in an input. It gets weighted by synaptic strength, and we get a firing rate in the output, just sort of instantaneously. We've been thinking of you put an input, and you get an output. In general, neural networks don't do that. You put an input, and things change over time until you settle at some output, maybe, or it starts doing something interesting, all right? So the time course of the activity becomes very important, all right? So neurons don't respond instantaneously to inputs. There are synaptic delays. There is integration of membrane potential. Things change over time. And a specific example of this that we saw in the past is that if you have an input spike, you can produce a postsynaptic current that jumps up abruptly as the synaptic conductance turns on.
And then the synaptic conductance decays away as the neurotransmitter unbinds from the neurotransmitter receptor, and you get a synaptic current that decays away over time, OK? So that's a simple kind of time dependence that you would get. And that could lead to time dependence in the firing rate of the output neuron. OK, dendritic propagation, membrane time constant, other examples of how things can take time in a neural network. All right, so we're going to model the firing rate of our output neuron in the following way. If we have an input firing rate that's zero and then steps up to some constant and then steps down, we're going to model the output, the firing rate of the output neuron, using exactly the same kind of first order linear differential equation that we've been using all along for the membrane potential, for the Hodgkin-Huxley gating variables. The same kind of differential equation that you've seen over and over again. So that's the differential equation we're going to use. We're going to say that the time derivative of the firing rate of the output neuron times the time constant is just equal to minus the firing rate of the output neuron plus v infinity. And so you know that the solution to this equation is that the firing rate of the output neuron will just relax exponentially to some new v infinity. And the v infinity that we're going to use is just this non-linear function times the weighted input to our neuron. So we're going to take the formalism that we developed for our feed-forward networks to say, what is the firing rate of the output neuron as a function of the inputs? And we're going to use that firing rate that we've been using before as the v infinity for our network with dynamics. Any questions about that? All right, so that becomes our differential equation now for this recurrent network, all right?
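A forward-Euler sketch makes that differential equation concrete. The time constant, step timing, and input amplitude below are made-up values, and the neuron is taken as linear so the nonlinearity F is just the identity:

```python
import numpy as np

# Forward-Euler integration of  tau dv/dt = -v + v_inf  for a step input.
# tau, dt, and the step timing are made-up values for illustration.
tau, dt = 10.0, 0.1                             # milliseconds
t = np.arange(0.0, 100.0, dt)
h = np.where((t >= 20) & (t < 60), 1.0, 0.0)    # step input (w times u)

v = np.zeros_like(t)
for i in range(1, len(t)):
    v_inf = h[i - 1]                            # linear neuron: F(wu) = wu
    v[i] = v[i - 1] + (dt / tau) * (-v[i - 1] + v_inf)

# v relaxes exponentially toward 1 while the step is on, back to 0 after
print(v[int(60 / dt) - 1])   # close to 1 near the end of the step
```

The trace rises exponentially toward v infinity when the input steps up and decays exponentially back to zero when it steps down, exactly the behavior described above.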
So it's just a first order linear differential equation, where the v infinity, the steady state firing rate of the output neuron, is just this nonlinear function times the weighted sum of all the inputs. All right, and actually, for most of what we do today, we're going to just take the case of a linear neuron. All right. So this I've already said. This I've already said. And actually, what I'm doing here is just extending this. So this was the case for a single output neuron and a single input neuron. What we're doing now is we're just extending this to the case where we have a vector of input neurons with a firing rate represented by a firing rate vector u, and a vector of output neurons with a firing rate vector v. And we're just going to use this same differential equation, but we're going to write it in vector notation. So each one of these output neurons has an equation like this, and we're going to combine them all together into a single vector. Does that make sense? All right, so there is our vector notation of the activity in this recurrent network. Sorry, I forgot to put the recurrent connections in there. So the time dependence is really simple in this feed-forward network, right? So in a feed-forward network, the dynamics just look like this. But in a recurrent network, this thing can get really interesting and start doing interesting stuff. All right, so let's add recurrent connections now and add these recurrent connections to our equation. So in addition to this weight matrix w that describes the connections from the input layer to the output layer, we're going to have another weight matrix that describes the connections between the neurons in the output layer. And this weight matrix, of course, has to be able to describe a connection from any one of these neurons to any other of these neurons.
And so this weight matrix is going to be a function of the postsynaptic neuron and the presynaptic neuron-- the synaptic strength is going to be a function of the identity of the postsynaptic neuron and the identity of the presynaptic neuron. Does that make sense? OK, so there are two kinds of input-- a feed-forward input from the input layer and a recurrent input due to connections within the output layer. Any questions about that? OK, so there is the equation now that describes the time rate of change of the firing rates in the output layer. It's just this first order linear differential equation. And the v infinity is just this non-linear function of the inputs, of the net input to each neuron. And the net input to this set of neurons is a contribution from the feed-forward inputs, given by this weight matrix w, and this contribution from the recurrent inputs, given by this weight matrix, m. So that is the crux of it, all right? So I want to make sure that we understand where we are. Does anybody have any questions about that? No? All right, then I'll push ahead. All right, so what is this? So we've seen this before. This product of this weight matrix times this vector of input firing rates just looks like this. You can see that the input to this neuron, this first output neuron, is just the dot product of these weights onto the first neuron-- the dot product of that vector of weights, that row of the weight matrix, with the vector of input firing rates. And the feed-forward contribution to this neuron is just the dot product of that row of this input weight matrix with the vector of input firing rates, and so on. If we look at the recurrent input to these neurons, the recurrent input to this first neuron is just going to be the dot product of this row of the recurrent weight matrix and the vector of firing rates in the output layer.
The recurrent inputs to the second neuron is going to be the dot product of this row of the weight matrix and the vector of firing rates. Yes? AUDIENCE: So I guess I'm a little confused, because I thought it was from A. Oh, to A. OK. MICHALE FEE: Yeah, it's always post, pre. Post, pre in a weight matrix. That's because we're usually writing down these vectors the way that I'm defining this notation. This vector is a column matrix, a column vector. All right, so we're going to make one simplification to this. When we work with the recurrent networks, we're usually going to simplify this input. And rather than write down this complex feed-forward component, writing this out as this matrix product, we're just going to simplify the math. And rather than carry around this w times u, we're just going to replace that with a vector of inputs onto each one of those neurons, OK? So we're just going to pretend that the input to this neuron is just coming from one input, OK? And the input to this neuron is coming from another single input. And so we're just going to replace that feed-forward input onto this network with this vector h. So that's the equation that we're going to use moving forward, all right? Just simplifies things a little bit so we're not carrying around this w u. So now, that's our equation that we're going to use to describe this recurrent network. This is a system of coupled equations. What does that mean? You can see that the time derivative of the firing rate of this first neuron is given by a contribution from the input layer and a contribution from other neurons in the output layer. So the time rate of change of this neuron depends on the activity in all the other neurons in the network. And the time rate of change in this neuron depends on the activity of all the other neurons in the network. So that's a set of coupled equations. And that, in general, can be-- you know, it's not obvious, when you look at it, what the solution is, all right? 
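One way to build intuition about these coupled equations is just to integrate them numerically. This is a hedged sketch, not the lecture's own demo: the symmetric recurrent matrix M and the input h are made-up values, chosen so the eigenvalues of M stay below 1 and the network settles to a steady state:

```python
import numpy as np

# Forward-Euler integration of the coupled dynamics
#     tau dv/dt = -v + M v + h
# for linear neurons. M (post, pre) and h are made-up values; a symmetric
# M with eigenvalues below 1 keeps the network stable.
tau, dt, steps = 10.0, 0.1, 2000
M = np.array([[0.0, 0.5],
              [0.5, 0.0]])          # recurrent weights within the output layer
h = np.array([1.0, 0.0])            # effective feed-forward input vector

v = np.zeros(2)
for _ in range(steps):
    v = v + (dt / tau) * (-v + M @ v + h)

print(v)   # steady state satisfies v = M v + h, i.e. v = (I - M)^-1 h
```

Even though the input drives only the first neuron, the recurrent coupling drags the second neuron's rate up too, which is exactly the "neurons talk to each other" behavior this section is about.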
So we're going to develop the tools to solve this equation and get some intuition about how networks like this behave in response to their inputs. So the first thing we're going to do is to simplify this network to the case of linear neurons. So we don't have-- so the neurons just fire. Their firing rate is just linear with their input. And so that's the equation for the linear case. All we've done is we've just gotten rid of this non-linear function f. All right, so now let's take a very simple case of a recurrent network and use this equation to see how it behaves, all right? So the simplest case of a recurrent network is the case where the recurrent connections within this layer are given by-- the weight matrix is given by a diagonal matrix. Now, what does that correspond to? What that corresponds to is this neuron making a connection onto itself with a synapse of weight lambda one, right there. And that kind of recurrent connection of a neuron onto itself is called an autapse, like an auto synapse. And we're going to put one of those autapses on each one of these neurons in our output layer, in our recurrent layer. So now we can write down the equation for this network, all right? And what we're going to do is simply replace-- sorry, let me just bring up that equation again. Sorry, there's the equation. And we're simply going to replace this weight matrix m, this recurrent weight matrix, with that diagonal matrix that I just showed you. So there it is. So that time rate of change of this vector of output neurons is just minus v plus this diagonal matrix times [INAUDIBLE] plus the inputs. So now you can see that if we write out the equation separately for each one of these output neurons-- so here it is in vector notation. We can just write that out for each one of our output neurons. So there's a separate equation like this for each one of these neurons. But you can see that these are all uncoupled. 
So we can understand how this network responds just by studying this equation for one of those neurons. OK, so let's do that. We have an independent equation. The time derivative of the firing rate of neuron one depends only on the firing rate of neuron one. It doesn't depend on any other neurons. As you can see, it's not connected to any of the other neurons. OK, so let's write this equation. And let's see what that equation looks like. So we're going to rewrite this a little bit. We're just going to factor out the v a right here. This parameter, 1 minus lambda a, controls what kind of solutions this equation has. And there are three different cases that we need to consider. We need to consider the case where 1 minus lambda is greater than zero, equal to zero, or less than zero. Those three different values of that parameter 1 minus lambda give three different kinds of solutions to this equation. We're going to start with the case where lambda is less than one. And if lambda is less than 1, then this term right here is greater than zero. In that case, we can rewrite this equation as follows. We're going to divide both sides of this equation by 1 minus lambda, and that's what we have here. And you can see that this equation starts looking very familiar, very simple. We have a first order linear differential equation, where we have a time constant here, tau over 1 minus lambda, and a v infinity here, which is the effective input onto that neuron divided by 1 minus lambda. So that's tau dv dt equals minus v plus v infinity. But now you can see that the time constant and the v infinity depend on lambda, depend on the strength of that connection, all right? And we've seen the solution to this equation before. It's just exponential relaxation toward v infinity. OK, so here's our v infinity. There's our tau.
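The two formulas just derived, v infinity equals h over (1 minus lambda) and an effective time constant of tau over (1 minus lambda), can be tabulated for a few autapse strengths. The numerical values of tau and h here are made up for illustration:

```python
def autapse_steady_state(h, lam, tau=10.0):
    """Steady state and effective time constant for
    tau dv/dt = -v + lam*v + h, i.e. tau dv/dt = -(1 - lam)*v + h."""
    assert lam < 1.0, "stable regime requires 1 - lambda > 0"
    return h / (1.0 - lam), tau / (1.0 - lam)

# lambda = 0:   no autapse          -> v_inf = h,   time constant tau
# lambda = 0.5: positive feedback   -> v_inf = 2h,  slower (2 tau)
# lambda = -1:  self-inhibition     -> v_inf = h/2, faster (tau/2)
for lam in (0.0, 0.5, -1.0):
    print(lam, autapse_steady_state(1.0, lam))
```

This reproduces the pattern discussed below: positive feedback amplifies and slows the response, negative feedback suppresses and speeds it up.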
Let's just look at these solutions for the case of lambda between zero and one. So I'm going to plot v as a function of time when we have an input that goes from zero and then steps up and then is held constant. All right, so let's look at the case of lambda equals zero. So lambda zero means there's no autapse. It's just not connected. So you can see that, in this case, the solution is very simple. It's just exponential relaxation toward v infinity. v infinity is just given by h, the input, and tau is just the original tau, tau over 1 minus 0, right? So it's just exponential relaxation to h. Does that make sense? And it relaxes with a time constant tau, tau m. We're going to now turn up the synapse a little bit so that it has a little bit of strength. You see that what happens when lambda is 0.5 is that v infinity gets bigger. v infinity goes to 2h. Why? Because it's h divided by 1 minus 0.5. So it's h over 0.5, so 2h. And what happens to the time constant? Well, it becomes two tau. All right, and if we make lambda equal to 0.66, we turn it up a little bit more. You can see that the response of this neuron gets even bigger. So you can see that what's happening is that when we start letting this neuron feed back to itself, positive feedback, the response of the neuron to a fixed input-- the input is the same for all of those-- the response of the neuron gets bigger. And so having positive feedback of that neuron onto itself through an autapse just amplifies the response of this neuron to its input. Now, let's consider the case where-- so positive feedback amplifies the response. And what else does it do? It slows the response down. The time constants are getting longer, which means the response is slower. All right, let's look at what happens when the lambdas are less than zero. What does lambda less than zero correspond to here? AUDIENCE: [INAUDIBLE] MICHALE FEE: Yeah, which is, in neurons, what does that correspond to?
AUDIENCE: [INAUDIBLE] MICHALE FEE: Inhibition. So this neuron, when you put an input in, it tries to activate the neuron. But that neuron inhibits itself. So what do you think's going to happen? So positive feedback made the response bigger. Here, the neuron is kind of inhibiting itself. So what's going to happen? You put in that same h that we had before, what's going to happen when we have inhibition? AUDIENCE: Response is [INAUDIBLE]. MICHALE FEE: What's that? AUDIENCE: The response is going to be smaller. MICHALE FEE: The response will just be smaller, that's right. So let's look at that. So here's the firing rate of this neuron as a function of time for a step input. You can see for lambda equals zero, we're going to respond with an amount h, in a time constant tau. If we put in a lambda of negative one-- that means you put this input in, that neuron starts inhibiting itself-- you can see the response is smaller. But another thing that's real interesting is that you can see that the response of the neuron is actually faster. So if the lambda is minus one, you can see that v infinity is h over 1 minus negative 1. So it's h over 2. All right, and so on. The more we turn up that inhibition, the more suppressed the neuron is, the weaker the response of that neuron to its input, but the faster it is. So negative feedback suppresses the response of the neuron and speeds up the response. OK, now, there's one other really important thing about recurrent networks in this regime, where this lambda is less than one. And that is that the activity always relaxes back to zero when you turn the input off. OK, so you put a step input in, the neuron responds, relaxing exponentially to some v infinity. But when you turn the input off, the network relaxes back to zero, OK? So now let's go to the more general case of recurrent connections.
Oh, and first, I just want to show you how we actually show graphically how a neuron responds-- sorry, how one of these networks respond. And a typical way that we do that is we plot the firing rate of one neuron versus the firing rate of another neuron. That's called a state-space trajectory. And we plot that response as a function of time after we put in an input. So we can put an input in described as some vector. So we put in some h1 and h2, and we then plot the response of the neuron-- the response of the network in this output state space. So let me show you an example of what that looks like. So here is the output of this little network for different kinds of inputs. So Daniel made this nice little movie for us. Here, you can see that if you put an input into neuron one, neuron one responds. If you put a negative input into neuron one, the neuron goes negative. If you put an input into neuron two, the neuron responds. And if you put a negative input into neuron two, it responds. Now, why did it respond bigger in this direction than in this direction? AUDIENCE: That's [INAUDIBLE]. MICHALE FEE: Good. Because neuron one had-- AUDIENCE: Positive? MICHALE FEE: Positive feedback. And neuron two had negative feedback. So neuron one, this neuron one, amplified its input and gave a big response. Neuron two suppressed the response to its input, and so it had a weak response. Let's look at another interesting case. Let's put an input into these neurons-- not one at a time, but simultaneously. So now we're going to put an input into both neurons one and two simultaneously. It's like Spirograph. Did you guys play with Spirograph? It's kind of weird, right? It's like making little butterflies for spring. So why does the output-- why does the response of this neuron to an input, positive input to both h1 and h2, look like this? Let's just break this down into one of these little branches. We start at zero. 
We put an input into h1 and h2, and the response goes quickly like this and then relaxes up to here. So why is that? Lena? AUDIENCE: [INAUDIBLE] so there was [INAUDIBLE] and then because it's negative, it's shorter. MICHALE FEE: Yup. The response in the v2 direction is weak but fast. AUDIENCE: Yeah. MICHALE FEE: So it goes up quickly. And then the response in the v1 direction is? AUDIENCE: Slow, but [INAUDIBLE]. MICHALE FEE: Good. That's it. It's slow, but big. It's amplified in this direction, suppressed in this direction. But the response is fast this way and slow this way. So it traces this out. Now, when you turn the input off, again, it relaxes. v2 relaxes quickly back to zero, and v1 relaxes slowly back to zero. So it kind of traces out this kind of hysteretic loop. It's not really hysteresis. Then it's exactly mirror image when you put in a negative input. And when you put in h1 positive and h2 negative, it just looks like a mirror image. All right, so any questions about that? Yes, Lena? AUDIENCE: If there was nothing, like no kind of amplified or [INAUDIBLE], would it just be like a [INAUDIBLE]? MICHALE FEE: Yeah, so if you took out the recurrent connections, what would it look like? AUDIENCE: An x? MICHALE FEE: Yeah, the output-- so let's say that you just literally set those to zero. Then the response will be the identity matrix, right? You get the output as a function of input. Let's just go back to the equation. You can always, always get the answer by looking at the equation. Too many animations. No, it's a very good question. Here we go. There it is right there. So you're asking about-- let's just ask about the steady state response. So we can set dv dt equal to zero. And you're asking, what is v? And you're saying, let's set lambda to zero, right? We're going to set all these diagonal elements to zero. And so now v equals h. OK, great question. Now, let's go to the case of fully recurrent networks.
We've been working with this simplified case of just having neurons have autapses. And the reason we've been doing that is because the answer you get for the autapse kind of captures almost all the intuition that you need to have. What we're going to do is we're going to take a fully recurrent neural network, and we're going to do a mathematical trick that just turns it into an autapse network. And the answer for the fully recurrent network is just going to be just as simple as what you saw here. All right, so let's do that. Let's take this fully recurrent network. Our weight matrix m now, instead of just having diagonal elements, also has off-diagonal elements. And I'll say that one of the things that we're going to do today is just consider the simplest case of this fully recurrent network, where the connections are symmetric, where a connection from v1 to v2 is equal to the connection from v2 to v1, all right? We're going to do that because that's the next thing to do to build our intuition, and it's also mathematically simpler than the fully general case, OK? So we saw how the behavior of this network is very simple if m is diagonal. So what we're going to do is we're going to take this arbitrary matrix m, and we're going to just make it diagonal. So let's do that. So we're going to rewrite our weight matrix m as-- so we're going to rewrite m in this form, where this phi-- sorry, where this lambda is a diagonal matrix. So we're going to take this network with recurrent connections between different neurons in the network, and we're going to transform it into sort of an equivalent network that just has autapses. So how do we write m in this form, with a rotation matrix times a diagonal matrix times a rotation matrix? We just solve this eigenvalue equation, OK? Does that make sense? We're just going to do exactly the same thing we did in PCA, where we find the covariance matrix. And we rewrote the covariance matrix like this. 
Now we're going to take the weight matrix of this recurrent network, and we're going to rewrite it in exactly the same way. So that process is called diagonalizing the weight matrix. So the elements of lambda here are the eigenvalues of m. And the columns of phi are the eigenvectors of m. And we're going to use these quantities, these elements, to build a new network that has the same properties as our recurrent network. So let me just show you how we do that. So remember that this is an eigenvalue equation written in matrix notation. What this means is that this is a set of n eigenvalue equations like this, where there's one of these for each neuron in the network. OK, so let me just go through that. OK, so here's the eigenvalue equation. If M is a symmetric matrix, then the eigenvalues are real and phi is a rotation matrix. And the eigenvectors give us an orthogonal basis, all right? So everybody remember this from a few lectures ago? If M is symmetric-- and this is why we're going to, from this point on, consider just the case where M is symmetric-- then the eigenvectors, the columns of that matrix phi, give us an orthogonal set of vectors, and they're unit vectors. So it satisfies this orthonormal condition. And phi transpose phi is an identity matrix, which means phi is a rotation matrix. OK, so now what we're going to do is rewrite. The first thing we're going to do to use this trick to rewrite our matrix, our network, is to rewrite the vector of firing rates v in this new basis. What are we going to do? We'll take the vector, and all we're going to do is rewrite that vector in this new basis set. We're just going to do a change of basis of our firing rate vector into a new basis set that's given by the columns of phi. Another way of saying it is that we're going to rotate this firing rate vector v using the phi rotation matrix. So we're going to project v onto each one of those new basis vectors.
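The diagonalization step is easy to check directly. A sketch in Python/numpy, where `numpy.linalg.eigh` plays the role of Matlab's `eig` for symmetric matrices (the example matrix values are an arbitrary assumption):

```python
import numpy as np

# An example symmetric weight matrix (values chosen arbitrarily).
M = np.array([[0.2, 0.6],
              [0.6, 0.2]])

evals, Phi = np.linalg.eigh(M)   # eigenvalues, and eigenvectors as columns
Lam = np.diag(evals)

# M = Phi Lambda Phi^T, and Phi^T Phi = I (Phi is a rotation matrix).
print(np.allclose(Phi @ Lam @ Phi.T, M))    # True
print(np.allclose(Phi.T @ Phi, np.eye(2)))  # True
```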
So there's v in the standard basis. There's our new basis, f1 and f2. We're going to project v onto f1 and f2 and write down the scalar projections, c1 and c2. So we're going to write down the scalar projection of v onto each one of those basis vectors. So we can write that c sub alpha-- that's the alpha-th component-- is just v dot the alpha-th basis vector. So now we can express v as a linear combination in this new basis. So it's c1 times f1 plus c2 times f2 plus c3-- that's supposed to be a three-- times f3 and so on. And of course, remember, we're doing all of this because we want to understand the dynamics. So these things are time dependent. So v changes in time. We're not going to be changing our basis vectors in time. So if we want to write down a time dependent v, it's really these coefficients that are changing in time, right? Does that make sense? So we can now write our vector v, our firing rate vector, as a sum of contributions in all these different directions corresponding to the new basis. And each one of those coefficients c is just the time dependent v projected onto one of those basis vectors. Any questions? No? OK. And remember, we can write that in matrix notation using this formalism that we developed in the lecture on basis sets. So v is just phi c, and c is just phi transpose v. So we're just taking this vector v, and we're rotating it into a new basis set, and we can rotate it back. All right, so now what we're going to do is we're going to take this v expressed in this new basis set, and we're going to rewrite our equation in that new basis set. Watch this. This is so cool. All right, you ready? We're going to take this, and we're going to plug it into here. So dv dt is phi dc dt. v is just phi c-- v is phi c, and h doesn't change. So now what is that? Do you remember? AUDIENCE: Phi [INAUDIBLE]. MICHALE FEE: Right. We got phi as the solution to the eigenvalue equation. What was the eigenvalue equation?
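The change of basis itself is just two matrix multiplies. A small sketch (Python/numpy; the weight matrix and the example vector are arbitrary assumptions) of c = Phi^T v and the inverse rotation v = Phi c:

```python
import numpy as np

M = np.array([[0.0, 0.8],
              [0.8, 0.0]])
_, Phi = np.linalg.eigh(M)     # columns f1, f2 are the new basis vectors

v = np.array([1.0, 3.0])       # an arbitrary firing-rate vector
c = Phi.T @ v                  # mode amplitudes: projections of v onto f1, f2
v_back = Phi @ c               # linear combination c1*f1 + c2*f2

print(c)
print(np.allclose(v_back, v))  # True: rotating back recovers v
```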
The eigenvalue equation was m phi equals phi lambda. So the phi here, this rotation matrix, is the solution to this equation, all right? So we're given m, and we're saying we're going to find a phi and a lambda such that we can write m phi is equal to phi lambda. So when we take that matrix m and we run eig on it in Matlab, Matlab sends us back a phi and a lambda such that this equation is true. So literally, we can take the weight matrix m, stick it into Matlab, and get a phi and a lambda such that m phi is equal to phi lambda. So m phi is equal to what? Phi lambda. That becomes this. Now, all of a sudden, this thing is just going to simplify. So how would we simplify this equation? We can get rid of all of these things, all of these phi's, by doing what? How do you get rid of phi's? AUDIENCE: Multiply [INAUDIBLE] phi transpose. MICHALE FEE: You multiply by phi transpose, exactly. So we're going to multiply each term in this equation by phi transpose. So what do you have? Phi transpose phi, phi transpose phi, phi transpose phi. What is phi transpose phi equal to? The identity matrix. Because it's a rotation matrix, phi transpose is just the inverse of phi. So phi inverse phi is just equal to the identity matrix. And all those things disappear. And you're left with this equation-- tau dc dt equals minus c plus lambda c plus hf. And what is hf? hf is just h rotated into the new basis set. So this is the equation for a recurrent network with just autapses, which we just understood. We just wrote down what the solution is, right? And we plotted it for different values of lambda. So now let's just look at what some of these look like. So we've rewritten our weight matrix in a new basis set. We've rebuilt our network in a new basis set, in a rotated basis set where everything simplifies.
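One can verify numerically that the rotated equations really are the same dynamics. This sketch (Python/numpy; tau, h, and the 0.8 coupling are assumptions) integrates the full network tau dv/dt = -v + Mv + h and the mode equations tau dc/dt = -c + Lambda c + hf side by side, then rotates c back with v = Phi c:

```python
import numpy as np

tau, dt, steps = 10.0, 0.01, 50000
M = np.array([[0.0, 0.8],
              [0.8, 0.0]])
evals, Phi = np.linalg.eigh(M)
Lam = np.diag(evals)

h = np.array([0.3, 1.0])
hf = Phi.T @ h                 # the input rotated into the mode basis

v = np.zeros(2)                # full recurrent network
c = np.zeros(2)                # equivalent "autapse" network of modes
for _ in range(steps):
    v += (dt / tau) * (-v + M @ v + h)
    c += (dt / tau) * (-c + Lam @ c + hf)

print(np.allclose(Phi @ c, v))   # True: same dynamics in either basis
```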
So we've taken this complicated network with recurrent connections, and we've rewritten it as a new network, where each of these neurons in our new network corresponds to what's called a mode of the fully recurrent network. So the activities c alpha-- c1 and c2-- of the network modes represent activity in a linear combination of these neurons. So we're going to go through what that means now. So the first thing I want to do is just calculate what the steady state response is in this network. And I'll just do it mathematically, and then I'll show you what it looks like graphically. So there's our original network equation. We've rewritten it as a set of differential equations for the modes of this network. I'm just rewriting this by putting an I here, minus I times c. That's the only change I made here. I just rewrote it like this. Let's find a steady state. So we're going to set dc dt equal to zero. We're going to ask, what is c in steady state? So we're going to call that c infinity, all right? I minus lambda times c infinity equals phi transpose h. OK, don't panic. It's all going to be very simple in a second. c infinity is just I minus lambda inverse phi transpose h. But I is diagonal. Lambda is diagonal. So I minus lambda inverse is just a diagonal matrix with one over all those diagonal elements. Now let's calculate v infinity. v infinity is just phi times c infinity. So here, we're multiplying on the left by phi. That's just v infinity. So v infinity is just this. So what is this? This just says v infinity is some matrix-- it's a rotated stretch matrix-- times the input. So v infinity is just this matrix times h. And now let's look at what that is. v infinity is a matrix times h. We're going to call that matrix g, a gain matrix. We're going to think of that as a gain times the input. So it's just a matrix operation on the input. This matrix has exactly the same eigenvectors as m.
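The gain-matrix form of the steady state can be checked against a direct linear solve of 0 = -v + Mv + h. A sketch (Python/numpy, same assumed 0.8 network as above):

```python
import numpy as np

M = np.array([[0.0, 0.8],
              [0.8, 0.0]])
evals, Phi = np.linalg.eigh(M)

# G = Phi (I - Lambda)^-1 Phi^T: a rotated stretch matrix.
G = Phi @ np.diag(1.0 / (1.0 - evals)) @ Phi.T

h = np.array([0.0, 1.0])
v_inf_gain = G @ h
v_inf_direct = np.linalg.solve(np.eye(2) - M, h)   # solves 0 = -v + Mv + h

print(np.allclose(v_inf_gain, v_inf_direct))       # True
```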
And the eigenvalues are just 1 over 1 minus lambda. Hang in there. So what this means is that if an input is parallel to one of the eigenvectors of the weight matrix, the output is parallel to the input. So if the input is in the direction of one of the eigenvectors, v infinity is g times f. But g times f-- f is an eigenvector. And what that means is that v infinity is parallel to f with a scaling factor 1 over 1 minus lambda. All right? So hang in there. I'm going to show you what this looks like. So in steady state, the output will be parallel to the input if the input is in the direction of one of the eigenvectors of the network. So if the input is in the direction of one of the eigenvectors of the network, that means you're activating only one mode of the network. And only that one mode responds, and none of the other modes respond. The response of the network will be in the direction of that input, and it will be amplified or suppressed by this gain factor. And the time constant will also be increased or decreased by that factor. So now let's look at-- so I just kind of whizzed through a bunch of math. Let's look at what this looks like graphically for a few simple cases. And then I think it will become much more clear. Let's just look at a simple network, where we have two neurons with an excitatory connection from neuron one to neuron two and an excitatory connection from neuron two to neuron one. And we're going to make that weight 0.8. OK, so what does the weight matrix M look like? Just tell me what the entries are for M. AUDIENCE: Does it not have the autapse? MICHALE FEE: No, so there's no connection of any of these neurons onto themselves. AUDIENCE: So you have, like, zeros on the diagonal. MICHALE FEE: Zeros on the diagonal. Good. AUDIENCE: All the diagonals. MICHALE FEE: Good. Like that? Good. The connection from neuron one to itself is zero. The convention is post, pre-- row, column. So onto neuron one from neuron two is 0.8.
Onto neuron two from neuron one is 0.8. And neuron two onto neuron two is zero. So now we are just going to diagonalize this weight matrix. We're going to find the eigenvectors and eigenvalues. The eigenvectors are the columns of phi. And the eigenvalues are the diagonal elements of lambda. Let's take a look at what those eigenvectors are. So this vector here is f1. This vector here is another eigenvector, f2. And how did I get this? How did I get this from this? How would you do that? If I gave you this matrix, how would you find phi? AUDIENCE: Eig M. MICHALE FEE: Good, eig of M. Now, remember in the last lecture when we were talking about some simple cases of matrices that are really easy to find the eigenvectors of? If you have a symmetric matrix, where the diagonal elements are equal to each other, the eigenvectors are always 45 degrees here and 45 degrees there. And the eigenvalues are just the diagonal elements plus or minus the off-diagonal elements. So the eigenvalues here are 0.8 and minus 0.8. All right, so those are the two eigenvectors of this matrix, of this network. Those are the modes of the network. Notice that one of the modes corresponds to neuron one and neuron two firing together. The other mode corresponds to neuron one and neuron two firing with opposite sign-- minus one, one. So the diagonal elements of the lambda matrix are the eigenvalues. They're 0.8 and minus 0.8, a plus or minus b. Now, this gain factor-- what this says is that if I have an input in the direction of f1, the response is going to be amplified by a gain. And remember, we just derived, on the previous slide, that that gain factor is just 1 over 1 minus the eigenvalue for that eigenvector. In this case, the eigenvalue for mode one is 0.8. So 1 over 1 minus 0.8 is 5. So the gain in this direction is 5. The gain for an input in this direction is 1 over 1 minus negative 0.8, which is 1 over 1.8. Does that make sense?
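These numbers are easy to confirm. A sketch (Python/numpy) of the 0.8 mutual-excitation network, checking the 45-degree eigenvectors, the eigenvalues plus or minus 0.8, and the gains 1/(1 - lambda):

```python
import numpy as np

M = np.array([[0.0, 0.8],
              [0.8, 0.0]])
evals, Phi = np.linalg.eigh(M)   # eigh returns eigenvalues in ascending order

print(evals)                     # [-0.8, 0.8]
print(np.abs(Phi))               # every entry 1/sqrt(2): modes at 45 degrees

gains = 1.0 / (1.0 - evals)
print(gains)                     # [1/1.8, 5.0]
```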
OK, let's keep going, because I think it will make even more sense once we see how the network responds to its inputs. So zero input. Now we're going to put an input in the direction of this mode one. And you can see the mode responds a lot. Put a negative input in, it responds a lot. If we put an input in this direction or this direction, the response is suppressed by an amount of about 0.5. Because here, the gain is small. Here, the gain is big. So you see what's happening? This network looks just like an autapse network, but where we've taken this input and output space and just rotated it into a new coordinate system, into this new basis. Yes? AUDIENCE: Why did it kind of loop around on the one side [INAUDIBLE]? MICHALE FEE: OK, it's because these things are relaxing exponentially back to zero. And we got a little bit impatient and started the next input before it had quite gone away. OK, good question. It's just that if you really wait for a long time for it to settle, then the movie just takes a long time. But maybe it would be better to do that. So input this way and this way lead to a large response, because those inputs activate mode one, which has a big gain. Inputs in this direction and this direction have a small response, because they activate mode two, which has small gain. But notice that when you activate mode one-- when you put an input in this direction, it only activates mode one. And it doesn't activate mode two at all. If you put an input in this direction, then it only activates mode two, and it doesn't activate mode one at all. So it's just like the autapse network, but rotated. So now let's do the case where we have an input that activates both modes. So let's say we put an input in this direction. What does that direction correspond to? h up. What does that input mean here in terms of h1 and h2? Let's say we just put an input-- remember, this is a plot on axes h1 versus h2.
So this input vector h corresponds to just putting an input on h2, into this neuron. So you can see that when we put an input in this direction, we're activating-- that input has a projection onto mode one and mode two. So we're activating both modes. You can see that the input h has a projection onto f1 and a projection onto f2. So what you do is-- well, here, I'm just showing you what the steady state response is mathematically. Let me just show you what that looks like. What this says is that if we put an h in this direction, it's going to activate a little bit of mode one with a big gain and a little bit of mode two with a very small gain. And so the steady state response will be the sum of those two. It'll be up here. So the steady state response to this input in this direction is going to be over here. Why? Because that input activates mode one and mode two both. But the response of mode one is big, and the response of mode two is really small. And so the steady state response is going to be way over here because of the big response, the amplified response of mode one, which is in this direction, OK? So when we put an input straight up, the response of the network's going to be all the way over here. How is it going to get there? Let's take a look. We're going to put an input-- sorry, that was first in this direction. Now let's see what happens when we put an input in this direction. You can see the response is really big along the mode one direction, in this direction, and it's really small in this direction. So input in the upward direction onto just this neuron produces a large response in mode one, which is this way, and a very small response in mode two, which is this way. The response in mode two is very fast, because the 1 over 1 minus lambda is small, which makes the time constant faster and the response smaller. So, again, it's just like the response of the autapse network, but rotated into a new coordinate system.
All right, any questions about that? So you can see we basically understood everything we needed to know about recurrent networks just by understanding simple networks with just autapses. And all these more complicated networks are just nothing but rotated versions of the response of a network with just autapses. Any questions about that? OK, let's do another network now where we have inhibitory connections. That's called mutual inhibition. And let's make that inhibition minus 0.8. The weight matrix is just zeros on the diagonals, because there's no autapse here. And minus 0.8 on the off-diagonals. What are the eigenvectors for this matrix, for this network? AUDIENCE: The same. MICHALE FEE: Yeah, because the diagonal elements are equal to each other, and the off-diagonal elements are equal to each other. It's a symmetric network with equal diagonal elements. The eigenvectors are always at 45 degrees. And what are the eigenvalues? AUDIENCE: [INAUDIBLE] MICHALE FEE: Well, the two numbers are going to be the same. It's zero plus and minus 0.8, plus and minus negative 0.8, which is just 0.8 and minus 0.8, right? Good. So the eigenvalues are just 0.8 and minus 0.8. But the eigenvalues correspond to different eigenvectors. So now the eigenvalue mode in the 1, 1 direction is now minus 0.8, which means it's suppressing the response in this direction. And the eigenvalue for the eigenvector in the minus 1, 1 direction is now close to 1, which means that mode has a lot of recurrent feedback. And so its response in this direction is going to be big. It's going to be amplified. So unlike the case where we had positive recurrent synapses, where we had amplification in this direction, now we're going to have amplification in this direction. Does that make sense? Think of it this way-- if we go back to this network here, you can see that when these two neurons-- when this neuron is active, it tends to activate this neuron. 
And when this neuron is active, it tends to activate that neuron. So this network, if you were to activate one of these neurons, tends to drive the other neuron also. And so the activity of those two neurons likes to go together. When one is big, the other one wants to be big. And that's why there's a lot of gain in this direction. Does that make sense? With these recurrent excitatory connections, it's hard to make this neuron fire and make that neuron not fire. And that's why the response is suppressed in this direction, OK? With this network, when this neuron is active, it's trying to suppress that neuron. When that neuron has a positive firing rate, it's trying to make that neuron have a negative firing rate. When that neuron is negative, it tries to make that one go positive. And so this network likes to have one neuron firing positive and the other neuron going negative. And so that's what happens. What you find is that if you put an input into the first neuron, it tends to suppress the activity in the second neuron, in v2. If you put input into neuron two, it tends to suppress the activity, or make v1 go negative. So it's, again, exactly like the autapse network, but just, in this case, rotated minus 45 degrees instead of plus 45 degrees, OK? Any questions about that? All right. So now let's talk about how-- yes, Linda? AUDIENCE: So we just did, those were all symmetric matrices, right? MICHALE FEE: Yes. AUDIENCE: So [INAUDIBLE] can we not do this strategy if it's not symmetric? MICHALE FEE: You can do it for non-symmetric matrices, but non-symmetric matrices start doing all kinds of other cool stuff that is a topic for another day. So symmetric matrices are special in that they have very simple dynamics. They just relax to a steady state solution. Weight matrices that are not symmetric, or even anti-symmetric, tend to do really cool things like oscillating. And we'll get to that in another lecture, all right?
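Flipping the sign of the coupling swaps which mode is amplified, as a quick check shows (Python/numpy sketch, with the assumed weight of minus 0.8):

```python
import numpy as np

M = np.array([[0.0, -0.8],
              [-0.8, 0.0]])        # mutual inhibition, no autapses
evals, Phi = np.linalg.eigh(M)

# Which eigenvector is the (1,1) "fire together" mode?
together = np.array([1.0, 1.0]) / np.sqrt(2)
idx = int(np.argmax(np.abs(Phi.T @ together)))

print(evals[idx])        # -0.8: the together mode is suppressed
print(evals[1 - idx])    # +0.8: the (-1,1) opponent mode is amplified
```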
OK, so now let's talk about using recurrent networks to store memories. So, remember, all of the networks we've just described had the property that the lambdas were less than one. So what we've been looking at are networks for which lambda is less than one and the weight matrices are symmetric. So that was kind of a special case, but it's a good case for building intuition about what goes on. But now we're going to start branching out into more interesting behavior. So let's take a look at what happens to our equation. This is now our equation for the different modes of a network. What happens to this equation when lambda is actually equal to one? So when lambda is equal to one, this term goes to zero, right? So we can just cross this out and rewrite our equation as tau dc dt equals f1 dot h. So what is this? What does that look like? What's the solution to c for this differential equation? Does this exponentially relax toward a v infinity? What is v infinity here? It's not even defined. If you set dc dt equal to zero, there's not even a c to solve for, right? So what is this? The derivative of c is just equal to-- if we put in an input that's constant, what is c? AUDIENCE: [INAUDIBLE] MICHALE FEE: This is an integrator, right? This c, the solution to this equation, is that c is the integral of this input. c is some initial c plus the integral over time. So if we have an input-- and again, what we're plotting here is the activity of one of the modes of our network, c1, which is a function of the projection of the input along the eigenvector of mode one. So we're going to plot h, which is just how much the input overlaps with mode one. And as a function of time, let's start at t equals zero. What will this look like? This will just increase linearly. And then what happens? What happens here? Raymundo? AUDIENCE: It just stays constant. MICHALE FEE: Good. We've been through that, like, 100 times in this class.
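The integrator behavior at lambda equal to one is a one-line simulation. In this sketch (plain Python; tau, dt, and the input timing are arbitrary assumptions), a constant input produces a linear ramp, and the activity holds its value once the input turns off:

```python
# Mode equation at lambda = 1:  tau dc/dt = hf  (the leak term cancels).
tau, dt = 10.0, 0.01
hf = 1.0

c = 0.0
for _ in range(int(100.0 / dt)):   # input on: c ramps up linearly
    c += (dt / tau) * hf
c_end_of_input = c

for _ in range(int(100.0 / dt)):   # input off: dc/dt = 0, c holds its value
    c += (dt / tau) * 0.0

print(c_end_of_input, c)           # ramps to 10, then stays at 10
```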
Now, what's special about this network is that-- remember, when lambda was less than one, the network would respond to the input. And then what would it do when we took the input away? It would decay back to zero. But this network does something really special. This network, you put an input in and then take the input away, this network stays active. It remembers what the input was. Whereas, if you have a network where lambda is less than one, the network very quickly forgets what the input was. All right, what happens when lambda is greater than one? So when lambda is greater than one, this thing inside the parentheses is negative, multiplied by a negative number. This whole coefficient in front of the c1 becomes positive. So we're just going to write it as lambda minus one. And so this becomes positive. And what does that solution look like? Does anyone know what that looks like? dc dt equals a positive number times c. Nobody? Are we all just sleepy? What happens? So if this is negative, if this coefficient were negative, dc-- if c is positive, then dc dt is negative, and it relaxes to zero, right? Let's think about this for a minute. What happens if this quantity is positive? So if c is positive-- cover that up. If this is positive and c is positive, then dc dt is positive. So that means if c is positive, it just keeps getting bigger, right? And so what happens is you get exponential growth. So if we now take an input and we put it into this network, where lambda is greater than one, you get exponential growth. And now what happens when you turn that input off? Does it go away? What happens? Draw with your hand what happens here. So just look at the equation. Again, h dot f1 is zero here, so that's gone. This is positive. c is positive. So what is dc dt? Good. It's positive. And so what is-- AUDIENCE: [INAUDIBLE] MICHALE FEE: It keeps growing. So you can see that this network also remembers that it had input. So this network also has a memory.
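The lambda greater than one case can be sketched the same way (plain Python; the value lambda = 1.5 and the timing are assumptions for illustration). A brief input kicks the mode away from zero, and the activity keeps growing exponentially after the input is removed:

```python
# Mode equation at lambda > 1:  tau dc/dt = (lambda - 1) c + hf.
tau, dt, lam = 10.0, 0.01, 1.5

c = 0.0
for _ in range(int(10.0 / dt)):    # brief input hf = 1
    c += (dt / tau) * ((lam - 1.0) * c + 1.0)
c_end_of_input = c

for _ in range(int(50.0 / dt)):    # input off: exponential growth continues
    c += (dt / tau) * (lam - 1.0) * c

print(c_end_of_input, c)           # c is even bigger after the input ends
```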
So anytime you have lambda less than one, the network just-- as soon as the input goes away, the network activity goes to zero, and it just completely forgets that it ever had input. Whereas, as long as lambda is equal to or greater than one, then this network remembers that it had input. So if lambda is less than one, then the network relaxes exponentially back to zero after the input goes away. If you have lambda equal to one, you have an integrator, and the network activity persists after the input goes away. And if you have exponential growth, the network activity also persists after the input goes away. And so that right there is one of the best models for short-term memory in the brain. The idea that you have neurons that get input, become activated, and then hold that memory by reactivating themselves and holding their own activity high through recurrent excitation. But that excitation has to be big enough to either just barely maintain the activity or continue increasing their activity. OK, now, that's not necessarily such a great model for a memory, right? Because we can't have neurons whose activity is exploding exponentially, right? So that's not so great. But it is quite commonly thought that in neural networks involved in memory, the lambda is actually greater than one. And how would we rescue this situation? How would we save our network from having neurons that blow up exponentially? Well, remember, this was the solution for a network with linear neurons. But neurons in the brain are not really linear, are they? They have firing rates that saturate. At higher inputs, firing rates tend to saturate. Why? Because sodium channels become inactivated, and the neurons can't respond that fast, right? All right, this I've already said. So we use what are called saturating non-linearities. So it's very common to write down models in which we can still have neurons that are approximately linear.
So it's quite common to have neurons that are linear for small inputs. They can go plus and minus, but they saturate on the plus side or the minus side. So now you can have an input to a neuron that activates the neuron. You can see what happens is you start activating this neuron. It keeps activating itself, even as the input goes away. But now, what happens is that activity starts getting up into the regime where the neuron can't fire any faster. And so the activity becomes stable at some high value of firing. Does that make sense? And this kind of neuron, for example, can remember a plus input, or it can remember a minus input. Does that make sense? So that's how we can build a simple network with a neuron that can remember its previous inputs with a lambda that's greater than one. And this right here, that basic thing, is one of the models for how the hippocampus stores memories-- that you have hippocampal neurons that connect to each other with a lot of recurrent connections; the hippocampus has a lot of recurrent connections. And the idea is that those neurons activate each other, but then those neurons saturate so they can't fire anymore, and now you can have a stable memory of some prior input. And I think we should stop there. But there are other very interesting topics that we're going to get to on how these kinds of networks can also make decisions and how they can store continuous memories-- not just discrete memories, plus or minus, on or off, but can store a value for a long period of time using this integrator. OK, so we'll stop there.
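This saturating-memory idea fits in a few lines. The sketch below (Python/numpy) uses tanh as the saturating nonlinearity and lambda = 2; both choices are illustrative assumptions, not the lecture's specific model. A brief positive or negative input latches the unit at a stable high or low firing rate:

```python
import numpy as np

def run(h0, tau=10.0, dt=0.01, lam=2.0, t_max=500.0):
    """Integrate tau dc/dt = -c + lam*tanh(c) + h, with a brief input h0."""
    c = 0.0
    for i in range(int(t_max / dt)):
        h = h0 if i * dt < 20.0 else 0.0   # input on only at the start
        c += (dt / tau) * (-c + lam * np.tanh(c) + h)
    return c

# The unit remembers the sign of its transient input long after it ends.
print(run(+1.0))   # latches at a stable positive rate
print(run(-1.0))   # latches at a stable negative rate
```

The stable values are the nonzero fixed points of c = lam*tanh(c); the fixed point at zero is unstable when lam is greater than one, which is what makes the memory bistable.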
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 8: Spike Trains (Intro to Neural Computation)
MICHALE FEE: OK, good morning. So far in class, we have been developing an equivalent circuit model of a neuron, and we have extended that model to understanding how action potentials are generated. And, more recently, we extended the model to understanding the propagation of signals in dendrites. So today we are going to consider how we can record electrical signals related to neural activity in the brain, and we're going to understand a little bit about how we can, in particular, record extracellular signals. So that will be the focus of today's lecture. So, so far in class, we have been analyzing measurements of electrical signals recorded inside of neurons. For example, in the voltage clamp experiment, we imagined that we were placing electrodes inside of cells so that we could measure the voltage inside of those cells. But it's actually quite difficult, in general, to record membrane potentials of neurons in behaving animals, although it's certainly possible. It's much easier to record electrical signals outside of neurons. In this case, we can actually record action potentials. And so those kinds of recordings are made by placing metal electrodes that are insulated everywhere along the shank of the electrode except right near the tip. And if we place a metal electrode in the brain, we can record voltage changes in the brain right near cells of interest. So in this case, we can record action potentials of individual neurons in behaving animals and relate them to various aspects of either sensory stimuli or behavior. So this kind of recording is called extracellular recording, and we're going to go through a very simple analysis of how to think about extracellular recordings. So recall, of course, that when we measure voltages in the brain, we're always measuring voltage differences. So when we place an electrode in the brain near a cell, we connect that electrode to an amplifier.
Usually, we use a differential amplifier that provides us with a measurement of the voltage difference between two terminals, the plus terminal and the minus terminal. We connect the electrode to the plus terminal, and we connect another electrode, called the ground electrode, that can be placed some distance away from the brain area that we're recording from, or on the surface of the skull. So we're measuring the voltage that's near a cell relative to the voltage that's someplace further away. Now, the voltage changes that we measure in the brain are always associated with current flow through the extracellular space. So we can analyze this in terms of Ohm's law. Basically, the voltage changes that we are measuring are going to be associated with some current through extracellular space times some effective resistance of the extracellular space. And you remember from previous lectures that the effective resistance of extracellular space is proportional to the resistivity of extracellular space times a length scale divided by an area scale. So how do we think about what kind of voltage changes we might expect if we placed an electrode near a cell that is generating an action potential? Let's start with a spherical neuron, and let's give this spherical neuron sodium channels and potassium channels so that it can generate an action potential. During an action potential, we have an influx of sodium, followed by an efflux of potassium, and that influx of sodium produces a large positive-going change in the voltage inside the cell. So you can see that during the rising phase of the action potential, we have sodium flowing in. But at the same time, we have current flowing outward through the membrane in the form of capacitive current.
Now, these two currents, the sodium ions flowing through the membrane and the capacitive current flowing outwards through the membrane, are co-localized on the same piece of membrane. And so there's no spatial separation of the currents flowing through the membrane. As a result of that, there's actually no current flow in extracellular space, and so there are no extracellular voltage changes. The first lesson from this is that, if we were to record in extracellular space from a spherical neuron with no dendrites and no axon, we would actually not be able to measure any extracellular voltage change. Now let's consider what happens if we have a neuron with a dendrite. In this case, when the sodium current flows into the soma, part of that current will flow out through capacitive current, but part of it will flow down the dendrite, then out through the membrane through capacitive current and back to the soma. So we have a closed circuit of current: current flowing into the soma, out through the dendrite, and then back to the soma through extracellular space. So in this case, if we write down the equivalent circuit model of what this looks like-- this is the somatic compartment, this is the dendritic compartment-- in our earlier calculations of current flow through dendrites, we were neglecting the extracellular resistance, but in this case we are going to include it, because that extracellular resistance is what produces a voltage drop in extracellular space. During an action potential, we have current flowing in through the soma, out through the inside of the dendrite, back out through the membrane of the dendrite, and through extracellular space back to the soma. The voltage drop across this region of extracellular space is just this extracellular current times the extracellular resistance, and that voltage drop is what we measure as a change in extracellular voltage.
So now we have a simple view in which current flows into the soma from a region around the soma. So we have what is known as a current sink: charges are flowing into the soma from the region around it. That current then flows out through the dendrite and appears in extracellular space in the region of the dendrite, and we call that a current source. So we have a combination of a current sink and a current source, and you can see that the current in extracellular space is flowing from the current source to the current sink. In our simple equivalent circuit model, you can see that the voltages are more positive in regions of extracellular space corresponding to current sources, and more negative in regions corresponding to the current sink. In extracellular space, current is flowing from the region of the dendrite to the region of the soma; the voltage here is more positive, the voltage here is more negative. Now, let's look at the relationship between the extracellular voltage change and the intracellular voltage change. Let's just write down the equation for the voltage drop across this extracellular space. That voltage drop is just the external current times some effective extracellular resistance. The external current is the sum of a capacitive current and a resistive current through the membrane of the dendrite. So we can now write down an expression for these two currents as a function of membrane potential. You recall from earlier lectures that the capacitive current is just given by C dV/dt, and the membrane ionic current is given by some conductance, the membrane ionic conductance, times the driving potential. Now, in an action potential, the voltage changes very rapidly, so dV/dt is large.
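The relationships just described can be collected into equations (a restatement of the steps above; here C_m is the membrane capacitance, g_m the membrane ionic conductance, and E_m the reversal potential of the ionic current):

```latex
\Delta V_{\mathrm{ext}} = I_{\mathrm{ext}}\, R_{\mathrm{ext}},
\qquad
I_{\mathrm{ext}} = \underbrace{C_m \frac{dV_m}{dt}}_{\text{capacitive}}
                 \;+\; \underbrace{g_m \left(V_m - E_m\right)}_{\text{ionic}}
```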
And, in fact, the capacitive term generally dominates over the ionic term during an action potential. So what we find is that the voltage change in extracellular space is proportional to the derivative of the membrane potential. And we can see that this is the case. In earlier experiments from Gyorgy Buzsaki's lab, they were able to record simultaneously from a cell intracellularly-- that's shown in this trace here-- and the extracellular voltage recorded from a microwire electrode placed near the soma. The extracellular recording is shown here, and you can see that the extracellular signal is actually quite close to the derivative of the intracellular signal. Now, why is this voltage negative? The voltage near the soma goes negative because, during the rising phase of the action potential, we have sodium ions flowing into the soma from extracellular space. Current is flowing out of the dendrite out here and traveling back through extracellular space to the soma. So, again, the soma is acting like a current sink, and so the voltage is going negative near the soma. So now let's take a look at what happens when we have synaptic input onto a neuron. It turns out you can also observe extracellular signals that result not from action potentials but from synaptic inputs. So let's take our example neuron and attach an excitatory synapse to the dendrite. In this case, when the synapse is activated-- if the cell is hyperpolarized, when a neurotransmitter is released onto the postsynaptic compartment of the synapse-- it turns on conductances which allow current to flow into the dendrite. That current flows down the dendrite to the soma, flows back out capacitively through the soma, and returns to the synapse through extracellular space. So you can see that in this case, the region near the synapse looks like a current sink as charges are flowing into the cell.
The region near the soma looks like a current source as current flows out of the soma and back to the current sink. You can see that, in this case, near the synapse you have positive charges flowing into the cell, and that corresponds to a decrease in the extracellular voltage. Near the soma, you have positive charges flowing into extracellular space from the inside of the cell, and that corresponds to an increase in the extracellular voltage. OK, things look different when you have an inhibitory synapse. In the case of an inhibitory synapse, for example if GABA is released onto this GABAergic synapse, it opens chloride channels. Chloride is a negative ion that flows into the cell, but that corresponds to an outward-going current. So now we have the region around the inhibitory synapse looking like a current source. Current flows through extracellular space to the soma, where the soma now looks like a current sink. And so when inhibitory inputs near the dendrites are activated, you actually have an increase in the voltage of extracellular space, while near the soma you have a decrease in the voltage. One of the important things to consider is that, in the discussion we've just had, we've been thinking about an individual neuron-- how current sources and current sinks appear as a result of synaptic activity and action potentials around a single neuron. But in the brain, neurons are not isolated; they sit in the tissue next to many other neurons that are also receiving inputs and spiking. So it turns out that the types of extracellular electrical signals that you see in the tissue depend very much on how different cells are organized spatially. In some types of tissue, for example in the hippocampus or in the cortex, cell bodies are lumped together in a layer, and the dendrites are collinear and are organized in a different layer. This is called a laminar morphology. In this case, many of the synaptic inputs arrive onto the dendrites, and currents then flow into the somata.
And in this case, these extracellular currents reinforce each other, and they sum together to produce very large extracellular voltage changes. So now let's turn to the question of how one actually records neural activity in the brain. Let's go back to our experimental setup, where we have an electrode placed near the soma of a neuron. That electrode is connected to a differential amplifier or an instrumentation amplifier, giving us the voltage difference between the extracellular space near the soma and the extracellular space somewhere far away. This amplifier measures that voltage difference and multiplies it by a gain of typically, let's say, a couple hundred, or 1,000, or even 10,000. We then take the output of that amplifier and put it into an analog-to-digital converter, which measures the voltage at regularly spaced samples and stores those voltages digitally in computer memory. In analog-to-digital converters, the voltage is sampled at regular intervals, delta t, corresponding to a sampling rate, a frequency that's given by 1 over delta t. The rate at which the samples are acquired is referred to as the sampling rate or sampling frequency. So if we were to record the extracellular voltage in a region of the hippocampus, we might see a signal that looks very much like this. These are data from Matt Wilson's lab. The signal has a number of features. You can see that there is a slow modulation of the signal that corresponds to the theta rhythm in a rat. That slow modulation of the voltage is actually caused by synaptic currents in the hippocampus. You can also see that there are very fast deflections of the voltage corresponding to action potentials. So, once again, this is about a second's worth of data of extracellular recording from rat hippocampus, and we can see both slow and very fast components of this signal.
The action potentials, you can see, are very brief. They typically last about a millisecond. If we were to look at the amount of power at different frequencies in this signal, using a technique called measuring the power spectrum, which we will cover in more detail later in class, you can see that there is a lot of power at low frequencies and much less power at high frequencies. So this is a representation of the amount of power at different frequencies. We can actually separate the fast and slow components of the signal using a technique called high pass and low pass filtering. So what we're going to do is develop this technique, which allows us to remove high frequency components from the original signal to reveal just the low-frequency structure within the signal. How do we do that? Basically, we're going to start by using a technique of low pass filtering that works by convolving this signal with a kernel. What that kernel does is locally average the signal over short periods of time. So we take the signal, place the kernel over the signal at different points in time, multiply the kernel by the signal at each of those points in time, and plot the result down here. Let me explain what that looks like. Let's say this is the original signal. You can see that it's a little bit noisy. It's fluctuating between 1 and 3: 1, 3, 1, 3. And then, at a certain point here, it jumps to a higher value: 5, 3, 5, 3, 5. So, intuitively, we would expect the low pass filtered version of this signal to be low here, and then jump up to a higher value here. Now, here's our kernel. This is a representation of a kernel that looks like this. The kernel is 0, 0.5, 0.5, and 0. Basically, what we're going to do is place the kernel at some point in time over our signal and multiply the kernel by the signal, time element by time element.
So you can see that the product of the kernel with the signal is 0 here. 0 times 1 is 0, 0.5 times 3 is 1.5, 0.5 times 1 is 0.5, and 0 times 3 is 0. That's the product of the kernel and the signal within that time window. Then we sum up the elements of that product: 0 plus 1.5 plus 0.5 plus 0 is 2. I'm going to write down that sum at this point in the filtered output. Now, what we're going to do is just slide the kernel over by one element and repeat that. And I also added the earlier values of the output here. So now we slide the kernel over by one and repeat. We get 0, 0.5, 1.5, and 0. Sum that up, and we get 2. And we write down that filtered output down here. So you can see that the low pass filtered result of this signal is 2 everywhere up to here. Now, if we slide the kernel over one more and multiply, you can see we get a 0, a 1.5, here 0.5 times 5-- that's 2.5-- and 0 times 3 is 0. The sum of those four elements is 4. Write down a 4 here. And if we keep doing that, all of the rest of those values are 4, which you can verify. So you can see that the low pass filtered version of this signal, filtered by this kernel, is 2 up to this point, and then it jumps up to 4, which is consistent with our intuition about what low pass filtering should do. So that was low pass filtering. Now, how would we high pass filter? How would we extract these high frequency components from the data and throw away the low frequency components? Well, one way to think about this is that we can get rid of the low frequency components simply by subtracting off the low pass signal that we just calculated. How do we do that? We're going to use this kernel here to do our high pass filtering. And notice that this kernel has two components. It has a square component that's negative, and it has a delta function at 0.
So you can see that this negative component of the high pass filter looks like the negative of our low pass filter. If we were to take this kernel and instead use a kernel that was the negative of it, like this, then what we would get is the negative of the low pass filtered signal. So this part right here, this part of the high pass filter, is producing the negative of our low pass filter. Now let's take a look at this component here. This component is a delta function with a value of 1 at the peak. You can see that if you convolve that kernel with your original signal, it just gives you back the same original signal. So this component simply gives us back the original signal, and this component subtracts off the low pass version of the signal. What we're left with is the high pass version. Let's look at what that does to the spectrum of our signal. You can see that, in the high passed signal, we've gotten rid of all of the power at low frequencies, and we're left with just the high frequency part of our signal. And if we go back and look at the spectrum of the low passed signal, we can see that the low passed output retains all of the power at low frequencies and gets rid of the high frequency components; the low passed signal has no power at high frequencies. So once we have extracted the high pass filtered version of the signal, you can see that what we're left with are action potentials. So what we're going to talk about now is how you actually extract these action potentials and figure out when action potentials occurred during behavior. The next thing we're going to do is spike detection. Basically, the best way to detect spikes is to plot the signal and figure out what amplitude the spikes are. So look at the voltage of the peak of the spikes.
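The low-pass and high-pass recipes described above can be sketched in Python with numpy (the lecture mentions MATLAB; this is just an illustrative translation using the same toy signal and kernel from the worked example):

```python
import numpy as np

# Toy signal from the worked example: fluctuates around 2, then around 4.
x = np.array([1, 3, 1, 3, 1, 3, 5, 3, 5, 3, 5], dtype=float)

# Low-pass kernel [0, 0.5, 0.5, 0]: a local average with unit area.
k_low = np.array([0.0, 0.5, 0.5, 0.0])

# np.convolve slides the (flipped) kernel along the signal;
# 'valid' keeps only windows that fully overlap the signal.
low = np.convolve(x, k_low, mode='valid')
print(low)  # [2. 2. 2. 2. 4. 4. 4. 4.]

# High-pass = original signal minus its low-passed version,
# equivalent to convolving with (delta function - low-pass kernel).
# 'same' mode keeps the output aligned with the input.
high = x - np.convolve(x, k_low, mode='same')
```

Because the kernel has unit area, the low-passed output tracks the local mean of the signal, and the high-passed output is just the original minus that local mean.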
Then set a threshold that is consistently crossed by that peak in the spike waveform. So here are the individual samples associated with one action potential. You can see that a voltage right about here will reliably detect these spikes. And then, basically, you write one line of MATLAB code that detects where this voltage crossed from being below your threshold on one sample to being above your threshold on the next. Once we detect the time at which that threshold crossing occurred, we can write down that threshold crossing time for each spike in our waveform. For the spike here, we write down that time; that's t1. If you have another threshold crossing here, you write that time down: t2. And you collect all of these spike times into an array. So we can now represent the spike train as a list of spike times. We're going to represent this as a variable t with an index i, where i goes from 1 to N. Now, we can also think of spike trains as a sum of delta functions. You may remember that a delta function is 0 everywhere except at the time where its argument is 0. So a delta function of t minus a spike time is 0 everywhere except when t is equal to that spike time. At that time, the delta function has a non-zero value. So we can write down this spike train as a sum of delta functions. The spike train as a function of time is delta of t minus t1, corresponding to the first spike, plus another delta function at time t2, corresponding to that spike, and so on. So we can now write down mathematically our spike train as a sum of delta functions, one at each spike time. We can also think of a spike train as being the derivative of a spike count function. A spike count function reflects the number of spikes that have occurred at times less than the time in the argument. So if this is our spike train, then prior to the first spike, there will be zero spikes in our spike count function.
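The one-line threshold-crossing detection mentioned above might look like this in Python (a sketch; the toy trace `v`, the sampling rate, and the threshold value are all made up for illustration):

```python
import numpy as np

fs = 30000.0                # sampling rate in Hz (an assumed value)
v = np.zeros(100)           # toy high-pass-filtered voltage trace
v[[20, 55, 80]] = 1.0       # three fake "spikes" as single large samples

threshold = 0.5             # set by eye from the spike peak amplitudes

# Indices where v goes from below threshold on one sample
# to at-or-above threshold on the next sample.
crossings = np.where((v[:-1] < threshold) & (v[1:] >= threshold))[0] + 1

# Convert sample indices to spike times t_i (in seconds).
spike_times = crossings / fs
print(crossings)  # [20 55 80]
```

The `+ 1` makes the reported index the first sample at or above threshold, matching the crossing time described in the lecture.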
After the first spike, we'll have a spike count of 1. And you can see that you get this stairstep increasing to the right for each spike in the spike train. Since the integral of this spike train over time has units of spikes, you can see that the spike train, this rho of t, has units of spikes per second. So now let's turn to the question of what we can extract from neural activity by measuring spike trains. One of the simplest properties that we find about cells in the brain is that they usually have a firing rate that depends on something, either a motor behavior or a sensory stimulus. So, for example, simple cells in primary visual cortex-- of the cat, in this case-- are responsive to the orientation of stimuli in the visual field. This shows a bar of light-- it actually represents a bright bar of light on a black background. And you can see that if you move this bar in space at this orientation, this neuron doesn't respond. But if we rotate this bar of light and move it in this direction, now the cell responds with a high firing rate. So if we quantify the firing rate as a function of the orientation of this bar of light, you can see that the neuron has a higher firing rate for some orientations than for others. That property of being selective for particular stimulus features is called tuning, and the measurement of firing rate as a function of some parameter is called a tuning curve. So you can see that in primary visual cortex, neurons are tuned to orientation. And the tuning curves of neurons in primary visual cortex often have this characteristic of being highly responsive at particular orientations, and then smoothly dropping off to being unresponsive at other orientations. Similar to the way that neurons in visual cortex are tuned to orientation, neurons in auditory cortex are tuned to different frequencies.
So, for example, in the auditory system, when sound impinges on the ear, it transmits vibrations into the cochlea. Those vibrations enter the cochlea and propagate along the basilar membrane, where vibrations of particular frequencies are amplified at particular locations, and the membrane is unresponsive to vibrations of other frequencies. So if you record from neurons in auditory cortex and you play a tone-- this shows a plot of an auditory stimulus at some frequency-- you can see that this neuron spiked robustly in response to it. Individual neurons are tuned to respond robustly at particular frequencies but not at other frequencies. And you can see that different neurons are selective for different frequencies. So this curve represents one neuron, and that neuron is most active for frequencies a little bit above 5 kilohertz, whereas other neurons are most responsive to frequencies around 6 kilohertz, and so on. So we've now seen an example of how firing rates of neurons in sensory cortex are sensitive to particular parameters of the sensory stimulus. This property of tuning applies not only to sensory neurons but also to neurons in motor cortex. This shows the results of a classic experiment analyzing the spiking activity of neurons in motor cortex during arm movements in different directions. This shows a manipulandum. It's basically a little arm with a handle that the monkey can grab and move around. The monkey's task is to hold this arm at a central location. Then a light comes on, and the monkey has to move the manipulandum from the center out to the location of the light that turned on. The experiment is repeated multiple times, with the monkey moving in different directions. You can see here the trajectories that the monkey went through as it moved from the center location out to these eight different target locations.
In this experiment, neurons were recorded at different regions of motor cortex, and the resulting spike trains were plotted. In this figure, you can see what are called raster plots. For example, for movements in this direction, five trials were collected together, so each row here corresponds to the spikes that a neuron generated during movements from the center to this direction. You can see that the neuron became active just after the cue indicating the movement direction was turned on, and prior to the onset of the movement, which is indicated here. You can see that the neuron responded robustly on every single trial to movements in this direction. But the neuron responded quite differently for other directions. You can see that the response to downward movement was quite weak; there was essentially no change in the firing rate. And you can see that movements in other directions, for example to the right, were associated with an actual suppression of the spiking activity. I should just point out briefly that these spikes here and here, before and after the onset of the trial, are spontaneous spikes that occur continuously even when the monkey isn't engaged in moving the handle. So we have a spontaneous firing rate, the trial initiates an increase in firing rate, and then there is a recovery to baseline. So you can see that these motor cortical neurons exhibit tuning for particular movement directions. And we can quantify this now by counting the number of spikes in this movement interval and plotting that as a function of the angle of the movement. When you do that, you can see that movements in particular directions resulted in high firing rates, whereas movements in other directions resulted in lower firing rates.
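That quantification-- counting spikes in the movement interval and plotting the result against movement angle-- can be sketched as follows (all counts and the interval length here are invented for illustration, not data from the experiment):

```python
import numpy as np

T = 0.5   # duration of the movement interval in seconds (assumed)

# Made-up spike counts: rows = 8 movement directions, columns = 5 trials each.
counts = np.array([
    [12, 10, 11, 13, 12],   # 0 degrees
    [18, 20, 19, 17, 21],   # 45 degrees
    [25, 24, 26, 23, 25],   # 90 degrees (preferred direction in this toy data)
    [19, 18, 20, 17, 19],
    [11, 12, 10, 13, 11],
    [ 6,  5,  7,  6,  5],
    [ 3,  2,  4,  3,  2],
    [ 7,  6,  8,  7,  6],
], dtype=float)

angles = np.arange(8) * 45            # movement directions in degrees
rates = counts.mean(axis=1) / T       # tuning curve: mean count / interval
preferred = angles[np.argmax(rates)]  # direction with the highest rate
print(preferred)  # 90
```

Plotting `rates` against `angles` gives the tuning curve: high firing rates around the preferred direction, lower rates elsewhere.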
In order to do this kind of quantification, we need to understand a few different methods for how we can actually quantify firing rates. The simplest thing that I just described is to count the number of spikes in some particular interval that you decide is relevant for your experiment. So, for example, let's say we have a stimulus that turns on at a particular time, stays on, and then turns off at some later time. On different presentations of that stimulus, you can see that the neuron spikes in a somewhat different way. But for this example neuron here, which I just made up, you can see that, in general, there is an increase in the firing rate after the onset of the stimulus. So we can quantify that by simply setting a relevant time window and counting the number of spikes in that time window. We're going to set a time window T from the onset of the stimulus to the offset of the stimulus, and simply count the number of spikes on each trial. So N sub i is the number of spikes that occurred on the ith trial. The brackets here represent the average over the index i, which is the trial number. So we're going to count the number of spikes on the ith trial and average that over trials. Then, once we have that average count, we simply divide by the interval T that we're counting over, and that gives us a rate. Now, you can see that the firing rate is not constant. In this little toy example, the way I've drawn it, the spike rate increases at stimulus onset and then decays away, which is very typical. So you're throwing away a lot of information. Now, the way we just quantified that firing rate, we simply counted spikes over the whole stimulus presentation.
But if we want more temporal resolution in how we quantify the firing rate, we can break the period of interest up into smaller pieces, count the average number of spikes in each one of these smaller bins, and divide by the interval of these smaller bins. So, for example, we can count the number of spikes on trial i in bin j. We can average the number of spikes in the jth bin over all trials-- that's just the average number of spikes in this first bin, for example-- and divide by the interval delta T. So now you have a rate with finer temporal resolution. For example, if you look at the analysis that Georgopoulos did in that 1982 paper for the arm movements of the monkey, they broke the trials up into small bins of about 10 or 20 milliseconds each, counted the number of spikes in each one of those bins, divided by the bin width, and computed the firing rate, in spikes per second, in each one of those bins. You can see that they did one other thing here: they plotted the average firing rate during each bin after subtracting off the firing rate in the pre-stimulus period. That is, very typically, what you'll see in a neuroscience paper describing the response of neurons to a stimulus or during a motor task. We can use similar tricks to estimate the firing rate of neurons in a continuous spike train. Not all neuroscience experiments are done in trials like this, where we present a stimulus and then turn it off. Some experiments are done, for example, where an animal might be watching a movie in which stimuli are presented continuously, so you don't have this clear trial structure. We can also quantify firing rates in cases where we just have a continuous spike train without trials. And we can do that, again, by taking that continuous spike train and breaking it up into intervals.
So there will be N sub j spikes in bin j, and the bin has some width delta T. Now, one problem that you can see immediately is that the answer you get will depend on where you place the boundaries of the bins. If you take all these bins and shift them over by, let's say, delta T over 2, you can see that you can get a completely different set of firing rates for the same spike train. So there's not a unique answer. Another way to do this is to quantify firing rates in bins that are shifted to all possible times. So, for example, we can take a window that is 0 everywhere except within this interval. One way to do this would simply be to count the number of spikes within that window, then shift the window over and count the number of spikes, and shift the window over and count the number of spikes again. Now you get a count of the number of spikes in each of those windows of width delta T, shifted in small time steps. But how can we describe that mathematically? Well, you may recall that this, in fact, looks a lot like a convolution. We're going to take this square kernel, multiply it by the spike train, and take the integral over that product. And you can see that that's basically going to give you the number of spikes within that window from t1 to t2. So, for example, in this case we're going to use a square window. The firing rate is just going to be given by the number of spikes divided by the width of the window. And that's just 1 over delta T times the integral of the spike train from t minus delta T over 2 to t plus delta T over 2, sliding that gradually over the spike train. So we're effectively convolving our spike train with this rectangular kernel. And that's what it looks like mathematically: the firing rate is the convolution of the spike train with this smoothing kernel.
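Both estimates described above-- fixed bins, and the sliding square window implemented as a convolution-- can be sketched like this (a Python sketch with made-up spike times rather than the course's MATLAB):

```python
import numpy as np

dt = 0.001                   # time step: 1 ms resolution
t = np.arange(0, 1.0, dt)    # one second of time
spike_times = np.array([0.1, 0.12, 0.15, 0.5, 0.52, 0.9])

# Discretized sum of delta functions: each spike contributes area 1,
# so each spike's bin holds 1/dt (units of spikes per second).
rho = np.zeros_like(t)
idx = np.round(spike_times / dt).astype(int)
rho[idx] = 1.0 / dt

# (1) Fixed bins: count spikes in 100-ms bins, divide by bin width.
bin_width = 0.1
edges = np.linspace(0, 1.0, 11)
counts, _ = np.histogram(spike_times, bins=edges)
rate_binned = counts / bin_width   # spikes per second in each bin

# (2) Sliding square window: convolve rho with a square kernel of
# width delta_T and unit area (height 1/delta_T).
delta_T = 0.1
n = int(delta_T / dt)
kernel = np.ones(n) / n            # discrete kernel with unit total weight
rate_sliding = np.convolve(rho, kernel, mode='same')

# Both estimates conserve the total spike count (6 here),
# since the area under the rate curve equals the number of spikes.
```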
So, as I mentioned, that's just a convolution; we're convolving our spike train with this square kernel of this width. That is the mathematical expression for a convolution: the firing rate is just the convolution of this kernel with the spike train. And, again, the kernel is 0 everywhere except within this window, from minus delta T over 2 to plus delta T over 2, and it has a height of 1 over delta T such that we make the area 1. Notationally, we often write that as a star: rho star K. Now, in this case we were convolving our spike train with a square kernel. The problem with a square kernel is that the estimate changes abruptly every time a spike comes into the window or drops out of the window. A more common way of quantifying firing rates is to convolve the spike train with a Gaussian kernel. So instead of using a square kernel here, we're going to use a Gaussian kernel that looks like this. The kernel is just defined as this Gaussian function, and it's normalized by 1 over sigma root 2 pi; this normalization gives the kernel an area of 1. So it's still essentially counting the number of spikes within a window-- it's again a weighted average of the number of spikes divided by the width of the kernel-- but there's less weight at the edges. It has smoother edges, and that gives you a less steppy-looking result. Let me show you what that looks like. Here's a spike train. If we take fixed bins and compute the firing rate as a function of time, it looks like this; that binned estimate of the firing rate depends very much on where exactly the windows are placed. On the other hand, this shows the firing rate estimated with a square kernel of the same width. You can see that it gives a much smoother representation of the firing rate varying over time. And if you take that same spike train and you convolve it with a Gaussian, then you get a function that looks like this.
And we think of this as perhaps better representing the underlying process in the brain that produces this time-varying firing rate, this time-varying spike train, than either of these two. You don't really think that the firing rate of this neuron is jumping around in rectangular steps. We think of it as being some kind of smooth, continuous underlying function that represents maybe the input to that neuron. You'll see that in these estimates of firing rate, we had to actually choose a width for our window. We had to choose a bin size. We had to choose the width of our square kernel here, and we had to choose the width of our Gaussian kernel here. And you can see that the answer, actually, that you get for firing rate as a function of time depends very strongly on the size or the width of that kernel that you choose to use. And now we smooth it with a Gaussian kernel that has a width, a sigma, of 4 milliseconds. So that's the standard deviation of the Gaussian that we use to smooth the spike train. And you can see that what you get is a very peaky estimate of the firing rate. So that spike produces this little peak. But the question is, what's the right answer? How should you actually choose the size of the kernel to use to estimate firing rate for your experiment? The answer is that it really depends on the experiment. So neural spike trains have widely different temporal structure. So, first of all, neuronal responses aren't constant. Firing rates are not a constant thing. Neural firing rates are constantly changing depending on the type of stimulus that you use. And different types of neurons have different temporal structure in response to the same stimuli. So, for example, this shows the responses of four different neurons in rat vibrissa cortex to whisker deflections.
So this shows a raster, a histogram of firing rate as a function of time for one neuron in response to a deflection of one of the whiskers right here. So the whisker is at rest, deflected, and then relaxed back to its original position. You can see that this neuron shows a little burst of activity at the onset of the deflection, and it's fairly persistent throughout the deflection. Here's another neuron during a deflection. Increased activity at the onset of the deflection, followed by a fairly persistent increased spiking rate throughout the deflection. Now, a different neuron shows quite a different behavior. So this neuron shows a brief increase in firing rate just at the time of the deflection, a little increase of firing rate when the deflection is removed, and essentially no activity that persists during this constant part of the deflection. Here's another neuron. This neuron was primarily active at the onset of the deflection. Here's another neuron that was silent at the onset of the deflection, and then gave a robust response. The changes in neuronal firing rates are not of one particular timescale. Different neurons have different time scales on which the changes in their firing rate are important. But let's come back to our example of our auditory neuron. Here's the spiking of an auditory neuron during the presentation of an auditory stimulus, shown right here. This neuron-- in fact, you can't see it here, but this neuron shows spiking at a particular phase of the auditory stimulus, much like the auditory neurons that we discussed for sound localization in the owl. So if you plot the firing rate of this neuron as a function of time during the presentation of this stimulus, you can see that the firing rates are rapidly modulated in time. You can see that, at particular phases of this stimulus, the firing rate is very high. And then just a millisecond later, the firing rate is very low.
So in this case, you can see that the spikes are locked temporally to particular phases of the stimulus. That's reflected when you make plots of firing rate as a function of time. It's reflected in these rapid modulations. The neurons are firing at a particular time. So this corresponds to a case in which we would say that the spike timing is precisely controlled by the stimulus. It's, in many ways, more natural here to think about spike times as being controlled rather than spike firing rate being modulated. So you can see here that sensory neurons can spike more in response to some stimuli than others. Motor neurons spike more during some behaviors than others. We can think about information being carried in the number of spikes that are generated by a stimulus or during a motor act. Now, all neurons exhibit temporal modulation in their firing rates. They fire more during a movement or after the presentation of a stimulus. Sometimes that information is carried by slow modulations in the firing rate. For example, the response to different oriented bars is carried in the average firing rate of the neuron during that particular stimulus. And we can refer to that kind of code and that kind of representation of information as rate coding. We can say that information about the stimulus is carried in the rate, the firing rate of the neurons. But in other cases, like the auditory neuron that we just saw, we can see that information is carried by fast modulations, rapid changes in spike probability. And in that case, we often say that information is coded by spike timing. So a common question that you often hear about neurons is whether they're coding information using firing rate or temporal coding, rate coding versus temporal coding. Really, this is a false dichotomy. You shouldn't think about neurons coding information one way or the other. These are really just two limits of a continuum.
The brain uses information at fast time scales as well as slow time scales. And how do we determine what's important for the brain? What time scales are important? The answer to that question really comes from understanding the way spike trains are read out by the neurons that they project to. What time scale is relevant for the computation that's being done in the system that you're studying? What are the biophysical mechanisms that those spikes act on? Once you understand those questions, you're at the appropriate level of analysis to think about how spike trains are important for sensory coding and for motor behavior.
MIT 9.40 Introduction to Neural Computation, Spring 2018
Lecture 17: Principal Components Analysis
MICHALE FEE: OK, let's go ahead and get started. So today we're turning to a new topic, basically focused on principal components analysis, which is a very cool way of analyzing high-dimensional data. Along the way, we're going to learn a little bit more linear algebra. So today, I'm going to talk to you about eigenvectors and eigenvalues, which are among the most fundamental concepts in linear algebra. They're extremely important and widely applicable to a lot of different things. So eigenvalues and eigenvectors are important for everything from understanding energy levels in quantum mechanics, to understanding the vibrational modes of a musical instrument, to analyzing the dynamics of differential equations of the sort that describe neural circuits in the brain, and also for analyzing data and doing dimensionality reduction. So understanding eigenvectors and eigenvalues is very important for doing things like principal components analysis. So along the way, we're going to talk a little bit more about variance. We're going to extend the notion of variance that we're all familiar with in one dimension, like the width of a Gaussian or the width of a distribution of data, to the case of multivariate Gaussian distributions-- which is basically the same thing as high-dimensional data. We're going to talk about how to compute a covariance matrix from data, which describes what the variance in the different dimensions of the data is and how those different dimensions are correlated with each other. And finally, we'll go through how to actually implement principal components analysis, which is useful for a huge number of things. I'll come back to many of the different applications of principal components analysis at the end. But I just want to mention that it's very commonly used in understanding high-dimensional data and neural circuits.
So it's a very important way of describing how the state of the brain evolves as a function of time. So nowadays, you can record from hundreds or even thousands or tens of thousands of neurons simultaneously. And if you just look at all that data, it just looks like a complete mess. But somehow, underneath all of that, the circuitry in the brain is going through discrete trajectories in some low-dimensional space within that high-dimensional mess of data. So our brains have something like 100 billion neurons in them-- about the same as the number of stars in our galaxy-- and yet, somehow all of those different neurons communicate with each other in a way that constrains the state of the brain to evolve along the low-dimensional trajectories that are our thoughts and perceptions. And so it's important to be able to visualize those trajectories in order to understand how that machine is working. OK, and then one more comment about principal components analysis: it's often not actually the best way of doing this kind of dimensionality reduction. But the basic idea of how principal components analysis works is so fundamental to all of the other techniques. It's sort of the base on which all of those other techniques are built conceptually. So that's why we're going to spend a lot of time talking about this. OK, so let's start with eigenvectors and eigenvalues. So remember, we've been talking about the idea that matrix multiplication performs a transformation. So we can have a vector x that we multiply by a matrix A. It transforms that set of vectors x into some other set of vectors y. And we can go from y back to x by multiplying by A inverse-- if the determinant of that matrix A is not equal to zero. So we've talked about a number of different kinds of matrix transformations by introducing perturbations on the identity matrix.
So if we have diagonal matrices, where one of the elements is slightly larger than 1 and the other diagonal element is equal to 1, you get a stretch of this set of input vectors along the x-axis. Now, that process of stretching vectors along a particular direction has built into it the idea that there are special directions in this matrix transformation. So what do I mean by that? So most of these vectors here-- each one of these red dots is one of those x's, one of those initial vectors. If you look at the transformation from x to y-- so that's the x that we put into this matrix transformation-- when we multiply to get y, we see that that vector has been stretched along the x direction. So for most of these vectors, that stretch involves a change in the direction of the vector. Going from x to y means that the vector has been rotated. So you can see that the green vector is at a different angle than the red vector. So there's been a rotation, as well as a stretch. So you can see that's true for that vector, that vector, and so on. So you can see, though, that there are other directions that are not rotated. So here's another-- I just drew that same picture over again. But now, let's look at this particular vector, this particular red vector. You can see that when that red vector is stretched by this matrix, it's not rotated. It's simply scaled. Same for this vector right here. That vector is not rotated. It's just scaled, in this case, by 1. But let's take a look at this other transformation. So this transformation produces a stretch in the y direction and a compression in the x direction. So I'm just showing you a subset of those vectors now. You can see that, again, this vector is rotated by that transformation. This vector is rotated by that transformation. But other vectors are not rotated. So again, this vector is compressed. It's simply scaled, but it's not rotated. And this vector is stretched. It's scaled but not rotated. Does that make sense?
OK, so these transformations here are given by diagonal matrices where the off-diagonal elements are zero. And the diagonal elements are just some constant. So for all diagonal matrices, these special directions, the directions in which vectors are simply scaled but not rotated by that transformation, are the vectors along the axes-- along the x-axis or the y-axis. And you can see that by taking this matrix A, this general diagonal matrix, and multiplying it by a vector along the x-axis; you can see that that is just a constant, lambda 1, times that vector. So we take this times this, plus this times this, which is equal to lambda 1. This times this plus this times this is equal to zero. So you can see that A times that vector in the x direction is simply a scaled version of the vector in the x direction. And the scaling factor is simply the constant that's on the diagonal. So we can write this in matrix notation as this lambda, this stretch matrix, this diagonal matrix, times a unit vector in the x direction-- that's the first standard basis vector-- is equal to lambda 1 times a vector in the x direction. And if we do that same multiplication for a vector in the y direction, we see that we get a constant times that vector in the y direction. So we have another equation. So this particular matrix, this diagonal matrix, has two vectors that are in special directions in the sense that they aren't rotated. They're just stretched. So diagonal matrices have the property that they map any vector parallel to the standard basis into another vector along the standard basis. So that now is a general n-dimensional diagonal matrix with these lambdas, which are just scalar constants along the diagonal.
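A quick numerical check of the diagonal-matrix case, as a NumPy sketch with arbitrary example values on the diagonal:

```python
import numpy as np

# A diagonal matrix with lambda_1 = 2 and lambda_2 = 0.5 on the diagonal
# (made-up values for illustration).
A = np.diag([2.0, 0.5])

e1 = np.array([1.0, 0.0])                   # standard basis vectors
e2 = np.array([0.0, 1.0])

# The standard basis vectors are just scaled, not rotated:
# A e1 = lambda_1 e1, and A e2 = lambda_2 e2.
y1 = A @ e1                                 # [2.0, 0.0]
y2 = A @ e2                                 # [0.0, 0.5]

# A generic vector, by contrast, changes direction as well as length.
x = np.array([1.0, 1.0])
y = A @ x                                   # [2.0, 0.5], not parallel to x
```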
And there are n equations that look like this that say that this matrix times a vector in the direction of a standard basis vector is equal to a constant times that vector in the standard basis direction. Any questions about that? Everything else just flows from this very easily. So if you have any questions about that, just ask. OK, that equation is called the eigenvalue equation. And it describes a property of this matrix lambda. So any vector v that's mapped by a matrix A onto a parallel vector lambda v is called an eigenvector of this matrix. So we're going to generalize now from diagonal matrices that look like this to an arbitrary matrix A. So the statement is that any vector, that when you multiply it by a matrix A that gets transformed into a vector parallel to v, it's called an eigenvector of A. And the one vector that this is true for that isn't called an eigenvector is the zero vector because you can see that a zero vector here times any matrix is equal to zero. OK, so we exclude the zero vector. We don't call the zero vector an eigenvector. So typically a matrix, an n-dimensional matrix, has n eigenvectors and n eigenvalues. Oh, and I forgot to say that the scale factor lambda is called the eigenvalue associated with that vector v. So now, let's take a look at a matrix that's a little more complicated than our diagonal matrix. Let's take one of these rotated stretch matrices. So remember, in the last class, we built a matrix like this that produces a stretch of a factor of 2 along a 45-degree axis. And we built that matrix by multiplying it together by basically taking this set of vectors, rotating them, stretching them, and then rotating them back. So we did that by three separate transformations that we applied successively. And we did that by multiplying phi transpose lambda and then phi. So let's see what the special directions are for this matrix transformation. 
So you can see that most of these vectors that we've multiplied by this matrix get rotated. And you can see that even vectors along the standard basis directions get rotated. So what are the special directions for this matrix? Well, they're going to be these vectors right here. So this vector along this 45-degree line gets transformed. It's not rotated. It gets stretched by a factor of 1. And this vector here gets stretched. OK, so you can see that this matrix has eigenvectors that are along this 45-degree axis and that 45-degree axis. So in general, let's calculate what are the eigenvectors and eigenvalues for a general rotated transformation matrix. So let's do that. Let's take this matrix A and multiply it by a vector x. And we're going to ask what vectors x satisfy the property that, when they're multiplied by A, they're equal to a constant times x. So we're going to ask what are the eigenvectors of this matrix A that we've constructed in this form? So what we're going to do is we're going to replace A with this product of matrices, of three matrices. We're going to multiply this equation on both sides by phi transpose on the left side, by phi transpose. OK, so phi transpose times this is equal to the eigenvalue times phi transpose x. What happens here? Remember phi is a rotation matrix. What is phi transpose phi? Anybody remember? Good. For a rotation matrix, the transpose of the matrix is its inverse. And so phi transpose phi is just equal to the identity matrix. So that goes away. And we're left with lambda phi transpose x equals the eigenvalue times phi transpose x. So remember that we just wrote down that if we have a diagonal matrix lambda, the eigenvectors are the standard basis vectors. So what does that mean? If we look at this equation here, and we look at this equation here, it seems like phi transpose x is an eigenvector of this equation as long as phi transpose x is equal to one of the standard basis vectors.
Does that make sense? So we know this solution is satisfied by phi transpose x equal to one of the standard basis vectors. Does that make sense? So if we replace phi transpose x with one of the standard basis vectors, then that solves this equation. So what that means is that the solution to this eigenvalue equation is that the eigenvalues of A are simply the diagonal elements of this lambda here. And the eigenvectors are just x, where x is equal to phi times the standard basis vectors. We just solve for x by multiplying both sides by phi transpose inverse. What's phi transpose inverse? phi. So we multiply both sides by phi. This becomes the identity matrix. And we have x equals phi times this set of standard basis vectors. Any questions about that? That probably went by pretty fast. But does everyone believe this? We went through that. We went through both examples of how this equation is true for the case where lambda is a diagonal matrix and the e's are the standard basis vectors. And if we solve for the eigenvectors of this equation where A has this form of phi lambda phi transpose, you can see that the eigenvectors are given by this matrix times a standard basis vector. So phi times any standard basis vector will give you an eigenvector of this equation here. Let's push on. And the eigenvalues are just these diagonal elements of this lambda. What are these? So now, we're going to figure out what these things are, and how to just see what they are. These eigenvectors here are given by phi times a standard basis vector. So phi is a rotation matrix, right? So phi times a standard basis vector is just what? It's just a standard basis vector, rotated. So let's just solve for these two x's. We're going to take phi, which was this 45-degree rotation matrix, and we're going to multiply it by the standard basis vector in the x direction. So what is that? Just multiply this out. You'll see that this is just a vector along a 45-degree line.
So this eigenvector, this first eigenvector here, is just a vector on the 45-degree line, 1 over root 2. It's a unit vector. That's why it's got the 1 over root 2 in it. The second eigenvector is just phi times e2. So it's a rotated version of the y standard basis vector, which is 1 over root 2 times minus 1, 1. That's this vector. So the two eigenvectors we derived for this matrix that produces this stretch along a 45-degree line are the 45-degree vector in this quadrant and the 45-degree vector in that quadrant. Notice it's just a rotated basis set. So notice that the eigenvectors are just the columns of our rotation matrix. So let me recap. If you have a matrix that you've constructed like this, as a matrix that produces a stretch in a rotated frame, the eigenvalues are just the diagonal elements of the lambda matrix that you put in there to build that matrix. And the eigenvectors are just the columns of the rotation matrix. OK, so let me summarize. A symmetric matrix can always be written like this, where phi is a rotation matrix. And lambda is a diagonal matrix that tells you how much the different axes are stretched. The eigenvectors of this matrix A are the columns of phi. They are the basis vectors, the new basis vectors, in this rotated basis set. So remember, we can write this rotation matrix as a set of basis vectors, as the columns. And that set of basis vectors are the eigenvectors of any matrix that you construct like this. And the eigenvalues are just the diagonal elements of the lambda that you put in there. All right, any questions about that? For the most part, we're going to be working with matrices that are symmetric, that can be built like this. So eigenvectors are not unique. So if x is an eigenvector of A, then any scaled version of x is also an eigenvector. Remember, an eigenvector is a vector that when you multiply it by a matrix just gets stretched and not rotated.
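A NumPy sketch of this recap, building the 45-degree stretch matrix from the phi lambda phi-transpose construction (the stretch factor 2 matches the lecture's example):

```python
import numpy as np

theta = np.pi / 4                           # 45-degree rotation
phi = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
lam = np.diag([2.0, 1.0])                   # stretch by 2 along the rotated axis

# Rotate, stretch, rotate back: A = phi lambda phi^T.
A = phi @ lam @ phi.T                       # approximately [[1.5, 0.5], [0.5, 1.5]]

# The columns of phi are the eigenvectors, and the diagonal of lambda
# holds the eigenvalues: A v = lambda_i v for each column v.
v1, v2 = phi[:, 0], phi[:, 1]
```

Note that `A` comes out as the same 1.5 / 0.5 matrix used later in the lecture's MATLAB example.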
What that means is that any vector in that direction will also be stretched and not rotated. So eigenvectors are not unique. Any scaled version of an eigenvector is also an eigenvector. When we write down eigenvectors of a matrix, we usually write down unit vectors to avoid this ambiguity. So we usually write eigenvectors as unit vectors. For matrices of n dimensions, there are typically n different unit eigenvectors-- n different vectors in different directions that have the special property that they're just stretched and not rotated. So for our two-dimensional matrices that produce stretch in one direction-- sorry, so here is a two-dimensional, two-by-two matrix that produces a stretch in this direction. There are two eigenvectors, two unit eigenvectors, one in this direction and one in that direction. And notice, that because the eigenvectors are the columns of this rotation matrix, the eigenvectors form a complete orthonormal basis set. And that statement is true only for symmetric matrices that are constructed like this. So now, let's calculate what the eigenvalues are for a general two-dimensional matrix A. So here's our matrix A. That's an eigenvector: any vector x that satisfies that equation is called an eigenvector. And that's the eigenvalue associated with that eigenvector. We can rewrite this equation as A times x equals lambda I times x-- just like if a equals b, then a equals 1 times b. We can subtract that from both sides, and we get A minus lambda I times x equals zero. So that is a different way of writing an eigenvalue equation. Now, what we're going to do is solve for lambdas that satisfy this equation. And we only want solutions where x is not equal to zero. So A minus lambda I is just a matrix. So how do we know whether this matrix has solutions where x is not equal to zero? Any ideas?
AUDIENCE: [INAUDIBLE] MICHALE FEE: Right-- so what do we need the determinant to do? AUDIENCE: [INAUDIBLE] MICHALE FEE: It has to be zero. If the determinant of this matrix is not equal to zero, then the only solution to this equation is x equals zero. OK, so we solve this equation. We ask what values of lambda give us a zero determinant in this matrix. So let's write down an arbitrary A, an arbitrary two-dimensional matrix A, 2 by 2. We can write A minus lambda I like this. Remember, lambda I is just lambdas on the diagonals. The determinant of A minus lambda I is just the product of the diagonal elements minus the product of the off-diagonal elements. And we set that equal to zero. And we solve for lambda. And that just looks like a polynomial. OK, so the solutions to that polynomial solve what's called the characteristic equation of this matrix A. And those are the eigenvalues of this arbitrary two-by-two matrix A. So there is the characteristic equation. There is the characteristic polynomial. We can solve for lambda just by using the quadratic formula. And those are the eigenvalues of A. Notice, first of all, there are two of them, given by the two roots of this quadratic equation. And notice that they can be real or complex. In general, they can be real, or imaginary, or have real and imaginary components. And that just depends on this quantity right here. If what's inside this square root is negative, then the eigenvalues will be complex. If what's inside the square root is positive, then the eigenvalues will be real. So let's find the eigenvalues for a symmetric matrix: a, d on the diagonals and b on the off-diagonals. So let's see what happens. Let's plug these into this equation. The 4bc becomes 4b squared. And you can see that this thing has to be greater than zero, because a minus d squared has to be positive, and b squared has to be positive.
And so that quantity has to be greater than zero. And so what we find is that the eigenvalues of a symmetric matrix are always real. So let's just take this particular example and plug those numbers into this equation. And what we find is that the eigenvalues are 1 plus or minus root 2 over 2. So two real eigenvalues. So let's consider a special case of a symmetric matrix. Let's consider a matrix where the diagonal elements are equal, and the off-diagonal elements are equal. So we can update this equation for the case where the diagonal elements are equal. So a equals d. And what you find is that the eigenvalues are just a plus b and a minus b-- so a plus b and a minus b. And the eigenvectors can be found just by plugging these eigenvalues into the eigenvalue equation and solving for the eigenvectors. So I'll just go through that real quick. We found two eigenvalues, so there are going to be two eigenvectors. We can just plug that first eigenvalue into here-- call it lambda plus. And now, we can solve for the eigenvector associated with that eigenvalue. Just plug that in and solve for x. What you find, if you just go through the algebra, is that the x associated with that eigenvalue is 1, 1. So that's the eigenvector associated with that eigenvalue. And that is the eigenvector associated with the other eigenvalue. So I'll just give you a hint. Most of the problems that I'll give you to deal with on an exam, and many of the ones in the problem set, I think, will have a form like this, with eigenvectors along a 45-degree axis. So if you see a matrix like that, you don't have to plug it into MATLAB to extract the eigenvalues. You just know that the eigenvectors are on the 45-degree axis. So the process of writing a matrix as phi lambda phi transpose is called eigen-decomposition of this matrix A. So if you have a matrix that you can write down like this, that you can write in that form, it's called eigen-decomposition.
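A NumPy sketch of this special case (the values of a and b are arbitrary), checking that the eigenvalues are a + b and a - b with eigenvectors along the plus and minus 45-degree axes:

```python
import numpy as np

# Symmetric matrix with equal diagonal elements (illustrative values).
a, b = 1.0, 0.5
A = np.array([[a, b],
              [b, a]])

# The characteristic equation gives eigenvalues a + b and a - b,
# with unit eigenvectors along the +45 and -45 degree axes.
lam_plus, lam_minus = a + b, a - b
v_plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
v_minus = np.array([1.0, -1.0]) / np.sqrt(2.0)
```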
And the lambdas, the diagonal elements of this lambda matrix, are real. And they're the eigenvalues. The columns of phi are the eigenvectors, and they form an orthogonal basis set. And if you take this equation and you multiply it on both sides by phi, you can write down that equation in a slightly different form-- A times phi equals phi lambda. This is a matrix equivalent of the set of equations that we wrote down earlier. So remember, we wrote down this eigenvalue equation that says that this matrix A times an eigenvector equals lambda times the eigenvector; this is equivalent to writing down this matrix equation. So you'll often see this form of the eigenvalue equation rather than that form. Why? Because it's more compact. Any questions about that? We've just piled up all of these different eigenvectors into the columns of this rotation matrix phi. So if you see an equation like that, you'll know that you're just looking at an eigenvalue equation just like this. Now in general, when you want to do eigen-decomposition, when you have a symmetric matrix that you want to write down in this form, it's really simple. You don't have to go through all of this stuff with the characteristic equation, and solve for the eigenvalues, and then plug them in here, and solve for the eigenvectors. You can do that if you really want to. But most people don't, because in two dimensions, you can do it, but in higher dimensions, it's very hard or impossible. So what you typically do is just use the eig function in MATLAB. If you just use this function eig on a matrix, it will return the eigenvectors and eigenvalues. So here, I'm just constructing a matrix A-- 1.5, 0.5, 0.5, and 1.5, like that. And if you just use the eig function, it returns the eigenvectors as the columns of one matrix and the eigenvalues as the diagonals of another matrix. So you have to call it with two output arguments: F and V equals eig of A.
And it returns eigenvectors and eigenvalues. Any questions about that? So let's push on toward doing principal components analysis. So this is just the machinery that you use. Oh, and I think I had one more panel here just to show you that if you take F and V, you can reconstruct A. So A is just F, V, F transpose. F is just phi in the previous equation. And V is the lambda. Sorry-- phi and lambda weren't options as variable names, so I used F and V. And you can see that F, V, F transpose is just equal to A. Any questions about that? No? All right, so let's turn to how you use eigenvectors and eigenvalues to describe data. So I'm going to briefly review the notion of variance, what that means in higher dimensions, and how you use a covariance matrix to describe data in high dimensions. So let's say that we have a bunch of observations of a variable x-- so this is now just a scalar. So, we have m different observations; x superscript j is the j-th observation of that data. And you can see that if you make a bunch of measurements of most things in the world, you'll find a distribution of those measurements. Often, they will be distributed in a bump. You can write down the mean of that distribution just as the average value over all observations, by summing together all those observations and dividing by the number of observations. You can also write down the variance of that distribution by subtracting the mean from all of those observations, squaring that difference from the mean, summing up over all observations, and dividing by m. So let's say that we now have m different observations of two variables, pressure and temperature. We have a distribution of those quantities. We can describe that observation of x1 and x2 as a vector. And we have m different observations of that vector. You can write down the mean and variance of x1 and x2. So for x1, we can write down the mean as mu1. We can write down the variance of x1.
We can write down the mean and variance of x2, of the x2 observation. And sometimes, that will give you a pretty good description of this two-dimensional observation. But sometimes, it won't. In many cases, those variables, x1 and x2, are not correlated with each other. They're independent variables. In many cases, though, x1 and x2 are dependent on each other. The observations of x1 and x2 are correlated with each other, so that if x1 is big, x2 also tends to be big. In these two cases, x1 can have the same variance. x2 can have the same variance. But there's clearly something different here. So we need something more than just the variance of x1 and x2 to describe these data. And that thing is the covariance. It just says how do x1 and x2 covary? If x1 is big, does x2 also tend to be big? In this case, the covariance is zero. In this case, the covariance is positive. So if a fluctuation of x1 above the mean is associated with a fluctuation of x2 above the mean, then these points will produce a positive contribution to the covariance. And these points here will also produce a positive contribution to the covariance. And the covariance here will be some number greater than zero. That's closely related to the correlation, the Pearson correlation coefficient, which is the covariance divided by the geometric mean of the individual variances. I'm assuming most of you have seen this many times, but just to get us up to speed. So if you have data, a bunch of observations, you can very easily fit those data to a Gaussian. And you do that simply by measuring the mean and variance of your data. And that turns out to be the best fit to a Gaussian. So if you have a bunch of observations in one dimension, you measure the mean and variance of that set of data. That turns out to be a best fit, in the least-squares sense, to a Gaussian probability distribution defined by a mean and a variance. So this is easy in one dimension.
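A NumPy sketch of computing a covariance matrix and Pearson correlation from synthetic correlated data (the generative model here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated variables: x2 depends partly on x1, so when x1 is big,
# x2 also tends to be big, and the covariance will be positive.
m = 100000
x1 = rng.normal(0.0, 1.0, m)
x2 = 0.8 * x1 + 0.6 * rng.normal(0.0, 1.0, m)
X = np.stack([x1, x2])                      # 2 x m data matrix

# Covariance matrix: subtract the mean, then average the outer products.
Xc = X - X.mean(axis=1, keepdims=True)
C = (Xc @ Xc.T) / m

# Diagonal entries are the variances; the off-diagonal is the covariance.
# Pearson correlation = covariance / geometric mean of the variances.
r = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
```

With this generative model, both variances are close to 1 and the covariance and correlation are both close to 0.8.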
What we're interested in doing is understanding data in higher dimensions. So how do we describe data in higher dimensions? How do we describe a Gaussian in higher dimensions? So that's what we're going to turn to now. And the reason we're going to do this is not because every time we have data, we're really trying to fit a Gaussian to it. It's just that it's a powerful way of thinking about data, of describing data in terms of variances in different directions. And so we often think about what we're doing when we are looking at high-dimensional data as understanding its distribution in different dimensions as kind of a Gaussian cloud that optimally best fits the data that we're looking at. And mostly because it just gives us an intuition about how to best represent or think about data in high dimensions. So we're going to get insights into how to think about high-dimensional data. We're going to develop that description using the vector and matrix notation that we've been developing all along because vectors and matrices provide a natural way of manipulating data sets, of doing transformations of basis, rotations, and so on. It's very compact. And those manipulations are really trivial in MATLAB or Python. So let's build up a Gaussian distribution in two dimensions. So we have, again, our Gaussian random variables, x1 and x2. We have a Gaussian distribution, where the probability distribution is proportional to e to the minus 1/2 of x1 squared. We have a probability distribution for x2-- again, probability of x2. We can write down the probability of x1 and x2, the joint probability distribution, assuming these are independent. We can write that as the product of the two probability distributions p of x1 and p of x2. And we have some Gaussian cloud, some Gaussian distribution in two dimensions that we can write down like this. That's simply the product.
So the product of these two distributions is e to the minus 1/2 x1 squared times e to the minus 1/2 x2 squared. And then, there's a constant in front that just normalizes, so that the total area under that curve is just 1. We can write this as e to the minus 1/2 x1 squared plus x2 squared. And that's e to the minus 1/2 times some distance from the origin. So it falls off exponentially in a way that depends only on the distance from the origin or from the mean of the distribution. In this case, we set the mean to be zero. Now, we can write that distance squared using vector notation. It's just the square magnitude of that vector x. So if we have a vector x sitting out here somewhere, we can measure the distance from the center of the Gaussian as the square magnitude of x, which is just x dot x, or x transpose x. So we're going to use this notation to find the distance of a vector from the center of the Gaussian distribution. So you're going to see a lot of x transpose x. So this distribution that we just built is called an isotropic multivariate Gaussian distribution. And that distance d is called the Mahalanobis distance, which I'm going to say as little as possible. So that distribution now describes how these points-- the probability of finding these different points drawn from that distribution as a function of their position in this space. So you're going to draw a lot of points here in the middle and fewer points as you go away at larger distances. So this particular distribution that I made here has one more word in it. It's an isotropic multivariate Gaussian distribution of unit variance. And what we're going to do now is we're going to build up all possible Gaussian distributions from this distribution by simply doing matrix transformations. So we're going to start by taking that unit variance Gaussian distribution and build an isotropic Gaussian distribution that has an arbitrary variance-- that means an arbitrary width.
We're then going to build a Gaussian distribution that can be stretched arbitrarily along these two axes, y1 and y2. And we're going to do that by using a transformation with a diagonal matrix. And then, what we're going to do is build an arbitrary Gaussian distribution that can be stretched and rotated in any direction by using a transformation matrix called a covariance matrix, which just tells you how that distribution is stretched in different directions. So we can stretch it in any direction we want. Yes. AUDIENCE: Why is [INAUDIBLE]? MICHALE FEE: OK, the distance squared is the square of the magnitude. And the square of the magnitude is x dot x, the dot product. But remember, we can write down the dot product in matrix notation as x transpose x. So if we have a row vector times a column vector, you get the dot product. Yes, Lina. AUDIENCE: What does isotropic mean? MICHALE FEE: OK, isotropic just means the same in all directions. Sorry, I should have defined that. AUDIENCE: [INAUDIBLE] when you stretched it, it's not isotropic? MICHALE FEE: Yes, these are non-isotropic distributions because they have different variances in different directions. So you can see that this has a large variance in the y1 direction and a small variance in the y2 direction. So it's non-isotropic. Yes, [INAUDIBLE]. AUDIENCE: Why do you [INAUDIBLE]? MICHALE FEE: Right here. OK, think about this. Variance, you put into this Gaussian distribution as the distance squared over the variance, which is sigma squared. Here it's distance squared over a variance. Here it's distance squared over a variance. Does that make sense? It's just that in order to describe these complex stretchings and rotations of this Gaussian distribution in high-dimensional space, we need a matrix to do that. And that covariance matrix describes the variances in the different directions and essentially the rotation.
Remember, this distribution here is just a distribution that's stretched and rotated. Well, we learned how to build exactly such a transformation by taking the product phi lambda phi transpose. So we're going to use this to build these arbitrary Gaussian distributions. OK, so I'll just go through this quickly. If we have an isotropic unit variance Gaussian distribution as a function of this vector x, we can build a Gaussian distribution of arbitrary variance by writing down a y that's simply sigma times x. We're going to transform x into y, so that we can write down a distribution that has an arbitrary variance. Here this is variance 1. Here this is sigma squared. So let's make just a change of variables y equals sigma x. So now, what's the probability distribution as a function of y? Well, there's the probability distribution as a function of x. We're simply going to substitute y equals sigma x, so x equals sigma inverse y. We're going to substitute this into here. The Mahalanobis distance is just x transpose x, which is just sigma inverse y transpose sigma inverse y. And when you do that, you find that the distance squared is just y transpose sigma to the minus 2 y. So there is our Gaussian distribution for this distribution. There's the expression for this Gaussian distribution with a variance sigma. We can rewrite this in different ways. Now, let's build a Gaussian distribution that's stretched arbitrarily in different directions, x and y. We're going to do the same trick. We're simply going to make a transformation y equals a diagonal matrix s times x and substitute this into our expression for a Gaussian. So x equals s inverse y. The Mahalanobis distance is given by x transpose x, which we can just write down here. Let's do that with this substitution. And we get an s inverse squared here, which we're just going to write as lambda inverse. And you can see that you have these variances along the diagonal.
So if that's lambda inverse, then lambda is just a matrix of variances along the diagonal. So sigma 1 squared is the variance in this direction. Sigma 2 squared is the variance in this direction. I'm just showing you how you make a transformation of this vector x into another vector y to build up a representation of this effective distance from the center of the distribution for different kinds of Gaussian distributions. And now finally, let's build up an expression for a Gaussian distribution with arbitrary variance and covariance. So we're going to make a transformation of x into a new vector y using this rotated stretch matrix. We're going to substitute this in, calculate the Mahalanobis distance-- it's now x transpose x. Substitute this in and solve for the Mahalanobis distance. And what you find is that the distance squared is just y transpose phi lambda inverse phi transpose times y. And we just write that as y transpose sigma inverse y. So that is now an expression for an arbitrary Gaussian distribution in high-dimensional space. And that distribution is defined by this matrix of variances and covariances. Again, I'm just writing down the definition of sigma inverse here. We can take the inverse of that, and we see that our covariance-- this is called a covariance matrix-- describes the variances and correlations of those different dimensions as a matrix. That's just this rotated stretch matrix that we've been working with. And that's just the same as the covariance matrix that we described for the distribution. I feel like all that didn't come out quite as clearly as I'd hoped. But let me just summarize for you. So we started with an isotropic Gaussian of unit variance. And we transformed that vector x by multiplying it by sigma so that we could write down a Gaussian distribution of arbitrary variance. We transformed that vector x with a diagonal covariance matrix to get arbitrary stretches along the axes.
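To make the construction concrete, here is a small pure-Python sketch (Phi, Lam, and the helper functions are my names, and the 90-degree rotation with variances 4 and 1 is a made-up example) of building Sigma = Phi Lambda Phi transpose and computing the Mahalanobis distance y transpose Sigma inverse y:

```python
def matmul(A, B):
    # naive matrix multiply for lists of lists
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):
    # inverse of a 2x2 matrix
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# rotate by 90 degrees and stretch: Sigma = Phi Lambda Phi^T
Phi = [[0.0, -1.0], [1.0, 0.0]]        # rotation matrix
Lam = [[4.0, 0.0], [0.0, 1.0]]         # variances along the eigen-axes
Sigma = matmul(matmul(Phi, Lam), transpose(Phi))   # the variance-4 axis lands on y2

# Mahalanobis distance squared: d^2 = y^T Sigma^-1 y
y = [[0.0], [2.0]]                     # two "sigmas squared" out along y2
d2 = matmul(matmul(transpose(y), inv2(Sigma)), y)[0][0]   # 4 / 4 = 1
```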
And then, we made another kind of transformation with an arbitrary stretch and rotation matrix so that we can now write down a Gaussian distribution that has arbitrary stretch and rotation of its variances in different directions. So this is the punch line right here-- that you can write down the Gaussian distribution with arbitrary variances in this form. And that sigma right there is just the covariance matrix that describes how wide that distribution is in the different directions and how correlated those different directions are. I think this just summarizes what I've already said. So now, let's compute the covariance matrix from data. So now, I've shown you how to represent Gaussians in high dimensions that have these arbitrary variances. Now, let's say that I actually have some data. How do I fit one of these Gaussians to it? And it turns out that it's really simple. It's just a matter of calculating this covariance matrix. So let's do that. So here is some high-dimensional data. Remember that to fit a Gaussian to a bunch of data, all we need to do is to find the mean and variance in one dimension. For higher dimensions, we just need to find the mean and the covariance matrix. So that's simple. So here's our set of observations. Now, instead of being scalars, they're vectors. First thing we do is subtract the mean. So we calculate the mean by summing all of those observations and dividing by m. So there we find the mean. We compute a new data set with the mean subtracted. So from every one of these observations, we subtract the mean. And we're going to call that z. So there is our mean subtracted here. I've subtracted the mean. So those are the x's. Subtract the mean. Those are now our z's, our mean-subtracted data. Does that make sense? Now, we're going to calculate this covariance matrix. Well, all we do is we find the variance in each direction and the covariances. So for these low-dimensional data, it's going to be a small matrix.
It's a two-by-two matrix. So we're going to find the variance in the z1 direction. It's just z1 times z1, summed over all the observations, divided by m. The variance in the z2 direction is just the sum of z2, j, z2, j divided by m. The covariance is just the cross terms, z1 times z2 and z2 times z1. Of course, those are equal to each other. So a covariance matrix is symmetric. So how do we calculate this? It turns out that in MATLAB, this is super-duper easy. So if this is our vector, one of our observations, we can compute the inner product z transpose z. So the inner product is just z transpose z, which is z1, z2 times z1, z2. That's the square magnitude of z. There's another kind of product called the outer product. Remember that. For the inner product, a row vector, 1 by 2, times a column vector, 2 by 1, is equal to a scalar, 1 by 1. For the outer product, a column vector-- 2 by 1, two rows, one column-- times a row vector, 1 by 2, gives you a 2 by 2 matrix that looks like this: z1 z1, z1 z2, z2 z1, z2 z2. So that outer product already gives us the components to compute the covariance matrix. So what we do is we just take the j-th observation of this vector z and multiply it by the j-th observation of this vector z transpose. And that gives us this matrix. And we sum over all of this. And you see that is exactly the covariance matrix. So if we have m observations of vector z, we put them in matrix form. So we have a big, long data matrix, like this. There are m observations of this two-dimensional vector z. The data vector has dimension 2. There are m observations. So m is the number of samples. So this is an n-by-m matrix. So if you want to compute the covariance matrix, you literally just take this big matrix z times that matrix transpose, divided by m. And that automatically finds the covariance matrix for you in one line of MATLAB.
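Here is a pure-Python sketch of that computation (the data are made up): the covariance matrix as the average of outer products, which is exactly Z Z transpose over m.

```python
def covariance_matrix(Z):
    # Z is n x m: n dimensions (rows), m observations (columns), mean already removed.
    # Q = (1/m) Z Z^T, i.e. the average of the outer products z^(j) z^(j) transpose.
    n, m = len(Z), len(Z[0])
    Q = [[0.0] * n for _ in range(n)]
    for j in range(m):                  # accumulate the outer product of each observation
        for a in range(n):
            for b in range(n):
                Q[a][b] += Z[a][j] * Z[b][j] / m
    return Q

# two mean-subtracted dimensions, perfectly anticorrelated (made-up data)
Z = [[1.0, -1.0, 1.0, -1.0],
     [-2.0, 2.0, -2.0, 2.0]]
Q = covariance_matrix(Z)
# var(z1) = 1, var(z2) = 4, cov(z1, z2) = -2, and Q is symmetric
```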
There's a little trick to subtract the mean easily. So remember, your original observations are x. You compute the mean across the rows. That is, you're going to sum across columns to give you a mean for each row. That gives you a mean of the first component of your vector, and a mean of the second component. That's really easy in MATLAB: mu is the mean of x, summing across the second dimension. That gives you a mean vector, and then you use repmat to fill that mean out in all of the columns and subtract this mean from x to get this data matrix z. So now, let's apply those tools to actually do some principal components analysis. So principal components analysis is really amazing. If you look at single nucleotide polymorphisms in populations of people, there are like hundreds of genes that you can look at. You can look at different variations of a gene across hundreds of genes. But it's this enormous data set. And you can find out which directions in that space of genes give you information about the genome of people. And for example, if you look at a number of genes across people with different backgrounds, you can see that there are actually clusters corresponding to people with different backgrounds. You can do single-cell profiling. So you can do the same thing in different cells within a tissue. So you look at RNA transcriptional profiling. You see what are the genes that are being expressed in individual cells. You can do principal components analysis of those different genes and find clusters for different cell types within a tissue. This is now being applied very commonly in brain tissue to extract different cell types. You can use images and find out which components of an image actually give you information about different faces. So you can take a bunch of different faces, find the covariance matrix of those images, do eigendecomposition on that covariance matrix, and extract what are called eigenfaces.
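A pure-Python version of that mean-subtraction trick (MATLAB's repmat broadcast written as a plain loop; the data are made up):

```python
def subtract_row_means(X):
    # X is n x m: for each row (dimension), subtract that row's mean
    # across all m columns (observations)
    Z = []
    for row in X:
        mu = sum(row) / len(row)
        Z.append([x - mu for x in row])
    return Z

X = [[1.0, 2.0, 3.0],
     [10.0, 20.0, 30.0]]
Z = subtract_row_means(X)   # [[-1, 0, 1], [-10, 0, 10]]; each row now sums to zero
```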
These are dimensions on which the images carry information about face identity. You can use principal components analysis to decompose spike waveforms into different spikes. This is a very common way of doing spike sorting. So when you stick an electrode in the brain, you record from different cells at the end of the electrode. Each one of those has a different waveform, and you can use this method to extract the different waveforms. People have even recently used this to understand the low-dimensional trajectories of movements. So if you take a movie-- SPEAKER: After tracking, a reconstruction of the global trajectory can be made from the stepper motor movements, while the local shape changes of the worm can be seen in detail. MICHALE FEE: OK, so here you see a C. elegans, a worm, moving along. This is an image, so it's very high-dimensional. There are 1,000 pixels in this image. And you can decompose that image into a trajectory in a low-dimensional space. And it's been used to describe the movements in a low-dimensional space and relate them to a representation of the neural activity in low dimensions as well. OK, so it's a very powerful technique. So let me just first demonstrate PCA on some simple 2D data. So here's a cloud of points given by a Gaussian distribution. So those are a bunch of vectors x. We can transform those vectors x using phi s phi transpose to produce a cloud of points with a Gaussian distribution, rotated at 45 degrees, and stretched by 1.7-ish along one axis and compressed by that amount along another axis. So we can build this rotation matrix and this stretch matrix, and build a transformation matrix-- r, s, r transpose. Multiply that by x. And that gives us this data set here. OK, so we're going to take that data set and do principal components analysis on it. And what that's going to do is find the dimensions in this data set that have the highest variance.
It's basically going to extract the variance in the different dimensions. So we take that set of points. We just compute the covariance matrix by taking z, z transpose, times 1 over m. That computes that covariance matrix. And then, we're going to use the eig function in MATLAB to extract the eigenvectors and eigenvalues of the covariance matrix. OK, so q is the variable name we're going to use for the covariance matrix; it's z z transpose over m. Call eig of q. That returns the rotation matrix, the columns of which are the eigenvectors, and it returns the matrix of eigenvalues, whose diagonal elements are the eigenvalues. Sometimes, you need to do a flip-left-right because eig returns the lowest eigenvalues first. But I generally want to put the largest eigenvalue first. So there's the largest one, there's the smallest one. And now, what we do is we simply rotate-- we change basis. We can rotate this data set using the rotation matrix that the principal components analysis found. OK, so we compute the covariance matrix. Find the eigenvectors and eigenvalues of the covariance matrix right there. And then, we just rotate the data set into that new basis of eigenvectors and eigenvalues. It's useful for clustering. So if we have two clusters, we can take the clusters, compute the covariance matrix. Find the eigenvectors and eigenvalues of that covariance matrix. And then, rotate the data set into a basis set in which the dimensions in the data on which the variance is largest are along the standard basis vectors. Let's look at a problem in the time domain. So here we have a couple of time-dependent signals. So this is some amplitude as a function of time. These are signals that I constructed. They're some wiggly function that I added noise to. What we do is we take each one of those time series, and we stack them up in a bunch of columns. So our vector is now a set of 100 time samples.
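MATLAB's eig handles any size numerically; for a symmetric 2 by 2 covariance matrix there is a closed form, sketched here in pure Python (the names are mine), with the largest eigenvalue returned first, as the lecture's flip-left-right arranges:

```python
import math

def eig2_sym(Q):
    # closed-form eigendecomposition of a symmetric 2x2 matrix [[a, b], [b, c]],
    # with the largest eigenvalue first
    a, b, c = Q[0][0], Q[0][1], Q[1][1]
    half_trace = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)
    lam1, lam2 = half_trace + r, half_trace - r      # lam1 >= lam2
    if b != 0.0:
        v1, v2 = [b, lam1 - a], [b, lam2 - a]        # solves (Q - lam I) v = 0
    else:                                            # already diagonal
        v1, v2 = ([1.0, 0.0], [0.0, 1.0]) if a >= c else ([0.0, 1.0], [1.0, 0.0])
    def unit(v):
        n = math.hypot(v[0], v[1])
        return [v[0] / n, v[1] / n]
    return [lam1, lam2], [unit(v1), unit(v2)]

lams, vecs = eig2_sym([[2.0, 1.0], [1.0, 2.0]])
# lams is [3.0, 1.0]; vecs[0] points along (1, 1), vecs[1] along (1, -1)
```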
So there is a vector of 100 different time points. Does that make sense? And we have 200 observations of those 100-dimensional vectors. So we have a data matrix x whose columns are 100-dimensional, and we have 200 of those observations. So it's a 100-by-200 matrix. We do the mean subtraction-- we subtract the mean using that trick that I showed you-- and compute the covariance matrix. So there we compute the mean. We subtract the mean using repmat. Subtract the mean from the data to get z. Compute the covariance matrix Q. That's what the covariance matrix looks like for those data. And now, we plug it into eig to extract the eigenvectors and eigenvalues. OK, so extract F and V. If we look at the eigenvalues, you can see that there are 100 eigenvalues because those data have 100 dimensions. You can see that two of those eigenvalues are big, and the rest are small. This is on a log scale. What that says is that almost all of the variance in these data exists in just two dimensions. It's a 100-dimensional space. But the data are living in two dimensions. And all the rest is noise. Does that make sense? So what you'll typically do is take some data, compute the covariance matrix, find the eigenvalues, and look at the spectrum of eigenvalues. And you'll very often see that there is a lot of variance in a small subset of eigenvalues. That tells you that the data are really living in a lower-dimensional subspace than the full dimensionality of the data. So that's where your signal is. And all the rest of that is noise. You can plot the cumulative sum of this. And you can say that the first two components explain over 60% of the total variance in the data. So since there are two large eigenvalues, let's look at the eigenvectors associated with those. And we can find those. Those are just the first two columns of this matrix F that the eig function returned to us.
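The eigenvalue-spectrum reasoning can be sketched in a few lines of pure Python (the eigenvalues here are made up to mimic two signal dimensions plus a long noise tail):

```python
def variance_explained(eigenvalues, k):
    # fraction of the total variance captured by the k largest eigenvalues
    lams = sorted(eigenvalues, reverse=True)
    return sum(lams[:k]) / sum(lams)

# two large eigenvalues plus a tail of small "noise" eigenvalues
lams = [5.0, 3.0] + [0.05] * 40
frac = variance_explained(lams, 2)   # about 0.8: the top two components
                                     # carry 80% of the total variance
```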
And that's what those two eigenvectors look like. That's what the original data looked like. The eigenvectors, the columns of the F matrix, are an orthogonal basis set. A new basis set. So those are the first two eigenvectors. And you can see that the signal lives in this low-dimensional space of these two eigenvectors. All of the other eigenvectors are just noise. So what we can do is project the data into this new basis set. So let's do that. We simply do a change of basis. The F is a rotation matrix. We can project our data z into this new basis set and see what it looks like. Turns out, that's what it looks like. There are two clusters in those data corresponding to the two different waveforms that you could see in the data. Right there, you can see that there are kind of two waveforms in the data. If you project the data into this low-dimensional space, you can see that there are two clusters there. If you project the data into other projections, you don't see it. It's only in this particular projection that you have these two very distinct clusters corresponding to the two different waveforms in the data. Now, almost all of the variance is in the space of the first two principal components. So what you can actually do is project the data onto these first two principal components, set all of the other principal components to zero, and then rotate back to the original basis set. That is, you're setting as much of the noise to zero as you can. You're getting rid of most of the noise. And then, when you rotate back to the original basis set, you've gotten rid of most of the noise. And that's called principal components filtering. So here's before filtering and here's after filtering. OK, so you've found the low-dimensional space in which the signal sits; everything outside of that space is noise. So you rotate the data into a new basis set, filter out all the other dimensions that just have noise, and rotate back.
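The whole filtering recipe-- project into the eigenvector basis, zero the small components, rotate back-- can be sketched in pure Python (F, Z, and pca_filter are my names; the tiny data set is made up):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def pca_filter(Z, F, k):
    # Z: n x m mean-subtracted data; F: n x n matrix whose columns are
    # orthonormal eigenvectors, largest eigenvalue first. Keep the top k.
    P = matmul(transpose(F), Z)        # rotate into the eigenvector basis
    for i in range(k, len(P)):
        P[i] = [0.0] * len(P[i])       # zero out the "noise" components
    return matmul(F, P)                # rotate back to the original basis

s = 0.5 ** 0.5                         # 1/sqrt(2)
F = [[s, s], [s, -s]]                  # columns: (1,1)/sqrt2 and (1,-1)/sqrt2
Z = [[3.0, -1.0], [1.0, 1.0]]
Zf = pca_filter(Z, F, 1)               # keeps only the component along (1, 1),
                                       # giving approximately [[2, 0], [2, 0]]
```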
And you just keep the signal. And that's it. So that's sort of a brief intro to principal component analysis. But there are a lot of things you can use it for. It's a lot of fun. And it's a great intro to all the other amazing dimensionality reduction techniques that there are. I apologize for going over.
MIT 7.016 Introductory Biology, Fall 2018
Lecture 19: Cell Trafficking and Protein Localization
BARBARA IMPERIALI: I always like to just remind you-- it's kind of an assignment-- that we're going to do this news brief project, where it's a teamwork project if you choose. If you take a look at the piece that you have in your hands now, it asks you for a little bit of information on that: who you're going to be working with, if you choose to work with someone. Or you can work on your own. That's fine. And we're looking to get a news brief that's of significance to research going on in the life sciences. And I've given you-- there are a couple of links in the sidebar of the website-- good places where you can find interesting material. What I'm super interested in for you, as a group where many of you are in the engineering fields, is to find something really cool at the interface between the life sciences and engineering, where engineering has a huge impact on the life sciences. You have alternatives. You can download the coordinates of a protein and print it on a 3D printer and give us a summary of what the protein is, what it does, and submit your 3D print. I'll give it back to you afterwards, once we've had a look at it. But actually submit the 3D print. And then the other opportunity is-- I think you'll remember back to when we were talking about molecular biology of the cell. I did kind of a clunky demo at the front of the class, nothing like Professor Martin's demos at all. This was me with the ethernet cables showing you what topoisomerase did. But in my demo, I didn't show you how topo also cuts a strand of DNA, holds it while the supercoiling unwinds, and then stitches it together. So I thought some of the engineers might be able to come up with something that was really better than that for me to use, for us to use, next year in class. So I'm really laying down the challenge there. So I always like things in the news. I thought this was kind of interesting that the first vertebrates evolved in shallow waters.
I thought those were really cool first vertebrates. I'd love to get one of them in a fish tank and keep it. But anyway, that's that. It's truly amazing what you can see in the science reports, news briefs. I look at them whenever they come in. I get the posts every two or three days. And I'm kind of pleased to see that there's a lot of things that are in those news briefs that I feel that we're enabling you to read with some appreciation because of what we're covering in the class. So what we're doing now is we're really taking a leap forward here into cells and organisms, with respect to understanding how structure and function of individual macromolecules, proteins, nucleic acids, sugars, determine life, determine the dynamics of life that are necessary for an organism to really go through a life cycle, divide, have cells divide, go forward, have cells move. So what we're going to be talking about in the next lectures, which is section 6, is cellular trafficking and signaling. And so for the first lecture, which is 19 that we're on now-- so we're past the midway mark-- I'm going to be talking about trafficking. And that is how, within a cell, things get to where they need to be, or they get exported from a cell. Because all of the actions of a cell-- I really like thinking about the cell as a circuit board, where there's a receiver that gets information. And then the complex circuitry determines what outcome you get at the end of the day. So many of the proteins that we've talked about need to be in specific places for the cell to function. We have to have DNA polymerase in the nucleus. It's not going to be useful in the cytoplasm. We have to have a transcription factor that helps transcription go to the nucleus at the right time for transcription to occur. But we don't want it there all the time, because otherwise you'd have the light switch on the entire time. That wouldn't be useful. So we need to regulate where certain macromolecules are. 
We need to have the receivers on the surface of the cell to receive signals from outside. This is not just pertinent for multicellular organisms. It's pertinent for unicellular organisms, for them to sense their environment, know what's going on around them. Is the salt concentration changing? Is it getting very hot? Is it getting cold? Is there enough oxygen? Even unicellular organisms need to receive signals and respond to them. Multicellular organisms are way more complicated. Because you need to establish organs and different parts of a multicellular organism that have specialized function. So trafficking really is about what happens after you've made a replicated DNA in the nucleus, transcribed it, made a mature messenger that goes out to the cytoplasm in most cases. We'll talk about the exceptions to that case. And then in the cytoplasm, when proteins are expressed, all the different things that happen that guarantee that the protein gets to a proper destination for function. And some of those are quite complicated. Because remember, if I'm going to park a receiver in the cellular membrane with the signals being captured from outside, I've got to get it from the cytoplasm out there in a reliable way. In lectures 20 and 21, I'll talk to you about cellular signaling with a focus on mammalian cells and the sorts of signaling processes that may go awry in cells, for example, proliferating cells. And then Professor Martin will really focus in on neuronal cells, optogenetics in lecture 22. So this bundle really allows you to call on the things that you've learned until now and apply them in much more intriguing and complex situations. So here's a wonderful, sort of silly drawing of a triangular cell. There's always a joke among cell biologists, when they're trying to talk to mathematicians and mathematicians want to simplify everything. And so everything gets-- imagine a cell, and this box shows up on a screen.
Well, we all know that cells aren't triangular or box-shaped. But nevertheless, I thought this one was particularly cool. And so trafficking, the process of trafficking, is really all about, where is the information encoded into the protein that ensures that the protein is where it needs to be for the dynamics that we observe in living systems? We've talked a lot about static things. We make the protein. Here's the protein. The protein folds. We've talked a lot about things that are kind of fixed in time and space. But what we want to do is understand what makes a cell programmed to undergo a new function. For example, something as simple as cell division, we have to orchestrate a huge variety of activities in order for the cell division process to start to occur. Something as simple as cell motility-- think about, how do cells move? They're not moving all the time, but sometimes they will move towards a signal. What triggers those kinds of interactions? So in looking at the cell, these are some of the older images, where certain organelles, for example, are stained so that you can see them. So peroxisomes are where degradation happens. The golgi and the ER are a part of what's known as the endomembrane system. You'll see a lot about this towards the later part of the class, where we talk about how things get outside the cell through the endomembrane system. There's the surface plasma membrane. The cytoplasm is this sort of not really aqueous-- it's an open space. But it really isn't open. It's highly congested with all kinds of molecules, all kinds of structural proteins and so on. So don't think of the cytoplasm as a solution, but think of it as a much more gel-like structure with a lot of things happening in it. The nucleus itself is also surrounded by a membrane, as is the endomembrane system. So this would be the nuclear envelope.
Within the nucleus, you have a structure called the nucleolus, where aspects of the nucleic acids necessary for protein biosynthesis are made. Then there are structural proteins like microtubules and actin. But now, in this day and age, we don't have to deal with these vanilla images. We can actually use the methods that you've learned about in the last section, recombinant biology, to create new versions of proteins that have, along with their sequence, a marker that gives them a fluorescent color. So we are, later on in the semester, going to spend three lectures on fluorescence and cellular imaging, where you'll learn more about these fabulous proteins beyond just saying we've got a green one and a red one. We're going to give you all the background on the protein engineering that enabled those to become tools for biology. But for now, I'm just going to show you how much more interesting the images of the subcellular structures are when you've labeled, for example, a particular protein that goes exclusively to the nucleolus with a blue fluorescent protein, or to the mitochondria. Remember, Professor Martin told you we always think of these as-- and I'm not going to do the push-up. I'm just going to say it, powerhouse of the cell. I'm not doing-- [LAUGHS] I'm not great with push-ups, to be honest. But you see these sort of more tangled, extended structures. Vimentin is more of a structural protein. Here are the golgi, the endoplasmic reticulum, and the nucleus. So the colored fluorophore proteins, or the fluorescent fluorophore proteins, actually allow us, in real time, to observe dynamics. Once a protein is made, where does it go? If we add a trigger to the cell to cause an interaction, can we observe that protein, for example, migrating to the plasma membrane? Can we watch proteins being made through the ER? A variety of different things that allow us in modern biology to really look at dynamics, not just static information.
And so what I'm going to talk to you about is the ways in which proteins are coded very early on in their genesis, in their biogenesis, in order to go to certain locales within the cell. So let me just give you a bit of a road map here with a protein. And where things may start-- so we have some options. Do we want to send the protein outside the cell or keep it inside the cell? Obviously, two big default differences, if you're going to go to a particular venue inside the cell. Are we going to just stay in the cytosol? That's a sort of simple-- actually, that is the default position. Because you want to remember that most proteins are made on ribosomes in the cytosol of the cell. But the statistics are that about 50% of proteins end up somewhere other than the cytoplasm. They may end up in an organelle, back in the nucleus, on the surface, or secreted. So there's a lot-- so it's a good, solid 50% that don't end up staying in the cytosol, where they were originally made. The alternative is to go to organelles. And if you're going to an organelle, remember, the ribosome is not membrane-bound. It doesn't have a membrane perimeter. But many of the organelles do have membrane perimeters. So we're talking here about the mitochondria. That is far too long of a word. The nucleus-- so I'm going to abbreviate things like peroxisomes, or the various membrane-bordered organelles, where we're going to have to figure out, if something is made in the cytoplasm, how does it get into those organelles? Now we've spoken a little bit about the fact that some proteins are made in the mitochondria. I'm going to get back to that in a moment. But all the proteins in the mitochondria are not made in the mitochondria. Some of them are shipped in. Remember the endosymbiont theory, where we said that mitochondria may have originated from bacteria and been engulfed into cells. Those bacteria obviously were originally self-sufficient. 
But a lot of the proteins that were expressed in the mitochondria were dispensed with, and mitochondria now use proteins that are encoded by the nuclear DNA rather than the mitochondrial. But to this day, some proteins remain encoded within the mitochondria. So these are opportunities for where that may be. And I'm going to talk very specifically about signals that can get proteins into the mitochondria and into the nucleus. And it turns out that the barriers around those organelles are pretty different. I'll come back to that in a second when we get to the next slide. With respect to going outside the cell, there are two options. One option is for the protein to remain in the plasma membrane but with part of its structure outside the cell. The other option is for the protein actually to be spit out of the cell as a soluble entity that can travel around an organism, for example, in the bloodstream, and go to a remote site. And that becomes very important in signaling. So we would call those proteins secreted and soluble. So these would be membrane-bound. These would end up being soluble proteins. Let's take a look at the structure of the cell and look at where these various components are. So if you see these dots, those are free ribosomes in the cytoplasm. They would start to express different proteins. A lot of proteins are expressed on those free ribosomes. But in some cases, proteins become expressed on ribosomes that are associated with the endoplasmic reticulum. And therefore, you start a process whereby proteins end up being shipped to the outside of the cell. So where you see the speckles here, the free ribosomes, the destinies of those proteins are on the right-hand side of that picture. And for the ribosomes bound to the rough endoplasmic reticulum, the destiny of these proteins ends up on the left-hand side of this sort of family tree that I'm showing you. There's obviously one more place where proteins are made, and that's in the mitochondria. 
And if you remember the first question on your exam, it described the DNA that's in the mitochondria. Going back to the endosymbiont theory, that's a circular piece of DNA. And it sets it apart. And the ribosomes in the mitochondria look more like bacterial ribosomes than your eukaryotic ribosomes. So remember, all along, we're going to try in the second half of the course to bring back knowledge we've taught you, but sort of, in a sense, endlessly remind you to keep the big picture in mind. Because we've already spoken to you about it. So this now is a nice pictorial vision of what I've just described to you. And I'm going to first of all talk about proteins that are made in the cytoplasm and may be shipped to various organelles, and how that's accomplished. And then in the second part of the class, I'll talk about how proteins are shipped to the cell surface, or through expulsion from the cell. So the key mechanisms whereby proteins are trafficked to new locations are first of all using targeting sequences that are part of the protein sequence. And this is a very common way in which proteins are trafficked. They are part of the sequence. They may be at the amino or the carboxy terminus. But they are woven into the structure of your protein. So your protein comes along with a barcode saying where it's going to necessarily end up. And for the nucleus, mitochondria, and peroxisomes, for example, people have done extensive work with bioinformatics to basically look at protein sequences and find common themes of particular sequences that may be common to where a set of proteins may end up. Sometimes those sequences may not be easy to see just at first glance. But now there are websites where you can very, very readily put your protein sequence into the website, and it will say, it's got a nuclear localization sequence, or a mitochondrial-targeting sequence. So we can either do this by eye or we can use informatics analysis. 
Informatics analysis is very valuable because sometimes the information may be a bit more encrypted. And it may be a real struggle to slog through a lot of sequences. So you can really find out about the targeting sequences through bioinformatics. Because nowadays, the genomes of thousands of organisms are available readily online. And you can literally parse out information from the genomic information that gives you the proteomic information. So that's one way, with sequences that are targeted. In some cases, those targeting sequences remain part of the protein. But in other cases, in order to ensure that the protein stays put, the targeting sequences are removed. So that's another important point. You may keep the targeting sequence, or you may lose it through the action of another enzyme that cuts off the targeting sequence when the destination has been reached. Now, there's a second way that we can program where a protein may go. And these are rather useful transformations that make things even more dynamic. So let me walk you through a concept. If you think of a protein that's made on the ribosome, it's got a targeting sequence. In order to get that protein to its destination, you've got to make a new batch of protein that's going to go to its destination. It's going to end up in the mitochondria. You've got to make the protein de novo. Sometimes, when we need an action from the cell, we can't wait that long. We can't do things quickly and expect the cell to suddenly change what it's doing if we're sitting around waiting for the ribosome to make new copies of the protein. So the second way in which proteins are targeted to new destinations is through what's known as post-translational modifications. This is so unfair, Adam. I saw you using the middle boards, but it looked so much easier. So the second way to target a protein to a destination is using post-translational modification. What does this mean? What it means is that the protein is made. 
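The kind of sequence scanning those prediction websites do can be caricatured in a few lines. The patterns below are deliberately crude stand-ins based on the descriptions in this lecture (a run of basic residues for an NLS-like signal, alternating charges for a mitochondrial-style signal); real predictors use much richer statistical models.

```python
import re

# Toy targeting-sequence scanner. The motif patterns are simplified
# caricatures of the lecture's descriptions, not validated predictors.
MOTIFS = {
    "NLS-like": re.compile(r"[KR]{4,}"),          # a run of basic residues
    "MLS-like": re.compile(r"(?:[KR][DE]){3,}"),  # alternating +/- charges
}

def scan_targeting(seq: str) -> list:
    """Return the names of all toy motifs found in a protein sequence."""
    return [name for name, pat in MOTIFS.items() if pat.search(seq)]

print(scan_targeting("MAPKKKRKVEDA"))  # ['NLS-like']
```

The example input contains a basic stretch in the style of the classic SV40 NLS; a sequence with no runs of charge, like most cytosolic proteins, would come back as an empty list, matching the lecture's point that the cytosol is the default destination.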
It's ready. It's waiting. But we haven't engaged its final destiny. We haven't triggered it to go where it needs to be. We're waiting for an enzyme to just carry out a seemingly minor modification of that protein. And then the protein will go to its destiny. And I've shown you here examples of three types of modifications. One we will talk about today, because it's very simple to understand, lipidation. And then the other two, we'll talk about next time, phosphorylation and ubiquitination. And these are all what are known as PTMs, Post-Translational Modifications. And they are changes that occur to an amino acid side chain within an already made protein to alter its destiny. And I'd like to talk about lipidation first, because I get to remind you about cellular membranes. So remember, we've talked about these semipermeable barriers that are around organelles and around cells. And let's say that this is a membrane-- I've got to put my-- that exists between the cytoplasm and the outside of a cell. And let's say I have a protein lurking around in the cytoplasm, but I need it at the membrane. I need it to get involved in a signaling process. And I need it to be there now. If I have a soluble protein, it's not associated with the membrane. But I can use another enzyme to attach a hydrophobic, greasy tail to that protein, so that what it really wants to do is to get to the hydrophobic membrane. Lipidation is such a modification. It's just the modification with a long-chain, often C16 or C18, fatty acid that then renders the protein lipophilic and makes it want to move, insert this lipophilic tail into the membrane, and make the protein part of the plasma membrane. So the information is still, though, encoded within the protein. How could that happen? How could I have made that information be in the protein? What might be the strategy there? It's still encoded, but it's secret. It's cryptic. Any ideas? So I'm not going to just glom this group onto a protein. 
I'm going to put it somewhere specific. And so oftentimes, lipidation reactions occur site-specifically at particular sites within a sequence, and an enzyme recognizes that site and transfers the lipidic molecule to it. So lipidation may actually occur, for example, at the amino terminus of a protein. But only if there are certain features within that protein do you then attach the lipidic group. So once again, using bioinformatics, you can look at the target protein of interest and predict that it's the target of a post-translational modification reaction. So once again, the information is programmed into the sequence, but it's quite cryptic. It could be within the middle of the sequence. There may only be a couple of clues. But the clues are there nonetheless, and they can be parsed out using machine learning and screening of sequences to say that is a target for lipidation, or phosphorylation, or such. Is that clear to people? Does that make sense? The information is encoded, but you can't see that it's there. But the advantage of the post-translational modifications is that they occur on demand, as opposed to making a new protein de novo and then having it go to a particular cellular location. Later on, when we talk about phosphorylation, you will see that phosphorylation is the bread and butter of cellular signaling. It's the light switch in every room in the cell that turns on and off in order to make functions happen within the cell. And that's a really major, dynamic post-translational modification that has significant meaning. The reason for this little image is just to show you the membrane and remind you that the membrane is a supramolecular structure that's assembled with a hydrophobic core and polar head groups on both faces, as I've sort of indicated in this cartoon. So let's start with sequences that might take us to the nucleus. Now, the nuclear membrane is rather a strange entity. 
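As one concrete example of a cryptic lipidation clue: N-myristoylation (attachment of a C14 fatty acid, shorter than the C16/C18 tails mentioned above) requires a glycine immediately after the initiator methionine, which is removed before the lipid goes on. A toy check, deliberately ignoring the several downstream positions that real N-myristoyltransferases also read:

```python
# Toy check for one well-known lipidation signal: N-myristoylation needs
# a glycine right after the initiator methionine (position 2). Real
# enzymes read several more downstream residues, so this is a crude sketch.

def maybe_myristoylated(protein: str) -> bool:
    """True if the sequence starts Met-Gly, the minimal myristoylation clue."""
    return protein.startswith("MG")

def lipidate(protein: str, tail: str = "C14:0-myristoyl") -> str:
    """Annotate an eligible protein with a hypothetical lipid tail.

    The initiator Met is dropped, mimicking its removal before lipidation.
    """
    return f"{tail}~{protein[1:]}" if maybe_myristoylated(protein) else protein

print(lipidate("MGQSLT"))  # C14:0-myristoyl~GQSLT
```

Note how little of the sequence carries the signal: a single residue in the right position, which is exactly why these clues are "cryptic" and easy to miss by eye.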
Because the nuclear membrane isn't a simple membrane like the plasma membrane. It's actually a double-layered membrane. So if you look at a nuclear membrane-- and I'm just going to show a portion of the nuclear membrane-- within the nuclear membrane, there are pores, quite large openings. And the membrane is actually a double membrane, where both of these are lipid bilayers. So it's not a single membrane. It's a double membrane with large openings. And you might say, well, that's no use. There's just these great big, gaping holes in the nucleus. Anything can come and go if it wants. But the nuclear pores are kind of a special structure. Because they have a protein that's kind of disordered, that creates a tangled network. That means that that pore isn't totally open; there's some stuff that anything has got to get through to get from one side to the other. And my colleague Thomas Schwartz in biology works on the macromolecular structure of nuclear pores to understand these structures. Because these are also made through the auspices of having a lot of proteins that help create this structure. Otherwise, that membrane wouldn't stay in its proper format. So in order for a protein to get into the nucleus, if it needs to, or leave the nucleus, it has to have some kind of mechanism to get through this structure that's plugging the nuclear pore. So this would be the inside of the nucleus. And this would be the cytoplasm. So as shown on this slide, for the nucleus, there's a particular protein sequence that's appended to a protein. That's known as the Nuclear Localization Sequence, or NLS. And what an NLS sequence is, it's a short sequence of amino acids that enables a protein to get to its proper destination. And these sequences are quite well recognized. They may end up being highly basic sequences. So an example of an NLS would be-- it's not very exciting, but it just goes on-- Lys, Lys, Lys, Arg, Lys. 
And it may be bounded by hydrophobic residues or other types. So that would be a typical NLS sequence that's in a protein. And I want to remind you that lysine and arginine both have side chains that at physiological pH are positively charged. So the nuclear localization sequence is something that's easily recognized because of this sort of short sequence that may be at the N- or C-terminus. I think there's either possibility. But it's a very clear sequence. You could look at your protein sequence and say, there's an NLS on that sequence. And it's the NLS sequence alone that's responsible for getting proteins in and out of the nuclear pore. Let's mostly focus on getting into the nucleus. Basically, you have a protein structure that has an NLS sequence at one terminus. And that NLS sequence binds to another protein-- named creatively; you had a little bit of a chance to give proteins names in the exam-- called importin. So it's an import protein that binds to the NLS, and as a consequence of that, will carry cargo. It will escort cargo into the nucleus of the cell. And it sends it through this meshwork of proteins. That's a very loose meshwork of proteins. And they're not ordered proteins. They're highly disordered proteins. So they make more of a filter than a plug. But they are definitely something that doesn't allow any old protein to go through that nuclear pore. NLS tags are very easy to recognize, once again, through bioinformatics analysis. And what's really cool is that you can reprogram a protein to be where you want by manipulating the NLS. So this is rather a nice set of experiments. Let's say we have a protein that we're going to micro-inject into the cytoplasm of the cell. And we want to program it to either go into the nucleus or stay outside the nucleus. That can be done readily by attaching a nuclear localization sequence to a protein along with a fluorophore dye or fluorescent protein that will allow you to observe that experiment. 
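The "highly basic" character of an NLS is easy to quantify. A rough net-charge calculation at physiological pH (treating histidine as neutral for simplicity) makes the lecture's example stand out immediately:

```python
# Approximate side-chain charges at physiological pH, as in lecture
# (His treated as neutral; termini ignored for simplicity).
CHARGE = {"K": +1, "R": +1, "D": -1, "E": -1}

def net_charge(seq: str) -> int:
    """Approximate net side-chain charge of a peptide at pH ~7.4."""
    return sum(CHARGE.get(aa, 0) for aa in seq)

print(net_charge("KKKRK"))  # the lecture's example NLS -> 5
```

Compare that +5 against the alternating-charge mitochondrial-style sequence discussed below, which nets out near zero even though it is also full of charged residues; the pattern of charge, not just the total, is what distinguishes the two signals.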
If you micro-inject into the cytoplasm, that protein that's got an NLS will get carried into the nucleus through association of the NLS with importin. But if you chop off that NLS, the protein is stuck; it remains out in the cytoplasm. Let's say you want to study a new protein. I just want to show you that these NLS sequences are totally independent of the cargo they carry. You can just stick an NLS on your favorite protein that you want to interrogate. Let's take pyruvate kinase. It doesn't have anything to do with specific transport to the nucleus. But nevertheless, if it doesn't have an NLS and it's fluorescently labeled, it stays outside in the cytoplasm. But if you put an NLS on it, it concentrates into that region of the cell. So these experiments show you that what we know about these targeting sequences can be manipulated and used to enable you to move things around in the cell. So that's one particular type of mechanism. The next mechanism I want to describe to you is the mechanism that's used for mitochondrial transport. And it's a little bit different in its strategy. So to get into the mitochondria, there is, again, a recognition sequence, in this case, a mitochondrial localization sequence that has particular characteristics. In this case, the mitochondrial localization sequence-- let's say it's at the N-terminus of your protein-- would be something that might be a mix of charges. Some Arg, Glu, Arg, Glu. So that's a typical MLS sequence. And in this case, the charge at physiological pH is different from the nuclear localization sequence, because it's an alternating positive and negative charge. So this is pretty different from this. It doesn't take bioinformatics to figure that one out. So you can then pick out mitochondrial localization sequences. And in this case, remember, mitochondria make some of their own proteins from their circular DNA. But they've abandoned expressing all the proteins that are needed in the mitochondria. 
And some proteins are transported into the mitochondria using these types of sequences. But the approach, the strategy, is different from getting into the nucleus. In this case, the MLS sequence associates with a protein channel that is in a closed state. So here's a membrane. Here's the makings of a channel. But it's in a closed state. But once the protein with the MLS sequence binds to it, that channel opens. It's triggered by the binding of that sequence to a portion of the protein that's outside that membrane. And that then allows the protein to be unfolded and transported into the mitochondria, where that sequence may be removed. And then the protein refolds in the mitochondria. So it's a very different strategy from the nuclear localization sequence. So you'll find, for many different organelles in the cell, there might be very specific localization sequences that you could look up and learn about. But one thing I want to mention to you is that these localization details are very important. And many diseases in cells are a consequence of proteins not being localized to the right place. If a protein is not in the right place at the right time, then things will start to go wrong with the signaling or the processes of the cell. So diseases are frequently associated with mislocalization. So now what we're going to do is basically say, we've taken care of understanding things made in the cell. They either stay in the cytosol or they'll go to organelles based on particular types of strategies that are largely dependent on short tagging sequences, but in other cases, may be dependent on post-translational modification. All right. So here is a cartoon. But actually, I want to do something slightly different if it doesn't take too long. Now, when we first talked about translation on the ribosome, what you see there in green and yellow is the ribosome. The dark band is a messenger RNA. 
The dark blue are transfer RNAs that are being helped by elongation factors to get to the ribosome. But what I want to point out here is the emerging sequence of polypeptide coming out through a tunnel on the ribosome. Now, if a protein is going to be destined for outside the cell, it is expressed with what's known as a signal sequence. It's about a 20-amino-acid sequence that is recognized by the signal recognition particle. And then translation slows down, and the ribosome clamps onto the endoplasmic reticulum membrane so that the new peptide starts being threaded into the endoplasmic reticulum through what's known as the translocon. So you're now not sending the protein out to the cytoplasm, but you're rather sending the protein into the endoplasmic reticulum. And you're also sending it down this branch of the protein biosynthesis pathway. You see this piece of protein emerging. This hatched portion is the cytoplasm. The gray portion is the endoplasmic reticulum. So there is a complex machinery at play that enables proteins to be made in the cytoplasm but now targeted to a completely new location. And these are the proteins that are going to be destined to either stay in the plasma membrane or be secreted from the cell. And this view here gives you a little bit more than the cartoon. So on the ribosome, a signal peptide is made-- the green peptide sequence that's about 20 amino acids long. That is actually called a signal peptide. It's signaling for synthesis through the endomembrane network. That causes the ribosomes to dock down on the cytosolic face of the ER membrane and keep on synthesizing, so that proteins are made into that endomembrane system. And you can think of this cavernous endomembrane system as your tunnels out of a cell, for either display on the surface of the cell or for secretion entirely in vesicles. So let's take a look at how that occurs. When you make a protein in that way, see the dark dots, the rough ER? 
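A signal peptide's most obvious sequence feature is a hydrophobic core, so a crude way to spot candidates is to slide a window over the N-terminus and look for a hydrophobic stretch. The sketch below uses a subset of the Kyte-Doolittle hydropathy scale; the window size and cutoff are arbitrary choices for illustration, not a validated predictor.

```python
# Crude signal-peptide spotter: look for a hydrophobic stretch in the
# first ~25 residues. Scale values are a subset of Kyte-Doolittle;
# residues not listed are scored as 0.0.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "D": -3.5, "E": -3.5,
      "N": -3.5, "Q": -3.5, "K": -3.9, "R": -4.5}

def has_signal_peptide(seq: str, window: int = 8, cutoff: float = 2.0) -> bool:
    """True if some window in the N-terminal 25 residues averages hydrophobic."""
    nterm = seq[:25]
    for i in range(len(nterm) - window + 1):
        chunk = nterm[i:i + window]
        if sum(KD.get(aa, 0.0) for aa in chunk) / window >= cutoff:
            return True
    return False
```

A leucine/valine-rich N-terminus sails past the cutoff, while a typical charged, polar cytosolic sequence never does, which mirrors the biology: it is the hydrophobic core that the signal recognition particle grabs.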
These are ribosomes that are attached to the membrane. Proteins are made into the membrane. And then the endomembrane system is not really just a tunnel or a labyrinth. Actually, each of those layers spits off vesicles that fuse with the next layers to gradually make their way outside of the cell. So here you see there are vesicles. You're always keeping proteins associated with membrane as you go through the endomembrane system. And here is a vesicle that's got protein in it. It may either release it to the outside of the cell, or the protein may be associated with the membrane of the vesicle and stay parked in the plasma membrane. And so I just want to give you one final slide where I talk about the biogenesis of membrane proteins. Now, this is pretty complicated stuff. Because you have to remember what's inside and out. So I spent more time than I should have on this cartoon to show you which end of the protein ends up outside the cell and which inside the cell, and how you make multi-membrane-spanning proteins. So let's take a look at this in detail now. Here's the ribosome. Here's the protein emerging. If there's a signal sequence there, that ribosome docks down on the membrane and starts translating the protein, amino terminus first, into the endoplasmic reticulum. We're all OK with that? As synthesis continues, we may reach the stop codon on the messenger RNA. And what may happen is that the protein may remain associated with the membrane. The amino terminus will be in the ER. And the C-terminus will remain on the other side. There are a number of different configurations. But if we want to start to transport this protein to the surface of the cell, it will then stay associated with membrane, but not in the form of the flat membrane that it was delivered into. That membrane may pinch off into a spherical vesicle. But you still have the C-terminus outside and the N-terminus inside. 
That will then work its way through the endomembrane system, and ultimately, fuse with the plasma membrane. This is the really fun part. And then, once it's fused with the plasma membrane, it has the option to be displayed on the outside of the cell. Why? You have a protein. The N-terminus is on the outside. The C-terminus is on the inside. So that shows you the biogenesis of a cell surface protein that's stuck in the membrane through its membrane-associated domain. If you're not going to stay with the membrane, you can actually also simply release this into the vesicle, for release of a soluble protein. I will not go through this. But there are miraculous steps that end up in the biogenesis of multi-transmembrane proteins. Because each of those transmembrane domains gets made in the translocon and gets shuttled sideways. And you start piling up transmembrane domains that span the membrane. And in the next class, we're going to see how useful these proteins are in cellular signaling. So those are very important proteins to think about. One last thing-- so let's think about this. For either configuration, either post-translational modification or using targeting sequences, when do we define where the protein's going to end up? Where's the information first defined? Anyone want to answer me and explain why? Yes? AUDIENCE: Would it be B, the mRNA sequence, because that would have a significant portion of the splicing? BARBARA IMPERIALI: It's a good try. But you want to remember, yes, splicing is important. But where was the sequence actually in the entire pre-mRNA? When would that have been defined? Yeah? Sorry. Carmen? AUDIENCE: Is it in the genomic DNA sequence? BARBARA IMPERIALI: Yes. Because you never have information in the RNA that wasn't in the DNA. So the DNA has got the information there. Yeah, it may need a bit of splicing to put things in the right place. But the information is there in the DNA. 
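The inside/outside bookkeeping in this passage can be captured in a toy topology tracker. It assumes exactly the scenario just described: the N-terminus is threaded into the ER lumen first, each additional transmembrane segment flips which side of the membrane the growing chain is on, and fusion with the plasma membrane maps lumen to outside and cytosol to inside.

```python
def final_termini(n_tm_segments: int) -> dict:
    """Toy topology tracker for the lecture's scenario.

    Assumes the N-terminus enters the ER lumen, each transmembrane
    segment flips the side of the chain, and vesicle fusion with the
    plasma membrane converts lumenal -> extracellular.
    """
    n_side = "lumen"
    # After an odd number of membrane crossings, the C-terminus is cytosolic.
    c_side = "lumen" if n_tm_segments % 2 == 0 else "cytosol"
    fused = {"lumen": "outside cell", "cytosol": "inside cell"}
    return {"N-terminus": fused[n_side], "C-terminus": fused[c_side]}

print(final_termini(1))  # single-pass: N outside, C inside, as in lecture
```

For the single-pass protein in the cartoon this reproduces the punch line, N-terminus outside and C-terminus inside, and for multi-pass proteins it shows why the parity of the transmembrane-segment count decides where the C-terminus lands.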
So you want to remember, for all of this targeting, the information is in the genomic information most commonly. It's the genomic information that has the patterns of sequences for post-translational modification. It's the genomic information that has things like NLSes and MLSes. They're already there. But they are often encrypted. And there was a very nice point there, though. If you want to make a single chunk of a genome encode either a protein that's going to be exported through the secretory pathway or one that stays in the cytosol, you might splice in or out a signal sequence. So that's a really good way, using the same original DNA sequence, to actually get to proteins that fulfill different final destinies within the cell. So next time, we're going to talk about signaling. It's going to be a blast.
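The point about splicing a signal sequence in or out of the same gene can be sketched as a toy two-exon gene. The exon names and "sequences" here are invented stand-ins, not real biology.

```python
# Toy sketch: one gene, two splice isoforms, two destinations.
# Exon names and contents are invented placeholders.
EXONS = {
    "signal": "SIGNALPEP",   # stand-in for a ~20-aa signal peptide exon
    "body":   "COREPROTEIN",
}

def splice(include_signal: bool) -> str:
    """Assemble a protein from exons, with or without the signal exon."""
    parts = (["signal"] if include_signal else []) + ["body"]
    return "".join(EXONS[p] for p in parts)

def destination(protein: str) -> str:
    """Crude routing rule mirroring the lecture: signal peptide -> secretory."""
    return "secretory pathway" if protein.startswith("SIGNALPEP") else "cytosol"

print(destination(splice(True)), "/", destination(splice(False)))
```

One genomic sequence, two splice decisions, two final destinies, which is the lecture's closing point in miniature.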
MIT 7.016 Introductory Biology, Fall 2018 -- Lecture 6: Nucleic Acids
BARBARA IMPERIALI: So we are moving along. Lecture 6 is the last of the biochemistry lectures. We're going to be talking about nucleotides and nucleic acids. And you'll understand these terms in a moment. I'll clarify them for you. But this is a tremendous stepping stone to the next portion of the class. So I show you a few images here. I'm going to reshow you some of these in a moment when we talk about understanding the noncovalent structure of DNA, which is so critical to understanding information storage and information transfer. But for now, let's just have a quick peek forward. After this section, I'm going to be covering molecular biology, so how to go from DNA to RNA to protein. And then Professor Martin will take over with the basic structures and functions of cells and then genetics. But for all of this, we're going to need nucleic acids. And I'll explain to you why here. So nucleic acids form fundamental units for information storage-- and that is the DNA that is in our nucleus and in our mitochondria-- and then information transfer. And if I get a little bit of time at the end, I have three or four quick slides that you don't have on your handout, because it's sort of a floating topic, on the use of DNA in DNA-based computing, because it's a nanoscale structure that one can program to do different things. And I think you might enjoy that. So in this picture of the components and what's known as the central dogma-- that is, how DNA is converted into messenger RNA, from which, through the help of transfer RNA and ribosomal RNA, we get proteins-- the key elements on this slide are DNA, messenger RNA, ribosomal RNA, and transfer RNA. And those are all made up of nucleotides being brought together into polymers that are nucleic acids. So obviously, we really need to crack the structures of these and understand how the structure informs function. Remember, we did that for proteins. We've done that for phospholipids. 
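The central dogma traffic described here, DNA to mRNA to protein, can be sketched in a few lines of code. The codon table below is a tiny toy subset of the real 64-codon table, just enough to run the example.

```python
# Minimal central-dogma sketch: DNA coding strand -> mRNA -> peptide.
# Only a handful of codons are included in this toy table.
CODON = {"AUG": "Met", "UUU": "Phe", "AAA": "Lys", "UAA": "STOP"}

def transcribe(coding_strand: str) -> str:
    """mRNA carries the coding strand's sequence with U in place of T."""
    return coding_strand.replace("T", "U")

def translate(mrna: str) -> list:
    """Read codons in frame from the first AUG until a stop codon.

    Assumes an AUG is present; codons missing from the toy table read '???'.
    """
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        aa = CODON.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate(transcribe("ATGTTTAAATAA")))  # ['Met', 'Phe', 'Lys']
```

Notice that translation is the lossy step: the reading frame and the stop codon decide what part of the message becomes protein, which is why the same transcript can yield different products depending on processing.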
We thought about it very briefly for carbohydrates. But the thing that I really want to stress to you with the fourth of these macromolecules is looking at how the biomolecule's structure really informs function. And it's really cool to think about how it's done. So how is that chemical molecular structure something that we can understand from the perspective of function? So what we need to do, first of all, is think about what nucleotides are and understand their structure so that we can move forward to understand how they come together to build these macromolecules. They're so pivotal and essential in life for programming the biosynthesis of our proteins. And now we're understanding more and more about not only that, but also how RNA, not DNA, is involved in a large number of regulatory processes. So it's not just that DNA, double-stranded DNA, goes to a messenger, and so on. Also, a lot of regulation occurs because of a lot of the other nucleic acids that are within the cell. So I'm going to go here because I want to describe the component parts of nucleotides so we understand their structure and their properties. So what are nucleotides? And you look at these structures up on the board. They look kind of complicated. So let me deconstruct them for you. It'll make life a lot easier. So there are two familiar building blocks and one new one. So the familiar building blocks are, first of all, carbohydrates. So the key carbohydrate in nucleic acids is a five-carbon pentose sugar, which looks like this. You can count the carbons, 1, 2, 3, 4, and 5. And you can reassure yourselves everything is there with respect to the carbons by translating this line-angle drawing into a drawing where you put all the hydrogens on and you know where everything is. There are two types of five-carbon pentoses that are used in the nucleic acids. 
They are ribose, which is shown here with OHs on all of those carbons, and 2-deoxyribose, which is a building block of DNA, whereas ribose is a building block of RNA. What else do I need to tell you? You'll see this later on. That ribose sugar ends up being connected to what are known as nucleobases. You do not necessarily need to draw those, because you've got them on your handout to put sketches on. So I put them on the board so I don't have to stand here and draw them for you. And I want to explain certain things. So the nucleobases have a numbering system-- and I'm going to keep on reiterating this so you'll get familiar with it-- that numbers the atoms 1 through whatever it is as you're walking around the ring. So when we talk about the ribose component, it has what's known as a prime numbering system to differentiate it from the numbering system in the nucleobases. So this would be 1 prime, 2 prime, 3 prime, 4 prime, and 5 prime. Why is that? This becomes incredibly important when we talk about putting together polymers of DNA and the direction in which DNA is assembled in life, and also even when we describe 2-deoxyribose, or a ribose, because this would be called 2-prime-deoxyribose in the nucleic acid. So I'm going to bore you with that numbering system because I'll start to use it very commonly. And it will make a lot of sense as we start to assemble the DNA macromolecule, when we talk about the way it's built and drawn and written. The numbering system will be important because we'll constantly refer to 5 prime and 3 prime. That's just a little preview for later. The next component of the nucleotide is a phosphate. Phosphate looks like this. But in the nucleotides, these are joined to other units as phosphoesters. But you want to remember that in phosphorus, you have 1, 2, 3, 4, 5 bonds to phosphorus, and you commonly have a negative charge on one of those oxygens. 
And in the structure of DNA, you actually have phosphates occurring as phosphodiesters. And, once again, you will see that when we see the intact structure of DNA. So what are nucleotides? Nucleotides are a combination of a carbohydrate or sugar, a phosphate, and a nucleobase. That's the third component, the one we're going to learn about now. So the nucleobases look like this. There are two families, two flavors of nucleobase. There is one flavor-- let's get this cleaned up a little bit here-- that has two rings, and it has the shorter name, purine. And there's a different family or flavor of nucleobases that has one ring, and it has the bigger name, pyrimidine. And that, to this day, is the way I remember purines and pyrimidines. Small name, big structure; big name, small structure. If that's helpful to you, go for it. Use it. I haven't patented it or anything. So in nucleic acids, there are two different purines. They are known as adenine and guanine. You do not need to know these structures. I actually only know my favorite three of the five to draw easily. And for the other two, I'm always stumbling around the ring. So don't worry about that. We all get to know the ones we work with every day. For me, it's uracil, adenine, and cytosine, but not the others. But what you do need to understand is a little bit about their structures. Because when we start to talk about the noncovalent structure of nucleic acids, principally the double-stranded helix of DNA, we need to know where the hydrogen bond donors and acceptors are in these structures. So if you want to indulge me, you can take a look at these structures. This hydrogen would be a donor. You can see that it's a hydrogen on a nitrogen. This nitrogen is interesting. It has 1, 2, 3 bonds to nitrogen, which means there is also a lone pair of electrons on that nitrogen in the ring system. So that would be a hydrogen bond acceptor. And the adenine nucleobase can accept and give a pair of hydrogen bonds.
And you can work that out for all of the others. So in guanine, there is an acceptor, another acceptor, and a donor, and so on. So those rings in the nucleobases are very important because they have places that you can hydrogen bond to. Now, is everyone feeling comfortable about this? Does anyone want to ask me a question that might help clarify, because it's quite-- yeah, do you have a question? AUDIENCE: [INAUDIBLE] What does uracil [INAUDIBLE]?? BARBARA IMPERIALI: What does-- sorry? AUDIENCE: Uracil. BARBARA IMPERIALI: Uracil. These are all-- sorry. All these nucleobases have fancy names. So, so far, I've shown you the structure of adenine, guanine, cytosine, and thymine. Uracil, which is not drawn on the board, is very similar to thymine, except this methyl group is a hydrogen. Knowing the names is also complicated. I really care that you understand the hydrogen-bonding patterns; not to draw the whole structures, but to identify hydrogen-bonding patterns; not to remember fancy names, because there's no logic to those names; but really, to remember ribose, deoxyribose, phosphate and phosphodiesters, purines and pyrimidines, just the sizes of them to pick them out. Does that make sense-- what I want you to know, and what you can remember if you think it's interesting? Now, in nature, we use the nucleotide building blocks in many different ways. It's not just in DNA and RNA. And so here, I'm showing you some really important nucleotides that are found in nature. And I'll give you a little bit of information about their signaling. So here are the components that you can pick out. There is, in this case, a ribose sugar. In this case, there's phosphate, but it's a triphosphate. So it's got three phosphates in a row. And here's a nucleobase, which is a purine. And this is adenosine triphosphate. So it's one of the nucleotides used in energy transfer.
In a lot of metabolic processes, we use ATP as a molecule that has energy that can be unlocked for chemical processes. There's another one of these, which is guanosine triphosphate, where the nucleobase is different. They're both purines, but they have different structures. You can see them there. And then finally, the last one I show you here is a nucleotide that has a cyclic phosphate. But it still has a nucleobase, a ribose, and a phosphate. And this is cyclic AMP. And when we come back after Professor Martin has talked, we'll talk about the role of cyclic AMP as a second messenger. So these two molecules, in addition to being building blocks for DNA and RNA, also are forms of energy where you can use ATP or GTP as a form of energy in a lot of metabolic processes. And in fact, when we start constructing proteins using the ribosomal system, you'll notice we use GTP as a form of energy, not ATP. It's interesting how nature chooses to do that. Any questions about this? One tiny wrinkle left to deal with, and that's a little bit more about those building blocks for the nucleic acid, and one more item that it's useful to understand the name of. So here are the five nucleobases, two purines, and three pyrimidines. In DNA, we have A, T, G, and C. So we have different building blocks. Three are common to both polymers. One is different. Uracil and thymine are exchanged when you go from DNA to RNA. The pyrimidines are cytosine, uracil, and thymine. And in RNA, you have A, U, G, and C. So there are reasons for these differences, and I'll nudge into some of those chemical differences in a moment. So the information up there is the same information that I have on this board. The next thing I need to talk to you about is that we very commonly use two terms, nucleoside and nucleotide. How irritating is that? The nucleoside is just the ribose plus the nucleobase, but no phosphates. As soon as you put on phosphates, they become nucleotides.
So for example, nucleobase, ribose, and in this case, a phosphate on it. And that becomes a nucleotide. No matter how many phosphates there are, it's called a nucleotide. I'm less concerned that you will remember that nomenclature, more that you know what it's all about, because otherwise, it might become a little bit confusing. So just remember that, if you can. But I think I've tried to define the things I would like you to remember-- the building blocks, the numbering system, the phosphodiester linkages, and the nucleobases, as far as understanding where donors and acceptors are for hydrogen bonding. And there's one thing. So we call that a nucleoside, whereas we call it a nucleotide when it includes the phosphates. And there's one thing that you want to notice, which is that the bond from the nucleobase to the ribose is a glycosidic bond. It's a bond to a carbohydrate. So that's why it's called a glycosidic bond. There are glycosylases that cleave the bond from the base to the sugar. Those are very important when we have mutations in our DNA, and we want to cut out the damaged base to fix it so it doesn't get misread in the biosynthesis of DNA or in the biosynthesis of messenger RNA. So that bond is important. We'll talk about it again when we get to learning about how DNA sequences are corrected if there are mistakes in those sequences. And that will be later on. So let's start to now look at the polymers. Now, I want to tell you that by the early 1900s, people pretty much knew the covalent structure of DNA. And I'll describe it to you now. DNA is made up of nucleotides. And this is its basic structure, where you have a phosphodiester backbone linking riboses, and each of those riboses is modified with a purine or a pyrimidine. And that is the basic structure of a nucleic acid polymer, only it's very, very, very, very long. So let's take a look at the components here. Look at the bonds.
And maybe on your notes, just highlight the bonds and some of the things I'll talk about. So first of all, the numbering system here: we describe the sequence of a nucleic acid going from 5 prime to 3 prime, because the phosphodiester bonds join the 5 prime-- there should be a number there-- and the 3 prime sites. So the linkage would be here, 5 prime and 3 prime, joining to the ribose molecules. So the architecture of that nucleic acid is a polymer that includes a phosphodiester backbone linked by phosphate esters-- that's 1 phosphate ester; that's the other one-- on two of the OHs of the ribose sugar. When this is DNA, there's no OH group on that carbon site. That would be the 2-prime site. You can see-- you can pick straight out that this is DNA. The sequence is then defined by the identity of the base here. So this would be guanine, adenine, thymine on that sequence. Now, by convention, the sequences are written in the 5 prime to 3 prime direction. So if I look at that, I would be able to name it as an A, G, T sequence, because we always write the sequences 5 prime to 3 prime. We can remember that later on because we actually also build sequences 5 prime to 3 prime. So there are some conventions in biology and biochemistry. You want to remember that by convention, we write peptides N terminal to C terminal. But we also build them N to C. So that's why the convention is strong, and it's good to remember, because it can get you out of a lot of trouble if you remember those things. Now, when we are building a DNA polymer, we grow that sequence. You'll see the biochemistry for all of that polymerization in the next class. It's amazingly cool how the entire contents of a cell, the DNA, can be replicated in amazing time frames, but all through growing those chains from 5 prime to 3 prime. So when we add another building block on, we remove a molecule of water.
So that's a condensation reaction. And we form a new phosphodiester bond. So in the biosynthesis of DNA, you keep on adding new nucleotides to the 3 prime end. There's a chemical reason for that. When we build DNA, we don't just cram the two groups together. We, rather, come in with a triphosphate and use that activated triphosphate as the new building block. And you kick out pyrophosphate. And you'll see that when we talk about DNA synthesis. But what I want you to remember here is this is another condensation reaction. We talked about them when making peptides. We talked about them when making carbohydrate polymers. And now we're seeing, once again, a condensation reaction to make a nucleic acid polymer. Now, the last term that's kind of worth mentioning is the term nucleic acid. What's that about? I don't see any carboxylic acids. It turns out the polymers of DNA are very acidic because the OH group on those phosphodiester backbones is very acidic. So you give up H plus. And this is in its most stable form as O minus. So when DNA was first isolated, it was isolated from white blood cells by isolating the nucleus. And it was found that it was a very acidic material packed into the nucleus. That's why it was called nucleic acid, acids in the nucleus. Before people even understood anything about the composition, it garnered that name, nucleic acid. So when we talk about polymers of nucleotides, we call them nucleic acids. Then with respect to writing our sequences, we could write them in this way. So pdGATC. That would be that structure. What do all the little extra Ps and Ds stand for? The P stands for whether there's a phosphate at this end. The D stands for whether it's a deoxy sugar as a building block. Going all the way to the other end, there's no little p at the other end. So it means that OH is free. Does everyone understand that shorthand writing? There's another way I could know this was DNA without needing to put deoxy on each of the building blocks.
Does anyone know how I know immediately it's a stretch of DNA? Yeah? AUDIENCE: No uracil? BARBARA IMPERIALI: Yeah, there's no uracil, and there's thymine instead. So in principle, as long as there's a T in there, you know it's DNA. As long as there's a U in there, you know it's RNA. Now, let's talk about the noncovalent structure, because I really feel that that's the most exciting part of this entire endeavor, because the covalent structure really doesn't allow us to understand how DNA stores information for building proteins. It doesn't tell us that much about it. It looks like a cool polymer, but we can't really understand the details without looking at the noncovalent structure. So there was one key piece of information, and it's called Chargaff's data. And this piece of scientific information ran around the scientific community in the early '50s because it seemed incredibly important. What Chargaff did was collect all kinds of organisms, and then their nuclei-- their DNA-- and then measure the ratio between the purines and the pyrimidines. He measured the ratio of the large ones and the small ones of the nucleobases. So how many of these relative to how many of those? And what he found by looking all across organisms from all domains of life is that there was a one to one ratio of purine to pyrimidine. So that became very interesting, because what it suggested was that in some way, the noncovalent structure of nucleic acids had some correlation between the number of the purines and the number of the pyrimidines. And what you can imagine is it sounds like we're always pairing a small one with a large one by looking at that number. So this is really, really important because it's like the light bulb that went on with respect to understanding the structure of double-stranded DNA. So despite all kinds of variations, some organisms have a lot more GCs. Some have more ATs. But no matter what, the ratio is always one to one.
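The one-to-one ratio described here falls straight out of complementary pairing, and you can see that with a short toy script (not from the lecture; the function name is just illustrative): whatever the base composition of one strand, counting purines and pyrimidines over both strands of the duplex always gives 1:1.

```python
# Toy illustration of Chargaff's ratio: in double-stranded DNA,
# every purine (A or G) on one strand pairs with a pyrimidine
# (T or C) on the other, so the duplex ratio is always 1:1.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
PURINES, PYRIMIDINES = set("AG"), set("TC")

def purine_pyrimidine_ratio(top_strand):
    """Count purines and pyrimidines over both strands of a duplex."""
    bottom_strand = "".join(COMPLEMENT[b] for b in top_strand)
    duplex = top_strand + bottom_strand
    purines = sum(b in PURINES for b in duplex)
    pyrimidines = sum(b in PYRIMIDINES for b in duplex)
    return purines / pyrimidines

# Very different base compositions, same duplex ratio.
print(purine_pyrimidine_ratio("GATTACA"))   # 1.0
print(purine_pyrimidine_ratio("GGGGGCAT"))  # 1.0
```

The GC versus AT fraction varies from organism to organism, as noted above, but the purine-to-pyrimidine total cannot.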
And this ultimately led to understanding the noncovalent structure of double-stranded DNA because it provided clues to how there could be some way that information was coded, but then could be replicated. Now, the next thing that became the clue to the structure of double-stranded DNA came from a very talented researcher, Rosalind Franklin, who sadly died way before her time of ovarian cancer, really, in large part because she spent a lot of time near X-ray beams. So that would have caused mutations to her DNA. And she developed a way to make fibrils of DNA that were ordered enough to collect X-ray diffraction data. And that diffraction data actually gave a clue to some of the dimensions of the double-stranded DNA structure. And it actually was the clue that told the spacing between the strands of DNA. So it really was a piece of information that you simply couldn't do without. With Chargaff's data and with this, what was called Photograph 51, it really gave you the clue. And it was really during those years that Watson and Crick were desperately model building to try to understand the noncovalent structure of DNA. And once they had those two pieces of information, they could actually put together hand-built models. This looks kind of clunky, but I know the room they took this photo in from my years at Caltech. In fact, I can recognize the room. They built not just little tiny molecular models, but big molecular models so they could make measurements to say, the diffraction data told me this was so many nanometers apart. And they were able to piece together the structure of double-stranded DNA. But I still haven't shown you how those two strands come together. It's really intriguing, because at that very same time, Linus Pauling, who had done very well with the structure of the alpha helix in proteins, was also trying to figure out the structure of DNA.
But he came up with a sort of crazy structure where he thought that it was a triple-stranded structure where the bases actually stuck out, and somehow, this triple-stranded structure coded for replication of DNA. Now, there's a ton of things that are really awful about this structure. First of all, it's triple-stranded. But the other terrible thing is there are so many phosphates in the backbone there would have been massive electrostatic repulsion. Those sequences would want to blow themselves apart because you can't cram that much negative charge all in one place. But it was really an intriguing sociological phenomenon of the time. Pauling was a major pacifist, and he was really, really active in nuclear disarmament. And they said that his mind just wasn't on some of this stuff and that this model came out of him really worrying about other things and not focusing on the DNA structure. So let's try to explain Chargaff's data by looking at the nucleobases and thinking about how they might come together. So here I show you the structures of the four nucleobases in DNA. Wherever I have an R, you can assume that's a ribose that is part of the phosphodiester backbone. What we want to understand is, how do the nucleobases come together to form some kind of pair that could be useful for programming their resynthesis? So I've drawn them all here, but it's not quite intuitive. I need to do a little bit of flipping around to line things up better. And the other thing I need to do is get things at the right angles so you can start seeing how those bases might come together, because Chargaff's data dictates that you have a purine and a pyrimidine, purine pyrimidine. You have pairing between the nucleobases in your double-stranded DNA in a structure that looks more like this. And in each case, you've paired a purine and a pyrimidine. So what I want you to do is take a look. I've shown you now where donors and acceptors are.
You can go back and do this for all the nucleobases. But I'm going to do this for you right now. Showing you the donors and acceptors of hydrogen bonds within those structures, I've lined them up so they look straight at each other, so you can tell that there is a complementarity between a purine and a pyrimidine that makes very nice hydrogen bonding, which is the important noncovalent force here. Between G and C, I can set up three hydrogen bonds. Between A and T, I can only set up two hydrogen bonds. So one purine is complementary to one of the pyrimidines. The other purine is complementary to the other pyrimidine. And then we can draw those hydrogen bonds in place. That totally explains the measurement from the Franklin data of the distance, the width of the double-stranded helix, because it's identical for both of those base pair options. And that gives you the structure that forms the noncovalent structure of DNA, which is a series of interactions where the solid line is the phosphodiester backbone, but sticking out like steps on a spiral staircase are the bases, where each base is complementary to a specific additional base. So it predicts the Chargaff ratio, and it also predicts the distances. Now, within all the model building, it became quite clear that the noncovalent structure of DNA was afforded by antiparallel strands, where one strand went in one direction, 5 prime to 3 prime, and the other strand went in the opposite direction, 5 prime to 3 prime. When we start replicating DNA, we're going to see that that's pretty convenient. But thermodynamically, it is also the favored orientation. So let's just look at the orientation, where you would draw one strand of DNA 5 prime to 3 prime. Now I've taken this all down to cartoon level. These are the phosphodiesters, the riboses, the 3 prime end and the 5 prime end, and the bases that come off at the 1 prime carbon.
And then when you pair it with another strand, one strand goes in one direction, 5 prime to 3 prime. The other strand goes in the other direction, 5 prime to 3 prime. And when I was asked this question a few years ago, I couldn't really explain it very well. I just said it had to be because it always has been. But what's really cool is people have been able to solve the crystal structure of a parallel pair of DNA strands. So this is canonical DNA, the beautiful antiparallel structure. And it's very regular, very, very even. It turns out, though, when you try to pair the two strands in a parallel orientation, they're very uncomfortable, and it's much less stable. So the antiparallel orientation is very important for the thermodynamic stability and the optimum hydrogen bonding interaction of all those bases that are pairing. So it's actually what nature favors because it is more stable. Any questions? And this is on your slides. You can see just how regular and organized the antiparallel DNA looks, whereas the parallel one really does not afford you good hydrogen bonding interactions at all. So what we've done now is understand the structure of DNA, the noncovalent and covalent structure of DNA. We understand it's antiparallel. What we'll do in the next class is show how you can peel apart those antiparallel structures to make unpaired structures. And you can use each of them as the template for the synthesis of a new strand of DNA. So you can get two daughter double strands from a single parent double strand. And that all comes from understanding the structure. Now, what I want to do is move you just very briefly to the structure of RNA and comparing the DNA and RNA structures, because there are some differences. So let's just work through what the differences are. I have this written down. And the differences are very important for the functional properties.
So DNA, RNA. First of all, obviously, deoxyribose, ribose. And you may go, why, why, why is nature so complicated? Why do I have this extra factoid to remember about RNA versus DNA? And it's really amazing that the difference between having that hydroxyl on the 2 prime position versus not having it makes enormous differences to the stability of the polymer. RNAs break down very, very readily. DNAs are stable for the lifetime of a cell, staying perfect in the nucleus or the mitochondria. They stay intact. So there's a stability difference between the two sugars. Because DNA has to be the place where you store your genetic material, it's got to stay good, whereas RNA is the message that you make transiently to program a protein being made, and then you want to get rid of it. So we need the differences in stability that originate from that small feature. ATGC-- there's the difference-- AUGC in the bases. The most common DNA is double-stranded DNA, whereas RNA forms various structures, much more irregular structures than the DNA, probably in part because the ribose is substituted differently. So that continuous strand of double-stranded material is not quite so stable in RNA. We find DNA principally as double-stranded DNA. But the RNA we find as transfer RNA, messenger RNA, ribosomal RNA-- it does go on forever-- short interfering RNA. So various RNAs are used for a lot of purposes, whereas DNA principally stays as the double-stranded DNA. There's a little double-stranded RNA, but it is a precursor to some of these other forms of RNA. So this slide just summarizes some of that for you, the differences comparing DNA and RNA. And so what we'll see later is how RNA lends itself to these interesting structures where you still have some base pairing, but you have a lot of loops and turns and diversity of structure.
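The alphabet difference in that DNA-versus-RNA comparison is small enough to capture in a couple of lines. Here is a minimal sketch (the function name is mine, not from the lecture) showing the shared bases and the T-for-U swap when a DNA sequence is rewritten in the RNA alphabet:

```python
# Minimal sketch of the base-alphabet difference between DNA and RNA:
# three bases are shared, and thymine (DNA) is swapped for uracil (RNA).
DNA_BASES = set("ATGC")
RNA_BASES = set("AUGC")

def dna_to_rna(dna_seq):
    """Rewrite a DNA sequence in the RNA alphabet (T -> U)."""
    assert set(dna_seq) <= DNA_BASES, "not a valid DNA sequence"
    return dna_seq.replace("T", "U")

print(sorted(DNA_BASES & RNA_BASES))  # ['A', 'C', 'G'] are common to both
print(dna_to_rna("GATTACA"))          # GAUUACA
```

This also mirrors the rule from earlier in the lecture: see a T, it's DNA; see a U, it's RNA.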
And that's really kind of the origin of this RNA world idea, where RNA structures could have a variety of forms that might contribute to different functions beyond just serving as a message, as a place to carry the DNA message. So there are a lot of things that one can understand about DNA by knowing its hydrogen bonding patterns. So can you guys guess which of these strands would have a complementary strand and be the most stable double-stranded DNA? So this would be one strand. You could draw for each of them its complementary strand. Can you guess the clues to figuring out which would have the most stable organization of the antiparallel double-stranded DNA? What would I be looking for? Yeah? AUDIENCE: More Gs and Cs [INAUDIBLE] BARBARA IMPERIALI: So number one, higher GC content, because Gs and Cs form three hydrogen bonds. As and Ts only form two. And what's the other determinant, just looking at those structures? Yeah? AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: Yeah, you are doing-- no. It's actually even more silly. It's simpler than that. AUDIENCE: Length? BARBARA IMPERIALI: Length. So all you do is you go along and say, I can make three hydrogen bonds, two, three, two, two, three, two, two, two. So you truly just count hydrogen bonds in its partner sequence, and you can guess which is going to be the more stable because it has the most hydrogen bonds. So we might ask you that. Which one will come apart? Now, the intriguing thing about DNA is you can peel it. You can heat it, and it'll come apart. But it doesn't denature the way proteins do. If you just cool it down, it comes back together. So another feature of DNA is that you can heat, denature, and then reanneal exactly how it was in the first place. It doesn't denature to something that's not very useful. And now the question: of this top strand here, which of these is the complementary strand?
Frankly, the best way to do it is sketch out the complementary strand. You can see it kind of upside down because it's really hard to draw things 5 prime to 3 prime when you're also trying to figure out base pairing. So draw it upside down. Make sure you know the 5 prime and the 3 prime end. And then you can guess the right answer for these types of questions about complementary strands. Now, one last question, the stability of double-stranded DNA. I've made a whole big deal about hydrogen bonding. That's what holds it together. What other forces could be at play in double-stranded DNA that might contribute to its stability? Any thoughts? What else? Well, it certainly isn't electrostatic attraction, because the predominant charge is negative-- and you've probably got metal ions there, kind of neutralizing that charge. What would be the other force, and how would I describe it? It's a tricky one. So we've got these bases, and they're pretty hydrophobic. They're planes. They have electron density on both sides. So it turns out there is some stability gained from the packing of the steps of DNA, each base pair stacking with the next. So there are hydrophobic forces. And researchers at Scripps have actually probed this paradigm by making unnatural DNA bases that don't have hydrogen bonding partnerships, but just provide a flat hydrophobic entity with the right size that can slip into DNA sequences and make stable pairings-- not really base pairs anymore, but still stable in that polymeric structure. Are people understanding and following that? So finally, when we look at the structure of DNA, there are some trenches where things can bind-- proteins can bind-- and we talk about the major groove and the minor groove. But I will talk about those later on when we talk about transcription factors.
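The hydrogen-bond counting trick from the stability question above is easy to mechanize. This rough sketch (my own helper names; real duplex stability also depends on the stacking forces just mentioned, length, and salt) assigns three bonds per G-C pair and two per A-T pair, and also writes out a complementary strand 5 prime to 3 prime:

```python
# Rough stability comparison by hydrogen-bond counting:
# each G-C base pair contributes three hydrogen bonds, each A-T pair two.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
BONDS_PER_PAIR = {"G": 3, "C": 3, "A": 2, "T": 2}

def reverse_complement(seq):
    """Complementary strand, written 5 prime to 3 prime by convention."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def duplex_hydrogen_bonds(seq):
    """Total hydrogen bonds when seq pairs with its complement."""
    return sum(BONDS_PER_PAIR[b] for b in seq)

print(reverse_complement("GATC"))            # GATC (its own reverse complement)
print(duplex_hydrogen_bonds("ATATATAT"))     # 16: all A-T pairs
print(duplex_hydrogen_bonds("GCGCGCGC"))     # 24: all G-C pairs, more stable
```

For equal lengths, more GC means more hydrogen bonds, and a longer duplex beats a shorter one, which is exactly the two-part answer worked out in the question above.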
Now, in really triple-fast time-- and I'll put this on the website-- I just want to tell you that there's tremendous interest in using the building blocks of DNA for information storage and computing. So if you look up DNA-based computing on Wikipedia, you'll learn a whole lot about it. Because what's so exciting about it is it's an organized nanoscale material that can be programmed to base pair and form certain structures. So in that range of sizes, there's been a lot of interest in DNA as a material for information storage, not for your genetic material, but for plain old information storage. So people have learned how to build structures of DNA where they can construct these sort of cruciform structures by base pairing. They can make the arms of these structures a little bit extended. So you could start joining those things together to make very defined three-dimensional entities. They went kind of nuts doing this sort of stuff because you can build tetrahedra and other sorts of shapes and sizes, all by strands that base pair, that are about 10 base pairs long, that are stable, and only complement certain other strands. So you could literally build up-- they often called it DNA origami-- macroscopic structures just by the assembly of strands of DNA that will ultimately fold to form the best complementary DNA to form the structures. And they've also made-- as I said, they went completely nuts-- smiley faces and stars and stripes and so on. But the most valuable thing-- as I said, you can read more about this-- is to use DNA as logic gates to define AND, OR, or NOT, the three basic options, and actually use them to program certain puzzles where the DNA will spit out the answer to a particular puzzle through a logic diagram.
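To give a flavor of how complementarity can act like logic, here is a deliberately simplified toy (not a real strand-displacement design, and all the sequences are made up): an AND "gate" strand reports true only when both input strands are present to pair with its two halves.

```python
# Toy sketch of sequence-based logic: an AND gate strand is only
# "satisfied" when both halves are matched by complementary inputs.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def binds(input_strand, gate_region):
    """True if the input is exactly the reverse complement of the region."""
    return input_strand == "".join(COMPLEMENT[b] for b in reversed(gate_region))

def and_gate(gate, input1, input2):
    half = len(gate) // 2
    return binds(input1, gate[:half]) and binds(input2, gate[half:])

gate = "ATGCCGTA"                                       # hypothetical 8-mer
in1 = "".join(COMPLEMENT[b] for b in reversed(gate[:4]))
in2 = "".join(COMPLEMENT[b] for b in reversed(gate[4:]))
print(and_gate(gate, in1, in2))      # True: both inputs present
print(and_gate(gate, in1, "AAAA"))   # False: second input missing
```

Real DNA computing schemes implement gates with strand displacement and readout by fluorescence, but the underlying idea is the same: only the programmed complements trigger the output.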
So those of you who are interested in computing and these kinds of logic puzzles may want to read a little bit more, because DNA is such a reliable noncovalent structure, where those base pairs are incredibly reliable, that you can start envisioning not just building double-stranded DNA, but building all kinds of architectures or programming things with the sequence of DNA. And that's it for today. And that's the end of the biochemistry section.
MIT_7016_Introductory_Biology_Fall_2018
21_Cell_Signaling_2_Examples.txt
PROFESSOR: OK, here we go. Couple of things. Sorry, I forgot to bring candy, but it'll be on sale next week. So we can probably bring it next week. But walking over, I thought, boy, those of you who are here deserve some candy. But I've just tried to sprinkle in a few interesting slides for your benefit. I saw this on the MIT news of the day, and I thought that was really cool. Who would have thought to turn the giant dome into a Halloween pumpkin? And I was also jealous this morning when my husband got ready to go work in the emergency room and he put on his Star Trek outfit. So I was like, oh, I didn't even have a-- because I don't usually get to actually have a class on the day of Halloween. So he headed off. I think people are unfortunately going to expect him to be able to really fix things very readily in the emergency room today because he's going to have all those extra powers that he doesn't usually have. But anyway, so actually it's kind of a good day for a lecture, Halloween, because we're going to talk about the fight or flight response, which is a great paradigm for cellular signaling. So you're going to see how signaling really works in action. Because what one has to think about with respect to cellular signaling is that it's dynamic and transient. And when we look at the molecular details of the switches that enable dynamics and transient behavior in cells, you're going to see how perfectly adapted they are for these types of responses that have to be carried out in cells or in organs in order to respond to a particular signal rapidly and with a definitive time frame and then have that signal then stop once the time frame has passed. 
So I really want to stress to you the characteristics of signaling that can emerge just by knowing about two particular cellular switches-- knowing the molecular details of those switches-- so that when we look at them in action in a couple of cellular signaling pathways, we'll see how well adapted those signals are. And the great thing about biology is that once you learn a few very specific things, those often get reused in nature. So the cellular switches that I'm going to describe to you are used again and again in different formats to create different signaling pathways. It's not identical in every pathway, and every cell in the body has different nuances. But there are general paradigms that we can learn about and understand, all right? So the key feature is then to think about the molecular basis of these switches. So last time, I was talking to you about cellular signaling. And remember, there's always a signal, a response, and an output. So something happens. It's a signal. It's usually a molecule of some kind outside the cell or able to diffuse into the cell. As a function of that signal, there's a response. So this is molecular. The response obviously is biochemical. And the output is biological. So I want you to sort of think about these as we look at pathways. What's really my output at the end of a particular signaling pathway? What was my input? How did it get to the cell? How does it have a dramatic effect on the cell as a whole? How does the timing and action of this effect occur so rapidly? So we talked last time about different types of signals, those that mostly occur in the cytoplasm of the cell, with signals that are able to diffuse across the plasma membrane and bind to an intracellular receptor and then cause an action. But really, the most important ones for today are going to be the types of receptors that span the membrane.
And the reason why these are much more significant, they're more in number, they're more predominant, is if you have a membrane spanning receptor, you have the opportunity to use signals of very, very different types. Small polar molecules, small proteins, lipids, amino acids, carbohydrates. You've got a much larger range of signals than you could possibly have if you restricted yourselves to the type of signals that can cross the cell membrane. Those are very limited to non-polar small molecules that can get across the membrane. The more dominant types of signals are going to be the ones that are outside the cell. They arrive at a cell surface. They bind to a receptor that is transmembrane and transduce a signal from the outside to the inside. So that's an important term here, the process of transducing external information to internal information. So when we started the course, we were really thinking about laying down the cellular membrane as an encased environment where things could occur at high concentrations. You could set up systems that were functional within membranes. But in doing that within a membrane-encased area, you've built a formidable barrier around the cell. So the types of receptors that we'll talk about are those that have adapted to take this external information into the cell and then have a cellular consequence occur. So I'm going to talk to you about two specific types of cellular switches, and they're going to be intracellular. And we're going to be referring back to these because they're going to be important as we dissect a signaling pathway. So the first type-- put aside for a moment the actual process of a signal binding and the response, both biochemical and biological. Let's look first at the molecular detail of these switches and see how they are adapted to their function. And the first type of cellular switch are what are known as G proteins. The G is because they bind guanine nucleotides.
So that's why they're called G proteins. They're small proteins that bind GDP or GTP. So this is the guanine nucleotide that has either two phosphates, guanosine diphosphate, or three, guanosine triphosphate. So there may or may not be a third phosphate here. And the G proteins bind them. And the dynamics of the situation are that when the G proteins are bound to GDP, they are inactive. The switch is off. There's an aspect of the structure, and it's very dependent on how many phosphates there are in this structure. But it's when it's bound to the diphosphate variant of the nucleotide, then it's an off switch. And when it's bound to GTP, it's an on switch and it's active. So this is the molecular basis of one of the switches. It relies on the shape, the conformational dynamics of these small G proteins. And that shape is quite different if it's bound to guanosine diphosphate or triphosphate. And we'll take a look in a moment at how the structure, the shape, changes. The shape shifts upon binding the triphosphate analog. So this is a dynamic interconversion. When the GTP is hydrolyzed, you go back to the GDP-bound state, the off state. And there are a variety of proteins that actually help these processes, which we won't talk about in any detail. The main thing that you want to remember is that when the G proteins are bound to GTP, they're in an on state. GDP, they're in an off state. And that's shown in this cartoon. And here I should be able to, if all goes according to plan, show you the structure of a GTP analog bound to a G protein. So let's see-- this little guy is twirling around. He's settled down a little bit. The key thing you want to look at is where it's magenta and cyan, the structure of the GDP and GTP-bound G protein are very, very similar. But big changes happen in the yellow, which is the GDP-bound form, and the red, which is the GTP-bound form. Let me go back again so you can see that one more time.
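The GDP-off/GTP-on logic just described can be captured as a toy two-state switch. This is a minimal Python sketch for illustration only, not anything from the course materials, and the class and method names are invented:

```python
# Toy model of the G-protein switch: "GDP" = off, "GTP" = on.
# Nucleotide exchange flips it on; hydrolysis flips it back off.

class GProtein:
    def __init__(self):
        self.bound = "GDP"          # resting state: off

    @property
    def active(self):
        # on only when the third phosphate is there to grab
        return self.bound == "GTP"

    def exchange(self):             # GDP -> GTP (activation)
        self.bound = "GTP"

    def hydrolyze(self):            # GTP -> GDP (shut-off)
        self.bound = "GDP"

g = GProtein()
assert not g.active     # off at rest
g.exchange()
assert g.active         # on after binding GTP
g.hydrolyze()
assert not g.active     # hydrolysis resets the switch
```

The point of the model is just that activity is a pure function of which nucleotide is bound, which is exactly the dynamic interconversion in the cartoon.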
So in the GTP-bound form, a portion of the protein swings around and binds to that third phosphate on the GTP and forms a different shape to the structure. And that's a dynamic change that's responsible for activation. When it's just GDP, that's shorter. There's nothing for that red arm to bind. And so it's much more of a floppy structure. What I want you to notice in that little yellow portion in the GDP-bound form, you actually don't really see where the rest of the protein is. This is because this is a crystal structure. And in the crystal structure, when things are very mobile, you can't even see electron density. It's as if the part of the protein isn't there because it is so dynamic. It's only in the GTP-bound form it forms this tight, compact structure that represents a switch that has been turned on. Does that make sense to everybody? Is everyone good with that? So just that change, that extra phosphate reaching further to the protein and making an interaction with the protein itself, makes the difference in the dynamics of the G protein and the activity here. Now, there are different types of G proteins. And you'll see both types reflected in this lecture. There are small G proteins, and they are monomeric. And then there are slightly more complicated G proteins that are trimeric. They have a heterotrimeric, structure. So they have quaternary structure where you have three different proteins as part of the complex. So the other ones are trimeric. And the G protein actually comprises three subunits where one of them is the important one that binds GDP or GTP. But they are a little bit more complicated. And in the first example when I talk about a particular response to adrenaline, we're going to see the trimeric G proteins. And because they're trimeric, that means there's three subunits. And the convention is that they get the Greek lettering system. So they are the alpha, beta, and gamma subunits. They've each got their name. 
They're three independent polypeptide chains. And it's actually the alpha subunit that binds GDP or GTP. So that's the formulation of one of the types of switches that we're going to see when we start to look at a pathway. What do you need to remember here, you need to focus on the fact that in one state, the protein is in an off state. It doesn't kick off a signaling pathway. But in the other state, the protein is a different shape because of binding a loop-- let's just make this a little longer-- to that phosphate that's negatively charged to the protein. So that's an on state. And they are very definitive types of structures. Now, both of these proteins are intracellular, which means they're part of the response once a signal reaches a cell. They are things that change. And they're what the signal gets transduced to, the G proteins. Now, there's another type of intracellular switch which is used very, very frequently in nature. And in fact, it crosses, permeates, through all kinds of cellular processes. And this is phosphorylation. So here are the G proteins, which is one. And the other one is phosphorylation. I don't like that chalk. OK, now remember, we talked about reactions of proteins that alter their behavior, their properties, their dynamics. So protein phosphorylation, remember, is a post-translational modification, a PTM. It's something that happens to a protein after it has been translated and folded. And the PTMs in a phosphorylation involve amino acids that have OH groups. So that's the structure of tyrosine where the squiggles represent the rest of the protein. And on phosphorylation, we append a phosphate group-- whoops, a minus, a minus-- to the oxygen on tyrosine on the side chain. So actually, it looks pretty different. It behaves pretty differently. There are two other residues in eukaryotic cells that are commonly phosphorylated. There are the other two that include OH groups. 
So they are serine and the third one-- as I run out of space, but you get the general message-- and threonine. So these are the three amino acids that commonly get a phosphate group attached. And phosphorylation changes their properties. The enzymes that catalyze this change are called kinases-- the root of the word, from the Greek kinein, is "to move." And in contrast to the G proteins that use GTP, the kinases most commonly use ATP to give up a phosphate to phosphorylate the protein. So another substrate in this reaction is ATP. Now, when we look at this structure, there's two or three things I really want to call your attention to. If we're dealing with this kind of switch, we can go back to the off state by chopping up the GTP and making it GDP again. So that's how to turn the light back off. In the case of the kinases, we've got to do something to go back from this state to the non-modified state to turn the light switch off. So for every transformation in the cell that involves a kinase, there is a corresponding set of enzymes that reverse the reaction, called phosphatases. It takes that group off again. So let me write that down here. Now, phosphate is used a lot. So this is a phosphoprotein phosphatase. So the kinase puts the phosphate on. The phosphoprotein phosphatase takes the phosphate off. There are three types of amino acids that get most commonly modified in our cells-- tyrosine, serine, and threonine. And one of the ones that forms an important part of an extracellular signaling mechanism are the tyrosine kinases. And we'll delve into them in a little bit, not the section I'm going to cover now, but later. Because kinases come in a lot of different flavors. The common flavors are whether you modify tyrosine or threonine/serine, because these are more similar to each other, and this guy is different. But we'll get to that later. So in the cell, we have about 20,000 protein-coding genes.
All right, 515 of those are kinases. That's a pretty big chunk of the genome you've got to accept. So in excess of 500 kinases. Specifically, protein kinases, the ones that modify. So there's a big hunk of the genome dedicated to this kind of activity. And there's a dynamic because there's also about 100 phosphatases. They are a little bit more promiscuous. You don't need so many of them. But a large sort of component of the genome is responsible for phosphorylating proteins and dephosphorylating phosphoproteins. And that part of the genome has its own special name. And it is called the kinome. I hope that's not too small in there. So if we're describing all the enzymes in the genome that catalyze phosphorylation, we would call it the kinome as a collective set. Because it's the set of kinases. And you'll hear that term quite commonly. And the kinome is really important and represents major, major therapeutic targets. Because it's when kinases go wrong that we have physiological defects. So let's just go back to this. You can see we've got the kinase and the phosphatase. The donor for phosphorylation is ATP. And it is the gamma phosphate that's transferred to the protein to switch it from the off state to the on state. It is a post-translational modification, meaning it occurs on a protein after the protein has been fully translated. There are a few cotranslational PTMs. That seems like a bit of an oxymoron, cotranslational modifications. But phosphorylation isn't one of them. Glycosylation is. And we won't go into those in any detail even though it breaks my heart not to go into those. But OK, all right, so now I want to first of all introduce you to a paradigm for signaling as opposed to really go into what's happening. And so one of the first paradigms is a situation where you have a cell. In the plasma membrane of that cell is a receptor. And it gets hit with a signal. 
So a signaling paradigm is that a molecule from outside the cell binds to something that's transmembrane. And then you start getting signal transduction through a pathway. So any extracellular signal could be in play in this process. And then there's a sequence of events. Upon signal binding, there's a sequence of changes that ends you up with a final output. And generally, signaling pathways go through a number of steps where there is the opportunity for the amplification of a signal. So I talked to you last time about some of the hallmarks of signaling. Specificity defines how accurately that extracellular signal binds to the receptor. But amplification really refers to how signals get bigger and bigger through certain steps in a signaling pathway in order to have a big impact in the cell, not just a single event going through a single pathway one molecule at a time. And we'll see that in the example that I show you. And then oftentimes when we look at signaling pathways, we care a great deal about what's the first response upon the signal hitting the outside of the cell. So in many signaling pathways, this could be a protein. The receptor could be a protein that is bound to a G protein. And that would be the first responder through the pathway that really triggers off the cellular events. OK, and now, what I want to talk about is the-- I always get this wrong. I always thought it was flight or fright, but it's not. The response is actually fight or flight. So let's set this up to understand why this is such a great manifestation of a cellular signaling response. Because it includes a lot of the hallmarks that are really characteristic of the cellular response. So this response involves a cellular receptor that is called a G protein coupled receptor. We saw a little bit about them last time. They are always called GPCRs for short.
And what that term means is that it's a receptor that is linked in some way to a G protein. So it could be coupled to a monomeric or a trimeric G protein. So don't confuse the two. One is the receptor. It's transmembrane. It's responsible for receiving signals and transducing them. The first responder is the G protein that changes from a GDP-bound state to a GTP-bound state. And in the one that we're going to talk about, we're going to deal with a trimeric G protein in the fight or flight response. And what you see here is a cartoon of the players that are involved. So remember the GPCRs? We talked about them briefly last time. They have seven transmembrane helices. They span the membrane. They have the N terminus outside. 1, 2, 3, 4, 5, 6, 7. N terminus out, C terminus in. And each of these is a transmembrane helix. And this would be outside the cell. This would be in the cytoplasm. OK, and you can actually often look at a transmembrane protein and know its behavior. Because the width of these transmembrane helices often comes in at approximately 40 angstroms, which is the span of a membrane. You can sort of say, that looks like a transmembrane helix because it's exactly that dimension to cross a membrane. And the GPCRs in this case would bind to a ligand outside the cell and have a response inside the cell. So those seven transmembrane helices are responding to ligand binding. So let's take a look at this picture because it's almost impossible for me to get it onto the screen. So the GPCR binds to a trimeric G protein. Remember, I talked to you about the two different types. The trimeric one has an alpha, beta, and gamma subunit quaternary structure. And they're shown here in different colors. The green is the alpha subunit. The red is the gamma subunit. And the yellow is the beta subunit. So what happens, when the ligand binds the G protein coupled receptor, is that there is a reorganization of those seven transmembrane helices. Last time, I identified them to you.
When you look at a couple of these, they're actually fairly large loops that grab onto your ligand. And then that will translate conformational information through the membrane to the other side where the G proteins are sitting. And in this response, what happens is that upon binding the ligand, the alpha subunit leaves the team and goes from its GDP-bound state to the GTP-bound state. So it literally changes its state and changes its mode of association within the cell upon that action. So you can see how nicely we have transduced the ligand binding out here to a pretty discrete cellular event, turning on the switch of the G protein alpha subunit. Is everyone following me there? I know it sort of looks complicated to start with. But you'll see it in action. OK, so here are the cellular components of the response. So basically, this is the kind of response where if you get scared or you feel you're in harm's way, you will trigger this response in order to generate a lot of ATP so that you can respond-- run away, hide, do something very active in order to rapidly respond to a threat of some kind. And this response is triggered by a small molecule. In this case, it's epinephrine or adrenaline. Different names on different sides of the Atlantic, but you all know what adrenaline is. And here's the structure of epinephrine or adrenaline. And it is the signal for the fight or flight response because it's the small molecule that binds to the extracellular surface of the receptor, changes its shape so that things can happen intracellularly. And it's just one small molecule. Normally it would be charged. It doesn't diffuse across the membrane. So it's stuck being on the outside of the membrane, OK? So this is the signal that triggers the response. So if you have to respond to a threat of some kind, you can't stop, sort of go to the fridge, get a big snack, eat it, digest all your food, and hope you're going to get energy quickly.
What you've got to do is have a response where you can generate energy from your glycogen stores that are in the liver. So there is a signal that comes from the adrenal glands, which is the release of adrenaline that goes to the cell surface receptors to trigger the response. And what kind of signal would this be? Would it be paracrine, autocrine, exocrine? What kind of signal would that be? Sorry-- endocrine, paracrine, juxtacrine. Do you remember last time? Yeah-- AUDIENCE: Endocrine. PROFESSOR: Endocrine, so it's a response that comes from the adrenal glands, atop the kidneys, and goes to the liver. So it's going, it's traveling. Autocrine is self. Paracrine is near. Juxtacrine is cell contact. But any of these hormonal responses are pretty commonly endocrine responses. So what happens once the signal binds? So the specificity in this situation is that epinephrine will bind exclusively to this GPCR-- now shown in pink in very stylized form, but you can count those seven transmembrane domains-- with high specificity. Another signal, another small molecule that looks different, won't bind, because we have to have specificity for the signal. Upon that binding event, it will trigger a change within the cell. And that change within the cell is that the alpha subunit-- here you see alpha subunit, beta, gamma. They're all shown in green. The alpha subunit of the G protein, remember, I told you it was a trimeric G protein where the alpha subunit is the key player. The alpha subunit leaves the team and it exchanges its GDP-- that's its resting state, nothing's happening-- for GTP, which turns it on. So that's the first response. The G protein is responding to the signal from the outside of the cell through the auspices of the G protein coupled receptor to give a change within the cell that's a discrete change. OK, following me so far?
So now what we need to do is trigger the remaining biochemical events that are going to get us out of this sticky situation where we need to produce a lot of ATP. So it turns out that the GTP-bound form of the alpha subunit can then bind to another enzyme. And that enzyme is adenylate cyclase. So we've bound, we've changed the GDP to GTP, there's a response, and we activate the enzyme known as AC, which you look up here, it's called adenylate cyclase. So this is a messenger within the cell that's now being generated as a response to the signal coming from outside through the GPCR to the alpha subunit of the G protein, which, then, in its GTP-bound state, binds adenylate cyclase. OK, is everyone with me? And once that is bound, adenylate cyclase can do its biochemistry. And the biochemistry that adenylate cyclase does is shown down here. Here's ATP. Adenylate cyclase cyclizes ATP. You lose two of the phosphates, and you get this molecule known as cyclic AMP, which is a messenger molecule that will propagate information through the cell. So now, the adenylate cyclase is activated because it's bound to the GTP-bound form of the alpha subunit. That means we can make a bunch of cyclic AMP. And cyclic AMP is what's known as a second messenger. And that often means it's a common messenger in a lot of pathways. It shows up quite frequently within the wiring of a pathway. And it acts locally to where the pathway is being processed. So once cyclic AMP is formed by adenylate cyclase, that then activates an enzyme. It activates protein kinase A. So PKA is a kinase. It's actually a serine threonine kinase. And that then results in certain proteins within the cell becoming phosphorylated to continue propagating our effect. So we have specificity by the adrenaline binding. We have amplification somewhere in this pathway. So I told you that many pathways go through steps where you start amplifying the signal. 
Where do you think is the first stage in this set of transformations that I've described to you where you start amplifying the information? Think about what each of the events comprises. Is this one binding to one in one event, or is it one binding to one and we get multiple events? Where is the first step of amplification that's essential? Because it wouldn't do us any good if we make one molecule of ATP at the end of the day. We've got to make dozens and dozens of molecules of ATP. What's the first event that could be an amplification? Over there-- AUDIENCE: Is it when [INAUDIBLE] PROFESSOR: Yeah, when it's made, yes. So let's go through them. One binds to one, great. Once one binds to one, one of these is released. It gets converted to one of these. Once one of these is made, adenylate cyclase is an enzyme, so it can make a bunch of cyclic AMP, which can then activate a bunch of protein kinase A, which can then phosphorylate a bunch of cellular proteins. So we've got an expansion of our response. All right, so everyone, does that make sense to everyone? So amplification is really important. Feedback is also important. If you ended up needing an EpiPen because you have an allergic response, you might remember that you've got the jitters forever because there's too much firing and action going on. But in the fight or flight response, there's feedback at a certain stage that slows down this entire process. And that feedback actually comes from an enzyme that chomps up the cyclic AMP to make it inactive as a second messenger. So there's feedback in this process. OK, now, what happens within the cell to get us that biological response? This is a sort of a shortened version of what's happening. So epinephrine binds. Here we are with the alpha subunit with cyclic AMP. For each one of these, you might make 20 molecules of cyclic AMP. That would activate many, many PKAs.
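The fan-out being described is just multiplication at each catalytic step. Here is a back-of-the-envelope Python sketch; the turnover numbers are made up for illustration (only the "20 cyclic AMP per cyclase" figure echoes the lecture):

```python
# Toy amplification arithmetic for the epinephrine cascade:
# one bound receptor, but every enzymatic step is catalytic.

receptors_bound = 1
camp_per_cyclase = 20        # each active adenylate cyclase makes many cAMP
pka_per_camp = 1             # cAMP activates PKA roughly one-to-one here
substrates_per_pka = 50      # each active PKA phosphorylates many proteins

phosphorylated = (receptors_bound * camp_per_cyclase
                  * pka_per_camp * substrates_per_pka)
print(phosphorylated)        # 1000 phosphorylation events from one binding
```

With more cascade tiers (phosphorylase kinase, glycogen phosphorylase), each extra catalytic step multiplies the output again, which is why one hormone molecule can mobilize a meaningful amount of glucose.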
And then you go through a series of biochemical steps where different enzymes are activated with the overall goal, in the liver, of chewing up glycogen. OK, so glycogen is a pretty impenetrable polymer of carbohydrates. And you need several enzymes to start to break glycogen down to make glucose phosphate. And so these enzymes here-- phosphorylase kinase, glycogen phosphorylase-- all end up converting glycogen into glucose 1-phosphate. So you access your liver stores of stored carbohydrate, which is in a polymeric form, to get a lot of glucose phosphate, which is then hydrolyzed to glucose, which then hits the blood system. And then you can deliver glucose to all the cells to undergo glycolysis and make ATP. And every glucose molecule, as you know, can really churn out ATP. So what we see in this process is going through the entire dynamics of the system where we've seen specificity, amplification, and feedback. Later on, I'll describe integration to you. OK, everyone following? The series of steps that go from a molecular messenger to biochemical steps to a physiological, biological response. Now, I want to just emphasize one quick thing here. I've got a couple of slides I popped in of drug targets. About 45% of drug targets are receptors in cells. 25% of the entire set of drug targets are GPCRs. They respond to all kinds of signals-- amines, amino acids, lipids, little peptides, proteins, nucleotides-- all commonly going through the G protein coupled receptor to give you a similar phenomenon to what I've described to you. And what I think is particularly interesting-- I'm going to post all of these as slides. What I want you to see, this was quite a while back, but it just shows you so many of the trademarked drugs that target different GPCRs-- they're shown here-- and what diseases they're used to treat, and what's the generic name of those drugs. So you can see here many, many diseases have at the heart and soul of their problem different receptors.
And these are all G protein coupled receptors that are treated with small molecules that bind to the receptor and often glue it in an inactive state so it can't, then, bind to an activating signal and have all the rest of the events occur. There are very few structures of the G protein coupled receptors, but there are some of them. So many of the target G protein coupled receptors can be modeled computationally. And then you can do a lot of work where you actually model the receptor in a membrane environment and start searching for drugs through computational approaches. And I thought a lot of you might be interested in this. Because this is a really strong axis where bioinformatics, computation, and advanced physics and molecular dynamics can be brought to bear on drug discovery when you don't have perfect molecular models of your targets. OK, so now we're going to move to a different kind of signal. We're going to talk about the receptor tyrosine kinases. All right, so in the receptor tyrosine kinase responses, we can often see very similar paradigms to what I've just shown you. But there is an important distinction. Receptor tyrosine kinases-- we often call these RTKs. So that's their shorthand. So over here, I described to you different kinases, that we have kinases that modify threonine, serine, and tyrosine. The receptor tyrosine kinases are a subset of tyrosine kinases that form part of a receptor. So if you were to think about the various kinases, you would have the serine/threonine ones, and you would have the tyrosine ones. But these would be differentiated into the ones that are part of a membrane protein, the RTKs, and then the ones that are soluble in the cytoplasm. And the serine/threonine ones are most commonly soluble in the cytoplasm. I'm going to focus on the receptor tyrosine kinases because they do slightly different activities when they signal relative to the GPCRs.
So that's once again, this is a situation, another paradigm where you see a series of events. But with a number of the receptor tyrosine kinase pathways, the ultimate action ends up being in the nucleus where, as a result of an extracellular signal, you get a series of events that ends up with a protein being sent into the nucleus. And that protein may be a transcription factor that binds to a promoter region. And as a result of that, you'll get gene transcription occur. You'll transcribe a gene, make a messenger RNA that will leave the nucleus and cause action within the cell. So this is a little bit different than the other response that was mainly cytoplasmic. OK, so let's take a look at the receptor tyrosine kinases. Receptor tyrosine kinases are proteins that span the membrane but rather differently from the GPCRs. And they have a domain that's extracellular, just a single transmembrane domain. So this is out. This is in. And then they have an intracellular domain. This would be where the ligand binds. This would be how there's some kind of signal transducing. And this would be a kinase domain. OK, so how do we get the information in? When we saw the GPCRs, we saw the ability of those seven TMs to kind of reorganize and send information in the cell. With the receptor tyrosine kinases, it's kind of different. There are regions of the membrane where there are a lot of these proteins. They commonly bind small peptides and protein molecules. And when they're in their activated form, once the small protein binds, the receptor tyrosine kinase forms a dimeric structure. That is, two of these get together only upon ligand binding. They move together once there is a ligand bound. And then what happens is that the tyrosine kinase domains phosphorylate each other. And that's activation in the case of receptor tyrosine kinases. So when the small protein ligand is not around, this is a singleton. It doesn't work on itself. Once this ligand binds, interactions change. 
You get a dimeric structure where one kinase can phosphorylate what's called in trans the other kinase domains. So it's different from the GPCRs. It's got a different kind of feel to it, but it's still a dynamic transient signal. Let's take a look at this within a cell and see what kinds of responses-- and this is in response to EGF, which is epidermal growth factor. It's a cytokine that promotes cell division. So a lot happens with respect to the action of a cell, not to produce ATP, but now to respond by producing all the elements that enable cells to grow and proliferate. So the epidermal growth factor binds. You get dimerization. Upon that dimerization, the kinase domain in one structure-- in the blue one-- phosphorylates the other, and vice versa. They phosphorylate each other intermolecularly. Once that has happened, through the auspices of another protein I won't bother you with the name of, this phosphorylated intracellular RTK binds to a small G protein. In this case, it's a monomeric G protein, not one of the trimeric ones, a small one known as RAS. Once that binding event occurs, guess what? RAS gets activated. It's now binding GTP instead of GDP. And then it starts going through a sequence of events where there's a ton of controlled cellular phosphorylation events that result in moving a protein into the nucleus that helps form a transcription complex that results in cellular proliferation. Similar but different series of events. There's still amplification. There's still dynamics. And in this case, it's a lot of phosphorylation events. And what I want to sort of define for you is that many of these pathways are in trouble in disease states. Be it inflammation, neurodegeneration, or cancer, there is aberrant behavior of proteins within these pathways that cause them to go wrong, cause cells to proliferate out of control or undergo bad responses. And that is why these proteins end up being therapeutic targets like the G protein coupled receptors. 
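The dimerization-dependent trans-phosphorylation logic just described can also be sketched as a toy model. The names are invented and real RTK activation involves far more regulation, but the rule "no partner, no phosphorylation" is the essential point:

```python
# Toy RTK model: monomers are silent; ligand binding pairs two
# receptors, and each kinase domain then phosphorylates its
# partner "in trans" (never itself).

class RTK:
    def __init__(self):
        self.partner = None          # no dimer without ligand
        self.phosphorylated = False

    def bind_ligand(self, other):
        # an EGF-style ligand brings two receptors together
        self.partner, other.partner = other, self

    def trans_phosphorylate(self):
        # a lone receptor has no substrate to act on
        if self.partner is not None:
            self.partner.phosphorylated = True

a, b = RTK(), RTK()
a.trans_phosphorylate()
assert not b.phosphorylated   # no ligand, no dimer, no signal

a.bind_ligand(b)
a.trans_phosphorylate()
b.trans_phosphorylate()
assert a.phosphorylated and b.phosphorylated  # both now able to recruit RAS
```

Making activation require a partner is what couples the extracellular event (ligand binding) to the intracellular one (phosphotyrosines that downstream proteins can dock onto).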
OK, so we've seen the characteristics of signaling. We've seen a signal. We've seen amplification. We've seen responses. What I just want to quickly show you is an idea about integration. So here's an idea with two signaling pathways that sort of end up with the same signal outside where you integrate actions through two different signaling pathways to achieve a bigger, different kind of response. So that's that last hallmark of signaling pathways. It's not that every pathway is clean and straight. It has cross-talk with other pathways and you get amplified or different responses. Tremendously complicated. I want to give you one more term. And then I'll show one table. When these pathways go wrong, it's often because switches get stuck on. So for example, a G protein gets stuck in its GTP-bound state or doesn't even need GTP to be activated. Or a tyrosine kinase is stuck activated. And that's what's called constitutively active, basically meaning it's permanently on. So many of the diseases that are caused by mutations in your genome, not genetic diseases but mutations in your genes in some particular cells, end up with constitutive activation where you don't need a signal to have a response. And so for example, cells may proliferate out of control. So that is an important term to know and understand. Because constitutive activation basically means that a receptor may be active in the absence of a ligand. And I believe this is my last slide. I just wanted to leave you with this. When one thinks of GPCRs, there are tremendous therapeutic targets. The world of kinases is no less important. This scale is in billions of dollars spent on developing molecules that may be curative of diseases that involve dysregulated signaling. And what you see on this, I want to point out two things. Of course this thing stops working at the last minutes. But what I want to point out is this particular bar. 
This represents the billions of dollars spent on protein kinase inhibitors over a five-year period. And it's just escalating and escalating. Similarly, monoclonal antibodies are very important. But the small molecule drugs hold a real dominance. What do these drugs do? They enable you to have small molecules that can go into dysregulated signaling pathways and stop the activity somewhere in the pathway to avoid signals going constantly to the nucleus and turning things on all the time. So both of these types of functions in cellular signaling are ones you want to understand both from a biological perspective but from a medical perspective. OK--
MIT_7016_Introductory_Biology_Fall_2018
18_SNPs_Human_genetics.txt
ADAM MARTIN: And so I just want to say a couple sentences about DNA sequencing, just to finish that up. And so you'll remember this slide from last lecture. And remember, the way this Sanger technique works is to set up four different reactions where each reaction has a different one of these dideoxynucleotides. OK, so there's four reactions, each with different dideoxy NTP. And I brought along a gel that I ran a while ago, which is basically-- it's from sequencing gel, and you can-- I'll pass this around so you can take a look at it. So the four different lanes for each sample are the different dideoxynucleotide reactions. And what I want you to notice as that's passing around and you're looking at it is that the different reactions with the different dideoxynucleotides give different patterns of DNA fragment lengths. So there are different patterns of fragment lengths. And the different patterns are based on the fact-- this is based on the sequence, the sequence of the template, OK? And so if we look at the example up here, what you'll see is that in this banding pattern for dideoxy TTP, you see that there's a really short fragment at the bottom there, and so that fragment indicates that there must be an A in the template sequence. The next fragment up would be this one in the dideoxy GTP lane, and that indicates that one nucleotide beyond this A is a C position, and so on and so forth, such that you can sort of order the fragments and see which reaction has a fragment and then read off a DNA sequence. OK, so conceptually, that's how you would read off the sequence of a given strand of DNA, OK? So you might be wondering, if now, we just read off sequence as a series of colors, why am I even introducing this technique? And the reason is because I think it's important for you as potentially future scientists to know that when you're faced with a problem, how you might discover something new. 
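Just to make that fragment-ordering logic concrete, here is a small Python sketch of reading a Sanger gel. The lane data are invented for illustration and real base-calling is far more involved; this only shows the chain-termination concept, where every fragment ends in its lane's dideoxy base:

```python
def read_sanger(lanes):
    """Read a toy Sanger gel: `lanes` maps each dideoxy base ('A', 'C',
    'G', 'T') to the list of fragment lengths seen in that reaction's
    lane. Every fragment terminates in its lane's dideoxy base, so
    ordering all fragments by length reads the newly synthesized strand."""
    by_length = sorted((length, base)
                       for base, fragments in lanes.items()
                       for length in fragments)
    return "".join(base for _, base in by_length)

def template_from_read(read):
    """The template strand is the reverse complement of the read."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(read))

# Invented lane data: fragment lengths 1..6 spread over the four lanes.
lanes = {"T": [1, 5], "G": [2], "A": [3, 6], "C": [4]}
read = read_sanger(lanes)            # synthesized strand: 'TGACTA'
template = template_from_read(read)  # template strand:    'TAGTCA'
```

Sorting every fragment by length interleaves the four lanes, and the lane each fragment came from spells out the new strand, matching the idea in the lecture that the shortest fragment in one lane tells you the complementary base in the template.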
And I see the Sanger method of DNA sequencing as a really clever and elegant way in which Fred Sanger solved the problem of DNA sequencing, and while we don't necessarily do it that way today, it still illustrates a concept that's important, the concept of chain termination, and I think there is something to be learned from this older technique, even if it's not exactly how we sequence DNA today. So for today's lecture, we're going to continue on our quest to basically clone a gene that's responsible for a disease. And so we started this in the last lecture. And I guess one thing we would want to start with is a disease, so I'm going to introduce to you now a disease called aniridia. And in order to clone the gene for a disease, it has to be a heritable disease, in this case, because we're going to use linkage analysis to identify it. So aniridia is a disease that's an eye disease in humans. It's a rare eye disease. So I want to show you a bit of an example of this eye disease. The way this disease manifests itself is it's basically the affected individual has an eye that is lacking an iris. So I'm going to show you what this looks like. If you're squeamish or don't like weird eyes and you don't want to look, you can look away. But I will show you affected phenotype in 3, 2, 1, OK, everyone looking who wants to see weird eyes. OK, good. So that is a individual that has aniridia, and also this one. So you see there's no clear iris in these eyes. And this disease is associated with other abnormalities of the eye that severely impair vision. And this is an inherited disease, and this is a pedigree from a family or series of families where the disease is propagating. And so anyone have a suggestion as to what mode of inheritance this is? Anyone want to rule a mode out? Rachel, you have an idea? AUDIENCE: I was going to say X-linked dominant, but [INAUDIBLE] ADAM MARTIN: OK, so let's take X-linked dominant. 
So if it was X-linked dominant, then this male would have an X chromosome with the dominant allele of the disease and should only pass it to his females. So I don't think that it would necessarily be X-linked dominant. Anyone else have an idea? Yeah, Georgia? AUDIENCE: Autosomal. ADAM MARTIN: Autosomal dominant. I like autosomal dominant. So in this case, you see you have an individual with the disease and they marry into a family with no history of the disease. One thing I'll point out, for many of these diseases, they're extremely rare, so if you see sort of a family tree where there's no instance of the disease, if it's a rare disease, it's likely that these individuals are not carriers. And so in this case, if you assume that this person doesn't have any form of the-- isn't a carrier for the disease, then this cross here resulting in about half of the individuals affected with the disease, that would be a characteristic of an autosomal dominant disease. So everyone understand my logic? Yes, Carlos? AUDIENCE: What are-- why is two and that looks like three on the slide, why are they crossed out? ADAM MARTIN: I think they're deceased. Yes. OK, so let's say you have a pedigree. You have pedigrees, you're able to try to link this marker to-- or the disease phenotype with various molecular markers, which we discussed in last week's lectures, then you're on the way to performing a process which is known as positional gene cloning. And what positional gene cloning is is it's basically cloning a gene and an allele that's responsible for a disease based on its position in the genome, its position in a particular chromosomal region. So it's basically cloning a gene based on its chromosomal position or its chromosome position. And the first step of positional gene cloning would be to establish maybe what chromosome it's on.
And a straightforward way to do this, as we've basically been discussing almost from when I started lecturing, is to create some sort of linkage map or do linkage mapping to identify, in the case of humans, molecular markers that this disease allele is linked to. And remember, in last week's lecture, we talked about a number of different polymorphisms that are present in the human genome that we can use to establish linkage with a given phenotype. In this case, it's a human disease. And we talked about this example for a microsatellite marker. And in this case, we talked through this example of how this dominant allele, P, is linked to this microsatellite allele m double prime, because if you look at the pedigree here, all of the affected individuals here contain this m double prime sized fragment for this microsatellite. Another thing to notice here is you can see that this couple has been faithful to each other, because basically, each of the children have an allele from the father and an allele from the mother. So you can see that type of-- you can see that using this type of molecular marker as well. OK, so you establish linkage. So linkage mapping establishes the chromosome position of a given allele and the gene. And this chromosome position sort of gets maybe in the right country, but you still have a long way before you get to the specific street address. And so you have to then sort of narrow it in to identify a smaller region of the chromosome that could possibly contain this gene. And so what you would do is go from this linkage map, where you maybe identify the position of this gene within a couple map units, to this next resolution of map called a physical map, OK? So we go from the linkage position to the physical map of the chromosome. And the physical map, as the name implies, is when you have physical pieces of DNA that are present in this region of the chromosome. 
So the physical map means you have cloned, so recombinant pieces of DNA, cloned pieces of DNA which encompass a given chromosome region. So these are encompassing a chromosome region. OK, so how would you get a piece of DNA that sort of is in this region? How would you start? How would you start fishing for that DNA? So you've gone through the process of linkage, you've identified sort of a polymorphism that is linked to the disease allele. How would you go from there to getting a physical piece of DNA that is present in that region of the chromosome? So let's think back to-- Jeremy, did you have an idea? AUDIENCE: Start by using PCR to just amplify that chunk. [INAUDIBLE] ADAM MARTIN: And what primers, I guess, would you use for the PCR? AUDIENCE: Depending on which chunk you're trying to get, you'd use [INAUDIBLE] ADAM MARTIN: OK, so Jeremy is saying if you knew the sequence, and I guess if you're doing this microsatellite analysis, you had primers that recognize a sequence at a given genomic position, so you actually know something about the sequence because of this polymorphism, so you can use that knowledge to then look for this sequence. And you could even look for the microsatellite in a DNA library. OK, so you have cloned pieces of DNA, and you're going to start with-- I'm going to swap this. Your starting position could be one of these polymorphisms in the sequence around it, which you already know. So let's say you had this microsatellite marker. You could then-- what I'm drawing here is a piece of genomic DNA. So this is genomic DNA. I'm just drawing the insert. This would be recombinant DNA. It would be present in some vector or plasmid. But if you can identify the sequence that contains this microsatellite marker, then you would have the microsatellite, but also the surrounding DNA, OK? So that sort of anchors you at a given position. 
Now, you don't know if your gene is in that piece of DNA, but you know that it's linked, and so it should be around that piece of DNA somewhere. And so it's unlikely your gene is going to be on this small piece of DNA that's cloned. This is probably just a few kb, and you could still be very far away from this, but that serves as a starting point from which you can go from to get more and more pieces of DNA such that eventually, you have a bunch of pieces of DNA that are going to span the entire region. So the way you identify other pieces of DNA is you could start with a piece of DNA maybe at the end of this insert and look for other inserts that are not identical to this piece that also contain this piece here. So that might get you a piece that's overlapping, but extends farther than your initial piece. So now you've moved slightly farther away from your starting point, which is this starting polymorphism. Then you could choose maybe another DNA sequence here and look for a piece of DNA that, again, is extending a bit farther out. And so you can see how iteratively, you can get farther and farther away from this starting point that you know your gene is linked to. And this process of going sort piece by piece and clone by clone away from a starting position is known as a chromosome walk. And you can do this bidirectionally. So you could also start with a sequence of DNA here and look for a clone that goes the other way. And you can see on my slide up there, you can see that in this case, they've taken a one map unit region of the chromosome and they're illustrating physical pieces of DNA that are overlapping that encompass this entire region. So this could be much bigger than the amount of DNA that would fit in one of these clones in the bacteria, but by sort of identify overlapping clones, you get the entire region. And what this is called here, because these pieces of DNA are contiguous with each other, this is known as a contig. Yeah, Jeremy? 
AUDIENCE: So would how you get the-- once you find one of those pieces, how do you get the primer for the end of it to start? Do you actually sequence each of these pieces of DNA? ADAM MARTIN: You could sequence it, or you could use a technique that I'm going to talk about at the end of my lecture, which I'll come back to. So nowadays, you'd probably just sequence it and then maybe look for that in another clone. But even before we could sequence DNA in entire genomes, you could do that type of experiment by using a technique called hybridization, which I'll come back to. OK, so the question in this chromosome walk then becomes, how do you know when to stop? Because you could do this for a very long time, but it might not be useful. So you have to know when to stop, and you need to know when you arrive at the gene that you're interested in, which would be the gene that is responsible for the disease. So another way to phrase this question is, how do you know when you have an interesting gene on one of these fragments? So let's say this is an interesting gene here. How do you identify interesting genes? So now, let's talk about identifying interesting genes. Anyone have an idea for how they would-- what criteria they would use to define a gene as being interesting here? I mean, one could say that all genes are interesting. If it's a gene, it's interesting, right? How might we define whether or not there's a gene even there? It could be-- there could be a gene-- how would you define a gene? Can someone define for me a gene? Yeah, Miles? Is it Miles? No? Malik, OK. AUDIENCE: [INAUDIBLE] that would create a starting and stopping point. So like [INAUDIBLE] ADAM MARTIN: So you'd look for a piece of DNA that has a start and a stop codon? So you'd look for an open reading frame, basically. Yeah. You could look for an open reading frame. And so I totally agree with Malik there. 
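Stepping back to the walk itself, its iterative logic -- find a clone that overlaps the end of what you have, extend, repeat -- can be sketched in Python. This toy version uses invented sequences and exact string overlaps, which is much simpler than matching real clones:

```python
def chromosome_walk(seed, clones, min_overlap=3):
    """Toy chromosome walk: starting from a seed clone, repeatedly find
    another clone whose start overlaps the current contig's end, and
    extend the contig with the clone's non-overlapping part."""
    contig = seed
    used = {seed}
    extended = True
    while extended:
        extended = False
        for clone in clones:
            if clone in used:
                continue
            # longest suffix of the contig that is a prefix of this clone
            for k in range(min(len(contig), len(clone)), min_overlap - 1, -1):
                if k < len(clone) and contig.endswith(clone[:k]):
                    contig += clone[k:]   # extend past the overlap
                    used.add(clone)
                    extended = True
                    break
            if extended:
                break
    return contig

# Invented clones: each overlaps the previous one by three bases.
contig = chromosome_walk("ATGCGT", ["CGTTAC", "TACGGA"])  # 'ATGCGTTACGGA'
```

Each pass moves the contig a little farther from the starting polymorphism, just as each round of screening the library moves the walk farther along the chromosome.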
And another criteria you could use is if it's encoding a protein, at some point, it also must have been transcribed as an mRNA. And there are some genes that are transcribed as RNA but don't make a protein, and they're often involved in coding or in regulation of gene expression. So I'm going to-- I'm going to say, is it transcribed? So is there some transcript that's made? And specifically, is it transcribed in the tissue that we're interested in? So if we're talking aniridia, we might be looking for genes that are being expressed or transcribed specifically in eyes. You're looking for something that might be expressed in the eye. If it's not expressed in the eye, that gene's going to be much less interesting to you because the phenotype of aniridia is clearly in the eye. OK, what might be some other criteria here? Well, one criteria might be, is there a conserved gene that has an interesting function that's maybe similar to the disease related phenotype? So is there a conserved gene with an interesting function? And to take this example of aniridia, let's say you're doing this chromosome walk, and you identify a gene, maybe you sequence part of this clone, you get a string of sequence, and you realize that the sequence that you get is related to a gene from a model organism, and maybe that gene is called eyeless. If you've identified a region of sequence in a human, in the human genome that's mapping to an eye disease gene, and you find out that in that region, there is a conserved gene called eyeless, might be a very interesting gene for you. So eyeless is a gene. So here's a normal fly. You see it has that bright red eye. The eyeless gene, when mutated, results in a fly that now just doesn't have a white eye, but has no eye altogether. So it turns out that the aniridia gene is the homolog of the eyeless gene in flies. That's not how it was identified initially, but nowadays, there's a lot of information in model organisms. 
And so if you're sort of trying to identify a gene, and you see that there's a gene in the neighborhood you're looking at with a function that's related to a gene like eyeless, which has a clear sort of analogy in terms of phenotypes, then that's going to increase your interest in that gene. So I'm going to come back to this point here, which is how do we determine whether a piece of DNA that's on one of these inserts that we're getting as we walk across the chromosome, how do we know whether it is transcribed or not? And to get at this, I'm going to introduce you to a concept which is important in and of itself, which is the idea of cDNA. So cDNA. And specifically, I'm going to show you how one would make a cDNA library, which is basically a library of different cDNAs. And so what cDNA is, as shown up there on my slide, a cDNA is complementary DNA. It's complementary DNA, meaning that is the complement of an mRNA transcript. This DNA is the complement of an RNA or mRNA transcript. One thing to watch out for is it's not complimentary DNA. So this is MIT. This is a no compliment zone, so I don't want to see any complimentary DNA. All right, so let's think about complementary DNA. So remember, we've talked about the central dogma and how DNA encodes for RNA, which encodes for protein. And so the information flows from DNA through RNA to protein. But there are some specialized cases in biology where this information flow is reversed. So there can be a reverse of information flow where information flows from RNA to DNA. OK, so that's pretty cool. Where does that happen? Well, there are viruses, such as retroviruses, one example of a retrovirus is HIV, and the virus life-- the virus genome is a single-stranded RNA molecule, and the life cycle of the virus is that inserts into the host-- the host genome, which is double-stranded DNA. For a retrovirus to do that, it needs to take its RNA genome and make double-stranded DNA in order for it to insert. 
So this is an example in biology, which is basically breaking the rules that we talked to you about earlier in the semester. Also, there are retrotransposons which do a similar process, going from an RNA molecule to double-stranded DNA. So this is a specialized case, and it's interesting, and we can take advantage of it to basically clone and identify mRNA transcripts. OK, so I'm going to tell you how to make complementary DNA, and I'll go through a series of steps. The first step is we want to make complementary DNA of mRNA, so we need a way to purify the mRNA. So anyone have any idea how to purify mRNA? First, we could maybe draw an RNA molecule here. What are some salient features of mature mRNA? Yeah, Carlos? AUDIENCE: It'll have the five-prime cap [INAUDIBLE] phosphate. ADAM MARTIN: Yeah, it'll have a five-prime cap. Anything else? Jeremy? AUDIENCE: Poly-A tail. ADAM MARTIN: It'll have a five-prime cap and a poly-A tail. I'm going to take advantage mostly of the poly-A tail here. So here, we have a poly-A tail. OK, how might we use that poly-A tail to purify mRNA? Natalie? AUDIENCE: Well, you can add a [INAUDIBLE] because you know they're [INAUDIBLE] ADAM MARTIN: Mm-hmm. What sequence would you use? AUDIENCE: [INAUDIBLE] ADAM MARTIN: Yes. So Natalie has suggested using poly T, which she said would stick to this poly A tail because of base pair hybridization, OK? So let's say we have a bead or some type of resin with dTs hanging off of it. So I'll draw a few of them, but you'd have maybe a lot of them sticking off, OK? So you have a bead with pieces of DNA, all of which are poly dT hanging off of it. 
And then these poly dTs, if you add cytoplasm from cells, the mRNA in that cytoplasm is going to stick to this poly dT bead, and it will stick with a higher affinity than other things that are non specifically sticking to the beads, and you can wash these beads with buffer and salt to get rid of everything that's non-specifically sticking to the bead, and then you're left with just a bead that's enriched with mRNA, which is what was specifically sticking to this, OK? So you could purify-- you're purifying the mRNA based on its affinity for a poly dT, OK? So then you're going to have enrichment of mRNA in your sample. And so then once you have your RNA, you're going to want to somehow go from RNA to DNA, OK? So the next step will involve somehow going from RNA to DNA. So let's draw our piece of RNA here. Here's our RNA. It has a poly A tail so it's mRNA. There is 5 prime. OK, so now we need to take advantage of a trick. We can still take advantage of dT because we can use this as a primer because polymerase usually requires some primer and a three prime hydroxyl in order to extend. Now, can we use DNA polymerase to extend this primer? Jeremy is shaking his head no. Why? AUDIENCE: Because DNA [INAUDIBLE] ADAM MARTIN: Exactly. So what Jeremy is saying is DNA polymerase is a DNA dependent DNA polymerase, OK? DNA polymerase can only use this if this is DNA here, OK? So we need a different type of enzyme, essentially, in order to make DNA from RNA, and luckily, molecular biologists-- actually one of whom was here at MIT-- discovered this type of enzyme, and it's called reverse transcriptase. Reverse transcriptase. This is an enzyme that's encoded by retroviruses in order to make double stranded DNA from RNA, and that allows the retrovirus to insert into the host genome, OK? And what reverse transcriptase is is it's an RNA dependent DNA polymerase, OK? So it takes RNA as its substrate, and then it synthesizes DNA on the opposite strand, OK? 
So this is an RNA dependent DNA polymerase. OK, so if you add reverse transcriptase to mRNAs that have these dT primers, then what you get is a new strand, which is DNA here. This is the strand of DNA. And then you have a strand of RNA opposite it, OK? So at this step, you have a DNA-RNA hybrid. So this is a DNA-RNA hybrid. Let's see. Reveal some more of this. This is the process which I'm basically outlining on the board. So then you want double stranded DNA, so you don't want this strand of RNA that's down here, so you have to get rid of it. So you would degrade the RNA, and this is done using another enzymatic activity, which is derived from reverse transcriptase, which is termed RNase H activity. So you can add an enzyme, RNase H, which takes these DNA-RNA hybrids and degrades the RNA part of them, OK? So this is going to degrade the RNA strand. And if you degrade the RNA strand, then you're left with a single strand of DNA. So you have single strand of DNA here, and now what you need to do is to synthesize the second strand of DNA. So you need a second strand synthesis. And so you need, again, a primer in order to prime the synthesis here. So there are a variety of ways to do this. You can add some type of hairpin, which is five prime here and three prime here, and then you can use either DNA polymerase or reverse transcriptase, which also can be a DNA dependent DNA polymerase, to transcribe this strand here, OK? So again, you add polymerase, and now you've gone and you've generated double stranded DNA, OK? So everyone see how we've gone from an mRNA transcript, and we've done the reverse of everything we just told you in the first half of the course because we've gone from RNA and we've made DNA, OK? But this will be really useful because now we have a stable piece of DNA that we can clone into a plasmid and we have a record of this transcript being present in our sample, and we can propagate that on and on, so we've cloned it, OK?
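As a summary of the chemistry just walked through, here is a toy Python version of first- and second-strand synthesis. The sequence is invented, and this tracks only base-pairing, not primers or enzymes:

```python
def first_strand(mrna):
    """Reverse transcriptase step: build the DNA strand complementary
    to the mRNA (A pairs with T, U pairs with A, G with C)."""
    comp = {"A": "T", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in mrna)

def second_strand(dna):
    """Second-strand synthesis: complement the first-strand DNA,
    regenerating the mRNA's sequence with T in place of U."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in dna)

mrna = "AUGGCCAAAA"                  # short transcript with a tiny poly-A tail
strand1 = first_strand(mrna)         # 'TACCGGTTTT' (after RNase H, ssDNA)
strand2 = second_strand(strand1)     # 'ATGGCCAAAA' -- double-stranded cDNA
```

Note that the finished double-stranded cDNA carries the transcript's sequence (with T for U) and nothing else, which is why, as discussed next, it has no introns or promoter.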
All right, what's going to be special about this piece of DNA versus a piece of genomic DNA? Natalie? AUDIENCE: [INAUDIBLE] ADAM MARTIN: Yes, so Natalie is suggesting that it doesn't have introns, and that's totally right. So this is not like genomic DNA, and what Natalie said is because mRNA is processed, the introns are spliced out, such that the mature mRNA only has the exons, and so this piece of complementary DNA is going to have no introns. How else is it different? Yeah, Jeremy? AUDIENCE: It's not going to have promoters. ADAM MARTIN: It's not going to have a promoter. Yes, Carmen? AUDIENCE: It doesn't have [INAUDIBLE] ADAM MARTIN: You might see a poly A and T sequence in the cDNA. Yes, that's true. OK, so you might have poly A, poly T. I'm going to focus on the other part from-- there's going to be no promoter, enhancer, regulatory sequences. Basically, it's got no sequence that's not transcribed, right? The DNA is only going to have the part of the gene that was physically transcribed by the RNA polymerase originally. OK, so no non-transcribed regions. No non-transcribed regions, and Carmen's absolutely right. You will also have possibly a poly A or poly T sequence. OK, so when you get these cDNAs, you might have-- you have more than one mRNA in a sample like a cytoplasmic extract, so you're going to prime-- you're going to make multiple cDNAs, and different cDNAs will reflect different transcripts that are present in your sample, OK? So you could have one clone that's one gene, another clone that's a different gene, and another clone that's another gene, and you could have thousands of clones of these different DNAs. What's going to be special about what types of genes you're going to get for, I guess, different tissues? Are they going to be the same or not? Yeah, Carlos? AUDIENCE: [INAUDIBLE] ADAM MARTIN: Exactly.
You're not going to see-- if you've prepared a tissue and there is no gene being-- if one gene was not expressed or transcribed in that tissue, you will not get a cDNA for that particular gene in your library, OK? So the representation of genes-- the representation of genes in a cDNA library is totally dependent on what genes are being expressed, OK? So this representation is going to be proportional to the expression level, and the more genes-- the more a gene is expressed in a given tissue, the more copies of cDNA for that gene you would see in the library, OK? So there's really a proportionality between the number of clones in a library and the expression level of a gene, where in the most extreme case, if this gene is not expressed at all, you're not going to see it represented at all in the cDNA library, OK? And then a corollary to this statement is that if you make cDNA libraries from different cell types or different tissue types, the cDNA libraries are going to be different between those different types of sources of mRNA, OK? So in other words, different tissues give you different cDNA. OK, so there is the process. So I went through most of the side. Yes, miles? AUDIENCE: Is this a way you can determine what gene sequences are expressed in all cells? Because in certain mRNA strands across all tissue samples, those are basic cell functions and expressed in a [INAUDIBLE] organism? ADAM MARTIN: So you're asking, if you grind up like an entire organism and if you get a cDNA from that library, could you tell if it's expressed in all different cell types? Even if you have one cell type that expresses a gene, if you grind up the entire organism, then you're going to have some mRNA that represents that gene. So I don't think it would be as an effective measure to determine the ubiquity of expression of a given gene, but in just a minute, I'm going to give you a tool that would allow you to answer the exact question that you're asking, OK? 
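That proportionality can be captured in a toy model. In this Python sketch the gene names and expression levels are invented; it just divides a library's clones according to relative mRNA abundance, with unexpressed genes absent entirely:

```python
from collections import Counter

def cdna_library(expression, n_clones):
    """Toy model: the representation of each gene in a cDNA library is
    proportional to its mRNA level in the source tissue.
    `expression` maps gene name -> relative mRNA level."""
    total = sum(expression.values())
    library = Counter()
    for gene, level in expression.items():
        if level > 0:  # an unexpressed gene never appears in the library
            library[gene] = round(n_clones * level / total)
    return library

# Hypothetical eye-tissue expression levels, for illustration only.
eye = cdna_library({"Pax6": 30, "actin": 60, "globin": 0}, 900)
# eye -> 300 Pax6 clones, 600 actin clones, and no globin clones at all
```

Running the same function on expression levels from a different tissue would give a different library, which is the point made above: different tissues give you different cDNA.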
Any other questions about the cDNA library? OK. So I just wanted to come back to this example I gave on the identification of the human CDK gene. So remember, we started with yeast that were mutant. They had temperature-sensitive mutants, and we transformed these mutants with a library, but I didn't really tell you what the library was. It was in fact the cDNA library from humans that was transformed into yeast, OK? And that's because yeast genes-- for the most part, they don't have a lot of introns, and so the yeast-- the machinery is not able to splice out the human introns in human genes, OK? And so this was done with a human cDNA library, which then encoded-- one of which encoded the human CDK gene, and that allowed Paul Nurse to discover the piece of DNA that encoded for the human CDK, OK? So I just wanted to kind of retroactively go back and sort of tell you how that experiment was done. OK, so now I'm going to get to my final point for this lecture, which is this final technique, which will allow us to determine whether or not a transcript is expressed in a single cell type or ubiquitously through an organism, and this involves a technique, which is known as hybridization. And what hybridization is is if you're starting with a piece of DNA, you don't need to know its sequence in order to determine whether there are sequences that are similar or identical to it, because hybridization is basically if you have some sequence and it's single stranded, such that you have a DNA backbone but you have base pairs that are able to pair with their complementary bases, you can use a piece of single stranded DNA like this and you can label it, such that if the labeled piece sticks to another piece that has identical or similar sequence, you'll be able to visualize it in some way, OK? So this is called-- you're looking for things that anneal or hybridize to a particular specific sequence. So you don't need to know the sequence a priori, OK?
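The annealing idea can be sketched in Python: a labeled single-stranded probe "sticks" wherever a clone contains its complement. This toy version (invented sequences) demands an exact match on one strand, whereas real hybridization tolerates mismatches and can see either strand:

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def hybridizing_clones(probe, library):
    """Return the clones whose (denatured, single-stranded) insert
    contains a stretch the probe can base-pair with, i.e. the probe's
    reverse complement."""
    target = revcomp(probe)
    return [name for name, insert in library.items() if target in insert]

# Hypothetical library of two cloned inserts, sequences made up.
library = {"clone1": "GGGATGCATTTT", "clone2": "CCCCCCCC"}
hits = hybridizing_clones("AATGCAT", library)  # only 'clone1' lights up
```

Only the clone carrying the complementary stretch "lights up," which is exactly what you look for on the filter or in the fixed tissue in the procedures described next.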
You just need to have this physical piece of DNA, and you can use this single stranded piece of DNA to then fish for similar sequences, OK? So we could take a piece of DNA here maybe that's in a gene, and we could fish through a DNA library to try to identify a cDNA clone that has sequence identity to that piece of DNA, OK? And the way this is done is to take a cDNA library. So each of these colonies here would express or have a different clone of DNA. You can then take a nitrocellulose filter, put it on this plate, which would stick the bacteria in place to that filter, and you could then lyse the bacteria and denature the DNA, and then the DNA is stuck to the filter, but now it's single stranded. You can then add your probe, which is labeled, and look for the colonies that this probe sticks to, and that would then identify a particular cDNA, which would identify whether or not a piece of DNA is expressed in a given tissue type, OK? So everyone see how that would work? So in addition to doing this on a nitrocellulose filter, you can also do this in a tissue, and that's known as in situ hybridization. And in this case, in situ hybridization, you're searching for mRNA in a section of fixed tissue. OK, and I have an example from this paper here, which is the paper in which this gene was cloned. This paper reported the cloning of the aniridia gene, and they identified a gene of interest, which is called Pax6 now, and they basically used a piece of DNA that they thought was interesting, and they did in situ hybridization in an organism. In this case, you see an eye. This is an eye here, and the Pax6 label is in yellow, and you can see how this transcript is present throughout the entire eye, right? And the way you would see if it's tissue specific is you look in other tissues and you wouldn't see this yellow label. So that's how you would determine if it's expressed in a specific tissue or ubiquitously throughout an organism. OK, so this Pax6 gene. Oop.
So I was going to ask, what do you think would happen if you hyperactivate Pax6 in humans, and this is one idea, but actually, I just made that up, or Stan Lee made that up, but actually, Stan Lee never in fact mentioned whether or not cyclops is a Pax6 mutant, but we can do a different type of experiment, which might be more ethical, which is we know there's a fly gene that's homologous to Pax6. And what we can do in flies is we can ectopically express this eyeless gene in non-eye tissues and see what happens. OK so, this is pretty wild. This is my Halloween image of the class. So this is a fly where eyeless has been expressed all over its body. OK, so here you see there's an eye-- its normal eye-- here. You can see there's now another eye growing in the front of its head. You can see here's an eye growing on this fly's back, and you can see the legs. There's eye tissue all over the legs of this fly, OK? So this Pax6 gene, which is conserved from flies to humans, is the master regulator of eye development, OK? And at least in flies, if you ectopically express this in other parts of the body, you get an eye. I should say these are not functional eyes. They don't hook up to the brain the same way the normal eye does. So it's not like this fly can see out of the back of its head. OK, that's it. I'm done, and good luck on your exam on Wednesday. We will see you here.
MIT_7016_Introductory_Biology_Fall_2018
27_Visualizing_Life_Dyes_and_Stains.txt
PROFESSOR: All right, so fluorescence. You know, I know you hear this from me a lot. But this really is my favorite topic. The applications of luminescence and fluorescence in service to biology are incredibly important. So what I'm going to try to do in these two lectures is explain to you the difference between fluorophores that we can encode into proteins through genetic engineering and fluorophores that are made by chemists in the lab but then appended to molecules. So today we'll talk about the nuts and bolts of fluorescence. And then on Wednesday, we'll start to see some of these tools that you've seen images of. We love to wow you with images of fluorescent cells and cells in action. But I want to step back and actually show you how that all came about. Where do these fluorescent proteins come from? What are we looking for? How much protein engineering was done to make these such an amazingly useful set of molecules, macromolecules, to really allow us in real time to study biology? And there are many, many other applications as well. So we're going to talk about luminescence and fluorescence in general. Luminescence is the general term. And fluorescence is a little bit more specific. There are different types of luminescence. And you'll get to see some of those varieties of luminescence. I've put a decent amount of our content today on the screen. So we'll go up here and take a look. So luminescence in general is the emission of light not associated with heat, not like a burning flame, which has a lot of light accompanying it, but rather the emission of light in the absence of heat. And there are different types of luminescence that biologists use, intertwined into biological experiments, to illuminate life and to understand details of cellular activity, but also as reagents in diagnostics and in all kinds of imaging modalities.
And through these two-- three lectures, actually, because Professor Martin will give you one that has even more imaging in it-- you'll sort of really get to understand where these pretty magical reagents come from. So the two types of luminescence that we won't discuss in detail today, first of all chemiluminescence. This is a molecule known as luminol sitting in a little vial. I think I like fluorescence so much because the images are so captivating. Now, things like luminol, has anyone heard of luminol before? Does anyone-- yeah, do you watch a lot of CSI or-- yeah. So tell everybody what people use luminol for. AUDIENCE: To pick up on a blood spatter or remnants of blood. PROFESSOR: Yeah, so when you see these TV shows, there's this beautifully clean motel room and nothing looks like anything ever happened there. But people think a murder took place there, so you'll notice they come in with these spray bottles. And spray all over the carpets, and the drapes, and the chairs. And then there's this great moment where they turn off the light, and luminol interacts with the heme of blood at an amazingly sensitive level, such that when the lights are turned out, the room's just sort of this battlefield of bright luminescence that indicates that this was a crime scene. So that's the most famous luminol sort of example. So that is what would be called chemiluminescence, the interaction of a chemical with another chemical to give luminescence. Another pretty useful type of luminescence is bioluminescence. I've done a lot of scuba diving in my life. And there's nothing more exciting than a dive at night where the whole ocean is this sort of inky black. And sometimes you'll move your arms through the black water at night. And you'll see all these, like, little fireworks. And many, many marine organisms actually undergo bioluminescence. It's a biological reaction that causes luminescence.
And this is a cuttlefish shown here in this image in the corner where they are brightly lit at night. And actually, it's just a whole party there at night in the ocean where all sorts of organisms are signaling to other organisms through bioluminescence and reactions such as luciferase reactions. So in bioluminescence, a molecule of ATP is generally used and combined with another molecule through the action of an enzyme that ends up kicking out light energy. So those are both important. But what we're going to talk about principally is fluorescence. This is a more specific term. And you may wonder why I'm putting this in capital letters. The first thing to learn about fluorescence is how to spell fluorescence. So if you look at the word fluorescence and the first part of the word looks like flour, you know, the stuff you bake your pumpkin pie with, you spelled it wrong. It actually is fluor, F-L-U-O-R-E-S. What are you guys whispering about? AUDIENCE: [INAUDIBLE]. PROFESSOR: Did I get something else wrong? AUDIENCE: The E-S. PROFESSOR: E-S, yeah, well, forget about that part. E-S. There's another C in there. There we go, we snuggled that in. So the important part is the first part. So make it look like you know what you're talking about and spell fluorescence correctly. I cannot tell you how many papers, scientific papers, I read where they spelled fluorescence wrong. It's really hysterical. It's one of those-- there's two or three amazingly common typos in people's slides. One of them is spelling fluorescence wrong and the other one is spelling complement wrong. Complement as opposed to compliment, where you're telling someone they look good. And complement where you're trying to sort of match up things. Anyway, but fluorescence is a key point. So what is fluorescence? It's the absorption of light energy by a molecule. Now it could be a small, organic molecule. It could be a small part of a fluorescent protein molecule that has a particular structure.
But it will absorb light energy at a certain wavelength. So let's put this into just a cuvette experiment. These are the kinds of little containers that we use for certain types of fluorescence. We could use a plate and a plate reader. But this is quite common. So you shine light on this molecule. Do you want to grab the doors, the outer doors on both sides? I don't know. Everybody's pretty happy today. Anyway, I'm going to just draw something, you know, with a bunch of double bonds and things. And the molecule absorbs light and goes to an excited state. So this is the ground state before light. After light you get the molecule in an excited state. So it has absorbed that light energy. So you've hit it with a wavelength of light. And I'm going to redefine all these terms properly in a moment. Lambda excitation of a particular wavelength. Once that molecule has absorbed light, there's a very transient period until the molecule lets out energy in the form of light and returns back to its ground state. So the photophysics of fluorescence involves the excitation of a molecule with light of one energy. That light energy is at a particular wavelength, so it's called its lambda of excitation. That molecule is excited very, very transiently. It's usually picoseconds or nanoseconds for organic molecules. It may be a little bit longer for other types of complexes and may stretch to the microsecond or millisecond. But most of what we talk about will be picosecond to nanosecond lifetimes. And once it's excited, it just drops its energy back out and goes back to the ground state. And the most important thing that you want to remember is the wavelength for excitation, which is given in nanometers, and the wavelength for emission, which is also in nanometers. The excitation wavelength is higher energy. Obviously you don't create energy when you shine light on something. You'd be breaking a few fundamental rules if you did.
So the light that comes back out is lower energy because there's been rearrangements in the excited state of the molecule. So you can't possibly kick energy out at a higher energy. And so this is at a shorter wavelength in nanometers. And this is at a longer wavelength. So remember, higher energy means shorter wavelength. Lower energy means longer wavelength. And that is a rule for fluorescence. When you excite a molecule, you'll take it to the excited state. It'll sit and vibrate there a little bit. Then it will kick energy back out at a longer wavelength. And for the majority of the fluorescence experiments that we do in biology, the wavelengths that you see emission at are in the visible range. Whereas the wavelengths that you might excite your molecule at are often in the UV or a bit longer, ideally longer. And these things are going to come up. This isn't the first time that you're going to see them. And it's not the last time you're going to see them. So let's take a look at fluorescent dyes in the electromagnetic spectrum in the next couple of slides. So what you see here, you see a bunch of little eppendorf vials with fluorescent molecules that emit light at all different wavelengths. These would be down at the ultraviolet end of the electromagnetic spectrum. These would be up in the very red end, the lowest energy. So these emission wavelengths, these would be emitting at the lowest energy, the longest wavelength. These would be emitting at the shortest wavelength, the highest energy. Is everyone following me? So just make sure you remember that. And just this principal rule that you can't possibly break with respect to the wavelengths of light for fluorescence experiments. So what we're going to see is the relationship for the electromagnetic spectrum. There's a little bit more detail in a minute. And then look at some fluorescent dyes that are very, very commonly used in biology.
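The excitation/emission rule above can be checked numerically with the photon energy relation E = hc/lambda: emitted photons carry less energy, so emission must sit at a longer wavelength. The 490/520 nm pair below is a made-up, roughly fluorescein-like example, not a value from the lecture.

```python
# Photon energy vs. wavelength: the basis of the "emission is always
# at a longer wavelength than excitation" rule.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """E = h*c / lambda, with lambda converted from nm to m."""
    return H * C / (wavelength_nm * 1e-9)

lambda_ex = 490.0   # hypothetical excitation wavelength, nm
lambda_em = 520.0   # hypothetical emission wavelength, nm

e_ex = photon_energy_joules(lambda_ex)
e_em = photon_energy_joules(lambda_em)

print(f"excitation photon: {e_ex:.3e} J")
print(f"emission photon:   {e_em:.3e} J")
print("emission is lower energy:", e_em < e_ex)
```

The gap between the two wavelengths is the Stokes shift; the "rearrangements in the excited state" mentioned above are where the missing energy goes.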
And in fact, you will have seen a lot of cells stained with these dyes even so far in pictures that you've seen on the screen. And then we'll talk about the application of antibody reagents and where that comes in with respect to fluorescence. So let's take a look at fluorescence and the electromagnetic spectrum. So here it is, going from wavelengths. The ultraviolet wavelengths would be shorter than 400 nanometers. So ultraviolet, so beyond violet. And the very red wavelengths would be in the range from 600 to 700. And here you see the relationship between wavelength, and then what the light emitted would look like. So if we're looking at here, what we would expect to see is if we've got a fluorophore and it shows fluorescence, we would be exciting the fluorophore at wavelengths in this region. And we would see emission at wavelengths in this region. But a cardinal rule is that we excite with a shorter, higher energy wavelength and observe emission at a longer, lower energy wavelength. Now, there are very important dyes. Straight away, the most common dye that you will see in biology is a dye known as ethidium bromide. And here's its structure. You can often recognize fluorophores. They have lots of rings fused together with lots of double bonds in them. This is the structure of a compound known as ethidium bromide. It's a dye that intercalates into DNA. And when the dye changes its environment from being in water to being snuggled in between stacks of base pairs in DNA, it changes its fluorescent properties and it becomes fluorescent. So fluorescence isn't just the intrinsic shape of the molecule and what it looks like. It's very, very often related to what's around it. Why is that the case? It's because the excited state may behave differently in different environments. Maybe it's stabilized for a while, and that's why you might see fluorophores experience a change in their fluorescence as a function of their environment. Is that clear to everyone?
So the molecular environments, if I'm a fluorophore and I'm in water, I'm going to feel pretty different in my excited state than if I'm a fluorophore and I'm sitting packed between DNA bases. It's pretty dramatic when you see it. So when you mix ethidium bromide with DNA, and it could be in a cell or it could be a lysate from a cell where you're capturing the DNA and trying to manipulate it in recombinant biology, that ethidium bromide will intercalate into the DNA. And it will light up as a bright orange dye. So here, say we've got a gel that we've run DNA on. We might have a set of standards. And in other places, we're looking for the size of DNA. Remember that great experiment we saw where we saw how quickly small and large pieces of DNA ran through an agarose gel? So here is what the DNA gel would look like if you soaked it in ethidium bromide. So here's the gel as a ladder of bands. But then let's say you wanted to do some work on a piece of DNA and maybe ligated it. You'd see DNA pieces in different lanes that have different mobilities based on size. We couldn't see the DNA directly. We couldn't pick it up. We couldn't visualize it. So we have to use ways to visualize it. One way is to radiolabel it. Messy, we don't really want to do a lot of that if we can avoid it. The other way is simply to soak a dye into the gel. And the dye, because of the positive charge here, the counterion gets displaced. And the positive charge gets attracted to the DNA and associates with it quite tightly. So that would be a way that you would observe DNA bound to dye in a gel. And this fluoresces at a pretty long wavelength. So this fluoresces this bright orange that's actually at about 605. So you can see, this is really in the visible range. 605 would be right around here. And it has this bright kind of orange fluorescence. And the wavelength that you would use to irradiate the dye on the DNA would be a shorter wavelength than 605.
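A quick sketch of how the ladder of standards mentioned above actually gets used: over a limited range, migration distance in an agarose gel is roughly linear in log10 of fragment size, so an unknown band can be sized by interpolating between the flanking markers. The ladder sizes and distances below are invented calibration points, not values from the lecture.

```python
import math

# Sizing an unknown DNA band against a ladder of standards.
# (bp, mm migrated) pairs below are made-up calibration data.
ladder = [(10000, 10.0), (5000, 20.0), (2000, 33.0),
          (1000, 43.0), (500, 53.0)]

def estimate_size(distance_mm):
    """Linearly interpolate log10(bp) between the two bracketing ladder bands."""
    for (bp_hi, d_hi), (bp_lo, d_lo) in zip(ladder, ladder[1:]):
        if d_hi <= distance_mm <= d_lo:
            frac = (distance_mm - d_hi) / (d_lo - d_hi)
            log_bp = math.log10(bp_hi) + frac * (math.log10(bp_lo) - math.log10(bp_hi))
            return 10 ** log_bp
    raise ValueError("distance outside the calibrated range")

# A band halfway between the 2000 and 1000 bp markers:
print(round(estimate_size(38.0)))
```

In practice you would fit all the ladder points rather than interpolate pairwise, but the log-linear idea is the same.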
You will often have a prescription: excite at this wavelength, so you observe at this wavelength. And these are fixed physical parameters for fluorescent molecules. Now, ethidium bromide is a dye that can get into cells. And we can look at DNA within cells. And here's a picture of how it would look. So here's the ethidium bromide. And here's a pair of stacked bases, this one and this one. And there he is, right in the middle. See that ring coming towards you is that. And then here's this thing that slides straight between the bases and might cause a little bit of a bulge. And we would call this a DNA intercalator. It slides into the DNA. And you can see over here, the structure of DNA. So you could picture ethidium bromide sliding between the bases. Now, there's a big problem here. Because you can't use ethidium bromide in living cells. It's pretty toxic. Why would it be toxic? Well, no, it's not the bromide. It's the fact, more, think of what the dye does when it gets to the DNA. What would that do to things like replication and transcription? It just kind of messes it up. And so these are toxic dyes that can only be used in fixed cells to do observations of cells. So we use it a lot. We need to be careful of it because if it gets absorbed through our skin, it could get into our cells. And it could interfere with replication and other cellular processes. Because it would accumulate on our cellular DNA. And in fact, the interesting thing is that a lot of molecules that have these sort of flat, pancake shapes with lots of double bonds are actually pretty important in biology. Because they end up being chemotherapeutic agents. So what I've shown you here is what's known as an anthracycline structure. I believe this is adriamycin, I could be off. But it's a natural product that's isolated from bacteria. And it has this structure that also makes it a DNA intercalator. And it's used as a cancer chemotherapeutic agent because it interferes with cell division and proliferation.
So we actually exploit that property. But only with cells that we want to kill or stop dividing. So you could picture, well, I don't want to use something that's going to interfere with cells if I'm doing live cell imaging. Because I'm going to have trouble with the properties of the cells. So fluorophores are great, but you've got to worry about them because they can get transferred through the skin, through cellular membranes, because they're often quite greasy. And they can get in and interfere with essential processes of DNA. So because of this, there's been quite a revolution in the work done with DNA binding agents that bind a little differently and are way less toxic. So I want to describe to you a series of dyes that are known as DAPI and HOECHST, H-O-E-C-H-S-T. This was, I believe, discovered at a company in Germany. And these are different kinds of dyes that fluoresce on binding to DNA. So they are still useful in that same context. You can in fact substitute ethidium bromide with these dyes. But they bind to DNA pretty differently. And I want you to take a look at these pictures. So here in green and blue-- and apologies to the color blind-- you see a molecule of this compound here bound to DNA. So it's pretty clear it's different from intercalation, right? You can see it's more sliding around one part of the DNA. And when those molecules are free in water, they don't fluoresce. When they bind to DNA, they fluoresce an intense cyan blue. So that's at a shorter wavelength from the ethidium bromide. So taking a look at this structure, does anyone want to explain to me how the molecules might bind to DNA? We wouldn't call it intercalation. We know intercalation is perpendicular to the axis of the DNA. So where, looking at this, do you think these bind? A while ago when I was talking about the structure of DNA, I like to think of DNA as having two grooves, two places where things can bind to it. And that's the minor groove.
And then this big trench is what's called the major groove. And certain molecules bind in one groove. And other molecules bind in the other. Where do you think it's binding? Just by inspection. Yeah. AUDIENCE: Looks like it's in the minor groove. PROFESSOR: That's correct, it's just snuggled just perfectly in the minor groove. If it was in the major groove, it would be swimming around in that groove. It's almost too big. So what's really cool about these dyes is they slide into the minor groove. And they also make some contacts with the phosphodiester backbone. But it's not that they're dancing on the phosphodiester backbone. They're literally in the groove. But there's some opportunity for electrostatic interactions. And so these compounds would be known to bind in the minor groove. And in fact, they bind in particular regions of DNA where there's AT, not GC. Those are the places where there's just the pair of hydrogen bonds instead of the trio. That's just their habit, their personality. So chemistry was very important here. You had a good dye, but it was toxic. But improved dyes came along that could be used in living systems that are not toxic. Because if you bind in those grooves and you're dissociating easily, you're not going to interfere so much with replication. Does that make sense? So it's a weaker force. It's not going to have a big detrimental effect. Now, I moved this slide up. I realized it was in the wrong place in the deck. This is just an application of the DNA minor groove binder HOECHST. And in this case, we're looking at three cells. These two are not actively dividing. But take a look at this cell, it's actually clearly in the state preparing for cell division. And what you can see here is that in the nucleus, the DNA is pretty diffuse before things really start to condense and line up for DNA replication and cell division.
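Since these minor-groove dyes prefer AT over GC, one can sketch the idea of finding candidate binding sites as a simple window scan over a sequence. The sequence and the all-A/T criterion below are toy choices for illustration; real binding preferences are more graded than this.

```python
# Toy scan for AT-rich stretches, the kind of site where minor-groove
# binders like DAPI and Hoechst dyes prefer to sit.

def at_rich_windows(seq, width=4):
    """Return start positions of windows containing only A or T."""
    return [i for i in range(len(seq) - width + 1)
            if set(seq[i:i + width]) <= {"A", "T"}]

seq = "GGCATATTAGCGCGAATTCG"   # invented sequence
print(at_rich_windows(seq))    # [3, 4, 5, 14]
```

The overlapping hits at 3, 4, and 5 reflect one continuous AT tract; a real analysis would merge them into a single site.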
And what's intriguing to me is that this rather diffuse blue dye, that's probably a bit more loosely associated with DNA, becomes much clearer and brighter when the chromosomes are in the state they're in for cell division. So you can see them here. And one question here. So, if you're looking at cells, you're trying to observe cells, where else in the cell are you going to see DNA? So we can see the nuclear DNA, we can see what stage of the cell cycle it's in-- Yeah, Carmen. AUDIENCE: The mitochondria. PROFESSOR: Yeah, in the mitochondria. So you could also spot that within the cell if you're at a sufficient magnification. So you know that there's not DNA running around everywhere. It's literally in very specific places within the cell. These dyes will bind also to other nucleic acids. But they don't bind so well. Because those don't have the really repetitive, double stranded nucleotide structures. But there are other dyes that bind much more specifically to RNA. But we won't discuss them. So nucleic acids seem to be something that we can definitely pinpoint with fluorescence. We can see where it is. We could follow cell division. We could look to see the progress of cell division. For example, upon adding things to a cell, can you see-- remember very, very early on, we showed you movies of cells dividing. You could do that with this kind of dye because it's a non-toxic dye. So, great, so far, so good. So the key thing, though, about biology is we have so many other entities within a cell that we want to be able to track and monitor. And what we needed, what is absolutely essential, are reagents to do that. So I want to talk to you about biological tools of monitoring. And you know what these are? These are antibodies, monitoring proteins. And in fact, you can also coax antibodies to recognize carbohydrates. So I'm going to just put these here. But they're a little bit harder to raise antibodies against. But nevertheless, those are useful.
So we're going to talk now about antibodies, which are agents of the human adaptive immune system, and how they have been exploited intensively to study biology. Now, what you will learn from Professor Martin in two or three lectures' time is much more about the nuts and bolts of the immune cells, of the immune system, and how it mounts a response to disease and other features. I'm going to focus completely on the technological side of antibodies and how they are useful reagents to study biology. Because if you want to recognize a protein in a cell, you need a particular entity that will bind to that protein and show you where it is through some kind of signal, for example, fluorescence. So what I want to do is give you the minimal description of antibodies so you can understand this. But later you're going to revisit it in a bit more complicated venue. But for the time being, I'm just going to talk about B cells, which are cells that produce antibodies. And I'm going to talk to you about how they recognize their targets. Because what we have in the adaptive immune system is an amazing system where you can do combinatorial biology and basically recognize any target entity you're interested in. So let's take a look at this, keeping in mind that this is us exploiting biology to make reagents to do biology. It's kind of a cool sort of cyclical process. So the cells of the hematopoietic system, those are the ones that are important in blood, form a lot of different cell types. There are a bunch of different cells that are produced from the pluripotent hematopoietic cells. They're either the white cell or the red cell types. But what we're going to focus right in on are the B cells. These are the cells of the immune system that produce soluble antibodies, OK.
And when you challenge a B cell population with a foreign entity, the B cell population will go into gear to produce antibodies that very specifically recognize that foreign target, because in the human adaptive immune system, that might be a wonderful tool to get rid of that foreign entity. So we're going to focus exclusively on the B cells and the way that they mature to produce soluble antibodies, those are down at the bottom here, based on what they've been challenged with. So what this little schematic shows you is you have a bunch of different B cells. And there's something you want to recognize. Let's just say it's a molecule-- an EGF molecule, a cytokine. What you might do is challenge this population with the cytokine. Only one B cell type will bind to the cytokine. And then that will get amplified. And then you will end up with B cells that produce a lot of an antibody to a cytokine such as EGF. I'm truly dumbing this down. I just want you to get the gist of it for the purpose of this discussion. Now antibodies adopt a very classical shape. And I'm just going to show you the quaternary structure of an antibody in linear form here. So it doesn't look very exciting right now. There are two light chains, which I've just shown in schematic form. These are just polypeptide chains. And there are two heavy chains. All right, so that's their basic structure. It's held together in a stable quaternary structure in this complex. And there may be disulfides across and throughout the structure. Now what's so special about antibodies? They're pretty big molecules. The molecular weight is pretty high. But the key thing about antibodies is that the majority of the structure stays fairly constant. So I'm just going to-- so this doesn't get modified when B cells mature. But another part of the structure is variable. And when B cells mature, there's loads of rearranging in that variable section in order that it adapts to bind the target.
So what you've seen here is that the target-- see if this little fellow works anymore. No. Ah. What you see here is the light chain in green, heavy chain in blue. And it's a double version of it. We always draw antibodies as this V shape. And an antigen-- you've heard this word before-- it's an entity that's foreign to the immune system. The antigen binding site is right here at the tips of the antibody. And I think the picture up there is kind of clearer. Let's put that forward again. And you can see very specifically the structure. The C's designate constant regions. See C all the way through here. And V's represent variable regions, which I've shown you. And at the tips of the V's are the antigen binding sites. So you're going to see more about antibodies in the immune system. But what you want to accept is that these are biological macromolecules that particularly evolved to recognize target antigens. And you can use them reliably as biological reagents. All right, so how do you achieve the diversity? There are hundreds of thousands of different antibodies in the human system. If we had a gene for every single different light chain and every heavy chain, you know, our DNA would be completely swamped by being dedicated to the genetic material for antibodies. So instead there is a particular system which provides little portions of the DNA structure that are in little pieces of variable components that can get zipped together through transcription and splicing events to give you a bunch of antibodies that have different variable regions. And this is what's known as the VDJ system. And you'll hear more about that from Professor Martin. So basically what you want to think about is it's a combinatorial system to assemble little pieces of DNA into a larger structure to give you antibody combining sites that can recognize virtually any target. Anyone got any questions here? I'm seeing a few worried faces.
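The combinatorial arithmetic behind V(D)J recombination is worth seeing explicitly: a heavy chain picks one V, one D, and one J segment, a light chain picks one V and one J, and the counts multiply. The segment numbers below are rough, often-quoted human figures used purely for illustration, and they ignore junctional diversity, which multiplies the total much further.

```python
# Rough combinatorial count of antibody diversity from segment choice alone.
# Segment counts are approximate textbook-style figures, not exact values.

v_heavy, d_heavy, j_heavy = 40, 25, 6   # heavy-chain V, D, J segments
v_light, j_light = 70, 9                # light-chain V, J segments (pooled)

heavy_combos = v_heavy * d_heavy * j_heavy
light_combos = v_light * j_light
antibody_combos = heavy_combos * light_combos   # any heavy with any light

print(f"heavy-chain combinations: {heavy_combos}")
print(f"light-chain combinations: {light_combos}")
print(f"paired combinations:      {antibody_combos}")
```

So even with only a few hundred gene segments, segment choice alone yields millions of distinct combining sites, which is why the genome is not "swamped" by antibody genes.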
Somebody ask me a question if you feel I could clarify a component of this or are you OK with this? Anybody? OK, I'll move forward. So we talked about antibodies. When you get a population of B cells that produce antibodies to a particular target, these may be what are known as polyclonal-- a polyclonal-- sorry, I should have written that up here-- a polyclonal antibody, as might be suggested by the name, is an antibody-- let's say it recognizes a molecular entity. So the antibodies-- I'm going to draw them as little y shaped molecules-- may recognize different parts of the antigen. So antibody A, B, C recognize different parts of the antigen. Those would be polyclonal. It would be a mixed bag of antibody molecules that shows specificity for a target. But people also tend to use a great deal of monoclonal antibodies, because they are a lot more specific. So if you had a selection of polyclonal antibodies, a monoclonal antibody would be a single population that recognizes a single antigen or epitope in your antigenic molecule. And the way those are made is through the engineering method where you fuse spleen cells with myeloma cells. And then you get a hybrid-- what are known as hybridomas that produce very specifically just monoclonal antibodies. So that's a bit of background there about the antibodies. You'll see it on the rerun when Professor Martin talks. But I just wanted to give you a bit of exposure to this. So let's now look at how antibodies can be useful. So let's say you want to visualize in a cell-- let's move straight to a real targeted application. We want to make an antibody that might recognize actin and a different antibody that might recognize tubulin to take a look at cells through the use of antibody structures. The way you generate antibodies is through laboratory animals. And very commonly we use either mice or rabbits for antibody production. The rabbit is used when you need a lot more antibody material. 
The mouse will satisfy for some experiments. So the way you make antibodies is by injecting. So this would be the foreign agent or antigen that you want to make an antibody to, which would normally bind at the variable region of this immunoglobulin molecule. You inject the mouse with that antigen. Excuse me-- that's a rabbit. You know, I haven't been in biology very long. But I can tell that's a rabbit, right? Anyway, so you inject the rabbit with a human protein. If you-- what would happen if you injected the rabbit with rabbit actin or with rabbit tubulin? What would you-- would you expect to see a response? No, right? Yeah. Because the organisms are adapted not to recognize their own proteins unless there's some disorder like an autoimmune disease. So you would inject the rabbit with a human epitope, so for example human actin, generate antibodies with a specificity for actin, and then you would label those antibodies with a fluorescent marker, so you could track it or follow it through chemistry. Alternatively, you might want to make a different antibody for tubulin. And then you would differentiate the antibody against tubulin from the antibody against actin by labeling it with a different color fluorophore. So you've really got two types of macromolecule that can interact with a fixed cell and recognize those two macromolecules within the cell. So what's important here is that you don't use the protein from-- if you want to study human cells, you use antibodies produced in rabbit or mouse. If you want to study rabbit cells, you would produce the antibodies in a different organism. You're not going to produce the antibodies in the same host. Nowadays-- yes. AUDIENCE: How do you know that you have the correct antibody, like if that is the right one? PROFESSOR: Right. OK, so there's a lot of screening that takes place. So you inject the mouse or rabbit. And then you collect the serum.
And you look for whether the serum gets enriched and enriched in antibodies that recognize the target. So you'll see an increase in what's known as the titer. And then once you get a high titer, you can just collect the serum, or, if you want to make monoclonals, you'd unfortunately have to sacrifice the animal to get the spleen cells. But then you collect the antibodies through an affinity method that's directed just at antibodies. Then you've got your population. And you can throw a fluorophore dye at it and chemically label it. So it's a good point there. There's a lot of work being done now with antibodies from different organisms. In fact, you'll see them from camels. And there are also ones from shark. And the reason why they're kind of interesting is that they sort of have mini antibodies that are much more useful for technology. So let's see what we can do here. We can do fluorescence experiments. In order to do this with an antibody-- and this is going to highlight a shortcoming of antibodies-- if you take a cell and you want to observe different proteins in that cell, you have to fix the cell to make it permeable. Why do I need to make the cell permeable in order to use these antibody reagents? Do you think the antibodies are just going to cross-- DAPI gets into a cell easily. But what about antibodies? Can they cross the plasma membrane to get into the cell to label a target? What do you think? Who says yes? Who says no? Good. Thank goodness. OK. You guys don't say much. But I know you know the answer here. They just can't float into cells. They're too large to get into cells. So you have to fix the cells on a glass slide and permeabilize them, for example, with methanol, so that the antibodies can gain access to all parts of the cell. So here is a bright field view of cells. That's a little bit more specific. That's OK too. But this is what I really want to show you.
This is what you could achieve with DAPI and an antibody to the actin and an antibody to tubulin. And you can look at the various colors of the fluorescence emission. And you see here what you've got is an anti-actin antibody with a red dye. And you can see that at the perimeters of the cells. You've got an anti-tubulin antibody with a green dye. And you can see the filamentous structure there. And then you've got DAPI staining bright blue where the nuclei are. So you have three unique labels that fluoresce at different wavelengths and you can directly pinpoint things. So fluorescence is extremely valuable for looking at biological systems, because we don't have a lot that fluoresces in the body. So a cell in general, if you irradiate it with light, you won't see any major fluorescence at all. So these reagents that you use to study biology, if they fluoresce, you've got unique signals where you can look at really complicated cells that may have thousands and thousands of proteins, sugars, and nucleic acids, and very specifically see things by fluorescence, because the fluorescence is a unique signal in biology relative to the intrinsic fluorescence of macromolecules. We have tryptophan and tyrosine, a couple of amino acids. They fluoresce, but it's so dim. It's nothing like this. You wouldn't see those kinds of signals at all. It's really what we call extrinsic fluorophores, fluorophores from outside, that shine very, very brightly. Many of these fluorophores shine so brightly they can be used to look at single molecules-- Professor Martin described to you single-molecule DNA sequencing. That actually exploits fluorophores that are so bright that you can see just a few of them in one place very, very clearly. OK, how am I doing? I just want to actually leave you with something that's another technology.
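The three-label picture just described (red anti-actin, green anti-tubulin, blue DAPI) can be sketched in a few lines of code. This is a toy illustration, not real imaging software: each "image" is just a list of rows of intensity values, and the merge pairs the channels up pixel by pixel the way a fluorescence viewer overlays separately imaged channels.

```python
def merge_channels(red, green, blue):
    # Combine three same-sized single-channel images (lists of rows)
    # into one image whose pixels are (R, G, B) tuples.
    return [[(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
            for row_r, row_g, row_b in zip(red, green, blue)]

# Toy 2x2 "images": each label is bright in a different region.
actin   = [[1.0, 0.0], [0.0, 0.0]]  # red channel (cell perimeter)
tubulin = [[0.0, 1.0], [0.0, 0.0]]  # green channel (filaments)
dapi    = [[0.0, 0.0], [1.0, 0.0]]  # blue channel (nuclei)

composite = merge_channels(actin, tubulin, dapi)
print(composite[0][0])  # (1.0, 0.0, 0.0): a pure-red (actin) pixel
```

Because each label fluoresces at a distinct wavelength, the three channels stay independent, which is exactly why the composite can be decomposed back into its labels.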
So you could almost picture-- with all of what we've seen so far, you could almost target any cell with an antibody that's specifically raised to a particular protein that's within the cell. In the next class we'll discuss what the limitations of that are. But we've already talked about the fact that we have to use antibodies with fixed cells, which are not living anymore. So they are really dyes that can only be used in that way. The other place you can use fluorophores is for labeling DNA and for looking at DNA. And a particularly important technology is known as a DNA microarray. Has anybody heard of these? Yeah. So DNA microarrays absolutely exploit the complementarity of DNA sequences. So if you want to probe for a particular sequence of DNA, you would make the complementary strand and label it with a fluorophore. And you could, out of thousands of DNA stretches, literally light up the stretch of DNA that's complementary to your target. And so these DNA microarrays-- what you see here is just the size of a microscope slide. On this slide, through array technologies, you can literally spot 40,000 distinct sequences of DNA in grids. So these DNA microarrays can be used for profiling genetic material-- not just DNA, but RNA, and we'll see how at the beginning of the next class-- in order to probe for particular stretches of DNA that might be disease related and have single nucleotide polymorphisms. So in order to just give you a little bit of a warm-up to that, I'm going to put a link on the website. It shows you a virtual run of a DNA microarray experiment and how you can use it to profile diseased cells versus healthy cells. And then at the beginning of the next class, I'll describe how you get information out of DNA microarrays. But at the end of the day, you're always using the fluorophores as the probes for where certain things are, OK.
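The complementarity idea behind a microarray probe is easy to sketch. Assuming standard Watson-Crick pairing, the probe you would spot for a given target is its reverse complement (the sequence below is made up):

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    # The spotted probe that will hybridize to `seq`: complement each
    # base, then reverse, because the two strands pair antiparallel.
    return "".join(COMPLEMENT[base] for base in reversed(seq))

target = "GATTACA"
print(reverse_complement(target))  # TGTAATC
```

A fluorophore attached to the labeled strand then lights up only the grid spot whose sequence matches, which is what lets one slide interrogate tens of thousands of sequences at once.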
And the thing to remember here. Fluorescence is a magnificent tool. We can use fluorophores on their own. We can use fluorophores as attached to antibodies. And the DNA microarray experiments show you how you can use fluorophores attached to DNA sequences, OK. And I'll see you in the next class.
MIT 7.016 Introductory Biology, Fall 2018. Lecture 17: Genomes and DNA Sequencing.
PROFESSOR: And today I'm going to talk about DNA sequencing. And I want to start by just sort of illustrating an example of how knowing the DNA sequence can be helpful. So you remember in the last lecture, we talked about how one might identify a gene through functional complementation. And this process involved making a DNA library that had different fragments of DNA cloned into different plasmids, and then involved finding the needle in the haystack, where you find the gene that can rescue a defect in a mutant that you have. So if this line that I'm drawing here is genomic DNA, and it could be genomic DNA from, let's say, a prototroph for LEU2, the leucine gene. So this is from a prototroph. Then you could cut up the DNA with EcoRI. And if there is not a restriction site in this LEU2 gene, you get a fragment that contains the LEU2 gene. And then you could clone this into some type of plasmid that replicates in the organism that you're introducing it into and propagating it in. And so that would allow you to then test whether or not this piece of DNA that you have complements a LEU2 auxotroph, OK? Now one thing I want to point out is that because these EcoRI sites, these sticky ends, would recognize this EcoRI end or this EcoRI end, you can imagine that this gene-- if the gene reads this way to this way-- it could insert this way into the plasmid. Or it could insert in the opposite direction. So it could be inverted. So this would have some sort of origin of replication and some type of selectable marker. But if you have the same restriction site, it can insert one way or the opposite way. That's just one thing I wanted to point out. Now let's say rather than leucine, you're interested in cyclin-dependent kinase, and you had a mutant in CDK and you had the sequence of your yeast CDK gene. Well, rather than having to dig through a whole library of pieces of DNA for the CDK gene, basically you're sort of fishing for that needle in the haystack.
If you knew the sequence of the human genome, you'd be able to identify similar genes by sequence homology. And you could then take a more direct approach, where you take-- let's say you have a piece of human DNA now, double stranded DNA, and it has the CDK gene. You could take human DNA with this CDK gene. And you have unique sequence around the CDK gene, which would allow you to denature this DNA. And if you denature the DNA, you'd get two single strands of DNA. And you could then design primers that recognize unique sequences flanking the CDK gene. So you could imagine you'd have a primer here and a primer here. And then you could use PCR to amplify specifically CDK gene from, it could be the genome or from some library. And then you get this fragment here, which includes CDK. So knowing the sequence of the genome would allow you to more rapidly go from maybe a gene that you've identified as being important in one organism, and find the human equivalent that might be doing something similar in humans. So this step here is basically PCR. And let's say the CDK gene had restriction sites. Let's see, we'll say restriction site K and A here. Then if you have these restriction sites in your fragment of DNA, you can then digest or cut that piece of DNA with these restriction endonucleases. And then you'd get a fragment of CDK that has K and A sticky ends. We'll pretend that both of these have sticky ends. And now you have unique sticky ends between K and A. And you might have a vector that also has these two sites. And you could digest this vector with these two enzymes. And that would allow you to insert the specific gene in this plasmid. And if you have two unique sites, because K only recognizes K here and A only recognizes A, then it will ligate in. But you can do it with a specific orientation because you have two different restriction sites. So I hope you all see how it's with one restriction site versus two. All right. 
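The PCR step above, amplifying whatever lies between two unique primer sites, can be sketched as string matching. This is a simplification (both sites are given in top-strand coordinates, and the sequence and primer sites are made up):

```python
def pcr_amplicon(template, fwd_site, rev_site):
    # The amplified product runs from the start of the forward primer's
    # binding site to the end of the reverse primer's site.
    start = template.find(fwd_site)
    stop = template.find(rev_site, start + len(fwd_site))
    if start == -1 or stop == -1:
        raise ValueError("primer site not found")
    return template[start:stop + len(rev_site)]

# Made-up genomic sequence with a "CDK gene" between the primer sites.
genome = "TTTTGCACGTCGATCGATCGAAGGTTTT"
print(pcr_amplicon(genome, "GCACG", "AAGG"))  # GCACGTCGATCGATCGAAGG
```

The key point mirrors the lecture: because the flanking sequences are unique, the primers pick out exactly one region of the genome, and everything outside the two sites is left behind.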
Now let's say you want to do something more complicated than this. Let's say rather than just identifying the gene that's involved in cell division, you want to engineer a new gene, in order to determine where this particular protein, CDK, localizes in the cell. So we have CDK, which could be from yeast or human, it doesn't matter. And you want to engineer a new protein, basically, that you can see. So remember Professor Imperiali introduced green fluorescent protein earlier in the year. And this green fluorescent protein is from a gene from jellyfish. So now we could, using what I've told you, reconstruct or engineer a gene that has DNA from three different organisms, in order to make a CDK variant that we are able to see in the cell. So remember, a green fluorescent protein is like a beacon, if it's attached to a protein. If you shine blue light on it, it emits green light. And so you can use a fluorescent microscope in order to see it. In this case, let's say there's also another restriction site here, R. And let's say you have a fragment of GFP that has two restriction sites, A and R. You could then cut this fragment and this fragment with these restriction enzymes A and R. And you could insert GFP at the C terminus of the CDK gene. So you could go and have a gene that has CDK GFP inserted inside a bacterial vector. Now which one of these junction sites do you think would be most sensitive in doing this type of experiment? So there are three junction sites. There's this one, this one, and this one. Which is the one you're probably going to put the most thought into when you're doing this experiment? Yes, Miles. AUDIENCE: The A? ADAM MARTIN: The A site. Miles is exactly right. This one is going to be important. And why did you choose that site? AUDIENCE: Of the three sites, two are half insert, half originals [INAUDIBLE].. But at A, both sides of it are inserts. So [INAUDIBLE] carefully. 
ADAM MARTIN: And if you're trying to make a fusion protein, what's going to be an important quality of this? Malik, did you have a point? AUDIENCE: Well, they try to [INAUDIBLE] we'd have to make sure that the [INAUDIBLE].. ADAM MARTIN: Excellent job. So Malik just pointed out two really important things. To make this a fusion protein, you have two different open reading frames. These two open reading frames have to be in frame with each other. So this junction here has to be in frame, where GFP is in frame with CDK, meaning that the triplet codons for GFP are read in the same frame as CDK. Also, you want to make sure there's no stop codon here. Because if you had a stop codon here, you're just going to make a CDK protein. And then it's going to stop and then you won't have it fused to GFP. And you guys will work through more of these in the homework. So you'll be able to get a sense of it. So now for the remainder of this lecture and also for Monday's lecture, I want to go through a problem with you. Basically, if you have a given disease that's heritable, how might you go from knowing that disease is heritable to finding out what gene is responsible for that given disease? And this is going to involve thinking about different levels of resolution, in terms of maps. So the highest resolution map you can have for a genome is the sequence. You can have the full nucleotide sequence of a genome. And that's the highest possible resolution because you have single nucleotide resolution as to what every single base pair is. But that's like knowing your apartment number and your street number and basically knowing everything. But starting out, you might want to know what continent it's on, or what country it's in. And so you first have to narrow down the possible locations for a given disease gene. And that will, at first, involve establishing what chromosome and what region of a chromosome a given disease allele is linked to.
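Going back to the fusion construct for a moment: the two requirements Malik pointed out, same reading frame and no stop codon before GFP, can be checked mechanically. A minimal sketch (the sequences below are made up, not real CDK or GFP):

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def junction_in_frame(upstream_orf):
    # For a CDK-GFP style fusion, everything upstream of the GFP insert
    # must preserve the frame (length a multiple of 3) and contain no
    # in-frame stop codon, or translation ends before reaching GFP.
    if len(upstream_orf) % 3 != 0:
        return False
    codons = (upstream_orf[i:i + 3] for i in range(0, len(upstream_orf), 3))
    return all(codon not in STOP_CODONS for codon in codons)

print(junction_in_frame("ATGGCAAAA"))  # True: in frame, no stop
print(junction_in_frame("ATGTGAAAA"))  # False: in-frame TGA stop codon
print(junction_in_frame("ATGGCAAA"))   # False: length 8 shifts the frame
```

Either failure mode produces the same outcome at the bench: CDK is made, but the GFP beacon never is.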
And that involves making essentially a linkage map, where you establish where a disease gene is located based on its linkage to known markers that are present in the genome. Now this is going to require that you remember back two weeks ago, to when we talked about linkage and recombination. And you'll recall that we were looking at the linkage between genes in flies and genes in yeast. One difference between that type of linkage mapping and human linkage mapping is we don't have really clear traits that are defined by single genes. You can't just take hair color and map the hair color gene to link it to a disease gene. Because hair color is determined by many, many different genes. So in fruit flies, you can take white eyes and see if it's connected with yellow body color because both of those are determined by single genes. So we need something other than just having phenotypic traits that we can track. We need what are known as molecular markers to be able to perform linkage mapping. And so what we need in these molecular markers-- well, just think about if we wanted to determine the linkage between the A and B genes. And if you did this cross, would you be able to determine linkage? Georgia, you made a motion that was correct. Tell me. Why did you shake your head no? AUDIENCE: They'd all be heterozygous. ADAM MARTIN: Yeah, they'd all be heterozygous. Because this individual has the same allele on both chromosomes, you're not going to be able to differentiate one chromosome from the other. And so the point I want to make is that in order to see linkage, what you need is variation. So we need to have variation. And another term for genetic variation is polymorphism. So we need polymorphism, or genetic variation, between these molecular markers. We also need genetic variation in the disease. But we have that. We have individuals that are affected by a disease and individuals that are not affected by a disease. So we have variation in alleles there.
But in order to map it with a molecular marker, to map linkage to a molecular marker, you also need variation here. So the problem with this cross is here you need to have a heterozygote. There needs to be variation in this individual, where both of these alleles are heterozygous. So now I want to talk about some of these molecular markers that we can use, and how they vary between individuals and between chromosomes. Now this is going to be maybe the lowest resolution map. But I'm talking about this linkage map here. And you can see highlighted at the bottom here are various types of polymorphisms that we can use to link a given disease allele to a specific chromosome and a specific place on the chromosome. So I'll start with the first one, which is a simple sequence repeat. It goes by many names. But I will stick with what's on the slide. So a simple sequence repeat is also known as a microsatellite. So you might see that term floating around, if you're reading about this. And what a simple sequence repeat is, as the name implies, it's a simple sequence. It could be a dinucleotide, like CA. And it's just a dinucleotide that's repeated over and over again. So on a chromosome, you might have a unique sequence, which I'll just draw as a line. And then you could have a CA dinucleotide that's repeated some number of times, N. And then that's followed by another unique sequence. And that's what's present in it. So that would be one strand. And then in the opposite strand, you'd have a unique sequence, the complement of CA, which is GT, and then, again, unique sequence. And so there's variation in the number of repeats of the CA. And so there's polymorphism. So we can use this to establish linkage between this marker and a phenotype, like a disease phenotype. So how might you detect the number of repeats that are present here?
So one hint that I gave you is that the sequence here is unique and the sequence here is unique. So is there a way we can leverage that unique sequence to determine whether there's a difference in the number of repeats? What's a technique we discussed that involves some component of the technique recognizing a unique sequence? Yeah, Natalie? AUDIENCE: CRISPR Cas9. ADAM MARTIN: Well, CRISPR Cas9 is a possibility. Jeremy, did you have an idea? AUDIENCE: PCR? ADAM MARTIN: PCR-- so it's true. You could get it to recognize that. But then you have to detect it, somehow. So what's more commonly used is PCR. Those are both good ideas. But using PCR, you could design a primer here and a primer here. And you could amplify this repeat sequence. And the number of repeats would determine the size of your PCR fragment. So if you did PCR, then you'd get a PCR fragment that has the primers on each end, but then has a certain size based on the number of repeats. So in that case, we need some sort of tool that enables us to determine the size of a particular DNA fragment. And so I'm going to just introduce to you one such tool, which is gel electrophoresis. And gel electrophoresis involves taking DNA that you've generated, by either PCR or by cutting up DNA with a restriction enzyme, and loading it in a gel. Maybe it's composed of agarose. It could be composed of polyacrylamide. And then because DNA is negatively charged on the backbone, if you run a current through it, such that the positive electrode is at the bottom, then the DNA is going to snake through this gel. Now we'll do a quick demonstration, if you two could come up. I need one volunteer. Ori, find 10 of your friends and bring them down. All right. That's probably good. Yeah. All right, Hannah, why don't you-- you guys have to link up, OK? Stay over here. We'll start at this end. This is the negative electrode over here. The positive electrode is going to be down there.
And Jackie is going to be our single nucleotide. You guys link like-- yeah. You don't have to do-si-do, or anything like that. All right. Now what I want you guys to do is I want you to slalom through these cones like it's all agarose gel. So that you're going towards the other side. And I'm going to turn on the current now. So go. All right, stop. All right. See how the shorter DNA fragment is able to more easily navigate through the cones and get farther. So it was somewhat rigged. I know. But I just needed some way to make sure you always remember that the shorter nucleotide, or the shorter fragment, is going to migrate faster. You guys can go back up. Thank you for your participation. Let's give them round of applause. [APPLAUSE] All right. So what you just saw is that the longer DNA fragments, they're going to be more inhibited by moving through the gel. And so they're going to move slower and thus, not move as far in the gel. Whereas, the small fragments are going to move much faster because they're able to maneuver their way through this gel much more quickly. So there's going to be an inverse proportionality between the size of the DNA chain and its rate of movement. You're always going to see the shorter DNA fragment moving faster. So what one of these gels actually looks like is shown here. So this is a DNA gel that's agarose. And DNA has been run in these different samples. And what you're seeing is this gel is subsequently stained with a dye, like ethidium bromide, which allows you to visualize the individual DNA fragments. And so a band on this gel indicates a whole bunch of DNA fragments that are all roughly the same length. So essentially, you can measure DNA length using this technique. What's over here at the end of the gel, this is probably some sort of DNA ladder, where you have DNA fragments of known length that you can use to calibrate the length of these bands over here. So this is how you measure DNA length. 
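The inverse relationship between fragment size and migration that the cone demonstration showed can be captured in a couple of lines. As an illustrative stand-in, the microsatellite PCR products from earlier are given made-up flank lengths of 40 bp and 30 bp:

```python
def gel_order(fragment_sizes_bp):
    # Smaller fragments snake through the gel faster and run farther,
    # so sorting by increasing size lists the bands from the bottom of
    # the gel (farthest-run) up to the well.
    return sorted(fragment_sizes_bp)

# PCR products across a (CA)n microsatellite with n = 25, 12, and 18
# repeats: size = left flank + 2*n + right flank (flanks are made up).
sizes = [40 + 2 * n + 30 for n in (25, 12, 18)]
print(gel_order(sizes))  # [94, 106, 120]: the 94 bp band runs farthest
```

Calibrating those band positions against a ladder of known lengths, as on the slide, is what turns migration distance back into base pairs.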
And we're going to use it over and over again, as we talk about DNA and sequencing. So now, let's think about how this is going to help us establish linkage between a particular marker in the genome and a genetic disease. So if we think about these microsatellite repeats, I told you they're polymorphic. They exhibit a lot of variation in size. And so here's an example showing you a female who has two intermediate sized microsatellites. And if you look at this-- if you did PCR and measured the size of these, you get two different bands because there are two different alleles of different length here. So you can see this individual has two intermediate length repeats. And this person has had children with an individual that has a short and a long microsatellite. And you can see that on the gel, here. Now this female is affected by some disease. And these two individuals have children. And you can see that a number of those children are affected by the disease. So what mode of inheritance does this look like? If you had your choice between autosomal recessive, autosomal dominant, sex linked dominant, and sex linked recessive, what mode of inheritance is this looking like? Oh, Carmen. AUDIENCE: Autosomal recessive. ADAM MARTIN: Autosomal recessive? Why do you go with recessive? Yeah, go ahead. AUDIENCE: Because there is a male that's affected. But not both of the parents are affected. So it seems like the father is heterozygous and the mother is homozygous recessive. ADAM MARTIN: That's possible. That's exactly the logic I want to see. Is there another possibility? Yeah, Jeremy. AUDIENCE: Autosomal dominant. ADAM MARTIN: It could also be autosomal dominant. So you're right. You're right. If this was not a rare disease, then that male could be a carrier and could be passing it on to half the children. So that's good. You'd essentially need more information to differentiate between autosomal recessive and autosomal dominant.
For the purposes of this, we're going to go with autosomal dominant. And what you see is that you want to look at the affected individuals and see if the disease phenotype is linked, or connected, with one of these microsatellite alleles. So we basically PCR DNA from all these individuals. And if you look at who is affected, each one of the individuals has this M double prime band. And none of the unaffected individuals has it. So obviously, it would be better to have more pedigrees and more data to really establish the significance of this linkage. But this is just a simple example, showing what you could possibly see if you have one of these molecular markers linked to a particular disease allele. So that kind of establishes the principle. Now let's think about what are some other molecular markers that are possible. So another type of marker, and this is the most common one, if I go here-- so here, you see, is a linkage map. And you see most of these bands are green. And the green markers, here, are what are known as Single Nucleotide Polymorphisms, or SNPs. So single nucleotide polymorphisms-- and this is abbreviated SNP. And what a single nucleotide polymorphism is, is it's a variation of a nucleotide at a single position in the genome. So it's just a one base pair difference at a position. So there's variation of a single nucleotide at a given position in the genome. And because that's a pretty general definition, there are tons of these in the genome. Now one thing to think about is you could have a mutation in an individual that creates a SNP. So you could have a de novo formation of a SNP. But if you have a SNP and it gets incorporated into the gametes of an individual, then that variant is going to be passed on to the next generation. So this is something that could occur de novo. But it is also heritable.
And if it's heritable, then you can follow it and use it to determine if a given variant is linked to a given phenotype, like a disease. So to identify a single nucleotide polymorphism, it's helpful to be able to sequence the DNA. And I'll talk about how we could do that in just a minute. But before I go on, I just want to point out a subclass of SNPs that can be visualized without sequencing. And these are called restriction fragment length polymorphisms. So restriction fragment-- so it's going to involve some type of restriction enzyme digest length polymorphism. It's a long word. But it's abbreviated RFLP. And what this is, is it's a variation of a single nucleotide. But this is a subclass of SNP. Because this is when the variation occurs in a restriction site for a restriction enzyme. So if you remember your good friend EcoR1, EcoR1 recognizes the nucleotide sequence GAATTC. And EcoR1 only cleaves DNA sequence that has GAATTC. So if there was a single nucleotide variation in the sequence, such that it's now GATTTC, or something like that, that destroys the EcoR1 site. And so EcoR1 will no longer be able to recognize this site in the genome and cut it. So you could imagine that if you had one individual in the genome having three EcoR1 sites, if you digest this region, you'd get two fragments. But if you destroyed the one in the middle, then if you digested this piece of DNA, then you'd only get one fragment. And that's something. Because it results in different sizes of fragments, that's something you can see just by doing DNA electrophoresis. And maybe you would use some method to detect this specific region, so that you're not looking at all the DNA in the genome, but you're establishing linkage to this specific area. You could use PCR. You can have PCR primers here and here. And you could then cut with EcoR1. In one case, you'd get two fragments. In this case, you'd get two fragments. 
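The two-fragment versus one-fragment contrast can be sketched directly: an amplified region with an intact internal EcoRI site cuts into two pieces, while the allele whose SNP destroys the site stays as one. The sequences below are made up; the single-base change mimics the GAATTC-to-GATTTC example.

```python
ECORI_SITE = "GAATTC"

def digest_fragment_count(amplicon):
    # Each intact EcoRI site inside the linear PCR product adds one
    # cut, so the digest yields (internal sites + 1) pieces.
    return amplicon.count(ECORI_SITE) + 1

# Made-up PCR products across the polymorphic position: a single-base
# change (GAATTC -> GATTTC) destroys the internal EcoRI site.
with_site    = "CCCCGAATTCCCCC"
without_site = "CCCCGATTTCCCCC"

print(digest_fragment_count(with_site))     # 2 fragments
print(digest_fragment_count(without_site))  # 1 fragment
```

On a gel, those two outcomes give visibly different band patterns, which is why this subclass of SNP can be scored without any sequencing.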
In this case, if you amplified this region of the genome and cut with EcoR1, you'd only get one fragment. So you'd be able to differentiate between those possibilities. Yes, Malik. AUDIENCE: When you use PCR, are there [INAUDIBLE]?? ADAM MARTIN: What's that? AUDIENCE: Are there [INAUDIBLE]? ADAM MARTIN: Oh. You're saying what causes it to stop? That's a great question, Malik. Yeah. So initially, it's not going to stop. That's absolutely right. But because every step, each time you replicate, it's then primed with another primer. So you'd replicate something like this that's too long. But then the reverse primer would replicate like this. And it would stop. So if you go back to my slide from last lecture, look through that and see if it makes sense how it's ending. Because if you do this 30 times, you really will enrich for a fragment that stops and ends at the two primers, or begins and ends at the two primers, I should say. Good question. Thank you. All right. Now, let's talk about DNA sequencing. Because as I showed you, obviously, these SNPs, because there are so many of them, are probably the most useful of these markers to narrow in on where your disease gene is. And to detect a SNP, we need to be able to sequence DNA. So I'm going to start with an older method for DNA sequencing, which conceptually, is very similar to how we do DNA sequencing today. And so it will illustrate my point. And then at the end, I'll talk about more modern techniques to sequencing. So the technique I'm going to introduce to you is called Sanger sequencing. And that's because it was identified by an individual named Fred Sanger. And I'm going to just take a very simple DNA sequence, in order to illustrate how Sanger sequencing works. So let's take a sequence that's really simple. This is very, very simple, and then more sequence here. So let's say we want to determine the nucleotide that's at every position of this DNA fragment. 
So one way we could maybe conceptually think about doing this, is to try to let DNA polymerase tell us where given nucleotides are. And if we're going to use DNA polymerase, what are we going to need, in order to facilitate this process? Yes, Rachel. AUDIENCE: [INAUDIBLE]. ADAM MARTIN: You're going to need nucleotides, definitely. So we're going to need nucleotides. What else? To start, what are you going to need? Miles? AUDIENCE: Primer. ADAM MARTIN: You're going to need a primer, exactly. Good job. So you need a primer. So here's a primer. And now, we're going to try to get DNA polymerase to tell us whenever there is a given nucleotide in this DNA sequence. And so think with me. Let's say we were able to get DNA polymerase to stop whenever there was a certain nucleotide. So if we go through just a couple nucleotides, let's say, at first, we want DNA polymerase to stop whenever there's an A. So let's say there was a possibility it would stop at this A. If it's stopped at this A, you'd generate a fragment of this length. But if it read on through that A, there's another possibility that it would stop at this A. So we're kind of looking at when these are stopping. And the final possibility is it goes on and stops at this A. So if this DNA polymerase stopped only at As, you'd get fragments that are these three discrete lengths. Now let's consider another possibility. So pink here is stop at A. And in blue, I'm going to draw what would happen if it stopped at T. So they all start from the same place. If it stopped at T, it would just stop one nucleotide beyond this A in this simple sequence. So in blue here, this is stop at T. But if it's just a possibility, it stops. And some of the polymerases could go beyond this T and go to the next T and stop here. And again, this would be one nucleotide length longer than this pink one, here. And the final one would-- I'll just draw it down here-- would get out to this last T, here. 
So what you see is if we could get DNA polymerase to stop at these discrete positions, we'd get different sized fragments, depending on whether it was stopping at one nucleotide versus the other nucleotide. You all see how this is resulting in different fragment lengths? Yes, Andrew. AUDIENCE: How would you create a pattern [INAUDIBLE]?? ADAM MARTIN: There are companies now. You can basically take nucleotides and synthesize these primers chemically, not using DNA polymerase. AUDIENCE: I'm saying how would you know what primer to use, if you don't know the sequence? ADAM MARTIN: Oh, in this case, you'd have to start with some sequence that you know. So in most sequencing technologies, you kind of make a DNA library, where you know the sequence of the vector. And then you'd use the vector sequence as a primer to sequence into the unknown sequence. Great question. Good job. All right. So what we need now then is some sort of tool or ability to stop DNA polymerase when there's a certain nucleotide base. And to do that, we can use this type of molecule, here, which is known as a dideoxynucleotide. Remember, for DNA polymerase to elongate a chain, it requires that the last base have a three prime hydroxyl. And what this dideoxynucleoside triphosphate is, is it's a nucleoside triphosphate that lacks a three prime hydroxyl. Here, I'll highlight that. So you see this guy? You see the bold highlighted H? There's a hydrogen there on the three prime carbon, rather than the normal hydroxyl group. So if this base gets incorporated into an elongating chain, DNA polymerase is not going to be able to move on. So this method, where you can add a certain dideoxynucleoside triphosphate to stop chain elongation, is known as a chain termination method. So you're getting chain termination. And you're getting this chain termination with a specific dideoxynucleoside triphosphate.
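The chain-termination logic can be simulated in a few lines. Here the strand being synthesized is written out directly (5' to 3', and the sequence is made up), and each reaction reports the fragment lengths at which its dideoxy base could terminate synthesis; merging the four lists in order of increasing length reconstructs the sequence, just like reading a gel from the bottom up. This is a toy sketch, not real Sanger analysis code.

```python
def termination_lengths(new_strand, dd_base):
    # One Sanger reaction spiked with a single dideoxy base: synthesis
    # can stop at every position where that base is incorporated, so
    # each such position yields a fragment of that length.
    return [i + 1 for i, base in enumerate(new_strand) if base == dd_base]

new_strand = "ATGCATTA"  # the strand the polymerase is building
for dd_base in "ACGT":
    print(dd_base, termination_lengths(new_strand, dd_base))
# A [1, 5, 8]
# C [4]
# G [3]
# T [2, 6, 7]
```

Every length from 1 to 8 shows up in exactly one of the four lists, which is why the four lanes together spell out the full sequence with no gaps.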
So these dideoxynucleoside triphosphates, if they get incorporated into the DNA, are going to halt the synthesis of that DNA strand. So if we take our example, here, this might be a reaction that has dideoxythymidine triphosphate. So if we had dideoxythymidine triphosphate in this sample and it's elongating, then when the polymerase reaches this point, there's a possibility that it will incorporate the dideoxynucleoside triphosphate. And if this is a dideoxynucleoside triphosphate, then there won't be a three prime hydroxyl. And DNA polymerase will just be like, oh, I can't go on! Because it's not going to have a three prime hydroxyl. So it's not going to be able to continue with the next nucleotide. So this is known as chain termination. So let me take you through an example, here. All right. So here's an example that you have a slide of. And again, there's a template strand, which is the top strand. And this method requires that you have a primer. And what's often done is you label the primer. So the first step is you have to denature your DNA. So you have to go from double stranded DNA to single stranded DNA. And then you mix the single stranded DNA with, first, this labeled primer, such that the primer can then anneal to the single stranded DNA. You need DNA polymerase, as I've mentioned. And as, I believe, Rachel mentioned before, you need the building blocks of DNA. So you need the four deoxynucleoside triphosphates. You always have the four deoxynucleoside triphosphates. But what's special here is you're going to spike several reactions each with one of the dideoxynucleoside triphosphates. So you spike the reaction with a tiny amount of one of your dideoxynucleoside triphosphates. So let's say you have a reaction, here. And this one here has dideoxyadenosine triphosphate. Then polymerase will elongate this strand until there's a thymidine on the template. And then there's a possibility that it will incorporate this dideoxy NTP.
And if it does, then you get chain termination. And you get a fragment of this length. But the other possibility, because there is still the deoxy form of the NTP present, is that it incorporates a deoxyadenosine triphosphate there, keeps going, and then incorporates a dideoxy ATP later on, where you have another T. And so the polymerase will essentially randomly stop at these different thymidine residues, depending on whether or not a dideoxynucleoside triphosphate is incorporated. And that means for a given reaction, one in which you have dideoxy ATP, you get a certain pattern of bands that represent the length of fragments, where you have, in this case, a thymidine base. And then you do this for all four bases, where you have four reactions, each with a different base that's dideoxy. So when you're adding these, you're going to do four reactions, one with dideoxy ATP spiked in, one with dideoxy TTP, one with dideoxy CTP, and the last with dideoxy GTP. And because these nucleotides are present at different positions along the sequence, you're going to get a distinct banding pattern for each of these reactions. And using that banding pattern, you can then read off the sequence of DNA that's present on the template strand. So this is how sequencing was done for many, many years. These days, it's been made cheaper and faster. And now what's often used is next generation sequencing. And one pain in the ass about sequencing before was that you'd use a lot of radioactivity. Your primer would be radioactive, so that you could detect these bands. Now, everything's done using fluorescence, which makes it much nicer, I think. And so in next generation sequencing, your template DNA is attached to a solid substrate, such that it's immobilized on some type of substrate. And then you add each of the four nucleoside triphosphates. In this case, they're labeled with a dye, such that each one is a different color.
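The four-reaction logic above can be sketched in a few lines of Python. This is only a toy illustration (the template sequence is made up, and primer length is ignored): each reaction produces fragments that end wherever its spiked-in dideoxy base was incorporated, and sorting all bands by length reads out the new strand 5' to 3'.

```python
# Toy simulation of Sanger chain-termination sequencing.
# Template and primer details are invented for illustration; real fragments
# also include the primer, which this sketch ignores.

def sanger_fragments(template_3to5, ddntp_base):
    """Fragment lengths for one reaction spiked with one ddNTP.

    template_3to5: template written 3'->5', the order the polymerase copies it.
    ddntp_base: the dideoxy base (A/T/C/G) in this reaction; a fragment ends
    wherever that base is incorporated into the growing strand.
    """
    pair = {"A": "T", "T": "A", "C": "G", "G": "C"}
    new_strand = [pair[b] for b in template_3to5]  # base added at each position
    return [i + 1 for i, b in enumerate(new_strand) if b == ddntp_base]

template = "TACGGTAC"  # written 3'->5'
reactions = {base: sanger_fragments(template, base) for base in "ATCG"}

# "Reading the gel": sort every band by length; the reaction a band came from
# tells you the base at that position of the newly synthesized strand.
bands = sorted((length, base) for base, lengths in reactions.items() for length in lengths)
read = "".join(base for _, base in bands)
print(read)  # ATGCCATG, the new strand 5'->3'
```

Note that the sketch reports every possible stopping point at once; in the real reaction each individual molecule stops at only one of them, and it is the population of molecules that produces all the band lengths.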
But the dye also functions to prevent elongation, such that, again, it's this chain termination. When you incorporate one of these, the polymerase just can't run along the DNA. It incorporates one and then stops. So if you get your first nucleotide incorporated, it will incorporate one of these four. And it will be fluorescent at a certain wavelength, which you can see using a device or microscope. And then what you do is chemically modify this base, such that you remove the dye and allow it to extend one more base pair. And so you go one nucleotide at a time. And you read out the pattern of fluorescence that appears. And that gives you the sequence of DNA on this molecule that's stuck to your substrate. And you can do this in parallel. You can have tons of different strands of DNA. And you can be reading out the sequence of each one of these strands in parallel. Great. Any questions about DNA sequencing? OK. Very good. I will see you on Monday. Have a great weekend.
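The incorporate-image-unblock cycle described above can be sketched as a short loop. This is only a schematic (the dye-color assignments are invented; real instruments image millions of immobilized clusters in parallel rather than one strand):

```python
# Minimal sketch of one sequencing-by-synthesis read cycle.
# Dye colors are made up for illustration.

DYE = {"A": "green", "C": "blue", "G": "yellow", "T": "red"}  # hypothetical colors
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def sequence_by_synthesis(template_3to5):
    """Incorporate one dye-blocked nucleotide per cycle, image it, then unblock."""
    read = []
    for template_base in template_3to5:
        incorporated = PAIR[template_base]  # polymerase adds the complementary base
        color = DYE[incorporated]           # dye blocks elongation; imaging sees this color
        read.append(incorporated)           # cleaving the dye then frees the 3' end
    return "".join(read)                    # the read, 5'->3'

print(sequence_by_synthesis("TAGC"))  # ATCG
```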
MIT_7016_Introductory_Biology_Fall_2018
35_Reproductive_Cloning_and_Embryonic_Stem_Cells.txt
ADAM MARTIN: So to start out, I just wanted to mention we talk a lot about dogma in the lab. So we talk about dogma, right? There's the central dogma, which is that information flows from DNA to RNA to protein. I'm going to describe another dogma, if you will, which is that life starts as sort of a fertilized egg. So you get a fertilized egg. And this fertilized egg, which you see in the video up here, undergoes development. And as part of that development there is what is known as differentiation, where cells acquire more specialized cell types. So there's development and differentiation. And this results in adult cell types that are known as differentiated. So it results in differentiated/adult cell types with specialized functions. And you see just a few of them up on the slide above. There are, in fact, thousands of different sort of cell types that humans have. But the dogma, at least up until fairly recently-- in the past couple of decades or so or at least before 1960, 1950s-- is that this is unidirectional, that basically the development goes in one direction, where you go from the fertilized egg to the differentiated adult cell types. And so this state down here-- the differentiated adult cell types-- is fairly stable, right? I mean, if you look at your neighbor you see they don't have muscle growing on the outside of their bodies. Their skin cells stay skin cells. And that is a relatively stable state. OK. Now, last week you learned about a contradiction to the central dogma, which is the behavior of retroviruses, which have reverse transcriptase, which can sort of go backwards up this pathway, where you can get DNA from RNA. OK. And today, we're going to talk about how you can sort of overcome this unidirectional path here. And this is going to be what we will call reprogramming. But first, I want to talk about this differentiation process because it's important for us to understand this before we get into the reprogramming event.
And so the fertilized egg is capable of producing all of these different cell types. OK. So the fertilized egg will be known as totipotent, meaning it has the potential to form all cell types. And then over development, this potency goes down such that when cells differentiate into their final adult form there is a much more limited repertoire of cell types that the cell can go into. And one way of thinking about this is that if you think of this marble run here, the totipotent state is right at the top here, right? If I put a marble in the top here, it has the potential of going into any of these three sort of different fates. OK. So this is the totipotent state up at the top. And you'll see these marbles will be able to go into any of these three different states here. OK. But you see how that marble went down on this path. Some marbles go down in this path. And so you can think of cells going through development as sort of getting funneled into these distinct paths. And in this case, it's kind of random. But in development, it relies on signaling between cells and interaction between cells in the multicellular organism. So as an example, I want to tell you about early mammalian development and the differentiation of cell types in the early mammalian embryo. OK. So we start with an oocyte. That's the female gamete. And the male gamete, the sperm. So these can come together to form the zygote. If I draw a little circle in the middle, it's a nucleus. And this zygote is totipotent. And the embryo undergoes cleavage divisions, which I'll show here. You see how that zygote divided into two cells. Now it's going to divide into four. And it'll keep dividing till it's 16 probably. So you get cleavage divisions that generate more than one cell. You start with a single cell. The cleavage divisions give you more than one cell. And what you see up here now is a stage known as the blastocyst or the blastula. You can see there's a hollow sort of inner fluid-filled area here.
And you see there are cells around that fluid-filled cavity. And there's a thickening on one side of the embryo. OK. So I'm going to draw that out here. Sorry. Mine is a little flat up here. It should be perfectly spherical. And there is this thickening on one side of the blastula, which is a bunch of cells that are kind of interior on the inside of the embryo. These cells on the interior are known as the inner cell mass. And these cells are now restricted in what they can become. These cells will become the embryo proper. So they'll become the cell types that will be a part of the fetus. These cells on the outside are known as the trophoblast cells. And these cells will form part of the placenta-- the embryonic portion of the placenta. So they form part of the placenta. And they're important for this embryo being able to implant into the uterine wall. So if we think about-- this is the first example of differentiation in the early mammalian embryo because you can see, based on what I said here, these cells are becoming restricted in what they can become. So this is kind of like the first branch point in differentiation. So we get some more marbles going down here. All right. So this would be the sort of stage right before the blastula, before differentiation. And you can see these marbles-- when I let them, they will either go this way and become one fate, or they'll go the other way and be sort of directed down another potential fate. There we go. So this is the first sort of branch point, if you will, in fate determination for the mammalian embryo. And so this branch point here is different from this because once the marble goes one way or the other, it's restricted in what fate it can become. OK. So these cells here are not totipotent because they can't form the trophoblast. They can't form the part of the placenta needed by the embryo. But they do form the embryo proper. So they are still capable of many fates. 
And so these cells are known as pluripotent, which means capable of many things. OK. And so a type of cell, which I'm sure you've heard about, is called Embryonic Stem cells, or ES cells. These ES cells are derived from the inner cell mass. So these would be-- if they were taken out of the blastula-- sorry-- this is called the blastula-- if they are taken from the blastula and cultured in vitro, these would be sort of embryonic stem cells that can be propagated. And they form the embryo proper, so these are pluripotent stem cells that are still capable of forming sort of any of the cell types in the embryo-- capable of forming embryonic cell types. Now let's look at the next slide. So this branch point-- this first branch point sort of in development is associated with changes in gene expression. So there are changes in gene expression. And you're seeing one up on the slide above. The slide above is a fixed blastula. And all of the nuclei in the blastula are stained with green. But there is one gene that's stained in red. It's called Oct4. And this is a transcription factor that marks sort of pluripotency. And you can see how it's expressed specifically in these cells of the inner cell mass, which are what the embryonic stem cells are derived from. So there are clearly changes in gene expression. And one question you might have is whether or not these cells that are going to form the embryo proper have lost information such that they're unable to form the part of the placenta. And you can also ask this for an adult cell. Has an adult cell in your body lost gene content such that it's unable to make an entire organism? I'm going to tell you the answer. I want you to think about what experiment would allow you to sort of determine the answer. But I'm going to tell you the answer is that there is not a loss of gene content.
But this differentiation process is due to changes in gene expression for the vast majority of our cell types. So let's say you wanted to determine whether or not a differentiated cell had lost some sort of capability to regenerate an entire organism. What might be some type of experiment you could do? Brett. AUDIENCE: Creating conditions that will perhaps be similar to what embryonic stem cells or perhaps [INAUDIBLE] factors that allow for you to change the expression back to more than one impact to see if that would actually change it. ADAM MARTIN: All right. Great. So Brett had two really good points, I think. The first is to try to sort of reproduce conditions of pluripotency. Or you can even try to reproduce conditions of totipotency, right? In which case you'd want to sort of take the genetic material of the somatic cell and sort of put it back in this situation where it's present sort of in the cytoplasm of the zygote. And the other point that Brett made is to try to maybe express something that would induce this type of pluripotent state. If we knew exactly what the genes were that create this pluripotent state, maybe we could just express those genes and regenerate sort of a cell that moves from way down here all the way up to the top again. And, actually, what Brett suggested were the two experiments that won these two folks the Nobel Prize in 2012. So in 2012, the Nobel Prize was awarded jointly to Sir John Gurdon, right here, who incidentally has the best hair of all Nobel laureates, and also Shinya Yamanaka. And so these two folks were awarded the Nobel Prize for being able to show that mature differentiated cells can be reprogrammed to become pluripotent again, which basically gives us the conclusion that there's not a loss of gene content during the differentiation process. And the work from Yamanaka showed us several genes whose expression is critical to induce this type of reprogramming. 
And their work spanned from frogs-- so John Gurdon worked on development of frogs, specifically Xenopus laevis, which you've seen before. Shinya Yamanaka's work involved mice and also human cell lines. So I'm going to tell you about their experiments and how they sort of demonstrated this conclusion here. And the first thing I want to show you is an experiment which is very simple conceptually, though technically it's very complicated: if you were able to take a nucleus from an adult cell that's differentiated, could you get it to change by introducing it back into the egg cell or a zygote cell? So this experiment involves having an oocyte, but in this case an oocyte where the nucleus has been removed. So you take an enucleated oocyte. And you want to know if the cytoplasm of this oocyte is somehow special in a way that would allow it to reprogram the nucleus of a differentiated cell. And so you could suck up the nucleus of a differentiated cell. So a somatic cell is another way to say differentiated. So it's a somatic cell nucleus. And the experiment conceptually then is to just take this somatic cell nucleus that you've sucked up and inject it into an oocyte without a nucleus and see if the cytoplasm of this oocyte is somehow able to change the properties of the nucleus such that it now is in an undifferentiated state. So you generate an oocyte now with the somatic cell nucleus. And the question is whether the somatic cell nucleus is capable of generating the diversity of cell types that are normally present in an organism. So you could then let this go and develop. You could let it develop into a blastula, which I'm drawing here. So you could generate a blastula. And this will be a way to get embryonic stem cells derived from sort of this type of nucleus. But you could also let it grow into an entire organism. But this organism would be genetically identical to whatever organism donated this nucleus.
So because you're duplicating an organism, that is known as reproductive cloning. So this is reproductive cloning, if you go all the way to the organism. I'll show you just a little video on the nuclear transfer. [VIDEO PLAYBACK] - You see now that this drilling pipe head is going to suck-- drill a little hole into the membrane. You can maybe see a little bit of the hole right here at the next part. [END PLAYBACK] ADAM MARTIN: He's talking about this pipette here. The embryo is being held with another pipette over here. [VIDEO PLAYBACK] - You can see a bit of the hole. And now the pipette's going to go in and remove the nucleus. And if you look carefully in the pipette, you'll see a line in the nucleus, which are all the chromosomes lined up. So the nucleus is going to be squirted out now because we don't need it anymore. And then we have an enucleated egg. Now, the next step is to take a set of eggs like that-- and I'll show you two-- and then transfer into them a nucleus from another kind of cell, a fully-differentiated somatic cell. So here, the enucleated egg is set on the side. And it's held by this holding pipette on the left. There's drilling the little hole in the membrane. Here we go in. Here comes the nucleus from the right. [END PLAYBACK] ADAM MARTIN: That's from a somatic cell. [VIDEO PLAYBACK] - And these pipettes are operated with a piezoelectric device. So you can't see it here, but it's like a little jackhammer going very quickly, (TRILLING SOUND) like Woody the Woodpecker, getting in there. [END PLAYBACK] ADAM MARTIN: I don't know if I would have used Woody the Woodpecker as an analogy, but that's basically the idea. So you can take a nucleus from a somatic cell and transplant it, if you will, into an oocyte and then determine whether you can get either a blastula or an entire organism from that differentiated cell's nucleus. So this is one result from Sir John Gurdon.
And what the experiment was, in this case, was to take oocytes, or eggs, from this wild-type Xenopus laevis frog and to transplant nuclei from donor frogs that are albino. So you can see this is an elegant experiment in that you can sort of track the origin of the nucleus, because it's genetically marked with this albino sort of phenotype. So you're getting nuclei from albino tadpoles. So these are differentiated cell nuclei. And they're transplanted into wild-type recipient eggs. So normally, the wild-type frog would produce frogs that look like it-- that are non-albino. But in this experiment, Gurdon and his lab were able to get frogs that were albino. So these would be sort of clones of whatever albino tadpoles they got the nuclei from. So you see, in this case, it's the origin of the nucleus that is determining sort of the phenotype of these frogs. So that allowed them to show that the nucleus is getting reprogrammed from this albino tadpole and is able to still generate all of the cell types present in a normal organism. Yes. Brett. AUDIENCE: So this is an unfertilized egg that they were taking from the-- ADAM MARTIN: Yes. AUDIENCE: And so they're extracting all this DNA, then putting in a full set of DNA from the albinos. ADAM MARTIN: Yep. AUDIENCE: It would go on to differentiate a full set of DNA? ADAM MARTIN: Yeah. AUDIENCE: OK. ADAM MARTIN: Yeah. So, yeah, they're-- and actually, it was taking unfertilized eggs, which is the biggest trick. I think people had tried to do this in frogs before, and it failed because they were using fertilized eggs. And there's something about sort of the oocyte development that makes it better at reprogramming the nucleus. OK. So that's with frogs. So that experiment was actually done in the late 1950s, published in the early 1960s. And so it took another 40 years or so for the first mammal to be cloned. And you guys probably weren't even born yet.
But for those of us who are older, we remember this because there was a big brouhaha over Dolly the sheep, which was the first cloned mammal. I believe that's Dolly there. So this is-- I'm not sure which one is Dolly. They're both the same type of sheep. So this was done by Ian Wilmut and his group. One thing I want to point out about this is it's incredibly inefficient. If you look over here, this Dolly resulted from over 400 oocytes having this sort of nuclear transplant take place. So it's not a very efficient process. And that lack of efficiency is due to the fact that the nucleus is resisting getting reprogrammed. And this is a graph from one of John Gurdon's papers-- he wrote a review article on sort of reprogramming after he won the Nobel Prize. This is from that. And what's plotted is sort of the frequency with which transfers result in sort of a developed organism. And what's on the x-axis is the stage of the cell that's used to transplant. And so what you see is that over the course of differentiation it gets harder and harder for the nucleus to get reprogrammed to create sort of all the cell types that are normally present in an adult animal. So nuclei do become more restricted in their ability to be reprogrammed over development. But the fact that any of them are able to be reprogrammed suggests that during differentiation there is not a loss of gene content. And John Gurdon has done a lot to characterize sort of the changes in the nucleus that happen during differentiation which sort of resist this reprogramming by the oocyte cytoplasm. All right. So mechanistically, what's happening? Well, one of the people who really showed what's going on there is Shinya Yamanaka. And what his work shows is that you can take just a few genes-- it turns out it is four-- and you can induce reprogramming by just expressing these four genes in an adult differentiated cell. So in this case you have a differentiated cell.
And what he initially used were fibroblasts, which are a type of differentiated cell. And what Yamanaka showed is that you can express four transcription factors, one of them being this factor here, Oct4, which is expressed in the inner cell mass and was shown to be involved in sort of maintaining pluripotency of these inner cell mass cells. So you could take Oct4 plus another transcription factor, Sox2, which is important for pluripotency, plus two others, Klf4 and c-Myc. These are the four. So these are all transcription factors. And what they found is if you express these four transcription factors in a differentiated cell, you could get it to revert to a more pluripotent state. So this results in what is known as Induced Pluripotent Stem cells, or IPS cells, for short. And then like embryonic stem cells, these cells can give rise to different cell types. And so one of the goals of this field of developmental biology and cell reprogramming is to use this technology to replace cells that are lost in patients. So this is known as regenerative medicine. And the idea of regenerative medicine is to take ES cells or induced pluripotent stem cells from patients and to be able to culture them in vitro-- so culture in vitro-- and then to be able to direct these cultured cells into different cell fates in order to use these cells possibly to transplant them back into an individual where maybe these cell types are dying. So you could then differentiate these cells using certain biochemical signals that you can add to the media in order to create different cell types, like neurons, muscle, maybe skin. And one example of this was shown by Shinya Yamanaka, where what they did was to take human fibroblast cells, put them back in the pluripotent state by expressing those four factors, and then get these cells to differentiate into cardiac tissue, so cardiac cells. Here's a movie. So these are from adult fibroblast cells.
But now you can see they're beating sort of like cardiac tissue would. So these were made into IPS cells, cultured in vitro, and then directed into a cardiac muscle fate. So the goal would then be to use these and to transplant them back into a patient, such that if a patient had, let's say, a neurodegenerative disease and was lacking a certain type of neuron, you could then take cells, start them on the path to neuronal development, and then transplant them back into that patient. And if these cells are derived from the DNA of that patient, then there won't be transplant rejection. Because what regulates transplant rejection is the major histocompatibility complex locus. And it's polymorphic. But if you're taking a nucleus from the patient and then causing the cells to differentiate in vitro, you're taking clonal cells and putting them back in the same patient, such that they won't be rejected, ideally. All right. Now I want to try something with the remainder of our time. I want a group-- all right. Everyone on this side of the room over, I want you over here. And everyone on this side of the room, I want you on this side over here. OK. Maybe you guys can go over here so that we're more balanced. We're going to have a little debate. You can sit down. It's OK. Be in a position to talk to each other. I want you guys talking to each other. That was the goal of putting you close together. OK. So in the past couple weeks, there's been a little bit of a stir because there's a researcher in China who has claimed to have made the first gene-edited baby. Perhaps you have heard of this. You probably have because it's been all over the news. So I want you guys-- let's see. I want us to debate what are the advantages or disadvantages of gene editing or human cloning. You could talk about both. And then I want you guys to be able to present them to me. I'll write them down, and we'll have a discussion about it.
Guys, continue this discussion sort of outside class. I think it's really interesting. And it's going to be something that you're going to hear a lot about in the coming years, I'm sure.
MIT_7016_Introductory_Biology_Fall_2018
7_Replication.txt
BARBARA IMPERIALI: So we're going to get started. This is a complicated lecture to choreograph, but I'm going to do my very, very best because I think there's some pretty amazing stuff that we have to explain that is carried out in nature. And one of those things is how do we replicate the entire genome of organisms in one fell swoop almost perfectly-- sometimes there are little errors-- and fast. So we're going to refer to these numbers in a moment. But first of all, I just want to get you back in the picture of where we were at the end of the last section, the biochemistry section. And in the next few classes we're going to address issues related to the central dogma and the use of nucleic acids for information storage and information transfer. And before I do that, I just want to highlight a couple of things, just some terms that you should recall and recap from the biochemistry section about nucleic acids. So nucleic acids form complementary double strands through base pairing, and that base pairing involves hydrogen bonding between the nucleobases, one purine to one pyrimidine, and the AT base pair is worth two hydrogen bonds. The GC base pair is worth three hydrogen bonds. And those are actually pretty useful facts to remember because they tell us about the stability of double-stranded DNA, where it's easier to tease it apart, and other characteristics. So it's kind of a useful thing to keep in mind. So the base pairing between the purine and the pyrimidine sets the exact register down the double-stranded DNA. The backbone is the exact distance apart because you always combine a small pyrimidine with a much larger purine. So that's something you should feel comfortable about. The strands are organized in an anti-parallel orientation, so one strand runs 5 prime to 3 prime, and the other runs 3 prime to 5 prime.
And I showed you last time an image that allows us to state quite clearly that the anti-parallel orientation is significantly more stable than the parallel orientation because you can't make all those great hydrogen bonds well in the parallel organization, apart from everything else that becomes complicated. Another important thing that you're going to have to remember, especially in this lecture, is that we add new nucleotides to the 3 prime end of a nucleic acid. And so I'm just going to put this little guy in the corner here. This is the ribose sugar. That would be where the nucleobase is attached. And it's a deoxy sugar, so there's no substituent at that carbon. This is the 1 prime position, where the base is; the 2 prime position, where it's deoxy; the 3 prime position. And these are all prime numbers. Remember, the sugars have prime numbers. The bases themselves have numbers without primes to distinguish what we're talking about. So 3, 4, 5, and they're all prime. And when we grow the single strand of DNA, we always add new bases to the 3 prime end. That's an important numbering system. I don't like that piece of chalk. So we always grow from 5 prime to 3 prime. And that will be important to you as we go through the discussion today. And then finally, a feature of double-stranded DNA, quite unlike proteins, which sometimes you'll thermally or with pH melt out and will often have a problem reforming their reliable structure because instead of the structure refolding, they aggregate. Double-stranded DNA doesn't aggregate because it's got all the negative charges that would be quite repulsive. So it would be very difficult for DNA strands to aggregate as such because there's too much of a concentration of negative charges. That would be repulsion amongst the same type charges. So DNA can be peeled apart with heat, and it will reanneal faithfully to its partner, the complementary strand.
Once you get to six, seven, eight base pairs, that is a nice interaction between the strands, and it forms faithfully. If we have a mistake in the strands, the stability of the double strand will be a bit less, and we can measure that through physical measurements that we carry out in the lab. So we can measure the denaturation or the thermal melt temperature. So after you separate double-stranded DNA, we call this process reannealing, or it's also designated as hybridization, to form a hybrid, which is the composite of the two strands. Those terms are used fairly interchangeably. All right. So the goals in the next four classes are to show you how the structures of the nucleic acids really are purposed for the sorts of processes that they undergo. So the replication of DNA; the conversion of a strand of DNA into complementary messenger RNA once the process of protein synthesis needs to be initiated; and the final step. The first step is called replication. The second step, when you transcribe DNA into RNA, is called transcription. And then finally, when you translate messenger RNA into proteins, we're going to a completely new language, so we call it translation. And that's the basis of these lectures. There is a little bit of stuff in these lectures about more complicated issues in the mammalian cell, where we have to process the messenger RNA a bit before we can have it leave the nucleus. And I'll explain it when the time comes. So this is the lineup for the four lectures. All right. Well, I already talked to you about how when you add-- I mentioned it over here-- when you add cytidine triphosphate, GTP, ATP, TTP, these are the activated nucleotides, so they're the deoxynucleoside triphosphates. They are nucleotides because they include phosphate, but when we describe them, we'll call them nucleoside triphosphates if we're going to mention the phosphate part. I know that probably doesn't make any sense at all. Is everyone OK with that?
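The pairing and annealing rules recapped above can be expressed as two short functions. The Wallace rule used here (2 °C per A·T pair, 4 °C per G·C pair) is a standard rough estimate for short oligos and is not from the lecture, but it reflects the lecture's point that three-hydrogen-bond G·C pairs stabilize a duplex more than two-hydrogen-bond A·T pairs:

```python
def reverse_complement(strand_5to3):
    """Return the complementary strand, also written 5'->3' (strands are antiparallel)."""
    pair = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pair[b] for b in reversed(strand_5to3))

def wallace_tm(strand):
    """Rough melting temperature of a short duplex (Wallace rule, an approximation):
    2 degC per A or T, 4 degC per G or C."""
    return 2 * sum(strand.count(b) for b in "AT") + 4 * sum(strand.count(b) for b in "GC")

s = "ATGCGC"
print(reverse_complement(s))  # GCGCAT
print(wallace_tm(s))          # 2*2 + 4*4 = 20
```

A GC-rich oligo of the same length gives a higher estimated melt temperature, which is the "easier to tease apart" point made in the lecture for AT-rich regions.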
If you're just generically describing something with a phosphate in it, you call it a nucleotide. If you want to be a little bit more specific, you say nucleoside, and then you say how many phosphates. All right? And so for the building blocks, it's a nucleoside triphosphate. And believe me, I get that wrong all the time, and my students correct me. So I don't have a problem if you're not perfect with that nomenclature. So these are the building blocks for DNA, for polymerization. And I just showed you how you grow from the 5 prime end-- that number's not shown there, but it's shown over here-- and you add to the 3 prime end. And the convention, remember, when we describe a strand of DNA, because we make it 5 prime to 3 prime, we write it 5 prime to 3 prime just so that we're all consistent and we know what we're talking about. Now I want to talk about two experiments that enlist the use of isotopes because these were quite useful early on to describe some of the characteristics of replication, so some of the details of replication, and also the fact that DNA was the genetic material. So isotopes are elements that share the same number of protons and electrons, but they differ in the number of neutrons. So the common isotopes of the elements that are in all the covalent structures of the body are hydrogen, carbon, and what I'm putting as the number next to it is the common isotope, so that number designates the sum of the number of neutrons and protons. So carbon-12, phosphorus-31. I should have put nitrogen here. Let's see. Wait a minute. I'm going to put these in order because it makes more sense. Nitrogen-14, phosphorus-31, and sulfur. What's the sulfur isotope? 32. So these are the common isotopes that comprise the majority of those elements within the body. But there are lots of experiments done that use different isotopes of these atoms as tracers or markers, kind of the same thing.
But you'll see when I mention the word tracer, to me, it brings up the radioactive isotopes because we use very little of them. So when we talk about these elements, there are different isotopes that we use commonly. There's the hydrogen that has an extra neutron. That's also called deuterium. And then there's the hydrogen that is radioactive. It's metastable. It will decay and emit a radioactive particle. So this one, we would call it the heavy isotope, and this one we would call the radioactive isotope. So we could trace certain hydrogens in biomolecules by either making those hydrogens the heavier ones, or making them the radioactive ones. Carbon also has useful isotopes. Carbon-13 is the heavy isotope and carbon-14 is the radioactive isotope. So they're used quite commonly. And then things get a little bit different for nitrogen. We use a heavy isotope of nitrogen, which is N-15, just one extra neutron. But while there is a radioactive isotope of nitrogen, it's too short of a half-life to work with in the lab. It's not useful to us, so we never talk about it. But I will describe to you an experiment with the heavy N-15 when we talk about the mechanism of replication. And then for phosphorus and sulfur, the most important ones are P-32, which is a radioactive phosphorus, and S-35, which is a radioactive sulfur. There is another radioactive phosphorus that's P-33, which has a slightly longer half-life. It's kind of handy to work with if you have certain experiments. The half-lives vary a lot, but the half-lives of tritium and C-14 are long. That's why we do carbon dating. The half-life of P-32 is short, in the days time frame, and sulfur-35 is a little bit longer. But those are nuances that you don't need to worry about. Now how are these isotopes useful as tracers and markers to tell us about biology and details of biology? So what I'm going to just show you here is a particular experiment that's carried out with what are known as bacteriophages.
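The half-life remarks above can be made quantitative with the standard decay law N(t)/N0 = (1/2)^(t / t_half). This Python sketch is not from the lecture; the half-lives below are approximate literature values, and the function name is made up.

```python
# Fraction of a radioisotope remaining after t days of decay.
# Half-lives are approximate literature values.
HALF_LIFE_DAYS = {
    "P-32": 14.3,        # radioactive phosphorus: short-lived
    "S-35": 87.4,        # radioactive sulfur: a bit longer
    "H-3": 12.3 * 365,   # tritium: about 12.3 years
    "C-14": 5730 * 365,  # very long half-life -> basis of carbon dating
}

def fraction_remaining(isotope, t_days):
    """N(t)/N0 = (1/2) ** (t / t_half)."""
    return 0.5 ** (t_days / HALF_LIFE_DAYS[isotope])

# After exactly one half-life, half the P-32 remains:
print(round(fraction_remaining("P-32", 14.3), 3))  # 0.5
# After a month, most of a P-32 label has decayed away:
print(round(fraction_remaining("P-32", 30), 2))    # 0.23
# ...while a C-14 label is essentially unchanged:
print(round(fraction_remaining("C-14", 30), 4))    # 1.0
```

The contrast in the last two lines is why P-32 suits short tracer experiments while C-14's slow decay makes it useful for dating.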
So viruses infect eukaryotic cells; bacteriophages infect bacteria. So there's a commonality there. And when bacteriophages infect bacteria, just as the parallel with eukaryotic cells, they deposit material into the bacteria so they can replicate, so they can hijack the bacteria to make new bacteriophages. So an early experiment was done to ask what is the genetic material that transfers the information from the bacteriophage to the bacterial cell in order to make more bacteriophages. So they were able to demonstrate, in this Hershey and Chase experiment, that you could label protein and DNA with particular isotopes and find out what part of the bacteriophage was important for transferring information for the production of new bacteriophages. So they wanted to label the capsid protein, so the proteinaceous material that's on the outside of this lunar lander. This is a bacteriophage. It's sort of amazing how it looks. So they wanted to label the protein. But they also wanted to label the contents of the bacteriophage, the DNA, with tracer radioisotopes. So what would be the best isotopes to use if you wanted to differentiate between protein and nucleic acid? And what you want to think about is, what does protein have that nucleic acids don't have, and what do nucleic acids have that proteins don't have? Which are the elements that are important for differentiating? Yeah. Over there. AUDIENCE: Phosphorus. BARBARA IMPERIALI: Yes, phosphorus. Do you want to give a little bit more of a-- AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: OK. So the answer is we'd use phosphorus to label nucleic acids because that phosphodiester backbone is rich in phosphorus. There are a few phosphates in proteins, but they're very transient. They're part of signaling. But every nucleotide has a phosphorus in every building block. And then in proteins, they have sulfur in them where nucleic acids do not. And the sulfur is in two amino acids, cysteine and methionine.
So you can use those two tracers for the building blocks. And the way the experiment works is if you've labeled the protein with sulfur, you infect a cell, the virus replicates. And what you're going to do is then centrifuge the bacterial cells to see what's in the cell, and then you can, alternatively, label the DNA-- and in this case, I've shown it as green, but that would be the other isotope-- let it infect the cell and deposit its contents in the cell, centrifuge the cells out. And you want to know where the radioactivity is, and the radioactivity is strictly associated with phosphorus-32 because the genetic material that coded for the production of new bacteriophages is the thing that stays associated with the bacterial cell. In contrast, there's no radioactivity associated with the bacterial cells when you label the protein with S-35. All right? Nice experiment. Easy way to do it. Rather nice way to do it. The other experiment I want to describe to you very briefly because it relies on centrifugation technology that's very powerful. And it's a method that utilizes N-15 very, very nicely. So let me tell you about that experiment. There are things in the laboratory that we use every day, ultracentrifuges and regular centrifuges, that spin at a very fast speed where we can really differentiate things by molecular mass, which relates to sedimentation coefficients. How fast does the particle sediment to the bottom of the tube when it's under high centrifugal force? The heavier it is, the faster it will sediment. So the question that came up very early on-- it looks like a nonsense question now because it looks so obvious that we replicate DNA the way we do. But originally, it wasn't absolutely certain whether DNA was replicated through a semi-conservative mechanism, where the strands came apart and you made two copies, a copy of each strand, to make two identical new daughter double-stranded DNAs.
Alternatively, could you have a mechanism that was quite conservative, you kept your DNA, and somehow you made a copy of it. A little harder for me to understand. So your two new strands, one would look like the original one, but one would be very different. Or dispersive, where you're just copying bits of the DNA and somehow reassembling this gigantic jigsaw puzzle. The centrifugation experiment allowed people to absolutely and clearly state that the replication is via a semi-conservative process. So let's walk through it. And what was done is that the nucleotides within the DNA were all labeled with heavy nitrogen. So for every nucleobase-- let's say, for example, when there's an adenine in the backbone, there are five nitrogens, so that would be five atomic mass units heavier than nitrogen-14 in that nucleobase. So N-15 nucleobase, five mass units heavier than N-14 nucleobase. Make sense to everybody? All right. So it's heavier by some amount. It's not a massive amount, but it's enough to differentiate in a centrifugation experiment. So the bacteria were first grown up in heavy nitrogen, so all of the DNA sediments at a certain rate and place. And it's all got N-15 in it, so it's as heavy as it can sediment. And then they let the bacteria replicate in the presence of N-14, the lighter isotope of nitrogen. And what one would anticipate to happen is whenever you replicate, you peel apart the two heavy strands, and each one pairs with a light strand. So you now have two copies, two new identical copies of DNA where one strand has N-15 and the other strand has N-14. So it will sediment less quickly because it is of a different density, a lighter density than the all-N-15 DNA. So you would get an intermediate weight band in the centrifuge tube when you're sedimenting. If you peel those two strands apart again, you've got a heavy and a light. If you're growing in N-14, the light will combine with another light, and there are two of those.
And then the heavy will combine with a light. So you'll get new sedimentation where you still have a band that's the combination of the light and heavy. But then you start having some material that is all of the light. So this would be this cycle. So two that are still light plus heavy, and then two that are exclusively light. You keep on replicating, and you'll keep on diluting the heavy. Does that experiment make sense? It's kind of a cool experiment. You can hardly believe it's feasible, but the centrifugation ability to differentiate with these isotopes is really valuable. So I just want to put in a plug for the use of isotopes. Obviously, they're used in nuclear physics. We use a lot of the radioactive isotopes for different reasons. But in biology, they are indispensable for some of these experiments. And even to this day, you can do things with isotopes you can't by other experiments, either the radioactive or the heavy. There's a great deal of work done nowadays in proteomics using mass spectrometry and heavy isotopes, where you can really track where things go, comparing cancer cells with healthy cells and actually tracking what's happening by putting heavy isotopes in one of the growing dishes of cells. So I like both of those experiments a lot. I would put them in the category of oldies but goodies. All right. Now I've got some details over here that need some explaining as we move forward. And I just want to, first of all, start with a couple of details that I want to highlight on this board. First of all, prokaryotes, such as the bacterium E. coli, and eukaryotes, such as human cells, have some differences in their DNA. So the size of the bacterial genome is about five million base pairs. The size of the human genome is not quite 1,000 times bigger, but a good deal bigger. The DNA in bacteria is circular, whereas the DNA in eukaryotic cells is linear.
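Returning to the centrifugation experiment for a moment: the generation-by-generation dilution of the heavy strands can be tracked with a small bookkeeping sketch. This is illustrative Python with made-up function names, not part of the lecture.

```python
# Meselson-Stahl bookkeeping: start with all-heavy (N-15/N-15) duplexes,
# then replicate semi-conservatively in light (N-14) medium. Each duplex
# is a pair of strand labels; every generation peels the strands apart
# and pairs each old strand with a newly made light strand.
from collections import Counter

def replicate(duplexes):
    """One semi-conservative round: each strand templates a light strand."""
    new = []
    for s1, s2 in duplexes:
        new.append((s1, "light"))
        new.append((s2, "light"))
    return new

def band_counts(duplexes):
    """Classify each duplex by the density band it would form."""
    def classify(d):
        return "hybrid" if d[0] != d[1] else d[0]
    return Counter(classify(d) for d in duplexes)

pop = [("heavy", "heavy")]           # generation 0: grown up in N-15
for gen in range(1, 4):
    pop = replicate(pop)             # now growing in N-14
    print(gen, dict(band_counts(pop)))
# gen 1: every duplex is hybrid -> a single intermediate band
# gen 2: half hybrid, half fully light -> two bands
# gen 3: still exactly 2 hybrids, now out of 8 -> the heavy label dilutes
```

The fixed count of two hybrid duplexes against an exponentially growing light population is exactly the banding pattern the lecture describes.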
And it's actually in the form-- you see all these little images where you see the pairs of chromosomes sort of stuck together at the center, and that's linear DNA from one end to the other. In bacteria, the DNA is circular, so it doesn't have an end. So they're different, all right? So replicating circular DNA is a little bit different from replicating linear DNA. Now in both cases, the DNA has to be packaged up so it will fit in the cell. Otherwise, it's just too much of a disordered thing. In the case of bacteria, the circular DNA is wrapped up and supercoiled with what are known as polyamines, compounds that are very positively charged to neutralize all that negative charge in the DNA. And in man and other eukaryotes, the chromosomes are wrapped around very, very positively charged proteins known as histones. They're similar principles, but they're different entities. So when we start talking about replication of a prokaryotic cell, here's the typical circular DNA. So you can see the double-stranded DNA. And I've put a little symbol here that I call the ORI. So that's the origin of replication. So that is the place where you start copying your DNA. And we're going to talk about how to spot these places in genomes in a moment. And so the bacterial genome is copied bidirectionally, so you can go in both directions copying in the appropriate direction-- we'll talk about that in a moment-- to make the entire circular DNA and a perfect copy of the DNA with bidirectional copying. And that's not bad for small genomes because you've only got one origin of replication. You've got to get all the way around the circular DNA. It's quite a bit smaller than the human one. But what do you do with gigantic genomes such as the one that's about 1,000 times bigger than the bacterial one? How do you manage to still catalyze the replication of the entire genome quickly enough to make the copy of DNA in some reasonable amount of time?
And that is taking also into consideration that the speed of bacterial replication is 1,000 base pairs per second. Pretty impressive-- 1,000 of those bonds made every second-- whereas in eukaryotes, it sits somewhere down around 50 base pairs per second, depending on conditions. These are not very explicit numbers. There's a little bit of variability. So how would you do it? If you had this massive chunk of genome, and you've got to copy it just really pretty quickly in order to replicate an entire cell's contents in eight hours, what would you do? How could you expedite things? Yeah. Yes. So the answer here is start in a lot of different places so there's a lot of collaborative work going on all along that large genome. And so with the replication of the long linear chromosomes that are in eukaryotic cells, you end up with just a lot of origins of replication. So you're basically starting all over the place so you can get the job done quickly enough for it to make sense. Does that make sense to everybody? So start in a lot of places. It's a pretty easy thing. All right. Now I mentioned very briefly that before you can copy DNA, you have to unpack it. Now we think a lot about packaging DNA, less about unpacking it. So I've got these pictures sort of in a reverse direction. So it's only this form of DNA stretched out that can be copied, whereas the DNA in your cells is bundled up very tightly into chromosomes. How are they put together? Those chromosomes are made up of chromatin, which is a bunch of balls which are histone proteins with DNA wrapped around them. That's compact chromatin. Sorry, guys. Let me go back, not to go forwards. So this is chromatin in a compact form. This is chromatin now unraveled. They look more like beads on a string, DNA wrapped around each histone protein to form those nucleosome structures. And then each nucleosome looks like this. It's got protein in the middle and DNA wrapped around it.
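The speed numbers quoted above make the case for multiple origins with simple arithmetic. This sketch uses the lecture's approximate fork rates; the human genome size of about 3 billion base pairs and the origin count of 30,000 are illustrative assumptions, and the function name is made up.

```python
# Back-of-envelope replication times. Each origin fires bidirectionally,
# so one origin runs two replication forks.
def replication_time_hours(genome_bp, fork_rate_bp_s, n_origins):
    forks = 2 * n_origins                       # bidirectional replication
    seconds = genome_bp / (fork_rate_bp_s * forks)
    return seconds / 3600

# E. coli: ~5e6 bp, one origin, fast forks -> well under an hour.
print(round(replication_time_hours(5e6, 1000, 1), 2))       # 0.69 (~40 min)
# A human genome from a single origin at 50 bp/s would take about a year:
print(round(replication_time_hours(3e9, 50, 1) / 24, 1))    # 347.2 days
# ...but with tens of thousands of origins it finishes in minutes to hours:
print(round(replication_time_hours(3e9, 50, 30000), 2))     # 0.28
```

The three numbers capture the lecture's point: slow eukaryotic forks only work because replication starts in a lot of places at once.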
So in order to go forward and replicate, we have to unpackage DNA. There are lots of signals to unpackage the DNA that we will talk about later. The nucleosomes look like this. So you can trace the DNA and the proteins bundled in the middle. And if we have time at the end, I will show you the video of the packaging of DNA and the video of the replication of DNA. They're about a minute and a half each, so it's not a whole bunch. Later on, we will talk about what happens when there is a determination that a cell needs to replicate its genome. A bunch of things have to happen as signals to do the unwrapping. And one of those things is actually to alter the charged state of the histone proteins so you neutralize the positive charges so the DNA can unravel. Makes sense. It's still just a plain old electrostatic interaction. So the synthesis of DNA is what's known as template-driven polymerization. OK. So template-driven polymerization. You'll get a sense of exactly what that means. We know the two strands are complementary, and we're going to use one strand as the code to make a new complementary strand. So when we're making DNA, we have one strand there. That's what's known as the template strand. I want to make a complementary copy of that strand. So what the DNA polymerase is going to do is systematically add nucleotides to the 3 prime OH of the deoxyribose, and then you keep growing. But the base that gets put in is the one that's complementary to the base on the template strand. So if it's thymine on the template strand, it's adenine put in on the new strand, and so on. And just to get these numbers again, the new strand is grown from 5 prime to 3 prime. The old strand is read 3 prime to 5 prime. So you're reading a strand and you're reading it 3 to 5, and you're making 5 to 3. That's how you end up with complementary anti-parallel strands. OK with that, everyone? All right.
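The read-3-prime-to-5-prime, build-5-prime-to-3-prime rule is exactly a reverse complement when both strands are written 5 prime to 3 prime, the usual convention. A minimal Python sketch (the function name is illustrative, not from the lecture):

```python
# Template-driven polymerization as code: read the template from its
# 3' end and emit the complementary new strand 5' to 3'.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def new_strand(template_5to3):
    """Return the newly synthesized strand, written 5' to 3'."""
    # Walk the template in reverse (i.e. 3' to 5') and pair each base.
    return "".join(COMPLEMENT[b] for b in reversed(template_5to3.upper()))

print(new_strand("TACG"))  # template 5'-TACG-3' -> new strand 5'-CGTA-3'
```

Applying the function twice returns the original sequence, which is just the statement that the two strands of a duplex are mutual complements.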
And when you form the bond, you go in with nucleoside triphosphates, you form a new phosphodiester with just one of those phosphoruses, and you kick out phosphate-phosphate, which we would call pyrophosphate or diphosphate. OK. So here you could just sort of see, just check that you know what you're doing. I'm going to just say this is really a pretty simple thing. We're going to put in a T, then we're going to put in an A, then we're going to put in a C because our template is telling us to put in the appropriate purine or pyrimidine that forms a nice base pair with that. So there's a question here. But I've already given you the answer to it. So it should be very straightforward for you to look at a template strand and decide what would be on the complementary strand, and also decide on the directionality of the complementary strand. Now origins of replication. What do they look like? It's the old where do I start problem. Where do I start? What's the best place to start? Someone? Not you. Not you. Anyone else want to answer here? I've got this whole genome. I've got to get going. I've got to start making a copy of it. Where's the best place to break into it to start making copies? Yeah. There. AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: Correct. It's simple. It's the difference between two and three. If you go to the area where there's lots of G's and C's, every GC has three hydrogen bonds. If you go to a different area where there happen to be a lot of A's and T's in a row, each pair is only worth two hydrogen bonds. You're going to pick the stretch that has the most A's and T's because it's the place where the base pairing is weaker. It's actually a very simple concept. So the AT-rich region is the place where origins of replication are more predominant. OK. So what we're going to do now-- and this is going to be tricky, but I'm going to make this work-- is we're going to talk about replicating an entire chunk of genome.
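The AT-rich-region idea can be turned into a toy origin finder: slide a window along the sequence and report the most AT-rich stretch. Real origins are defined by specific consensus elements, not AT content alone, so this Python sketch (with made-up names) only illustrates the weaker-base-pairing logic from the exchange above.

```python
# Toy origin scan: the window with the highest A+T fraction is the
# easiest place to pull the duplex apart (only two H-bonds per pair).
def most_at_rich_window(seq, width):
    """Return (start_index, AT_fraction) of the most AT-rich window."""
    seq = seq.upper()
    best_start, best_frac = 0, -1.0
    for i in range(len(seq) - width + 1):
        window = seq[i:i + width]
        frac = (window.count("A") + window.count("T")) / width
        if frac > best_frac:          # keep the first, best window
            best_start, best_frac = i, frac
    return best_start, best_frac

seq = "GGCGCGCCATATATATTAGCGGCC"
start, frac = most_at_rich_window(seq, 8)
print(start, seq[start:start + 8], frac)  # 8 ATATATAT 1.0
```

The scan lands on the pure-AT stretch, the region where, as the lecture puts it, the base pairing is weaker and replication can break in.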
And what I've done here is I've put the pieces that we're going to discuss over here. That's my menu. And we're going to work through what we need to copy a large chunk of DNA. All right? Everybody with me? So I redrew this this morning. I promised myself I'm going to be super tidy on this blackboard, which is a far stretch for me. So replication, we know where replication starts, AT-rich region. We could pick those out in the genome. We know that we've got to unwind the DNA to make a copy of it, so we must have had to unwrap the DNA beforehand. These are all known things. So let's now take a look at double-stranded DNA and try and figure out how we use all the components. Actually, I'm going to bring this board down because I don't see it as well up there. How do we make use of all these components that are part of the menu that we need to make the new strand of DNA? So we have double-stranded DNA and there are base pairs across. That's typical double-stranded DNA. That's what we're starting from, unwrapped. I haven't made it helical because my helices will get messy. And so we need to start. So the first thing that will happen is that the important enzymes like the helicase will scan to find an origin of replication, so somewhere for the process to start because that's where they're going to break in. So David, one of your TA's, works in Steve Bell's lab. And they study the mammalian origin of replication complex, which is this much more complicated situation than what I'm going to describe to you today. But they do fabulous single molecule work to show how those pieces are assembled. But I'll tell you the truth. As dorky as I am, the thing that I find the most cool about the origin of replication complex is that it spells ORC. And if any of you are Lord of the Rings fans, who can't be excited by a complex called the ORC complex? So there's going to be the screening of the genome for ORI. And I want you to remember what is up here is the rest of the chromosome. 
It's not just a fuzzy end. It's something that's got a lot of stuff there that's still base paired. So the first thing that happens is that helicase needs to start unzippering the double-stranded DNA, so you get to a new intermediate. We're going to draw like this. And the helicase is going to step in to basically start separating those two strands of DNA in order to do the replication. But don't forget that these are still base paired, and these now are single strands. So I'm going to need something from that menu. I'm going to need it quickly. What do you think's the next thing we're really going to need in order to be able to move forward in our job? What do we need from over here? We're already using the double-stranded DNA. It's not going to be too difficult to figure out the NTPs. We're already using the unzipping enzyme, the helicase. What are we going to need? Yeah. Primase. Ah, primase shortly. But we've really got to do something to stabilize our unzippered stuff. Anyone else? So those two single strands are right near each other, and they're complementary. What's going to stop them going straight back together again and not allowing replication? So there are proteins that basically sit on the single-stranded parts of DNA, what are known as single-strand binding proteins, that stabilize this transiently made complex long enough for it to be copied. So so far, we've used the helicase. We've used the single-strand binding proteins. And I'm going to draw them there, still remembering that I have an entire chromosome up here. Now the heavy lifting has to start fairly soon, so we need the enzyme that's going to start making a copy of one of those single strands. That's going to be a DNA polymerase, which is right here. But DNA polymerase is kind of a finicky enzyme because it doesn't like to just start cold on the genome, on the single strand, and start making a copy. DNA polymerase needs a primer of some kind. So I'm going to just put that as a note up here.
DNA polymerase needs what's called a primer. And what is a primer? A primer is a sequence-- I think I've got something up here-- a primer is a sequence that is complementary to a little bit of DNA so that when DNA polymerase comes along, it's actually grabbing a double strand and then filling up the rest of the single strand. So this would be a typical depiction of a primer. Here's a strand we want to copy. But the blue one is a primer strand giving you something double-stranded to hold onto. And then in this situation, DNA polymerase would be happy to fill in the rest of the sequence. So for the purpose of this discussion to start, we're just going to provide a primer that is complementary to the DNA just to give us a go. We have an alternative in the cell, which is to actually use a primase and RNA building blocks to build a primer. But let's keep things simple for a minute and get on with a lot of the major work. And then we'll come back to that in a second. So this part is a primer. And that primer-- if this is the 3 prime end, that's 5 prime 3 prime. So the primer is complementary to the DNA in that direction. It's anti-parallel. And then DNA polymerase can go along and fill in the strand. And I'm going to draw it dashed, and it's going to keep on growing. And what I want you to notice is DNA polymerase grows from 5 prime to 3 prime. That is a cardinal rule, the directionality that we grow the new DNA in. So basically, looking at this picture, it's going in this direction because the 3 prime OH is free, and you're building onto it. So you use a primer first, and then you use DNA polymerase plus deoxy nucleoside triphosphates. So all those ATCG building blocks. So we've been using these. We've used the single-stranded binding proteins. We've used a primer, but there is an alternative. I'll get to that in just a second. But we now have a complex where I'm going to briefly divert you to tell you about, in the cell, what else you could use as a primer.
And then we're going to have to deal with copying this other strand. But this first strand that is made is called the leading strand. It's the one that's easy. I peel apart the DNA. I have a primer there, and I can just build. Goes really smoothly. I'm building in the right direction. If you look over on the other side of the fence, you've got a problem because I can't build on this side from here, right? Why not? AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: Yes. I'd be building in the wrong direction. It would be a total mess, frankly. There would be a crisis in the cell. So we're going to have to deal with that. So let me just get you out of the primer, just deal with this primer issue. In the cell, it's not like we can microinject all these little primers to get our DNA synthesized. So RNA polymerase, believe it or not, doesn't need a primer. So you can build little bits of primer with RNA. You don't build very much, just build short segments. And then once you've got sort of enough there, DNA polymerase can start polymerizing. So you use RNA polymerase and nucleoside triphosphates, not the deoxy ones. So later on, we've got this piece here that is made of RNA, not DNA, which is why, coming over here to my menu-- do I have it? Yes. We need an RNA primase, and we need RNases to chop the RNA out so that DNA polymerase can come back, because now there is double-stranded material, and fill in the gap. And then we need one more enzyme to stitch together the gaps. And that's the ligase. The logic of this stuff is great. I think it's just really amazing because all these moving parts have evolved to make all these functions fall into place. Now we need to talk about this pesky other strand. So we have a problem here because we've got the wrong directionality. So what happens in nature is as soon as there's enough of a stretch, a primer is put in place, growing the DNA in the right direction. That's from where the helicase is unzipping. So we put in another primer.
And then DNA polymerase-- that's the white one-- can build its complementary strand, building in the appropriate direction to be consistent. So you basically fill in a chunk and then you have to wait a while till more is unzippered, put in another primer, and then grow the DNA. So in that other piece you're making little segments of DNA interspersed with RNA primers. Later on, those small primers are going to be chewed up by RNases. DNA polymerase is going to fill them in, and ligase is going to join them. Does that make sense? It's a lot to grasp. And this other side is called what's known as the lagging strand. And there are pieces of short DNA that are transiently made that have their own name. You can remember it or not. I just feel I'm not being complete if I don't mention their name. They're called Okazaki fragments after the guy who discovered them. And they're longer in bacteria than in humans. So Okazaki fragments are the transient short stretches of DNA that are made in the lagging strand, and then they get zipped together. So that's sort of the story for replicating the DNA. Let me make sure I've described everything to you. Leading strand, lagging strand. Oh, can't believe I almost forgot this. All right. Now we got a problem. So we've come all this way. We've used all the pieces. We haven't used one of the pieces. So we're in a small bit of trouble because what's going to happen when you're, for example, trying to peel apart your DNA? What trouble are you going to run into? Who wants to come up here and get involved in my demo? And you've got to be aware that you're going to be on screen. You and you up there. You've got your hand up first. OK. I need someone who's going to hold onto this as if their life depends on it. You can't let the yellow ones slip out. So you are the rest of the chromosome, so make sure that you really hold-- and you don't let things come apart. Now you are helicase, OK? OK. So come on.
Start helicasing and really pull as hard as you can. He's going to hold it. No, I don't want you to unwind it because we're in the middle of a chromosome. So you've literally got to do what helicase does. Yeah, keep on pulling. Come on, you can do it. Can you go much further? I mean, your arms aren't any longer. But it's almost impossible, right? We get to a stage where we need help. The help we have coming is from topoisomerase. Topoisomerase does something-- so walk just one step this way. Yeah. Topoisomerase comes along and it cuts the DNA. It holds the pieces in its hands, if you will, lets the supercoiling relax, and then it joins them back again. So topoisomerase-- thank you very much, guys. That's it. Chromosome, helicase, thank you. We really have some tightly wound DNA here. So topoisomerase is the get-out-of-jail-free card because it allows you to deal with all of this tension you got in place that you cannot unravel. So you need someone who is literally going to cut, hold, let the thing relax, rejoin it. And there are some topoisomerases called DNA gyrases. But the really cool thing about topoisomerase is that it's a wonderful drug target in both mammalian and bacterial cells because they're quite different in the two types of organisms. So in bacteria the antibiotic ciprofloxacin is a topoisomerase inhibitor that actually stops your bacteria dividing because if topo doesn't work, you can't go on. And in human biology and cancer biology, there are also topoisomerase inhibitors-- there's one called [INAUDIBLE] -- that do the same thing to the eukaryotic topoisomerase to stop cancer cells dividing rapidly so that those cannot go on and make larger tumors. So I want to show you something. And I am actually certain that I have time. Come on. Now I'm going to show you. [VIDEO PLAYBACK] - In this animation, we'll see the remarkable way our DNA is tightly packed up so that six feet of this long molecule fits into the microscopic nucleus of every cell.
The process starts when DNA is wrapped around special protein molecules called histones. The combined loop of DNA and protein is called a nucleosome. Next, the nucleosomes are packaged into a thread. The end result is a fiber known as chromatin. This fiber is then looped and coiled yet again, leading finally to the familiar shapes known as chromosomes, which can be seen in the nucleus of dividing cells. BARBARA IMPERIALI: Now I'm going to take you forward to DNA, what we just talked about here. - Using computer animation based on molecular research, we are now able to see how DNA is actually copied in living cells. You are looking at an assembly line of amazing miniature biochemical machines that are pulling apart the DNA double helix and cranking out a copy of each strand. The DNA to be copied enters the production line from bottom left. The whirling blue molecular machine is called helicase. It spins the DNA as fast as a jet engine as it unwinds the double helix into two strands. One strand is copied continuously and can be seen spooling off to the right. Things are not so simple for the other strand because it must be copied backwards. It is drawn out repeatedly in loops and copied one section at a time. The end result is two new DNA molecules. [END PLAYBACK] BARBARA IMPERIALI: All right. OK. That's what you saw today, replication, a lot of the mechanics, a lot of the moving parts. There's this tricky stuff with primers. You'll get used to that. But this is really the quintessential set of pieces to replicate DNA. It is an amazing process. Look at the speed. 1,000 base pairs a second. Can you even believe it? An entire circular chromosome copied in 20 minutes. So these are really things to think about because they are so impressive that it's a delight to actually be able to teach them to you. OK. I'm done for today.
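As a recap of the fork logic from this lecture, here is a sketch in Python of a leading strand copied continuously and a lagging strand copied as short Okazaki fragments that would later be ligated. The fragment length and all function names are illustrative assumptions, not from the lecture.

```python
# Fork logic sketch: as the helicase exposes the two templates, the
# leading strand grows as one continuous complementary copy, while the
# lagging strand is made as short Okazaki fragments awaiting ligase.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def comp(seq):
    """Base-by-base complement of a template segment."""
    return "".join(COMPLEMENT[b] for b in seq)

def replicate_fork(leading_template, lagging_template, okazaki_len=4):
    # Leading strand: one continuous complementary copy.
    leading = comp(leading_template)
    # Lagging strand: complementary copies in short pieces, which the
    # cell would later ligate into one continuous strand.
    fragments = [comp(lagging_template[i:i + okazaki_len])
                 for i in range(0, len(lagging_template), okazaki_len)]
    return leading, fragments

leading, fragments = replicate_fork("ATGCCGTA", "TACGGCAT")
print(leading)               # TACGGCAT: made continuously
print(fragments)             # ['ATGC', 'CGTA']: pieces awaiting ligase
print("".join(fragments))    # ligated product: the full complement
```

Joining the fragments reproduces exactly what a continuous copy would have given, which is the point of the RNase-fill-in-ligase cleanup described in the lecture.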
MIT_7016_Introductory_Biology_Fall_2018
24_Stem_Cells_Apoptosis_Tissue_Homeostasis.txt
ADAM MARTIN: All right. So in Monday's lecture, we talked about how cells replicate, OK? And today, I want to talk about how now an entire organ would essentially replicate. In this case, it's not going to divide, but it's going to regenerate or renew itself, OK? And so this involves adult stem cells and also apoptosis, which you've heard a little bit about earlier in the course. And to explain this to you, I'm going to have basically a model organ that we'll use. And we'll use it for a couple lectures. And the model organ is going to be the lining of the intestine, OK? So the lining of the intestine is an epithelial tissue. And I'll tell you a little bit about epithelia in just a minute. But you have the intestinal epithelium. And these are the cells that are the lining of the intestine, OK? And one of the reasons that I've chosen this system is because the lining of your intestine has pretty remarkable regenerative capabilities, OK? So your small intestine renews about every four to five days, OK? So the vast majority of your cells in the intestine were basically derived in the last four to five days, OK? So humans aren't as cool as some organisms, like newts, in that you can't cut off your arm and have it grow back. But at least we have the intestine, which undergoes a pretty dramatic regeneration, OK? The intestine is unique in how rapid this is. But you have other tissues that also exhibit continual renewal, like your skin and your blood cells. And even the cells that line the insides of your lungs exhibit renewal capabilities, OK? And so I'm going to use the intestine as a model system. It doesn't mean it doesn't happen in other tissues. But it just happens that we really understand the intestine system maybe a little bit more than many other systems. So I'm going to use it as an example. So let's think about the lining of your intestine; it has important functions.
One important function of this lining is it has to absorb nutrients from food going through your intestine, OK? So it exhibits a nutrient absorption function. Now, the lining of your intestine, much like your skin, is also a barrier between the inside of your body and the outside world, right? Because basically, the inside of your intestine is contiguous with the outside world, right? If you open your mouth, you can get down to the inside of your intestine, OK? So it serves also an important barrier function. And one point I want to make about this system right now is that the intestinal epithelium, like many of your other organs, is composed of multiple cell types, OK? So it also has multiple cell types. OK. So let's now consider the lining of the intestine. And the way the intestine morphologically looks is that there are a series of invaginations. So this is a cross-section view through the intestine. So you have to imagine that this is a cross-section view, but that this is a plane. And it's a plane with a lot of invaginations in the plane, but also protrusions out of the plane. And so you have to think of this as a surface. And then it's wrapped up into a tube, OK? So this is just a very simple cross-section image of the intestine, OK? And the lining of the intestine is a sheet of cells. So this lining is composed of many, many cells. They're columnar in morphology. So what I've drawn here is just a small section of the intestine. This would be the lumen, out here. This is where the food going through your intestine would be. And then below here, this is basically interstitial fluid inside of your body, OK? So the food's up here. The rest of your body is down here. And you can see that there's a structure to it. It's not just a flat surface that's wrapped up, but there are invaginations. And the invaginations are known as intestinal crypts, right? 
Much like you would-- if you bury something, like a body, it goes below ground. So the crypt is below the surface of the lumen. And these projections, out here, are known as villi. So they're the villi of your gut, OK? Now, this lining is made up of multiple cell types, which I've outlined up here and which are in your handout. So you don't have to write these down. But this is just making the point that there are various differentiated cell types. There are enterocytes, which are the absorptive cells. These are the cells that are taking in nutrients and transporting them into your body. There are enteroendocrine cells that play an important signaling function in the gut to regulate the biology of the gut system. There are goblet cells, which secrete mucus into the intestine, which protects these epithelial cells from digestive enzymes that are present in the lumen. And there is this last cell type, the paneth cell, which plays an important role in regulating stem cells, as I'll outline in just a minute. So these are all the differentiated cell types. But there's a type of cell that's an undifferentiated cell type. And that's the intestinal stem cell, which will be the hero of today's lecture. All right. Now, if we consider just a small group of cells, what these cells look like is this, OK? So these are what are known as epithelial cells. And epithelial cells have certain properties. The first is you see that this end of the epithelial cell looks different from this end, OK? And this is called apical-basal polarity. So much like neurons have a polarity, where on one side of the neuron there are dendrites and on the other side there's an axon, these cells have another type of polarity, which is called apical-basal polarity. So the side facing the lumen is apical. So the lumen would be up here. And the basal side would be down here, OK? So this is basically oriented along this axis of the tissue, OK? 
where apical is on this side, basal is on this side, OK? And these projections from the individual cell, these are called microvilli. And essentially, these plasma membrane corrugations increase the surface area through which nutrients can be absorbed into these cells, OK? Now, one other defining feature of these cells is they have proteins that protrude from the plasma membrane. And these are adhesion proteins that couple the cells together. And actually, the cells are stuck together much tighter than I'm drawing here, such that the cells form a barrier so that things cannot pass unregulated from the lumen into the body, OK? But these proteins, which I'm drawing sticking out of the membranes of the cells, are adhesion proteins. And these adhesion proteins simply link the cells together such that they form a sheet of cells, or tissue. So they link cells together, OK? So these are two of the key properties of epithelia. They have an apical-basal polarity. And they also exhibit cell-to-cell adhesion, such that the cells reach out and connect to each other. And they basically are glued together, or Velcroed together, such that they don't easily come apart. OK. Now, in considering this system, what I'm going to tell you is that there is renewal of this lining. And the renewal starts at the base of the crypts, OK? So there's going to be renewal. And this cell renewal is at the base of the crypts, right? There are many of these crypts, right? You have the surface, but there are many, many invaginations that are present in your gut. And the renewal is happening at the base of these crypts, OK? And that's because the base of these crypts is where a type of cell, known as the intestinal stem cell, lies, OK? So it's at the base of these crypts where there are what are known as intestinal stem cells. And I'm going to abbreviate these as ISCs, so I don't have to write out intestinal stem cell whenever I tell you about them, OK? Now, that's where renewal occurs. 
And if you just had more and more cells getting put into the system without any removal of cells, then the organ would get bigger and bigger, right? And our intestine is more or less staying the same size at this point in our lives. And so in addition to renewal, there's also cell death. And what happens when cell death occurs is that cells are shed from the intestinal lining into the lumen. So cells are shed into the lumen. And then they just go with the rest of the crap that's in your intestinal lumen. And it's eventually removed from the body, OK? So some cells are leaving the tissue and going into the lumen after they've existed for a few days in the intestine, OK? So to have an organ of constant size, renewal has to essentially be more or less equal to death, OK? And so you can think of this as a type of homeostatic state for this tissue where, if renewal equals death, you have what is known as tissue homeostasis, where the number of cells in the system as a whole is basically remaining the same, even though there are constantly new cells going into the system while the older cells are being removed from it. So what's really key in this process are these intestinal stem cells. So I'm first going to tell you about stem cells. And these are what I would define as adult stem cells, OK? So they're stem cells that are associated with a particular organ, OK? And I want to differentiate right now between these types of stem cells and another type of stem cell that we're going to talk about later in the course, which is an embryonic stem cell. So adult stem cells-- these stem cells, like intestinal stem cells, are associated with a specific organ. And they only give rise to cell types that are present in that organ, OK? So they can't give rise to just any cell type. Your adult stem cells are really specific-- organ-specific, we'll say. 
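The renewal-and-death balance just described can be sketched as a toy calculation. This is only an illustration of the bookkeeping, not a model from the lecture; the cell counts and per-day rates are made-up numbers.

```python
# Toy model of tissue homeostasis: the cell count stays constant when the
# renewal rate at the crypt base equals the shedding (death) rate at the
# villus tip. All numbers here are illustrative, not measured values.

def simulate_tissue(n_cells, renewal_per_day, death_per_day, days):
    """Track total cell number over time under constant renewal and death."""
    for _ in range(days):
        n_cells += renewal_per_day   # new cells born at the crypt base
        n_cells -= death_per_day     # old cells shed into the lumen
    return n_cells

# Homeostasis: renewal == death, so the population is unchanged.
assert simulate_tissue(1000, renewal_per_day=200, death_per_day=200, days=30) == 1000

# Imbalance: if renewal outpaces death, the "organ" grows without bound.
assert simulate_tissue(1000, renewal_per_day=210, death_per_day=200, days=30) > 1000
```

Even though cells constantly flow through the system, the total stays fixed as long as the two rates match, which is the definition of tissue homeostasis used here.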
In contrast, embryonic stem cells have more possibilities. They can give rise to pretty much any cell type in an entire organism, OK? So you can think of these adult stem cells as being more restricted in their fates than the embryonic stem cells. And so adult stem cells are called multipotent, because they can give rise to multiple different cell types, but not all of the cell types that are present in an organism, right? An intestinal stem cell will not be giving rise to a blood cell or other cell types in other organs, right? It's restricted to just giving rise to cells that are present in that organ. In contrast, embryonic stem cells are what are known as totipotent, or sometimes pluripotent, where this type of cell really is capable of making any type of cell that is present in an adult organism, OK? So that's less restricted than the adult stem cells. And we're going to come back to these embryonic stem cells towards the end of the course. But for understanding cancer and also tissue homeostasis in the intestine, we really need to focus on the adult stem cells. OK. So these adult stem cells have two key properties. The first is what I just said. They're multipotent. And what multipotent means is that they can give rise to multiple different cell types. So this stem cell gives rise to multiple cell types. And this is usually associated with a single organ system. OK. So in the case of the intestine, the intestinal stem cell is multipotent. And it can give rise to many different terminally differentiated cells. And you can see here I've just written in example cell types that the intestinal stem cell could generate. It can generate any of the four different types of cells that I introduced to you at the beginning of the lecture. The other key aspect of this system is that, in addition to generating all of these different cell types, the stem cell can also renew itself, OK? 
So the other key property is this self-renewal. So the intestinal stem cell also gives rise to another intestinal stem cell. So it basically duplicates itself such that you still have a stem cell in the organ. And then one of the daughter cells is self-renewed and remains a stem cell. The other daughter cell can go on to divide further and give rise to these differentiated cell types, OK? All right. So one question you might be asking yourself is, what regulates whether a cell will go on to differentiate or whether it will stay a stem cell? And the answer to this question involves communication between this stem cell and other cells in the system. And it involves a special type of cell called a stem cell niche cell. And so I'm going to tell you about a model, which is the stem cell niche model. And what the stem cell niche model says is that basically there is a niche, or compartment, which promotes the self-renewal of cells in that compartment and makes them stem cells. So the stem cell niche you can think of as a compartment where signals, similar to the types of signals that we've been talking about over the past four or five lectures, regulate the behavior of the cell to ensure self-renewal and to suppress differentiation-- so, a compartment where signals promote stem cell renewal. I want to ask you one question before I move on, which is, how would you determine that there is a special type of cell that gives rise to all of the cells in an organ? If you were tasked with finding this and determining whether this was true or not, how might you do it? Does anyone have an idea of what experiment they would do, or what criteria they would use to determine whether or not this is the case? Let's say I gave you a cell, right? In a dish. And I asked you to tell me whether or not you would think this is a stem cell. Yes, Miles? AUDIENCE: [INAUDIBLE] ADAM MARTIN: Mm-hmm. 
AUDIENCE: One cell is produced by the same cell, assuming that it's not a stem cell but if the cell [INAUDIBLE] it's a stem cell. ADAM MARTIN: Mm-hmm. So Miles suggested he would like to take the cell that I just gifted him, and let it grow up, and determine whether it gives rise to multiple cell types. And Miles said that, if it just gave rise to a single cell type, that would suggest that it's not a stem cell. But if it gave rise to multiple cell types, then it could be a stem cell, OK? And this type of experiment has been done, where you can take an intestine from mice-- and you can even take tissue from humans-- and you can dissociate the cells from each other so that you're left with single cells that are separate. And you can then use some type of flow cytometry to separate cells. And you might be interested in a cell that maybe expresses some marker that you're interested in. And you can separate those cells and isolate them. And then you can take an individual cell and grow it in a dish, OK? And this has been done for intestinal stem cells. And if you take an intestinal stem cell and you grow it in a dish, it can grow up and form a bulk of tissue. But what's really remarkable about the result of this experiment is that, from a stem cell, you get this massive tissue, and it self-organizes into a structure that very much resembles a normal gut, meaning that there are crypts where the stem cells are localized. And then if you look at the different cell types in this, what's known as an organoid, you see all of the different cell types that are normally present in the gut, OK? And so this is an example of a type of experiment that's done to show whether or not a certain cell has the capability of functioning like a stem cell, OK? So if you start with a stem cell, you can regenerate essentially the entire organ system. You might also be familiar with bone marrow transplants. 
And so if you kill all the hematopoietic stem cells in, let's say, a mouse, then you can transplant a single hematopoietic stem cell into that system. And it will regenerate all the blood cells in that system, OK? So those are just a few examples of how one might functionally define a stem cell in an experimental setting. All right. So now, I want to tell you about the types of signals and how these signals promote stem cell renewal, OK? So in this case, the stem cell niche is going to be right here. And the types of signals involved-- I'll just draw some cells here. One of the types of cells that's part of the niche in the intestine is the cell type that I introduced to you as the paneth cell. And these paneth cells localize to the base of the crypts together with the intestinal stem cells. And what paneth cells do is they send a signal to neighboring cells. And this signal is a Wnt signal, which I'll tell you about in just a minute. And so this is the intestinal stem cell here. And the paneth cell is signaling to that intestinal stem cell through a secreted ligand known as Wnt. So Wnt is a secreted ligand. And this signal is what's known as a juxtacrine signal, which just means that the cells have to be adjacent, or juxtaposed, to each other in order for the sending cell to signal to its neighbor, OK? So it's not signaling long range, but it's signaling to a neighboring cell, OK? So now, if we think about this intestinal stem cell, one thing intestinal stem cells do is they divide, right? So this intestinal stem cell could divide. Here is a cell in mitosis. It rounds up and divides. Here, you still have your paneth cell. And then when this cell divides, then it's going to be two cells, OK? So now, paneth cell, another cell. Here are the two daughter cells. Here's the paneth cell. The paneth cell is still secreting this Wnt signal. So you have Wnt getting secreted. And it's going to signal to its neighbor. 
But this cell is starting to get farther and farther away from that signal, right? So you can imagine this is happening right here in the tissue, where this cell is at the boundary of the niche, OK? So this cell that gets pushed out of the niche is going to start to differentiate, OK? Because it's no longer seeing the signal, OK? So it's the lack of the Wnt signal which tells cells basically that they should start differentiating into the various cell types of the gut. But the cells that are stuck at the base of the crypt, down with the paneth cells, are still getting the Wnt signal. And so they remain intestinal stem cells, OK? So here is just a diagram showing you part of that. So the niche is down here, where you have the stem cells. The paneth cells would be some of these blue cells at the base. But there are also other cells below the epithelial lining, known as stromal cells, that are also secreting Wnt. And so this compartment down here has high Wnt ligand activity. And that tells cells that remain here to stay stem cells. But the cells that are getting pushed up and moving up towards the lumen no longer receive this signal. And so they are going to start to differentiate, OK? So you can think of this system as almost a conveyor belt. There's a conveyor belt-like movement of cells from the base of the crypt up towards the tip of the villus, OK? And when the cells move away from the niche cells, then they no longer have the self-renewal signal. And therefore, they go on to differentiate. OK. We'll go back for now. All right. Now, I want to tell you a little bit about this signal, Wnt, because it's something that's come up before in the lecture, even though you might not know it. So Wnt is a ligand. So it functions much like a growth factor. This is a protein that is secreted from the cell, and then binds to receptors on other cells, and induces signaling events in those cells, OK? And Wnt stands for-- the W stands for wingless, OK? 
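The conveyor-belt niche model just described-- self-renew while adjacent to the Wnt source, differentiate once pushed away-- can be sketched in a few lines. This is a sketch of the logic only; the niche size and position numbering are illustrative assumptions, not measurements from the lecture.

```python
# Minimal sketch of the stem cell niche model as a 1-D "conveyor belt" of
# crypt positions. Positions below NICHE_SIZE are adjacent to the
# Wnt-secreting paneth/stromal cells; anything farther up no longer
# receives the juxtacrine signal and differentiates. NICHE_SIZE is an
# arbitrary illustrative choice.

NICHE_SIZE = 2  # positions still touching the Wnt-secreting cells

def cell_fate(position_from_crypt_base):
    """Cells inside the niche self-renew; cells pushed out differentiate."""
    receives_wnt = position_from_crypt_base < NICHE_SIZE
    return "stem (self-renews)" if receives_wnt else "differentiating"

# Walk up the conveyor belt from the crypt base toward the villus tip:
fates = [cell_fate(p) for p in range(5)]
print(fates)  # the first two positions stay stem; the rest differentiate
```

The point of the sketch is that fate is a function of position relative to the signal source, not an intrinsic property of the cell: the same cell, pushed one position too far, stops self-renewing.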
So you remember from earlier in the semester, wingless was identified as a mutant that disrupts the formation of wings in the fly, OK? So one of the places where these genes were discovered is in the fly. The nt of Wnt comes from int 1, which stands for the integration of mouse mammary tumor virus 1. Sorry, that's a little bit more of a mouthful. OK. So the W in Wnt is from wingless. The nt in Wnt is from int 1, OK? And so as you can see, there are two different systems where this type of gene was discovered. They are very disparate from each other. One was in a developmental mutant, in the fruit fly. The other was in a mouse system, where the integration of the virus caused over-expression of Wnt, which caused tumorigenesis, OK? So these disparate systems led to the identification of this Wnt molecule. And this is a defining member of a signaling pathway that regulates stem cell renewal. And I just want to briefly go through the logic of the signaling pathway, because I want you to get a sense that not all signaling pathways are like Ras-MAP kinase, but there can be different regulatory logic, OK? So what's the regulatory logic of this pathway? The regulatory logic is shown in this cartoon. And I'm going to start with a cell that does not see Wnt ligand. So that would be the case on the left here. So if there's no Wnt ligand-- I want you to focus on what's going on here-- there's a complex that's present in the cytoplasm of the cell. And what it's doing is it's destroying this beta-catenin protein, OK? So beta-catenin is, among other things, a transcriptional coactivator. So it's basically a transcription factor. So it works with another protein to regulate the expression of certain genes, OK? And in the absence of Wnt, this beta-catenin transcription factor is destroyed and is not able to get into the nucleus, OK? 
So if there's no Wnt, then beta-catenin is destroyed. And it's destroyed using a system that I introduced in Monday's lecture, which is regulated proteolysis by polyubiquitination, OK? So you have regulated proteolysis by polyubiquitination. The way this works, as seen here, is that beta-catenin is bound by this complex. And there's a kinase that phosphorylates beta-catenin. And then the phosphorylated beta-catenin recruits this E3 ubiquitin ligase, which polyubiquitinates it, OK? So that leads to the destruction of beta-catenin in the absence of a Wnt signal. But when Wnt ligand is around, this leads to the disassembly of this complex, which is known as the destruction complex. And that leads to beta-catenin accumulating. And once it accumulates, it goes into the nucleus and starts changing gene expression, OK? So in the presence of Wnt, beta-catenin is nuclear. And that's where it needs to be, if it's going to regulate gene expression, OK? So what you see is that the logic of this pathway is a double negative, where you have a complex, which is known as the destruction complex. And it includes this gene, APC, which we're going to talk about on Friday. This destruction complex is inhibiting beta-catenin by destroying it. And the way that Wnt induces beta-catenin activation is by inhibiting the inhibitor. So Wnt inhibits the destruction complex, which then stabilizes beta-catenin and allows it to go to the nucleus, OK? So the other piece of the logic here is you have inhibition of an inhibitor to activate beta-catenin. Any questions on the pathway? OK. So now that we have our intestinal stem cells, and we have a way, by increasing Wnt in this compartment, to maintain the intestinal stem cells through self-renewal, now we have to talk about the compensatory mechanism of death, which allows this tissue to maintain homeostasis. OK. So death-- in this case, death is going to be useful. And the process of death is called apoptosis. 
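Before the lecture moves on to apoptosis, the double-negative Wnt logic just summarized can be written out in a couple of lines. This sketches only the logic of the pathway, not the actual biochemistry of the destruction complex.

```python
# Sketch of the Wnt pathway's double-negative logic: Wnt inhibits the
# destruction complex, and the destruction complex inhibits beta-catenin.
# Inhibiting the inhibitor therefore activates beta-catenin.

def beta_catenin_is_nuclear(wnt_present):
    destruction_complex_active = not wnt_present         # Wnt inhibits the complex
    beta_catenin_destroyed = destruction_complex_active  # the complex destroys beta-catenin
    return not beta_catenin_destroyed                    # stable beta-catenin enters the nucleus

assert beta_catenin_is_nuclear(wnt_present=True) is True    # Wnt on: gene expression changes
assert beta_catenin_is_nuclear(wnt_present=False) is False  # Wnt off: beta-catenin degraded
```

The two `not`s in the function are the two inhibitions; composing them gives the positive relationship between Wnt and nuclear beta-catenin, which is why this is called a double negative.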
And apoptosis is Greek for falling off. And that's essentially what these cells are doing, because the cells, as they move from the base of the crypt up towards the lumen-- eventually, they're going to fall off into the lumen of the intestine. So again, I'll draw. Here's a villus. This is now a villus. And so what happens is, at the tip of the villus, cells are going to be shed from the lining of the epithelium into the lumen. The lumen is up here. This is the villus. And the cells are going to shed off into the lumen and be removed from the organ system, OK? So cells are shed into the lumen here, OK? And this is going to balance the renewal at the base of the crypts such that there's homeostasis. So the movie I was showing you at the beginning of class was a movie showing you what happens to a cell undergoing apoptosis. So in this case, the cell is binucleate and unhappy. And then you're going to see that it basically explodes, OK? So that's a cell undergoing apoptosis. But you see that there is a clear change in cell morphology and physiology associated with this. And I just want to point out that this is also something that we talked about earlier in the course. And we talked about experiments, a genetic screen, that led to the identification of the pathway that regulates apoptosis. And much of that work was done by Robert Horvitz in his lab here at MIT. And for that work, Robert Horvitz, in addition to his colleagues, won the 2002 Nobel Prize. So you'll recall this is something we talked about in the context of a genetic screen. But this is what it's doing in your intestine system. It's balancing renewal so that you have homeostasis. So during apoptosis, a cell goes through a series of changes. What happens eventually is the nucleus becomes fragmented, and even the chromosomal DNA gets fragmented. It gets chopped up. 
And also, the plasma membrane starts to bleb and fragment such that it breaks up into what are known as apoptotic bodies, OK? And so you can think of these apoptotic bodies as bite-sized pieces of cell that neighboring phagocytic cells can eat up and remove from the body, OK? In this case, though, the cells are being shed into the lumen. So they don't need to get eaten up, because they're just going to go out of the digestive tract. Cells have numerous ways to activate this apoptosis process. I'm going to tell you about two types of signals that regulate whether or not a cell undergoes apoptosis. The first is that there can be a signal that basically tells the cell to kill itself, OK? So you can think of this as a kill signal. And one way to activate this signal is if the DNA is irreparably damaged. So if there is a high level of DNA damage, this induces a signaling process in the cell. And one of the results of that signal-- in addition to regulating the cell cycle, if the DNA damage is great enough, it will induce an apoptotic signal. And it will activate the pathway that the Horvitz lab elucidated in the worm, OK? So there's a signal. And that leads to apoptosis and death. Another type of signal that's critically important for tissue homeostasis and determining whether or not cells live or die is a survival signal. So there are cell survival signals. And many of the growth factors, such as EGF, which you've heard about-- in addition to inducing proliferation, these growth factors also tell the cell not to die, OK? So these could be growth factors. And what these cell survival signals do is they repress apoptosis. And so you can think of it as a cell that constantly needs to be communicated to. And it needs to be told don't die, don't die, don't die, don't die. And then if you remove that signal, it won't be getting that information anymore. And it can undergo apoptosis, OK? 
So if we were to remove this signal, you remove the brakes on apoptosis. And the cell will undergo cell death, OK? So this ensures that you don't have a cell just kind of going on and doing its own thing, because cells, in order to live, often need to have some sort of signal from another cell that tells them to live. And so there's some coordination between cells such that you don't have cells rampantly dividing out of control. Now, the reason we're doing this right before we talk about cancer on Friday is because everything that I'm telling you is really essential to understand how a tumor is formed in an organ system, OK? And I want to end the lecture by just planting a seed of an idea in your heads before we move on to talk about cancer on Friday. And I want you to think about the organization of this system, where you have stem cells undergoing renewal. And then the stem cells are just a small fraction of the cells in the system. And they're dividing slowly, OK? So let's think about the stem cells. These cells are dividing slowly. And because they're dividing slowly, their DNA is not going to accumulate as many mutations. So there are going to be fewer mutations. But these are the cells, and this is the genomic DNA, that is going to stay with the organ, OK? So the stem cells are like the crown jewels of the organ. This is the material the organ wants to protect, because it's what's going to be lasting in the organ the entire lifetime of the organism, OK? So you get slow division here and self-renewal. And this cell will stay with the organ. But then where most of the mitosis and cell division and replication happens, it leads to an expansion of cells that all differentiate. And because the cells all differentiate, they will all eventually die and get removed from the organ, OK? And this is termed transient amplification. 
So when there's transient amplification of one of the daughters of this stem cell, this is where there's rapid division. And where there is rapid replication and division, this is where you can get the most mutations. But from the standpoint of cancer, that doesn't really matter, right? Because in order to have a tumor, the cells have to stay in the body. And so all of these cells are going to undergo programmed cell death, and then be shed into the lumen of the intestine and removed from the organism entirely, OK? And so this is actually one important way that our organs and our bodies prevent tumors from happening, because the cell type that is going to remain in our body is the one that's protected from accumulating mutations, OK? And we'll come back to this on Friday. And so I will see you on Friday. And we'll talk about cancer on Friday.
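The protective logic of transient amplification can be sketched with a toy calculation. The per-division mutation rate and the division counts below are illustrative assumptions, not numbers from the lecture.

```python
# Toy model: mutations scale with the number of divisions a lineage has
# undergone. Slow-dividing stem cells accumulate few mutations; rapidly
# dividing transit-amplifying cells accumulate many, but those cells
# differentiate, die, and are shed into the lumen, so their mutations
# leave the body. Only the stem cell's genome stays with the organ.

MUTATIONS_PER_DIVISION = 0.1  # illustrative expected rate, not a measured value

def expected_mutations(divisions):
    """Expected mutation count after a given number of divisions."""
    return divisions * MUTATIONS_PER_DIVISION

stem = expected_mutations(5)       # slow self-renewal at the crypt base
transit = expected_mutations(100)  # rapid transient amplification

assert transit > stem  # the fast-dividing lineage carries far more mutations,
# but those cells are the ones removed from the organ; the protected,
# slow-dividing stem cell is what remains.
```

This is the seed of the cancer discussion: the organ keeps its mutation load low in exactly the cells that persist, and concentrates replication, and therefore mutation risk, in cells that are destined to be discarded.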
MIT_7016_Introductory_Biology_Fall_2018
20_Cell_Signaling_1_Overview.txt
BARBARA IMPERIALI: Now, I want to talk today about one small thing before we move on to signaling, because it really kind of completes the work that we talked about with respect to trafficking. So I popped this question up last time, and it seemed like there weren't quite enough people leaping to give me an answer. But let's just take a look at the big picture of things, as it's always good to do. Because this will also get me to one other topic, which is protein misfolding. So at the end of the day, what really defines where a protein is and what it does is its sequence. But you always want to remember that a protein sequence is defined by its messenger. The messenger is defined by the pre-messenger. Yes, there may be splicing events that really cause changes in localization. But the pre-messenger includes the content. And then what defines that is the DNA. There are certain aspects of regulation at the epigenetic level that we barely talk about in this course. But I want you to make sure that you realize that at the end of the day, what the protein is, how it folds, is defined originally by the sequence of the DNA, although it's a long way along here. The post-translational modifications that we started talking about last time are defined by the protein sequence, which all the way back is defined by the DNA-- so, so much of protein function is. And there's one more aspect of proteins that's defined by the DNA sequence, and that's whether a protein folds well, or perhaps, in some cases, misfolds. And that's the thing I want to talk about very briefly today. Because I think that this captures the picture. So let's just go over here and write misfolded proteins, which, just like everything else, largely end up being dictated by the DNA. Because whether a protein folds faithfully into a good structure or misfolds can be a function of the protein sequence. 
So there could be mutations in the protein such that it ultimately misfolds and either forms a misfolded tertiary structure or, even worse, adopts an aggregated form that causes a lot of damage within cells and outside of cells. So I want to talk just briefly about the processes that we have-- it's just one slide-- to deal with misfolded proteins. So when a protein is translated, it almost starts folding straight away, especially large proteins. A fair amount of a protein may have already emerged from the ribosome and started folding, even when the whole protein isn't made. The sequence ultimately ends you up with a well-folded protein. But if the protein does not fold fast enough, or there is a mistake, which might be caused intrinsically by the primary sequence-- so slow folding or incorrect folding-- then you will end up with a protein that's partially folded within the context of a cell. We especially encounter misfolded proteins when we are overexpressing proteins in cells, because you're just making one type of protein really quickly, and it doesn't have a chance to adopt its faithful structure. So there are proteins within the cell that help sort of protect the folding process early on, to allow the protein enough time on its own, without a lot of misfolded copies of itself around, to adopt a folded structure. And these proteins are called chaperones. I don't know if you guys are familiar with the term chaperone. It was a term that was heavily used in the 18th and 19th centuries. A chaperone used to be an aunt or someone who you would send out with your beautiful young daughter, to protect her so she didn't get bothered by those mean men out there. So that was the original definition of the chaperone. And it's kind of interesting that chaperones are now proteins that help folding or protect against misfolding. How do they do this? 
Generally, a protein will fold poorly if it's quite hydrophobic, and hydrophobic patches that are exposed will aggregate. So let's say you have a protein, and there are a lot of copies, but they're not folded. If you have things that are hydrophobic that would normally end up tucked inside the protein, and the protein hasn't folded in its good time, these will just start to form aggregates, sort of associating with each other. It's just a physical phenomenon. If you put something that's got a lot of hydrophobic faces on the outside, it will start forming an aggregated bundle, and not a nicely folded protein at all. What the chaperones may do is in part hold the partially folded protein-- so let's just think of this big jelly bean as a chaperone-- until things start to adopt a favorable state. But sometimes it's just too much. The chaperone cannot handle the flux of protein. So the protein ends up being recognized as misfolded. And then it gets tagged as a misfolded protein, and it gets taken to a place in the cell for disposal. So if you are unable to fold, there is a tagging process. And I mentioned it last time. It's a process known as ubiquitination. This is also a post-translational modification, but it's one that occurs on poorly folded proteins. And I'm going to describe to you that system, because the ubiquitination is the flag, the signal, or the tag, to take this protein to the great shredder, basically. And so-- I like the analogy with a paper shredder. So here's a fellow who's got too much in his inbox. So he just sends it straight to the shredder. It's a little bit like too much misfolded protein being made. So instead of sort of waiting to deal with the paperwork, you just send it straight to the shredder. And the proteasome is the cellular shredder that actually breaks proteins up into small chunks, and then digests them out.
So think of the proteasome as a shredder, which chops up proteins into small pieces, mostly into short peptides that are 8 to 14 amino acids in length-- fairly small. Short peptides won't cause a problem with aggregation, and will then be further digested. Now, if you've got this shredder sitting around in the cell, it's like having a paper shredder on all the time. There's a risk things may end up in there without meaning to. So for things to be tagged for shredding, they go through what's known as the ubiquitin system, and it's only once they're tagged that proteins are sent for shredding up, or chopping up, by the proteasome. So as you can sort of tell by its name, the proteasome has protease function, but it's a large, macromolecular protease, with lots and lots of subunits that are important to cut that polypeptide into smaller pieces. But because many of your proteins may be partially folded or misfolded, they first have to be unfolded. So the ubiquitin is the signal to send proteins to the proteasome, where the second action is protease activity, and the first action is unfolding. So what I show you on this picture is the barrel structure of a proteasome. Let me explain the components of it. The red component of the proteasome is a multimeric ring that uses ATP and starts tugging apart the protein that you need to destroy. But it will only do that if the protein becomes labeled for destruction by the ubiquitin system. And I am showing you here a massively simplified version. Let's say this is a misfolded protein. It gets tagged with another protein. It's a really little protein known as ubiquitin. I've shown you the three-dimensional structure here. And using ATP, you end up managing to put a ubiquitin chain on the protein that's going to be destroyed. That is a post-translational modification that is a tag for destruction. If the protein is not tagged, then it's not going to be chewed up. That makes sense.
You don't want to be chopping up proteins in a cell with wild abandon. Once the ubiquitin chain is on here, the protein will bind to the unfoldase part of the proteasome, and with ATP, it will just start tugging the rest of the residual structure apart to thread the protein down into the blue part of the barrel. It's a little hard to see it like this, but the proteasome is literally four stacked rings. Let me see. I hope my artwork is going to be good enough. Well, that's an unfoldase. And so is that. And then in the center, there is a protease. And each of these components is multimeric, having six or seven subunits. So it's a huge structure. It has a sedimentation coefficient of 26S, that entire structure. I don't know if you remember when I talked about ribosomes, they were so big we didn't tend to talk about them by size. We talked about them by sedimentation coefficient. And the large and small subunits of the ribosome, the eukaryotic one, just to remind you, were 40S and 60S. So just remember that S stands for Svedberg. It's a sedimentation coefficient unit. It describes how fast a particle sediments when you spin it in a centrifuge. So once the protein has been labeled with ubiquitin, it binds to the unfoldase. And then the single strand feeds into the center core, which is two sections of protease. So it's feeding in here. It sees the protease activity. And then just short pieces of protein are spit out of the proteasome. Once these are really little pieces of peptide, they're readily digested by proteases within the cell. And you can recycle the amino acids, or you can do other things with these small pieces of peptide. They actually end up sometimes being sent for presentation on the surface of the cell by the immune system. And you may hear a little bit more about that later. So the proteasome has a molecular weight that's very large-- 2,000 kilodaltons. That's why we refer to it by its sedimentation coefficient.
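The tag-then-shred logic can be sketched as a toy model: only proteins carrying a ubiquitin tag go to the shredder, and the shredder spits out peptides of 8 to 14 residues. This is a cartoon of the logic from the lecture, not of the real machinery-- the sequences and the random cut sizes are made up.

```python
import random

def proteasome_shred(protein, rng):
    """Chop an unfolded polypeptide into short peptides of 8-14 residues,
    mimicking the typical product sizes mentioned in the lecture
    (the final fragment may be shorter)."""
    peptides, i = [], 0
    while i < len(protein):
        cut = rng.randint(8, 14)
        peptides.append(protein[i:i + cut])
        i += cut
    return peptides

def degrade(proteins, rng):
    """Only ubiquitin-tagged proteins are sent to the shredder;
    untagged, well-folded proteins are left alone."""
    survivors, peptides = [], []
    for seq, ubiquitinated in proteins:
        if ubiquitinated:
            peptides.extend(proteasome_shred(seq, rng))
        else:
            survivors.append(seq)
    return survivors, peptides

rng = random.Random(0)
pool = [("M" * 40, True),    # misfolded, tagged -> destroyed
        ("F" * 40, False)]   # well folded, untagged -> spared
survivors, peptides = degrade(pool, rng)
print(survivors)   # only the untagged protein survives
print(all(8 <= len(p) <= 14 for p in peptides[:-1]))   # True
```

The design point the cartoon is meant to capture: the destructive activity is always on, so selectivity lives entirely in the tagging step.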
So this machinery is very important to get rid of misfolded or aggregated proteins, to destroy them. Now, are people aware of the sorts of diseases that can result from misfolded proteins? Has anyone been reading the news much about certain types of diseases, particularly in neurobiology? Anyone aware of those? Yeah. AUDIENCE: Was it mad cow disease? BARBARA IMPERIALI: Which one? AUDIENCE: Mad cow. BARBARA IMPERIALI: Yes. Mad cow. So there are a variety of neurological disorders, and mad cow is one of them. Creutzfeldt-Jakob-- that's Creutz, U-T-Z, feldt-Jakob. But Alzheimer's disease is another one. Pick's disease is another. There are a wide variety of neurological disorders that result from misfolded proteins, both inside the cell and in the extracellular matrix, forming these tangles that are toxic to the neurons, causing them to no longer function, and then resulting in many of these neurological disorders. The ones I've described to you, I've mentioned to you here. I know many of you are familiar with Alzheimer's disease. Mad cow disease is a variant of a particular protein misfolding disease that was first noted in cattle. And they basically just fell down, dropped down. And the contagion with the disease is ascribed in some cases not to a virus or to a microorganism, but literally, to misfolded proteins causing the formation of more misfolded proteins. So these are all collectively designated as prion diseases. I think you'll have read that term. And it's a particular kind of disease where the infectious agent isn't a living system-- not a virus, not a microbe, a fungus, a protozoan-- but rather a protein, where its misfolded structure nucleates the formation of more misfolded structure that leads to the disease. So I grew up in England during the years where there was a lot of mad cow disease in England.
And even though I've been a vegetarian in the US for 30 years, I can't give blood in the US, because I lived in England during the time when there was a lot of mad cow disease. And this can be dormant for a long, long time before it suddenly takes over. So there are restrictions on blood donation in certain cases. And it's because it's not something you can treat with an antibiotic or treat with an antiviral. It's literally traces of badly folded protein that can nucleate the formation of more badly folded protein, and that can lead to the diseases. There are particular instances of some of these diseases in tribes where there was pretty serious cannibalism, and eating your sort of senior relative's brains was considered to be an important act of respect. And there was a transfer of some of these prion-type diseases through cannibalism as well. So eating contaminated meat, be it a cow, be it your grandparents, whatever, is actually a serious route for a transmissible disease. The situations where the disease can be related back to contaminated meat are one thing. But there are variations, in the case of Alzheimer's, where the sequence of proteins may dictate that they don't fold well, or they're not post-translationally modified properly, so they end up as misfolded proteins. So these are often genetically linked disorders, some of the things like Alzheimer's. And once again, remember that goes all the way back to the DNA, which might, in some cases, trigger the misfolding disease. So it's a fascinating area, and there's a tremendous amount to be studied. Because of the aging population, these diseases are piling up, and we need to mitigate the causes of the disease, and find ways, for example, to slow the progression down. If there are these fibrils of protein that are misfolded, can we maybe inhibit that formation with some kind of small molecule inhibitor to mitigate the symptoms of the disease?
So it's a very, very active area, because almost every-- many, many neurological disorders seem to be coming down to misfolded proteins. So let's move on now to signaling. All right. So we're going to spend two lectures on-- the remainder of this lecture plus the next lecture. And what I want to do in this lecture is introduce you to some of the paradigms, the nuts and bolts, the mechanics of protein signaling. And then in the next lecture, I'm going to show you examples of how all the characteristics that we define signaling by get represented in signaling pathways within cells. So I'm going to give you all the moving parts, and then we'll move forward to see how the moving parts might function in a physiological action, such as a response to something particularly scary, or as a trigger to do-- for the cells to do something different. So let me take you, first of all, to a cartoon-like image of a cell. And we're going to just take from the very simplest beginning. But then this topic will get quite complex, as you see. But that's why I think it's important to reduce the process of protein signaling down to simple aspects of it that we can really recognize, even in much more complicated pathways. So in protein cellular signaling, this is a complex system of communication that governs all basic activities of the cell. There are no cells that don't do signaling. Bacteria and eukaryotic cells may do signaling slightly differently. But they still do have an integrated correlated system that's responsible for triggering functions of the cell through a series of discrete steps. So protein signaling can be dissected into three basic steps, where you, first of all, receive a signal. And we're going to talk about what that signal is. What's the nature of that signal, is it small molecule, large molecule? Where is the signal? Where does it act? Then the next step is to transduce the signal. And finally, you have an outcome, which is a response. 
So we're going to talk about each of these components in order to understand flux through cellular signaling pathways, and how they work to give you a rapid response to a necessary signal. All right. So in this cartoon, let's just, for example, think about what if we want to trigger cell division? We might have a signal, which is the yellow molecule-- a small molecule, large molecule. We'll get to that later. There's a cell here, where on the surface of the cell is a receptor. And that would be the entity that receives the signal. So in the first step in the process, there's a buildup of a concentration of a signal. And it occupies the receptors on the surface of the cell, and in some cases, inside the cell. We'll talk about a bifurcation there. But really, a lot of cellular signaling is dominated by signals coming from outside the cell. What happens upon this binding event is the transduction. If you bind to something on the outside of the cell, as a consequence, you might have a change on that structure. If it crosses the membrane, you might have a change on that same molecule's structure that's on the inside of the cell. So that's why it's called transduction. You're transducing a soluble signal from outside, binding that signal to the cell surface receptor. And the cell surface receptor is responding in some way. And there are two principal ways in which we respond to extracellular signals, and we'll cover them both. The next event that might happen is, through the change that happens to the intracellular component of the receptor, there might be a change, a binding event, another step occurring within the cell. And as a function of that, you get a response. All right? So thinking of it in these three components is a good way to kind of dissect out the beginnings of the complication. And then what we'll be able to do is really start to see what kinds of molecules come in? How are they received? How is the signal transduced?
And what's the ultimate outcome with respect to a response? Everyone OK with that? All right. Now this is what you have to look forward to. So we give you something with three moving parts, and suddenly we show you something with sort of, you know, 100 moving parts. And cell biologists very, very frequently look at these maps of cells, where each of these sort of little acronyms or names is a protein, where they have been mapped out through cell biology and cellular biochemistry to be existing in certain components of the cell. And what has also been mapped out very frequently is who talks to who-- so the fact that JAKs might interact with STATs, and so on. So much of this was worked out through cell biology and biochemistry, and also by genetics. So Professor Martin has talked to you about identifying a player in a complex system by genetics. Let's say you have a cell that fails to divide, or divides unevenly, or has some defect in cell division. You might be able to screen and pick out a particular player. Now, the key thing I want to point out to you with this cell is what's on the outside of the cell and runs across the membrane, and might have the chance, the opportunity, to have both an extracellular receptor function and an intracellular function. And those key proteins are things like receptor tyrosine kinases. And we're going to talk about all of these in a moment. G protein-coupled receptors, and various other cell surface receptors-- so all of these-- anything that spans a membrane has the opportunity to be an important component of a signaling pathway. Because what you're routinely trying to do is have your signal recognized on the outside of the cell by something that spans the membrane. The signal will bind to that. And then you will have an intracellular response. So that's breaking it down.
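The three steps-- receive, transduce, respond-- can be reduced to a minimal sketch. All the names here (growth_factor, "divide", and so on) are made-up placeholders to show the shape of the pipeline, not a real pathway.

```python
# Minimal sketch of the three steps of signaling from the lecture.
# Names like "growth_factor" are illustrative placeholders only.

def receive(signal, receptor):
    """Step 1: the receptor only accepts its own specific signal."""
    return signal == receptor["accepts"]

def transduce(bound):
    """Step 2: binding outside the cell flips the state of the
    receptor's intracellular face (modeled here as a simple flag)."""
    return "active" if bound else "inactive"

def respond(state):
    """Step 3: an active intracellular state triggers the outcome."""
    return "divide" if state == "active" else "do nothing"

receptor = {"accepts": "growth_factor"}
print(respond(transduce(receive("growth_factor", receptor))))   # divide
print(respond(transduce(receive("other_molecule", receptor))))  # do nothing
```

The real maps with 100 moving parts are, at bottom, many of these receive-transduce-respond units wired together.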
That's why proteins that are made through the secretory pathway that we talked about in the last lecture, that go through that endomembrane system, and end up being parked in the plasma membrane are so important. Other proteins that actually get secreted through that pathway are also important. What do you think they may be important for? Let's say you've made a protein within the cell. It goes through all the system. It doesn't stay parked in the cell membrane. It actually gets released from the cell. What might that be doing? AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: Yeah. Exactly. So that endomembrane system that I described to you, that pathway is great for making receivers. And it's great for making signals. And that's really what can sort of fuel the functions of cells. OK. So in systems biology, you may have heard this term quite frequently. Systems biology is research that helps us understand the underlying structure of signaling networks. So a lot of people who have common interests in engineering, computational analysis and cell biology, might bring in data to allow them to make models of cellular systems, to understand flux through signaling pathways. So they may make fundamental measurements about the concentrations of some components within the cell. And then try to say, OK, I know based on everything I've measured that this is a dominant pathway for gene regulation. And I could control this pathway by different-- by sort of different types of interactions. In this cellular system, I also show you another component, which is the nucleus. And when we discuss and describe specific cell signaling networks, in some cases the signaling network may involve receiving a signal, undergoing a variety of changes in the cytoplasm, but then a change that eventually results in a protein going to the nucleus. And oftentimes, those proteins that run into the nucleus are transcription factors that then trigger DNA replication or transcription. 
And then promote activities. So this is how you think about it. When you think of cellular signaling, it's really about, what does the signal need to do? And what's the pathway that I follow to get there? So all of those are membrane proteins. So now let's look at the canonical aspects of signal transduction. I'm going to rely on these little cartoons, but I want, both in this lecture and the next, to really show you where these recur in so many systems. So to that purpose, I want to talk about the characteristics. The first critical characteristic is a signal and its specificity. So a signal will be something that comes from outside of the cell. It could be a hormone that's produced in the hypothalamus and sent to another organ. But the most important thing about the signal is that that signal, which binds to a receptor in a cell membrane, is specific for a particular receptor, and a different signal won't bind to the same receptor. You have to have faithful signal specificity to trigger the right function. So if it's a hormone, it's got to be the hormone that you want to trigger the receptor, not a related but different-looking structure. If it's a small protein, you want it to be the exact one that binds with high specificity to a receptor. So what that means is, if a small molecule is binding to a protein on the surface of a cell with high specificity and high affinity, it means that even at a low concentration, it will make that binding contact. But all the other small molecules that are around won't crosstalk into triggering that interaction. So we have high specificity, and we gain that specificity through macromolecular interactions, just like the ones we talked about in biochemistry.
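The point about affinity-- that a specific signal occupies its receptor even at low concentration-- can be made quantitative with simple one-site binding, where the fraction of occupied receptors is [L] / (Kd + [L]). The Kd values below are made up purely for illustration.

```python
# Simple one-site binding sketch: fraction of receptors occupied at a
# given free ligand concentration. Kd values here are illustrative only.

def fractional_occupancy(ligand_conc, kd):
    """Fraction of receptors bound; concentration and Kd in the same units."""
    return ligand_conc / (kd + ligand_conc)

low_signal = 1e-9   # 1 nM of signal molecule

# A high-affinity receptor (low Kd) is mostly occupied even at 1 nM...
print(round(fractional_occupancy(low_signal, kd=1e-10), 2))   # 0.91
# ...while a low-affinity one (high Kd) barely notices the signal.
print(round(fractional_occupancy(low_signal, kd=1e-6), 4))    # 0.001
```

This is the numerical face of specificity: the right ligand, with a tight Kd, dominates the receptor even when it is scarce, while everything else around it stays effectively unbound.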
So if we have a small molecule or a protein bind to the receptor, it's making all those hydrogen bonding, electrostatic, noncovalent types of interactions with high specificity, so that a low concentration of the signal molecule is efficient for binding to the receptor to trigger the function. The next characteristic is amplification. Now let's put some lines between these guys. Now, with all the signaling pathways that you're going to see, we're going to be looking where in a pathway you get amplification. Very commonly, you might have a response that's just the result of a single molecule binding a single receptor. But at the end of the day, you might want a large response. You might want to make a lot of ATP. Or you might want to replicate all of the genome. So you need some kind of amplification where in a sense you're turning up the volume on your signal. And you need to do that rapidly. So frequently in signaling pathways, you go through a cascade of reactions where the signal might affect an enzyme. But once you make that enzyme active, it might work on many, many copies of another enzyme. And then each of those may work on even more copies. So that's what I mean by amplification, where at some stage you've generated a molecule that can result in the cascade of a reaction. So we often refer to these as cascades. So if you're Spanish-speaking, cascada. You want to think about a waterfall coming from just a single molecule of water. You're getting a large increase in your signal as a result of amplification. The next feature or characteristic of signaling is feedback. At the end of the day, if you're signaling, I got to make some ATP. I got to run out of the woods. I'm getting chased. At a certain stage, you need to stop all of the process occurring. So feedback is just a negative feedback loop that might slow down some of those steps that are involved in amplification. So for a pathway, you only want the pathway turned on for a prescribed amount of time. 
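The cascade idea-- each tier of active enzymes switching on many copies of the next enzyme-- multiplies out very quickly. A toy calculation with made-up numbers:

```python
# Toy amplification cascade: each active molecule activates
# `activations_per_step` copies of the next enzyme in the tier below,
# so the signal is multiplied at every step. Numbers are illustrative.

def cascade_output(n_signals, activations_per_step, n_steps):
    active = n_signals
    for _ in range(n_steps):
        active *= activations_per_step
    return active

# One hormone molecule, three tiers, 100 activations per tier:
print(cascade_output(1, 100, 3))   # 1000000
```

So a single binding event can plausibly account for a million-fold response at the bottom of a three-tier cascade, which is why a faint signal can still trigger a large outcome like making a lot of ATP.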
And then you want to be able to say, I'm done with that whole pathway. I don't need to keep churning through all those enzymes. It's time to stop that. And that usually occurs through negative feedback. And remember, we talked about negative feedback when we were talking about enzyme-catalyzed pathways. So feedback is very often some kind of negative feedback, which suppresses a series of transformations, perhaps through a product of those transformations acting as an inhibitor on an early step. And then finally, the other component of a signaling network-- if you think of signaling networks as electronic circuits, you have integration. So that's the last characteristic feature. And let me go back to that big circuit diagram quickly to show you an example of integration. So if you look at this signaling pathway, all these signaling steps are not single, where you just have a signal come in and you end up, for example, in the nucleus. Rather, other components may have crosstalk within one pathway, and act by either amplifying or turning down a particular signaling pathway. So these are networks. They're not pathways. They're networks that interact and communicate, all to amplify signals or turn down signals. So integration is an important part of signaling, because you're often dealing with the integrated function of a number of pathways to get a particular response. And that actually ends up being one of the situations where sometimes a particular enzyme may look like a perfect target for a therapeutic agent. But if you don't take into account the integration steps, you may think you're dealing with a single pathway, but you're, rather, dealing with crosstalk with a lot of other pathways. And what often happens in a cell is there's compensation from other pathways. Is everybody following? Any questions here about this?
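Negative feedback can be sketched as a toy simulation in which the accumulating product inhibits the first step (and also turns over), so the output rises and then holds at a steady level instead of growing without bound. All the rate constants here are arbitrary, chosen only to show the behavior.

```python
# Toy negative feedback loop: the product inhibits its own synthesis
# (the ki term) and is also degraded (the decay term), so the pathway
# output levels off at a steady state. All constants are arbitrary.

def run_pathway(steps, rate=10.0, ki=50.0, decay=0.05):
    product = 0.0
    for _ in range(steps):
        synthesis = rate * ki / (ki + product)   # product inhibits step 1
        product += synthesis - decay * product   # minus turnover
    return product

print(round(run_pathway(10), 1))     # early on: still rising
print(round(run_pathway(1000), 1))   # later: settled at a plateau
print(round(run_pathway(2000), 1))   # same value -- feedback holds it there
```

Without the feedback term, the amplification cascade from the previous characteristic would keep the product climbing indefinitely; the feedback is what lets the cell turn a pathway on for a prescribed amount of time and then stop.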
So what I want you to think about is that it's just amazing what is orchestrated to have even the simplest functions in the cell, how many interacting components there may be. Specificity, amplification, feedback, and integration-- all right, so let's talk briefly about types of signals and how we name them, where they come from, in order to make sure we're all on the same page with respect to the language that's used. So now, signals may take different molecular forms. They may be small molecules, for example, an amino acid or a phospholipid-- just something little. Alternatively, they may be proteins. They may be carbohydrates. They might take different forms in terms of their molecular structure. But we tend to describe signals by where they come from. So what I've shown you here is a picture from the book that just describes how we refer to certain signals. So there are four different terms-- autocrine, juxtacrine-- and I'm going to just give you a little hint on how to remember these terms-- paracrine, endocrine. OK. So these don't tell you anything about the molecule. They tell you about where it's come from. So an autocrine signal is a signal that may come from a cell, but it's signaling to itself. So it may produce a component that's released. And so it's producing this through a secretory pathway. It's released, and it stays in the vicinity of the cell. So the cell is signaling to itself. So whenever you see something auto, you just want to say, oh, that means it's coming from the same cell where the signal acts. Let's move to the next one, which is paracrine-- I'll talk about juxtacrine in a minute-- and that's usually from a nearby cell, not a cell that's in contact-- definitely a different cell. So paracrine we would always call nearby. And endocrine is completely from somewhere else, so perhaps coming through the circulatory system. One cell may release an endocrine signal. It may weave its way through the vascular system, and then target a cell.
So endocrine is always from a distance. And juxtacrine is the only one that's a little odd. It's really from cells that actually are in contact with each other. So it's not self-signaling within a cell. It's not a cell that's nearby but not touching. It's actually physically making a contact. And so that's the last terminology there. So hopefully, I can get this calcium wave to show you. This is just a video of juxtacrine signaling. I just want you to sort of keep an eye on things. What you're observing here is a dye that lights up in the presence of calcium flux. It's called Fura-2. And when you stare at these for long enough, what you can notice is that a signal will often come from an adjacent cell right near another-- so there are long processes. You're not looking at the entire cell, but they're definitely-- for example, this little duo down here, they keep signaling to each other. And that's just juxtacrine signaling, because the cells are in contact. So that just shows you the difference there. If it was autocrine, you'd just have a single cell responding. If it's paracrine, they would be at more of a distance from each other. I hope that imagery-- this is from a website in the Smith Lab at Stanford. OK. And then the last thing, I want to give you an example-- there are many, many hormones in the body that undergo endocrine signaling, and one example I thought I would tell you about: you all know that insulin is made in the pancreas. It's an important hormone for regulating glucose levels. And it actually functions at the muscle level. So insulin is an example of an endocrine signal, because it travels a distance from where it's made in the body to where it functions in the body. All right. Now-- so we've talked about the types of signals. Let's now move to the types of receptors. Now, we cover both the intracellular and the cell surface receptors. But we really will focus a lot on the cell surface receptors.
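The four origin-based terms can be collected into a toy classifier. The cells, positions, and the distance cutoff below are entirely made up-- the code only encodes the definitions: same cell is autocrine, touching cells are juxtacrine, nearby is paracrine, distant (via circulation) is endocrine.

```python
# Toy classifier for the four origin-based signal terms. The "position"
# coordinate and the cutoff of 10 are arbitrary illustrative choices.

def classify_signal(source_cell, target_cell, in_contact=False):
    if source_cell is target_cell:
        return "autocrine"      # the cell signals to itself
    if in_contact:
        return "juxtacrine"     # cells physically touching
    distance = abs(source_cell["position"] - target_cell["position"])
    return "paracrine" if distance < 10 else "endocrine"

a = {"position": 0}
b = {"position": 2}
pancreas = {"position": 0}
muscle = {"position": 1000}
print(classify_signal(a, a))                    # autocrine
print(classify_signal(a, b, in_contact=True))   # juxtacrine
print(classify_signal(a, b))                    # paracrine
print(classify_signal(pancreas, muscle))        # endocrine, like insulin
```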
I just want to give you a clue that not all signaling is cell surface. So what I've shown you here is a cartoon where you see signaling where a signal comes from outside the cell. It goes into the cell and triggers a change. And then the majority of the time we'll talk about these receptors that are in the plasma membrane. And they have an outside place where the signal binds, and they trigger a response inside. And it's only very specific signals that are able to signal intracellularly, that is, to cross the membrane to get inside the cytoplasm to do the triggering. What kinds of molecules can cross the membrane easily? We talked about that before, when we talked about getting across that barrier. Yeah? Nonpolar. OK. So you can stare at a molecule, and if it's very polar or pretty large, it's not going to be able to sneak through a membrane. But something like a steroid molecule, a large, greasy molecule, can definitely make that transition. And so those are the only types of signals where we can really do the signaling inside the cell, because they can get across the membrane. Many, many other signals have to go through this-- bind to the outside of a cell, and transduce a signal to the inside of the cell. So one very typical signal that can bind to an intracellular receptor is a steroid. So remember when I talked to you about these lipidic molecules, things like testosterone and cortisol. These are very hydrophobic molecules. So they literally can cross from the outside of the cell without a transporter. So take, for example, the hormone cortisol. When that functions, an amount of it becomes available, for example, in the bloodstream. It crosses into the cell, and it binds to an intracellular receptor. Once it binds to that intracellular receptor, this disengages a different kind of chaperone protein that's keeping the receptor stable. Once it's bound, it can then go into the nucleus and trigger transcription.
So this is the one example of an intracellular receptor that we'll talk about. I just wanted to show you a little bit about the steroid receptors. These are macromolecules, proteins with quite a complex structure. But they can literally-- and I'll show you the picture at the beginning of the talk next time-- they can literally engulf these steroid molecules. So once the steroid is bound, the receptor completely changes shape. And that's what enables the change for it to be triggered and sent to the nucleus. Now, the key types of receptors that we'll focus on, though, are the cell surface receptors. And there are three basic classes of molecules that occur in the plasma membrane that are critical for cellular signaling. They are the G protein-coupled receptors, the receptor tyrosine kinases, and then you will talk in lecture 22 about ion channels, and how they perform a receptor function. So the membrane proteins, first of all, I want to underscore their importance. They comprise 50% of drug targets, the receptor tyrosine kinases and the G protein-coupled receptors. The G protein-coupled receptors have this 7 transmembrane helix structure, which spans a membrane. This would be the outside of the cell, and the inside of the cell-- so there are signals going across there. The receptor tyrosine kinases are another important type of receptor. They are receptors that dimerize in the presence of a ligand, and then cause intracellular signaling. Once again, they cross the plasma membrane from the outside to the cytosol. And then lastly, there are the ion channels, which also may cross the plasma membrane. And when you think about these classes of proteins, there's a tremendous amount to be learned with respect to their functions. And it is so important to understand their physiological functions in the body, because they really represent the place, the nexus, where signaling happens in the cell.
So I want to briefly show you a picture of a GPCR. It's a 7 transmembrane helix structure. You can see it here. About 30% of modern drugs actually target the GPCRs. And here, I'm just going to show you the structure of a GPCR. Those are the 7 transmembrane helices. If you stretch them out, that's about the width of a membrane. And this is typical of a signal that would bind to that kind of receptor. This is a chemokine-- a small protein that binds this kind of receptor. So you can see that structure and how it would go from one side of the membrane to the other. In its bound state, the chemokine binds to the 7 transmembrane helix receptor through kind of a clamping action. The magenta is the chemokine. The blue and the green space-filled parts are actually what holds the chemokine. And if you look at it where the membrane would be, you can see how you can transduce a signal from one side of the membrane to the other, by the binding of the magenta molecule to the outside of the cell, to those loops outside the cell. That would cause a significant perturbation to the biology and chemistry of what's going on on the inside. So next class, we'll talk about pathways that are initiated by these G protein-coupled receptors, and what that terminology means. OK.
MIT 7.016 Introductory Biology, Fall 2018. Lecture 10: Translation.
BARBARA IMPERIALI: So what we are going to talk about today-- and I'm going to do something a little annoying, as usual: I want to take you somewhere here. Yeah, sure, back to carbon, that's fine. But what I want to show you is the sizes of the molecular players in translation, because we are going here now from the smallest carbon atom. And we spent the first four lectures on amino acids, nucleotides, sugars, phospholipids. But we're entering new territory here, and I'm just going to do this in such a way that it comes onto the screens. What you can see here is, in order to make proteins like hemoglobin and antibodies-- these are made up of amino acids, the really tiny things that are out of view right now-- we need large entities that are made up of either nucleic acids alone or nucleic acids plus proteins. And so the things that will feature today are the transfer RNAs and the ribosome. And I want you to look at the size of this molecular machine. We're getting pretty large now. We're needing to use large machines to make the smaller catalysts that are essential in the cell. And maybe size doesn't seem so important, but let's just go a little bit further here. And what to me is intriguing is that the size of the ribosome is pretty similar to the size of the rhinovirus, a little smaller than the hepatitis virus, but quite a bit smaller than some of these other viruses. So the ribosome is a large entity in the cell. And, when you do look at electron micrographs of cells, you can see these dark dots, which represent ribosomes. They're big enough to see, whereas the proteins themselves are not big enough to see. So that's what you're destined for today. All right, OK, so I've started to place on this board some of the molecular players, the messenger RNA, the transfer RNA, and the ribosomes, which are made up of ribosomal RNA plus protein.
And I want to just remind you about the structure of the mature messenger RNA just for a minute because, in the last class, we talked about a lot of manipulations of that pre-messenger RNA, the fresh thing out of transcription. And now I just want to remind you that the messenger is single-stranded RNA, obviously. It has a 5 prime cap, which has got this funky 5 prime, 5 prime bridge that's resistant to exonucleases. Somewhere in that sequence is a start codon. It's something that says this is the bit I want to translate. Often, there's a lot of stuff here that you don't translate. It's part of what's known as the ribosome binding site. And there are many features in this part of the sequence that are very important for translation. They contribute to the efficiency of translation. Generically, we'd call it the ribosome binding site, but there are funny things called Shine-Dalgarno sequences and stuff. Don't worry about any of that. I just want you to appreciate that, in the mature transcript, you don't translate the whole thing. A lot of this stuff is structural, functional for other reasons that contribute to the success of translation. Once you see one of these, the ribosome mows its way through and reads the nucleic acids in the message. So the message is being conveyed over to the new machinery. And then, when you hit one of these three codons, and we'll discuss these properly when we get to them, it's time to stop and finish translating. At the other end, the message has a poly-adenine tail. Remember that, once again, is structural to protect the end of that transcript. Even if some of the hundreds of adenine nucleotides are nibbled off, you don't get into the part of the gene that's critical to be translated. At a certain stage, though, you might get in. Exonucleases might chew up enough. And they may end up chewing up your transcript, but that probably suggests that the messenger has been around too long, and it's time for it to retire to a better life, OK?
So remember the poly-A tail. And this, once again, plays other functional roles with respect to being recognized as a transcript and being helped to get out of the nucleus. So this was really what we talked about last week. There's one more feature in here. I'm just going to remind you that the final mature transcript has also been through splicing, with removal of introns and the pasting together of exons, which is a wonderful way to diversify transcripts for translation and give you many more proteins than one gene could otherwise encode, OK? And we talked about that last time. Great. So the first thing we have to think about here is how do we go from four bases to a language that includes 20 letters, right? Or it's better, really, more precise, for me to call them nucleotides. It's more precise because the base just represents the ring system that's attached to the ribose. Nucleotide means the whole thing, including a phosphate. So we go from four bases to encoding 20 amino acids. Now there are a few organisms-- and, in fact, we have one spare one as well, selenocysteine. There are a couple of other amino acids that might be designated as the so-called 21st, 22nd amino acid. They're not found globally in all organisms. We have selenocysteine in just very, very few proteins. So it's something beyond the list of 20 that you saw. There are other organisms, for example, in archaea, these guys who hang out in bubbling hot pots in Yellowstone, for example, that have another amino acid known as pyrrolysine. I'm going to mention that a little bit later on, but the ones you really need to think about are the ones we're encoding in the global genetic code. These are the ones that are common to everybody, all right? So, obviously, when you look at the language of bases, if the language translated directly one base to one amino acid, we could only encode four amino acids. So we know it's not one base, one amino acid.
And I know, at this stage, you know that it's three bases, but let's just go through the math or the original questions that were sort of circulating. Like how do we go from this language to the other language? If two bases encoded each amino acid, we could only encode 16 amino acids-- that's 4 to the 2, whereas one base per amino acid would be 4 to the 1. That's still not enough. So what that came down to was realizing that that wouldn't be a sufficient language to encode the 20 amino acids. So it was finally deduced that three bases encoded each amino acid. That would give us 64 possible words in the language that needs to be translated. That's a lot more than we need. We only need 20 for the encoded amino acids, so 64 possibilities. But what else do we need in the language? We need a few more things anyway. Yeah, up there. AUDIENCE: Oh, I don't [INAUDIBLE] BARBARA IMPERIALI: Oh, you weren't. Up there. AUDIENCE: You need to know when to start and stop. BARBARA IMPERIALI: So, exactly, so this isn't necessarily a code that just uniformly spells amino acids through the sequence. We need a precise way to say start. And, in fact, we need a way to say stop. And there are multiple three-letter words that say stop. So it turns out that the genetic code, which forms the basis of this entire concept, has some features to it where it does have some degeneracy. But we'll go through the degeneracy, and we'll take a look at the genetic code because it will tell us exactly how the words made of three bases encode everything we need for translation, all right? So let's just go back and take a look at this. So we've looked at the messenger. I've told you a little bit about the tRNAs and the proteins. But now let me just give you a little bit of the back history.
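The codon-counting argument above is easy to check for yourself. Here is a quick Python sketch (my own illustration, not part of the lecture):

```python
from itertools import product

BASES = "UCAG"  # the four RNA bases

# How many distinct "words" can you spell with 1, 2, or 3 bases?
for length in (1, 2, 3):
    words = ["".join(p) for p in product(BASES, repeat=length)]
    print(length, "bases:", len(words), "possible words")
```

One base gives 4 words, two bases give 16, and only the triplet code reaches 64, which is enough to cover 20 amino acids plus start and stop signals.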
And, once the structure of double-stranded DNA was deduced, really, everyone was moving on to trying to understand how that was converted, through translation, into proteins. And there were a lot of workers deeply involved in this. Crick and Brenner realized it was three bases that code for one amino acid, but Khorana, Nirenberg, and Holley-- Khorana was part of our faculty for many, many years-- actually defined that genetic code and got all of the details. Brenner and Crick started to have the ideas, but, really, the definition came from a process known as cell-free translation, where they could very carefully add components to understand how the genetic code was formulated, where they put in specific messenger RNAs and amino acids and transfer RNAs and actually made proteins from that. So that's the work that Khorana and others did. And they were awarded a Nobel Prize for that work. And then, later on, things started to get-- you know, these are decades of work, I want to point out to you. The ribosomes were discovered. That was a decade later-- the sort of details of the composition, but not the structure itself. And it was really exciting in the 2000s when Ramakrishnan, Steitz, and Yonath solved the structure of the prokaryotic ribosome. So each of these things has taken a decade to happen, but they are fundamental, major, important things that we can act on and move forward to understand more. All right, so let's move to the transfer RNAs. And I've flashed up this slide a couple of times, but I actually now have the movie of the structure of a transfer RNA. So the transfer RNA is a linear segment of RNA, but it's folded up the way ribonucleic acids are, with three or four stretches of double strand coming together and loops in between them. And at the one end-- this is the 3 prime end-- I always try to draw it on the left because, otherwise, things get confusing.
It's much easier to sort of see where everything ends up if you put the long arm on the 3 prime side to the left of your picture. And what I think is cool, when you look at the structure of RNA, is that we think of messenger RNA as being a sort of rather floppy entity, but it actually really likes to form short segments of double helix. It just doesn't do well with the really long double-stranded structure the same way that DNA does. But this folded up structure is very important. And it was on the observation of these folded structures that the adaptor hypothesis was formulated. But the two things that you want to remember about the transfer RNA are that the 3 prime hydroxyl group of the last ribose within this transfer RNA sequence is where the amino acid that's going to be loaded into your protein is attached. So that's one point. And then there's another landmark on this structure. And that's what's called-- one of the loops has a special name. It's called the anticodon loop. It comprises three nucleotides that are complementary to the nucleotides in your messenger sequence. So this really is a decoder because, at one end, it's carrying an amino acid, but it's carrying the amino acid that corresponds to the code that's in the messenger via that anticodon loop. So it's a large structure, but don't mistake it for being something that's just sort of amino acid and anticodon. A lot goes on with the rest of the structure. It's a very important structure in the mechanisms of protein translation and synthesis. OK, so here I've got to sum that up with a couple more of the ways that you would see the transfer RNAs. You might see it in this globular form. And I pointed out the anticodon loop. The place where the amino acid gets linked is also called the acceptor stem. And, up here, I show you that linkage. And you should-- yes? AUDIENCE: I was just going to ask, I see how the anticodons are specialized. How does the 3 prime end of the tRNA know which amino acid is bound?
BARBARA IMPERIALI: Yeah, and, in a moment, not quite yet, I will show you structures of tRNAs bound to their synthetases, which are the enzymes that load them. So there is specificity throughout that whole thing. It's not just bystander stuff. It's really involved. And it's a great question, and I hope you'll get an answer that's reasonably satisfying from the structure perspective. And so, if you look up here at the acceptor stem, the amino acid is joined by an ester bond to the 3 prime end of the transfer RNA, and, hopefully, you can see in here. There's the carboxyl. There's the amine. And CHR designates the amino acid, where R would be the side chain of your amino acid. So that's what that looks like at that end of the transfer RNA. And, coming down to the anticodon loop, you're going to read the messenger 5 prime to 3 prime. And the anticodon loop, when you draw this in this configuration, actually shows you that the anticodon is antiparallel to the codon to make that good hydrogen-bonding network. So that's why I like to be consistent in the way I render this structure. So the anticodon loop of the transfer RNA complements that triplet codon in the messenger RNA, all right? Yes? AUDIENCE: [INAUDIBLE]. What's between the G and T? BARBARA IMPERIALI: What's between the G and T? Hold on. I'm going to-- what's between the G and the-- AUDIENCE: On the loop there. BARBARA IMPERIALI: Over here? AUDIENCE: No, above that on the left. BARBARA IMPERIALI: On the left. Oh, this guy? So cool you ask that. So this guy is what's known as a funny base. That's the symbol for it that you've picked out. And it's a base known as pseudouridine. And it turns out that these unusual bases show up in RNA sequences. Pseudouridine has an interesting structure where the bond to the ribose ring is not a carbon-nitrogen bond, but, rather, a carbon-carbon bond. And it's a bit more stable. So, in RNA, there are some of these other unusual bases.
And pseudouridine is the most common of the unusual ones. It can still hydrogen bond, but it tends to show up in these sort of different loops and turn-type places, OK? Thanks for noticing that. Yeah? AUDIENCE: So do you get the G in parentheses on the yellow part [INAUDIBLE]? BARBARA IMPERIALI: The G, which is in parentheses, designates that even those sort of bulges between the real loops can vary in length. So it could be more than one. So that comes back to that other question. But that stuff in between is variable. It has variable bulges and variable shapes that are recognized by the synthetase enzymes that I'll introduce in a minute. All great questions. OK, all right, cool. OK, so we've got all of that. So now let's move ourselves on-- we've looked at, we know, the messenger well. We're starting to understand the tRNAs. What we need to move on to now is sort of the most important part of the game, which is really taking a look at the genetic code. So this table is but one rendition of the genetic code. Sometimes, you'll see it in different shapes and sizes. In a second, I'll show you one of the other renditions. But it is the absolute Rosetta Stone for translating messenger RNA to amino acid sequence using codons. So this sort of-- whoa. Getting a little-- I love translation. I'm sorry. I'm getting a little bit excited about translation. OK, so there are a few features of this genetic code. Number one, you won't have to remember it. It would be foolish for us to think you could, but there are characteristics about the genetic code that are very important. But, first of all, let me sort of calibrate you. This would be the first of the three letters in the codon. And, by the way, the genetic code gives you the identities of what are known as the codons, which is how we designate the triplets of nucleotides. So you'll hear a lot about codons and anticodons, of course.
So the way you read this table is you read the first letter. So all of these begin with U. All of these begin with C and so on. Then you read the second letter, and there's those designations. So, if I'm going here, it's going to be starting UU. And then, within each block, there are the four alternatives for the third base, and the third letter just designates those. So you can read, for each amino acid, what three-letter codon would correspond to it. Some people quite like-- oops. I got rid of it too fast. Some people quite like this other rendition where the first of the three letters is in the center. So it's either G, U, A, or C. Then the second one comes out. It's in the brown circle. And then the third one is the third letter in the codon. And then that tells you what amino acid it corresponds to. But we'll generally just stick with this one table, so you might as well get used to that particular table. Now there's a couple of characteristics in the table. The first thing to notice is that, within that table, there is a codon to start. And it's AUG. And the one thing that's sort of meant to drive you crazy is that the AUG codon equals start, but it also equals methionine. And, in bacteria, it equals a modified version of methionine. So, if the ribosome is reading the messenger RNA, it will look for the first one of these and start reading. But, once it finds another one of these further into the sequence, it will put in methionine. Methionine is fairly rare. There may only be one or two more in the protein. And I hate the illogic of that, but, nevertheless, it is the case. In some organisms, you start with different amino acids, but the most common start is the codon for methionine. So what that means is that every protein you translate has a methionine at one of the termini. Which terminus would it be at?
AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: Yeah, OK, so that's the details with methionine and the start codon. Then there are several stop codons. And I've shown you the three of them there. The stop codons tend to be used variously. Some are more predominant in some organisms than others. And some of them respond differently when the process of stopping occurs, which we'll talk about in a second. But the next important thing to notice about the genetic code is that it's what's called degenerate. Now, when you call something degenerate, it seems like sort of a really nasty thing to call it. And it doesn't mean, oh, it's so bad. It's just a degenerate. It just means the code is degenerate, but it's not ambiguous. What this means is that, for several amino acids, there is more than one codon that specifies them. And let's take a look at some of the examples of that degeneracy. It's at its most extreme with residues such as leucine, where there are six codons that specify leucine. Alanine has four different codons. They all specify alanine. Lysine has two codons. And it's generally appreciated that the residues that have more codons tend to be the more common ones in your messenger RNAs and in your final protein sequence, because you'll find, in your protein, a lot of leucines. So we need a few more codons to stack the deck in favor of putting in more leucines. So it's degenerate. But, what I mean by it's not ambiguous is, you know what each codon is going to code for. The other place where degenerate codons can become important is that there is species specificity. So I'm going to write that in here and explain what I mean. And what that means is that some organisms might prefer two or three of the degenerate codons, and others may prefer a couple of the others.
And what that means, once in the laboratory, is it's really annoying if we want to express, in a really convenient bacterial system, a protein that we've taken from a mammalian system. Our codon mix may not be the same. And so there are companies now that actually will fix the codons in a gene for you to make them compatible with a different organism for expression. So it has huge practical implications, to be quite honest. And it can be very annoying in the laboratory. OK, and then so codon usage varies amongst organisms. All right, so now, one last thing. So we've talked about the genetic code. It's the code that's going to be embedded within the messenger RNA. The last thing I want to do is, basically, explain to you one more time that, when you're looking to read which amino acid gets put in, you're going to look at the codon. And it will tell you exactly the amino acid. A long time ago, I used to be confused because I thought I should be looking at the anticodon. And I was trying to translate everything, and it was a real mess. But the genetic code in that box that I just showed you is written down for maximum clarity and ease of use. So, whenever you see a particular three-letter codon on the messenger, you will then be able to know what amino acid it would code. So this is kind of interesting. It just reinforces to you that the codon and the anticodon are antiparallel. And so, in this diagram, you would be reading from the 5 prime to the 3 prime end, as the transfer RNAs attach to the messenger RNA. And that would give you a codon here that would be AUC. If you'd written this the wrong way round, it would look like CUA. So you really want to be reading 5 prime to 3 prime in the codon. And then you can go to your favorite genetic code map, knowing it's read 5 prime to 3 prime. And the amino acid that gets put in is isoleucine, OK? So you'll need to be able to do that quite readily.
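The table lookups, the degeneracy counts, and the antiparallel codon-anticodon relationship just described can all be played with directly in Python. This is my own sketch, not the lecture's: the 64-letter string is simply the standard codon table flattened in UCAG order, with '*' marking the stop codons.

```python
from collections import Counter
from itertools import product

BASES = "UCAG"  # conventional row/column order of codon tables
# One-letter amino acids for all 64 codons in UCAG x UCAG x UCAG order.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AA)}

print(CODON_TABLE["AUG"])  # M: methionine, which doubles as the start codon
print(CODON_TABLE["AUC"])  # I: isoleucine, the example from the lecture

# Degeneracy: count how many codons specify each amino acid.
counts = Counter(CODON_TABLE.values())
print(counts["L"], counts["A"], counts["K"])  # 6 4 2, as in the lecture

# The anticodon pairs antiparallel, so it is the reverse complement.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}
def anticodon(codon):
    return "".join(PAIR[b] for b in reversed(codon))

print(anticodon("AUC"))  # GAU, read 5 prime to 3 prime
```

Reversing before complementing is what encodes the "read both strands 5 prime to 3 prime" convention the lecture keeps stressing.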
All right, next portion-- loading the amino acid. OK, so, as I said, this is a many building block, many parts thing. So we've got the transfer RNAs. We know that the amino acids get loaded onto the transfer RNAs, and we know where on the transfer RNA the amino acid goes. But we do not know how that is done. So I want to show you briefly how that occurs. So let me just start by drawing transfer RNAs the way I usually draw them, in a very sort of cartoon form. And, to attach an amino acid to the 3 prime end of the transfer RNA, you have an amino acid residue-- we're just going to go R here-- carboxylic acid, amine. Actually, I'm going to draw them in their appropriate charged states. And what I need to be able to do is faithfully fix this amino acid to the 3 prime OH of the transfer RNA. And we need adenosine triphosphate to activate this chemistry. And then the OH at the 3 prime end reacts with the amino acid. So I'm just going to draw that. I'm leaving out steps because, otherwise, it's too many, and you'll be cross with me. So you attach the amino acid through an ester to the 3 prime end of the transfer RNA. That's what's done. The ATP makes this chemistry feasible, but there's one more player here. And that's the enzyme that brings them all together, which is known as an aminoacyl-tRNA synthetase, meaning it's an enzyme that makes an aminoacyl-tRNA. It synthesizes it. So that's how its name is derived. So, with reference to an earlier question, what I'm showing you here are different synthetases for different amino acids, and they show you that there's a recognition not just for the amino acid that's being loaded, but, rather, for the entire transfer RNA. So some of these look quite different. The isoleucine one interacts in one way. The valine one is a little different. And the glutamine one is different again. So they vary in the way they interact with the transfer RNAs.
So the transfer RNAs are specific for the amino acid that is loaded onto them, but also for the synthetase that does the loading. That's how you get the specificity. Does that address your question from earlier? Hello? OK. That makes sense, right? So they fall into a lot of different families, but they're quite varied when you look at specific interactions between the synthetase and the amino acid. And, on those synthetases, there will also be specificity for the side chain of the amino acid that gets loaded. Now let's look at the ribosome components because they're the last big monsters. Ribosomes have a small and a large subunit. They are made up, as I mentioned over here, of RNA and protein, so small and large subunits. And I've shown them there in two different colors. The prokaryotic ribosomes are pretty different from the eukaryotic ones. There's a higher proportion of RNA in the prokaryotic ones than the eukaryotic ones, which is kind of interesting. And these complexes are so big, and they're made up of so many protein and RNA strands, that we don't so much measure them by the number of those components, but, rather, by what's known as their sedimentation coefficient, which gives us a sense of the weight of the module or the complex. So the small subunit would have a 30S sedimentation coefficient. The S stands for Svedberg units. It's how fast it sediments in an ultracentrifuge. And the large subunit would have a 50S sedimentation coefficient. And those correspond to a certain number of daltons. If you see that S term after a number, that's what it's talking about. It's talking about the sedimentation coefficient, which gives us a sense of the size of the complex. And, when you start to bring all the pieces together now, what we can see on this slide is the messenger, the transfer RNAs, and the ribosome, all to scale, in such a way that really explains it.
So what I show you here is the small and large subunit. In orange-- well, that's kind of a burnt orange-- is a sneaky little bit of the messenger. In yellow are the transfer RNAs. And there's one more unit on here that I won't describe too much. It's a protein factor that helps all the processes occur. Generally, it's thought to help the loaded tRNA come to the ribosome, get it in place, and then go away. So it's some of these extra helper proteins that are involved. OK, so let's build a protein because I know it's the moment we've all been waiting for, and we're going to walk through how those pieces come together. It's a very clunky animation, but I'm very proud of myself because I did it myself. So I'm just going to show you how these things happen, as you assemble a chunk of polypeptide chain from a messenger. And so here's the messenger. It's being read 5 prime to 3 prime. What happens, first of all, is that the small subunit kind of floats along, looking for the place that would be the ribosome binding site that I mentioned here, and then sliding its way to position the start codon in the right place to start the synthesis. Once that happens, the methionine that's on its tRNA, the start one, gets into place. And, at the same time, the large subunit, completing the ribosome complex, comes together. So you now have large and small ribosomal subunits stuck onto that messenger, ready to carry on. And, in each case, you're translocating through the messenger RNA. And, in each step, you're bringing in a tRNA that's loaded with an amino acid, where the anticodon of the tRNA-- and I've got those little letters shown on the bottom here-- is complementary to the codon that's within the messenger RNA. So we can start building this protein. AUG is that start codon. Methionine is the first amino acid. And it's always at the N-terminus. Then another amino acid comes in. The codon was UUU, and that corresponds to phenylalanine.
And then the next thing that happens is there's a movement such that a new bond gets formed-- a new amide bond between the methionine and the phenylalanine. And that new bond is an amide bond. It's not any of the others. So you're literally intercepting this complex on the transfer RNA with the amine of a new amino acid. That's how that comes together. I'm not going to worry you too much with the chemical details, but a lot of them have been illuminated by having the structure of the ribosome. And it shows you, in fact, that it's not proteins that are catalyzing that reaction. It's nucleic acids. So now I'm just going to move you on through the synthesis, in each case, building that polypeptide chain and then moving, translocating. This guy leaves. The ribosome moves along. This was a good Saturday afternoon's work. We've got to use up all those tRNAs. It doesn't get interesting until you get to the end. So we're using them, but now we come and hit a stop codon. We have UAG. So what happens here? The whole process slows down because you can either load what's known as a suppressor tRNA-- that's the one, the magenta one, that went up there-- or a protein release factor can come in and bind. But, in either case, there's no new amino acid to come in, and translation finishes and releases the protein in its complete form. Now the reason I want to differentiate between the release factor and an RNA that's complementary to the stop codon is because the RNAs that are known as the suppressor tRNAs have now been completely hijacked to make an enhanced genetic code where we can load lots of different amino acids using the suppressor tRNAs. And, if anyone would like to chat to me about this, I'd love to, because I think it's a fascinating field, but it's a bit beyond the scope of this class. OK, so this entire process takes the following: small and large subunits, all the elongation factors, and initiation and release factors.
And then, in each case, the energy is actually not provided by ATP. It's provided by GTP. But where ATP is important is in loading the amino acids onto the transfer RNAs. This occurs at about a rate of 20 amino acids per second, meaning you're reading about 60 bases per second, which is pretty consistent with the rate of transcription, not the rate of replication. That's far faster. OK, sometimes, when you're making a lot of a protein, you will see that ribosomes line up on the messenger RNA. And you'll have many proteins being made at once and at different stages in the game. And this is a lovely electron micrograph that actually shows this process in action. And, when you have a lot of ribosomes on one messenger, we call them polysomes. They have that name. OK, good, all right, so what I want to do now-- so does everyone feel good about how you translate a messenger into a protein and the various moving parts? You would always be given the amino acid structures and the genetic code. Just be familiar with reading it and making sure you could pick out which amino acid might be incorporated in response to which particular codon. So, when proteins are made on the ribosome, they have a bit of a choice. They can get made and fold beautifully into active proteins. Those proteins could be modified. They could go to different places in the cell. Occasionally, proteins misfold. Maybe the rate of synthesis is too fast, or the environment isn't right. So there will need to be mechanisms whereby proteins get degraded if they're not folded properly, but that's a story for another day. It's not always perfect, but what is known now is that, as proteins are emerging from the ribosome, they're starting to fold almost immediately from that N-terminus to, ultimately, attain their compact shape. The last thing I want to talk to you about today is what types of errors we get in translation.
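The initiation, elongation, and termination walkthrough above boils down to: find the first AUG, read triplets 5 prime to 3 prime, and release at a stop codon. Here is a toy translator in Python (my own sketch; real initiation uses the ribosome binding site rather than a bare string search, and `translate` is a hypothetical helper name):

```python
from itertools import product

BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

def translate(mrna):
    """Scan 5'->3' for the first AUG, then read triplets until a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return ""          # no start codon: nothing gets made
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":      # stop codon: release the finished chain
            break
        peptide.append(aa)
    return "".join(peptide)

# A short 5' UTR, then AUG UUU GGC UAA, then a poly-A tail.
print(translate("GGCACAUGUUUGGCUAAAAAA"))  # MFG
```

Note how the untranslated leader and the poly-A tail fall outside the peptide, just as the lecture described for the mature transcript.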
Now translation, like transcription and replication, has some editing mechanisms to fix errors, but, occasionally, there are errors in the DNA that put in different amino acids. And the editing at that stage, by the way, fixes errors where you've loaded the wrong amino acid onto a tRNA. But what happens when the DNA message, the DNA starting point, is wrong, which means the messenger is wrong, which then means we get a different translated protein? So I'm just going to give you a couple of terms here. On all these slides, I'm going to show you the double-stranded DNA. I'm going to show you the messenger that gets made and the protein that comes forward. And they're all lined up so you can follow them really nicely, always written 5 prime to 3 prime, except when we have double-stranded DNA, because we have to put the bottom strand in the other order. First of all, this entire chunk would be called the reading frame. It's the portion of DNA that's going to be read and transcribed into the messenger RNA. And, when you look at the two strands, you're going to have to figure out which one is the template strand. And you would figure that out by knowing that you read the template 3 prime to 5 prime, but that you transcribe 5 prime to 3 prime, right? So you could recognize which of the two strands you're going to make the messenger out of. Once you get the messenger, you come straight to the genetic code because you can use the genetic code to translate each of the codons. Does that make sense? So this would be a situation where you have a wild-type enzyme. Everything is transcribed and translated properly. Occasionally, though, there are errors that will introduce defects into the ultimate protein. The first type of error is a nonsense mutation, which might come from leaving out a base pair, inserting one, or substituting one. And this would, ultimately, cause an error in the DNA that then causes an error in the messenger RNA.
So let's say we delete by mistake, or we're missing, the C-G base pair. Then the messenger RNA is now different. And the bases slide up to fill the gap. So we have what's known as a frame-shift mutation. We've shifted things. And what we've done by doing that is introduce wrong codons. So the first codon was read properly. The next one was read properly, even though there was a mistake, but then, all of a sudden, we get into this mess where one of the later codons carries a mistake and becomes a stop codon. OK, does everybody see that? So that would be called a frame-shift mutation that introduces a nonsense mutation and puts a stop codon into your messenger RNA, OK? The next type of mistake is what's called a silent mutation. It doesn't matter, for example, if you made an error in the DNA, which ended up with an error in the messenger RNA, if you still code for the same amino acid, right? I go to the genetic code. I'm like, oh my goodness, I've got a mutation. Oh, it's fine. CCC codes for proline, but CCA also codes for proline. So that's called a silent mutation, OK? Then the last ones are the ones where we start to encounter errors in DNA that result in errors in proteins that may cause genetically inherited diseases. So let's take a look at that. Here it's a missense mutation, where we've got an error in the DNA, which has resulted in an error in the messenger RNA. So, instead of leucine, we've put in valine. That's not so bad, because it's what we would call a conservative substitution. They're sort of similar amino acids. They have similar personalities. So this is a missense mutation, but it doesn't cause any dramatic changes, probably, in the protein. And then the last one I want to show you is a missense mutation that is non-conservative and causes a serious defect. And this takes you back to the beginning of the class, where we've put a mistake in the sequence that changes a glycine to an arginine.
And that's a big change. And I just want to remind you of the situation in hemoglobin, when we had a missense mutation and we incorporated a valine instead of a glutamic acid, just through one change in the DNA, which made one change in the messenger, which put a drastic change in the protein that caused sickle cell anemia. So missense mutations are where you put in the wrong amino acids. And those are the ones where you end up, in a lot of cases, with inherited diseases. Nonsense mutations are often not so bad, because you just end up with a truncated protein, which would be degraded. The missense mutations are the more serious ones, because you end up with a full-length protein that might have a mistake in it. And then that would affect the function. Am I being clear enough to everyone? Yeah? Good. OK, I am going to tell you that I'm handing over the baton to my colleague, Professor Martin. He'll take over on Monday. Mouse is pretty happy. He's pretty excited about genetics. And these will be the lectures that will occur. What? You haven't seen him, have you? He's keen on genetics, yeah. AUDIENCE: Great. BARBARA IMPERIALI: OK, and that's it. Don't forget my office hours on Monday if you need them. I think this field is fascinating. Once you get used to the mechanics of it, it's really cool to think of how you go from DNA to RNA to folded proteins.
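The mutation types from the last few minutes (silent, missense, and a frame shift that creates a premature stop codon) can be played with in a few lines. This is a toy: the codon table is a small, real subset of the genetic code, and the nine-base message is invented, though the missense case echoes the leucine-to-valine swap mentioned above.

```python
# Toy codon table: a small, real subset of the genetic code.
CODON = {
    "AUG": "Met", "CUG": "Leu", "CUA": "Leu", "GUG": "Val",
    "AGG": "Arg", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read codons 5'->3'; record STOP and quit if a stop codon appears."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON.get(mrna[i:i + 3], "???")
        protein.append(amino_acid)
        if amino_acid == "STOP":
            break
    return protein

wild_type = "AUGCUGAGG"
print(translate(wild_type))    # ['Met', 'Leu', 'Arg']

# Silent: CUG -> CUA still codes leucine, so the protein is unchanged.
print(translate("AUGCUAAGG"))  # ['Met', 'Leu', 'Arg']

# Missense: CUG (Leu) -> GUG (Val), a leucine-to-valine substitution.
print(translate("AUGGUGAGG"))  # ['Met', 'Val', 'Arg']

# Frame shift: deleting the C at position 4 slides every later base over,
# and here the shifted frame immediately hits the stop codon UGA.
print(translate("AUGUGAGG"))   # ['Met', 'STOP']
```

The frame-shift case shows why a single deleted base can be so destructive: every downstream codon changes at once, and a stop codon can appear where none existed before.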
MIT 7.016 Introductory Biology, Fall 2018
Lecture 32: Infectious Disease, Viruses and Bacteria
PROFESSOR: OK. Let's get going here. So this week I'll be talking about bacteria and viruses. And these are really significant topics, because I think we often don't appreciate the magnitude of the problems and what kind of crises we're approaching with respect to the therapeutic treatment of infectious disease. So what I want to try and get home to you this week is the variety of different microorganisms that threaten our health, and to talk to you about the sorts of issues that are really prominent in the news concerning resistance to therapeutic agents. But in order to do that, we've got to meet some bacteria, meet some viruses, and understand some of their lifestyles, their mechanisms, so that we can understand what kinds of agents are used and developed to try to mitigate these diseases, because it's only through a molecular, mechanistic understanding of the life cycles of viruses and bacteria that we can understand how many of these therapeutic agents work and what may be happening in resistance development. Now, I find this particular slide a little daunting, but I want to point out to you that it concerns the world's deadliest animals. So we worry a lot about tigers, and sharks, and things like that-- nasty poisonous snakes, bites from dogs with rabies, and so on. I'm going to leave this black bar here sort of unmentioned. I don't know what year this is, but if we talk about daunting, that's pretty serious. And then the biggest killer on this screen is the mosquito. But it isn't actually the mosquito, it's the protozoal microorganisms that the mosquito carries from one person to another that really make that such a serious consideration. But what's not here are all the bacteria and viruses that actually are far more serious. And the numbers on the next slide will show you just how shocking these numbers are.
If you're interested in infectious disease as a field-- and for anyone going towards an MD or MD/PhD, infectious disease really is a critical area-- it's something we have to get to grips with. There are not enough vaccines in the world. There is not enough treatment with microbe-specific anti-infective agents. So I encourage you to look at the CDC. There are a few other places where there's loads of information collated, such as the NIAID, which is the NIH's National Institute of Allergy and Infectious Diseases, and the World Health Organization. So there's lots of places where you can find stuff out. So what we're going to be talking about in the next three classes are our smallest enemies: things like bacteria, and fungal infections from things like yeast or Aspergillus, which cause candidiasis and aspergillosis. Protozoal disease we won't mention, but those are the types of diseases that are carried by things like ticks, mosquitoes, and tsetse flies. We think of those as the infectious agent, but it's really what those organisms carry and spread that's important there. And we won't talk about prion diseases either, which are the diseases that don't involve an infectious microorganism, but are believed to be spread from protein to protein through the nucleation of new prions from existing prions. What we'll focus on in the first class is bacteria, and in the other two on viruses, with an eye to looking at antibiotics and antiviral agents, how they work, and where they go wrong. And this is where the numbers get fairly shocking. So, for example, bacterial infections of the lower respiratory tract-- that's deep in the lungs-- cause 4 million deaths a year. Think back to the numbers you just saw on that first slide. These are things like strep pneumoniae and Klebsiella pneumoniae. They're called pneumonias because they're infectious diseases of the lung, but the organisms that cause them are of the Streptococcus and Klebsiella genera, and Staphylococcus aureus specifically.
But there are others that cause lung infections and lower respiratory disease. These are particularly troublesome in areas where the atmosphere is bad. In big cities where there's a lot of insult from emissions and such that make the lungs weaker, these sorts of organisms can take hold more readily, so they are more serious. There are many, many microorganisms that cause pneumonias. And sometimes it's a real problem to track down the precise microorganism, which makes the issue of treatment really difficult, really challenging. So I'm going to talk in a minute about absolute identification of infectious agents, so we can do a better job of specifically targeting the causative agents. Diarrheal disease-- 2 million deaths. These are organisms like Campylobacter jejuni and Salmonella enterica. We tend to have these crises because, say, romaine lettuce is contaminated with the infectious agent. There are very few deaths in the developed world. We get onto it very quickly, say stop eating romaine lettuce until we figure out what's going on here-- very, very few. But once again, in the developing world, these can run rampant. And they can grab small children and older people who are already compromised, whose immune systems aren't quite as strong, and people generally die of dehydration, because these diseases really hit the GI tract. They cause leakiness in the GI tract and really, really serious diarrheal disease. So those are the bad boys there. But once again, there are many others. Tuberculosis is yet another really serious infectious disease, caused by Mycobacterium tuberculosis-- that's the main one of the mycobacteria that is a threat. It used to be called consumption in the old days, because people almost looked consumed by the disease. They would just get thinner and thinner. Literally it was a wasting disease.
People would be sent up into the mountains of Switzerland to try to recover from consumption, where the air is clearer and cleaner, in the hope that they could recuperate. But TB-- look at these numbers. In 2015 there were almost 10 million new cases. There are about 1.2 million deaths from TB. A serious situation with TB is that it's often found co-infecting with the HIV virus, where you just can't fight the TB. So eventually, if you're infected with the HIV virus, it's the TB that gets you, due to the immune weakening caused by the HIV infection. So these numbers are shocking in light of the numbers I showed you on the previous slide, right. Look at these numbers if you go to snakes and things like that. They're meaningless numbers compared to infectious diseases. So now-- and I'm going to talk to you about the origins of this-- many, many infectious agents that we thought we had conquered, we thought we could take care of: you just take this course of antibiotics and you're off, you're set. But now, because of the rapid mutation rates in bacteria and viruses, certain pathogens have completely worked out mechanisms to escape therapeutic agents. And I'm going to talk to you about those mechanisms towards the end of this class. So basically you can dose a person one day with a normal dose of an antibiotic agent, and then 10 months later that normal dose, or 10 times or 100 times that dose, stops working. Why is that? It's due to resistance acquisition, due to rapid cell division and mistakes made in replication and transcription that may, one in a million times, confer an advantage on the microorganism. All of a sudden the drugs don't work anymore. The WHO and various community notice boards call this set of infectious agents the ESKAPE pathogens. It helps us remember which ones these are, because these are pathogens that escape treatment, because they've developed resistance to multiple drug cocktails.
So commonly, when someone has a particular disease, they don't take one drug, they take two or three, to hit lots of pathways at once in the hope that resistance won't develop fast. But the ESKAPE pathogens have collectively acquired resistance to several antibiotics, meaning there's no good treatment. So the letters of ESKAPE stand for Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and some of the Enterobacter species. Some of these infectious agents are what result from-- I always say this wrong-- nosocomial infections. Does anyone know what those are? These are infections that people get in hospitals. So Tom Brady had a knee operation. He got an infection in his knee that came from the surgery, right. These are hospital-acquired infections, because you sometimes can't clear an area enough, and there are infectious agents around. So Acinetobacter baumannii was dubbed the Iraqi Bug for many, many years, because the vets coming back from Iraq were going to military hospitals, and these were abundant with cases of Acinetobacter baumannii. So that moved onto the ESKAPE pathogen list. So these are things to watch out for. It's the reason that nosocomial infections-- I hope I'm saying it right, otherwise you're going to go off and Google it and realize I said it wrong-- it's the reason why old-school physicians wore bow ties and not ties. Can you imagine why? So if you're wearing a tie, which I seldom wear to be honest, and you're working over a patient, the tie can be the thing that carries the infection, because it gets closer to infected areas. This is old-school stuff. And so originally the physicians wore bow ties in order to distinguish themselves as important people, but not to wear ties that might carry infectious agents. That's sort of a scary thing. So with all this said, let me just lead in to talking about bacteria, antibiotics, and resistance development.
So we often name bacteria somewhat by their shape. The round ones are cocci. The rod-shaped ones-- come on, one of you. Bacilli. I had a blank moment. So the rod-shaped ones are bacilli. And then there are some others that have a different morphology, like Campylobacter jejuni, that have kind of a corkscrew shape. And that's thought to be important in their motility, digging through the mucous layers and the epithelial layers. So here I show you several shapes of bacteria. And I'm just going to, once again, reinforce what diseases some are associated with, and some other diseases that you might be surprised by. So yes, we know about Salmonella and E. coli and food poisoning. But Helicobacter pylori, which is one of these flagellated bacteria, can infect the stomach. It's often the cause of ulcers. So it's a causative agent of stomach ulcers, but that has in turn led to a considerable risk factor in stomach cancer. So what we thought was just an infection causes a constellation of other problems, including cancers. And more and more microbial agents are now associated with cancers, in particular the viruses. Neisseria-- these come along with the sexually transmitted diseases such as gonorrhea. Neisseria meningitidis is the one that causes meningitis. It is a very, very often fatal infection of the meninges. Staph aureus-- lots of infections around the body, just gruesome things like cellulitis, wound infections, toxic shock. Streptococcal bacteria I've already mentioned-- the pneumonias-- and then Campylobacter. And now another complicating factor of infection-- so I talked to you about stomach ulcers and stomach cancer. Another thing that seems to come along with infections is autoimmunity. So in the last section of the class you heard about immunity, and you also heard about tolerance-- that we don't react to things that are ourselves, otherwise we'd be in deep trouble.
Autoimmunity can suddenly pop up from certain bacterial infections, because bacteria tend to cloak themselves with unusual sugar polymers and other kinds of structures that the body doesn't really know what to do with. And in some cases they kind of mimic things that are in the human body. So they are mimetics of normal structures in the human body. And the body just doesn't notice them at all. And then there are incidences where certain bacterial infections later on cause autoimmune disease. So a bacterium may come along. It may have something that looks kind of like something human, but not quite. The human body responds, develops antibodies, and then those antibodies cross-react back with aspects of our physiology. So Campylobacter jejuni is often a contaminant in poultry. It's a severe GI infection. But later on people get diseases such as Guillain-Barré, which is a neuropathy where the ends of your limbs become numb and non-functional. So there was a famous football player, the one they called "The Refrigerator," who had a serious case of Guillain-Barré resulting very much from an infectious disease, which converted into autoimmunity. So let's now look at antibiotic targets. And to look at antibiotic targets, I think the first clear place to look is at the bacterial cell wall. Now, when we first started talking about prokaryotes, things that include bacteria, we talked about the fact that these single-celled organisms have to have a robust cell wall to prevent osmotic shock. They have to have some kind of thing to keep them from taking up too much water and basically exploding because of osmosis. Water floods in to balance the salt concentrations. So they have a complex cell wall, which is made of a macromolecule called peptidoglycan. And it's usually one word, but I want to just underline peptidoglycan, because it's a fascinating polymer that's made up of peptides and linear carbohydrate polymers.
So if you look at this typical bacterium, this is just a cartoon of the peptidoglycan. So it's a cross-linked polymer, where in one direction it has repeating carbohydrate units. I'm not drawing those complex hexose structures there. I'm just drawing it in cartoon form. And those are carbohydrates known as NAG and NAM. NAG is N-acetylglucosamine. It's a hexose sugar. NAM is N-acetylmuramic acid. It's another modified sugar. And on one of those sugars, there is a reactive site that allows you to basically cross-link these polymers into a meshwork. So it's a feat of engineering to build this amazing polymer. It starts being built on the inside, in the cytoplasm. And then the components get flipped onto the other side of the cytoplasmic membrane of the bacteria. Then they get polymerized in place to make this complex meshwork of a polymer that creates the rigidity of the bacterial cell wall. It's generically known as a peptidoglycan. Different bacteria have different peptidoglycans. There are several modifications that might be specific to particular bacterial serotypes. But this is the generic structure, where you have a polymer that's built of sugars. You can recognize the sugar structure there going in one direction, and the peptide component that cross-links across, in order to make this mesh. And bacterial walls have different amounts of this, but it builds up to a really strong, rigid meshwork that's permeable to things, small molecules and water. There are holes, and so on. But it creates a mechanical rigidity so that osmotic shock doesn't occur in the bacteria. Any questions about that? Does that make sense? So, in a sense, it's their exoskeleton, if you want to think about it like that. So the properties are rigid. Without it, the bacteria would suffer osmotic shock. And it's plenty permeable, with 2-nanometer-type pores, to allow nutrients and water to go into the structure. [VIDEO PLAYBACK] - --have E. coli growing here. And it's living.
You can see it start to grow. Here we add penicillin. We're going to see these bacteria-- PROFESSOR: These are bacteria, rod-shaped bacteria. - There wasn't any microphone on this, so-- PROFESSOR: And I'm going to ask you to just keep watching this kind of carefully. - There goes another one, boop, boop. [LAUGHTER] - Poking holes in the cell wall, boom, bacteria is dead. PROFESSOR: Look at some of the bacteria disappearing. All right, I guess [INTERPOSING VOICES] OK, then we're going to-- [INTERPOSING VOICES] So we're going to leave it. [END PLAYBACK] Let's go back one. OK, now what was that? OK, so I've told you bacteria would suffer osmotic shock without peptidoglycan. Those are bacteria that you see popping, as the person who was talking said, because the peptidoglycan cannot be made. There is an antibiotic that's added-- it is penicillin that's added to the bacteria. As bacteria grow, they have to make a bunch more peptidoglycan, because if you're doubling, you've got to double the amount of peptidoglycan. If you have something that inhibits the peptidoglycan being made, you have a bacterium that's trying to stretch out what it has, and it's not resistant to osmotic shock. And what you saw was the bacteria basically undergoing cell death via osmotic shock-- pretty graphic, pretty visual. So penicillin was one of the first antibiotics that was described for the treatment of bacterial infections. And we'll go to the timeline of that in a moment. So when we talk about bacteria, the original classification of bacteria is into three different subtypes: gram-negative, gram-positive, and mycobacterial. This is actually the first way that people would take a look at the bacterial cells and diagnose roughly what kind of bacteria they were. Which of these broad families did they fall into? Because it would help in defining how you would treat the infectious disease.
So I want to show you the difference between the cell walls of these various types of bacteria. And the truth is, if you have an infectious disease, your wish is, if you had to pick one of the three, that you have a gram-positive disease. And I'll explain why that is in a moment, because it's all to do with how drugs can get into the bacteria to inhibit vital functions, in order that they die and don't take over your system. So let's look first at gram-positive bacteria. They're shown here. This is a section of a bacterium. Gram-positives have a single cell wall. And they also have a thick layer of peptidoglycan. So they gain rigidity by basically having an extracellular thick layer of peptidoglycan coating them. There is a schematic of it here. So here would be the inner cell membrane. And here would be the peptidoglycan, shown in orange and pale, buff-colored circles. So that would be where their peptidoglycan is. And then there are some other glycoconjugates that actually stick out beyond that. But there is only one cytoplasmic membrane-- that's the standard lipid bilayer. And the peptidoglycan is quite thick, relatively, 20 to 80 nanometers across. So that's how wide it is. And if you've stained a bacterium under a microscope, you would see that-- the thickness of that wall, but the absence of a double wall. The gram-negative bacteria have a double wall. The inner membrane is pretty standard. It's just typical phospholipids. It looks like the inner cytoplasmic membrane of the gram-positive bacteria. And then it has an outer wall. So the inner membrane is typical. And then the outer wall has one leaflet that looks kind of normal. And then it has a second leaflet that's decorated, honestly, like a Christmas tree. There are all kinds of things sticking out there that interact with the hosts that they infect, and so on. And the space between the two walls is called the periplasmic space, because it's between. It's not the cytoplasm.
It's what's called the periplasm. Now, what's interesting about the gram-negative bacteria is they have quite a bit less peptidoglycan, only about 7 to 8 nanometers. So that's pretty interesting. But they sort of gain robustness from that second wall structure coating the outside. Now, the challenge with gram-negative bacteria relative to gram-positive bacteria is that any drugs you develop, if they're targeted at intracellular sites, have to get through two walls, not just one wall. So they are harder to treat. And they also have a lot of characteristics that make them more prone to resistance development. So I want to point out to you, on this electron micrograph, you can actually see the double wall-- a dark band, a space, and then another dark band-- whereas here you see a thin single wall, but you see a lot of junk on the outside. Is everyone seeing the differences, just looking at them? OK, so what's this gram thing about? What does this stand for? It simply refers to a chemical dye that stains peptidoglycan. And the stain was developed by Professor Gram. That was his name. So when someone says you've got a gram-positive infection or a gram-negative infection, it's how those cells look when they've been treated with this stain. Gram-positives show up very positive to the stain, because there is a lot of peptidoglycan on the outside that absorbs the dye and shows a strong color. The gram-negatives don't show very well with a Gram stain, because the peptidoglycan is tucked in the periplasm, not on the outside of the cell. So if someone does a quick check on a bacterial streak or an infection that you have, they might treat it with the Gram stain and say gram-positive or gram-negative just based on that simple color analysis. And so in one case, the peptidoglycan is abundant and accessible. In the other case, it's very, very much thinner and less accessible to the dyes.
Now, this probably looks like stone-age stuff to you, because how much can you learn from these simple colorimetric stains? We're certainly moving in very, very different directions. But let me just finish off with the third type of bacteria, the mycobacteria, which include Mycobacterium tuberculosis. And they have a different kind of wall, again. And they're pretty unusual. And they are really, really hard to treat, because it's almost impossible to get therapeutic agents into mycobacteria. I used to work on a team with Novartis in Singapore. And they said doing anything with mycobacteria was like trying to do biochemistry on a wax candle, literally. You just can't work with it, because they have a thick additional wall that's kind of different again. Did you have a question? No. Sorry, I thought I saw your hand up. So what they have is a typical cell wall, then some peptidoglycan, but then they have this thick mycobacterial layer which comprises what are known as mycolic acids, which basically add a thick layer of greasy, hydrophobic material on the outside of the mycobacteria that's pretty impenetrable. The cell wall is quite different. It doesn't have an outer coat. It's like gram-positives in that respect. But it doesn't stain very strongly. So it gives what's known as a weak Gram stain. So sometimes, if you've got something that gives a so-so response to the Gram stain, you might say, oh, it looks like a mycobacterium because of what's happening. Now, mycobacterial TB is a huge threat, because its current treatment-- and it's the same treatment that's been around for, like, 30 years or something-- is a treatment with four different antibacterial agents that hit a bunch of different sites in the life cycle of the bacteria. It includes the compounds shown here, which are isoniazid, rifampicin, ethambutol, and pyrazinamide. And it's a six-month treatment with those medications-- so a handful, four different medications, for six months.
So what they were realizing in the developing world is that there was terrible compliance. The drugs are cheap, but there was no compliance. People just were not taking the pills, because they're like, I'm tired of taking these pills every day for six months. So what was developed was what's known as the DOTS program. Has anyone heard of this? Is anyone interested in infectious disease? It was a social system set up in order to make sure people took these drugs every day for six months, in order to comply. So social workers would go to the villages in remote areas and watch people take the medications. So it's directly observed treatment, to make sure they followed through, because if they had regular TB, not very resistant TB, you could overcome it, provided that you took these medications. But still, it's a hugely debilitating thing to have to deal with these treatments. Now, there are two resistant strains of TB. One is called MDR-TB. And the other one's called XDR-TB. You'll occasionally hear of these on TV programs. MDR is resistant to three of the four medications. And XDR, which stands for extensively drug-resistant, is resistant to every single one of those medications. New medications, with different mechanisms of action, are sorely needed. All right, this is just what things look like with the Gram stains. So here you see gram-positive Bacillus anthracis. That's the deep purple rods. You know it's a gram-positive because it's a deep purple stain. The other cells in this picture are white cells. So you can really pick out the gram-positives. This is the structure of the chemical dye that stains peptidoglycan by absorbing into the peptidoglycan. It's a very physical interaction of the dye with the polymer. And over on this slide, it's a mixture of gram-positive and gram-negative. And you can pick out the gram-positives and differentiate them from the gram-negatives, which just stain sort of weakly pink.
And then mycobacteria, which are formally gram-positive, don't stain very well because of that thick mycolic acid hydrophobic wall. So what would you do nowadays? Would you pull out a stain and drop it on bacteria and get some vague response? What's open to you now in the 21st century? You have a tiny sample of a bacterium. Grow it up. What would you do? You could tell exactly what it is. AUDIENCE: PCR. PROFESSOR: Yeah, you'd PCR up the genomic DNA and then go match it, because, in addition to the human genome, there are thousands of pathogenic bacterial sequences that are completely annotated, known. The [INAUDIBLE] has a massive compilation of these sequences. And you just go and find out what the bacterium is based on the sequence. So now, with rapid sequencing efforts, maybe there are just a few key places in a genome that you would go to and just do a really fast array, and figure out what's there and which bacterium it is, which gives you a much better clue as to how to treat it than the vague, ambiguous stains. So even though stains keep going, there are now other ways. Unfortunately, not everyone has the instrumentation to do rapid sequencing. So nowadays there is a lot, lot, lot of interest in faster dipstick sorts of tests that can distinguish between different bacterial strains by, for example, interrogating that coat of glycoconjugates that's on the outside of the bacteria-- dipstick paper tests that can give you an idea of what organism and what serotype, so you can move forward and do a much more rational treatment of those organisms. OK, let's see what's-- yes. All right, so where did the antibiotics first come from? Any questions so far? OK, so where did the first antibiotics come from? From a couple of accidental discoveries. Who has heard of the Fleming experiment? Who knows about that discovery of penicillin?
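The PCR-and-match idea above, amplify DNA from the isolate and compare it against annotated reference genomes, can be caricatured in a few lines. This is only a sketch of the principle: the strain names and sequences below are invented, and real identification pipelines query curated databases of full, annotated pathogen genomes rather than scoring a couple of short strings.

```python
# Toy sequence-based identification: score a sampled DNA fragment against
# reference sequences by counting shared k-mers. The reference strains and
# all sequences here are hypothetical, for illustration only.

def kmers(seq, k=4):
    """The set of all length-k subsequences of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

REFERENCES = {  # hypothetical annotated reference fragments
    "strain_A": "ATGGCGTACGTTAGCCGATTAGC",
    "strain_B": "TTACGGATCCGGTAAACTGGTCA",
}

def identify(sample, references, k=4):
    """Return the best-matching reference and all the match scores."""
    scores = {name: len(kmers(sample, k) & kmers(ref, k))
              for name, ref in references.items()}
    best = max(scores, key=scores.get)
    return best, scores

sample = "GCGTACGTTAGCCGAT"  # fragment copied out of strain_A
best, scores = identify(sample, REFERENCES)
print(best)  # strain_A
```

The k-mer trick is the same basic move fast classifiers make: exact substring sharing is cheap to compute and already separates unrelated sequences well, which is why a genome-wide version of this scales to thousands of reference pathogens.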
Yeah, so there was an original observation that predated that, which sort of suggests that Pasteur was a pretty smart guy, because he contributed in a lot of different areas. He discovered that some bacteria tend to release substances that kill other bacteria. That was in the 1870s. Then later on, there was another advance in antibacterial agents. And it came with the discovery that arsenic derivatives actually showed some value in treating the organism that causes syphilis. So talk about the cure being worse than the infection. People were being treated, seriously, with these arsenic derivatives in the hope of wiping out the infectious agent that caused syphilis. But you know, sometimes it was a mixed bag. But where things started to get a lot more interesting was that in 1928, there was this famous historic story of Fleming discovering that some bacteria seemed to be inhibited by a particular agent that came from a fungus. And this was the origin of penicillin. So he would have a Petri dish where he was growing bacteria. And he noticed that in some of his samples, there was inhibition of bacterial growth due to an exogenous agent that had somehow contaminated the plates. The mold-- mold is the fungus-- actually inhibited the growth of staphylococcus bacteria. And the substance was named penicillin. And then a lot more time went by. But in the 1940s, the active ingredient was isolated. So the 1940s is slap bang about, I would say, a couple of years into the Second World War. And they were able to mobilize the production of this agent. Towards the later end of the war, people had penicillin available to them. And it's pretty well believed that, if it wasn't for the antibiotic agents that emerged-- you know, the war ended in 1945.
If it wasn't for those agents that emerged, there would have been way, way, way more deaths from the war. As it was, there were way too many. So penicillin was the first antibiotic that was discovered with a discrete mechanism of action. And it was discovered at a very, very important time. So that was all great news. Penicillin was produced widely. Some of you may be allergic to penicillin. There are other options nowadays. But it's the cheapest and most viable of the first-line antibiotics. Here we go. And this thing, this pointer has a mind of its own. It sort of changed its mind. But the problem was the bacterial species started to survive treatment due to development of resistance. And all of a sudden, something that worked really well wasn't working anymore. So let's try and think about peptidoglycan, what penicillin looks like and what it does, and how penicillin resistance emerges. Those are the three things I'm going to cover here. OK, so what does penicillin do? Penicillin stops the formation of this big macromolecular peptidoglycan polymer by stopping the last cross-link, stopping the chemistry that happens to join the peptide chains to make a cross-linked polymer. And anyone who is in the mechanical engineering area will know that polymers that are just strands are much weaker than polymers that are cross-linked structures, which have tensile strength in both directions. So the uncross-linked peptidoglycan was weak. And what penicillin specifically did was inhibit forming that cross-link. What does penicillin look like? Here it is. It's a cool structure. It's what's known as a natural product, five ring, four ring, an interesting structure. And what it would do is it would interact with the enzyme that cross-linked the peptidoglycan and basically stop it dead in its tracks. What did the bacteria do? The key part of this structure is this four-membered ring with an amide bond in it. 
The bacteria evolved an enzyme to chop it open, basically making it completely inactive. So beta lactamase was evolved in the bacterial populations. It was probably derived from some other enzyme that did some useful function, but not targeted to the penicillins. But the bacteria started to survive because they made a ton of an enzyme called beta lactamase. And then it completely stopped working. So the chemists came up with other options, because they said, well, you know, if that doesn't work, we've got other antibiotics in our arsenal. And there is a compound that was used for years as a last-resort antibiotic known as vancomycin. It was very, very important for very serious infections, and really reserved for that use. And they thought that vancomycin might be a drug that just couldn't be defeated. This big molecule here is vancomycin. This little piece of peptide is actually the peptide that's in that cross-link. And vancomycin basically, like a glove, sat on that piece of peptide and stopped it being cross-linked. And what did the bacteria do? They evolved a set of enzymes to completely change that little piece of peptide into something that bound more poorly, giving you resistance to vancomycin as well. So when there is one drug involved, it's pretty easy to get resistance quite quickly. You just mutate one enzyme and you get a resistant strain. And the enzyme that can beat the antibiotic will win. If you've got a compound that takes five different enzymes or an antibiotic that has a very complex mechanism of action, you might say, well, this is never going to be defeated. It took five additional enzymes to evolve to make the peptidoglycan a different structure. And it's not that within every bacterium, you mutate five different enzymes and get them all working as a team. What was happening in these infections is that a plasmid with the set of enzymes was being passed around amongst bacteria. 
So a new bacterium could acquire resistance to this compound without evolving a whole bunch of new enzymes, but rather by lateral transfer of plasmids encoding the genes that it took to make the vancomycin inactive. All right, so let me just tell you a few of the targets. And then there is one movie I want to show you that's kind of cool. So currently, when we inhibit bacteria with antibiotics, there are a number of essential processes that are targeted with common antibiotics. So this would be a typical bacterium. One target of action is DNA synthesis and DNA polymerase. And the enzyme that is targeted is one we've talked about, topoisomerase. And that is inhibited by the fluoroquinolones such as ciprofloxacin that actually target specifically the bacterial topoisomerase. So that's one way, inhibit DNA replication, bacteria can't divide. Another set of antibiotics are those that inhibit protein synthesis. So in particular, you know the tunnel that comes out of the ribosome where the growing polypeptide chain emerges after reading the messenger RNA and translating the messenger into protein? There are antibiotics that basically stick in that tunnel and stop protein synthesis. And those are things like the macrolides. And they block exit from the ribosome. But you could imagine that mutating. There are the ones that inhibit cell wall biosynthesis that I've already talked about, the penicillins, the vancomycin. And then there are others that inhibit folate synthesis. And then there is a lot of synthetic drugs, but also a lot of natural product drugs. So both nature and chemistry have teamed up to inhibit all of these essential steps. OK, so how do you test for antibiotic resistance? You use plates where you're growing particular strains of bacteria on a plate. This would be a colony. And it's growing outwards. Where there is a colony but there is no growth around it, it means there is something in that plate that is inhibiting bacterial growth. 
So these are very clear types of ways that people check to see if bacteria have become resistant to drugs. You would look for that zone of inhibition. Does it disappear with some of the resistant strains, for example? And these get pretty sophisticated now where you can test a bunch of antibiotics in one go, where each of these colored dots represents an area where there is treatment with one antibiotic or another. So what's the problem? The problem is this graph, that as soon as an antibiotic is introduced, just a few years go by. And there is resistance to that antibiotic. So resistance basically is the gradual acquisition of machinery to somehow inactivate the antibiotic treatment. So if you take a look, here on the top is where the drug is introduced. And on the bottom is when resistance was developed. So let's go to something we're familiar with. Here is penicillin, introduced about 1940 to the general population. By about '47, there was resistance to penicillin. And you can see, this is really just a really serious sort of series of events. So what I want to show you was resistance in action. And that'll be the last thing I talk about today, because I just want to give you a feel for what does resistance look like. So this was an experiment that was done at Harvard on just a visualization of resistance development. I think what's so fascinating is you could then go back to the plate and pluck the first pioneers who crossed that line and find out what that was. What was that mutation that let the population expand, and so on? So you could really map out the entire evolution of very, very strong resistance. So in the next class, I'll talk to you about resistance mechanisms. And then we'll talk about viruses and resistance to antivirals.
MIT_7016_Introductory_Biology_Fall_2018
13_Genetics_2_Rules_of_Inheritance.txt
ADAM MARTIN: So today, we're going to continue with genetics. And we're going to talk about the rules or laws of inheritance. I expect that many, if not all of you have been exposed to these rules before. And so what I really want to do today is make the connection between these laws and the behavior of chromosomes, such that when you're thinking about genetics and inheritance pattern you're thinking about chromosomes undergoing meiosis. And I wanted to start just to make the point that there are a number of human traits and human diseases that show clear inheritance patterns. And what's shown up here is what's known as a pedigree. So this is a pedigree. Let's block this off. I'm showing you a pedigree, which shows relationships and a family tree. And so what you can see-- what these symbols denote are, you have an open box. That is an unaffected male. So open box is an unaffected male. And what I mean by unaffected is usually it means they don't have the disease that we're talking about. Circles represent females. And so an open circle would be an unaffected female. And finally, if you have one of these symbols, either male or female, and it's shaded in, that, by convention, represents an affected individual. So that's an individual that has the disease or trait. So this is an affected individual. So this is an example of a disease that you've already heard about from Professor Imperiali. This is a family tree that's indicative of a type of inheritance that's seen in families with a disease called phenylketonuria. And you'll remember from Professor Imperiali's lecture, this is a disease where there's a mutation or allele in which there is a defective enzyme that can't process the amino acid phenylalanine. And so patients with this disorder have to be very careful about their diet such that they don't intake too much phenylalanine. And what you see about this trait or disease is that it's skipping multiple generations. And then it manifests itself down here. 
I'm not getting any arrow. So I'm just going to use this pointer here. So here you see there are several individuals. There's a male and a female with the disease. And it results from a relationship between two first cousins. So only in this case does the disease sort of appear. And you can think of this as a recessive trait. In this case, it's autosomal recessive. So for PKU, this is exhibiting what's a type of inheritance known as autosomal recessive. And the reason that it's recessive is because if an individual just has one copy of a functional enzyme, then they don't have the disease. So you can see that it's only individuals that have both defective versions, which are labeled lowercase a here, that exhibit the disease. So in the case of PKU, lowercase a represents the defective enzyme. And uppercase A denotes a functional enzyme. Now, it's not shown here, but it's possible that an ancestor sort of above this parental generation also exhibited this disease, such that there'd be a sign that this is being inherited across generations. So that's not as obvious in this disorder, because it's a very rare genetic disorder. But more common disorders, such as color blindness, show a more clear inheritance from generation to generation. So is anyone here colorblind? I ask just to know how to set up my slides as well. No one's colorblind. OK, that's good. Then you'll all see the difference between the image on the left and the image on the right. So if you have normal vision, that's what you see on the left from this fruit stand. But for those that are missing the red photopigment in your cone cells, you exhibit color blindness. And this fruit stand would look like the image on the right. So this is a clear example of an inherited trait in humans. And this is an example of an inheritance pattern that would be similar to human colorblindness. And here you can see a clear example where you have an affected individual, an affected male here. 
That male has five daughters, none of whom exhibit the phenotype or disease. But several of those daughters give rise to progeny, sons, that have the disease. So in this case, you see this trait skips a generation. But essentially, the grandfather here has passed on the trait to his grandsons. And so colorblindness is a little bit different from PKU, not only in the fact that it's more frequent-- so about 10% of males exhibit colorblindness in the population. But also you see with PKU you had both a female and a male affected. And in this case, you're seeing a preponderance of affected males, which seems to be not random. And so this colorblindness exhibits a different type of inheritance pattern, which is known as sex-linked recessive. And I'm going to come back to this inheritance pattern at the end of the lecture. Because it's actually this type of inheritance pattern which helped researchers about 100 years ago make the connection between the unit of heredity, the gene, and chromosomes. So we'll talk about another example of this type of inheritance pattern at the end. So for today, what we're going to talk about is we're going to start with some of the basic laws of autosomal inheritance. And so we're going to talk about Gregor Mendel and his seminal studies in the pea plant. And then towards the end of the lecture, we're going to talk about sex linkage. And I'm going to tell you about work done in fruit flies, and specifically, their eye color trait, which led to the linkage between the behavior of genes and the behavior of chromosomes. So that's what we have in store for today. So first, I'm going to tell you a little bit about what enabled Mendel's theory. I guess I could start over here. So what enabled Mendel's theory? And I presume that most of you have heard about Mendel before. So there might be a little bit of a reminder for you, but I also hope that we kind of make a very clear connection between Mendel's theory and the behavior of chromosomes. 
So Mendel did his seminal studies using the pea plant. And one aspect of pea plant biology that was really essential for Mendel's theory is that you can both self-pollinate pea plants, meaning you can take the male gametes from a pea plant and mate them to the female gametes from the same pea plant. So it's entirely within the same plant. So you can self-pollinate, meaning you do a cross-- you basically cross a plant to itself, which, obviously, we can't do with humans. You can't do with many organisms. Or you can cross-pollinate, meaning you take the male gamete from one plant, and you combine it with the female gamete from a different plant. And as we go through Mendel's experiments, you'll see how this was used to define the rules of inheritance. Another property of the pea plants and something that Mendel took advantage of was he chose traits of pea plants that exhibited a very clear dominant or recessive phenotype. So he used visible traits, visible traits with a very clear dominance or recessiveness. And if we go back to our example of PKU, you can see how this human disorder, the genes that determine this disorder-- the alleles have a very clear dominance or recessiveness. The dominant allele is often denoted with a capital letter. In this case for PKU, we used capital letter A. Or in this case, the disease allele was lower case a. And that's recessive. And if an allele has dominance, what that essentially means is that being homozygous for the dominant allele is equivalent to having just one copy of that allele. And so I think if you think of the PKU example, this is very clear. Because the disease phenotype results from a defective enzyme. So if you just have one copy of the enzyme that's functional, then the human can have wild type or normal function. So in order to really lack function of this enzyme, you need to be homozygous for the non-functional allele. So you need to not have any normal copy of that enzyme. 
Because if you just have one copy of that enzyme, you're OK. Because you have an enzyme that's functional and will carry out that function in the cells of your body. The last point I want to make is that Mendel did his experiments starting with what are known as pure breeding lines. So he started with pure breeding lines. And what I mean by pure breeding are these are lines that if you take these plants and just self-cross them over and over again, generation to generation, they will only give rise to plants with traits that reflect the parental generation. So there's basically no variation. And you'll see in just a moment that another way to denote pure breeding means that for a certain trait you have an allele combination which is homozygous. So another way to think of pure breeding lines are these are plants that have a homozygous allele composition. Homozygous meaning that either they have two copies of one allele or two copies of the other allele. I just want to make the point that Mendel had to overcome a number of hurdles in order to get these results. And Mendel is not exactly your sort of clear example of a success story. So Mendel applied to get a teaching certificate in university. And he failed out. And I'll quote one of his instructors who said, I quote, "He lacks insight and the requisite clarity of knowledge." So that guy feels stupid. So next-- so Mendel did his experiments. And the significance of his work was never recognized throughout his lifetime. He did his experiments in the 1850s and '60s. He died in 1884. He died not knowing at all what the significance of his work was. Because his work was then found again in the early 1900s by the likes of Thomas Hunt Morgan and others. And they made the connection between Mendel's laws of inheritance and chromosomes. And it was really then that Mendel became the father of genetics. Because then there was a physical model for how inheritance was working. But he died before that happened. 
So he didn't realize that he was the success that we now know him to be. Another interesting story. He started his work with mice. He wanted to breed mice with different coat colors. But he got in trouble with his bishop, because his bishop didn't approve of him promoting sex among these mice. Luckily, his bishop didn't take a plant biology course. Because of course, plants also have sex. But he had to overcome a number of hurdles. But he took advantage of his garden. And I was talking to someone in my lab the other day. And she said, you know what's great about Mendel? He had a garden. And he just didn't put it on Instagram. So he used his garden to his advantage. And so with his modest garden, he came up with what are now known as the rules of inheritance. And I'll start with Mendel's first law. So Mendel's first law is that every adult has a pair of genes for a given trait. And these are now what we refer to as alleles. And we now refer to them as genes as well. Mendel did not use the term gene. He just had these abstract sort of units of heredity. So his first law states that, every adult has two sort of units of heredity that can be different. And that they split during the formation of the gametes. And the probability that a gamete will have a given allele or a given unit is equal probability. So what I hope you can see is that this law, which is stated up there, is a direct result of the segregation of homologous chromosomes during meiosis 1. Mendel did not know that. But now looking back, we can see how this manifests itself. And so during meiosis 1, you recall that the homologous chromosomes here line up at the metaphase plate. And they line up opposite each other, such that half of the gametes will get the capital Y allele shown here. And the other half of the gametes will get this lower case y allele. So there's a 50% probability of a gamete either having one or the other. And of course, in meiosis 1, this is referred to as a reductional division. 
Because the homologues are split. And so the genetic content of the gametes is divided in half. So Mendel's first law, the evidence for it were the results of what is known as a monohybrid cross. So Mendel did what is known as a monohybrid cross where he took pea plants that were pure breeding for two different variants of a trait. The trait was pea color. So it's yellow peas versus green peas. And he made a hybrid where he takes a yellow plant that arises from yellow peas and crosses it to a plant from a green pea. And this is known as the parental generation. And so the result of this cross is that all of the peas were yellow. So 100%. This cross results in 100% yellow peas. This is known as the first filial generation, or the F1. And that should indicate to you which of these traits is dominant. Because if you take yellow peas and you cross it to green peas, and you get yellow peas, that means that the trait for yellow peas is dominant. So for the rest of this, I will denote the gene allele that confers yellow peas as capital Y and the gene allele that encodes for green peas as lower case y. And you see what I'm doing here is I'm putting the phenotype-- I'm describing the phenotype there, which is just what the trait is that manifests in the organism. But the genotype, which is the combination of alleles in the organism, I'm showing beneath the phenotype. And if these gene alleles are splitting during gamete formation and then recombining during the formation of a new plant, then, in this case, this hybrid plant is going to have one allele that's capital Y and one allele that's lowercase y. And because these two alleles are different, this situation is known as heterozygous. So this is a heterozygous plant. You can think of pure breeding being analogous to homozygous. Because if you cross a yellow pea plant to itself, you will only get back yellow peas. So it breeds true. You can think of a hybrid as being equivalent to heterozygous, because there are two gene alleles. 
So then what Mendel did is he didn't stop there. He self-crossed or self-pollinated these F1 plants and looked at the resulting seeds. And so in the F2 generation, what he found is that he got back both the parental phenotypes-- so 75% of the progeny were yellow, had yellow peas. And 25% had green peas. So there is this 3-to-1 ratio here. Now, if we think about this in terms of Mendel's first law, where there's a segregation of these alleles during the formation of gametes, and there's an equal probability of having either one of these alleles-- if we think about this cross here, these plants, both the male and female side, are producing gametes that are either big Y or little y. So this would be the female here. And so because they're separating, and there's a 1/2 probability of having either the capital Y or little y allele for the male-- and there's also a 1/2 probability of having either of these alleles for the female. So if you look at the possible combination of gametes that could give rise to the F2 generation, you have some that will be pure breeding yellow. And the probability here would be the joint probability of having this gamete and this gamete, which is one quarter. You then have two classes here that have one copy of the dominant allele and one copy of the recessive allele. So these will have the same genotype. And these three will have the same phenotype. They'll all be yellow peas. And so if you add up the probabilities of all three of these, you can see that 3/4 will be yellow. So 3/4 of the progeny will have the yellow phenotype. And you can see that another quarter of the progeny have the chance of getting two copies of the recessive allele, and therefore will be green. So just by considering this as a probability problem, which is what Mendel did, you can explain the ratios of the progeny that Mendel observed in his crosses. 
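The gamete combinations just described can be enumerated by brute force, which is the Punnett square as a few lines of code. This is a small sketch using the lecture's Y/y notation; the helper names are my own, not Mendel's:

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """All equally likely offspring genotypes from two parents,
    each given as a two-allele string such as 'Yy'."""
    return [egg + sperm for egg, sperm in product(parent1, parent2)]

def pea_color(genotype):
    # Yellow (Y) is dominant, so one capital-Y allele is enough.
    return "yellow" if "Y" in genotype else "green"

f2 = cross("Yy", "Yy")                       # self-cross of the F1 hybrid
counts = Counter(pea_color(g) for g in f2)
print(counts)                                # 3 yellow : 1 green
```

The four equally likely genotypes are YY, Yy, yY, yy; the first three share the yellow phenotype, which is exactly the 3/4 worked out above.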
And I want to point out the parallels between this simple cross with peas and the inheritance pattern shown by PKU, or phenylketonuria. So notice here you have green peas in the parental generation. But the green pea trait skips a generation and only appears again in a subsequent generation. So that's a lot like PKU, where you can see there's multiple generations that go by where the trait doesn't manifest itself. But then it pops up again in that later generation where you have inbreeding in this family. So there's a clear connection between the results that Mendel got and cases of human disease. Now we're going to go on and talk about Mendel's second law. So Mendel's second law, which is often referred to as the law of independent assortment. And another fortuitous thing about Mendel's experimental design and setup-- he didn't know it at the time-- is that he chose traits that actually were present on different chromosomes of the pea plant. So the traits didn't exhibit what is now known as linkage, where they are physically connected on the chromosome. So this law of independent assortment can also be explained by thinking about how chromosomes behave during meiosis, where the alignment of homologous chromosomes at the metaphase plate of meiosis 1 is essentially random. So if we take a look at this example here, you can see I've drawn one particular configuration for the chromosomes. And I'm using sort of one gene pair here and another allele pair here. And so if the chromosomes were aligned this way, then when they segregate during meiosis 1 you'd get two classes of gametes. Some that are capital Y, capital R. And another class that's lowercase y and r. So that's one possibility. But what's equally probable during the alignment of chromosomes during meiosis 1 is that the chromosomes line up like this. 
So rather than having the dominant alleles all on one side of the metaphase plate, you have a dominant allele for one homologous pair on one side and the other dominant alleles on the other side. So how they arrange, how these homologous chromosomes arrange during meiosis is totally random. And if they arrange like this, you'd get alternative types of gametes. You'd get gametes that are uppercase Y, lower case r, and lower case y, uppercase R. So this law of independent assortment can be completely explained by the behavior of the chromosomes during meiosis 1. So now I'm going to take you through the experiment that illustrated this. And this type of experiment is what is known as a dihybrid cross. And so a dihybrid cross is now a cross where you're taking plants that differ in two traits rather than just one. So di stands for two. And in this case, we're going to consider both pea color. Again, the pea colors are yellow and green. But now we're also going to consider pea shape. So you can have peas that are round and peas that are wrinkled. All right, I'm going to make use of this board again. So let's consider the round wrinkled case. If you set up a cross between a plant that was from a round seed and a plant that was from a wrinkled seed-- and let's say the round allele is dominant, what would you expect to see in the F1? So you have-- yes, Carlos. AUDIENCE: All rounds. ADAM MARTIN: You'd see all round, exactly right. So let's go through, now, this cross. So we're going to have a parental cross where Mendel took two pure breeding lines. One of them has yellow round peas. And he crossed the plant from a yellow round pea with a plant that was derived from a green wrinkled pea. And we already know yellow is dominant. And as Carlos just pointed out, if round is dominant, then you'd expect all of the peas to be round as well. So in the F1 generation, what Mendel found is you have 100% of the progeny that are yellow round peas. 
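Independent assortment says a dihybrid YyRr plant makes four gamete types at equal frequency, one allele drawn from each gene pair regardless of how the homologues happened to line up. That enumeration is one call to a Cartesian product (a sketch using the lecture's Y/y, R/r notation):

```python
from itertools import product

def gametes(gene_pairs):
    """All equally likely gametes from an individual, taking one allele
    from each gene pair. gene_pairs is a list like ['Yy', 'Rr'],
    one two-allele string per unlinked gene."""
    return ["".join(combo) for combo in product(*gene_pairs)]

print(gametes(["Yy", "Rr"]))  # ['YR', 'Yr', 'yR', 'yr'], 1/4 each
```

Each of the two metaphase alignments described above contributes two of these four types, which is why they all come out equally probable.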
And then similar to the monohybrid cross, Mendel self-crossed these F1 plants. And by self-crossing them, he observed a number of different classes of progeny. So he got back the parental types, yellow round. He also got back this other parental type, green wrinkled. So these, because these were the same combination of traits that were present in at least one of the original parents, are known as parentals. They had the same parental phenotype as one of the original parents. But what Mendel observed was two other classes of progeny which were different combinations of these traits that weren't present in the original parental generation. So those were yellow wrinkled peas and green round peas. So you'll notice that this combination of traits, yellow and wrinkled, is not present in the parental generation. This is all F2. This is F2 continued. You also see green and round were not present in the parental generation. So these are referred to as being non-parental. So this non-parental class is a unique combination of traits that wasn't present in the original parents. And what Mendel noted was that in these dihybrid crosses he always got a stereotypic ratio of 9 to 3 to 3 to 1 for these different classes of combinations of traits. Now we have to think about the probability. What leads to this characteristic ratio? And again, we can think about this just in terms of probabilities and these different gene pairs segregating independently of each other. So we already talked about for a monohybrid cross for pea color, 3/4 of the progeny is yellow. Because they at least have one dominant allele for color. And one quarter are green. So the probability of having this phenotype is 3/4. The probability of being green is one quarter. And you can consider pea shape as just a separate monohybrid cross where the dominant phenotype is also going to be present at 3/4 probability. So 3/4 are going to be round. And one quarter is going to be wrinkled. 
So now if we just consider these different classes of progeny here, we can consider two monohybrid crosses. And what's the joint probability of being both yellow and round? So the joint probability of being yellow and round is 3/4 times 3/4. So if we have 3/4 times 3/4, that's going to equal 9/16. Now, if we consider yellow and wrinkled, that's the joint probability of 3/4 and one quarter. So this probability is being 3/4 times one quarter, which is equal to 3/16. Green and round is similar. There's a quarter probability of being green and a 3/4 probability of being round. So again, you have one quarter times 3/4, which equals to 3/16. And the least probable class is being homozygous recessive for both alleles. Because there's a one quarter probability of being recessive for each. So the joint probability of having all recessive alleles is one quarter times one quarter, or 1/16. So you could draw a massive Punnett square and also derive this. But really you can just consider it as two separate monohybrid crosses and then just calculate joint probabilities. Any questions on Mendel before I move on? I'll just point out one thing about Mendel's second law. This is a rule that I'm going to break, now, in just a minute. And this law of independent assortment assumes that there is no linkage. In other words, seeing this type of inheritance pattern really depends on the two genes not being physically connected to each other on the chromosome. Now we're going to talk about fruit flies. And specifically, we're going to talk about a certain trait in fruit flies, which is their eye color. I brought some pets to class today. And so we're going to talk about the white mutant phenotype, where the fruit flies have a white eye color. So I have three pairs of vials here. In one of them, there's the white mutant. And you're going to see it has white eyes. And then there's also a corresponding sort of normal red-eyed flies in the other vial. So I'll just pass these around. 
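The joint-probability calculation above, two independent monohybrid crosses multiplied together, can be written out exactly with rational arithmetic (a sketch; the phenotype labels follow the lecture):

```python
from fractions import Fraction

# In the F2, each gene independently gives 3/4 dominant, 1/4 recessive.
p_dom, p_rec = Fraction(3, 4), Fraction(1, 4)

classes = {
    "yellow round": p_dom * p_dom,      # 9/16
    "yellow wrinkled": p_dom * p_rec,   # 3/16
    "green round": p_rec * p_dom,       # 3/16
    "green wrinkled": p_rec * p_rec,    # 1/16
}
for phenotype, p in classes.items():
    print(phenotype, p)

# Sanity check: the four classes account for all progeny.
assert sum(classes.values()) == 1
```

Multiplying the numerators gives the 9:3:3:1 ratio directly, with no need to fill in the full 16-cell Punnett square.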
Hopefully there's enough light that you can see the eye color. You're able to see the eye color, Jeremy? Yeah. You might have to come up to the board lights at the end of class if you want to see it really well. So we're going to fast forward from Mendel now and talk about researchers who picked up on Mendel's work in the early 1900s. And specifically, I'm going to tell you about research done in the lab of Thomas Hunt Morgan, who had a fly lab at Columbia. So we're going to talk about Thomas Hunt Morgan. And actually, we're going to focus a lot on work done in Morgan's lab in the next couple of lectures. Because it turns out his lab also made the first genetic map. And we'll talk about that in Friday's lecture. So the type of inheritance that Morgan defined is what is now known as sex-linked inheritance, where a given trait isn't assorting independently of an organism's sex, but is somehow connected to it. And I want to sort of return your attention to this example of human colorblindness where there appears to be some sort of connection between the disease phenotype, in this case, colorblindness, and the gender of the individuals. So you see this disease is only affecting males. And this type of inheritance pattern, while observed in humans, is really explained by work done in flies on this white mutant that you're passing around the class. So sex-linked inheritance, the explanation for that is this is a trait that's carried by a special type of a chromosome known as a sex chromosome. So fortunately, for us, flies, like humans, have a similar set-- or male flies and humans have an X and a Y chromosome. So the inheritance kind of is similar between flies and humans when considering sex linkage. And females have two X's. So the presence of these sex chromosomes was known in the fly. And it was known that if a fly had an X and a Y chromosome it would be a male fly. And if a fly had two X chromosomes it would be a female fly. 
And normally, normal flies have red eyes. But Morgan's lab was interested in variation in organisms. And they searched and searched for flies that had abnormal characteristics or traits. And what they found in Morgan's lab was a mutant fly. It was a spontaneous mutant. But this fly had white eyes. And it was male. So they found a single white-eyed male, which they continued to study for some time. So they sort of defined some of the rules of its inheritance. So what Morgan and his lab did was they set up a set of crosses that look a lot like Mendel's crosses. So you have a white-eyed male. They took a white-eyed male. And they crossed this white-eyed male to a red-eyed female. And if you cross a white-eyed male to a red-eyed female, the result was actually similar to what Mendel had predicted, which is that 100% of the flies had red eyes. So that's similar to what you would expect from a monohybrid cross where red eyes is dominant. So I'm going to refer to the red eyed allele as an X with a capital R. And the white-eyed allele is an X with a lower case r. Because this gene is present on the X chromosome, but it's not present on the Y chromosome. So the Y chromosome is really small. And so the X chromosome has-- all of its genes are basically present in one copy in the male. And that's going to manifest itself in that sex linkage. So then they took this F1 generation where you have red-eyed males. And they crossed siblings. So they did a sibling cross. The one failing of flies as a genetic system is they can't self-cross. You have to have a male and a female. So they crossed two individuals in this F1 generation. And what they found-- again, similar to what Mendel would predict-- is that 75% of the flies had red eyes. And 25% had white eyes. So this is behaving a lot like the yellow trait in peas. Except that all of the white-eyed flies in this F2 generation-- all of these white-eyed flies were male. So only the males were getting this trait of white eyes. 
And you can see that that's very much reminiscent of colorblindness, where you have a grandfather that has white eyes. And the grandfather is essentially passing on this trait to his grandsons. So this pattern of inheritance that happens in the fly is very similar to that that happens in humans. So now let's think about how this-- if we can explain this by thinking about chromosome segregating. So if we think about this F1 generation, we have red-eyed males-- or actually, let's do the parental cross first, where we have white-eyed males and we have red-eyed females. So all of the females are going to get their X from their dad. And they're going to get a wild type copy of this gene from their mom. So all the females are heterozygous for the white gene. So I'm going to call it the white gene because that's what it's called. In flies, they name the genes based on the mutant phenotype. So if you mutate it, and it results in white-eyed flies, then they call it the white gene. All the males are going to have a functional copy of the white gene. And thus, all of these F1 flies are red-eyed. So now these siblings are mated. All the females are heterozygous for this gene. The males all have a functional copy of the gene. So they're going to make gametes that are either an X that's functional for this eye color or the Y chromosome. So now what you see is that all of the females are going to get this normal copy of the gene from dad, and thus, are going to have red eyes. Whereas half the males are going to get a functional copy from mom, and therefore, have red eyes. But the other half of the males are going to get this non-functional variant that can't produce red pigment, and therefore, are going to be white. So this here is your white class. And you can see because males only have one X chromosome, the only class of progeny that's going to be white-eyed here are those males that occur with a quarter frequency. 
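The chromosome-segregation argument above can be turned into a small Python enumeration of equally likely gametes. This is just a sketch, not Morgan's notation; the genotype labels "XR" (X with the functional red allele), "Xr" (X with the white mutant allele), and "Y" are made up for illustration:

```python
from itertools import product
from collections import Counter

def offspring(mother, father):
    """Enumerate the equally likely progeny genotypes of a cross:
    each parent contributes one sex chromosome per gamete."""
    return list(product(mother, father))

def phenotype(genotype):
    # A fly with a Y chromosome is male; any functional copy (XR) gives red eyes.
    sex = "male" if "Y" in genotype else "female"
    eyes = "red" if "XR" in genotype else "white"
    return sex, eyes

# P cross: red-eyed female (XR/XR) x white-eyed male (Xr/Y)
f1 = offspring(("XR", "XR"), ("Xr", "Y"))
print(Counter(phenotype(g) for g in f1))   # every F1 fly is red-eyed

# F1 sibling cross: heterozygous female (XR/Xr) x red-eyed male (XR/Y)
f2 = offspring(("XR", "Xr"), ("XR", "Y"))
print(Counter(phenotype(g) for g in f2))   # 3/4 red, and the white quarter is all male
```

The F2 enumeration recovers exactly the lecture's result: a 3:1 red-to-white ratio in which every white-eyed fly is male.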
One thing I want you to think about over the next couple days-- and I'll sort of take you through it at the beginning of next lecture-- is what would happen if you set up a reciprocal cross here? What if you mated red-eyed males to white-eyed females? And I want you to tell me what you expect would result. So my question for next Friday is, what if you took red-eyed males and mated it to white-eyed females? So I want you to think about this. And I want you to think how this is different from Mendel's experiment. What if you did a reciprocal cross for, let's say, the pea color? How would these two different crosses compare with each other? And we'll talk about that at the beginning of Friday's lecture. And we'll also talk about the first genetic map, which was actually created by an undergraduate. So stay tuned for that. See you on Friday.
MIT_7016_Introductory_Biology_Fall_2018
28_Visualizing_Life_Fluorescent_Proteins.txt
PROFESSOR: OK, all right, so just very briefly, I just wanted to remind you because it goes along with the video, bioluminescence is light emitted based on a biochemical reaction. The most common one is the enzyme luciferase, which reacts with a molecule called luciferin to undergo a biochemical transformation that uses ATP. And as a result of that transformation, there is light emitted. So you don't have to shine light onto these. The light comes from the biochemical reaction. So the assays and a lot of biological work is based on the luciferin, luciferase reaction, so the mushrooms that do this will have both luciferase and luciferin. So in an organism, you can transfect-- or in cells, you can transfect cells with luciferase, and then when you're ready to do the-- to see the luminescence, the bioluminescence, you can add luciferin exogenously, and there's a number of chemists that are trying to make modified systems that have different light energies and so on. So there's a lot of protein engineering that's taking place based on the luciferin/luciferase pair. There's all kinds of cool biochemistry going on there, OK? Anyone who comes in could come and grab a donut. We can move them, or do you want a donut? AUDIENCE: [INAUDIBLE] PROFESSOR: OK, all right. I didn't mean to call attention to-- OK, I want to go briefly back to arrays, DNA arrays, because I want to explain to you the power of this technology. So last time, we were talking about arrays which are basically printed layouts, and I'm-- this is going to be a very tiny array with six sites on the array. So it's a two by three array. The ones that are used in biology are way larger than that. You can see those up here-- far, far more spots, and this is just the size of a glass slide and at each of these sites, there would be different DNA sequences. So they're specific DNA sequences, and they are printed onto the array with-- it was originally done just with printers. 
Now there's more technological ways of handling this, and they're usually on silicon or glass slides. So a lot of engineering went into the original arrays to make these really tightly packed distributions of DNAs. And these would be what are called addressed arrays. We know what's at each of the sites. We're aware of the sequences at each of the sites, and there's really cool chemistry that enabled the deposition of these arrays. So what these arrays are used for is to probe complementary pieces of DNA. So let's say we've got sequences G, H, I, J that come potentially from genomic DNA, and each of them complements one of the sites on the array, but the DNA that you're screening is specifically labeled with a fluorophore, OK? So these would be fluorophore labeled, and what arrays enable you to do is to rapidly screen either genomic DNA or, actually, much more usefully, as you'll know now, the transcriptome, so the messenger RNAs, and I'm going to remind you about technology to take the messenger RNAs from cells and then make an appropriate complementary DNA copy that matches what's on the arrays. So let's go take a look at the next slide which describes the experiment. So let's say you want to screen populations of cells where one set of cells is cancerous and the other one is healthy. You would collect the lysate from those cells. You would collect the messenger RNA presumably based on a tag or a feature of the messenger RNA that's common to just the messenger. What would that be? What would be common to the mature messenger RNA that would allow you to capture just the messenger RNA, not all the other RNAs, not DNA? Anyone? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: The poly-A tail. 
So you can, with an appropriate resin, for example, a poly-T support, capture all of that, and then in each case, you would take the messenger RNA collection, and then what you need to do is turn it into a complementary DNA sequence. We're much happier screening at the DNA level because the RNA is less stable. So it's really handy to do that conversion, and you use the viral enzyme, reverse transcriptase. That's an enzyme that comes from the retroviruses that are exemplified by the HIV retrovirus that we'll be talking about in the last week of class. So you'll learn where reverse transcriptase fits in the life cycle of the HIV virus. Then once you've collected the DNA that's complementary to the RNA from each type of cell, you label them uniquely, for example, with green fluorophores or red fluorophores. It's kind of a global reaction you just paste-- chemically link the fluorophore to each set of DNA. And then you basically combine everything and take a look at how the chips, how the arrays, light up. So in your typical experiment, everything that has a red fluorophore might bind to a unique site. Everything with a green fluorophore might bind to another unique site on the array. And then if both types of DNA bind to the same site, that would show up as yellow. If nobody is binding, it would show up as a blank spot. So you get these collections where, pretty easily, you can look at thousands of sequences to see whether they're complementary. That's on your array. Everywhere you see yellow, the cancer cell and the healthy cell are binding kind of equally, so there's nothing disease related there. So that's an interesting feature because you can say, well, none of these are problematic segments, but then you see very clearly spots that are clearly red. So the sequences of the DNA on the glass or silicon chip are complementary to sequences that are unique to the mRNA in the cancer genome, in the cancer cell line from that lysate. 
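A toy version of the red/green/yellow/black spot-calling logic just described might look like this in Python. The gene names, intensity values, and the 0.2 threshold are all invented for illustration; this is not how a real array-analysis pipeline sets its cutoffs:

```python
def call_spot(red, green, threshold=0.2):
    """Classify a two-color microarray spot from its channel intensities.

    red   -- signal from the red-labeled cDNA (e.g. the cancer sample)
    green -- signal from the green-labeled cDNA (e.g. the healthy sample)
    The 0.2 threshold is an arbitrary value chosen only for this sketch.
    """
    red_on = red >= threshold
    green_on = green >= threshold
    if red_on and green_on:
        return "yellow"   # both samples hybridize: not disease-specific
    if red_on:
        return "red"      # unique to the cancer-derived cDNA
    if green_on:
        return "green"    # unique to the healthy-derived cDNA
    return "black"        # nothing binds this probe

# Made-up (red, green) intensities for four hypothetical spots:
spots = {"gene_A": (0.9, 0.8), "gene_B": (0.7, 0.05),
         "gene_C": (0.02, 0.6), "gene_D": (0.01, 0.03)}
for gene, (r, g) in spots.items():
    print(gene, call_spot(r, g))
```

The "red" spots are the suspects: probes hybridizing only to cDNA from the cancer lysate.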
So it's pretty cool. I don't know if any of you have hit that link, but it's basically this guy is sitting at a bench and doing an experiment where he's able to do thousands-- the equivalent of thousands and thousands of experiments just by using one gene chip to do the probing. In other cases, you will see spots that are green, which means they come exclusively from the healthy cells. So those would be really the ones you want to interrogate, and where it's black, nothing's binding, so it's less important. So it allows you to go from a grid of thousands and thousands of different sequences to pick out suspects in a disease-related situation. Does that make sense? Pretty cool technology. The original chips were called-- they were from, I think, Affymetrix, the Affy chips, and they are really widely used even to this day, and there are different types of variations, but I think one of the most useful and ones that will remind you of certain things we've talked about in the class is that you can grow cells of different sort of, you know, histories. For example, you collect the messenger RNA because, remember, the transcriptome is much more interesting to us than the genome. The chips would have to be a lot bigger to scan all of the genome, and then you use reverse transcriptase to make a complementary DNA of the RNA because it's more stable, more tractable to work with, and then you put on a fluorophore label, mix everything together, and see what you get. So those are my candidate spots. So if you had an experiment, let me think-- walk you through what the arrays can do, but let's make sure we know what they can't do because it's always important whenever someone says, I got this technology. It's going to solve all the problems of the world. Throw away your beakers and test tubes. You just don't need to do any of that, and you can solve everything with arrays. 
So let's say you've got a situation where, in some cases, although genes may be defective, the messenger is still produced, OK? So there's something wrong, but you make the messenger RNA. So it would still kind of show up normally on the chip, but the gene defect prevents translation into proteins. So you get the gene, but you can't translate it, so it's going to look like a normal gene, but it's in the translation from the track-- from the messenger RNA to the protein where things go wrong. What are you going to see on the array? Is it going to look healthy? Is it going to look not healthy? What's going to be the outcome? Could I use an array, a DNA array, for this experiment? Yeah? What do you think? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, Yeah. It's just going to look like everything binds to the chip. It's not going to be informative. So in this case, you need to do very different experiments that are at the protein level, all right? So I want you to remember that these are very good to get the genomic sequence, the messenger RNA sequence. But if you've got, for example, a protein that is made, and there's a problem with, let's say, a phosphorylation by a kinase that may be critical for function in a cell, you will not see that in the array. That will not be visible. You'd have to do different types of experiments, for example, with phosphoprotein-specific antibodies to tell that there was a disorder in those systems. So always when someone tells you, I've got this great experiment, you say, what can it do? That is awesome, but what can't it do? OK, I need to worry about that, all right? OK, so what we've seen so far is we've seen a variety of fluorescent tools, for example, to label the nucleus, either the original ethidium bromide types of stains that will intercalate into DNA and give you a fluorescent signal that could be seen on a gel. We've seen the evolution of ethidium bromide to less toxic materials, for example, the DAPI stain. 
So ethidium bromide intercalates-- ethidium-- sorry, DAPI binds in the minor groove. And we saw images of that last time. One important distinction that is made with these kinds of dyes is that DAPI is what's known as a supravital dye, which means you can use that dye and observe live cells. So supravital is associated with the types of experiments that you can do observing cells that are still alive. The ethidium bromide is not such a dye, and neither are the antibodies because so ethidium bromide is toxic, so even though the ethidium bromide gets into cells, it actually should have the H here because that would be ethyl bromide in my language. So ethidium bromide has the H there. You can't use that. It's not a supravital dye because it's toxic to cells because intercalates in the DNA and disrupts replication, so that is not supravital, so DAPI is. This guy isn't, and antibodies, which we learned a little bit about last time, these are from my description, are simply reagents to recognize components of biological systems, most commonly, proteins, and now we're getting better at making antibodies to carbohydrates or glycans. Are antibodies supravital or not, and why-- if they're not, why? Could I use that on a cell and follow a live cell with an antibody? Yeah? Carmen? AUDIENCE: It doesn't seem very likely to me that the antibodies will allow the proteins to do whatever it is that they do. PROFESSOR: Right. AUDIENCE: So I think that the cells would die. PROFESSOR: So it's a correct-- they are not supravital. What do you think would happen if I add an antibody to a living cell? Could it stain-- it could stain something on the cell surface, but it might, as you say, interfere with the function of the cell. Let's say I have an antibody to the epidermal growth factor receptor. That is going to alter the properties. So you're correct in that respect. What about targets inside cells? Are the antibodies going to get in? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. Yeah. 
Yeah, it's a big wall, frankly. It's an impenetrable barrier to get in. So you can really only stain in-- you can only observe in what are known as fixed cells. And when we call a cell a fixed cell, we actually mean we've broken it because what we've done is we've treated the cell population with methanol, which pokes holes all over the cellular membranes. They're fixed. They're static on a slide. They're not going to move around anymore, and you can stain with antibodies. You can tell what happened at the moment-- what was happening at the moment the cell passed away, if you will, what was going on, which proteins were there. But you can't keep observing moving forward because the cells then are no longer viable, all right? So neither of these approaches is supravital, but they're for different reasons. One is toxicity. The other one is also toxicity, as Carmen pointed out, but it's also a problem with membrane permeability. So what we're going to talk about in the rest of this lecture is a way to get around this impenetrable problem, and this-- and the discovery of the green fluorescent protein, and we're really talking mostly about GFP and its close siblings that might be different colors. What I want to walk you through is the discovery of the protein, and I really want to impress upon you how science happens in funny, small steps early on. You could never have predicted that a jellyfish off the coast of Seattle was going to revolutionize biology, biochemistry, life sciences, right? Who would have ever thought of that? And so that's why, you know, we and our colleagues are so excited about fundamental science where you don't quite know where you're going, but you're working on something cool, and then the prepared mind goes, wow! That's interesting. I could use that for this problem. So Shimomura was a Japanese biochemist, who was fascinated by jellyfish and their bioluminescence and worked for years, slaved away for years and years. 
Apparently, he would go with his family to the small islands in Puget Sound and have his kids go and collect jellyfish all day long because he noticed certain things about the jellyfish that were rather intriguing with respect to their properties. And the key thing that was observed was that there is a bioluminescent protein in jellyfish known as aequorin. So that's the kind of luminescence we just described, but in the dark, there was also a fluorescent species, and it turns out that the light energy from aequorin actually can excite the fluorophore in the green fluorescent protein, and then you emit the light of the wavelength in the green. So it was actually a coupled pair of proteins, aequorin and the second thing that was just the green fluorescent protein, and what was fascinating about this protein, as more and more work was done, is that it didn't seem to need any additives. So for bioluminescence, you got to add ATP, and you've got to add, you know, the luciferin. You've got to add things to see the fluorescence. What was unique about the green fluorescent protein is that it didn't need anything added. You just had the protein, and it was fluorescent. So what I want to talk to you about is the molecular basis of this fluorescence, and we'll also talk, though, about the protein engineering that was systematically done to make this more and more of a useful reagent. So Shimomura was the first person. He had his kids collect so many jellyfish that they could extract the green fluorescent protein through traditional old-school biochemical methods and grow crystals, protein crystals. Guess what? They were bright green, and they were able to solve the structure. Once they had the structure, they sort of knew what was going on with the jellyfish, and they were also able to recognize a particular part of the sequence of the green fluorescent protein that ended up being the precursor to the fluorophore that we see nowadays and we understand. 
Originally, the protein was not monomeric. I'll talk to you a bit about that later. For technology, it's much more handy to have this protein as a monomer, not as a dimer or a tetramer. That makes experiments complicated, but I'll show you a very easy trick that was done to fix that. And so the other people who shared the Nobel Prize with Shimomura, who, by the way, passed away just a couple of weeks ago, in fact, but at an amazing old age. It must have been all that digging for jellyfish that helped him live that long. There was also Martin Chalfie at Columbia, who demonstrated that the gene for the green fluorescent protein could be put in all kinds of other animals and organisms-- C elegans, bacteria-- and they would fluoresce. And a really major player in this entire story was Roger Tsien, who died very young of a stroke, who was the chemist-biochemist who put the pieces together and said, if this is GFP, we can use it for so many different things. So he really advanced the technologies of the applications. So protein expression could be-- they realized quite early on could just be programmed into a protein by just having the DNA for the green fluorescent protein stuck on to a favorite protein of interest. Then you would have that DNA be transcribed, translated in the cell, and then your favorite protein in the cell would fluoresce green because it was attached as a construct with the other proteins. So all of those things became enabled quite quickly. This could be done in all organisms, eukaryotes, prokaryotes. It's pretty non-toxic, so expressing a bunch of GFP in a cell doesn't kill it. That's kind of handy. So it really is a supravital system, and it's visible in all kinds of tissues. So what I'm going to show you here are some of the seminal results, first of all, the structure we'll talk about, and then Chalfie was able to put the DNA into bacteria, and, also, label proteins in C elegans. And the story of Chalfie is kind of funny. 
He's one of those guys who sort of didn't necessarily have all the equipment he needed, but he knew this was really exciting, and he needed a fluorescence microscope. So he would call up all the companies who sold microscopes and, say, I'm really thinking of buying this fluorescent microscope. This particular one would be-- these are a couple of hundred grand, you know, these are not cheap things. And he'd talk the company into putting a microscope in his lab for a month as a demo. It's, like, you know, saying to the car dealer you want to buy a car and then driving a Ferrari around for a month and then saying, it's not really going to work for me. So he collected all his early data on microscopes that were on loan from Olympus and Leica and various other places. So that's a funny, funny twist. OK, so the fluorophore-- all right, so they could go from DNA to protein sequence. Then they could look at the structure and see the place in the protein sequence that went serine, tyrosine, glycine. So here we go-- serine, tyrosine. You recognize that one. It's the one with one of the aromatic rings; glycine is the one with no substituent. It was common in a number of organisms, mostly aquatic organisms that fluoresce, and seemed to be the origin of the fluorophore in GFP, and so the chemists got to work, and you know what chemists do, they start drawing arrows and joining bonds and figuring out how can we go from something that's basically dead, and it's not got any fluorescence at all, to something that's fluorescent. And so from the structure and from working out the mechanism, they basically found that this little piece, this tripeptide within the entire primary sequence of the protein, was able to cyclize through that nitrogen attacking that carbon and then an elimination, and then there was an oxygen-dependent oxidation to give you this structure. So if you look at this, you can still see the serine. 
You kind of know this was tyrosine, but it doesn't look like it anymore, and it turns out the glycine was incredibly important. It doesn't look like it's involved in the chromophore, but when you have a glycine in a sequence, it allows you to do some funny twists and turns because it has no substituents. So it allowed that loop to form. Once that loop is formed, chemistry can occur. If the glycine wasn't there, things might be not in an ideal situation to form the fluorophore. The oxidation to put in that double bond is oxygen dependent. So in some cases, the fluorophore would mature a bit more slowly. So you would go from the free state that's not fluorescent to the fluorescent state quite slowly if you withheld oxygen. So a lot of engineering was done to improve the maturation time, to improve the photophysics. There's really cool photophysical experiments that were done, but that's the basis of it, and the emission of this fluorophore, you can almost recognize that it's a fluorophore because it's got a lot of what's called conjugation in organic chemistry-- one double bond, another double bond stuck onto this ring with multiple double bonds. And it emits at the same-- a similar wavelength to the fluorophore fluorescein, so people were happy because all the filters on their microscopes were the right ones. If you just used the fluorescein filters, you would get a GFP signal. So that really was made in heaven for those guys. OK, so let's take a look at this. Here's the wire diagram of the original crystallized structure. I've planted a ribbon on there. I get rid of all the side chains, so you see this beautiful dimer structure. Let's throw away one of the monomers, and you can start to look. There's a little something sneaky in there, and as you get closer, you can start to see the structure. There's this sort of thread going through it. That's just a trace of the backbone, but there is the structure of the fluorophore. 
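Since the chromophore matures from a Ser-Tyr-Gly tripeptide in the primary sequence, a simple scan can locate such a motif in a one-letter protein string. The fragment below is made up for illustration and is not the real GFP sequence:

```python
def find_tripeptide(sequence, motif="SYG"):
    """Return 0-based positions where the chromophore-forming tripeptide
    (Ser-Tyr-Gly in wild-type GFP) occurs in a one-letter protein sequence."""
    return [i for i in range(len(sequence) - len(motif) + 1)
            if sequence[i:i + len(motif)] == motif]

# Hypothetical fragment for illustration only:
fragment = "MASKGEELFTSYGVQ"
print(find_tripeptide(fragment))  # -> [10]
```

The same scan with motif "TYG" would find the Thr-Tyr-Gly of the common S65T engineered variant.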
Here's the fluorophore with the space filling, and what I think is kind of cool is if you kind of twist the GFP around, it's kind of like a barrel, and it looks like a bird in a cage because it's caged in there. And to be honest, it's the structure. It's not just the structure of the fluorophore. It's the fact that the fluorophore is inside a kind of hydrophobic environment that makes it fluoresce. I could go into the lab and make that fluorophore and have it in a beaker of methanol, and you wouldn't see anything. It's only in that environment that's created in the GFP molecule, so it's fascinating. The structure of GFP creates the shape to cyclize, and it creates the environment for good fluorescence. So he has a mass of animals that have been labeled with GFP. As you've seen, our mouse has been labeled multiple times. This was the first GFP rabbit. He was known as Alba. He even had his own name, but there's seaweeds and zebrafish and mice from a litter, C elegans, the famous, the fly. What else? I don't even know what these things are, but Purkinje cells and so on. Anyway, so this shows you the universality of the tool. A brief mention about the dimer problem, people looked at the structure of the dimer from the crystal structure and found that there was a sticky face between two monomers of GFP that encouraged this quaternary structure. And so they simply changed two alanines that were sitting right at the dimer interface, one from one monomer and one from the other, which created a sticky patch. Sounds a bit like hemoglobin, if you remember, the sickle cell hemoglobin. They changed them to lysines and [INAUDIBLE] solved-- you had just monomeric protein because when you do a lot of experiments with GFP, you don't want GFP randomly dimerizing because it's going to change the biology of the protein it's attached to. So this very, very easy structure-driven engineering was perfect. OK, so let's take a look at some of the early things that could be done now. 
We take it all for granted, but GFP can be used as a reporter gene to do a number of things in cellular systems. So, for example, if you want to study a regulatory sequence to know whether this promoter works well, you can put it at the front end of the GFP gene and then see if the promoter works under certain conditions, then the GFP will be produced. The cells will grow green. And this is a nice variant to a luciferase-based system or a LacZ-based system where you're actually having to treat and fix things in order to do the LacZ experiments. So here's a really nice example with C elegans. What you could do pre-GF-- we really have-- it's like AD/BC-- you know, anti whatever before year 0 and after year 0. The biological world is pre GFP and post GFP, frankly, the impact that it's had on the system. So here is a C elegans. As you know, all the neurons have names, and a lot of C elegans biologists are very familiar with each, you know, single cell within the C elegans, in particular, the neurons. So this is a listing of the touch receptors. You could have an antibody to those, and so you could do some antibody staining, but antibody staining, remember, we got to fix the worms. They don't like that. You literally fix them down and do the staining with the antibody to get inside the C elegans to see the antibody light up the proteins. You could do beta-gal activity, same thing, but in both cases, these are dead C elegans. They're fixed. They're in place. You can't observe them going forward. But look instead at the images you would get with GFP, where you could totally pick out all the touch receptors. And this is a situation where an important protein in those cells, MEC-17, is co-expressed with GFP, and you can see this beautiful image, and the worms are still alive. You could totally watch what was going on with them, and he just wants you to know that he's alive and well and kicking. OK, now, progress has to happen. So one green protein does not a history make. 
So, immediately, with the identity of the structure of GFP, I am going to go back really quickly to look at that picture of the structure for you. Sorry about this back and forward. OK, the chemists realize that if they change this tyrosine to some of the other amino acids that looked a little bit like it, like phenylalanine, tryptophan, and histidine, they might alter the photophysics of this structure because if you put tryptophan there, there's even more double bonds. If you put phenylalanine there, that OH is gone. So there's really opportunities, and that's the chemists saying, I can look at this protein structure, and I know the one thing I can change is to change that amino acid back at the DNA level. So I'm going to change the codon for tyrosine to the codons for tryptophan, phenylalanine, and histidine and then see what my fluorescent protein looks like, OK? Does that make sense? So it was the one thing that's obvious. I can fix this. So what they did was they made the series of GFPs, and they transfected cells, and they were able to see that when you replaced various residues, for example, tyrosine to tryptophan, tyrosine to histidine. There's a little bit of variation here, and don't worry about this for a minute. They were able to create bacteria that had been transfected with the mutated GFPs in order to give them different colors of bacteria. So my question to you is, if you look at these, this would have been the original GFP, except made a little bit brighter by a small substitution, which of these would emit at the shortest wavelengths. So just look at the colors, which is the shortest wavelength, which is the bacterium that expresses the protein that emits at the shortest wavelength. You'd always be provided a picture of the electromagnetic spectrum. Yeah, out there. AUDIENCE: The BFP. PROFESSOR: Yeah, the blue one, exactly. So you can look at the spectrum, and you say, OK, I've got blue down here. There's that cyan. 
It's kind of a blue green, and then I move towards the more yellow. So you can pick it out and say, it's the shortest wavelength emission, so the highest energy emission. OK, all right, and these were the variants that I just described while I went back to the picture. So you could make a blue one with histidine. You could make a cyan one with tryptophan. You could preserve the green one with tyrosine, but with a change of the serine to a threonine just to improve the photophysics, a long story. I won't-- I'm happy to chat about it. And, in fact, there was one clever one where you couldn't really make the protein yellow, but if you had an extra tyrosine nearby, stacked against the GFP chromophore, you could actually go as far as a yellow fluorophore. And that, in principle, should make everybody really thrilled. It's always this situation with protein engineering where you make something, and it's a huge improvement, and then everyone says, well, what else, you know? What's next? We need red dyes. We need all kinds of different dyes. So teams went back to the oceans and actually collected organisms based on the color they fluoresced at, a nice way to sample things. And, actually, we were ultimately able to discover a red fluorescent protein from the Discosoma coral, the original protein, DsRed, and if you look at this kind of carefully, it looks a lot like GFP. There's the tyrosine, that funny ring. There's a double bond there, but there is an additional double bond down further into the sequence that extends what we call the chromophore. And when you extend the chromophore, you're more likely to move to longer wavelengths, and that's how they got the red, and then a bunch of engineering later, they got what's called the fruits, which emit at all different wavelengths. Much of this was not done rationally, because they went to nature. They found out what nature did.
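To put numbers on the color-to-energy ordering discussed a moment ago, here is a small sketch (not from the lecture) that computes photon energy for the blue, cyan, green, and yellow variants. The emission maxima used here are approximate literature values assumed only for illustration:

```python
# Shortest emission wavelength = highest photon energy (E = h*c / wavelength).
# Emission maxima below are approximate values, assumed for illustration,
# not numbers given in the lecture.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

emission_nm = {"BFP": 448, "CFP": 476, "GFP": 507, "YFP": 527}

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron-volts for a wavelength given in nanometers."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert J -> eV

for name, nm in sorted(emission_nm.items(), key=lambda kv: kv[1]):
    print(f"{name}: {nm} nm -> {photon_energy_ev(nm):.2f} eV")
# BFP, the bluest bacterium, has the shortest wavelength and so the
# highest-energy emission, matching the professor's point.
```

This is why picking the blue colony off the electromagnetic spectrum is the same as picking the highest-energy emitter.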
They got here, and then they did a process known as gene shuffling, where they mixed and matched portions of genes to get them the entire color spectrum. They're not all fabulous fluorophores. They have some problems. They bleach easily and so on, but, nevertheless, this is a truly amazing sort of outcome. OK, and they enable you to do artwork with fluorescent proteins with bacteria, so you just paint your picture and wait for the bacteria to grow, and you would have all those colors. So fluorescent proteins were very limited at first, originally just green, then blue-green and greenish yellow, but, now, with the palette that's available, you can color all kinds of organelles in different types of colors, all right? OK, so now let's take a couple of things. When we opened up this class, we showed you a bunch of these pictures because we wanted you to stay in 7.016 with us to see the cool things you'd see, and they were pretty much what we would call eye candy. They look cool, but how did we make them? And, now, with the knowledge of GFP and RFP and so on, you can say, OK, I understand what this experiment is, and you can certainly see that this is a super vital technique. So what I want to ask is, what are we looking at? What is labeled to give us this beautiful picture? So you can sort of go down, and there's a bunch of options, but, obviously, anything where you think you're labeling DNA with a fluorescent protein is incorrect. We're labeling at the protein level. We're engineering the DNA to express the fluorescent protein, but we're actually labeling a protein as opposed to DNA. So A and B are out for sure, but as we go through them, you can start to see the chromatin is red and the tubulin is green, so the outcome there would be that what we're observing is that the histones, which are the proteins associated with chromatin, are labeled with a red protein. And you can see that there.
Those are the chromatids, and the tubulin, these green fibrils, is associated with a GFP. So C would be the correct answer there. And there's another one of these cell division pictures. Once again, the same sort of idea, but what I think is really cool here, as you can literally see-- I love this picture because there's this poor chromosome. It doesn't seem to be able to get with the team. He's hanging down there, and, eventually, just before cell division, it seems like things work out OK and cell division happens. Now when you think of this, the capacity to watch this stuff in action is really pretty amazing. This is a set of cell lines where proteins in the cell cycle are labeled. You had a question on exam three about proteins building up and going away during parts of the cell cycle. So, oftentimes, the degradation of a protein occurs when a protein gets labeled with ubiquitin by a ubiquitin ligase, and then it gets destined for degradation. So the proteins that you're seeing build up in phases of the cell cycle are going away because of the ubiquitin proteasome system. So in these cells, particular proteins have been labeled, and here you can see the phases-- the components of the cell cycle as proteins come and go through cell division, and you can literally observe where in the cell cycle each individual protein is. Why is this so fascinating, I think, in therapeutic development is that you could literally have drugs that might impact aspects of the cell cycle, and you could watch them in real time impact the imaging that you see on this screen with the red and the green, because if you arrest at a certain stage in the cell cycle, the cell will get stuck at a particular color, and you would even know which phases, which parts of the cell cycle, are being impacted. So I think that's a really sort of captivating way to think of what you can do with these proteins, because you can see things in real time, dynamics of cellular systems.
And I think, oh, and I'm going to finish a few minutes early, but I looked desperately for a fluorescent turkey, and I couldn't find one. So I went with this light string turkey and fluorescent colors. It seemed to fit the bill. All right, so please help us finish up the bagels. Make sure you guys over there get them and have a good break, and I'm passing you back to Professor Martin for Monday, which will be a continuation of imaging, OK.
MIT 7.016 Introductory Biology, Fall 2018 -- Lecture 22: Neurons, Action Potential, Optogenetics
ADAM MARTIN: All right, let's get started. So I'm starting with this video here. What's happening here is there's this mouse, and you see there's like this fiber optic cable going into its brain. And the mouse is asleep right now. And now the researchers are shining light into its brain, a specific region of the brain, to activate specific neurons in order to test whether they function in arousal. And here, you see the mouse is going to wake up. There it goes. It's awake now. So for today's lecture, we're going to work towards understanding how this experiment works. And we're going to talk about how neurons function and how researchers are able to control that function in order to modify behavior-- in this case, the arousal of this mouse. OK, so this is going to involve a particular type of cell in our body, which is the neuron. And neurons are highly specialized cells that have a function to transmit information from one part of the body to another. And so neurons are highly polarized cells, which you can see here. On the left of this neuron, you see this arbor of protrusions, which are called dendrites. And then on this side of the cell body, you see a single extension, which is an axon, and then the terminus of the axon over here. And this nerve cell transmits information in a single direction. It will transmit information from this side to this side. And these neurons are able to communicate with each other. And they communicate at the ends of the neuron, which are known as synapses, which I'll come back to and talk about later on in the lecture. So neurons could be making synapses on this side and also making synapses on this side with other neurons. So to start to unpack the function of this neuron-- and I should highlight that this flow of information can occur over very long distances, right? Your sciatic nerve extends from the base of your spine all the way down into your foot, OK? So that axon is one meter in length. 
So that's an extremely long distance to transmit information along a single cell. And so we're going to go from thinking about how signals are transmitted in single cells, and this will involve electrical signaling. Then we'll talk about synapses and how synapses function to communicate between neurons. And this is going to involve also sort of understanding how certain antidepressants, like Prozac, work. And then we'll end by talking about how researchers did this experiment to wake up the mouse. And it all starts with something that I told you about at the beginning of the semester, which is that the plasma membrane separates distinct compartments, the outside of the cell from the cytoplasm. And there are distinct ion concentrations on either side of this boundary. So we're starting now talking about a single neuron cell. And we're going to talk about a type of signal known as an action potential. Oh, that's right. So we're going to talk about an action potential. And what an action potential is, is it's an electrical signal that travels the length of the neuron. So this action potential, I'll abbreviate this AP. So when I refer to AP, I'm not referring to advanced placement, but action potential, OK? So this is an electrical signal that travels the length of the axon and the neuron. And so in order to have an electrical signal propagate, we need some sort of electrical property that the cell has that enables this. And so I showed you or I told you earlier in the semester how sodium ions are concentrated on the outside of the cell and potassium ions are concentrated on the inside. You see here's the sodium gradient here, potassium gradient here. And now I'm going to tell you how it is that this happens, because this is thermodynamically not favored, right? These ions would prefer, by diffusion, to be at equal concentrations on both sides of this plasma membrane, which means that the cell, to shift this away from equilibrium, has to expend energy to set up this situation.
And so in the plasma membrane of the cell, there is a protein. It's an integral membrane protein and sits inside the plasma membrane. So this here is the plasma membrane. And this integral membrane protein is called a sodium potassium ATPase. So it's going to have a subunit that hydrolyzes ATP to ADP. And the protein uses the energy of ATP hydrolysis to pump sodium ions up their concentration gradient. So the sodium ions are going out of the cell. And this is going against the flow that sodium would normally like to take, which would be going downstream. And it pumps potassium ions into the cytoplasm such that there's a higher concentration of potassium ions in the cytoplasm, OK? So these neurons expend a huge-- a quarter of their ATP is used by pumping ions like this, such that there is gradients of ions across the plasma membrane. Now, if one sodium ion was pumped out for every potassium ion pumped in, there'd be no charge difference between the exterior and the cytoplasm. But what happens in the plasma membrane is that in addition to the sodium potassium ATPase, there are other channels that are present. There are sodium channels. These are mostly closed, but there are some potassium channels that are leaky. And they're basically leaking potassium from the cytoplasm out into the exoplasm, OK? And if you have positive charges going out the cell, then the inside of the membrane is going to have a net negative charge. And the outside of the membrane is going to have a net positive charge. And this charge across the membrane, where you have positive on the outside and minus on the inside-- I should label this exterior, and this is cytoplasm. This voltage difference is known as a membrane potential. So this is a membrane potential. And it's an electrical potential across the membrane. If you're an electrical engineer, you can think of the plasma membrane as a capacitor, OK? So this plasma membrane is holding this charge difference across it. 
And so there's a voltage across the membrane. And in a resting state, the cell's resting potential is negative 70 millivolts. So if the cell is not getting stimulated by something like a neurotransmitter, the resting potential is negative 70 millivolts, where the inside is negative and the outside is positive, OK? So now I just want to define some terms that are going to be useful for us when we talk about action potentials. So when there's this negative inside potential, a negative inside potential is referred to as polarized. So it's polarized because there's a polarity across this membrane, where one side is positive and the other side is negative, OK? So polarized refers to if there's a negative inside potential. So the resting state of the cell is polarized. However, the cell can lose this polarity and not have a charge differential, or it can flip and be positive on the inside. And when that happens, if there's either zero or positive inside potential, this is referred to as depolarized. Anyone have an idea as to how the cell would flip the potential? What would have to happen in the plasma membrane to flip this potential and depolarize the cell? Yes, Stephen? AUDIENCE: You could open the ion channels. ADAM MARTIN: So Stephen suggested opening ion channels. Which ion channels would you open? AUDIENCE: The sodium channels. ADAM MARTIN: Yeah. So Stephen suggested if you open these, it's going to depolarize the cell. Because remember, sodium is high on the outside, out here. And so if you open these channels, positive ions are going to flow in. And that's going to make this less negative and this less positive, OK? So this is the situation here, where these sodium channels open, and the sodium ions rushing in are going to create a depolarization, where you now flip the potential. And there's a greater positive charge on the inside of the plasma membrane. Everyone see how?
Because the sodium ions are going to just go downstream. They're higher concentration out here. So just by opening these channels, the cell doesn't have to do any work to do this. Sodium is just going to flow down its gradient into the cytoplasm. So what an action potential is, is it's a transient depolarization of the nerve cell. So the Action Potential, or AP, is a transient depolarization of the neuron, which means it doesn't just depolarize and stay depolarized, but it depolarizes and then restores itself back to the resting polarity. And so what you see when you measure the voltage across the plasma membrane in a neuron, you see that it can spike and depolarize, but then it's rapidly restored to its resting state, OK? So it's a transient process. When we think about the neuron at higher resolution, what you're going to see is not only is it transient, but it's also a traveling wave that propagates along the entire length of the cell. So this is also a traveling wave. And one thing that you can notice about these neurons, or the action potentials here, is that they all depolarize to the same extent. So they all depolarize to this positive 50 millivolts. And so this illustrates a key property of neurons, in that the level of activity of a neuron is not determined by the size of this action potential. This action potential is an all-or-nothing event. It either happens or it doesn't. And when it happens, it depolarizes to the same level. So the action potential is all or nothing. You can think of it as a binary signal. And therefore, the way that neurons encode sort of the magnitude of activation is not through the level of depolarization of a single action potential, but it is able to distinguish between different frequencies of action potentials that are propagating along the neuron. So signal strength, in this case, is related to the frequency of action potentials firing. 
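Before moving on, the resting potential of about negative 70 millivolts can be connected to the ion gradients with the Nernst equation. Here is a sketch using typical mammalian textbook concentrations; these numbers are my assumption for illustration, not values given in the lecture:

```python
from math import log

# Nernst equilibrium potential: E = (R*T / (z*F)) * ln([ion]_out / [ion]_in).
# Ion concentrations (mM) are typical mammalian textbook values, assumed
# here for illustration only.
R, F, T = 8.314, 96485.0, 310.0  # J/(mol*K), C/mol, body temperature in K

def nernst_mv(c_out, c_in, z=1):
    """Equilibrium potential in millivolts for an ion of charge z."""
    return 1000 * (R * T) / (z * F) * log(c_out / c_in)

e_k = nernst_mv(c_out=5, c_in=140)    # potassium: low outside, high inside
e_na = nernst_mv(c_out=145, c_in=12)  # sodium: high outside, low inside
print(f"E_K = {e_k:.0f} mV, E_Na = {e_na:.0f} mV")
# The resting potential (~ -70 mV) sits between the two, close to E_K,
# because the leaky potassium channels dominate the membrane at rest --
# and opening sodium channels pulls the voltage up toward E_Na.
```

This is consistent with the lecture's picture: leaky potassium channels set up the negative inside, and opening sodium channels depolarizes the cell.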
So now we're going to unpack how it is a nerve cell fires an action potential and how it propagates along the entire cell length, right? In the case of the sciatic nerve, this has to happen across an entire meter, OK? That's a very long distance to propagate this change in electrical signal, at least for a cell. And so we're going to talk about the mechanism. And I'm going to start at the beginning, when this action potential initiates. So we'll start at the initiation of the action potential. So how is it that this nerve cell is told to start depolarizing at the dendrites? Because there's going to be another neuron here, which is going to communicate to this neuron over here to tell it to start depolarizing. It does this at the location known as the synapse, which is basically sort of the connection between the two neurons. And the way this process is initiated is similar to the type of signaling that you saw in the past few lectures, where you have a ligand and a receptor, OK? In this case, the ligand is going to be what's known as a neurotransmitter, which is a small molecule. And I'll show you some later on. And the receptor is going to be a receptor that binds to this ligand. But in this case, rather than being something like a G protein coupled receptor or a receptor tyrosine kinase, the receptor is going to be an ion channel, OK? So the receptor is going to be an ion channel. And so you see one example in the slide up here, where here's a receptor. And these receptors are what are known as ligand-gated ion channels. In this case, it's a sodium channel. So it's going to be-- whether or not it's open depends on the presence of the ligand. So if we take a neurotransmitter like serotonin, if it's not bound to the receptor, the receptor is closed. But if serotonin binds to the receptor, it opens up the channel, which can selectively let in a type of ion-- in this case, sodium. 
In this case, this is an activating channel, because letting in sodium is going to depolarize the cell, OK? So this ligand receptor binding uses a ligand-gated-- there's a ligand-gated sodium channel. And it's this ligand-gated sodium channel which starts the depolarization. So that's how you sort of knock over the first domino, right? But then there has to be some mechanism to propagate this along the length of a very long cell. And so I'll tell you this involves a different type of sort of signaling mechanism from what you're used to thinking about, because this involves a different type of an ion channel. And it's called a voltage-gated channel. And I'll abbreviate voltage-gated just VG. And in this case, it will be a sodium channel. So what's a voltage-gated sodium channel? This is a voltage-gated sodium channel here. And you can see, in the resting state of the cell, this channel is closed. And it's closed because of this red rod structure that's positively charged. That's a positively charged alpha helix that is a part of this protein and is embedded in the membrane. But this alpha helix is positioned down towards the cytoplasm, because it's positively charged. And the cytosolic face of the plasma membrane is negatively charged, OK? And the conformation of this protein then depends on the charge across this membrane. Because when there is depolarization, that shifts the position of this alpha helix, such that now it shifts up towards the exterior face of the plasma membrane. And that opens the channel, which lets sodium ions rush in, OK? Again, sodium ions here, they're always rushing downstream, down their concentration gradient. So in this case, whether or not this channel is open or closed depends not on the presence of a ligand, but on the membrane potential across the plasma membrane. So these voltage-gated sodium channels, they're opened by depolarization.
And then the question becomes, if you open these channels at the very end of the neuron, how do you get it such that this electrical signal moves unidirectionally along the neuron? So what leads to unidirectionality? Who's been to a sporting event lately? OK, good. You guys know the wave? So we're going to do the wave. Once you stand up, you're going to be tired, and you're going to have to sit down for a while. I'm going to be a ligand-- I'm a ligand-gated sodium channel, so I'm going to start things off, OK? You ready? All right, here we go. OK, that's basically an action potential. So the way that this was unidirectional is once you stood up and did the wave, you then sat down, and you stopped doing anything. And so these voltage-gated sodium channels have a similar property. If we look at the next step in this, the sodium channel is opened by depolarization. And you see there's this ball-and-chain segment of the protein. You see that yellow ball? Once the sodium channel opens, after about a millisecond, that ball sticks in the channel pore and blocks it, OK? So these sodium channels open to let in sodium ions, but then they're immediately inactivated after about a millisecond, OK? And so that enables unidirectionality. So this is what I'll call voltage-gated sodium channel inactivation. And how this promotes a traveling wave of depolarization is that if we consider an action potential moving along this axon from left to right and if the sodium channels in the green zone are currently open, it came from the left, which means that all the sodium channels to the left of this green zone are going to be inactivated. So because they're inactivated here, there won't be further depolarization going to the left, but depolarization will have to move to the right. And you basically get this traveling wave.
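That stadium-wave logic can be sketched as a toy simulation: each patch of axon membrane opens when a neighbor is depolarized, then inactivates for a step, and the refractory trail behind the wave is what forces one-way travel. The states and timings here are illustrative, not physiological:

```python
# Toy model of action-potential propagation along an axon. Each position is
# 'rest', 'open' (depolarized), or 'refractory' (ball-and-chain inactivated).
def step(axon):
    nxt = []
    for i, state in enumerate(axon):
        if state == "open":
            nxt.append("refractory")   # channel inactivates after opening
        elif state == "refractory":
            nxt.append("rest")         # channel recovers
        elif any(axon[j] == "open" for j in (i - 1, i + 1) if 0 <= j < len(axon)):
            nxt.append("open")         # depolarized by an open neighbor
        else:
            nxt.append("rest")
    return nxt

# Ligand-gated channels fire position 0, like the professor starting the wave.
axon = ["open"] + ["rest"] * 9
for t in range(11):
    print("".join({"rest": ".", "open": "*", "refractory": "o"}[s] for s in axon))
    axon = step(axon)
# The '*' marches left to right and never doubles back, because the position
# it just left is refractory when its neighbor is open.
```

Dropping the refractory state from this model (open goes straight back to rest) lets the wave reflect in both directions, which is exactly the failure the inactivation prevents.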
And it goes one direction, because if it came from somewhere, which it always does, then where it just was coming from, all those sodium channels, the voltage-gated sodium channels, are going to be inactivated. So this allows it to move in a single direction along the neuron. Also, once the action potential gets to the end of the neuron, it doesn't reflect back the other way in the neuron. This can only go one direction. So this provides unidirectionality. So it's this inactive or refractory period of the voltage-gated sodium channel which prevents the action potential from moving backwards. Now, if you look at these action potentials in the cell, you see that they happen, but you don't just depolarize and stay depolarized. The cell body depolarizes and then repolarizes very rapidly. So there's an oscillation. So there has to be some way to terminate the action potential. So there's a termination or repolarization of the cell. So there has to be a way for this nerve cell to rapidly restore membrane potential. And I want you to think for just a couple of seconds about what type of channel might you open to re-establish this polarity. What ion do you need to flow from where to where in order to get a net negative charge on the inside? Udo? AUDIENCE: You need to move the sodium ions from the inside to the outside. ADAM MARTIN: OK, you could pump the sodium ions out, and that's totally accurate. So that's going to require moving sodium ions up a concentration gradient, which is going to take energy and is going to be slow. So is there another option we could take advantage of here to repolarize? Rachel? AUDIENCE: Move the potassium ions. ADAM MARTIN: So Rachel has suggested moving the potassium ions to the outside, which is how this is done. So remember, potassium is high in the cytoplasm, low in the exoplasm. And therefore, if you have a voltage-gated potassium channel, that's going to cause a rush of positive ions out of the cell.
And that will be able to restore the net negative potential on the inside of the cell. So this termination or repolarization is the result of the opening of voltage-gated channels-- in this case, not sodium channels, but potassium channels. When do you think these have to open relative to the sodium channel? Should they open right with the sodium channel? Carmen's shaking her head no. Do you want to explain your logic? AUDIENCE: Well, I mean, they both carry the same charge, so they wind up getting out at the same time [INAUDIBLE]. ADAM MARTIN: Exactly. So what Carmen said is if they open simultaneously, you have sodium flowing in. You have potassium flowing out. And that's not going to necessarily change the charge. So when would these have to open relative to sodium channels? Yeah, Carmen? AUDIENCE: When it reaches that potential [INAUDIBLE]. ADAM MARTIN: So after it's depolarized, yeah. So this has to be delayed relative to the sodium channels, OK? So this has to be delayed relative to the voltage-gated sodium channels. Because if you're thinking about this traveling wave of depolarization, the depolarization is going to be high where the sodium channels are letting ions in. And then following that, you would have potassium ions flowing out and basically repolarizing the cell. Everyone see how you sort of get depolarization with sodium rushing in, and then after that, you repolarize with the potassium flowing out, right? So here, you have a spike, and you complete the cycle. It can even get hyperpolarized, where it gets even more negative than it normally does. And then it eventually gets back to this resting potential of around negative 60 or negative 70 millivolts. OK, so this has to happen fast. And I want to tell you about one process or property of neurons and another helpful cell that enables this to go extremely fast.
And that is that there are these glial cells in your body and your brain that wrap around the axons of the neurons and basically function like electrical tape for neurons, OK? So there is electrical insulation around the axons of these neurons. And this is provided by another specialized cell type called a glial cell. So this is by a glial cell. And here are two examples of glial cells. There are oligodendrocytes-- and you can see how the cell is extending processes that wrap around the axons of these two neurons. Here's a Schwann cell over here, which, again, wraps around the axon. And these cells basically form what's called a myelin sheath. So they form a myelin sheath around the axons. And that insulates the plasma membrane of the axon such that-- so here is an axon. You have glial cells that are wrapped around, and it sort of forms like beads on a string. And so there are these gaps between the myelin sheath that are known as the nodes of Ranvier. So there are these nodes of Ranvier, which are gaps in the myelin sheath. And these nodes perform an important function for the neuron, because where the axon is wrapped, the membrane is electrically insulated. And so the sodium channels and potassium channels, the voltage-gated ones, localize to these nodes. And when the action potential is traveling along the axon, because these regions where the myelin sheath is are electrically insulated, the action potential doesn't just move continuously, but jumps from node to node, such that you are just opening the sodium channels at these nodes. And that allows the action potential to travel about 100-fold faster along the axon. And that's what allows your neurons to transmit these electrical signals from the base of your spine to your foot so rapidly. So you get an increase in speed because the action potential is jumping from node to node.
And one important reason to bring this up is because there is an important human disease that affects the electrical insulation in the myelin sheath here, and that's multiple sclerosis. So we're going to unpack multiple sclerosis in a couple lectures. This is an autoimmune disorder. And so we're going to talk about immunity later in the semester, and we'll talk about how that happens. But for now, I just want to point out that multiple sclerosis happens when the immune system attacks this myelin sheath. So in multiple sclerosis, the myelin sheath is damaged. And if you damage this electrical insulation, you greatly slow down these action potentials, and that has a significant impact on nerve impulses in the brain and throughout the entire body. And that's why multiple sclerosis is such a devastating disease. All right, I'm going to start moving now to consider more than one neuron. So until now, we've just talked about how an electrical signal is sent along the length of one cell. And now we're going to start thinking about multiple neurons and how they connect and how neurons integrate information from multiple other neurons to decide whether or not to send an action potential. And so if we consider this connection right here, there's a synapse right here. Here's a cell that's sending information and a cell that is receiving that information. When we're considering a synapse-- so if we consider a synapse, there's a cell that is sending the signal, which is called the presynapse. This is the sender cell. And there's a postsynaptic cell. But you can have more than one neuron sending a signal to a neuron at a given time, right? So here, you have one neuron that's sending a signal at this synapse, but you might have another neuron sending a signal to a synapse on this part of the cell. And you could have another signal coming in here. And so this neuron will then have to decide whether or not to fire an action potential down its axon. 
And the way that the neuron decides this is to integrate the signals. So there's a signal integration process. And what's important for signal integration in a neuron is whether or not the cell body-- whether the voltage increases above a certain threshold potential. So if the cell body doesn't increase-- if the voltage doesn't increase above this potential, there will be no action potential fired. But if the voltage increases above the threshold potential, then it fires the action potential and signals to a downstream neuron or muscle or another cell. So here, it is the threshold potential in the cell body that determines whether or not an action potential is sent down the axon. And there are different types of signals that nerve cells can send. So there are different types of signals. Signals can be excitatory, meaning it will tend to depolarize the neuron. So there are excitatory signals, which result in depolarization. For example, with serotonin, that opens the sodium channel, and that results in depolarization, so that's an excitatory signal. But there are other types of signals that bind to different types of receptors that are inhibitory. What might be a type of receptor that would inhibit this process of sending an action potential? What might an inhibitory receptor be to lower the chance that this action potential will be fired? What if I told you it's an ion channel? What ion would you expect it might pass? Udo? AUDIENCE: Potassium. ADAM MARTIN: Potassium. Udo is exactly right, right? If it passes potassium, then it's going to make the inside more negative. And that's what's known as hyperpolarization. So receptors that result in hyperpolarization would have an inhibitory effect on this process. And remember, if you're hyperpolarizing, then you could cause this to actually go down and get even farther away from this threshold potential, right? 
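The threshold logic here can be sketched in a few lines: sum the excitatory (depolarizing) and inhibitory (hyperpolarizing) voltage changes at the cell body and compare against a threshold. The millivolt numbers are illustrative assumptions, not measurements from the lecture:

```python
# Toy signal integration at the cell body: an action potential fires only if
# the summed synaptic inputs push the voltage above a threshold potential.
# All numbers are illustrative assumptions.
RESTING_MV = -70.0
THRESHOLD_MV = -55.0

def fires(inputs_mv):
    """inputs_mv: per-synapse voltage changes in mV.
    Positive = excitatory (e.g. sodium influx), negative = inhibitory
    (e.g. potassium efflux / hyperpolarization)."""
    return RESTING_MV + sum(inputs_mv) >= THRESHOLD_MV

print(fires([+10, +8]))        # two excitatory inputs: -70 + 18 = -52 mV, fires
print(fires([+10, +8, -12]))   # add an inhibitory input: -64 mV, no spike
```

The second call shows the cancellation idea: an inhibitory, hyperpolarizing input can pull the summed voltage back below threshold and veto the spike.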
And if you have an activating signal and an inhibitory signal, they might cancel out, because one will depolarize and the other will hyperpolarize. So it's in this way a neuron is able to integrate signals coming from different neurons. And that influences whether or not it will send the signal to a downstream cell. OK, so now we're focusing on what is the communication between one neuron and another. And this revolves around this thing that's called the synapse, which is basically the gap between the axon terminal of one neuron and the dendrites of a postsynaptic neuron. And so the way that multiple neurons communicate with each other are through a type of signal known as a neurotransmitter. And this is what initiates the signal. So there's a signal initiation process at the synapse. Initiation. And this involves the presynaptic neuron secreting a neurotransmitter. So the signal, in this case, signals between neurons are called neurotransmitters. And as you see on the slide, these are examples of neurotransmitters. They're often derived from amino acids, and so they're small molecules. They're not the proteins that you often see with receptor tyrosine kinase ligands. This is a different class of signal. So one example is serotonin. And if you look up at those, we'll find serotonin here. There it is. Here, you can see it's a derivative of tryptophan. So it's a small molecule, and it's able to bind to a receptor on the postsynaptic cell and induce depolarization. And so neurons are-- the way that they communicate is-- neurons are a case of where luck favors the prepared. Neurons are totally prepared to send signals to each other. They have everything ready to go when they get word from upstream, and they're ready to send signals to the next cell. And that's because if we look at the synapse prior to an action potential, everything is ready to go. 
The cell has neurotransmitter, and it's packaged in these vesicles, and it's tethered to the plasma membrane, ready to be released. So prior to the action potential, there are vesicles filled with neurotransmitter that are docked at the plasma membrane. I abbreviate plasma membrane PM, just so I don't have to write it out, OK? So these contain neurotransmitter, right? But you see in this docked vesicle, the neurotransmitter is in red, and it can't get out if that vesicle does not fuse with the plasma membrane. So these contain neurotransmitter. But at this point, the vesicles haven't fused. But the vesicle's not fused. When should they fuse? In this system of neuron signaling to each other, when should the vesicle fuse with the plasma membrane? What should trigger the fusion process? Yes, Miles? AUDIENCE: So after [INAUDIBLE] axon when it's time for the [INAUDIBLE] that's when the vesicles fuse. ADAM MARTIN: Yeah, so Miles is exactly right. If we consider my diagram here, there's an action potential traveling along this axon. When it gets to the axon terminus, that should be the signal for these vesicles to fuse to the plasma membrane and to release neurotransmitter. So it's the arrival of the action potential, right? So remember, in this case, serotonin is going to be in blue. If serotonin is inside my vesicle here, it's going to need to exocytose. And now the serotonin is going to be outside the cell, ready to bind to the receptor. All right, so as Miles pointed out, you have an action potential. The fusion should be triggered by the action potential. In order to fuse, there needs to be some signal inside the cytoplasm to tell the vesicles to fuse. That signal is increased calcium ion concentration. And then when calcium concentration increases in the cytoplasm, that triggers the fusion of these vesicles. 
And when you get fusion, that's exocytosis, and the serotonin is now on the outside of the cell, where it can travel across the synaptic cleft and bind to a receptor on the postsynaptic neuron. So this fusion is when neurotransmitter is released. Neurotransmitter is released here. And the way that this increase in calcium happens is when the action potential arrives at the axon terminus. So when it arrives in the axon terminus, there's depolarization of that part of the cell. And so there's a special type of protein called a voltage-gated calcium channel. All these channels are very selective for different ions. So a voltage-gated sodium channel isn't letting in all of the ions outside the cell. It's selective to sodium. And in this case, this voltage-gated calcium channel is just going to let in calcium. And then there's a mechanism that links calcium entry to vesicle fusion. And that's going to be shown here. What you see on this docked synaptic vesicle is this calcium-binding protein called synaptotagmin that's present on the vesicle. And so when calcium goes into the cytoplasm, that protein binds to calcium, and it activates the fusion machinery such that the plasma membrane of the vesicle fuses-- or the membrane of the vesicle fuses with the plasma membrane of the cell, thus releasing the neurotransmitter into the synaptic cleft. So this is what starts the signal. Now, you probably know that these neurons are not active or on all the time. So something has to terminate the signal, usually quite rapidly. So now I want to talk about that. So like all signaling pathways, signaling is useless if you can just turn it on. You have to be able to toggle it on and off in order for biological systems to function properly, right? And that's the case with neurons. If you just turn on a neuron and you don't have a way to turn it back off again, then that's pretty useless. And so we have to have a way to turn off the signal. 
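Before moving on to signal termination, the release chain at the terminal can be summarized as an ordered list of events. The little function below is just a restatement of the steps described above, gated on whether an action potential arrives; it is a summary, not a mechanistic simulation.

```python
# The presynaptic release sequence from the lecture, as an ordered list.

def release_steps(action_potential_arrives):
    """Return the events at the axon terminal, in order; empty if no AP."""
    if not action_potential_arrives:
        return []  # vesicles stay docked: no calcium entry, no fusion
    return [
        "depolarization reaches the axon terminal",
        "voltage-gated Ca2+ channels open (selective for Ca2+)",
        "cytoplasmic Ca2+ concentration rises",
        "synaptotagmin on docked vesicles binds Ca2+",
        "vesicle membrane fuses with the plasma membrane (exocytosis)",
        "neurotransmitter is released into the synaptic cleft",
    ]

for step in release_steps(True):
    print(step)
```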
And if we consider the synapse, this is the presynaptic neuron here. I'm going to draw a postsynaptic neuron here. And neurotransmitter is released by the presynaptic neuron to the postsynaptic neuron here. Neurotransmitter is released into the synaptic cleft. So the sort of extracellular region between these two neurons is called the synaptic cleft. So now the cell just dumped a whole boatload of neurotransmitter into the synaptic cleft, right? How is it going to turn this off? What does it have to do? Yeah, Stephen? AUDIENCE: It could absorb the-- take back in the [INAUDIBLE]. ADAM MARTIN: Stephen's exactly right. What Stephen suggested is, is there a way for the presynaptic neuron to reabsorb this neurotransmitter and, thus, recycle it? So it could either reabsorb it or degrade the neurotransmitter. Different process for different neurotransmitters. For serotonin, there are channels that are present in the plasma membrane, and these mediate reuptake of the serotonin. So you have channels that are basically-- after the neurotransmitter is released, it sucks the neurotransmitter back into the presynaptic cell such that it can then reuse the neurotransmitter later on. And so this process of reuptake highlights a very important process that's been utilized by drug companies to create antidepressants. So antidepressants like Prozac and Zoloft affect this reuptake process. And what that does is it keeps the neurotransmitter in the synaptic cleft for longer, such that it enhances the signaling. And so the idea behind these drugs is that if you are suffering depression from a lack of serotonin, then you can rescue that by preventing the rapid reuptake of the neurotransmitter into the cell after the synapse is stimulated and the neurotransmitter is released. And so Prozac, Zoloft, these are a class of drugs that are known as selective serotonin reuptake inhibitors. It's kind of a mouthful. This is abbreviated SSRIs. 
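The reuptake effect these drugs exploit can be caricatured with a toy first-order decay model: cleft serotonin decays as C(t) = C0 * exp(-k*t), where k lumps together the reuptake rate, and an SSRI is modeled as nothing more than a smaller k. The rate constants below are invented for illustration.

```python
import math

# First-order reuptake caricature. K_NORMAL and K_SSRI are made-up rate
# constants; the point is only the qualitative comparison between them.

def cleft_concentration(c0, k_reuptake, t):
    """Serotonin remaining in the cleft at time t (arbitrary units)."""
    return c0 * math.exp(-k_reuptake * t)

K_NORMAL = 1.0  # assumed reuptake rate constant (per ms)
K_SSRI = 0.2    # assumed slower reuptake with an SSRI present

print(cleft_concentration(1.0, K_NORMAL, 3.0))  # ~0.05: quickly cleared
print(cleft_concentration(1.0, K_SSRI, 3.0))    # ~0.55: still signaling
```

On these assumed numbers, after the same interval most of the transmitter is gone in the normal case but more than half remains with slowed reuptake, which is the sense in which SSRIs enhance signaling from low serotonin levels.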
But the way they function is to leave the neurotransmitter in the synaptic cleft for longer so that you enhance signaling, even if you have low levels of the neurotransmitter to begin with. I also want to point out that if we look at this diagram here, the synaptic vesicle fuses, and then this releases the neurotransmitter. But all the machinery on this vesicle is recycled by endocytosis such that it can be reused again, OK? So cells are really good at recycling stuff. If this is sort of the membrane, you endocytose and then you can use it again later on, OK? And so there's recycling not only of the neurotransmitter, but also all of the machinery on the synaptic vesicles that are responsible for the fusion event. All right, now I want to end by just telling you how this experiment works, where we're able to activate specific neurons in a brain and that leads to the animal sort of waking up. So in a normal neuron-- so this is the last part, optogenetics. And I'm going to go through this very fast. But normally, you need a neurotransmitter to induce depolarization. But what optogenetics is, is an approach to control the activity of a cell with light, OK? So in this case, we're going to have light inducing depolarization. And the way this is done is there's a protein discovered from photosynthetic algae that's responsive to light, and it is a sodium channel. And this protein is called channelrhodopsin, specifically ChR2. And this is a light-sensitive protein where light induces sodium channel opening. So that's going to depolarize the cell. And what you can do is if you have a gene that you know is specifically expressed in a certain type of neuron, you can take the promoter and enhancer region of that gene and hook it up to this single component, channelrhodopsin, that open reading frame, using recombinant DNA technology. 
And if that's expressed specifically in the neurons that you're trying to test, you can then shine a light into the brain of the organism and activate, specifically, this type of neuron. And that allows you to test the function of the neuron in the behavior of an organism. So, in this case, this mouse, the light is shined into its brain, and they're testing a specific type of neuron that is involved in arousal of the mouse, and it wakes up. Oh, it's not playing. So here, this is the brain activity on the top, and the muscle activity on the bottom. So you're going to see light. There's the light. You see it? Light going into the brain. They induce light at that frequency for a while. And then they're going to wait and see when the mouse wakes up. And it's going to wake up right now. There it goes. It woke up. You see now its muscle activity is going, OK? So you can test the function of specific nerve cells using this approach, and it's because you have a light-sensitive sodium channel. So I'm done for today. Have a great weekend. I will see you on Monday.
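As a footnote to the optogenetics demo: the logic of the experiment condenses to one conditional. A neuron depolarizes via ChR2 only if it expresses the construct (driven by the cell-type-specific promoter and enhancer) and the light is on. The function below is just that logic, not a model of the channel's biophysics.

```python
# Condensed logic of the channelrhodopsin (ChR2) experiment: light opens
# the channel, Na+ enters, and the cell depolarizes -- but only in neurons
# where the cell-type-specific promoter drives ChR2 expression.

def neuron_depolarizes(expresses_chr2, light_on):
    channel_open = expresses_chr2 and light_on  # light-gated Na+ channel
    return channel_open  # Na+ influx depolarizes the cell

# Only the targeted neuron type responds to the light pulse:
print(neuron_depolarizes(expresses_chr2=True, light_on=True))    # True
print(neuron_depolarizes(expresses_chr2=False, light_on=True))   # False
print(neuron_depolarizes(expresses_chr2=True, light_on=False))   # False
```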
MIT 7.016 Introductory Biology, Fall 2018. Lecture 25: Cancer, Part 1.
PROFESSOR: OK so on Monday we talked about how cell division is regulated at this single cell level. On Wednesday we talked about how regeneration is mediated at the level of an entire tissue. And today we're going to talk about how all of that can go wrong, OK? And when all of this goes wrong, it results in a disease, actually many different diseases, that are commonly known as cancer. And as illustrated in this cartoon, what you see is that cancer can often be defined as having distinct steps in its progressive, essentially, deregulation of normal cell and tissue behavior. So when we're thinking about cancer, we're thinking about a stepwise degeneration. And cancer is a disease that affects an individual's own cells, but those cells get progressively more and more dysregulated in their behavior and the coordination of their behavior with the rest of the tissue. So it's a stepwise degeneration of normal cell behavior. And it results from mutations that are occurring in the cell. And we'll talk about what types of mutations right now. And then I'll take you through some examples of different signaling pathways and we'll try to classify what different types of genes should be labeled as. So first, I want to talk about classes of genes and their involvement in cancer. And the first example that I'll take you through is known as an oncogene. And it's referred to as an oncogene after the mutation has already happened. And so an oncogenic mutation is a mutation that's going to promote growth and survival of a cell. So this promotes growth. Before the gene is mutated, it's referred to as a proto-oncogene. So before mutated, the gene is labeled a proto-oncogene. And the normal function of these proto-oncogenes is also to promote growth and survival, but they do so in a regulated manner. So a gene becomes an oncogene when there's a mutation that causes it to be unregulated by the environment of the cell or even the surroundings of the cell. 
And so you can think of this as a constitutive activation. Often oncogenes are constitutively active forms of normal genes. And one way to think about this is you have a gene whose normal function is to promote growth and it's kind of stuck in the state of the gene that always promotes growth, which is not normally how it works. Normally there are signals that tell a cell to grow, and those signals come and go, and that's how the body regulates when cells divide. But you can have a situation where it's essentially the equivalent of what you might consider a stuck accelerator, if you're thinking about an automobile. So if you have a stuck accelerator, and you can only speed up, and you can't slow down, this is analogous to an oncogenic mutation. Now a different class of gene, actually kind of the opposite of an oncogene, is called a tumor suppressor. And tumor suppressors are genes whose normal function is to inhibit growth or even promote death. So tumor suppressors inhibit growth or promote death. You could see how these two different things would have the same effect in that it's not allowing one cell to become a lot of cells because the tumor suppressor will either prevent it from dividing or will cause the cells to die. And so the cancer phenotype is associated with a loss of function of the tumor suppressor. So these result from loss of function mutations in the tumor suppressors. So if you remove the thing that's inhibiting growth, then that allows the cells to divide in a more unregulated way. And so sticking with the analogy of an automobile, you could think of the tumor suppressor mutation as a cell having defective brakes. So you can think of this as defective brakes. So it's the loss of function of these that lead to cancer. Now I want to tell you about one last class of genes that are implicated in cancer. And these are what, in the diagram up there, are referred to as caretaker genes. 
And what that refers to is the fact that these genes are involved in maintaining the genomic integrity of a cell. And they can do that by mediating DNA repair when it needs to happen. So these are involved in the repair of DNA. Or actually, I know what I'll say. We'll say genome. It's involved in genome repair. But not only repair, but also genome stability. So making sure that chromosomes are equally segregated to daughter cells so that you don't end up with cells with extra chromosomes or lacking entire chromosomes, which is known as being aneuploid. So genome repair and also integrity. And again, because these promote genome integrity, it's a loss of the function of these genes that is what promotes cancer. So loss of function mutations in caretaker genes are what can drive a cancer phenotype. And that's because if you lose a caretaker gene that's involved in DNA repair-- actually, one example of a caretaker gene is the BRCA1 gene, which is involved in breast cancer. And so if you lack this BRCA1 gene, it makes it so that the cells are not as good at repairing their DNA. And then the cell can accumulate additional mutations, and the cell might get an oncogenic mutation or it might lose tumor suppressors. And that's what drives that cancer phenotype. Now I just want to point out something that just happened this week, which is that one of our very own colleagues here at MIT, Angelika Amon, whose lab has done a lot of research that has provided fundamental insights into how a lack of genome integrity influences both normal and cancer cells, just won what's known as the Breakthrough Prize. And so this is a prize that was initiated by the Chan Zuckerberg Initiative, so it's out of Silicon Valley. And the point of the prize is to basically celebrate science, like we would movies at the Oscars. And so this is Angelika here receiving this Breakthrough award, and this just happened this past weekend. 
And this is for her fundamental work on how aneuploidy influences the biology not only of normal cells, but also cancer cells because it plays an important role in the biology of cancer. All right, so now that we've defined some of these key genes, I want to talk about one example of a pathway that influences cell division. And we'll go through all of the genes in this pathway and decide whether or not they should be considered oncogenes, tumor suppressors, or caretakers. And the pathway we're going to look at involves the G1 to S transition. And you'll recall that this G1 to S transition in the cell cycle is referred to as START and is the point at which an individual cell commits to going through the entire cell cycle. So this G1 S transition, or START, what kicks off the whole process is the expression of a cyclin. And that is specifically the G1 cyclin. And this G1 cyclin is regulated by many different things. And we've talked about a lot of them. First of all, there are growth signals. These are secreted proteins that allow cells to communicate with each other. And many growth signals promote growth and cell division by up regulating this G1 cyclin. So you're actually regulating the gene expression of this particular cyclin gene. We also talked about Wnt. And Wnt is another type of signaling system. And one of its targets is also the G1 cyclin. So both of these signals promote growth. There are also other types of signals, like cytokines, which also promote G1 cyclin expression. So this is a really pivotal control point for the cell to decide whether or not to enter into the cell cycle. I'll point out that there are also other types of signals that inhibit growth, and I'll call those growth inhibitors. And so these growth inhibitors inhibit G1 cyclin expression. 
So if you have a cell in your body, and it's trying to decide whether or not to divide, it's basically reading out how many growth positive signals it's seeing versus growth negative signals, and it's able to integrate that information based on how much G1 cyclin it produces, and that determines whether or not it goes into the cell cycle. So G1 cyclin. And G1 cyclin functions with cyclin-dependent kinase. So this depends on cyclin-dependent kinase. This eventually leads to the expression of the G1 to S cyclin. And it's the G1 to S cyclin which is responsible for activating the transition from G1 to S, which is known as START in yeast and the restriction point in mammalian cells. But it all starts really with this G1 cyclin. So I want to talk about this step in the cell cycle. And I'll show you the nitty gritty of the mechanisms that are involved. And we'll talk about what types of genes all of these genes are. And it's going to involve a very important gene that I'm going to tell you a lot more about in just a few minutes. All right, so the critical determinant of START is this G1 S cyclin. So I'm drawing a piece of DNA here. Here's the G1 S cyclin gene. So this is DNA. I just drew a piece of DNA. Part of the genome. This is the G1 S cyclin gene. And this gene is activated, its transcription is activated, by a transcription factor known as E2F. So we'll keep track of what these are. E2F is a transcription factor. So E2F is a transcription factor that will activate the expression of this G1 S cyclin. But in early G1, there's a protein that binds to E2F. And this protein is called Rb. I'll tell you what Rb stands for in just a minute. But what Rb does is it binds to E2F and it inhibits its transcriptional activity. So in early G1, E2F is inhibited and the expression of this G1 cyclin is off. So this is off or repressed. So before the cell passes START, this expression is off. So this is early G1 before START. 
Now what happens is this state is changed by the G1 cyclin. So if there are adequate levels of G1 cyclin, and this is in complex with cyclin-dependent kinase. Because cyclins, their functions are always mediated through cyclin-dependent kinase. So the cyclins are never, as far as I know, functioning on their own by themselves. They're always functioning through one of the cyclin-dependent kinases. So G1 cyclin CDK phosphorylates Rb. And so you then get an Rb that has a bunch of phosphates attached to it. And this inhibits Rb's function such that it can't bind to E2F. So when G1 cyclin CDK phosphorylates Rb, that causes Rb to go away from the promoter of the G1 S cyclin. And now you have this transcription factor, E2F, free to transcribe the G1 S cyclin gene. So this now gets turned on. And it's this activation of G1 S gene expression which is the signal to undergo START. You get G1 S cyclin CDK because you express this gene. And that activates the transition into S phase. All right, now take a look at everything I just drew on the board. Who can tell me where the tumor suppressors are in this pathway? Miles. MILES: Rb. PROFESSOR: Rb is a tumor suppressor. That's exactly right. Let me get some colored chalk here. All right, I'm going to circle tumor suppressors in pink. Are there any other tumor suppressors? So Rb is a tumor suppressor. Yeah, Amanda. Did you have one, Amanda? Georgia. Georgia, sorry. GEORGIA: The growth inhibitors. PROFESSOR: The growth inhibitors are also tumor suppressors, exactly. OK, how about oncogenes? What would be considered a proto-oncogene in this system? Jeremy? JEREMY: CDK. PROFESSOR: CDK would be one, yup. So oncogenes. CDK can be considered a proto-oncogene. Anything else? Well, what defines an oncogene or a proto-oncogene? What's its normal function in the cell? Carmen? CARMEN: Its function is to move the cell along the cell cycle. PROFESSOR: Yes. So it promotes growth. And moving a cell along the cell cycle will promote growth. So yes, exactly. 
So anything here promoting growth besides CDK? CARMEN: E2F. PROFESSOR: E2F would be a proto-oncogene, sure. Jeremy, did you have an idea? JEREMY: G1 cyclin. PROFESSOR: G1 cyclin. Basically everything else here would be considered a proto-oncogene, right? Wnts are proto-oncogenes. They're promoting growth by promoting the expression of G1 cyclin. The growth signaling pathway, all of those genes would be considered proto-oncogenes. And so anything that is promoting growth will be a proto-oncogene here. Great. All right, so now we're going to move on and talk about this Rb gene, which I just showed you mechanistically what it does. But what Rb stands for is retinoblastoma. So Rb stands for retinoblastoma. And this Rb gene, as you suggested, is a tumor suppressor. It was actually the first tumor suppressor that was cloned. And so retinoblastoma, as the name implies, is involved in a human disease. And it's involved in a rare childhood eye tumor. So I'm going to show you one last weird eye picture. If you don't want to look, look down or look the other way. I'm going to show you a child that has retinoblastoma and what it looks like. So it's going to appear right now. So this is an individual with retinoblastoma. You can see that there's something inside the eye, growing in the back of it. And to give you a better picture of what is happening, this is now a cross section through a normal eye. This is a cartoon of the normal eye. And individuals with retinoblastoma have a growth in the back of the eye. From the retinal tissue you can see this huge tumor that's present in the back of this eye. So this disease involves the formation of these tumors in the eye that originate from the retinal tissue. All right. So this disease results from loss of function of a tumor suppressor: there's a defect in the retinoblastoma gene. But this disease of retinoblastoma manifests itself in different ways. 
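The Rb/E2F switch from the board reduces to a couple of boolean statements. The sketch below just restates them, with an Rb loss-of-function mutant thrown in as `rb_present=False` to foreshadow why losing this tumor suppressor matters: E2F is then free regardless of upstream growth signals.

```python
# Boolean restatement of the Rb/E2F switch: Rb binds and silences E2F in
# early G1; G1 cyclin-CDK phosphorylates Rb, releasing E2F, which then
# transcribes the G1/S cyclin gene and the cell passes START.

def g1_s_cyclin_on(g1_cyclin_cdk_active, rb_present=True):
    rb_phosphorylated = rb_present and g1_cyclin_cdk_active
    e2f_free = (not rb_present) or rb_phosphorylated
    return e2f_free  # free E2F activates G1/S cyclin transcription

print(g1_s_cyclin_on(g1_cyclin_cdk_active=False))  # False: early G1, repressed
print(g1_s_cyclin_on(g1_cyclin_cdk_active=True))   # True: passes START
# With Rb lost, the switch is stuck on regardless of upstream signals:
print(g1_s_cyclin_on(g1_cyclin_cdk_active=False, rb_present=False))  # True
```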
So there are different forms of the disease and I'll tell you how they're different right now. So there are two forms of the disease. The first, it's called sporadic. And it's called sporadic because this is a form of the disease that arises in families that have no history of the disease. So the sporadic form, the family has no history of the disease. And this disease presents in a certain way. The first is it is what is known as unilateral, meaning usually only one eye is affected. So it usually involves only one eye. And this disease can be treated in children. And if the sporadic form of the disease is treated in the child, then later on in life that individual does not have an increased risk of getting further tumors. So there's no increased risk of cancer later in life. For example, in a different organ. So this is one form of the disease. The other form of the disease is called familial. And as the name implies, familial means that the disease runs in the family. So what familial means is there's some inheritance. There's an inherited form of the disease. And the familial form of the disease can be distinguished from the sporadic form because it presents differently. The way the familial form presents is it's often bilateral, meaning that both eyes become affected. So it affects both eyes. And also in individuals with the familial form, even when they're cured from the eye tumors, they have a higher risk of getting other tumors in other organs of their body later on in life. So in this case, there is later an increased risk of cancer in other organs. So this is an example of a familial form of retinoblastoma, where affected individuals here are colored in green. So what would you say the inheritance pattern is for this particular phenotype? Carmen? CARMEN: Autosomal recessive. PROFESSOR: Why do you say autosomal recessive? Can you explain your logic? CARMEN: Yeah. 
It looks [INAUDIBLE] are affected regardless of-- with colorblindness it was always the sons that got it. [INAUDIBLE] getting it as well. But you can see some generations where neither parent had retinoblastoma. PROFESSOR: So Carmen's exactly right. And she's saying that both males and females are getting it, so it looks like that would argue that it's not sex linked, but autosomal. So it looks autosomal. And why do you say recessive? Can you explain your logic there? CARMEN: The third from the left. The one that has an arrow on it. Both parents are affected [INAUDIBLE].. The only way that's possible for their children to get the recessive gene. PROFESSOR: So Carmen is exactly right. She's looking at this individual here. And in this case, this individual was not affected with the disease, but passed on the disease to their daughters. Now I think one thing. This is an exception to the rule. What you see in pretty much all the other cases is that individuals in this generation in the middle here do have the disease, and they pass on the disease to the next generation. So because I'm seeing the disease in all generations, I would say that this is likely to be autosomal dominant. And Carmen picked up on something that I want to come to. It so happens that this individual was not affected by the disease, but still clearly carried a disease allele. And I'm going to talk about why this is an exception and why this is still an autosomal dominant inheritance pattern. But if we take it that this is an autosomal dominant disease, it's kind of counter intuitive, at least to me, and maybe to you, because I just told you that tumor suppressors result from a loss of function of the gene. And we're used to seeing loss of function mutations being recessive. And actually, at the cellular level, it's true. The cancer is recessive. But in this case, what you see is that, at the organismal level, the inheritance pattern actually acts as a dominant phenotype. 
So there's kind of a difference between the inheritance pattern at the cell level and at the organismal level. And I want to tell you why that is because I think it's important for understanding the risk to cancer. And so what's inherited is not the full blown disease. What's inherited in the case of retinoblastoma and many other cancers is a predisposition to the disease. So what is inherited is the predisposition to the disease. And that's because if we look at, let's consider the top male up here. If that male is heterozygous for the Rb gene, then he can have a gamete, which is Rb-, so lacking a functional copy of the Rb gene. And he married an individual without a disease allele, so she's going to just make Rb+ gametes. And if they have children and one of the children gets a gamete that is derived from the Rb- allele, then you get an individual in the zygote here which has one functional copy of the Rb gene and one mutant copy of the Rb gene. So that's the egg, and then that egg is going to develop and give rise to all of the cells of the body. And so in this case, all of the somatic cells from this individual are going to be heterozygous for the Rb gene. So all somatic cells are heterozygous for Rb. So they're Rb+ over Rb-. And the effect of that is that each of the cells in this individual are only one step away or one mutation away from lacking both copies of Rb. So by being heterozygous, it means that all cells in the individual are just one mutation or step from losing Rb. And so the inheritance pattern, what you're looking at is the predisposition to the disease. And the predisposition doesn't mean that a person is guaranteed to get the disease. And that's illustrated in this family tree, right? There was an individual here, this male here with the arrow, who clearly was a carrier for the disease because he passed on the disease to his children, but who himself never actually was affected by the disease. 
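The force of this argument is easy to see with Knudson-style "two hit" arithmetic. All the numbers below are invented for illustration: the point is that with one mutant allele inherited, each cell needs just one more hit, instead of two independent hits landing in the same cell.

```python
# Toy "two hit" arithmetic. P_HIT is an assumed per-allele somatic hit
# probability per cell, and N_CELLS an assumed number of retinal cells;
# both are made up to show the scale of the difference.

def expected_rb_null_cells(n_cells, p_hit, inherited_mutant_allele):
    hits_needed = 1 if inherited_mutant_allele else 2
    return n_cells * (p_hit ** hits_needed)

N_CELLS = 10**7
P_HIT = 1e-6

# Familial (heterozygous everywhere): losing the one good copy suffices.
print(expected_rb_null_cells(N_CELLS, P_HIT, True))   # ~10 expected cells
# Sporadic: the same cell must be hit twice independently.
print(expected_rb_null_cells(N_CELLS, P_HIT, False))  # ~1e-05: essentially never
```

On these assumed numbers, essentially every heterozygous carrier ends up with Rb-null retinal cells, often in both eyes, while a sporadic double hit is rare and, when it does happen, happens once, in one eye. That is the lecture's bilateral versus unilateral distinction, and why the predisposition behaves as dominant at the organismal level.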
So because it's a predisposition, it doesn't mean there's a guarantee. If you are heterozygous for Rb, there's not a guarantee that you're going to have the disease, but you are going to be predisposed to it. And in the case of Rb, more often than not, if you lack one functional copy of Rb and are heterozygous for all of your cells, then you're going to be affected by the disease. Does that make sense, Carmen? CARMEN: Yeah. PROFESSOR: OK. Yeah, Jeremy? JEREMY: [INAUDIBLE] people who are heterozygous and homozygous for the disease are affected by it. PROFESSOR: Well, actually, if you are homozygous for Rb, the individual would probably not be born. Yeah. So I think it would be impossible to be homozygous for Rb. Yeah. So really what you're inheriting here is that predisposition. And because the predisposition just requires heterozygosity, it manifests itself like a dominant phenotype. Because you only need to inherit one allele that's mutant in order to be predisposed to get the disease. So that's why it appears at the organismal level to be a dominantly inherited phenotype. But then to get the disease, you need to lose a second copy of the gene. And so, in contrast to the sporadic form of the disease, for the hereditary or familial retinoblastoma we just talked about, all of the cells of the individual will start out being heterozygous and then some of them will undergo what is known as loss of heterozygosity, and become homozygous mutant in a particular tissue. And that would be the tumor tissue. So what are some ways that there could be this loss of heterozygosity? Can you guys come up with some possible ways to do that? Heterozygosity. How might a cell lose that second copy of Rb? What are some potential mechanisms that you could lose it? Rachel? RACHEL: Point mutation. PROFESSOR: It could be a point mutation, exactly right. So one way would be point mutation in Rb. Other ideas? Yeah, Patricia? 
PATRICIA: There isn't proper separation during mitosis and you only get one copy. PROFESSOR: So if you lose a chromosome, right? So if you guys remember back, remember way back when we were all young men and women in early October. We did the whole demonstration with mitosis and we had a case where we had two good friends across the metaphase plate. And that brought both sister chromatids off to one side. That would result in loss of a chromosome. And in this case, if you have a division and you lose the wild type copy of Rb, if you lose that entire chromosome, then you're going to be left with only the mutant copy of the Rb. So another mechanism would be chromosome loss. Where the chromosome that's lost is the chromosome with the wild type Rb+ allele. Any other ideas as to how you might lose the second functional copy of Rb? Yeah, Miles. MILES: I'm not sure if it completely falls under point mutation, but overall DNA damage? PROFESSOR: Yeah, you can have DNA damage. You can have a deletion that deletes the entire region of the chromosome that contains Rb. There could be even chromosomal abnormalities, like translocation, that somehow delete Rb. So I'll just say deletion for now. Any others? Can anyone think of something that wouldn't be necessarily a genetic change, but more of an epigenetic change, so to speak? Yeah, Natalie. NATALIE: [INAUDIBLE] mutagenized? PROFESSOR: Being mutagenized? NATALIE: Exposed to rays of something. PROFESSOR: But then that would cause a mutation, which might fall into one of these three classes here. What about without being mutagenized, non-mutagenic? Yeah, Maxwell. MAXWELL: Are there any other environmental factors that control expression of Rb? PROFESSOR: Yeah. So Maxwell's saying, what else would control the expression of the Rb gene? What if you had an effect that would basically cause that functional copy of Rb to be not expressed? And so this is another way that you can lose heterozygosity, as you have repression of transcription. 
And I'm not going to go through the nitty gritty of the details, but one way in which genes are regulated is by modification of DNA by chemical modifications, like methylation. And so promoter methylation is a mechanism that causes repression of gene expression. And in many cases in cancer, the functional copy of a tumor suppressor will basically be lost by promoter methylation, so that you no longer express that gene in that cancer cell. And therefore, the cancer cell has a cancer phenotype. Any questions on Rb before I move on? Everyone understands why retinoblastoma is dominant at the organismal level, yet recessive at the cell level? That's an important point. The concept behind that is also the same for BRCA1 and other tumor suppressors like p53 and APC, which you'll see in just a minute. All right. So now I want to move up from thinking about the mechanism of cancer at the level of a cell and think about it at the level of a tissue. And as an example, I want to use colon cancer. And you'll recall from Wednesday, I talked about the intestine as a system. And the way it works is pretty much the same for both the small and the large intestine. It just happens that in the large intestine, or the colon, you don't have villi, but you do still have these crypts. So that would be what a colon would look like, more or less, or at least one crypt of a colon. And remember, at the base of the crypt, there was this specialized compartment, which was the stem cell niche. And this is where renewal was happening. And renewal and cell division down at the base of the crypt then result in this conveyor belt-like movement up to the region of the tissue near the lumen, where cells are shed off into the lumen. So what might be one barrier to cancer that has to be overcome in order for a tumor to form in this organ? Yeah, Miles. MILES: You know the diagram [INAUDIBLE] cells. So the one part that [INAUDIBLE] would be [INAUDIBLE]..
It's when the cells get [INAUDIBLE] into the lumen. [INAUDIBLE] system anymore. So if cancer cells, [INAUDIBLE] just never shed off [INAUDIBLE] keep [INAUDIBLE] and it would be just moved along the intestine and never die. [INAUDIBLE] undying cells that won't ever shed. PROFESSOR: Yeah, so what Miles is saying is that these cells are going to move up and get shed off. And so if you have a mutation, either an oncogenic mutation or loss of tumor suppressors, if it goes up, and sheds, and is removed from the organ, it doesn't matter. That cell is not going to be able to form a tumor. So one thing that has to happen for a cell to form a tumor in this system is this treadmill has to be blocked, such that cells are no longer exiting the organ, so that you have a cell actually stay in the organ that would be able to accumulate additional mutations and undergo tumorigenesis. And this is what happens because, as we know in colon cancer, one of the first steps in colon cancer is dysregulation of the signaling that really regulates this movement of cells and the homeostasis of the tissue. So step one here. Step one is to dysregulate the main signaling pathway that's involved in this, which is Wnt signaling. And so another famous tumor suppressor is called the APC gene. This is a tumor suppressor. And this APC gene is associated with another familial form of cancer. In this case, it's familial adenomatous polyposis. And so this is a normal colon. Normally your colon has a smooth surface. It's basically smooth here. I mean, there are some folds, but I'm not sure that that's not an effect of having this dissected out of the organism. But in individuals with familial adenomatous polyposis, what happens is that the colon forms many of these polyps, which are benign outgrowths. But you see all these polyps here and you see how very different the morphology of the colon is between a normal individual and an individual that has familial adenomatous polyposis.
So the formation of a polyp is kind of equivalent to something like this. It's not invasive yet. It would be known as benign. But you can see that there is clearly a dysregulation in how this tissue is behaving because you get all of these polyps forming. And it's thought that frank carcinoma then results from cells in one of these polyps accumulating additional mutations that then cause the cancer to progress to a more malignant stage. So I told you that APC is a tumor suppressor. And in this case, this tumor suppressor is associated with this disease right here. And I showed you the Wnt pathway last week. And I went through it quickly, but you notice this central protein right here in this destruction complex, that's APC. APC stands for adenomatous polyposis coli. I will write that down. So adenomatous polyposis coli. And what APC does, as represented in that slide above, is that it's part of this destruction complex that destroys beta-catenin, which is the downstream effector of Wnt signaling. So the wild type function of APC is to basically inhibit beta-catenin, which then is mediating the effects of Wnt signaling. So you can think of APC as one of the genes that's the brake on Wnt signaling. And normally, it's regulated by Wnt. So Wnt would normally inhibit APC. But if you just delete APC in a cell, then it's like the cell is seeing Wnt all the time. So by deleting APC, you get a constitutive activation of beta-catenin and you get constitutive activation of Wnt signaling. So if the organism starts out being heterozygous for APC, then there is a high probability that another mutation will take out the wild type function of APC, the wild type allele of it. And when you take out that allele, now all of a sudden you start having cells that behave as if they are always seeing Wnt, even though they're not. And so if you constitutively activate Wnt signaling, what that does is it prevents the cells from leaving the organ. So they're stuck.
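The double-negative wiring described here -- Wnt inhibits APC's destruction complex, and that complex in turn inhibits beta-catenin -- can be captured in a two-line truth function. This is only a toy logical sketch of the lecture's wiring diagram, not a model of the real pathway's biochemistry:

```python
def beta_catenin_active(wnt_present, apc_functional):
    """Toy logic: Wnt inhibits APC (the brake), and APC inhibits beta-catenin.
    beta-catenin is therefore on whenever the brake is off."""
    apc_active = apc_functional and not wnt_present
    return not apc_active

# Normal cell: beta-catenin simply follows the Wnt signal.
assert beta_catenin_active(wnt_present=True,  apc_functional=True)  is True
assert beta_catenin_active(wnt_present=False, apc_functional=True)  is False
# APC-null cell: beta-catenin is on regardless of Wnt --
# the cell behaves "like it is seeing Wnt all the time."
assert beta_catenin_active(wnt_present=False, apc_functional=False) is True
```

The last case is the point of the lecture: deleting the brake is logically equivalent to a permanent Wnt signal.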
So normally in a normal colon, cells that are renewed at the bottom of the crypt, they move up, and then they're shed into the lumen. But in an APC mutant, the cells are constantly feeling like they're getting Wnt signal, and so they stay in the colon. And that allows them to accumulate further mutations. So step one in colon cancer is to dysregulate Wnt signaling, and that really disrupts the whole tissue homeostatic mechanism of the intestine. Then there would be further steps, usually at least three in colon cancer. And that would involve mutations, oncogenic mutations, loss of tumor suppressors. And that would just cause the cells to get more and more oncogenic and more and more transformed. And eventually, they can become invasive, and we'll talk about what happens when cells become invasive next week. So I wanted to end today's lecture by talking about targeted treatments for cancer just to see how they interface with the mechanisms that we've discussed. And of course, some of the primary ways to treat cancer are through surgery and also chemotherapy. But there are also more directed ways to target cancer. And because time's up-- well, I have one minute. I'll tell you about the first one. And then if I have more to go, I'll start with that in next week's lecture. So the first one I wanted to tell you about is this disease, chronic myelogenous leukemia, which involves activation of the ABL gene. And it's activated, in this case, by a translocation between two different chromosomes. So this is chromosome 22. This is chromosome 9. And in many patients with chronic myelogenous leukemia, a large part of chromosome 22 is translocated onto chromosome 9, and a little bit of chromosome 9 is attached to chromosome 22. And this translocation generates a gene fusion between the BCR gene and the ABL gene. And so ABL is a non-receptor tyrosine kinase. So it's a tyrosine kinase that is present in the cytoplasm of the cell and promotes growth. So this is a proto-oncogene.
And when ABL becomes hooked up to BCR, this results in the constitutive activation of BCR-ABL. So this is now a constitutively active kinase. Now when this was realized, researchers started looking for small molecules that would inhibit the kinase activity of ABL. And the famous example is Gleevec. And this is a picture of Gleevec here. You can see it's a small molecule. Now, this is a crystal structure of the ABL tyrosine kinase in green. And it has two lobes, an N terminal lobe and a C terminal lobe, like a lot of kinases. And what Gleevec does is to bind in the interface between these two lobes. And it locks this kinase in an inactive conformation, such that if cells see this Gleevec, then their ABL tyrosine kinase is inhibited. And this is the driver of chronic myelogenous leukemia. So Gleevec has been very effective in treating this type of leukemia and it results in a pretty good prognosis for patients. All right, so we'll talk about more therapies next Wednesday, but have a good holiday weekend.
MIT 7.016 Introductory Biology, Fall 2018
Lecture 8: Transcription
BARBARA IMPERIALI: OK, we'll get started. I've got everything turned on here. So, a couple of things. I've been mentioning that it's really kind of cool, I think, at this stage, where you've gathered enough steam in some of the topics, that many things that come out in the news might start looking of interest or relevant to you. And I found this. It had a news brief in The Scientist, and then I went to the original paper, which is in Nature Biotechnology. And if you recall, towards the end of the work on proteins, we were talking about phenylketonuria, which is a genetically linked disorder where people cannot metabolize phenylalanine to keep the levels of phenylalanine in check. So what happens is the phenylalanine gets converted to a toxic material, and it causes a lot of fundamental physiologic disorders-- a lot of neurologic disorders. So what a small company in the area-- what's the name of the company? I can see it here. Anyway-- Synlogic is a synthetic biology company, which basically engineers bacterial strains as probiotics that can be used to mitigate some genetic disorders. And so they've created a bacterial strain that can metabolize phenylalanine, so it does a good job of sort of not letting your phenylalanine levels get too high. So in the GI system, this bacterium basically works on phenylalanine to metabolize it so that, then, people don't have to lead such a strict dietary regime and have way less risk. So I think it's a really cool thing. This probiotic is in phase one, two clinical trials. It's being fast tracked. And the company, in general, is a synthetic biology company working on solutions to certain types of diseases that could either save you from having a lot of restrictions on your lifestyle or, alternatively, save you from taking medications and stuff. So that's something that caught my eye.
I want to remind you that I have now put the link to The Scientist in the sidebar of the website so it's much easier for you to grab it and take a look at what's in the news. There's stuff in the news every two or three days. And there are things that I think you'll find interesting that really relate to technology, engineering, and fundamental science that are related to biology. The other thing I want to do is remind you that towards the end of the class-- but no time like the present, because it is sort of the equivalent of one of the problem sets but, in fact, worth a little bit more than a problem set-- I want to encourage you to keep an eye on The Scientist link and maybe pick out a topic that two or three of you would like to write a news brief on, with a paragraph of writing introducing what the technology is and how technology has addressed a particular scientific or biological problem. And then there should also be a graphic that describes it. Not stuff snipped out of whatever you're reading. Something that you create as a team to sort of describe a concept to people. There are alternatives with that assignment. You can also pick an interesting protein from the PDB, make a 3D print of it, learn how to print it, and go to the 3D printers in the maker labs, and print the protein, and then write a brief summary of what it is. And one other idea I had for the engineers among you is I would love a better, less clunky topoisomerase demonstration. In particular, I'd really like one where you can snip the pieces apart, let the untangling happen, and put them back together. So some of you could work with a couple of colleagues and make something, which I know a lot of you are really keen on, which is why you're engineers. OK, any questions about any of this? I'm trying to make sure that this doesn't creep up on you. It's just something you do. It's great to get awareness of technology in the life sciences because of how much contribution it makes.
So if you keep an eye on things, you won't be forced to suddenly find something good at the last minute. You'll just have found something and go, that's the perfect thing to describe. All right? Questions? And the other thing is if you're unsure, you can always let us know what you have chosen or what you think you're going to choose in the chat with us, and we'll say, yeah, looks like a good idea. OK, so let's move forward now. All right, so what I want to do, first of all, is just remind you-- it kind of flew by a little bit-- that DNA replication is bidirectional. So what that means is wherever you have an origin of replication, you can replicate in two directions. And I was sort of falling asleep thinking about this. I sure wonder what happens at the other side of the circular plasmid when the machineries kind of collide and you spit out a brand-new copy of a circular chunk of DNA. But don't think about that too much. It'll probably keep you awake too much. So this is circular DNA. So what organisms have circular DNA? AUDIENCE: Prokaryotes. BARBARA IMPERIALI: Prokaryotes. Remember, the eukaryotic DNA is linear. And actually, we're going to talk about a conundrum with the eukaryotic DNA because of the ends of the DNA, the ends of the chromosomes and their copying, when we talk about telomerase. But this is typical of a bacterial circular DNA. It's usually supercoiled and becomes uncoiled in order to be replicated. So, obviously, going bidirectionally gives you twice the speed, because you're roaring along in both directions at the same time. The helicase opens up the DNA. DNA polymerase does its job. The pink strand here would be the leading strand. And there's obviously going to be one on both strands of DNA, which will replicate very well. But don't forget, you're going to have to deal with the lagging strands in both cases. So in both cases, what you would do is stick down a primer in order to be able to build those lagging strands.
Otherwise, DNA polymerase can't get a grip on the double-stranded DNA in order to carry out the synthesis. So in those cases, there would be a primer to set up the lagging strand, and then DNA synthesis can occur. So let's put that in. Once there's a primer here, DNA synthesis can occur. And then what happens with respect to the pieces that we're dealing with? What happens with the primer? What do we have to do to get to a nice complete, intact strand of DNA? Which enzymes are involved? Yeah? AUDIENCE: Ligase. BARBARA IMPERIALI: Here, there would be ligase activity. We need that. What about at the other end? What do we do with the primer, and then how can we move forward? Someone else? What happens with the primer? It's oftentimes an RNA primer. You'll see in a moment that RNA polymerase doesn't need a primer, so it's very easy to stitch in those little pieces of RNA. What do we have to do, though, to get an intact strand of DNA? OK. AUDIENCE: Have to remove the [? RNA. ?] BARBARA IMPERIALI: Yup. So you're going to remove the RNA. Then the polymerase, later on, when we keep going, can sort of build this piece. And then we'll have to ligate that, as well. So you want to remember all the functions of those enzymes that are involved in replication. It's a little worrisome that people-- I know you don't want to talk or you think that's an obvious answer, but it's really important that you have them at your fingertips, some of these enzymes that are involved in this process. Because they should start to become second nature. When you have to make a full DNA copy of an entire genome, there's a lot of moving parts. But if you start walking through the logic of them, they make sense. If I'm going to unpeel DNA, I need a helicase. If I'm going to keep it single stranded, I need single strand binding proteins. If I'm going to move forward, sure, I need the polymerase, but what does the polymerase need?
It needs a primer in order to have a double strand, because DNA polymerase only wants to lock onto a double strand to go start doing its job. These complications with the lagging strands are really annoying, but it's pretty remarkable that nature has addressed this and is able, remember, to replicate DNA in bacteria at a speed of-- excuse me-- 1,000 base pairs a second. So that's what's going on, this entire process. I just want you to remember this process is slower in eukaryotes. It's about 30 to 50 base pairs per second. Obviously, when you're speeding, you make more mistakes, so there are more mistakes in bacterial genome replication. Why does it not matter so much if there's a mistake in a bacterial genome? What do you know about bacteria and their lifestyles? Do they stick around a long time? No. So they divide quickly. They live and die quickly. So you're not having to keep an intact genome without mistakes in it for a long time, because you're just turning over bacteria. If there's a mistake, it probably dies out. Or, heaven forbid, resistance to drugs develops. And we'll talk about those later, because those occur due to mistakes in bacterial replication. But in a eukaryotic genome, we have to preserve the integrity of the genome. So I'm going to talk about two things that are related to the accuracy of replication now, because that's a really important component. All right, so the first thing is to think about, what's the basal rate of making mistakes of DNA polymerase? So for that purpose, I'm just going to put down a piece of DNA with its partner that's being synthesized, five prime to three prime. And I'll put in some bases, so A. So that would have had a T put in opposite. G, that would have had a C. So these have two hydrogen bonds. These have three hydrogen bonds. And let's say we now have a C here. So we want to put in a G at the position opposite. DNA polymerase wants to add the next base pair.
It should be a G, because it's going in opposite a C. It's being grown in the right direction, five prime to three prime. So the basal error rate is about 1 in 1,000. So 999 times out of 1,000, the right base gets put in. 1 time out of 1,000, you might put in the wrong base. So the error rate is about 1 in 10 to the 3. That's really all that's at play here-- energetics, just how favorable putting in the right base is. But there's a slight chance that the wrong base will just go in. The energetics are sufficiently different, but you're still going to make mistakes, just because of the thermodynamic balance. If you're putting in 999 right, you're going to get it wrong some of the time, just statistically, because of the difference in energy between putting in the right base and putting in a wrong base. So that error rate is too high. If we replicated our genome, 3.2 billion base pairs, and we had a 1 in 10 to the 3 error rate, we'd have a lot of mistakes in the genome, right? And we cannot tolerate that, because all those mistakes in our genome will then propagate to mistakes in our proteome, if we're in the right segments of the genome. So this is pretty unacceptable with respect to an error rate. So one way that nature deals with this is that DNA polymerase actually does some proofreading, all right? So it has a proofreading function. So what do you do? When you're proofreading, you take a quick look at what you've just written and say, oh, yeah, that looks good. That looks good. So what DNA polymerase does is-- it more or less reaches back to the base it just put in and checks that it's OK. It can only proofread one base back. It can't proofread work that was done a long time ago. It can just proofread very recent work. And if it looks like it's the wrong base, DNA polymerase has an opposite function. It has what's known as a three prime exonuclease activity. So I'll write that down, and then we'll talk about what that means. Three prime exonuclease.
So let's say we put in, instead of a G, we put in a T. That's bad news. So what it can do is it can reach back and cut off, from the three prime end, a single nucleotide, the one that just got put in, all right? And then allow the process to reoccur to get the right base pair in. So a lot of enzymes will catalyze both forwards and backwards reactions. DNA polymerase's energetics are such that it is able to catalyze both the addition of a nucleotide and the removal of a single nucleotide, but only if it's at the three prime end. Only if it's at an open end, where it's just been put in. So, remember, the DNA polymerase is still here, because its plan is to move forward and keep on putting in nucleotides. But it actually checks back. You could picture DNA pol just sort of quickly looking over its shoulder at the work it's just done and realizing that's the wrong one. So what this does is brings the error rate down considerably. It goes to 1 in 10 to the 5. So that's way better. 1 in 100,000 is much better, and that's pretty acceptable. So it means you're really making very minimal mistakes in the replication. So this part, the proofreading, brings the error rate from 1 in 10 to the 3 to 1 in 10 to the 5. But it can only work during DNA polymerase activity, and it can only work on the most recent nucleotide that has been put in. So this is basically a summary of-- yes, question? AUDIENCE: So are [INAUDIBLE] in prokaryotes and eukaryotes? BARBARA IMPERIALI: They are, and they are similar-- actually, there are slightly lower error rates in eukaryotes, because the speed is slower. So the opportunity to fix things is going to be a little bit better. So, in the end, your goal, really, is to bring your error rate to between 10 to the 5 and 10 to the 6. But for bacteria, because the speed is so much higher, this is sort of the limit of it. But in eukaryotes, it can be a little bit better, because the speed is slower.
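The arithmetic behind these error rates is worth making explicit. The quick calculation below takes roughly 3.2 billion base pairs per haploid genome copy (a round figure) and multiplies through each error rate; the final "repair pathways" rate is the rough one-in-a-billion ballpark that the repair enzymes eventually achieve:

```python
genome_bp = 3.2e9  # human haploid genome, roughly

# Expected number of mistakes per full genome copy at each stage.
stages = [
    ("raw polymerase selectivity", 1e-3),  # energetics of base pairing alone
    ("with proofreading",          1e-5),  # 3' exonuclease checks the last base
    ("after repair pathways",      1e-9),  # the 'one in a billion' ballpark
]
for label, error_rate in stages:
    print(f"{label:>28}: ~{genome_bp * error_rate:,.0f} mistakes per copy")
```

So raw selectivity would leave millions of mistakes per replication, proofreading still tens of thousands, and only the repair machinery brings it down to a handful -- which is why those enzymes earn the name "guardians of the genome."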
So you could imagine, if you're quickly proofreading, you're doing a less good job than if you're slowly proofreading. But the enzymes that I'll talk to you about in a second in eukarya are there to fix work that's already been done. We'll see that in a moment. And that's what really cleans up. These enzymes are called the guardians of the genome, and they are much more sophisticated in eukarya. So that's a very good point. All right, so here's the general scheme. There is an extension. There's a mistake. So there's proofreading. The mistake gets taken out. And then you keep on extending again, all right? So this now becomes pretty good. But what we need to talk about now is, what are the enzymes that go to work-- and I'm leaving what's on that board there, because I'm going to come back to it-- to actually correct mistakes in DNA. So let's talk about the guardians of the genome. Because remember that your DNA is your permanent record of what needs to be made. All right, so the types of mistakes that can be fixed with proofreading are only recent mistakes. The types of enzymes I'm going to talk to you about now fix mistakes that are found globally within double-stranded DNA. And these are mistakes wherever. They may be mistakes that didn't get corrected by proofreading. But more importantly, they're mistakes that occur due to some kind of damage on the DNA. So what kind of things might impact the integrity of our DNA on a day-to-day basis? Yeah? AUDIENCE: Sunlight. BARBARA IMPERIALI: Sunlight. So terrible stuff. UV light will actually cause some cross-links-- and I'll show you one of those-- that are very serious in DNA. What else might hurt the genome? So sunlight. People say, don't go out in the sunlight. They're right about that. What else? Yeah? AUDIENCE: Radiation. BARBARA IMPERIALI: Radiation is another form. So that's like radioactivity. Radiation is important, right? So if our ozone layer gets thin, there's more risk there, as well. What about barbecue? Yeah?
AUDIENCE: Harmful chemicals. BARBARA IMPERIALI: Yeah. Awful chemicals, terrible chemicals, right. So chemicals. So these are things like polyaromatic structures that actually slip into your genome and cause mistakes in the reading. Or they actually physically modify the bases to make it a base that doesn't look like a base anymore. So these are commonly very reactive chemicals. So these are all very serious things to the genome. And so the enzymes that mitigate the damage to the DNA basically scan along the double-stranded DNA to look for defects. Because if you have perfectly paired DNA, you're going to have a very regular structure. Whereas, if something has happened to the DNA, there's something wrong with the base-- it's not base pairing well or something has actually happened between bases where they're causing a bulge in the genome-- then these enzymes will come into play. A few years ago-- actually, it was on a class day, so I always enjoy these-- the Nobel Prize was awarded, in 2015, to the researchers who deciphered the mechanisms for correcting the genome through DNA repair mechanisms. So there are two basic mechanisms that I'll talk to you about. One is base excision repair. And the other one is a lot more serious. It's nucleotide excision repair, which takes out entire nucleotides. So one fixes just the base that's gone wrong, but the other one takes much more of the structure out to fix it. So it's nucleotide excision repair. So BER and NER, and we're going to talk about both of those mechanisms, because they're very fascinating. And they kind of lean on some of what you've learned already. So base excision repair occurs when there's a defect in the base. Maybe it's the wrong base. Maybe it's just been modified a little bit. So what happens in base excision repair is that the base gets-- all right, I'm going to just use the pointer, because my little spotlight-- it'll pop back. It's a bit magical.
So base excision repair-- only the base, a single localized base, is damaged. There is some chemistry, for example-- called deamination-- that will convert cytosine to uracil. And in fact, if you replace the cytosine with a uracil, then you get in trouble with respect to its base pairing with its appropriate purine partner. So in base excision repair, what will happen is that base will be detected. It will be flipped out from the context of double-stranded DNA. If it's tucked in the double-stranded DNA, you can't quite cut the bond that attaches the base to the ribose sugar. So the base gets flipped out of the DNA structure. And there's an enzyme known as a glycosylase that cuts the bond between the ribose and the base and gets rid of it. And then what happens is the rest of the nucleotide, just that one nucleotide, gets removed. And then DNA polymerase fills the gap, and the strand is sealed by a ligase. So a glycosylase cuts the base out. There are a couple of enzymes that actually cut the ribose, the phosphodiester linkages, out. And then the two enzymes, remember, that are important when we're making DNA, the polymerase and the ligase, work together to put a base into this position. They put the base back in, and then the ligase joins the gap. So DNA polymerase will make one of the bonds. The ligase will make the other bond, all right? So that's base excision repair, and it's triggered by finding that there is a lack of integrity in the double-stranded DNA. It's only a base that's affected, so that base gets removed. The rest of the nucleotide is removed, but only one of them. And then they're replaced through the concerted action of polymerase and ligase.
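That logic -- find the one base that no longer pairs with the template, excise it, and refill it -- can be sketched in a few lines. This is a toy string model (single letters standing in for nucleotides, ignoring strand orientation and all real biochemistry), not the lecture's own material:

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def base_excision_repair(template, strand):
    """Toy BER: the 'glycosylase' step spots any base that fails to
    complement the template; the 'polymerase' + 'ligase' steps restore
    that one position, leaving the rest of the strand untouched."""
    repaired = list(strand)
    for i, (t, s) in enumerate(zip(template, strand)):
        if PAIR[t] != s:           # damaged base, e.g. a C deaminated to U
            repaired[i] = PAIR[t]  # flip it out, cut it, fill just that gap
    return "".join(repaired)

# Deamination turned one C into U, breaking its pair with the template G:
print(base_excision_repair("ATGC", "TAUG"))
```

The key feature the sketch preserves is locality: only the single mismatched position changes; everything else is left alone.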
Now, there is another mechanism that's much more serious, and this is very typical of the types of damage that get formed from sunlight and UV radiation: when two thymidines are adjacent to each other, they will quite commonly undergo a chemical reaction where they form a dimeric structure. So there's much more wrong with the DNA strand in that situation. So those get noticed. This thing's driving me nuts. Those get noticed as a real defect in the DNA. Base excision repair won't work. Why wouldn't it work? Yeah? AUDIENCE: Because the two are bound together. BARBARA IMPERIALI: Yeah, they're bound together. You can't peel them back out. You can't break in that structure. So you've got to move to a much more sort of major fixing of the DNA strand. So there is a genetically linked disorder where the enzymes involved in nucleotide excision repair do not work. And when a child has inherited a defective copy of the gene from both parents-- one from the mother and one from the father-- it's impossible for them to fix these defects in the DNA. And they get a lot of physiologic problems, like scarring and sunburns from barely any exposure at all. So this is a group of children that are so afflicted with this genetic disorder that they cannot go out in the daytime at all. So you'll sometimes see they're called Children of the Night, because basically, they have to flip their schedules. They just can't go near sunlight. And if they go out, they basically have to be covered from head to toe. And that's including their eyes, because you can get sunburn of the eyes. You can sort of see, in some of these pictures, that these are really serious defects. And this is just the external manifestation. The more serious manifestation would be cases of skin cancer, very, very readily. So if you don't have at least one good copy of the enzyme that does nucleotide excision repair, then you're in trouble. And the disorder is called xeroderma pigmentosum.
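Sticking with the same toy string representation, nucleotide excision repair can be sketched as: spot the dimer (written here as lowercase "tt" since the two bases are chemically fused), cut out a whole patch of nucleotides around it, and resynthesize the patch from the undamaged template. The patch size below is an illustrative parameter, not the real excision length:

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def nucleotide_excision_repair(template, strand, patch=4):
    """Toy NER: a thymine dimer (lowercase 'tt') can't be fixed one base
    at a time, so a stretch around it is excised and refilled wholesale."""
    i = strand.find("tt")               # the fused pair distorts the helix
    if i == -1:
        return strand                   # nothing to repair
    start = max(0, i - patch // 2)      # excise a patch around the lesion
    end = min(len(strand), i + 2 + patch // 2)
    refill = "".join(PAIR[t] for t in template[start:end])  # polymerase fills
    return strand[:start] + refill + strand[end:]           # ligase seals

# UV cross-linked two adjacent T's on the bottom strand:
print(nucleotide_excision_repair("GCAATCG", "CGttAGC"))
```

The contrast with the base-excision sketch is the whole point: because the lesion spans two fused bases, the repair has to replace a multi-nucleotide stretch rather than a single position.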
And there's actually a lot of family groups that get together, because the best way is just to form a sort of social network so the children understand what each other-- the limitations that they all have. And they can play together and be on these sort of flipped schedules in order to avoid any sunlight. So, in this case, it's essential to basically clip out a large chunk of the DNA. So what happens in this case is that the DNA is recognized, and then a large portion of it-- about a dozen nucleotides-- are cleaved out. And then, once again, DNA polymerase fills the gap, and DNA ligase seals the last gap. So DNA polymerase will be able to fill going from the five prime to three prime. There'll still be one gap, and then the ligase fixes it. So I think these kinds of things that are done to mitigate damage on the genome are very important to understand, because this is happening all the time. Any minor things that get fixed, that need fixing due to sunlight, radiation, chemicals will be fixed through these methods to keep that rate, that error rate, in your genome down to, like, one in a billion or something like that. All right, I'm just going to flash this up. I'm going to, very quickly. I'm going to give you guys a copy of this. But these are the components that you want to be able to understand the function of when thinking about DNA replication. So you don't have that on your slides, but we're going to give you a copy so that you can really make sure that you understand all the moving parts and how they come together for replication. Now I want to talk about one last conundrum with DNA, and that is the issue of telomerase. And this is particularly critical in eukaryotes that have linear chromosomes. Now, if you think about DNA being replicated, when you look at the two strands of DNA, as you approach the very end of the chromosome, you'll do just fine making the copy that's built five primes-- sorry about this. 
You'll do just fine making the copy that is built in this direction, because it's the leading strand. And it's built five prime. Am I going wrong here? It's built five prime to three prime. Can someone help me out here? I'm losing my mind. So this piece is built five prime to three prime, so therefore, this was five prime and three prime. But then on the other strand, you have a problem, because you need to put in a primer here in order to build the other strand. Does that make sense? So we've got to have put in that short primer, because DNA polymerase will not work otherwise. And then we need to build this strand of DNA. All right, so what's the problem here with respect to these ends? What's going to happen next? If it's an RNA primer, what happens next? We nibble it up, right, with the full intention of replacing that bit of DNA. But then what can DNA polymerase do? It can't do anything, right? DNA polymerase needs double-stranded DNA to hold on to so it can fill this gap. So what happens is every time you replicate DNA, you end up with a gap, with a small amount you don't quite copy. Is everyone following me? And that's a problem, right? Because doesn't it mean every time my cells divide, my genes get a little bit shorter and a little bit shorter and a little bit shorter? So there are things in place that help. One important feature is that, usually, you don't have important genetic material at the ends of your genes-- at the ends of chromosomes, rather. There's sort of extra DNA that doesn't need to be copied. But the basic theory, the whole theory about telomerase, is that for certain types of cells-- the stem cells and germ cells-- there is an enzyme that can fill this gap. It's called telomerase. So in those cells-- what's special about these types of cells? We need to keep them good. They're what defines your starting DNA. The stem cells and the germ cells, like in the sperm and egg, have to have a good copy of DNA. 
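The end-replication arithmetic just described, losing the stretch under the final primer on every division, can be sketched as a toy simulation. All the numbers and sequences here are made up for illustration; real telomeres are thousands of bases of repeats.

```python
# Toy model of the end-replication problem: each division loses the
# stretch covered by the final RNA primer on the lagging strand.
# All sequences and numbers here are invented for illustration.

def divide(chromosome, primer_length=10, telomerase=False):
    """Return one end of the chromosome after a round of replication."""
    if telomerase:
        return chromosome               # telomerase restores the end
    return chromosome[:-primer_length]  # the final primer gap is never filled

# A gene, then a telomere buffer of repeats at the chromosome end.
chromosome = "ATGGCCTAA" + "TTAGGG" * 50
for _ in range(20):                     # 20 divisions in a somatic cell
    chromosome = divide(chromosome)

print(len(chromosome))                      # 109: 200 bases lost from the end
print(chromosome.startswith("ATGGCCTAA"))   # True: the gene is still intact
```

Stem cells and germ cells would take the telomerase=True branch, so their ends stay full length from one generation to the next.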
They can't be getting shorter and shorter every time a new generation is born. But once your cells are the somatic cells, the DNA gets shorter and shorter, because those cells don't have telomerase. And this is associated with theories of aging. So the cells you get, the ones that are finally the ones in your body, every time they divide, the ends of the chromosomes will get a little shorter. But there's no mechanism to replace those. And so it's associated with the belief that, at a certain stage, you've divided the cells enough times. And then you can't-- you're actually starting to nibble into important coding DNA. Does everybody understand what would be the significance of that? Does that make sense? Yeah? AUDIENCE: So once the [INAUDIBLE] are gone, is this [INAUDIBLE] able to divide? BARBARA IMPERIALI: No. So the telomerase keeps the DNA in those types of cells in good condition. In the cells that divide daily-- the somatic cells-- the ends will keep on shortening, and there'll just be more mistakes, basically. And those are the sorts of things that would be associated with an organism that's growing to a certain age, because the cells only get a certain number of divisions. So the cells may not divide, or there may be mistakes in certain parts of the coding genome. Any other questions? OK, so telomerase was also another important discovery that was awarded a Nobel Prize. And this gives you details. So the telomere protects the genetic information on every cell division, though you will lose a little bit of genetic information. So it limits the number of divisions a cell can make in a lifetime. All right, so we're going to move on now. And I've spent quite a bit of time on this, but I want to guarantee you that now, as we move forward to transcription, there are a few simplifications that we can make in the story of transcription. So moving on. All right. So what have we done so far? We've seen replication. Now we're moving to the process of transcription. 
So when you transcribe something, you're basically making a copy of something, but in a slightly different format. So, for example, if you're transcribing from handwritten to a typed version, you go from something that's in script to something that's typed. It has the same content, but it's in a different format. So this is what the process is called when you convert DNA into RNA. And very specifically, this is part of the process to make what's known as the messenger RNA. The first phase of transcription in eukaryotic cells gets us to a pre-messenger RNA. So there's a little bit more that needs to be done to it before it can leave the nucleus to encode protein translation. But in bacteria, you're basically just going straight from the DNA to the messenger RNA. At the beginning of the next class, we will also talk about going from the pre-messenger RNA to the messenger RNA. And let's take a look at the cell up here, where what we're focusing in on here is the process whereby we're copying that DNA. I have no idea why this is behaving monstrously like that. I'm done, done with these gizmos. The process whereby the double-stranded DNA opens up a little bit and we make that pre-messenger RNA copy, all right? So I want you to think back to the processes that we learned about for replication. And now we're going to move forward to take a look at transcription. And frankly, it's a lot simpler. So let's just look at the players in transcription. And you've got a copy of this in your notes, so I don't need to necessarily put it all on the board. So in DNA, remember, we had A, G, C, T. We have a deoxyribose, and it's mostly used as hereditary genetic information. But in RNA, we're making a new copy of the DNA, where we use slightly different building blocks-- A, G, C, U. U instead of T. Plus, there are some modified bases that occur in some of the types of RNA, and the sugar is a ribose. 
So the first main thing about the RNA copy relative to the DNA copy is that ribose/deoxyribose difference. What's quite remarkable is that when you have the 2'-deoxyribose in your DNA, it's nice and stable. We need it to be nice and stable. It's our genome. We can't let our genome be falling apart as we're sort of walking down the street. In contrast, when RNA is used, it's much more transient. We make a messenger RNA copy of part of the DNA to move forward to make proteins, but we don't need that to stick around forever. And when you have the ribose with the two hydroxyls, at the 2' and 3' positions, it's a much more fragile material. It is a transient message, and it gets degraded quite quickly. So that difference in the sugar really dictates the stability there. RNA is found in a lot of biopolymers. We'll focus mostly on the messenger RNA today. It's less than 1% of the DNA. And then on Friday, we'll be talking about the transfer RNA and the ribosomal RNA. So we're really going to focus in right now on the messenger RNA. And the one thing about RNA structures, which I'll elaborate on later, is that they have very different structures from canonical DNA, which adopts the double-stranded, anti-parallel structure. RNA structures are much more like folded protein structures, where there may be sections of base pairing, but there'll also be lots of loops and different characteristics. So even the ribose structure makes a difference in the stability of the double-stranded structure and encourages a lot more of these unusual structures, which is really why people have a lot of faith in the theories about the RNA world. OK, so let's compare DNA polymerase and RNA polymerase. So here's all the good news that we'll be able to describe to you. When you copy DNA, you copy all of it. When you make the copy of messenger RNA, you only copy about 1.5% of the genome. So you do not copy the entire thing. 
So the process is much more restricted to sections of DNA that need to be copied. And we'll talk about the features of the DNA that tell you about that later. Here are the important details. So in eukaryotes, transcription happens in the nucleus. And the key enzyme involved is RNA polymerase, RNA pol. And it has very different features from DNA polymerase, but there are two big things that are different. It includes its own helicase. So you remember, with replication, we needed a DNA polymerase and a helicase. RNA polymerase is much smarter than that. It actually includes both functions within its structure. So it's an RNA polymerase that grows the new nucleotide chain five prime to three prime. But it also has a built-in helicase, so that's an advantage. It still grows the messenger RNA five prime to three prime, but it uses different nucleotide triphosphate building blocks. Or one of them is different. So UTP, ATP. So remember, the U replaces the T in RNA, so that's one key difference. It includes a helicase activity. And the other really neat thing, because it's such a complication in replication, is that it doesn't require a primer. That is why even when we were replicating the DNA, we were using RNA polymerase to make those little pieces of primers, because it didn't need a priming sequence. So there are really fundamental differences about the RNA polymerase. And then the other thing is that only one of the two strands of DNA is transcribed. And in a moment, or maybe the beginning of next class, we'll see how we can work out which strand is transcribed. And then, obviously, the messenger RNA is a complementary sequence to the sequence of DNA that is being copied. Finally, only part of the DNA is transcribed, unlike the process of replication. All right, so you can see already that there are a lot of simplifications in transcription that we did not have the advantage of in replication. 
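The complementarity rule just described, with the mRNA complementary to the template strand and U in place of T, is easy to sketch as a toy function. This ignores promoters, processing, and everything else the real enzyme does.

```python
# Toy transcription: build the mRNA complementary to a template strand.
# RNA pairs A with U (not T), so the complement table differs from DNA's.
RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5):
    """Read the template 3'->5' and grow the mRNA 5'->3'."""
    return "".join(RNA_COMPLEMENT[base] for base in template_3to5)

print(transcribe("TACGGT"))  # "AUGCCA" -- note U wherever the template has A
```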
So the helicase activity and the primer issue are two key features that make life a lot simpler. I just wanted to show you this small detail about RNA polymerase. There are a lot of natural products out there that are known to be inhibitors of vital processes, and one that caught my eye is the small molecules that are found in mushrooms, the really toxic mushrooms. In fact, never eat a mushroom unless you know what it is and where it came from, because there are problems with them. A lot of these mushrooms include potent natural products. And in fact, there's a compound known as amanitin, alpha-amanitin, and it's found in certain mushrooms known as either the Death Cap or Destroying Angel mushrooms. So you can tell from their names that they are a real problem. And what the amanitin does is it actually interferes directly with RNA polymerase by acting as an allosteric inhibitor of RNA polymerase and locking it into a closed state so it can't keep on transcribing. So I thought this was very interesting. Incredibly tiny, tiny doses will arrest transcription and cause dire consequences. So I think what's very interesting is that it's an allosteric inhibitor. It's very potent. What it does is it seals the polymerase in a locked, closed state so that it can't move forward for transcription. Now, finally, a couple of points. When we decide that a portion of a gene is going to be transcribed, there are a lot of mechanisms in place to identify the portion of that gene. And one of the key things that is known is that there are what are called promoter sites, which are actually upstream of the portion of the gene that's to be transcribed, where you recruit a bunch of proteins that actually park down on the double-stranded DNA and then, at the end of the day, recruit the RNA polymerase. 
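The idea of a promoter sitting upstream and marking where the machinery parks can also be sketched. Here is a naive scan for a literal "TATA" core; real TATA boxes match a looser consensus, and recognition involves the whole set of proteins described above, so treat this purely as an illustration (the example sequence is invented).

```python
# Naive promoter scan: find TATA-box-like motifs in a DNA string.
# Real TATA boxes match a looser consensus; this toy version just
# looks for the literal core "TATA". The sequence is invented.

def find_promoter_candidates(dna, motif="TATA"):
    """Return the start position of every (possibly overlapping) match."""
    positions = []
    start = dna.find(motif)
    while start != -1:
        positions.append(start)
        start = dna.find(motif, start + 1)
    return positions

dna = "GGCGTATAAAGGCTAGCATGCGTATATTGC"
print(find_promoter_candidates(dna))  # [4, 22]
```

Everything downstream of such a site would be the part that actually gets transcribed; the site itself is just the runway.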
So all that extra genome, some of it is not transcribed into messenger RNA for making proteins, but it's part of an area of the gene that gets recognized by all of the proteins that collaborate to bring in the RNA polymerase in order for your RNA to be transcribed. So what I'm showing you here is a double-stranded DNA with one of the very common promoters that's just upstream of the part of the DNA that gets transcribed. And it's called the TATA box, because it's a T-A-T-A sequence. It's got a complement that looks like it. And it's shown in pink here. And then the proteins that bind to the DNA at the TATA box actually drape over that segment of double-stranded DNA and then serve as recruitment entities to bring in all the machinery that's needed for transcription of the gene beyond it. So some of the identity of all that extra double-stranded DNA is actually to serve as guide posts for where the machinery for transcription has to park in order for messenger RNA to be formed. So immediately, you can see we're only going to transcribe part of this genetic material beyond here. But we need a whole bunch of genetic material that's actually just serving as sort of the runway for the plane landing in the right position, all right? So I'm going to put up a puzzle that you can think about. And then we'll start with these at the beginning of the next class, because I don't want to rush them. When you decide to transcribe a gene-- let's say you've got a promoter site here-- the thing that I want you to think about is, which strand would you transcribe? And what's the logic behind this? And then we'll just do a recap on this at the beginning of the next class, because I just want you to think about it. Because a lot of the information you need is directly here. Hi there. We're just wrapping up. OK? So that's it for today.
MIT_7016_Introductory_Biology_Fall_2018
11_Cells_the_Simplest_Functional_Units.txt
ADAM MARTIN: So, last semester, my grandfather passed away, and I was responsible for explaining to my two sons how a funeral works. So I'm a professor, right? I pride myself on being able to explain things clearly. So I went to tell my five-year-old son sort of what's going on during the funeral. I told him, your papa's body is up in that wooden box up there. We're going to celebrate him right now. We're going to go to the cemetery, and then we're going to bury him. And you know, I'm a professor. I thought I nailed it, OK? Except my son was showing this look of concern in his eyes. And he looked at me. He was like, what about his head? So my plea for this semester is please let me know when I've forgotten the part about the head, OK? You know, you guys are listening to me. I might have forgotten something that, to me, seems kind of obvious, but that, for you, it would help to know to understand the material. So today, we're going to start with cells. And so here, you can see this is one of my favorite movies. This is a neutrophil cell here that's migrating. You can see it's chasing after this smaller cell, which is a bacterial cell. And around this neutrophil, what you're seeing here are these red blood cells, which aren't doing very much of anything. So the point I'm making with this video is that cells have a huge amount of diversity. So cells have diversity. As you can see from the video, there's a diversity in size. There's diversity in shape and also diversity in behavior. OK, so we're going to unpack this a little bit. And I just want to start by just pointing out that until now in the semester, we've dealt mainly with very small things, such as atoms, small molecules, lipids, and proteins. And the size of these structures, they're on the nanometer scale. Now, cells are a unit up in size. Cells span several orders of magnitude in size. So bacterial cells are on the small side. They're in the 1 micron to 10 micron range. So you can see that here. 
Most bacteria are about 1 to 10 microns. But our cells span from tens of microns to hundreds of microns, and even larger than that, because the human egg is on the order of a millimeter in size. This is not a human egg, but this is a frog egg. You can see that often the egg cells are the biggest of all cells. The biggest cell is an ostrich egg. It's about 15 centimeters, I believe. So cells span a huge, wide range of sizes. I'm going to start with the simplest, which is a bacterial cell. And what I want to point out about this cell is its simplicity. So here, you can see this is an electron micrograph of a bacterial cell here. And this is a cartoon just illustrating some of the key features. You have a plasma membrane and a cell wall. The cell wall here is in sort of-- the periplasmic space is in green. And this encapsulates the cytoplasm. And the only other real structure you can see, in this case, is this nucleoid structure in the middle. And what the nucleoid is, is it's just the chromosome of the bacterium. And I want to point out that later on in the course, Professor Imperiali is going to come back and tell you about antibiotics and how bacteria can develop antibiotic resistance, which is of critical importance for biology and medicine. So our cells are more complicated than this. And that's because if you look at this EM here, you can see that eukaryotic cells-- and we are eukaryotes-- have membrane-bound compartments. There's a nucleus here that houses our nuclear DNA. And also, there's a series of membrane compartments that span the cytoplasm. Now, our cells, even in a single organism, such as us, our cells have incredible diversity and specialization. So there's diversity within a single organism. There is specialization. So as we develop from a single cell, our cells acquire properties that allow them to carry out specific functions in our bodies. And an extreme example of this is shown up here. 
These are pictures or drawings of neurons from Ramon y Cajal, and you can see how this looks nothing like the cartoon picture of a cell I just showed you. These cells have highly dendritic sort of arrays of protrusions. And these cells have evolved such that they are very good at sort of transmitting information in the body, right? An extreme example of a nerve is a sciatic nerve, which extends from the base of your spinal cord all the way down into your foot. So it's about a meter long. That's an extreme specialization for a cell. So these cells are specialized, but what's important to note is that within a single organism, the genomic DNA of a cell is more or less the same. So genomic DNA is the same, with some exceptions, some of which we'll get to in the course. The genomic DNA is the same. What's different is the genes that are being expressed in these cells and the proteins that they encode that give these cells different functions. So what's different and what allows cells to acquire these different functionalities is this different gene expression. OK, so that's the overview. Now I want to talk about compartments. And if we go back to this cartoon that I showed you from your-- oh, I just wanted to point out that right now, at MIT and the Broad, there is a project that's ongoing to really define all the cell types that are present in humans. And this is known as the Human Cell Atlas. And I just want you to take a minute to think about if you wanted to define different cell types, what would you look at to classify these cell types? Anyone have an idea? Yes? What's your name? AUDIENCE: Rachel. ADAM MARTIN: Rachel. AUDIENCE: Cell function. ADAM MARTIN: What's that? AUDIENCE: Function. ADAM MARTIN: You could look at function. That might be a little subjective in how to interpret. What defines the function of the cell? AUDIENCE: I'm thinking like what it does and how it works with other cells. 
ADAM MARTIN: But is there something, I guess, within that cell that would define what its function is? Yeah? What's your name? AUDIENCE: Samantar ADAM MARTIN: Samantar? AUDIENCE: Samantar ADAM MARTIN: Samantar. All right, yeah? AUDIENCE: Gene expression data. ADAM MARTIN: Gene expression, right? What genes are these cells expressing? So what they're doing is they're isolating single cells from tissues. And then they're looking at gene expression. So which type of molecule here would be the molecule you'd want to look at if you want to look at gene expression? Not Miles. Malik. AUDIENCE: You look at mRNA. ADAM MARTIN: mRNA, exactly. Malik is exactly right. So you look at the mRNA, right? Because if there's mRNA, that means that gene was expressed, and it's possibly encoding the translation of a protein. So that's what they're doing. They're isolating single cells and doing a massive single-cell RNA-seq project. And that's helping to identify new cell types in the human body. OK, coming back to compartments. So I showed you this picture. You can see there are lots of membrane-bound compartments in the cell. You can see the lines in this cartoon represent lipid bilayers. And so these are compartments that are completely encapsulated by a lipid bilayer. And I'm going to remind you of something that Professor Imperiali told you about already. And the reason I'm saying it again is because it is so freaking important, OK? So if we consider a lipid bilayer where the circles are the polar head groups of the lipids and the squiggly lines are the fatty acid chains-- I'm not going to draw all the fatty acid chains. So this is a lipid bilayer. And these head groups, are they hydrophilic or hydrophobic? Yes? What's your name? AUDIENCE: Stephen. ADAM MARTIN: Stephen. AUDIENCE: They're hydrophilic. ADAM MARTIN: They're hydrophilic. They like water, right? The water is on the different faces here. These are hydrophilic. How about this central region here? Yes? Name? AUDIENCE: Ory. 
ADAM MARTIN: Ory. AUDIENCE: Hydrophobic. ADAM MARTIN: Hydrophobic, good, right? That's excellent. So these are hydrophobic. Hydrophobic. So what that means is you have this hydrophobic layer that is surrounding each of these compartments, and that's going to serve as a barrier such that water and things dissolved in water cannot pass through this barrier unless something allows it to. So each of these compartments is membrane bound and has properties in the lumen, the interior of the compartment, that are going to be different from those of the cytoplasm. And the interior of the cytoplasm is going to be different from the extracellular space. So let me just give you some examples of how things are different inside and outside the cell and also inside and outside these compartments. So if we consider the exoplasm versus the cytoplasm, we can consider the concentration of various ions. And I want to get the concentrations right. Let's consider sodium, which is a monovalent cation, potassium, and calcium. I'm just going to use these as illustrations. So sodium is concentrated outside the cell, about 150 millimolar, and is less concentrated in the cytoplasm. So there's a huge difference in sodium between the outside and the inside of the cell. Potassium-- now, you might think potassium would be a lot like sodium. It's a monovalent cation. It's a similar size. But it actually shows the flip distribution. So it's four millimolar in the exoplasm and 140 millimolar in the cytoplasm, OK? So there appears to be some selectivity here, right? The cell is concentrating certain things inside the cytoplasm, and it's excluding others. So there's selectivity, even between closely related atoms here. OK, calcium is two millimolar in the exoplasm and around 10 to the negative fifth millimolar in the cytoplasm. So the fact that there are these huge gradients in the concentration of these ions leads you to expect that this is a non-equilibrium state. 
Because if it were equilibrium, these ions would go in and out, and they'd equilibrate such that the same concentration would be on the inside as the outside. And so there's selectivity. And also, there's non-equilibrium, which suggests that energy is required to maintain this asymmetry. I want to point out it's not just the concentrations of various molecules that are different between the inside and the outside, but the plasma membrane can also have a voltage across it. So this is exoplasm. This is cytoplasm. And this membrane can hold a voltage. And this is going to become incredibly important when we talk about neurons, because neurons use changes in this voltage to transmit signals across their length and also to transmit signals at the synapse. And we'll talk about that later in the course. So this is known as membrane potential, this voltage difference. I also, again, want to point out this endomembrane system in here, where the gray regions here are sort of compartments that will communicate with each other. And so there's this whole internal structure of endomembrane system which is compartmentalized from the cytoplasm, OK? And so now I want to talk about how do things get in and out of this structure. So I think we're up to three, getting in and out. So cells-- and this is very important in cell communication, right? For cells to communicate, cells have to send things, like signals, to other cells. And also, cells can take in stuff and sort of receive things from other cells. So how is it that this happens? Well, if we consider the plasma membrane of a cell-- this is a lipid bilayer, PM. I abbreviate Plasma Membrane, PM. So that's a lipid bilayer. And let's say that there's some type of sort of molecule that's on the outside of the cell. So, in this case, this would be exoplasm out here. This would be inside the cytoplasm here. Let's say this cell wanted to sort of take up this pink structure into the cell. How would it do that? 
Well, this pink structure, because it's hydrophilic, cannot pass through the lipid bilayer. So the cell has to use another strategy to take this up into the cell. Let's see. I'll use a little blue here. So let's say the region of the membrane right here is in blue. Then, what can happen is this blue structure can invaginate, and it can take up this pink molecule. So the plasma membrane can invaginate. The plasma membrane can invaginate, and if it has this pink molecule, then if there is a scission event here, you've now gone from having your pink molecule on the outside of the cell to having the pink molecule in this vesicle or circular structure on the inside of the cell. So here now we have this. We have our vesicle in blue. And now the cell has picked up this pink molecule, like this. So this process of taking material from the outside and bringing it into the inside of the cell is known as endocytosis, the process of taking something from the outside and bringing it into the cell. This is also a way that viruses can get into your cells, which is a more nefarious way that this system is used. All right, I need a volunteer. Yes. Ory, come on down. Don't worry. This'll be simple. I just need you to put pressure on my head right here. So you see the tassel. Make sure they can see the tassel. You see this tassel? This tassel is that pink molecule, which is right now on the outside, right? But I'm going to endocytose it by going like that, right? And I also got a hand. That's great. So you see, that's basically endocytosis. I just endocytosed my tassel. All right, you can go up. We're all set, yeah. I can do the next one. OK, so the opposite of this process-- let me get another color here. You can also have vesicles that are on the inside that then fuse with the plasma membrane and release their contents to the exterior. And this is called exocytosis. So exocytosis is something that's starting out inside one of these vesicles and then goes out, OK? 
So this is not exactly reversible, because there are different protein machineries that mediate either endocytosis or exocytosis. So now I'm going to exocytose my tassel. There we go. So I just exocytosed it, and now my molecule is again facing the outside world. So this is a way for cells to sort of take in things and also to secrete molecules, like signaling molecules, into the extracellular space. So now, we're moving on, and now we're going to talk about compartments within compartments. So we're on four-- compartments within compartments. And you're going to see how this relates to endocytosis in just a minute. So compartments within compartments. So the example I'm going to use here is an organelle that is present in us, in animal cells, which is the mitochondria. And you all know that the mitochondria is the powerhouse of the cell. Whenever I say mitochondria is the powerhouse of the cell, I have to do 10 push-ups, because it's so cliche. I mean, my five-year-old knows that mitochondria is the powerhouse of the cell. It's a gross oversimplification, OK? Mitochondria are way more interesting than that, and I'll show you in just a minute. All right, I'll draw mitochondria, the same mitochondria that my kids draw me. So here's a mitochondria. I'm drawing first the outer membrane. So there's an outer membrane to the mitochondria. And this organelle also has an inner membrane. So this is the inner membrane. The inside here is called the matrix. I'll draw some DNA molecules there. So this is known as the mitochondrial DNA, which I'll mention in just a minute. So this is the mitochondria. Where could such an organelle come from, evolutionarily speaking? You know, one problem that eukaryotic cells have is they cannot make mitochondria de novo. The way that mitochondria are passed on is that they're replicated during cell division and passed on from one cell to the next. 
And you guys all got your mitochondrial DNA and your mitochondria, at least initially, from your mother. So this is not an organelle that can be synthesized de novo. And the fact that this is the case has led to a theory known as the endosymbiont theory, or hypothesis, which basically states that there was an ancestral eukaryotic cell. And the way that organelles such as mitochondria and plastids were derived is from engulfing bacterial cells that were either capable of oxidative phosphorylation, in the case of mitochondria, or were capable of photosynthesis, in the case of chloroplasts. And so this engulfment is much like this endocytic process that I just talked to you about. It's not really endocytosis, because these bacteria are much bigger than an endocytic vesicle. So it's more like a sort of macropinocytosis or something like that, OK? So something you may have seen in the news lately-- so some of the evidence for this endosymbiont theory is that mitochondria and plastids have their own DNA, OK? So these organelles have retained DNA. The genes in the DNA encode for proteins that function in the mitochondria. The DNA also includes ribosomal RNAs and transfer RNAs that are required for protein synthesis within the mitochondria. But a lot of the genes that are required for a functioning organelle have now been exported to our nuclear DNA, and those genes are expressed and proteins are produced, and then they're imported into the mitochondria. But the mitochondria has retained a number of genes, and they're encoded in the mitochondrial DNA. Another reason to think that this could be from an ancient symbiotic relationship between eukaryotic and prokaryotic cells is that these organelles divide by fission, similar to how bacteria divide. So they divide by fission. So I'll show you a real mitochondria and a mitochondria undergoing fission in just a minute. I just want to point out that in the news recently, there's been talk of three-parent babies. 
And I just want to explain to you what that is. So eggs that are-- I shouldn't say embryos, but eggs that have the nuclear DNA from two parents can then be given mitochondrial DNA from another parent. So these three-parent babies are essentially babies that have nuclear DNA from two parents but mitochondrial DNA from a third parent. So here's just an article in the NewScientist reporting that this is imminent, and now it's been done. Now, why might you want to do that? Anyone know why someone would want to do this? Yes? AUDIENCE: There might be a defect in the mitochondrial DNA of both parents. ADAM MARTIN: Yeah. So Stephen, right? AUDIENCE: Yes. ADAM MARTIN: Stephen's exactly right, right? There are diseases that are associated with faulty mitochondrial DNA. And if you're a mother and you have the mutations that cause this disease, this would be a way for you to have a child without passing on the disease to your child. And so this is something that is still controversial, but that's why people are exploring the opportunity for making these so-called three-parent babies. Again, I have a pet peeve with textbook pictures of mitochondria. Here's your textbook picture. Everything's labeled nicely. This is what mitochondria look like in real life. Mitochondria, like the endoplasmic reticulum, actually, form these tubular networks that essentially span the entire cell. So it's convenient for us to depict mitochondria like this in a textbook, but mitochondria are way more interesting than that. They're dynamic organelles. And I also want to make the point-- we kind of talk about these organelles like they behave as these separate entities, but in fact, they interact with each other. And there's lots of interesting biology behind that. So I'm going to show you one movie that's from work done by Gia Voeltz, and this Friedman person is the first author. She just gave a talk here at MIT and showed some beautiful movies. All right, focus on this movie here. 
The ER is labeled in green, and the mitochondria is in red. And what you see is there is this mitochondria tubule, and it's crossed by the ER right here. Now, focus on this when the movie plays. You're going to see that the mitochondria undergoes fission right at these points, where the ER and the mitochondria intersect. So that illustrates-- and now they've mechanistically dissected what sort of makes the mitochondria undergo fission at these crossing sites. But it really illustrates the real sort of complexity and dynamics that are present in a cell, which you might not be getting from your book. Oh, I did want to mention that-- so there's a chapter in your book-- I think it's Key Concept 5.3-- where you can read about all the organelles. You should read that and know roughly what the organelles do. I mean, I could lecture about that, but it would just be so boring that I can't do it. I'd have to do a ton of push-ups. So we have to sort of choose what we lecture about. So I would just suggest you read that part in the textbook. If you have questions, come talk to me. And just be familiar with what your organelles are doing. Yes, Ory? AUDIENCE: What was the chapter? ADAM MARTIN: It's 5.3. AUDIENCE: Is it like a chapter? ADAM MARTIN: It's Key Concept 5.3. It's probably listed in the assignments, right? AUDIENCE: It's a section in the chapter. ADAM MARTIN: Yeah. AUDIENCE: It's only a little bit. It's not going to be long. ADAM MARTIN: All right. I have one last part, which is to prepare you for Friday's lecture. In Friday's lecture, we're going to start talking about genetics. And before we talk about genetics, we're going to talk about something that is-- we're going to lay the groundwork, essentially, for genetics by talking about how cells divide. So I think one of the most miraculous things that cells do is that they can undergo this trick where they replicate themselves and split it into two daughter cells. 
And you're seeing here, there are chromosomes in the middle here. They're going to line up along the metaphase plate. And now they're going to get pulled to separate sides. They're going to wiggle back and forth first. Then, in just a minute, they're going to get segregated. They'll go eventually. There they go. So you see-- and then the cell is going to pinch at the equator and divide in two, OK? So now I'm at this last part, the cytoskeleton, because the cytoskeleton is the answer to part of how these cells divide. And before I present the cytoskeleton, I just want to briefly mention chromosomes and what they look like. So the chromosomes are your nuclear DNA. They're linear pieces of DNA, as opposed to the mitochondria, which has circular DNA. And again, the fact that mitochondria have circular DNA is analogous with sort of bacterial chromosomes, where bacteria have circular DNA. But our chromosomes are linear pieces of DNA. And you guys have probably all seen chromosome spreads that look like this. So this would be a chromosome that's replicated. There are two copies, one and two. So this is a replicated and condensed chromosome, right? Initially, chromosomes are just like bowls of spaghetti. Everything's mixed together. But during mitosis, the chromosomes condense, and then they sort of resolve from each other, such that you can see the arms of the chromosome. And you can see where the chromosomes are coupled to each other at this structure here, which is known as the centromere. And at this centromere, a protein complex assembles. I'll just draw it like this. This protein complex is called the kinetochore. The kinetochore is a complex of hundreds of proteins that assemble into this large platform that sits on the centromere. And the kinetochore is able to attach to the cell's cytoskeleton, specifically microtubules. So this attaches to microtubules. And I will tell you what microtubules are right now. 
So microtubules are a component of the cell's cytoskeleton. So the cell's cytoskeleton is a network of filamentous rods that are present in the cell. And the term cytoskeleton kind of makes it sound boring, I think, because it makes it sound static. But the cytoskeleton is anything but static. It's actually a dynamic machine in the cell. And these machines that are assembled from these fibers are able to generate force. So these microtubules and the cytoskeleton, they generate force. You can think of them as motors or machines. So these are machines that generate force. And I'll just illustrate this with a couple of videos and slides. So, again, this is not a stable structure, but very dynamic. So this is going to be a movie where, in green here, microtubules are labeled, and the red label's the nucleus. And these cells in this fly embryo are going to undergo cycles of nuclear division. And so you're going to see the microtubules assemble, disassemble, assemble, disassemble. So you see how dynamic this process is. The cell is able to assemble this force-generating apparatus, which is known as the mitotic spindle, each and every cell division. So this structure, or machine, is critical to physically segregate the chromosomes to opposite poles of a cell. So now I'm going to tell you about the microtubules themselves. So microtubules, as the name implies, are sort of tube-like polymers. So these microtubules are biopolymers, which means that cells are expressing sort of genes that encode for proteins that form a subunit that can then self-assemble into a larger rod-like structure. And you can see one of these rod-like structures here. They essentially look like straws. They're about 25 nanometers in diameter. And I'm going to show you this video showing you microtubules both disassembling at first and then assembling. So that's a disassembling microtubule. But they also assemble to form these longer rod-like structures.
So these biopolymers are dynamic, and they both assemble but also disassemble. And both the assembly and the disassembly can generate force. Microtubules can push if they polymerize into something. They push it, right? Just like you're kind of poking someone with your finger. Now, they also disassemble, and when they shrink and disassemble, they can actually pull. And I'll show you an example of that right here. So this is an example that's reconstituted, meaning they're just purified proteins here. There's no cell. And what this is, is a bead. And you see the dark stripe here is a microtubule. You can see it growing. There's the end right there. That microtubule's going to grow. It's going to grow out to about here. And then at the end of the movie, it's going to stop growing, and the microtubule's going to shrink back. And you're going to see that when the microtubule shrinks back, it's going to actually pull this large bead and pull it towards the left side of the slide. Here it goes. It depolymerizes and pulls. You see it? So these microtubules can generate a pulling force that's strong enough to pull this glass bead. And also, it's strong enough to pull an entire chromosome. It's actually much stronger-- it generates much more force than it requires to pull a chromosome. So it seems like there's some robustness in the system so that it's generating a pull that's much stronger than needed to actually drag a chromosome through viscous media. So during mitosis, the way this system is set up is there's what's known as a bipolar spindle. I'll just write that down. We'll come back to it in Friday's lecture. There's a bipolar spindle. And the bipolar spindle is made up of a number of microtubules. So here, you can see there are two poles, here and here. And in the middle, along the sort of equator of the cell, the chromosomes will line up. There are just two chromosomes here.
And you can see how microtubules reach out from the pole and attach to both sides of that chromosome. And you can imagine that when these microtubules are told to shrink and disassemble, if they're able to remain attached to that kinetochore, which they are, when they shrink, they're going to pull the two copies of the chromosome away from each other, OK? And so this is the basic machinery that allows for chromosomes to be segregated in cells. OK, that's it for today. Any questions? All right, terrific. Good luck on your exam on Wednesday. It's in here.
MIT_7016_Introductory_Biology_Fall_2018
14_Genetics_3_Linkage_Crossing_Over.txt
ADAM MARTIN: And so I wanted to start today's lecture by continuing what we were talking about in the last lecture. So I'm just going to hide this real quick. And so we're talking about the fruit fly and the white gene and the white mutant, which results in white-eyed flies. And we talked about how if you take females that have red eyes and cross them to white-eyed males, then 100% of the progeny has red eyes in the F1 generation. And so I asked you guys, would you get the same results if you did the reciprocal cross? So what if we took white-eyed females and mated them to red-eyed males? So what about this? Actually, I'm going to move this over here so that maybe it's more visible. So what if we have white-eyed females and crossed this to red-eyed males? So let's unpack this a little bit at a time. So what's the genotype of these white-eyed females here? Miles? AUDIENCE: So if you designate the eye gene as the letter A, a female would be X lowercase a, X lowercase a. ADAM MARTIN: Yes. So Miles is exactly right. So the dominant phenotype is red eyes, because the gene encodes for an enzyme that's important for the production of the red pigment. And so X lowercase a here would be a recessive mutant that lacks the pigment. And because it's a recessive allele-- because you need only one copy of this gene to produce the pigment. So the recessive allele results in the white phenotype. Therefore, this has to be homozygous recessive. How about this red-eyed male? Yeah, Ory? AUDIENCE: Wouldn't you have a Y and then an X capital A? ADAM MARTIN: Yes. So this would be this phenotype, right, where capital A is the gene that produces-- is a normal functioning gene that produces the pigment. So then in your F1 here, are you going to see something similar to this or something different? AUDIENCE: Something different. ADAM MARTIN: Different, great. Who said different? Javier? Do you want to propose what you might see? AUDIENCE: Yeah.
For the males, they're going to inherit the Y gene from the father and the [INAUDIBLE].. ADAM MARTIN: Exactly. So the males are going to get the Y from the father, and they're going to get one X from their mother. So all the males are going to be of this genotype here, which means they're going to have what color eye? Javier is exactly right. That means they're going to have white eyes. So all the males will have white eyes. And what about the females? AUDIENCE: [INAUDIBLE] ADAM MARTIN: What's that? AUDIENCE: Red eyes. ADAM MARTIN: Yeah. So Ory is saying the males are going to get red eyes, right, because they're-- or the females are going to have red eyes, because they're going to get the X chromosome from their father, which has the dominant gene that produces the red pigment. So all the females are going to be heterozygous, but have a functional copy of this gene. So all of the females will have red eyes. OK, does everyone see how-- now, how would this compare with Mendel's crosses and pea color? Would there be a difference in Mendel's crosses if you switch the male versus the female if these were autosomal traits? Ory is shaking his head no, and he's right, right? In that case, it doesn't matter. You can do the reciprocal crosses, you get the same result. But because this is sex linked, which one is the male and which is the female is relevant. And this actually relates to something that we just saw on the MIT news. I just got this email this morning, but it came out I think yesterday, which is that biology-related research in the mechanical engineering department-- specifically, the CAM lab-- they've been able to design a 3D sort of model for ALS disease, which is also known as Lou Gehrig's disease. And so what they've done in the CAM lab is to take cells from either patients that have ALS or from normal individuals, and they coax these cells to become neurons. Here, you're seeing a neuron in blue and green here. 
And you see the neurites extend from this neuron. And they have a model where this neuron can then synapse with a muscle. And so they're using this 3D sort of tissue model to model ALS and to look for drugs that might affect ALS, potentially curing ALS. And so last night, I started reading about ALS and was pleased to find that there's actually a very rare X-linked, dominant form of the disease that can be passed on from generation to generation. And the inheritance pattern of this X-linked, dominant version of ALS would have an inheritance pattern that's similar to what we observe for the white mutant in the fruit fly, right? Whereas, if you have an affected father, and this is a dominant mutant on the X chromosome, then all of his daughters will get that X chromosome and be affected, whereas the sons will all be unaffected. However, if you have the reciprocal situation, where you have an affected mother and an unaffected father, then the sons and daughters get the disease randomly. So this is a sort of form of inheritance, which is relevant if you're considering human disease and some forms of it. Most versions of ALS, well, are sporadic, but inherited forms are usually autosomal dominant. So this is a rare case here. But I thought it was interesting in that it's relevant to what we've been talking about. So now, just to recap-- here, I'll throw this up so everything's up. So in the last lecture, we talked about Mendelian inheritance. And we talked about when you take two parents that differ in two traits and you perform a cross, you get a hybrid individual that is heterozygous for both genes. And now this is the F1 individual. Let's say we want to know what types of gametes this F1 individual produces. We can perform a type of a cross known as a test cross, where we cross this individual to another individual that is homozygous recessive for both these genes, which means that you know exactly which alleles are coming from this parent. 
And they're both recessive, so you can see whether or not the gamete produced by this individual has either the dominant or the recessive allele. Let me see. I'll boost this up. So now we can consider the different types of progeny that result from this test cross. And some will have the two dominant alleles from this parent and will, therefore, be heterozygous for the A and B gene. And it would exhibit the dominant A and B phenotype. I think that is what I'm showing here. So if the chromosomes, during meiosis I, align like this, then the two dominant alleles segregate together, and you get AB gametes. And you also, reciprocally, get these lowercase a and b gametes, as well. So that's the other class here. So you can get these two classes of progeny. And the phenotypes of these two classes will resemble the parents, right? So these are known as parental gametes. So these are the parentals. But you know, because Mendel showed it, that if you have genes and their alleles on separate chromosomes, they can assort independently of each other. So an alternative, equally likely scenario is that the chromosomes align like this, where now the dominant allele of B is on the other side of the spindle. And therefore, these chromosomes are going to segregate like this during the first meiotic division. And that gives rise to gametes that have a different combination of alleles than the parents. So you have some that look like this. So each of these would be different classes of progeny. And you have one last class that would look like this. And so neither of these looks like the original parents, and so they're known as non-parental. And so if these two genes are behaving according to Mendel's second law, where there's independent assortment-- if you have independent assortment, what's going to be the ratio of parental to non-parental? Rachel? AUDIENCE: One to one. ADAM MARTIN: Yeah, Rachel says one to one, and I think a number of others also said one to one.
So you have 50% parental, 50% non-parental, right? Because it's equally likely to get either of those alignments of the homologous chromosomes during meiosis I. So now I'm going to basically break the rules I just explained to you in the last lecture and tell you about an exception, which is known as linkage. Gesundheit. And in the abstract sense, linkage is simply when you have two traits that tend to be inherited together. So just considering probability. So you have traits inherited together. They're exhibiting what is known as linkage. But that's an abstract way to think about it. It's just based on probability, right? So a physical model for what linkage is, is that you have chromosomes. The genes are on the chromosomes. And for two genes to be linked, those genes are physically near each other on the chromosome. So the physical model is that two genes are near each other on the chromosome. OK, so let's consider again these generic genes, A and B. If A and B resulting from this cross-- if these two happen to be on the same chromosome, now they're physically coupled to each other. Then they're going to tend to be inherited together. No matter how these align, they're always going to go together during the first meiotic division. And that's just going to only give you the parental gametes. So if there's linkage, you're going to have-- let's consider the case where you have complete linkage. If you have complete linkage, 100% of the gametes are going to be parentals, and you're going to have 0% non-parental. That's if the genes are really, really, really close to each other, and maybe you don't count so many progeny. You won't see any mixing between the two. But there is a phenomenon that can separate these genes, and it's known as crossing over. And another term to describe it is recombination. So the alleles are getting recombined between the chromosomes. Recombination. 
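The 50/50 expectation under independent assortment can be checked with a quick enumeration in Python (a sketch; the two-letter gamete labels are my own shorthand for the allele combinations discussed above):

```python
from itertools import product

# Gametes from an AaBb dihybrid whose parents contributed "AB" and "ab".
# Under independent assortment, each allele of gene A is equally likely
# to end up with either allele of gene B.
gametes = [a + b for a, b in product(("A", "a"), ("B", "b"))]

parental = {"AB", "ab"}  # combinations that match the original parents
n_parental = sum(1 for g in gametes if g in parental)
n_nonparental = len(gametes) - n_parental

print(gametes)                    # ['AB', 'Ab', 'aB', 'ab']
print(n_parental, n_nonparental)  # 2 2 -> a 1:1 (50/50) ratio
```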
And what crossing over or recombination is, is it's a mixing of the chromosomes, if you will. Or it's an exchange of DNA. So there's a physical exchange of DNA from one of the homologous chromosomes to the other, OK? So you can think of this as an exchange of DNA between the homologous chromosomes, OK? And that's important. It's not an exchange between the sister chromatids, but between the homologous chromosomes that have the different alleles. And what's shown here is a micrograph showing you a picture of the process of crossing over. You can see the centromeres are the dark structures there. And you can see how the homologous chromosomes intertwine. And there are regions where it looks like there's a cross. Those are the homologous chromosomes crossing over and exchanging DNA such that one part of that chromosome gets attached to the other centromere. So I'll just show you in sort of my silly cartoon form how this is, just to make it clear. So let's say, again, you have these A and B genes, and they're physically linked on the chromosome. During crossing over, you can get an exchange of these alleles, such that a bit of one chromosome goes to the other homologous chromosome and vice versa, OK? So now you have the dominant A allele with the recessive b allele and vice versa. So now during meiosis I-- after meiosis II, this will give rise to two types of gametes, one of which is non-parental. And the same for the one down here. You get two types of gametes. One is non-parental-- the lowercase a, uppercase B gamete. So this happens if there's incomplete linkage. That means there can be a recombination event that separates the two genes. And I'm going to give you an example of a case where data was collected on what fraction of each class there is. So now we're considering an example where you have linkage. So A and B are on the same chromosome. And so we'll consider a case where, in this class, there are 165 members. For this one, there are 191.
So I'm kind of drawing a line down like this. And then for the first recombinant class, 23 individuals. And for the last, there are 21 individuals. So you can see there are many more of the parental class than the recombinant class, but we can calculate a frequency, or recombination frequency, between these two genes. And in this case, the recombination frequency is 44/400, which is equal to 11%, OK? So 11% of the progeny from this cross had some sort of crossing over between the A and B alleles. It would've been up here. Now, this frequency is interesting, because it is proportional to the distance that separates these two genes. So this recombination frequency is proportional to the linear distance along the chromosome between the genes. Now, it also depends on the recombination frequency in a given organism or in a given part of the chromosome. So when you're comparing recombination frequencies between different organisms, there are actually differences in the rates-- they're not equivalent. You can't compare them, basically. And also, there are regions of the chromosome where recombination happens less frequently than others. And so, again, you can't compare distances along those. But overall, you can use this as a distance in order to map genes along the linear axis of a chromosome. And maps are useful, because you can see where stuff is, right? So in this example here, I'll highlight a couple of places. Here's Rivendell. Here's Lonely Mountain. Here's Beorn's house. So let's say we are able to determine the distance between Rivendell and Lonely Mountain, and the distance between Lonely Mountain and Beorn's house, and the distance between Rivendell and Beorn's house. You'd be able to get a relative picture of where all of these places are in relation to each other. So this is a two-dimensional map I'm showing here. It's not one dimensional, but chromosomes are one dimensional, so it's a bit more accurate, OK?
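The arithmetic for the recombination frequency can be written out in a few lines (the counts come from the cross above; the dictionary keys are my own labels for the four classes):

```python
# Progeny counts from the two-gene test cross: "AB"/"ab" are the
# parental classes, "Ab"/"aB" are the recombinant classes.
counts = {"AB": 165, "ab": 191, "Ab": 23, "aB": 21}

total = sum(counts.values())                # 400 progeny in all
recombinants = counts["Ab"] + counts["aB"]  # 44 recombinant progeny
rf = recombinants / total                   # recombination frequency

print(f"{rf:.0%}")  # 11% -> 11 map units between genes A and B
```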
So this idea that recombination frequency can be used to measure distances between genes and that this could be used to generate a map is an idea that an undergraduate had while working in Thomas Hunt Morgan's lab back in 1911. And what I find fascinating about the story is this guy basically blew off his homework to produce the first genetic map in any organism. So the person who did it was Alfred Sturtevant, and he was an undergraduate at Columbia working for Thomas Hunt Morgan. And I'll just paraphrase this quote here. In 1911, he was talking with his advisor, Morgan, and he realized that the variations in the strength of linkage attributed by Morgan to differences in the separation of genes-- so Morgan had already made this connection, that the recombination frequency reflects the distance between the genes. But then Sturtevant realized that this offered the possibility of determining sequences in the linear dimension of the chromosome between the genes, OK? So then-- this is my favorite part-- "I went home and spent most of the night, to the neglect of my undergraduate homework, in producing the first chromosome map." And this is it. So the first chromosome map was of the Drosophila X chromosome, which we've been talking about. There's the white gene, which we've been talking about in the context of eye color. There's a yellow body gene here. There's vermilion, miniature, rudimentary, right? These are all visible phenotypes that you can see in the fly. And you can measure recombination between various alleles of these different genes. All right, so now I want to go through with you an example of how you can make one of these genetic maps. And it's essentially the same conceptually as to what Sturtevant did. And it involves what is known as a three-point cross. So a three-point cross. So there are going to be three genes, all of which are going to be hybrid, and I'll start with the parental generation that is little a, capital B, capital D. 
And we'll cross this fly or organism to an organism that is capital A, lowercase b, lowercase d. Yes, Carmen? AUDIENCE: So when you write the gametes up there, does that imply that they were homozygous parents? ADAM MARTIN: So what I'm writing here is the phenotype, basically. And so these are homozygous for each of these, yes. I could also write this as-- but I'm not going to draw the chromosomes, because it kind of gets more confusing. I'll draw the chromosomes here in F1, because we have now, basically, a tri-hybrid with one chromosome that looks like this, right? They got that chromosome from this individual here. And another chromosome will look like this. See? So this F1 fly is heterozygous for these three genes, and it has these two parental chromosomes. So now we can look at the gametes that result from this fly by doing a test cross, just like we did before. And so we want to cross this to a fly that's homozygous recessive for each of these genes. And now we can look at the progeny. And just by looking at the phenotype, we're going to know the genotype, because we know all of the flies from this cross have a chromosome from this individual that has recessive alleles for each gene. So we can consider now this first one here. That's one potential class of progeny. Another class would be this one. And these two, you can see, resemble the parents, right? So these are the parental classes of the progeny. So this is parental. All right, now you can consider all other combinations of alleles. And so I'll quickly write them down. You could have something-- progeny that look like this and this. These are just kind of reciprocal from each other. You could have progeny that look like this and this. And the last class would be this and this. So all of these progeny that I drew down here are recombinant, because they don't resemble the parents, right?
Because there are three genes, now there's many more ways to get recombinant progeny, as opposed to having just two genes, right? So you can have many different combinations of these different alleles. And so now I'm going to give you data from a cross with three such genes. So you might get 580 individuals that look like this, 592 like this, 45 and 40, 89, 94, 3, and 5. So this is data that is, I believe, from fly genes. I've just ignored the fly nomenclature, because it's confusing, and just given them lettered names, OK? But this reflects data from some cross somewhere. So now we want to know-- let's go back to our map. We want to make a map, OK? And so to make our map, we're going to want to consider all pairwise distances between different genes. So we'll start with the A and B gene. I'll write over here. So let's consider the A/B distance. And remember, to get a distance, we're looking at the number of the frequency which there is recombination between these two genes, OK? So we now have to look through all of these recombinant class of progeny and figure out the ones that have had a recombination between A and B, right? So on the parent chromosome, you see little lowercase a started out with capital B and vice versa. So any case where we don't have lowercase a paired with capital B, there's been some type of exchange. So here, lowercase a's with lowercase b. So that's a recombinant. Here, uppercase A's with uppercase B. That's a recombinant, too. So we have to add all these up. So 45 plus 40. How about here? Recombination here or no? Yes. I'm hearing yes. That's correct. Here, recombination, yes or no? Carmen's shaking her head no. She's exactly right. So we just have to-- these are all the recombinants between A and B. So it's 45 plus 40 plus 89 plus 94, which equals 268 over a total progeny of 1,448. And that gives you a map distance of 18.5%. Because this method was developed in Morgan's lab, this measurement is also known as a centimorgan. 
It was named in honor of Morgan. So that's what I refer to when I have lowercase c capital M. That's a centimorgan. So you can also use centimorgan here. All right, so that's A and B, but now we have to consider other distances. So how about the A/D distance? And again, we have to go through and figure out where the alleles for A and D have been recombined, OK? So little a is with capital D, and upper case A is with lowercase d. So we have to find all the cases where that's not the case. Here, this is lowercase a with capital D and capital A with lowercase d. This is parental from the respect of just the A and D genes, but all the rest of these guys are recombinants, OK? So this is 89 plus 94 plus 3 plus 5, which comes out to be 191 over 1,448. And this is 13.2 centimorgans. So that's the distance between A and D. Distance between A-B, distance between A-D. So the last combination, then, is just B and D. So if we consider the B/D distance, again, we have to look for all cases in which lowercase b and d become separated and uppercase B and D become separated. Here, they're separated. Here, they're separated. Wait, no, not here, sorry. Here, they're not separated. Here, they are separated. Everyone see how I'm doing this? Are there any questions about it? You can just shout it out if you have a question? So this distance is 6.4 centimorgans. Everyone see how I'm considering every pairwise combination of genes and then just ignoring the other one and looking for where there's been a recombination in the progeny? So now that we have our distances, we can make our map, right? So the two genes that are farthest apart are A and B, OK? So that's kind of like here, Rivendell and Lonely Mountain. Those are the two genes that are at the extremities. So I'll draw this out. It doesn't matter which way you put it. We're just mapping these genes relative to each other. But B and A are the farthest apart from each other. 
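All three pairwise distances can be computed the same way in one short sketch (the class counts are the ones given above; the genotype strings and helper names are my own):

```python
# Each key lists the allele inherited from the tri-hybrid parent at
# the A, B, and D genes (positions 0, 1, 2 of the string).
counts = {
    "aBD": 580, "Abd": 592,  # parental classes
    "abD": 45,  "ABd": 40,
    "abd": 89,  "ABD": 94,
    "aBd": 3,   "AbD": 5,
}
total = sum(counts.values())  # 1,448 progeny

# Allele pairings present on the two parental chromosomes (a B D / A b d).
parental_pairs = {("a", "B"), ("A", "b"), ("a", "D"),
                  ("A", "d"), ("B", "D"), ("b", "d")}

def distance(i, j):
    """Map distance in centimorgans between the genes at positions i and j."""
    rec = sum(n for geno, n in counts.items()
              if (geno[i], geno[j]) not in parental_pairs)
    return 100 * rec / total

print(f"A-B: {distance(0, 1):.1f} cM")  # 18.5
print(f"A-D: {distance(0, 2):.1f} cM")  # 13.2
print(f"B-D: {distance(1, 2):.1f} cM")  # 6.4
```

Ordering the genes by these distances reproduces the map on the board: B and A at the ends, with D closer to B.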
Now if we consider the distance between B and D, that's 6.4 centimorgans. So it appears that D is closer to B than it is to A, because it's 13 centimorgans away from A, OK? So D is kind of like Beorn's house here. It's closer to Rivendell than it is to Lonely Mountain. So we'll put that in there. This distance is 6.4 centimorgans. And then the distance here is 13.2 centimorgans. As far as I know, no one in the field of genetic mapping has been assaulted by large spiders, but the field is still young. So one thing that should be maybe bothering you right now is if you add up the distance between B and D and D and A, you don't, in fact, get 18.5 centimorgans. Instead, you get 19.6 centimorgans. This is 19.6 centimorgans. So it seems like, somehow, we are underestimating this distance here, OK? So we seem to be underestimating this. So why is it that we are underestimating this distance? Well, to consider that, you have to sort of look at how all of these classes were generated. So now I'm going to go through each class, and we'll look at how it was generated. I'm also going to-- well, I'll just draw new chromosomes. So we have to draw this order now. We have B, D, a. So the first chromosome is B, D, a. That's right, B, D, a. The other chromosome is b, d, A. So now we look and see how this recombinant class was generated. So this is lowercase a. We'll start with b-- lowercase b, lowercase a, but capital D. So it's capital D, lowercase a. So that recombinant results from this chromosome here, where there's crossing over, and little b gets hooked up to big D and little a. See how I did that? And then if we consider this class, this is uppercase B, lowercase d, capital A. So these two classes of progeny result from a single crossover between B and D, OK? So this is a single crossover between B and D genes. And now we can go through and look at how this is generated. So to get all recessive alleles on the same chromosome, there would be a crossover here.
And so this is a single crossover between D and A. So a single crossover between D and A. Now, these last couple classes of progeny are interesting in that they're the least frequent class. And so when we consider how they're generated, we'll start with uppercase B. Let's see if I can get rid of this. So uppercase B, lowercase d, lowercase a. And so what this last class is, is actually a double crossover. So this is a double crossover. And it's least frequent because there's a lower probability of getting two crossovers in this region. But now you see that even though it doesn't look like there was recombination between A and D, in fact, there was. There were two crossovers, and it just looks like there was no recombination, if you didn't see the behavior of gene D. So if we take into account that there are actually double crossovers between B and A, then if we add that into our calculation here, where you add in 3 plus 5 multiplied by 2 because these are both double crossover events, then you get the 19.6 centimorgans that you would expect by adding up the other recombinations, OK? How is that? Is that clear to everybody? You're going to have to make maps like this on the problem set and possibly the test. So make sure you can given-- yeah, Ory? AUDIENCE: I realized that you immediately [INAUDIBLE] overestimated the difference between A and B and not overestimated B to D or D to A? ADAM MARTIN: It's because when you have two genes that are very far apart, you can have multiple crossovers. And when you have sort of crossovers that are in pairs of two, then it's going to go from one strand back to the other, and so you're not going to see a recombination between the two alleles. So it's an underestimate, because if you have multiples of two in terms of crossovers, you're going to miss the recombination events. You see what I mean? You understand that you can miss the double crossover events? AUDIENCE: Yeah, I get that. ADAM MARTIN: Yeah, right? 
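The double-crossover correction works out numerically like this (a sketch using the counts from the cross; each of the 3 + 5 rarest progeny hides two exchanges between B and A):

```python
total = 1448
visible = 45 + 40 + 89 + 94  # progeny scored as recombinant between B and A (268)
doubles = 3 + 5              # double-crossover progeny, which look parental for B and A alone

naive = 100 * visible / total                      # two-point estimate
corrected = 100 * (visible + 2 * doubles) / total  # count each double crossover twice

print(f"{naive:.1f} cM -> {corrected:.1f} cM")  # 18.5 cM -> 19.6 cM
# 19.6 cM matches B-D (6.4 cM) plus D-A (13.2 cM) added together.
```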
So then you're going to underestimate the number of crossovers that actually happened in that genetic region. All right, now I want to end with an experiment that, again, makes the point that genes are these entities that are on chromosomes. So just like you can have linkage between two genes on the same chromosome, you can also have linkage between genes and physical structures on chromosomes, like the centromere. So you could have genes like A and B here that are present on the chromosomes and present very near the centromere of those chromosomes, OK? So they could be right on top of the centromere, OK? And to show you how this manifests itself, I have to tell you about another organism, which is a unicellular organism called yeast. And yeast is special in that it can exist in both a haploid and a diploid form. So it has a lifecycle that involves it going both as a haploid and as a diploid. And so you can take yeast-- and we'll take two haploid yeast cells. And much like gametes, these can fuse to form a zygote. So, in this case, I'm taking-- again, we'll consider two generic genes, A and B. And we'll make a diploid yeast cell that is heterozygous, or hybrid, for A and B. And what's great and special about yeast and why I'm telling you about this is because as opposed to flies and us and other organisms, the product of a single meiosis is packaged in this single package, if you will. So the yeast can undergo meiosis, and the product of a single meiosis is present in this case, where each of these would represent a haploid cell that then can divide and make many cells. But this is the product of a single meiosis in a package, OK? So you can actually see the direct result of a meiotic division, a single meiotic division. So this is the product of a single meiotic division. And that's special because when we make gametes, we have individual cells. All the products of meiosis are split up, and then just one randomly finds an egg and fertilizes it.
So you don't know which of the gametes are from the product of a single meiosis. And so being able to see the product of a single meiosis allows us to see things like genes being linked to physical structures on the chromosome like the centromere. So if we consider this case, these two genes are both linked to the centromere. And during metaphase of meiosis I, they could align like this, in which case you would get spores that are parental for both dominant alleles or parental for both recessive alleles. So each of these cells is known as a spore. So I'll label spore numbers here. So this is spore number. And in this case, you get two spores that are dominant for both alleles and two spores that are recessive for both alleles. Because there are two types, it's known as a ditype. And this is a parental ditype, because you have two types of spores, and they are both parental. An alternative scenario is that these chromosomes would align differently, right? So you get parental spores there. However, alternatively, you could have this configuration, where this is now flipped. And during meiosis I here, these chromosomes move together. And they, again, produce two types of spores, so it's a ditype. But in this case, all the spores are non-parental. So another scenario is you get this. And because there are two types and they're non-parental, this is known as non-parental ditype. That's a non-parental ditype. And if these genes are linked to the centromere completely, then you can only get these two classes of packages, OK? So if these genes are unlinked-- so the two genes are unlinked, but both linked to the centromere, then you get 50% parental ditype and 50% non-parental ditype. So I'm abbreviating parental ditype PD and non-parental ditype NPD. So what has to happen to get another type of spore?
And another type of spore would be-- you could have spores that are all different genotypes from each other, so that you have dominant A, dominant B; dominant A, recessive b; recessive a, dominant B; and recessive a, recessive b. And this is known as tetratype, because there are four types. So how do you get this tetratype? Anyone have an idea? Yeah, Jeremy? AUDIENCE: You're crossing over. So one of A and B would switch in one of the [INAUDIBLE]. ADAM MARTIN: Where would the crossing over happen? AUDIENCE: Between the two [INAUDIBLE] one [INAUDIBLE]. ADAM MARTIN: Between the allele and what? AUDIENCE: Sorry? ADAM MARTIN: The crossing over would occur between the gene-- AUDIENCE: Oh, and the centromere. ADAM MARTIN: And the centromere, exactly. Jeremy is exactly right. So Jeremy said that in order to get a tetratype, you have to have a recombination event, but this time, not between two genes, but between a gene and the centromere. So at least one of the genes has to be unlinked to the centromere. And in that case, now you get a meiotic event that gives rise to four spores. And there are four different ways to get this. So if you have two genes unlinked and at least one is unlinked to the centromere, then you get a pattern where you have a 1 to 1 to 4 ratio between all these different events. So you have a 1 to 1 to 4 ratio between parental ditype, non-parental ditype, and tetratype. And we can see this in yeast. If you have two genes that are linked to the centromere, you only get parental ditypes and non-parental ditypes, whereas virtually everything else gives rise to tetratypes, except if they're linked. What happens if the two genes are linked to each other, regardless of the centromere? If you have two genes that are linked, what's going to be-- what are your progeny going to look like? AUDIENCE: Parentals. ADAM MARTIN: You're only going to get the parentals, or you're going to get a lot of parentals. Javier is exactly right, right?
If the two genes are linked, the parental ditypes are going to be much greater than any of the other classes. Now, this might seem esoteric, but I like the idea that you can have linkage between a gene and something that's just the place on the chromosome that's getting physically pulled. It all makes it much more physical, which I think is nice to think about. All right, we're almost done. I just have-- yes, Natalie? AUDIENCE: Can you go over what the PD [INAUDIBLE]? ADAM MARTIN: Yes. So PD is parental ditype. So this is the parental ditype. It's a class of product here where you get four spores that are each of these genotypes, OK? So each of these 1, 2, 3, and 4 would represent one of these cells from a single meiotic event. Does that make sense, Natalie? Does everyone see what I did there? So these 1, 2, 3, and 4 are the spores of the meiotic event right here.
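The PD/NPD/TT bookkeeping can be written out as a small classifier. A Python sketch; the genotype-string encoding (uppercase for the dominant allele) and the function name are my own, not from the lecture.

```python
# Classify one yeast tetrad -- the four spores from a single meiosis --
# as parental ditype (PD), non-parental ditype (NPD), or tetratype (TT).
# Genotypes are strings like "AB" (both dominant) or "ab" (both recessive).

def classify_tetrad(spores, parentals=("AB", "ab")):
    """spores: the four genotype strings from one meiotic event."""
    kinds = set(spores)
    if len(kinds) == 2 and kinds == set(parentals):
        return "PD"   # two spore types, both parental
    if len(kinds) == 2 and not (kinds & set(parentals)):
        return "NPD"  # two spore types, both non-parental
    if len(kinds) == 4:
        return "TT"   # four types: requires a crossover between a gene
                      # and its centromere
    raise ValueError("unexpected tetrad composition")

print(classify_tetrad(["AB", "AB", "ab", "ab"]))  # PD
print(classify_tetrad(["Ab", "Ab", "aB", "aB"]))  # NPD
print(classify_tetrad(["AB", "Ab", "aB", "ab"]))  # TT
```

Counting these over many tetrads gives the inferences from the lecture: PD much greater than NPD means the two genes are linked; PD roughly equal to NPD means they are unlinked, and a large tetratype fraction (the 1:1:4 pattern) tells you at least one gene lies away from its centromere.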
MIT 7.016 Introductory Biology, Fall 2018. Lecture 33: Bacteria and Antibiotic Resistance.
BARBARA IMPERIALI: OK, I want to walk us through a bit of an exercise to understand what happens when bacteria become resistant to an antibiotic. What's the molecular basis for resistance? It's nothing magical. It's really things that you can understand based on what you've learned during various parts of the course. But I just want to remind you about this dreadful schematic here, which shows how rapidly resistance emerges to different antibiotics by showing you the year that the drugs were introduced and the year that resistance developed. One of the newest antibiotics to be introduced-- people thought, oh, it's a different mechanism of action. It should be pretty resilient. It should last for a while. That was the hope with daptomycin. It's a cyclic peptide antibiotic, which has a particular structure that doesn't look like a lot of the others. And honestly, it was two to three years before resistance emerged. So what I want to do is think together about what are the ways in which a bacterium could evolve to develop resistance against an antibiotic? So here we've got the target. We know that the antibiotic is very effective against the target. What types of things could happen in the bacterium to make it manage to just ignore the antibiotic and resist the antibiotic? Any suggestions? So it's very simple. There's a molecular target. It could be topoisomerase. It could be in fact the ribosome and the machinery for synthesizing proteins. It could be the machinery that cross-links peptidoglycan. What sorts of approaches and what sorts of strategies might evolve to make that antibiotic stop working? OK, fire. Yes. AUDIENCE: When two hydrogen-holding enzymes at-- create things that are not [INAUDIBLE] absorbing. BARBARA IMPERIALI: OK, right. So antibiotic gets in. The enzyme breaks up the antibiotic, so evolution to destroy the antibiotic. And that's very much what happens with penicillin.
The key aspect of the structure that's so useful suddenly becomes invalidated through a degradation of the beta-lactam bond. So that's one of them. What's next? Yeah. AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: Ah, so maybe there could be uptake decreased, so that's quite hard, but there could be some evolution of the cell wall to make it less permeable to the antibiotic because we're usually relying on antibiotics to diffuse in passively. So if the membrane-- this could be in a membrane of a Gram-negative bacterium, or the outer membrane of a Gram-negative bacterium that has a slightly different composition, could physically change its structure. These are tough things to evolve all at once, but it's certainly a possibility. We've talked about uptake. What about-- what else could happen? Yes. AUDIENCE: When you get a [INAUDIBLE] out, such that they have oxygen [INAUDIBLE]. BARBARA IMPERIALI: OK. So this guy could just-- we've got a circular antibiotic usually, but this just changes-- the target just changes. It can't bind anymore, through mutation, so that the antibiotic simply doesn't bind. It gets in, but it's changed, and that happens a lot. It also happens a lot-- one can think of antibiotic resistance very much in the same vein as one thinks of resistance to chemotherapeutic agents. You're targeting a kinase. Your drug works great. A year later, the cancer comes back because there's a single mutation in your target. This happens a lot with the EGF receptor kinase, Ras, and so on. There's a dramatic change. So this change would be a change in the target. You mentioned two things, right? I thought. Maybe not. AUDIENCE: Well, like I said, there were some for that [INAUDIBLE]. BARBARA IMPERIALI: Ah. On the surface. AUDIENCE: A couple of days you're going back, so. BARBARA IMPERIALI: OK, so there could be some kind of import strategy. So some antibiotics just diffuse. Some go in through targeted import, and that might change so that the antibiotic can't get in anymore.
Other thoughts? There's a couple of other sneaky ways. I mean, you've got to sort of give these bacteria credit for maximum sneakiness. So if influx is an issue, what about efflux? The biggest problem with antibiotics against Gram-negatives is that they way upregulate efflux pumps. They just, like, cover their cells with pumps that just go, you're going to give me an antibiotic? I'm just going to lob it straight back out to you. So the efflux pumps increase. So in our own cells, we have a lot of efflux pumps to just kick out things that we don't want in our cells. The bacteria have similar things: fairly promiscuous exporting pumps that will bind to things that don't look like things that should be in a cell, and literally bind to them on the inside of the membrane, [MAKES SQUIRT SOUND] send them back to the outside of the membrane. So Gram-negative bacteria can massively upregulate the production of a pump that they already have, but they just make much more of them. So in many cases, you can hardly test a new compound because it's getting pumped out as fast as it gets pumped in. What could be a strategy when this happens? Because people are doing this. You inhibit the pump. So you make your antibiotic-- it works great, but you stop the pump working. So you have to give two drugs, the drug that's the antibiotic to the target and the drug that inhibits the pump, and that happens. Similarly, with this mechanism, where the antibiotic gets destroyed, you can recover from this by inhibiting the destroying enzyme, and then your antibiotic doesn't get destroyed when it gets into the cells. So there's one extremely important formulation of antibiotic that is used. It's called Augmentin. And what it is is a penicillin plus a beta-lactamase inhibitor. It's the drug that works plus a drug that inhibits the enzyme that destroys the drug. People get this all the time. Every day of the week, this stuff is prescribed.
It's a two-compound cocktail that has something to overcome the resistance so that your drug still works in a cell. All right? And there's one more key mechanism, and what bacteria will just do is they say, well, you know, I'm getting dosed with this much antibiotic. What's a good way around it is to massively upregulate the biosynthesis of the target to a state where you just can't saturate it all. So upregulation of the target is a very, very common thing. So you just increase the number of transcripts being made. You increase the amount of target being made so that, even if antibiotics are flooding into cells, there's just not enough to inhibit all of the target because it's been upregulated by 10 or 100-fold. So what I think is cool about all these mechanisms is they all make sense. You just kind of have to think of them-- do I stop the drug getting in? Do I stop it getting pumped out? Do I stop it getting degraded? Do I make more target? All of those things are very viable, and they are strategies that are used quite commonly. So very commonly, both in bacteria and in viruses, we seldom give one compound. We commonly give multiple compounds to sort of hit multiple targets, because if you gave a two-drug cocktail to a bacterium, but you knew there was going to be upregulation of a target, you could hope that the other enzyme is still a target. So you give cocktails of drugs, as opposed to single drugs. And you're going to see that very relevantly when we talk about the HIV virus because it's only been that HIV has become-- HIV-AIDS has become a treatable condition because of drug cocktails, not because of singular drugs that inhibit one step. And you'll see it very, very commonly there. Any questions about this stuff? OK. So we're going to move on to viruses. And so I will actually update the slides that are on the web to give you this set of information so you can see it in one place. OK, viruses. Viruses are fascinating organisms.
They don't have the right to be alive because they don't have the machinery to be alive, but they exploit the host's mechanisms for completing their viability. So viruses, more or less, I think we like to think of them as living a borrowed life. And they only survive if they have spent some time inside a host cell. So we will see with viruses, some viruses specifically target humans. Other viruses may spend some time in different organisms, and then target humans, and be carried around amongst different organisms. But the key thing is that viruses can only actually replicate once they're inside a host cell because they basically exploit all of the host cell's machinery to do that. So viruses don't make many of their own enzymes. They don't have their own amino acid supplies or all the metabolic enzymes that are required for life, all the replication, transcription, translation machinery. They just borrow the host's machinery, but there are occasionally individual components that the virus will bring along with it to cover certain things that are not provided by the hosts. But viral genomes are tiny. They may comprise maybe eight genes. Some of them are a lot-- are bigger, but they are very, very small genomes. They're very, what we call, parsimonious genomes, so there's overlapping genes, so you can keep the genome tiny by having-- compacting the size of the genome. And then there are bigger viruses. Some of the biggest viruses approach the sizes of bacteria, like the mimiviruses, and they may have been an intermediate step from virus to more elaborated organisms. And those viruses have a bit more machinery within their contexts. People-- obviously, there's no fossil record for viruses. It's not like we can go, you know, go exploring and find a fossil record. But where these viruses are being found is in the permafrost, so they're frozen. They've been frozen there for centuries. 
So people are finding really sort of scary things in the Siberian permafrost because the viruses are preserved there. And some of these giant viruses are being discovered in those locations. And if that's not the subject for wonderful sci-fi books, I don't know what is, because there are-- and I tend to read those things because my favorite thing is finding mistakes in them. So there's a lot out there about those kinds of things. So let me just show you a tiny bit about-- you know, this is that boring old "learn genetics" thing right from the beginning, but what I want to take you back to is sizes. So we know all this stuff. We've learned it to death, hemoglobin, antibodies, and ribosome. But what I really want to point out on this slide is that the smallest viruses, like the rhinovirus-- that's the common cold-- or the hepatitis virus are barely any bigger than the ribosome. So obviously, there's not much in the virus, but all the components of the virus have evolved to enable them to sneakily get into host cells, exploit the host cell machinery, and then replicate inside the host cell, and then get out of the host cell ready to infect another host cell. So I want you to really notice these sizes. So the rhinovirus is similar to a ribosome, but some viruses, which we will talk about, the influenza virus and the HIV virus are a little bigger, but none of them come close to the size of a bacterium. And just to get you into that mode, there are the bacteria, and there's the mitochondria, and remember endosymbiotic theory, bacteria, similar size to mitochondria, and much, much bigger than any typical virus. But giant viruses approach some of these bigger sizes. They're a different ballgame altogether. And we won't talk about them, apart from the fact that they're really cool, and they're in the permafrost. All right, so another impressive thing about viruses is how they look-- some of them look like this.
Phage, a bacterial virus, they look like things, lunar landers, for example, or sort of other kinds of things. Some of them are linear. Some of them are different kinds of shapes. A lot of viruses are icosahedra. We'll talk about that in a moment. But the fact sheet about viruses is first of all sizes. And the typical viral size, if there is something typical, extends from 20 to 400 nanometers in diameter. So remember, the ribosome sits right at this end with respect to size, but bacteria-- oh, yellow, I can't do a yellow lecture here-- remember, are 1 to 10 micrometers in length, depending on what dimension you're measuring, so considerably smaller, nanometer scale, micrometer scale for bacteria. So that's the first thing that it's important to know. They're very small. The next critical thing is what's in a virus? What is its-- what's the blood and guts of a virus? And it's either DNA or RNA, and it can be single-stranded or double-stranded. So it has its genetic material. Its genetic material is usually dedicated to making more copies of itself. So if the virus has a coat, a coat of proteins, the virus has to have a gene for that because the host isn't going to have a coat for a virus. So the virus has to have certain specialized things that complete itself that can't be borrowed from the host cell. And they can be-- let's see-- they can be what are called capsid viruses or enveloped. Capsid viruses just have a protein coat. The enveloped viruses have a membrane surrounding them with proteins stuck into them. So the enveloped viruses have an outer membrane which is studded with proteins. But what is cool about the virus is it never makes its own membranes. It doesn't make its phospholipids. It just, as it emerges from a host cell, pinches a piece of the cell surface. It steals the cellular membrane with it, as it's emerging from a cell.
And you'll see this in a video, how cool that is, all the proteins that-- all the components of the virus cluster near the surface of a membrane, and then you have this wonderful endocytosis using the host membrane. So the virus never has to make its own membrane. And not all viruses are enveloped, just some of them, and you'll see examples of each. So the definition really is that they're small, infectious agents, that they only replicate inside living cells because they have to exploit a lot of the machinery of living cells. And they can infect humans, other animals. There are plant viruses. Bacteria have viruses. So all living organisms have viruses that infect them. But viruses are usually targeted very specifically to the cells that they infect. And, in fact, you will see with HIV, it's not just a virus that infects a human host. It specifically infects T cells, and that was why it was so terrifying. That's the one cell type in the host that it goes after. So viruses very often target to particular organs within their hosts, and that's why we know some of the viruses. And you'll see that the names of the viruses are related to the organs that they may infect. So I want to just briefly describe the terms. We talk about these with all infectious diseases, that they may be endemic, epidemic, and pandemic. Endemic is the term that we use for, there's a very low level of an infectious agent in the population. It's completely under control. There's a few cases, but there's not a transfer from person to person or animal to animal. We would call that endemic, a very, very low level of virus that doesn't cause any threat.
As soon as the virus or bacterial infectious agent starts spreading amongst a local population, we would call that a local epidemic, so all of a community, all of a country, so very much defined geographically into a particular space where there's transmission of viruses from person to person or animal to person within a community. There has to be direct contact. But now, with travel, many viruses reach pandemic stages, which means worldwide. So plane travel really caused enormous trouble because you can have a virus in Africa or Asia. Somebody gets on a plane and ends up somewhere else, and the virus has been moved to a new country. I made the terrible mistake of reading Hot Zone on a plane one time, which is about Marburg virus, which is where people basically just start bleeding out on the spot. And I'm reading this book, and it's describing the steps of someone who had Marburg and was just sort of bleeding out next to them on the plane. And I'm like, are you crazy, reading this book on a plane? Because they were describing how Marburg was just moved from its country of origin to New York City, or something like that. So you remember when we had the Ebola concerns. There was a real, genuine worry that Ebola would jump through flight travel, through people coming in at airports, and end up with a pandemic of Ebola. When there was a real problem with the avian flu in Asia, Singapore, which is very, very protective of its territory, had sensors that would-- you would go into Singapore, and you'd go down these two huge escalators. And they had sensors measuring people's temperature at a distance as they came down the escalator, and hauling people over, and sort of interrogating them, where have you been?-- to see whether they would be allowed to enter Singapore, because the flight travel, people getting on planes, spreading a very contagious virus to a new country is very, very realistic.
The issue with spreading to pandemic situations is very, very important when one thinks of history, because when the Europeans were conquering the Americas, in particular South America and Central America, they brought with them a lot of viruses. But the Europeans had an innate sort of resistance because of years and years of exposure, while these communities had never seen these viruses, so millions of people died because they were suddenly exposed to a human virus that they had never seen before, through transmission from a country where there wasn't such a problem with the virus. So the indigenous peoples of the Americas, Australia, and New Zealand had terrible consequences there. Some of you have probably heard of the Spanish flu, and that was towards the tail end of World War I and is thought to have killed as many as 100 million people. And that, in fact, is quite interesting. It's called the Spanish flu, but there's some evidence that it might have originated in the Americas, in the boats that took troops over to Europe to help at the tail end of the First World War. And there's a really interesting book about that whole story, that the Spanish flu may not have originated in Spain. And that's a-- it's definitely a worthwhile read. So that tells you a lot about the statistics of viruses. I just want to highlight here, we talk about HIV as a very serious virus. It emerged in the early '80s to this current-- well, in 2011, there were 35 million people infected. There's about 2 and 1/2 million new cases a year. But what's fascinating about HIV-- there was a stage before the really good antivirals were available that, if the mother had HIV, the baby would get HIV. But now, if there's treatment of the mother, and the baby is delivered, often by a Caesarean section, the baby can escape being infected with the virus due to the new antivirals. So that's really important, that the-- originally, there were a lot of cases of newborns who simply got HIV during birth.
But now that can be-- there's escape from that, which is really, really cool. The viral load can be brought really low with the common antivirals against HIV, and that next generation doesn't have that sentence. So I mentioned to you that a lot of viruses are basically named after the organs that they hit. So I've just got a human being here with a lot of different viruses that hit different places, and I just want to point out a few points. Viruses may be targets nowadays of childhood vaccinations, and many of you-- I hope all of you-- have had vaccinations to many of these common viruses. There is a concern now with communities that are deciding not to vaccinate children. That's a huge social problem. Initially, people can sort of get away with it because there's community vaccination. You're in a community where a lot of people have a resistance or some sort of immunity to a virus. But as communities become less and less vaccinated, then later generations will start to get the disease seriously. And that's actually happening in parts of the world where there's-- there used to be no polio, and now there's polio emerging because the community immunity is fading away. So we hope that people get vaccinated. That's for sure. The vaccinations work. Several viruses were pretty much eradicated. Smallpox and polio were two of the real poster examples of childhood vaccinations that worked and worked amazingly well. But now, there's a problem with failure to vaccinate in certain parts of the world. So that's a concern. And then, another interesting thing is that some viruses-- whoops, oops, go back. Sorry. Go back, back, back, back. Don't now-- that's all a secret. You can't see that just yet. Some viruses lead to cancer: human papilloma virus (where there is a vaccine), HIV, some of the types of hepatitis, and Epstein-Barr are all associated with later cases of cancer. That's important to know.
So often viruses are named after the organ that they attack, so even though the five hepatitises all attack the liver, they're not related. They're just five viruses that go off to the liver. So if you've had a vaccination against Hep A, it doesn't protect you from Hep B or C. They're very, very different, so you have to have different vaccinations. So this just gives you a nice view of human viruses, what their names are, what organs they may attack, what sorts of things they might be associated with. But the trouble is, this nomenclature doesn't get you anywhere towards understanding the mechanism of a virus. So what we will focus on is a much better system for describing viruses that's based on whether they have DNA or RNA within the genomes that they import into host cells, and whether that DNA or RNA is single-stranded or double-stranded, because that truly tells us a lot more about the virus and maybe the steps that could be inhibited to prevent the viral infections. But first of all, just a few pictures-- here's some-- so viruses can be rod-shaped. They can look like-- they can be icosahedra. They can just have a capsid. So I mentioned they may just have a protein coat, the sets of repeating proteins that pack into a beautiful structure, very commonly an icosahedron, and I'll show you why that is. Or they may be enveloped viruses, like influenza, which has a membranous surface around where the genetic material is packaged. All of these have nucleic acids packaged within them, DNA or RNA, single or double-stranded. And in the case of the enveloped viruses, that membrane-- it's a normal membrane. It's just like your membrane. In fact, it is your membrane-- will have proteins dotted within it that actually serve as recognition for the host cells. They'll grab onto host cells and be the source of the infection into the host cells. And this is a bacterial virus, and as I said, I just love the way they-- I mean, they really look like this.
You know, the cartoon is really the cartoon of what the thing looks like, and they're sort of pretty amazing. And they kind of-- they keep their nucleic acid in the head here. They land on their sort of feet, and they shoot the nucleic acid material into the host cells. So that's very interesting. Of course, this thing is-- OK. So why are many viruses that are capsid viruses icosahedra? So it ends up being a problem of geometry. So how can you make a perfect coat around something with very, very few building blocks of different types? Like, if every building block in that coat was different, the virus would have to have genes for all of them. What viruses can do is they can have genes for, like, three pieces of a module of the virus. So I'm going to show you how these capsid viruses get assembled. So here, color-coded, is an icosahedral virus, where I've coded in the red, green, and blue, a triangular component-- this is really cool-- that is a single sort of panel on that icosahedral virus that comes together as a triangle through noncovalent interactions between three proteins. You see that panel there. What you can then do is see how that panel would fit into a pentagon with an extra triangle stuck onto it, and you can fit that triangle into the pentagon and also into the additional piece. And then you can start to visualize how you could build an icosahedron from those pieces because they represent each of those faces within the virus. So you can go from this, which is a set of building blocks that I just showed you-- then you can assemble them like this. And one of these would be this part of the icosahedron, and then you just have a bunch of copies of it. And you can see how you would assemble that. And years ago, I decided to decorate my Christmas tree with icosahedra. So I went through this geometrical thing, and believe me, it works really nicely. You can put together an icosahedron and build an icosahedron. 
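The geometry behind this is easy to check. A short sketch, assuming the simplest case in which each of the 20 triangular faces is the red/green/blue trimer of one protein (what virologists call a T=1 capsid; the lecture doesn't use that term): Euler's formula confirms the solid closes up, and the subunit count shows why this shape is so economical for a tiny genome.

```python
# An icosahedron: 20 triangular faces, 30 edges, 12 vertices.
faces, edges, vertices = 20, 30, 12

# Euler's formula for any closed polyhedron: V - E + F = 2.
assert vertices - edges + faces == 2

# If each face is a triangle of three copies of one protein, the whole
# shell is built from a single gene product -- just 60 copies of it.
proteins_per_face = 3
subunits = faces * proteins_per_face
print(subunits)  # 60
```

So a virus can enclose its genome with one capsid gene instead of sixty different ones, which is exactly the "very few building blocks of different types" point above.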
You could spray it gold and put it on your Christmas tree. It's kind of fanatical, but it really-- it's highly recommended. All right, so let's now get down to something a little bit more serious than Christmas trees and things. All right, so I told you that the classification of viruses by what organ they attack or who discovered them or anything is just-- is a vagary that's not so useful to the non-physicians because you can't immediately know, oh, this is how the virus gets into the host cell. This is how the virus uses its genetic material to make new viruses. So what was developed by Baltimore-- David Baltimore used to be at-- it was kind of interesting. David Baltimore, a very famous person and Nobel laureate, used to be at MIT when I was at Caltech, and we moved in opposite directions. I'm not sure it was a great trade for MIT, but it was a great trade for Caltech. So I ended up with David Baltimore's labs in Building 68 because we did that swap in 1999 or something like that. So I thought that was pretty interesting. Anyway, so what Baltimore decided was that it was much better to classify viruses by the type of genetic material, like, are they DNA or RNA? Is that DNA or RNA single-stranded or double-stranded? Because, depending on what the genetic material in the virus is, once that gets unloaded into a host cell, certain steps have to happen in order for the virus to be able to replicate that genetic material, to convert it ultimately into the proteins it needs, and then to package up new viral genetic material into viral capsids so that they can then be sprung out of the cell and go infect another cell. So the classification basically went this way. So if you think of it, the major goal in the infected cell is to get the virus to a stage where the virus has plus-sense messenger RNA. It has RNA that can be read by the host's ribosomes and converted into proteins.
So the overall goal of the virus, if we give it sort of some conscience, shall we say, is to make its viral material into messenger RNA. Now, the virus doesn't include messenger RNA. That's just what is made transiently, but the virus may have single-stranded DNA. It may have plus-sense RNA. It may have negative-sense RNA. It could have double-stranded DNA, or it could even have double-stranded RNA. And depending on what that genetic material is, that determines the Baltimore classification of the virus. So depending on what's inside the virus, then they can be classified. And what we're going to go through today and on Friday is examples of class I, class V, and class VI viruses, so we can see how that genetic material ultimately becomes a new viral-- a new virus within a host cell, or at least the components thereof ready to be sprung out of a virus. And there's one important point that I want to also address-- oops-- budding or lytic. All right, there are two ways in which viruses escape their host cell. They may be budding. So here you have a host cell. The viral components all congregate near the surface of the membrane from the inside, and then the host cell buds off. The viral components go with it, and the bud splits off. So the host cell has its nucleus. It's still intact. HIV is such a virus. HIV doesn't kill its hosts. That's the best sign of a parasite. It wants the host to stick around, so it just buds off. The other types of viruses are lytic, and, basically, the cell just bursts open and throws out the virus. So they're in two categories. Some of them are budding, though the enveloped viruses have to be budding because they're going to take with them the membrane of the host cell. All right? So that's another important difference. So what have we got here? So ultimately, the goal is to be able to make a plus-strand mRNA for protein synthesis. So we really need to have the appropriate sense of the RNA that will dictate the protein synthesis.
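The classification just described can be written down as a simple lookup table. The class-to-genome mapping below follows the standard Baltimore scheme; the lecture walks through examples of classes I, V, and VI.

```python
# The Baltimore classification groups viruses by their genetic material,
# i.e. by what must happen inside the host to reach plus-sense mRNA.
BALTIMORE = {
    "I":   "double-stranded DNA",
    "II":  "single-stranded DNA",
    "III": "double-stranded RNA",
    "IV":  "plus-sense single-stranded RNA",
    "V":   "negative-sense single-stranded RNA",
    "VI":  "single-stranded RNA with reverse transcription",
    "VII": "double-stranded DNA with reverse transcription",
}

# The classes covered in this lecture and the next:
for cls in ("I", "V", "VI"):
    print(cls, "-", BALTIMORE[cls])
```

Knowing the class immediately tells you which copying steps the host can supply and which enzyme(s) the virus must bring or encode itself.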
So let's first of all look at one of the simple versions, a double-stranded DNA virus. And this is represented by the smallpox virus. So everybody's heard of that, and herpes simplex. And these are both enveloped viruses, so that means they have a membrane shell. And so I'm just going to walk you through the steps of going from the double-stranded DNA to make a new virus. So here's the virus. It has a capsid, as well as a membrane envelope. And there's recognition between the virus and the host cell. And we'll talk very specifically about what that recognition is when we talk about HIV, because that's very well categorized. Once the virus gets into the host cell, it sort of spills off all the coat and dumps out its double-stranded DNA, the viral DNA, into the host cell. And then that DNA can-- in the nucleus, can replicate into more copies of the viral DNA, or it can be transcribed into messenger RNA, which is then the code for all those capsid proteins that the virus needs. And then these start to self-assemble within the host cell where the capsid proteins wrap around the viral genetic material. They accumulate near the surface of the cell, and then they bud off from the cell. So that's how you go from simple DNA. So these processes are completely based on the human enzymes that do those processes. Replication, we've got to replicate DNA. We're going to have to do that in the nucleus. Transcription, we're going to have to ship out part of the DNA from the nucleus to the cytoplasm and make-- we're going to have to make a copy of the messenger RNA and ship it out of the nucleus. And then we're going to use the host ribosomes, the host's amino acids, the building blocks in everything, to make a new protein that is not a host's cell protein. It's the capsid protein. Obviously, the human cell isn't going to be making a capsid protein. So that's the main thing that the virus had to encode. It had to have the DNA to make that.
So that all looks sort of fairly simple, and the steps make sense. This is why we cover this virus first because it really-- it's kind of the most transparent to understand, so this transient stage of sort of borrowing machinery. And as I mentioned here, the virus can spring out of the host cell. Yes. AUDIENCE: So all it means is-- one's that don't kill the host cells, how do they, like, on the body, or the cells that they-- BARBARA IMPERIALI: They start to just be too much of a burden onto the body, so they're just-- you know, if they're inside cells and exploiting the resources of the cell, they're basically-- they're harming it, but they're not destroying it instantly every life cycle. They're just using resources to replicate, and then go-- get spread to another cell, and another cell, where they'll keep using resources. So it's really just an overload of the system. It's a very good point. But they can stick around a long time, and with HIV, you're going to see what really sneaky thing they do is because they put their genome into the host genome. And that's sort of really pretty terrifying. I always ask this question, but it's kind of a silly one. You know, what is life? A virus is alive. Well, they're kind of alive, but they're not really alive unless they have some place to live. But aren't we all like that? So that's very philosophical. So we'll move right on here. So when you think of a virus, this is the original central dogma, all the moving parts of the central dogma. And note-- so that when you think of a virus, what double-- what does double-stranded DNA need from the host? It's got all of these things. It's got the-- the host has the polymerase. It has the DNA-dependent RNA polymerase. It's got all the ribosomal machinery. So the only thing that the virus needs is the gene for its capsid proteins. So you can peel out from that entire life cycle the one unique thing about the virus. So that's a double-stranded DNA. 
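The borrowed host steps for a class I virus can be sketched as a toy transcription-and-translation pipeline. The codon assignments below are real entries from the standard genetic code, but the "gene" is an invented example, not a real viral sequence.

```python
# Toy sketch of the host steps a double-stranded DNA virus borrows:
# transcription (DNA coding strand -> mRNA), then translation (mRNA -> protein).
# Real subset of the standard codon table (one-letter amino acid codes):
CODONS = {"AUG": "M", "AUU": "I", "ACU": "T", "UAA": "*"}

def transcribe(coding_strand):
    # mRNA carries the coding strand's sequence with U in place of T
    return coding_strand.replace("T", "U")

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS[mrna[i:i + 3]]
        if aa == "*":        # stop codon: ribosome releases the chain
            break
        protein.append(aa)
    return "".join(protein)

mrna = transcribe("ATGATTACTTAA")
print(translate(mrna))  # MIT
```

The host supplies every enzyme in this pipeline; the only thing the virus had to contribute is the DNA sequence itself, which is the point made above.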
Let's now move to a different type of virus, a class V virus, which is a negative-stranded RNA virus. And these are quite important because these form the basis for-- let me just go to the diseases. This is the influenza virus, and I'm going to mention some very important points relative to influenza virus. So influenza virus is what's known as a segmented virus. And what that means is that its genome-- in this case, it's negative-stranded RNA-- is in pieces. A lot of other viruses just have a single strand of genome, a single nucleic acid strand. So it's just one piece, where portions of that nucleic acid code for different proteins, and they'll often code for initially polyproteins that get broken up. And we'll see a virus with a single strand when we look at HIV. But the influenza virus has a segmented genome. And that's very relevant for its lifestyle because we'll see in a moment how influenza virus can cause more damage than we anticipate because of recombination of the different segments of the virus. But let's first of all take a look at the life cycle of this virus, and then we'll move on to dealing with the issue of the segmentation. So here's a typical enveloped virus with a capsid. Inside, there's the negative-stranded RNA that gets into the host cell, and you make-- and you dump into the host cell the viral genomic RNA. That can get copied. The minus-strand RNA gets copied to the plus-strand RNA, which becomes the messenger for protein synthesis in the cell. So you've gone in with minus strand. You've made the plus strand, which is the messenger, and that encodes all the proteins that are needed for a new virus. And some of those proteins may have signal sequences. They may be shipped to the surface of the cell, and they may be planted in the outside cellular membrane of the host cells. And what you see here is copies of those proteins actually in the surface of a cell.
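That minus-to-plus copying step can be sketched as taking the reverse complement of the minus strand, written in RNA letters. This is a toy sequence, not a real influenza gene; the enzyme that actually performs the copy is discussed just below.

```python
# Copying a negative-sense RNA strand into its plus-sense messenger.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def minus_to_mrna(minus_strand):
    # The polymerase reads the template 3'->5' and synthesizes 5'->3';
    # for sequences written 5'->3', the product is the reverse complement.
    return "".join(COMPLEMENT[base] for base in reversed(minus_strand))

print(minus_to_mrna("AUGC"))  # GCAU
```

Note that applying the operation twice returns the original strand, which is why copying the plus strand back again regenerates new minus-sense genomes for packaging.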
So what happens with this virus is, once all the moving parts are made, they congregate at the surface of a cell, get packaged, and then bud off from the cell. So remember all the rules you learned about where proteins end up in the cell are all good still here because the capsid proteins have to get to a cell membrane, so they're translated with a signal sequence. They congregate-- I don't know how this self-assembly occurs, but it's a fascinating process, so that ultimately you bud off an intact virion from the host cell. But the key thing that the virus has to have is something that will copy negative-stranded RNA to plus-stranded RNA, which is going to be the messenger. So the virus also has to code for a particular protein that's unique to its lifestyle. So it has an RNA-dependent RNA polymerase. We don't use an RNA-dependent RNA polymerase, but the virus needs it to take its negative-strand RNA to a plus-strand RNA, which will be the messenger. So does that make sense? So obviously, that's a moving part that it needs to provide to the host. Now, what's this about segmented viruses that's quite important? Oh, and I just want to underscore here, what defines the destination of these proteins, whether they're capsid proteins or proteins that are going to be packaged within the virus, is basically just the same rules that apply that we talked about when we talked about protein trafficking. So every year, there's a whole panic. Did you get your flu shot? Is it going to work this year? Oh my god, millions of people are going to get sick. Go get your flu shot. It's tetravalent, it's trivalent, and so on. So what we're trying to do every year is predict what the virus is going to look like. So we have to-- there are teams of people, who sometimes get it wrong, who predict the variation in these genes. And they look at winter in the Southern Hemisphere, because that precedes us, and try to guess what's going to happen in winter in the Northern Hemisphere.
And we get to try and put together a vaccination package. But the problem with the influenza virus is that there can be not just a drift, like mutations, small mutations happening a little bit at a time, but there can be recombination of the genes, because they are segmented, into totally new virus particles that have different properties. So viruses don't just drift in their genomic sequence. They can have dramatic shifts in their sequences that occur-- whoops-- through combinations of viruses that come-- that have infected different animals. So some of the common strains, when we talk about certain strains that have been very, very troublesome to humans, may result from such a combination. So this would be the Eurasian pig flu, the classic pig flu, the human flu, the bird flu. If, in certain communities where people often live with their livestock, a cell gets infected with viruses, a human virus and the swine virus, they can mix and match together. And you can make a totally different viral composition, where you've got one piece of genetic information from the swine flu and seven more from the human flu. And what that can suddenly mean is that the-- first of all, the vaccines don't work at all, but that they may have very, very different properties for infectivity. The protein that is expressed that may cause that very first attack of the virus on your cells in the upper lungs may be very different to the type-- to the protein that comes from the swine flu and may give you much more serious lung infections because they can go deeper into the lungs. So it can be very small changes by pulling an enzyme, a piece of gene, from a completely different organism and matching it up with the rest of the genes from the human virus that makes for dramatic shifts in viral infections that cause these sorts of sudden tectonic shifts where we've really got to deal with a virus. There are two terms up here. There's H and N. These are hemagglutinin and neuraminidase.
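The mix-and-match arithmetic of that reassortment can be checked with a short sketch. Influenza A really has eight genome segments; the "human"/"swine" labels below are illustrative placeholders, not real gene names.

```python
from itertools import product

# In a cell co-infected by two strains, each of the 8 genome segments
# of a progeny virus can come from either parent independently.
SEGMENTS = 8
human = ["human_%d" % i for i in range(SEGMENTS)]
swine = ["swine_%d" % i for i in range(SEGMENTS)]

# All possible progeny: one choice per segment, two options each.
reassortants = list(product(*zip(human, swine)))
print(len(reassortants))  # 256 combinations (2**8)
```

Only two of those 256 combinations are the parental strains; every other one is a novel segment mixture, which is why a single co-infection can produce a virus the existing vaccine never saw.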
They are two proteins that are in the viral coat, so you'll often hear viruses referred to as H1N1, H3N3, and it's just the variant of those proteins that are in the viruses. See these little terms here-- that's what that means, what type of hemagglutinin, what type of neuraminidase, is on the surface of the virus. So I am done for today, and next class will be exclusively about the AIDS-HIV virus, where we'll go into that life cycle and also talk about resistance to therapeutic agents and combination therapies.
MIT 7.016 Introductory Biology, Fall 2018
3. Structures of Amino Acids, Peptides, and Proteins
PROFESSOR: It's going to be a great lecture today. It's about proteins. I love proteins. Don't forget the handout, yeah. OK, so I'm going to briefly wrap up the lecture we were doing on Friday because there were a couple of things that I wanted to make a note of, and then we'll move on to section 2.3 about amino acids, peptides, and proteins. Now, in the last class, I introduced you to the lipidic molecules, and you can pick them out of a lineup because they are rich in carbon-carbon and carbon-hydrogen bonds. As you can see here in these line-angled drawings, the majority of a lot of these molecules is carbon-carbon or carbon-hydrogen bonds. They are molecules that are mostly hydrophobic, so there are some terminologies here. Whoops. Hydrophobic, which can also be referred to as lipophilic. You either-- you can hate water and love fatty acid or fatty types of materials, so both of those terms are synonymous. And some of the lipids are what are known as amphipathic, and they include hydrophobic and hydrophilic components. There are a couple of tiny terms that I didn't mention explicitly, so I just want to go ahead and do that now. For example, in this phospholipid structure-- and we'll talk about these-- they have long chain fatty acids attached via esters to this glycerol unit, so there's one here and the second one here, and then what's known as the polar head group. In those fatty acids, they could be fully saturated. It means they have no double bonds in the structure, so that term saturated is equivalent to no double bonds, so no carbon-carbon double bonds. Or they could be unsaturated, where there is a double bond within it, so that's one or more double bonds. And those double bonds take on a particular shape because there's not freedom of rotation around double bonds the same way there is around single bonds. So those single bonds, you can twist them around and twist them around, but the double bond geometry is fixed.
And so double bonds, we refer to them as either trans, where the two groups are on opposite sides, leaving the double bond, or we refer to them as cis, where the two groups are on the same side. And we tend to use that cis and trans sort of naming system in a lot of other contexts as well, but you almost always want to remember that trans is as far away as possible, cis is closer than trans. All right, so I'm just going to take you forward to the phospholipid structure. This is very important: semi-permeable membranes are made up through the non-covalent, supramolecular association of phospholipid monomer units. Here's a monomer unit up here. You see it has an amphipathic structure, with a lot of hydrophobicity but also hydrophilicity, and these molecules assemble into supramolecular structures that form the boundaries of your cells. Saying they are semi-permeable tells us a little bit about what can go through them. If they were fully permeable, anything could come and go and they wouldn't be much use, frankly. It's like leaving the door open the whole time. But because they're semi-permeable, only a few things can come and go without extra help, and other things need active mechanisms to go through. So let's take a look at the boundary here. So when you see a membrane bilayer, they are-- they're often shown looking like this, where every one of these units is a phospholipid, and there's water on both sides of the phospholipid, because that polar head group is interacting with water on both sides. So down here could be the inside of the cell. Up here could be the outside of the cell. And a lot of cells, especially eukaryotic cells, the ones that make us up, have a lot of endomembranes, membranes within the cells. For example, forming the boundary to the nucleus or to the mitochondria. Yes? AUDIENCE: Why is there a [INAUDIBLE] PROFESSOR: Oh, this guy must be-- so this looks like it's probably a saturated fatty acid. So what do you think this one might be, folks?
Unsaturated. And what's the double-bond geometry? AUDIENCE: Cis. PROFESSOR: Cis. Yeah, it is. It's like a-- it looks like a ballerina or something. OK, so we have a lot of concerns, and we'll see later about how things get in and out of cells. But most commonly, things like oxygen or water and other small hydrophobic molecules can pass readily in and out through the semi-permeable barrier, but other things, things that are charged, things that are big, need a different mechanism to get in and out. And we will see later on how proteins provide the opportunities to cargo things into cells or out of cells, even very large entities, and there are certain mechanisms whereby that happens through a semi-permeable membrane, OK? I want to show you the other feature of membranes. They are self-healing. What this means is if you poke them-- you poke a hole in a cellular membrane, you basically push apart those non-covalent forces. Once you take the thing away, be it a needle or a very fine glass capillary, they seal right back up to close the hole in the cell membrane, so that kind of tells us that they're non-covalent forces. So this is a really cool video of someone doing micro-injection into eukaryotic cells. The needle points to the cell, approaches the surface. You can drop something into the cell, and then the cell closes and maintain-- regains its integrity of the barrier, so this is a very cool observation. People do this. They have to not drink too much coffee because it's quite complicated to do a lot of micro-injection, because you can really cause carnage in your cell population if you're not very dexterous with the micro-injection, but people can be very good at it. So I just want to ask a couple of questions before, give you a couple of things to think about before we close up. The lipids. So here's a typical lipid bilayer, where I've highlighted a single lipid.
And the colors, those are the head groups, and all in white and gray are the hydrophobic components, and just one of the phospholipids is highlighted, and that would be this molecular structure here. So first of all, what do you think the non-covalent forces at that membrane interface may be? That is, what's going on here at the interface? What are the types of interactions that you might have there? Give you a minute to think about it, and I want to show you that I'm actually giving you a clue here, because you can see the structure, negative charge, positive charge, but also remember this is a barrier to water, so there are other things going on with the solvent that the membrane is sitting in because there's water surrounding that barrier layer. Anyone want to tell me what the answer is, and why? Yeah, did you-- are you-- yeah. AUDIENCE: Hydrogen bonding. PROFESSOR: Yeah, between what and what? AUDIENCE: Like the oxygen and [INAUDIBLE] PROFESSOR: Right, so water. Water is a good hydrogen bond donor and acceptor, so there will be hydrogen bonding. What about amongst all those lipid head groups, what's the other major force? Yeah? AUDIENCE: Electrostatic force. PROFESSOR: Between the different charges. So the correct answer here is both of them. Don't think it's just electrostatic, it's both. It's electrostatic amongst the head groups, hydrogen bonding between all that sort of dense bunch of charge, and the water. And then the other question, what type of molecules can get across? I've already answered that question to you. Salts are going to need ways to get in and out. Small proteins are too big to dissolve in that membrane through passive mechanisms, so we're going to have to figure out how to get proteins in and out of cells. Neurotransmitters, such as this, this is GABA, or gamma aminobutyric acid. It's charged.
It just can't get through without a transporter of some kind, and it's actually proteins that end up doing the heavy lifting of the transport processes that we'll see. OK, so moving along. This section will be about the building blocks of your protein macromolecules, which I want to remind you comprise 50% of all of the macromolecules, so that suggests it's a pretty important class of macromolecules that has a lot of different functions. Now, the amino acid building blocks look pretty simple. They're called amino acids because they have an amine, the carboxylic acid, and there's a carbon that is tetrahedral between the carboxylic acid and the amine. And the simplest of those is when those are both hydrogen, but most of the amino acids are differentiated from that-- this one I've showed you on the board. This amino acid is glycine. Usually, when it's just a lonely amino acid in aqueous solution, it's in a different charged form, just consistent with what we talked about in the last class. And I put it here. So this is glycine. It's one of the 20 encoded amino acids. That means the amino acids that are made through ribosomal biosynthesis through a code that's provided by the messenger RNA, so they are encoded by messenger RNA. Later on, you'll see all of the beautiful mechanics of those processes. Now, this table looks pretty complicated, so I'm going to deconstruct it a bit. But what I first of all want to assure you is that these-- you will always get a handout with these structures on them. We are not asking you to remember these structures. You might become familiar with some of them, but you do not have to remember them. You'll have a table that shows them, but on that table, I won't necessarily give you the information on what their properties are, because those are things that you should be able to spot by looking at their chemical structures, all right? So that's important. So these are all line-angled drawings, so you see the carbon.
The hydrogens aren't shown in there. The charges are shown for what's called the side chain, because most of the amino acids have a side chain. The amino acids are also chiral, but you'll learn more than you ever wanted to know about chirality in 5.12, so I won't weigh you down with any of those properties. So there is a side chain that dictates the properties of the amino acids. One tiny detail, the amino acids that are encoded in our proteins are all what are known as alpha amino acids. There are other amino acids. GABA, that I showed you on the previous slide, is not an alpha amino acid. Actually, it's a gamma amino acid. These are called alpha amino acids because the amine group is at the alpha position relative to the carboxyl. Don't need to know a lot more about that with respect to that. So let's take a look at this set of amino acids, and what you see is amino side chains with rather different properties. I've amassed-- here's glycine at the very top. All amino acids have a three-letter code or a one-letter code. I particularly enjoy using one letter codes and spelling out people's names in peptides and things like that. I'll let you do that in the privacy of your own room. It's kind of amusing to see if your name actually spells out a peptide. Some of us-- I get a little stuck with Barbara because there is no amino acid with a one-letter code of B. The next most abundant type of amino acids have hydrophobic side chains. What that means is they have a lot of CHs, but not a lot else, right? So take a look at them. Alanine has a methyl group, for example, where I've shown the R, that would be alanine. And they get increasingly big. They're quite large. Some of them have quite extended side chains. Other ones have side chains with rings with double bonds in them. Those are what we would designate in organic chemistry as aromatic. They show-- they are still hydrophobic, but they show different properties to this other set of amino acids.
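The name-spelling game mentioned above is easy to check with a few lines of Python, using the real set of twenty one-letter codes.

```python
# The 20 one-letter codes of the encoded amino acids.
# Note there is no B, J, O, U, X, or Z.
AA_CODES = set("ACDEFGHIKLMNPQRSTVWY")

def spells_a_peptide(name):
    # A name "spells a peptide" if every letter is a valid one-letter code.
    return all(letter in AA_CODES for letter in name.upper())

print(spells_a_peptide("MIT"))      # True  (Met-Ile-Thr)
print(spells_a_peptide("BARBARA"))  # False (no amino acid has code B)
```

This is the professor's point exactly: "Barbara" fails because B is not a code, while a name like "MIT" happens to be a perfectly good tripeptide.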
Some of these amino acids may actually have polar groups in them, but their major feature is that they're hydrophobic. But in an amino acid, such as tyrosine, you could not only have hydrophobic interactions with that ring system, but also hydrogen bonding with the OH on the tyrosine, so some of the amino acids can do a few different things. The next set of amino acids are those that are polar and charged, and I've shown you the most common state of all of those amino acids, but you already know that the amine of lysine is likely to be charged. This guanidinium group of arginine, take my word for it, it's charged. It's a bit more complicated to draw. Histidine is also one of those that's annoying to draw, but the negatively-charged side chains with a carboxylate are both negatively charged, and that's something you would remember from the previous class hopefully. And then finally, there are amino acids with polar uncharged side chains, such as those shown here. Now, this doesn't look like a very exciting set of building blocks. How can life run on things made of 20 relatively simple building blocks with functional groups? And the answer is that the building blocks are not functional themselves. It is the polymers that are made up of amino acids, and I'll always call them AAs because it's easier for me. The polymers of amino acids are heteropolymers. That means they're made up of a bunch of different monomer units, which is why they're called heteropolymers. And the other important thing about these polymers is that they are of defined sequence. What is the sequence? It's the order in which the amino acids appear. So I'm writing that down, order. And all the functions of proteins are dictated by the order of the amino acids, so let's take a look at the sidebar here. So once again, remember a couple of things that we will always give you this table to think about. Ooh, come back. There are a couple of outliers I just want to mention quickly.
So I talked to you about glycine, the simplest amino acid with no elaborate side chain. Proline is a little odd because its side chain is kind of in a cyclic structure, and towards the end of the class, I'll talk to you about collagen, whose structure is totally dependent on the involvement of proline in the sequence of the amino acids that make up collagen. And then the last sort of unusual amino acid is cysteine. It has a thiol, and the one clever thing about cysteine-- I'm just going to put a bit of a peptide here. One cysteine, and then I'm going to put a second cysteine, and these are going to be embedded in a peptidic structure. What cysteine can do is it can exist either with the thiol side chain, SH, or it can be at a different oxidation state where the two sulfurs are joined to each other. So for the most part, your linear arrangement of amino acids that dictates sequence is solely held by-- together by the covalent bonds and the peptide backbone that we'll talk about in a minute. But occasionally, in folded structures, if two cysteines are close to each other and the environment is oxidizing, they will form a cross-link. But they're not what drives folding. They kind of fall into place later on, but that just sort of sets cysteine apart a little bit for its properties, all right? OK, so coming down the side here. Amino acids are assembled in a unique linear polymer of defined order, and we designate that defined sequence the primary sequence. And proteins can be 1,000 amino acids, 1,500, 100 amino acids. They can be various lengths where they, you know, we would generally consider the smallest protein to be about 40 amino acids, and you might go up to thousands of amino acids. I'm going to write 2,000 or more here. When the proteins are smaller, they are not capable of adopting too much ordered structure, and we mostly call them peptides. Peptides are sort of shorter sequences, so peptide sequences.
So this would be a protein, and peptides, probably two to 39 amino acids, but these breakpoints are a little bit more vague. So the primary sequence will define the structure of a protein, and we're going to start to talk about the hierarchical structure of proteins; the first level is the primary sequence. And that primary sequence is kind of a cool thing because it's very specific. It defines-- it's got encoded into its structure, the three-dimensional fold of the protein, OK? All the information for the folded, compact, globular structure that's functional is encoded in that primary sequence. It's a cryptic code. We may not be able to tell by looking at it what it really looks like, but all the information is there in order to program the folding into a globular structure. So the primary sequence determines the fold, and it's the fold of the protein that mandates its function. It's not the sequence of the protein. The sequence defines the fold. The fold, the three-dimensional form, defines the function, OK? So that's very important. And I think it's absolutely amazing that with a relatively limited set of building blocks, we can define so many different functions of all the proteins in our body that may be structural, they may be catalysts, they may be things that transfer information from the outside to the inside of a cell. All of that is programmed with this rather limited set of building blocks, OK? Now, let's talk about peptides because one gets a little frustrated looking at single amino acids. They don't tell us so much about the peptidic structure, so I'm going to draw two amino acids, and then I'm going to tell you one important thing. So let's put R1, and I'm going to draw another amino acid, and I'm putting it in a particular orientation. R2, because that designates that these might be different amino acids. For example, if R1 is H, there's an implied hydrogen here, that would be glycine.
If R2 is a methyl group, there's an implied hydrogen there, that would be alanine, all right? When nature bonds all these amino acids together, it carries out a condensation reaction to form a peptide bond between these two components of the amino acid, the amine and the carboxylic acid. And now I'm going to draw you the first of the dipeptides that you'll meet. And there are so many things to tell you about these structures, it sort of drives me crazy thinking about, oh, I must remember to tell them that or I've got to remember to tell them that, because the structures are cool. R1, R2. OK, so this is a dipeptide, two amino acids, and there are some characteristics I want you to remember. When we write out peptides, we always write them N to C. So in that peptide, this would be the carboxyl terminus, and this would be the amino terminus. If you don't always remember to write things in this order, and you tell your friend, oh, go and get this peptide made, and you put it down in the wrong order, they'll make the wrong peptide. So you always-- there is basically an agreement amongst everyone that we always write from left to right, the sequence of peptides. The next important thing about this structure, as you look at it, there are several bonds joining the polymeric structure. Many of these bonds show free rotations. You can twist them around, there's nothing stopping that conversion. All of these show freedom of rotation. But the amide, or peptide bond, is unique in that there's restricted rotation about that bond. So it's as if you've got a linear polymer, but every third bond has kind of stuck in a particular orientation, which starts to define a lot of details about protein tertiary structure. It's not complete spaghetti. It's like spaghetti with little bits that haven't been cooked. They're stiffer than the rest of the sequence. 
And the other really important thing about the peptide structure is that embedded within that structure, there is the amide or peptide functional group where, remember, this can be a hydrogen bond acceptor, and this can be a hydrogen bond donor. Once you know that, the next few slides will make a lot of sense as we talk about higher-order structure of proteins. So let's just take a look at that with a slightly longer peptide. By convention, if I'm going to draw a peptide that's methionine isoleucine threonine-- you can look up those names on the chart-- that would be the MIT peptide. These are the three amino acids. I'm going to condense them into a tripeptide. When I condense three amino acids, I spit out two molecules of water, and I put in place two amide or peptide bonds. If I go down this backbone, every third bond is going to be fixed, fairly fixed. There's not freedom of rotation around it, and every third bond is going to have the capacity to be involved in hydrogen bonding interactions, as I've suggested here, all right? What else is there here? When I write the MIT peptide, I write M first, I second, T third. If I wrote TIM, it would be a completely different chemical structure with different chemical properties, so the directionality is important to understand, and there you have it. So now you can go home and practice your name in amino acids and draw them out. If you draw them out fairly sort of sharply, then you'll never get confused about what end's what and where the substituents are, but it's important to remember as you're making a dipeptide-- oops, I forget this doesn't work. As you're condensing a dipeptide, when you're putting these R groups on, one goes up, one goes down, but these are nuances of the structure that may be best left for a later discussion.
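The condensation bookkeeping above can be checked numerically: n amino acids join through n-1 peptide bonds, releasing n-1 waters. The monoisotopic residue masses below are standard reference values (a residue mass is the amino acid minus one water), so summing residues and adding back a single water gives the peptide's mass.

```python
# Monoisotopic residue masses in daltons (standard values) for the
# three residues of the MIT peptide: Met, Ile, Thr.
RESIDUE_MASS = {"M": 131.04049, "I": 113.08406, "T": 101.04768}
WATER = 18.01056

def peptide_mass(sequence):
    # n residues, n-1 peptide bonds, n-1 waters released during condensation;
    # equivalently: sum of residue masses plus one water for the free ends.
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

waters_released = len("MIT") - 1
print(waters_released)                # 2, as stated in the lecture
print(round(peptide_mass("MIT"), 3))  # 363.183
```

The same bookkeeping shows why "TIM" is a genuinely different molecule in structure even though, as a sum, its mass is the same: the mass only counts residues, while the chemistry depends on their N-to-C order.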
And the primary sequence here defines the globular structure, and the process whereby you go from the extended primary sequence to the folded structure is called protein folding. And physical chemists and physicists and computational chemists have for years tried to understand how we could predict the folded structure from the primary sequence. It's not simple because what you're doing is you're solving a massive energy diagram, where as you fold a structure up, you're trying to maximize all those non-covalent forces for maximum thermodynamic stability, right? It's kind of a three-dimensional puzzle where you're trying to have as many hydrogen bonds, electrostatic interactions, and so on, as you can possibly make. So when computational chemists try to fold proteins, they're basically solving a three-dimensional puzzle where they are maximizing interactions. And there are a lot of ab initio and molecular dynamics programs that are now starting to be able to fold proteins into fairly reliable structures, but they don't always get them right because they haven't gotten all the clues yet. And also while they may be able to do ab initio or computational folding with small structures, the headache gets way bigger the larger the structures get. So the predictors aren't very good at predicting big structures; they're getting better at predicting small structures. And so just to reinforce to you, the primary sequence is established by covalent bonds, the peptide bonds, but the globular tertiary structure is based on non-covalent interactions, OK? Now, I want to ask you this. I love cartoons with science in them, but you know, 10%, 20% of the time, they make mistakes, and I felt this one was particularly pertinent. So a bunch of scientists are standing around in a lab, and one says, well, we finished the genome map, now we just have to figure out how to fold it. What is wrong with that cartoon? Yeah? AUDIENCE: You want to [INAUDIBLE].. PROFESSOR: Yeah.
AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, the genome doesn't fold. It's double helical, duplex DNA or something. You're actually folding proteins, so the cartoon is not quite right, but it's sort of kind of cute. All right, now, when we talk about the non-covalent forces that hold proteins together, I just want you to remember from last time this set of non-covalent forces, because if you understand them and recognize them, you'll understand how they may occur in folded protein structures. All right, so here's a peptide sequence. Here's a puzzle for you. You can go back and figure out what the one-letter code spells there. Just take out your table with all the amino acids. It's appended to the back of your P-set, and you'll be able to see what that very large peptide spells. All right, I don't want you working it out while you're here. You've got to listen to me for the time being. OK, so the first order, we get it, there's a primary sequence. The next thing to think about is what's known as secondary structure. It's a higher order than just the primary sequence, and it's established by non-covalent bonds, and it's called secondary-- oof, my writing's horrid today. Secondary structure. And those are interactions that are put in place exclusively by interactions between the peptide bonds of what's known as the peptide backbone. So if I look at the structure, these are the side chains. The peptide backbone is this continuous linear sequence. That's what we would call the peptide backbone, and the secondary structure is put in place by hydrogen bonding between components of the peptide backbone. So for example, a hydrogen bond, such as that, or a different hydrogen bonding interaction, such as that, between the atoms that have lone pairs of electrons and the heavy atoms that hold a hydrogen that's quite acidic. And there are a couple of major forms of secondary structure. What I'm showing you here is what's known as the alpha helix.
First deduced by Pauling, in fact, through model building-- he said proteins could form these ordered structures-- an alpha helix is an ordered structure exclusively made up from the hydrogen-bonding interactions of the peptide backbone. And you can look at this helical structure. It's a continuous strand of peptide, but there are hydrogen bonds between COs and NHs all the way through the backbone, such that this strand of peptide can fold up into a cylindrical, helical structure, where all those R groups, the side chains of the amino acids, are on the perimeter of that helix. So this secondary structure is an important one because it's very prevalent in a lot of proteins. The next secondary structure is also held together by hydrogen bonding, and it's interactions between stretched-out strands of peptides that may not be close to each other in the primary sequence, but they align in the folded structure. And so for example, what I've shown you here is what's known as an anti-parallel beta sheet. And across that sheet, there are continuous opportunities for hydrogen bonding interactions. If the strands run in opposite directions, it's anti-parallel. If they're in the same direction, it's parallel. These two secondary structure elements make up a lot of the sort of basics of how proteins start to fold. They're key non-covalent forces, and there are also other smaller motifs. One is called a beta turn, where the peptide sequence may go through a chain reversal, so the sequence would look like this. I'm going to just draw it, and I'll talk to you in a moment about ribbon diagrams. And this piece here would be the turn, whereas that would be the interactions enforced by the sheet. These are the ordered elements of secondary structure. You don't have to be able to figure them out, but you have to be able to pick them out in order to understand the structure, OK?
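The alpha helix's backbone hydrogen-bonding pattern is regular enough to enumerate directly: in an ideal helix, the C=O of residue i is hydrogen-bonded to the N-H of residue i+4 (the standard i-to-i+4 pattern; the function here is just an illustrative sketch):

```python
def helix_backbone_hbonds(n_residues):
    """Backbone hydrogen bonds in an ideal alpha helix: the C=O of
    residue i accepts a hydrogen bond from the N-H of residue i+4,
    so an n-residue helix has n-4 such bonds."""
    return [(i, i + 4) for i in range(1, n_residues - 3)]

# A 10-residue ideal helix has six backbone hydrogen bonds:
print(helix_backbone_hbonds(10))
# [(1, 5), (2, 6), (3, 7), (4, 8), (5, 9), (6, 10)]
```

This regularity is why every turn of the helix is stabilized the same way, and why the R groups all end up pointing outward on the perimeter.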
Even with those simple elements, it's still hard to make big enough structures to have functions. So as I mentioned, in a continuation of the theme, protein folding is hierarchical; you can start to put together elements of secondary structure to make things that are a little larger. Helix, turn, helix. Helix with a different kind of turn, maybe put in place by a metal ion or something, or a strand, turn, strand, or now something that's a composite of these two major types of secondary structure, the helix and the sheet. And these really start to be proteins that might be big enough to be able to do something, but they're all exclusively held together by non-covalent forces between the amides or peptide bonds in the backbone of the protein, OK? Not very exciting just yet. Now, one other little clue that you might see and might be confused by: people sometimes, when they're drawing sort of a quick picture of a protein, might draw a helix, but instead of really showing it in detail, they might show it as a cylinder, so you might need to pick that out of a structure. And then I want to call your attention to the fact that in all those motifs, to join one helix to another you need a turn, and to join a strand to another strand you need a turn, and so on. OK, so this is like taking your very extended strand of polymer, knowing there are different kinks in it because of the backbone bonds, but folding it up into a structure that maximizes the opportunity for another order of structure, which we'll talk about now. All right, so we've seen primary. Secondary is just with backbone. And things start to get much more interesting when we get to tertiary structure, because tertiary structure is enabled by all these other interactions, electrostatic, hydrogen bonding, hydrophobic forces, that can be put in place due to the side chains of amino acids interacting with each other or with the backbone structures.
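These side-chain pairing rules can be sketched as a small lookup. The residue groupings below are a standard but rough classification (histidine, for example, is only partially charged at physiological pH, and glycine has no real side chain), and the helper function is invented here just for illustration:

```python
# Rough side-chain groupings by one-letter code (a simplification).
CHARGED_POS = set("KRH")        # lysine, arginine, histidine
CHARGED_NEG = set("DE")         # aspartate, glutamate
POLAR = set("STNQYC")           # hydrogen-bond donors/acceptors
HYDROPHOBIC = set("AVLIMFWP")   # cluster away from water

def likely_contact(a, b):
    """Guess the dominant side-chain/side-chain interaction for a residue pair."""
    if (a in CHARGED_POS and b in CHARGED_NEG) or (a in CHARGED_NEG and b in CHARGED_POS):
        return "electrostatic (salt bridge)"
    if a in HYDROPHOBIC and b in HYDROPHOBIC:
        return "hydrophobic clustering"
    hbonders = POLAR | CHARGED_POS | CHARGED_NEG
    if a in hbonders and b in hbonders:
        return "hydrogen bond"
    return "no strong side-chain contact"

print(likely_contact("K", "D"))  # electrostatic (salt bridge)
print(likely_contact("L", "F"))  # hydrophobic clustering
print(likely_contact("S", "N"))  # hydrogen bond
```

Folding programs are, in effect, trying to arrange the chain so that as many of these favorable pairings as possible are satisfied at once.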
So I'm going to walk you through this, so you can sort of get a sense of how these three-dimensional puzzles work on a very small scale. So look here, that's a very small motif. And what I'm going to call your attention to is when you fold up these motifs, when the secondary structure is in place, a lot of the side chains are near each other, and they can engage in long-distance contacts. And so for example, I'm going to show you interactions between side chains, between side chains and the peptide backbone, or side chains and water. But what I want to do is take a look at this and see, can you put any of those potential interactions on the drawing that's on your handout? It's pretty obvious where there's an electrostatic interaction, right? Boop. OK, between plus and minus-- get those out of the way, those are the easy ones. And then interactions between hydrophobic groups, where they want to amass that lipophilic structure, so it's not exposed as much to water, so they cluster, so those are easy. And then you can start thinking about what are all of the hydrogen bonds you could draw. Here I've shown one between side chains, between side chains and backbone, between side chains and water, and those may all contribute to the ultimate thermodynamic stability. Make sure you get your hydrogen bonds right. Remember, two donors don't interact with each other, and two acceptors don't either. So this might describe the folding possibilities of that small motif. Now what I want to show you is an ab initio simulation of a folding process. So let me just get that a little bigger on the screen. GB1 is a very small protein that folds reversibly under appropriate conditions, and what I'm going to do is forward you through this video. This is a simulation. This is all computation. It's not looking at anything by spectroscopy or in solution or anything like that. And what I'm going to do is I'm going to forward you through the structure.
This is multi-scale modeling. It's got a lot of details in how it's done, but the starting point is a very denatured protein, all stretched out, right? And what I'm going to do is just show you for a few seconds, you know, this thing's like trying to find its thermodynamic minimum, and it's actually failing pretty badly. And it does that for about 30-- 60 seconds of the simulations, so I made a point to myself to take you to about minute one, where things start to get fairly interesting. And you're saying, well, what's interesting about that? You see that nascent helix, in the background, the red and the blue, is starting to form strands that are a little bit aligned, and it's trying to find as many connections as possible to satisfy a stable structure. At a certain point in the simulation, five of the hydrophobic groups are in a little pea. They're in a little hydrophobic cluster, and that's a breakpoint in the folding process, because that gets everything glued together better, so that the rest of it now can start to really find its final place in the folded structure. These early structures are known as molten globules. A lot of the interactions are not yet in place, but the hydrophobic cluster is critical. But then after that, it's almost as if you're sliding downhill to get all the remaining interactions in place to fold the protein, OK? So protein folding is a puzzle that can be solved computationally by maximizing thermodynamic interactions. So it's sigma this, sum of this, sum of this, sum of that. That's going to get difficult the larger the protein gets, but for small proteins, those simulations really start to make sense, OK? All right, so let's just move on here. Lost-- ah, good. What did you think of the simulation? It's kind of cool, right? So you can find the link in the sidebar. So just pop these back on now, and that's the folded structure. All right, so with many proteins, they're much more complex than that. So for example, here's cyclin A. 
It's involved in the cell cycle, and you can see its alpha helix structure dominantly, very clearly, all those beautiful alpha helices. Next to it is the green fluorescent protein, which is a cylindrical structure made up of anti-parallel beta sheets. What's really cool is when you sort of rotate it, you can see all those sheets, but then it does this little sort of curtsy to the audience, and you can look down into the barrel. And then in some cases, proteins may be a mixture of secondary structure elements. Here it's a little hard to tell. This is triose phosphate isomerase, but if you look down it, you can see the helices, and there's also a group of beta strands that are held together. So in that protein, it's a mixture of alpha helix and beta sheet. Now, I'm not going to tell you much about pulling up Protein Data Bank files right now because I want to cover the next topic. And then when we have a few minutes later on, I'll show you. But whenever I show you a structure, I'm trying to show you the Protein Data Bank code, and on the website, you can see there is a free download of PyMOL, which is the program I used to create all these structures and movies, so you can really look at things. And believe me, it took me about three years to learn how to use it properly. It'll probably take you about a week or maybe a couple of days. So if I can learn it, you can certainly learn it. Now, there is one final element of protein structure that people get kind of hung up on, and it's what's called quaternary structure. It's like, aren't we done yet? So in addition to all of these, let's say I have a folded motif, and there's its structure. That would have primary, secondary (between the strands or in the helix), and tertiary structure, right?
But in some cases, proteins fold up into quaternary structure, where it's multiple copies of these units joined together-- hoo, I could have picked a simpler fold, but that will get you the general gist of it-- all right, where these are actually associated by non-covalent forces. So there's more than one polypeptide chain. In fact, here would be four peptide chains coming together in a higher-order structure that's made up of four of those units. The prototypic example of this is the protein that carries oxygen around in your blood, which is hemoglobin, and it has four primary sequences that have come together in a tetrameric quaternary structure. Hemoglobin is kind of interesting, because it's made up of two alpha and two beta subunits. If all these subunits were identical, they would be called homooligomers, all the same pieces. If they are different, they are called heterooligomers. We'll see a little bit more about this when I talk about hemoglobin in the next class, because the features of the quaternary structure are very, very important for the proper transport of oxygen, and single mutations can really mess things up, and you'll see more about that in the next class. So to wrap that little bit up, proteins are condensation polymers of amino acids. Each protein sequence is defined by covalent bonding. Native proteins-- most of them, the ones that do not have quaternary structure-- are folded through secondary and tertiary interactions, these things that we already talked about, and folding is defined by how to maximize all those non-covalent forces to get the maximum thermodynamic stability with the maximum number of interactions. And subunits may also come together through quaternary structure. OK, so I'm going to talk to you about several proteins throughout the course, but for now, I want to focus you in on a structural protein that provides mechanical support for tissues.
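Returning for a moment to the homooligomer versus heterooligomer distinction above, it's simple enough to capture in code; the function name is invented here for illustration:

```python
from collections import Counter

def oligomer_type(subunits):
    """Classify a quaternary structure by its subunit composition:
    one chain is a monomer; identical chains form a homooligomer;
    different chains form a heterooligomer."""
    if len(subunits) == 1:
        return "monomer"
    kinds = Counter(subunits)
    return "homooligomer" if len(kinds) == 1 else "heterooligomer"

# Hemoglobin: two alpha and two beta subunits, a tetramer.
print(oligomer_type(["alpha", "alpha", "beta", "beta"]))  # heterooligomer
print(oligomer_type(["sub", "sub", "sub", "sub"]))        # homooligomer
print(oligomer_type(["gfp"]))                             # monomer
```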
In the next class, we'll talk about transporters and enzymes, and as we move on to signaling, things like receptors and membrane proteins and so on. So the protein I'm going to describe to you is collagen. It is the most abundant protein in the human body. It plays enormous roles. It's not an enzyme, it's not a catalyst, it's not a transporter. It is one of those structural proteins, where the structure of collagen has evolved to provide mechanical stability to lots of essential components of complex organisms. And there are many different types of collagens that are found in different parts of the body. For example, bone, tendon, cartilage, and so on. They are all collagen structures, but they have subtle differences-- maybe slightly different mechanical properties to adapt to the functions that they perform, OK? And what I'm going to show you is that a single amino acid change in the primary sequence of collagen can destabilize the structure, so it is no longer viable. And the disease type I'm going to talk to you about is a set of diseases known as collagenopathies, and the particular one is called osteogenesis imperfecta. Osteo always refers to bone, because collagen plays a critical role in the structure of bone. Bone isn't just bone; it's got collagen involved in it. This disease is also called brittle bone syndrome. And here's the X-ray of a baby born with brittle bone syndrome, and you'll see that the long bones in the upper arm are all irregular because the bones are brittle, and they'll break even in utero. A lot of babies with this defect can't even be born through the birth canal because it would crush the bones, and many of them don't survive very long at all. Some survive with different degrees of severity, but their lives are greatly impacted, and they could just sort of hit a table and the bones would break, all right?
There are those sort of serious situations where parents are actually accused of abuse of the child, but the child actually had brittle bone syndrome, and it was just through helping them put their clothes on or taking them upstairs that the bones got broken very readily. So osteogenesis imperfecta really describes a collection of these defects. Now the collagen tertiary structure is shown here. It's actually made up of a type of helix. It's not an alpha helix. It's a polyproline helix, where the individual subunits in that tertiary structure are fairly long and extended, and I show you three strands in this polymeric structure, a yellow, a red, and a green. And these roll together into a three-helix bundle that has a fibrillous structure, and then all these structures come together to make the macromolecular structure that is collagen. It's not just one of those fibrils. It's bundles of those fibrils in a very organized pattern, where you can even see that patterning in electron microscopy. And there are many genetic defects of collagen, and what's so important to think about is that if you have a defect in one chain, that defect will propagate through the whole fibril. If this is one fibril made up of three polypeptide chains, it propagates all the way through the structure. And I believe I have a little time to just show you-- here's the collagen structure. I'm just showing you how it's extended. Those are three independent strands, and there's a set of magenta residues in the middle, which come from a defect in the sequence where a glycine has been changed to an alanine. So I'm going to show you this movie because it shows you, right at the center of the structure, there are residues painted in pink. And what I'm going to do is show you a close-up of that segment.
If you look at those strands, they're all nicely organized, except where that defect is, and that defect is caused by the change of a hydrogen to a methyl group on three residues that come together, and that bulges out that fibrillous structure and makes it not as compact and beautiful as it should be in the version that's got the glycine there. So if you look at it, you can even see that helix gets bulged out and it's not as well-aligned as the rest of the structure. And then that defect gets propagated into all the fibrils and results in the weakening of the bones. Either the collagen fails to form properly, or the collagen that forms has much less mechanical stability. So I think that's a good place to stop, and I'll pick up next time with hemoglobin. Oh, one last little thing-- a couple of things for you to do. There's a great link on the website to the Protein Data Bank to see how enzymes work. And if you have a little time, it would be awesome if you could just take a quick flick through those parts of the text. These slides are posted with these reading assignments, and they're posted in color if you want to look at them again.
MIT 7.016 Introductory Biology, Fall 2018
Lecture 30: Immunology 1: Diversity, Specificity, B Cells
ADAM MARTIN: And today, we're going to talk about immunity, which is important, especially at this time of the year. So immunity is the resistance to disease based on a prior exposure. Based on prior exposure. And of course, this is the principle behind vaccination. So humans have been sort of using the properties of the immune system to prevent themselves from getting disease for centuries. One of the first very clear examples of this is back in the 18th century with the English physician Edward Jenner, and Edward Jenner came to the realization that farmhands, specifically milkmaids, who were exposed to a variant of smallpox from cows, which is known as cowpox, could become immune to smallpox. So cowpox is a less severe form of the disease. And what Jenner did was to take pustules from individuals who had the cowpox disease and inject them into an eight-year-old boy, and then infect that boy with smallpox to show that the boy was immune to smallpox after having received the cowpox material. And so this is the first example of the vaccine. And because the vaccine was derived from basically someone with cowpox, the word vaccine is from the Latin root vacca, which is cow, so that's where the word vaccine comes from, OK? So today, we're going to talk about the systems that our bodies have to fight disease, and there are several different levels of the immune system. So I'm going to talk about two of them. So I'll talk about two levels of immunity. The first I want to mention is the one that we're all just like born with, which is known as innate immunity. [SNEEZING] [SNEEZING] Bless you. So innate immunity, as the name implies, is something that we are born with, so this is inborn. It also doesn't have a delay in when it's activated, right? So if you have an infection, this is sort of the first line of defense, right? There is an immediate response, and that's the innate immune system, so this is immediate.
Here, I'll put this down here. It's immediate. So one example is of an innate immune response. This is not in the body but ex vivo, but here you see a human neutrophil, and neutrophils are part of our innate immune system. And neutrophils hunt and kill bacteria, right? You see that neutrophil chasing after that bacterium, and it's going for it. It's really trying to get it, but that bacteria really wants to get away, but got it! OK, great. So these neutrophils are part of the innate immune system. It's inborn, it's immediate. And in addition, the response of the innate immune system doesn't really change if you've been exposed to an infectious agent prior to the-- OK, so this does not change. I'm using the Greek delta for change. Does not change with prior exposure. OK, so it's sort of like a constant surveillance mechanism in your body that will go after foreign agents, OK? Now, this is very different from the next level of immunity, which is known as adaptive immunity. And as the name adaptive immunity implies, this is a type of immunity that does change. It adapts, OK? And this type of immunity is acquired, so it's also known as acquired immunity, but it's acquired with exposure to a foreign agent, OK? So this involves a change in immunity, this one does not, but the innate immune response is immediate, whereas adaptive immunity takes time. There's a delay, so this is also delayed. It's also highly specific, OK? So it's highly specific to the foreign agents that you are infected with. The innate immune system is less specific. It'll recognize, like, things like bacteria, but it won't be able to necessarily distinguish between different types of bacteria. So this is more specific than the innate immune system. This is why every year, you have to get a flu shot, because the flu virus is constantly changing. And our immune system is so specific that unless we get a new vaccination, our bodies will not be able to recognize it, OK? 
So this is-- so now I'm going to break down adaptive immunity into two branches. One is known as humoral immunity, and humoral immunity is basically protein-mediated, and the proteins that mediate this are called antibodies. These antibodies are proteins, and it's called humoral because the antibodies can be secreted into the fluids or humors of our body, which is basically the blood, OK? So there is humoral immunity. The other type of adaptive immunity is cell-mediated, and one thing I want to point out is that the type of cell that makes an antibody is known as a B cell. What the B stands for isn't really important, but one thing that's helpful is that these cells mature in the bone marrow, and B stands for bone marrow, OK? So you can always remember where they mature. Now cell-mediated immunity, in contrast, involves a different type of cell called a T cell, and the T of T cell stands for thymus, because these cells mature in the thymus. And I just want to point out where these cells come from. So we talked about adult stem cells earlier, and in this case, these T and B lymphocytes over here are derived from a multipotent hematopoietic stem cell, which generates a whole bunch of different types of cells. Many of them are involved in the innate immune response, but this common lymphoid progenitor over here gives rise to T lymphocytes and B lymphocytes, which are involved in adaptive immunity. OK, so it's not important that you remember what all these cells come from or what the whole tree is, but that these cells arise from a common progenitor cell. OK, so both of these branches of the adaptive immune system have what are known as antigen receptors. I'll abbreviate antigen, AG. So they have antigen receptors, meaning that they have things on them that recognize specific antigens, and antigens are basically things that result in an immune response. They could be proteins. Antigens are substances that activate the immune system.
That's just immune system, OK? Another abbreviation that I'll use is when I refer to an antibody, I'm going to abbreviate it, AB. All right, so we have these two branches of the immune system, and they each have the type of antigen receptor, so now I want to go through what these different types of antigen receptors look like, OK? And I'm going to start with the B-cell antigen receptor, also known as the antibody, also known as an immunoglobulin. OK, so another-- these are all synonymous, but you will see them in different contexts. Immunoglobulin is abbreviated IG. And what the antibody looks like structurally, it looks like this, and I'll just draw it out for you down here. So I'm drawing a lipid bilayer that represents the plasma membrane. The outside of the cell is going to be up, so that's the exoplasm up here. The inside down here is the cytoplasm. And this would be a B cell, then, we're talking about here. I'm going to draw just a segment of the B cell plasma membrane. And the B-- the antibody can have a transmembrane domain that spans the plasma membrane, and then there are domains-- and what I'm drawing here is a circle, is an IG domain, so this is going to equal an IG domain. It's just a type of protein fold that is modular, OK? So you can see up on my diagram here, right? You see these like here there are these two green segments labeled V and C. Each of those is a single IG domain. OK, it's just a modular fold that is separate from the other part of the protein. OK, so here we have along-- this is one polypeptide chain that has a transmembrane domain, and it is inserted into the plasma membrane. The N-terminus is here, the C-terminus is down here, and each antibody protein has two of these long peptides. And because they're the longest part of the molecule, they're known as heavy chains, so these are the heavy chains. And each antibody protein is composed of two identical heavy chains, OK? So these are identical. 
And then also there's another component, which is present up here, and this is a shorter polypeptide. And because it's shorter and smaller, it's known as the light chain. OK, that's the light chain. OK, so that's more or less what an antibody looks like. The part of this antigen receptor that recognizes the antigen are the tips right here, so this is where the antigen binds, and it can bind on either this side or this side. This molecule is laterally symmetric. One side is identical to the other, OK? Now, the T-cell receptor looks different, and the T cell receptor has fewer names. It's just called the T-cell receptor, or the TCR, for short. And the T-cell receptor is structurally very different, so now I'm drawing here a T-cell plasma membrane. Here's the plasma membrane. The exoplasm, again, is up. The cytoplasm is down below this plasma membrane. And the T-cell receptor has two chains. One is called alpha and the other is beta, and it has fewer immunoglobulin repeats, so that you can see you just have this sort of smaller system here, where you have an alpha and a beta chain. And in this case, this region here recognizes the antigen, OK? So basically the T-cell receptor, or the tip of it, interacts with the antigen. Now, the B-cell receptor, or the antibody, has different forms, so let's talk about the different forms. And these are shown up on my slide above, right? So you see over here, here is an antibody that has a transmembrane domain and is anchored in the plasma membrane, but there's another form that lacks that transmembrane domain, and instead of being an integral membrane protein, is instead secreted into the blood, OK? So the forms of the B cell receptor are both a membrane-bound form, which is initially how this antibody is presented, but later on, it can be secreted, and this often changes when there is an infection, OK? 
So once you have a virus or bacteria in your system, then you get the B cells sort of pumping out the secreted form of the antibody in order to fight the infection, OK? In contrast, for the T-cell receptor, there's only one form, which is the membrane-bound form, OK? So for T-cell receptors, it's membrane only. OK, another thing that differs between these antigen receptors is the types of antigens that are recognized. So antibodies can recognize all sorts of different molecules, OK? As a class, they're very promiscuous-- but a given antibody is not promiscuous. A given antibody will recognize a very specific structure, but the possibility for antibodies is that they can recognize small molecules. They can recognize proteins, they can recognize DNA, they can recognize carbohydrates, you get the idea, right? They really can recognize a whole range of different types of molecules. In contrast, the T-cell receptor is more restricted in that T-cell receptors will recognize peptides, or short sequences of amino acids. So it recognizes peptides, and these peptides are presented to the T cell on a type of molecule known as MHC, the major histocompatibility complex. There are two classes, 1 and 2, and we're going to talk about this in detail in Friday's lecture. So I just want to point out the difference in the types of antigens that can be recognized here, and we'll talk about exactly what that means on Friday. OK, so now we have to talk about the amazing properties of the immune system. The first is how specific it is, its specificity, and I think this is a really amazing property, the ability to really discriminate between very closely related molecules, right? And this is essential for immunity to work well. You want to recognize things that are foreign agents that have invaded your system.
You don't want to be recognizing proteins and structures that are natively present in your body, because if your immune system did that, you'd have an autoimmune disease, so this specificity is really crucial for the function of the immune system. So now I want to talk about how it is that the immune system achieves such high levels of specificity, and the way I want to illustrate this is I want to bring this down quickly. So if we consider the structure of the antibody, these different domains are different in that-- in how variable they are, so some are variable. So this domain here for the heavy chain is the variable domain of the heavy chain, which I'll just abbreviate VH, and then these other immunoglobulin domains are constant, meaning they don't have a lot of variation in sequence. Like the heavy chain, there is a variable domain for the light chain, which I'll abbreviate VL, and then there is a constant domain for the light chain, OK? And so what I want to do now is consider what the sequence variation is here on this antibody-- and it's the same over here. This is the same thing over here. You have a variable domain for the heavy chain and a variable domain for the light chain. So let's consider the amino acid sequence of the antibody molecule specifically at that variable part of the protein. So let's say we could take individual antibodies and define their sequence from N- to C-terminus. That would be from tip towards the end here. So if we take a number of different antibodies and align their amino acid sequence-- so what I am-- I'm not writing out an amino acid sequence, but I'm just illustrating like a particular type of computational experiment you could do. So these would be aligned amino acid sequences where each of these represents a different antibody, let's say, heavy chain polypeptide that's produced from a different B cell, OK? So each of these is a different antibody from a unique B cell. 
And then we just consider the residue number and how much each amino acid residue varies along this sequence. So if we were to align antibody gene stretches like this and look at how much variation there is, you'd get a graph that looks like this, OK? So the y-axis is the amount of variation and the x-axis here is the residue number along this polypeptide sequence. And what you see, probably even without the color here, is that there are these three regions where there's a lot of variation in the sequence of different antibodies, OK? So here you see the blue segment here has a lot of variation, the yellow segment has a lot of variation, and the red segment here has possibly the most variation. And what these are known as are hypervariable regions, meaning that they exhibit a lot of variation. Another name for them is that they are complementarity-determining regions, or CDRs, and there are three of them, 1, 2, and 3, OK? So there are regions in this antibody molecule which are much more variable than others, OK? So what are these regions? Well, this is a sort of crystal structure of the-- of an antibody, and you can see how the antigen is bound at the end. That would be this end of the molecule or this end of the molecule. And here you see a ribbon diagram of the structure of the antibody, and the complementarity-determining regions are the regions here that contact the antigen. And what they are are basically here's an IG fold, this whole thing, and there are these three loops that extend out of the end of this molecule, and you can think of them as three fingers, OK? And these fingers are able to reach out and sort of grab on to like a foreign particle and/or any particle and stick to it, OK? 
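The alignment experiment described above can be sketched in a few lines of Python. The sequences here are invented for illustration (a real analysis would use many sequenced antibody chains), and the score is a simplified Wu-Kabat-style variability:

```python
from collections import Counter

# Toy aligned heavy-chain variable-domain sequences, one per B cell
# (hypothetical data; positions 8 and 10 were made hypervariable on purpose).
aligned = [
    "QVQLVESGGGLVQPG",
    "QVQLVESAGRLVQPG",
    "QVQLVESTGKLVQPG",
    "QVQLVESCGWLVQPG",
]

def variability(column):
    """Wu-Kabat-style score: (# distinct residues) / (frequency of most common)."""
    counts = Counter(column)
    top_freq = counts.most_common(1)[0][1] / len(column)
    return len(counts) / top_freq

# Transpose the alignment and score each residue position.
scores = [variability(col) for col in zip(*aligned)]
for position, score in enumerate(scores, start=1):
    print(position, round(score, 2))  # spikes mark the variable positions
```

Conserved positions score 1.0, while the two variable positions in this toy set score 16; spikes like those, plotted against residue number, are what mark the hypervariable (CDR) regions in a real variability plot.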
So these are the variable regions, and they have differences in amino acids-- in amino acid sequence, and even very small differences in the amino acid sequence at this particular part of the antibody can have a huge effect on whether or not they're able to stick to something, right? You can imagine if I lost my thumb, then right now, I'm not able to sort of stick to that anymore, OK? So small differences in amino acid sequence result in large changes in the affinity of this antibody for an antigen. And antibodies have different sequences, meaning that they're able to bind to specific substances differently. So if an antibody has one sequence, it might recognize one structure. If it has another sequence, it might recognize another structure. So just changing the sequence at these complementarity-determining regions has a huge influence on what these proteins will bind to, OK? Now, each B cell expresses a unique antibody and just one unique antibody. So each B cell in our body expresses one and only one antibody protein, and that antibody protein has a unique sequence at the CDR region, and this one antibody has unique specificity for an antigen, OK? So here you can see in my diagram, I have a whole bunch of B cells here. They all express a different antibody, and you can see that the way you could get more of a given antibody is to clonally expand one of these cells, and all of the cells that result from that clonal expansion will express the exact same antibody. And when you have a clonal population of a cell that all has the same antibody, that's known as monoclonal, OK? So each B cell will have a B-cell receptor or an antibody with unique specificity. So now the question becomes, OK, so I told you how you get specificity, but in order to have a functioning immune system, you need to have lots of different cells that each express a different antigen receptor, so there needs to be a way to generate diversity. 
And the answer to how we generate diversity has an MIT connection. The research wasn't done at MIT, but the person who discovered the mechanism is now at MIT. This research was performed by Susumu Tonegawa, and Professor Tonegawa, for his work on how this diversity is generated, was awarded the Nobel Prize in medicine in 1987. OK, so Professor Tonegawa did this research elsewhere, but now he is a faculty member here at MIT. All right, so diversity. The problem of diversity, right? We have millions of B cells that have a unique antibody. OK, so one solution to this problem would be we have a million different antibody genes, and each B cell clone sort of expresses one of them. OK, how many genes do we have? Anyone know, roughly, on the order of magnitude? Do we have a million? What's that? AUDIENCE: 30,000? ADAM MARTIN: Exactly. Yeah, so Mr. George has suggested-- Miles, I believe. Yeah, OK, good. Miles suggested 30,000, which is a good upper limit, right? So having a million antibody genes sounds a little bit unfeasible, OK? And so it's basically unfeasible for us to express as many antibody genes or have as many antibody genes as we have antibodies. We just don't have enough real estate in our genome, OK? But there's another solution to generate the diversity, which is essentially a form of shuffling. So we have a single heavy chain gene for antibodies, and we have two genes for the light chain, but these genes are composed of multiple gene segments. There are multiple gene segments. Specifically, the part that generates this variable domain is composed of multiple gene segments, and these gene segments are shuffled during the development of the B cell to give rise to different proteins. OK, so these gene segments are shuffled to generate this diversity. OK, so now I'm showing you on the top here, this is the human immunoglobulin heavy chain locus here. You can see it's pretty big. There are lots of components. I want you to focus on this. 
So there is-- you see in orange, there's this variable gene segment, and there are 45 variable gene segments here. There's this diversity, or D segment here, which there are 23 of, and then there are six of these joining or J segments. OK, so these are all distinct parts of the gene. They're all distinct parts of the exon that encodes this variable region of the antibody, OK? So you have multiple V, D, and J gene segments. And in order to generate a functional antibody, one V has to be brought together with one D, which has to be brought together with one J for that heavy chain, OK? So you have multiple V, D, and J gene segments, and they have to be brought together to form a functional antibody. OK, that's illustrated right here. So here you see this is the light chain. For the light chain, there are only V and J gene segments. For the heavy chain, there's V, D, and J. And so most of the cells in our body and the cells of our germline, at the very earliest stages of development, all have this arrangement, where you have everything still intact. But during lymphocyte development, specifically in lymphocytes, there is a recombination event that brings together V and J segments or V, D, and J segments, OK? So this is mediated by recombination at the heavy and light chain genes for that antibody, OK? And so this is very different from the recombination we talked about earlier in the semester, where recombination is happening during meiosis and the formation of the gametes, right? In that case, recombination is happening between homologous chromosomes. Here we're not talking about recombination between homologous chromosomes. We're talking about recombination that brings together and deletes segments along a single chromosome to bring these V and J segments together, OK? So this is sort of an intra-chromosomal recombination, which deletes the intervening sequences and brings these gene segments together to form a functional antibody protein. 
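Using the heavy-chain segment counts quoted above (45 V, 23 D, 6 J), the combinatorial arithmetic is easy to sketch. The light-chain counts below (roughly 40 V and 5 J for one light-chain locus) are approximate and only illustrative, not from the lecture:

```python
# Heavy-chain combinations: one V x one D x one J (counts from lecture).
heavy = 45 * 23 * 6

# Light-chain combinations: one V x one J (approximate, illustrative counts).
light = 40 * 5

# Any heavy chain can pair with any light chain.
combined = heavy * light

print(heavy)     # 6210 possible heavy chains
print(combined)  # 1242000 antibodies, before junctional imprecision adds more
```

So a few dozen gene segments already yield on the order of a million distinct antibodies, which is why shuffling beats having one gene per antibody.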
So this process is known as V(D)J recombination, and this is lymphocyte specific. OK, and that's because during the development of B and T cells, there is an induction of recombinases that mediate this recombination. So in this case, there is recombination, which is mediated by recombination-activating genes 1 and 2, called RAG1 and 2, OK? So these are lymphocyte-specific recombinases which mediate this rearrangement, bringing a unique V, D, and J segment together, OK? So the diversity comes from the fact that each of these V, D, and J segments-- each V segment, and this also applies to D segments and also J segments-- has a unique sequence. So it encodes for a unique amino acid sequence, meaning that if you bring together different combinations of Vs, Ds, and Js, you get a distinct protein, OK? Now even if you had all of the combinations of Vs, Ds, and Js, you still don't have the diversity that we see in the human body. So there is another process that further generates diversity, which is the fact that when these segments are getting shuffled, it's imprecise in that nucleotides can be inserted or deleted as these segments are joined, which generates greater amino acid diversity, and this is called-- it's called junctional imprecision. So this recombination is not precise, but it leads to the insertion or deletion of nucleotides. And if there's a multiple of 3 nucleotides either inserted or deleted, then you get a functional antibody. Why is it that it has to be a multiple of 3? Jeremy? AUDIENCE: Otherwise, you end up with a frameshift mutation. ADAM MARTIN: Exactly. Right? This is on the N-terminus side of the gene, right? So if you inserted one nucleotide between V and J, then the downstream portion of the gene, the downstream part of the open reading frame, would be out of frame and wouldn't generate a functional protein. OK, so it has to be a multiple of 3. Yeah, Georgia? 
AUDIENCE: How is junctional imprecision lymphocyte-specific? Or is it not? ADAM MARTIN: It's just that RAG1 and RAG2 are turned on specifically in the lymphocytes as they mature. AUDIENCE: And that also affects the insertion, deletion? ADAM MARTIN: Well, if you don't have recombination, you can't get junctional imprecision, right? The junctional imprecision is a consequence of the recombination process itself, right? So if you're not having recombination, you're not having any junctional imprecision because you're not generating a junction. OK, now there's one more thing that's important here, which is something that happens not as a consequence of this recombination process but as a consequence of activating the T cell response, which is that in addition to these variations, there's also something known as somatic mutation. So there's an elevated mutation rate at the IG locus that further increases the diversity of the amino acid sequence at these variable regions of the antibody, OK? Another way this is referred to, because it can increase the affinity of the antibody for an antigen, is affinity maturation, so these are synonymous. OK, so-- and this depends on the T cell, the cell-mediated branch of adaptive immunity, so this is T-cell mediated. So one other aspect of this process that I want to talk about is until this recombination happens, the immunoglobulin gene is not expressed, so it's this recombination that leads to the expression of the-- either the heavy chain or the light chain gene, OK? And that's because the enhancer is sort of downstream in the gene, and by deleting the intervening sequence here, you bring the promoter in range of the enhancer, and now this gene is expressed, OK? But remember you have two copies of each of these genes. 
You have a parental copy and a maternal copy, and another feature of this system is that there is what is known as allelic exclusion. So the system is such that a B cell expresses only one antibody, and so if you had both alleles expressing, that wouldn't be the case, OK? So allelic exclusion makes it that if you get a recombination event that leads to a functional antibody for one of your sort of inherited copies of the gene, one of your alleles, it suppresses recombination on the other one, OK? So you will only get one of these genes, one heavy chain and one light chain, expressed per B cell. OK, so only one gene expressed so that each B cell only has one antibody. OK, I just wanted to point out, finally, that these junctions between V-D and J segments fall right in this CDR-3 region, so they're responsible for the high level of variability at the CDR or hypervariable 3 region. OK, and because of the allelic exclusion, each B cell expresses only one antibody, OK? So all of the antibody proteins expressed by that cell will be exactly the same. OK, so now the last property of the immune system we need to talk about is memory. And so the immune system needs to be able to recall past infectious agents that it's experienced, and so it needs-- I guess we're kind of personifying here, but it needs some sort of memory, right? It needs the ability to recall this, OK? And this is the principle behind vaccination, right? The way vaccines work is to put in one of these attenuated or inactivated foreign agents, such that your body is able to remember that later on when you get the real deal, and it's able to fight it off, OK? So the body has to be able to remember. And several ways in which this manifests itself, if we compare a primary infection, the first time you've seen an infectious agent, versus a secondary infection, they have very different responses from the standpoint of the adaptive immune system, OK? 
So if we consider the lag before your adaptive immune system really takes off, the primary response takes about five to 10 days, so it's a bit delayed, whereas the secondary response can be one to three days, OK? So it's faster. It's able to react faster when you see an infectious agent the second time. If we also just consider the magnitude of the response by considering how much antibody, the antibody concentration that's like put into your system, then the primary response is smaller and the magnitude of the secondary response is larger. So you basically-- your body's able to produce more antibody against an infectious agent the second time it sees it. Not only is the antibody amount better the second time, but actually the antibodies themselves are better antibodies, OK? And we can show that by thinking about antibody affinity, which is how tightly the antibody recognizes the antigen, and I'll give you numbers that represent the dissociation constant for an antibody to a given antigen. So the lower that number is, the tighter the binding. So for the primary infection, the antibody affinity is weaker on the order of 10 to the negative 7th molar in terms of KD, and this secondary infection generates antibodies that are functionally quite better. They bind much tighter. It can be less than 10 to the negative 11th molar, which is sub-nanomolar. Right? That's a really tight interaction between two molecules. So the antibodies, you get more of them, and they're better antibodies, OK? So what makes this memorable is that when-- what lasts in your body from the first time you see the agent to the next is there's a type of B cell known as a memory B cell, and this memory B cell will express a given antibody, and that antibody will be specific to the substance you saw previously. And because recombination is-- this recombination is irreversible, then that B cell is going to remember that antibody because it's still encoded in the genome. 
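The affinity numbers quoted above can be made concrete with the standard 1:1 binding isotherm, fraction bound = [Ab] / ([Ab] + KD). The 1 nM antibody concentration below is just an illustrative choice, not a number from the lecture:

```python
def fraction_bound(ab_conc_m, kd_m):
    """Fraction of antigen occupied at equilibrium for simple 1:1 binding,
    assuming antibody is in excess so free antibody ~ total antibody."""
    return ab_conc_m / (ab_conc_m + kd_m)

ab = 1e-9  # 1 nM antibody (illustrative concentration)

print(fraction_bound(ab, 1e-7))   # primary response, KD ~ 1e-7 M:  about 1% bound
print(fraction_bound(ab, 1e-11))  # after maturation, KD ~ 1e-11 M: about 99% bound
```

Under these assumptions, the four-orders-of-magnitude drop in KD takes the same amount of antibody from grabbing about 1% of the antigen to grabbing essentially all of it.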
So the memory results from V(D)J recombination being irreversible and the fact that these memory B cells stay in your body, even if the antigen is not present, so these also stay in the body. OK, so effective vaccines generate these types of cells, these memory B cells. OK, that's important if you want an effective vaccine, that you have these B cells that retain information about the past infection. All right, so what exactly is it that the antibodies do? So I'll talk about effector functions of antibodies. So antibodies can bind to a foreign substance and interfere with the normal function, right? If you have a bacteria and maybe the antibody binds to some part of the bacteria to interfere with that bacteria getting into the cell, and this type of effect is known as neutralization. If you had an antibody that bound to something like a bacteria, you could also have it recruit phagocytic cells to internalize that bacteria, and so you could also induce phagocytosis. In addition, antibodies, when bound to a foreign substance, if that foreign substance is a cell, then it could recruit a killing cell to kill that cell, so there's also a killing aspect to this, OK? So what's in this diagram here is a type of cell known as a natural killer cell that is killing its target cell, and so you can kind of think of this cell as the Terminator, OK? So right, if the natural killer cell recognizes this target here, then it's hasta la vista, baby, and that cell is dead, OK? I just want to point out one thing that I mentioned before, which is that antibodies can be leveraged to generate treatments for certain types of diseases. And we talked about a drug called Herceptin-- or not a drug but a-- it's an antibody, but it's a treatment for HER2 positive breast cancer, so this is used to treat HER2-positive breast cancer. 
And it's really been a nice success story in the cancer field because what this-- what Herceptin is-- it was derived from a mouse antibody, so this is a mouse monoclonal antibody that recognizes this HER2 growth factor receptor, which is overexpressed in 30% of human breast cancers. And what Herceptin is-- researchers took this mouse antibody and engineered a human antibody to have the mouse sequence at its complementarity-determining regions, such that you have a human antibody that won't be sort of removed by the human immune system but will recognize HER2 and recruit human immune cells to HER2-positive cells, possibly killing those cells or binding to HER2 and somehow neutralizing the activity of HER2 on these cancer cells. So antibodies can be very useful for therapeutics, as well as being useful in our own bodies to mediate immunity. OK, we'll talk about T cells on Friday. Remember to bring your projects.
MIT 7.016 Introductory Biology, Fall 2018
Lecture 15: Genetics 4 -- The Power of Model Organisms in Biological Discovery
ADAM MARTIN: All right, so today, I'm trying something different this semester. I wanted to tell you guys about one way that you can discover things in biology. And this is going to fall in the genetics cluster of lectures. I want to show you how you can go from being interested in some property of an organism, or even its behavior-- how would you go from there to identifying genes and mechanisms that are responsible for that type of behavior or appearance? OK, so today, we're going to go from some type of phenotype, or process that you're interested in, such as maybe the appearance of an organism. But we're even going to go more abstract than that because at the end, I'm going to tell you if you're interested in behavior, you can try to figure out the genes and mechanisms that are involved in determining the behavior of an organism. So the question is, how do you go from something you're interested in learning about an organism to actually identifying genes and mechanisms that are important for that? And on my title slide here, I have three fruit fly mutant phenotypes that you can see, and each of these mutants defined genes that were subsequently found to be present-- or homologous genes were present in humans and were shown to play important roles in human biology. So later on in the lecture, I'll explain what each of these phenotypes is. But first, I want to just highlight the importance of model organisms and their use in biology. You've been seeing them already. I've talked a bit about flies, we talked about Mendel's pea plants. I just now have a compendium of model organisms that I'm going to throw up to tell you about. So we've talked about bacteria and the importance of bacteria in elucidating the flow of information from DNA to RNA to protein. Bacteria were also extremely important for elucidating the initial mechanisms of gene regulation. 
I mentioned yeast a little bit in the last lecture when I mentioned tetrad analysis, but we'll talk about yeast a lot more later, when we start talking about the regulation of cell division and the cell cycle, because yeast played a pivotal role in identifying the gene that's critical for this decision of a cell whether or not to divide. Also, one terrific model organism in plants is Arabidopsis thaliana. And Arabidopsis has also played an important role in elucidating mechanisms of development, but also, in making advances in scientific research that relate to agriculture, such as disease resistance and pathogen interactions. So the two heroes of today's lecture will be the roundworm, Caenorhabditis elegans, and the fruit fly, Drosophila melanogaster. But before getting into studies in those organisms, I wanted to highlight a couple important vertebrate-model organisms-- the zebrafish and the mouse. And so you can see the mouse is our lab mascot, but it's also an important genetic model. In particular, there are many models in mice that mimic cancer. And so these have been useful for elucidating mechanisms of cancer. So it's really unethical for us to do a lot of different types of experiments on humans, but I will mention that there are human cell lines, such as the HeLa cell line, and these also play an important role in biology research. But these cell lines are taken out of context. They're not functioning in the context of an entire organism. So human cell lines are important, but you have to understand that they're sort of an in-vitro system, that they're out of the context of a functioning organism. OK, so why is it that we research these model organisms? So there are some practical reasons. Most of them are fairly small, and they're easy to house large numbers of them in a lab. They're often cheap to house in the lab and work with. Also, they develop fast. 
And especially when we consider genetics, the rate-limiting step in genetics research is the time it takes from the conception of an organism to the time that that organism can reproduce sexually. So these model organisms-- most of them reproduce very quickly, and so that accelerates the pace of research. But maybe most importantly is the fact that we are related to each of these model organisms through evolution because we all arose from a common ancestor. And just to highlight this example, using the fruit fly, the fruit fly has 17,000 genes when compared to our 20,000 genes, so we have roughly the same order of magnitude number of genes as the fruit fly. And so let's think about important genes. Let's think about just genes that are associated with human diseases. If we just consider human-disease causing genes, 75% of the human-disease causing genes have a homologous gene in the fruit fly. So we're similar to these model organisms, such as the fruit fly, in particular, genes that are important for understanding human disease. So I came across this quote when preparing for the lecture, and I liked it. It's from John Rinn, who is a contemporary of ours. And John Rinn said, "Genetic approaches are as fundamental to biology as math is to physics." And I think this is an apt quote because both genetics and math themselves are scientific disciplines, but they can be leveraged to learn things about biology and physics respectively. So genetics really plays a fundamental role in biology and the discovery of new biological mechanisms. OK, so now, I want to just briefly take you through, how is it that we discover things using genetics? And I'm going to take you through a type of approach that's called a "forward genetic screen." And I'll get to the orchestra in just a minute, but I briefly just want to define what a forward genetic screen is. 
So a forward genetic screen is a type of approach you would do if you don't know the genes that are involved in a specific process, but you want to identify them. So in a forward genetic screen, you don't know the genes and mechanisms involved. So you don't know what these genes are. You might not even have a genome sequence of the organism. You don't need a genome sequence in order to do a forward genetic screen because you don't know the genes, but you're interested in a particular aspect of development, or of organismal function, and you want to identify the genes that are important for that. So when starting a forward genetic screen, you have to, then, infer what a possible phenotype would look like if you broke genes that were involved in that process. So the mantra of geneticists is that we are going to break genes and then look at the result and see if it gives the phenotype we're interested in. So in a forward genetic screen, you're looking for a phenotype that you would expect if you affected a certain process, if you disrupt a process. So think about this orchestra up here. Let's say you are interested in what regulates each of these sections in the orchestra. What is the master regulator of this orchestra, if you will? And so conceptually, what a genetic screen would involve is taking hundreds, maybe thousands, of orchestras like this one, and just shooting an individual in this orchestra, and removing them from the orchestra. And so let's say you remove-- let's say you remove this guy right here, then maybe nothing really happens because there are, like, 30 or 40 other violinists, and that's not the major control circuit. So then you would infer that this gene is not important for the regulation of the orchestra. But let's say you took out Bernstein, and then you listen to the orchestra and find that the sections are uncoordinated, the bassists start doing crazy things on their own. 
So then, by the logic of Drosophila genetics, you would name that gene uncoordinated and infer that that gene has some important role in coordinating the different sections of the orchestra. So the goal in genetics is to identify a mutation that alters a gene function that gives you a phenotype that you're interested in. And rather than taking a gun and shooting members of the orchestra, in genetics, you try to identify mutations. So you try to induce mutations. So we're looking for mutations. And these mutations could be spontaneous mutations, meaning you didn't do anything to induce it, but they just appear as a variant in the population. And so we've talked about Thomas Hunt Morgan and the white mutant, and the white gene. So the white mutant was a spontaneous mutation. Morgan's lab didn't do anything special. They just looked at lots of flies, and over the decades, they identified this one special male fly that they could work on. But nowadays, researchers have a way to accelerate this process. And so we can induce mutations. And the way we can induce mutations is by using some type of mutagen. So for example, you could have some sort of chemical mutagen that increases the error rate in DNA replication, or you could use radiation to induce DNA damage, and that essentially accelerates the frequency of mutations that occur in the genome of an organism. And so the process of mutagenizing an organism isn't specific to genes. You're just inducing random mutations across the genome of the individual. Let's say this piece I'm drawing here is part of a chromosome, and these boxes are genes. You're inducing random mutations, and maybe one mutation hits this gene. So that's a mutation that might affect the function of an organism. You might have other mutations that are outside genes and may have no effect, or you could have a different kind of mutation which isn't in the coding region of the gene, but maybe affects the regulation of this blue gene over here. 
So this is a random process. If you're feeding an organism some chemical mutagen, you're inducing random mutations in different places, and you don't know which are the ones that you want until you look at their phenotypes and try to find the needle in the haystack. So let me show you some examples. So let's say we were interested in our body pattern. We all have a body. Our head is in our head position, our ass is in our ass position, we have arms that are in the right position, our legs are in the right position. Let's say we're interested in figuring out what genes were responsible for that body pattern. What kind of a mutant might you look for? Anyone? Rachel. AUDIENCE: [INAUDIBLE] ADAM MARTIN: What's that? A mutant with its body parts in the wrong place. Maybe you have like a leg where your arm should be coming out, or maybe you have two heads, or something like that. So you look for some sort of defect in the pattern formation. And so this would be, obviously, unethical to do in humans, but in model organisms, we can actually find these types of mutations. I'm just going to highlight a couple mutants. This one's an obvious one. So this fly has two wings. You can see them folded over each other right here. But there's a specific class of mutant that was called "wingless"-- where the flies have fewer than two wings. This particular fly has only one wing. And this wingless mutant defined a gene that became known as wingless, and it's a gene that has a homologous gene in humans. And this particular gene defines an entire signaling pathway, which-- whoops, sorry-- which is important in stem-cell biology and is also overactivated in cancer. But obviously, we don't have wings, so this gene didn't get discovered in humans. It was discovered in flies, and then only later on, it was inferred-- or it was discovered that there is a related gene in humans. OK, so that's one example. One of the other phenotypes I showed you is called "notch." 
Normally, a fly wing has a nice, smooth margin. But notch mutants have wings that have this chunk taken out of them at the end. So again, the gene became known as "notch" because of this fly phenotype. But again, there's a human notch, and human notch, again, is involved in human diseases, such as cancer. OK, so that's two examples. But now, I want to talk to you about how one might find the needle in the haystack. How can you have a concerted effort to identify genes that have that function in a given process? I'm going to tell you about work done by Eric Wieschaus and Christiane Nusslein-Volhard because they did one of the more famous genetic screens that had been done, and they won the Nobel Prize for their results in 1995. So let me take you through this classic genetic screen. In this screen, they're going to induce mutations. So I'm going to take a parental generation. And this screen was done using fruit flies. And they took male fruit flies and treated the males with a mutagen to induce mutations in the gametes of these male flies that would then be passed to subsequent generations. And they mated the mutagenized males to females, and then they went on to look at-- to isolate individual F1 progeny. So we're going to look at individual F1 progeny in this generation. So the F1 progeny have the potential to get one mutagenized chromosome from their father and will get a normal chromosome from the mother. So I'll just draw this like this, where independent mutations are going to be different colors. So that has maybe a mutation on one of its chromosomes. Another fly, because this is random, might have a different mutation on the same or a different chromosome. I'll draw a couple more. And maybe one of these flies doesn't have a mutation that's been induced. And then I'll draw just one independent mutation right there. So again, this is random. So to see random mutations, you have to take individual flies. 
Now, are you going to see a phenotype in this F1 generation? So Miles-- is it Malik or Miles? Sorry. AUDIENCE: Miles. ADAM MARTIN: Miles. Good. OK. AUDIENCE: Most likely not because any mutation [INAUDIBLE] spontaneous would be overshadowed by the direct gene, the female. ADAM MARTIN: Exactly. So what Miles just said is that these mutations, if they're loss-of-function mutations, are going to be recessive-- because they're going to be overshadowed by a normal functioning gene on the other chromosome. So if you're looking for a loss-of-function mutation, that's likely going to be recessive, and you would not see it in the F1 generation. So in order to look for organisms that have incorrect body patterning, you need to get a situation where each of the mutated chromosomes is homozygous recessive. And to do that, you have to do more crosses. So what the researchers did was to take independent individual organisms and cross them all, again, to non-mutated flies. And remember, flies don't self-cross. So they need a male and a female that have the same mutated chromosome mating to each other in order to see a recessive phenotype. So because you only have an individual fly for each of these, you need to get more than one fly with the same mutation. So they mate to a normal fly. But now, you can get multiple F2 males and females that will have the same mutated chromosome. So here, let's draw these. They're still heterozygous at this point. But at this point, you've just generated multiple organisms, male and female, that have the same chromosome-- one that arose from, essentially, the same chromosome in the father, or the male gamete in the parental generation. Oh, I did green. OK. OK, so now we have males and females, both of which are heterozygous. And I'm skipping some of the details of this cross. You'll have to take my word for the fact that they can keep track of which chromosome is which.
And I'm not going to tell you how they did that because it's fly genetics, and it's not terribly important that you know it. So now, you have males and females. And now you can look at an F3. And from each of these lines, you're going to cross siblings to each other. So you're doing a sibling cross. What fraction of the progeny for each of these sibling crosses should be homozygous for the mutant chromosome? Yes, Steven. AUDIENCE: One-fourth. ADAM MARTIN: One-fourth, exactly right. So Steven's exactly right. A quarter of the progeny should get two copies of the mutated chromosome. So 25% should be homozygous recessive. And so you can screen this F3 progeny for each of these independent lines and look for flies at some stage of development that are defective in patterning. And so I'll show you how-- where'd my clicker go? All right, so they're looking for incorrect body pattern. Yes, your question-- what's your name again? AUDIENCE: Georgia. ADAM MARTIN: Georgia, yeah. AUDIENCE: Would there be some [INAUDIBLE] in the F2 [INAUDIBLE] normal? [INAUDIBLE] ADAM MARTIN: Yes. So they're able to select for the mutated chromosome, yeah. And I'm skipping over how they did that because it's kind of esoteric. But you're right. Some would be normal. But I'm just telling you they're able to figure out the ones that have the mutant chromosome, and they made sure to keep those in the next generation. Good question, Georgia. All right, so they can screen the F3 for a phenotype. It turns out all of the mutations they're interested in are lethal, so they have to look at the larval stages for ones that have a defect in patterning. So this is what a Drosophila larva looks like. And you can see it has a segmental pattern here. This is the head of the larva, the tail of the larva. But you see there are these segments that alternate between smooth cuticle and hairy cuticle.
They have things called "denticles," which are these hairlike projections, which are essentially what the maggot uses for traction so it can crawl around. So they're basically just looking at maggots here and looking at the pattern of segments in the maggots. But what they found is a mutant. And here's a mutant here which just has these hairlike projections all across the cuticle without these intervening regions of naked cuticle. And because there's a lot of hairlike projections here, it reminded the researchers of a hedgehog, and so this mutant became known as "hedgehog." And the hedgehog gene was the founding member of an entire signaling pathway, known as the "hedgehog pathway," that plays important roles in human development and also human cancer. So the human gene for hedgehog, or one of the most important ones in humans, is known as sonic hedgehog. So you see that geneticists have kind of an odd sense of humor. But the sonic-hedgehog gene is important in cancer. And actually, there are now a number of drugs that are being developed to target the hedgehog pathway. And one was approved back in 2012 for use in treating basal-cell carcinoma. And there's currently another drug that's in phase-II clinical trials for treating some forms of leukemia. So this is a story that goes from identifying this weird fly mutant all the way to clinical trials, developing drugs whose purpose is to inhibit this signaling pathway, for which hedgehog is the signal-- the extracellular ligand. OK, so now, I'm going to switch gears and tell you about another story. It's a bit like this one, except this one has an MIT connection. So I'm going to tell you about work done at MIT in the worm Caenorhabditis elegans. And this work was done in the lab of Robert Horvitz, who is a member of our biology department. And Robert Horvitz, for his work, won the Nobel Prize, along with John Sulston and Sydney Brenner, for their work on how cells decide what fates they give rise to.
And one thing that the Horvitz Lab has done is to elucidate the mechanisms that determine whether or not a cell lives or dies during development. So if we think about the development of an organism, we all start from a single cell. And the cell divides into different cells, but different cells in our body give rise to different cell fates. So if we consider just a generic cell division here, you have a parent cell, A. It might divide into cell B and cell C. And maybe cell B might have a particular fate. It might go on to develop into a neuron, or a muscle cell, whatever. It doesn't matter. But one fate that Horvitz, Sulston, and Brenner saw is that sometimes, cells-- their fate was to just die. And this was very stereotypical. You get a death. And this was defined as "programmed cell death" because it followed a very stereotypic pattern, where the same cell, each time, would undergo cell death, so it seems like there's a program for it. This is also called "apoptosis." So what really enabled this work is the biology of C. elegans. And, here, I'm showing you C. elegans development. This is the C. elegans zygote. This cell divides into two cells that are different. One is called AB, one is called P1. And we know exactly what the fates for the descendants of both these cells are based on the work of Sulston, Brenner, and Horvitz. Now, you get a four-cell stage where AB divides into two cells, and P1 divides into two cells. And again, we know the fates of all these cells. And because C. elegans is a relatively simple organism, researchers can account for every cell-- there are only 959 somatic cells in every individual adult hermaphrodite. And this is stereotypic. Every worm has these 959 cells. How many cells do you have? More than or less than 1,000? You have more than 1,000 cells, right? That makes you much more complicated. So really, the practical aspect of C.
elegans is it has a much simpler composition, and they can track what happens to every single one of these cells. They know when every cell divides and what the daughter cells of that division will turn into. In other words, they know the entire lineage of this animal. So this is a picture of the lineage. That's showing you what happens to every single cell in the development of an adult worm. And what particularly interested Robert Horvitz is that 131 cells, during the development of this animal, underwent programmed cell death. And this is not random cell death. It's the same cells every time. They know what happens to every single cell in the development of this organism, so they know when there's a division, and one specific cell dies every time in the organism. So it's really a unique scenario, where you can really see what becomes of every cell that is present in the organism. OK, so now, let's talk a little bit about the death process. So I'll make the cell-death process simple. You start with a live cell. So you have a live cell. That live cell dies such that you then have a dead cell. And then after the cell dies, the remnants of that dead cell are engulfed by neighboring cells, such that you no longer see that cell. So the last step in this process is engulfment. Now, a key aspect of one of Horvitz's screens is that other researchers had identified a mutation that affected cell death that specifically blocked this engulfment process. And this gene is called ced-1. That's the first cell-death mutant that was identified. "Ced" stands for "cell death abnormal." So researchers in the C. elegans field started isolating these cell-death-abnormal mutants, where something happened in the cell-death process that was abnormal. And I'll tell you how this was leveraged by the Horvitz lab to identify, basically, a pathway of genes that were involved in cell death. So this is now a worm.
And what you see in this worm are these bubble-like structures that are cells that are dead but haven't been engulfed. So these dead cells, because they're not engulfed, are visible in the adult worm. And so you can basically just look at adult worms and see whether or not these bubble-like structures are present in a ced-1 mutant. In a wild-type worm, you don't see this, and so it's harder to see. But this provided a visual assay to look for mutations that block the cell-death process, because if you block the process upstream of ced-1-mediated engulfment, you no longer will see these bubble-like structures. And so that is what the Horvitz Lab did. They started with ced-1 mutant worms. So they're starting with the ced-1 mutants. And they then mutagenized these ced-1 mutants. And so they're essentially looking for second mutations in this animal that will affect the death process. OK, I'll take you through the logic of this. So for the remainder of the generations I'm going to tell you about, they're all ced-1 mutant. Ced-1 is homozygous, in this case. And the worms are hermaphrodites, meaning they have both male and female sex organs. And therefore, they can self-fertilize. So this is a self-cross. And so ced-1 remains homozygous mutant throughout all these crosses. Now, similar to the fly screen, you get individual worms here. Let's get some colors so we can compare different chromosomes. All right, so this chromosome might have one mutant, this chromosome could have another mutant, and this chromosome could have another mutant, and maybe this one doesn't have a mutant. OK, but again, these are heterozygous animals, so if you're looking for a loss-of-function mutant, you would not see the phenotype at this stage. So what was done in this screen is, because these are worms and they're hermaphrodites, they basically allowed each of these worms to self-cross. So now, we're talking just about a self-cross.
And because a single worm has just one of these chromosomes, when it undergoes a self-cross, a quarter of the progeny will be homozygous recessive for the mutation. OK, so here, we have the F1 generation. And now, they can just look at the F2 generation, and they're looking for some fraction of the worms resulting from each individual that has a phenotype that looks like a cell-death mutant phenotype. And so they're essentially going to screen through the F2 generation and look for worms that fail to have these bubble-like structures in the adult worm. So they're looking for a loss of these refractile structures. And they got one. This is now a double mutant between ced-1 and ced-3, and you see how you no longer have these big bubble-like structures in the worm. So they identified a mutant, and thus a gene, that's called "ced-3," which basically causes a failure of the cells to undergo cell death. Now, given what I've shown you, is this necessarily a mutant that's involved in cell death? Can you think of an alternative explanation for why you would lose these sort of bubble-like structures? What else could be happening? When you do a screen like this, and when you do science in general, you have to think through all the different types of possibilities. Yes, Georgia. AUDIENCE: [INAUDIBLE] ADAM MARTIN: Excellent. Georgia said you could have re-mutated ced-1. So one possibility is that you isolated a ced-1 revertant, which means you changed its DNA sequence so that now the ced-1 gene is functional. So one scenario here is you could have some type of revertant, or you could have a suppressor mutation. Maybe you have a mutation that bypasses the function of ced-1 such that now the cells can engulf the dead cell. So you could also have a suppressor. Or the alternative scenario, the one that we want, is that we've affected the cell-death process and identified something that is a bona-fide cell-death mutant.
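The self-cross arithmetic above, that a quarter of the progeny of a heterozygote self-cross come out homozygous recessive, can be checked with a quick simulation. This is an illustrative sketch, not anything from the lecture; the function name and allele labels are made up.

```python
import random

def self_cross(n_progeny=100000, seed=1):
    """Simulate a self-cross of a heterozygous (m/+) hermaphrodite.

    Each progeny draws one allele independently from each gamete pool;
    'm' is the mutagenized allele, '+' the wild-type allele.
    Returns the fraction of progeny that are homozygous m/m.
    """
    rng = random.Random(seed)
    homozygous = sum(
        1 for _ in range(n_progeny)
        if rng.choice("m+") == "m" and rng.choice("m+") == "m"
    )
    return homozygous / n_progeny

# Mendelian expectation: 1/2 * 1/2 = 1/4 of the progeny homozygous recessive
print(round(self_cross(), 2))
```

The same arithmetic applies to the fly sibling crosses earlier in the lecture: whenever both parents are heterozygous for the same mutated chromosome, a quarter of the offspring are homozygous for it.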
How would you differentiate between these two? Any ideas? Should this have an extra cell? No. Steven says no. I agree. There should be no extra cell here, because if you restore the phagocytic process, then the cell should die, it should be engulfed, and there shouldn't be an extra cell. But if this is a bona-fide cell-death mutant, then you should have extra cells. And it turns out the ced-3 mutant blocks all 131 of these cell deaths so that you have 131 extra cells. Because they know the entire cell lineage of this worm and can see whether or not cells are present, they can tell if there are extra cells or not. And therefore, that allowed them to infer that they had an actual mutant that was affecting the mechanism that promotes the cell to die. And this shows that cell death is an active process. It's not just some random event that's happening. It's an active process that is controlled by genes. So there's an active mechanism involved in the cell death. Any questions about how these screens and crosses go before I move on? OK, so I have one more story to tell you about, and this relates to behavior. So now, I want to tell you about behavior. We all behave, some of us better than others. So how is it we can go from something as abstract as behavior to specific genes and mechanisms that control this? And I'm going to tell you about work that was awarded the Nobel Prize last year. Two of these researchers spent their careers at Brandeis. And so what they discovered is the mechanism that controls circadian rhythm in organisms. So circadian rhythm is a behavior. We are awake during certain parts of the day and are asleep at night. If you're hidden from the light-dark cycle, you continue this cycle for some amount of time. So there's something intrinsic in our system such that we want to exist on this 24-hour wake-sleep cycle. That's why it's a pain in the ass to travel to the other side of the world.
So we want to identify what controls this behavior. What, then, might be a phenotype you might look for? What would happen if you break circadian rhythm? Yeah, Rachel. AUDIENCE: [INAUDIBLE] ADAM MARTIN: It wouldn't have a nice, clear wake-sleep cycle. Exactly. So this screen-- what they did is to look for flies where they didn't have this robust wake-sleep cycle. So you basically look for flies that are awake at night and identify the mutations that cause this to happen. OK, I'll show you an example of what the data looks like. In each of these lines, each tick mark is a measurement of fly activity. So what you see are flies asleep, they're awake, they're asleep, they're awake. And researchers identified mutants that didn't show this 24-hour cycle. So some are just totally arrhythmic. Other mutants alter the period of this cycle so that it's either longer or shorter. So this is a really elegant way to look for genes that are important for circadian rhythm. And I'll take you through the genetic screen. This was done by Konopka and Benzer. And they did a nice screen. It involves a genetic trick, which I'll show you. There's a type of chromosome called "attached-X" in flies, where two X chromosomes are fused to each other. And if a fly has two X chromosomes, it's female. If it just has one X chromosome, it's male. So that determines fly sex. So then these researchers took males and mutagenized them. And they crossed these males to attached-X females. And when you cross males to attached-X females, something clever happens, which is that half of the progeny die, because you need an X chromosome and you can't have three X chromosomes. But your females are all attached-X plus a Y. So the females actually get their X chromosomes from their mom, which is the opposite of the way it normally works. And males get their X chromosome from dad, because the attached-X strain has a Y chromosome. So this is a little bit of a genetic trick.
The reason it was done in this case is they wanted to mutate the X chromosome and then have the fathers pass on their X chromosome to their sons, because there's only one copy of the X chromosome. So if you get a mutation on X, you don't have to make it homozygous, because there's only one copy of it. So it's a little bit of a genetic trick that, in this case, saved the researchers a generation on their screen. OK, so you take this, and then you identify males now that have a mutated X chromosome, and they have only one X chromosome, so you should be able to observe the behavior even here. But to establish a line of flies that have this mutation, they then took these F1 mutated males and crossed them, again, to attached-X flies. And again, in doing this, all the males from this cross are mutant. So now, you have multiple males, all of which are mutant. So you can start with a single male F1. And by crossing it to this strain here, you get a lot of males now that have the mutant chromosome, and you can look at their behavior to determine whether you've affected circadian rhythm. Does everyone understand how the attached X works here? It's a little bit of a trick, but flies have these tools that you can use to save time in the lab. OK, so these mutants that the Benzer Lab identified had an altered period of the sleep-wake cycle, and therefore, the gene was named "period." So this screen identified a gene called "period." And there's a homolog of the period gene in humans, and it is associated with familial advanced sleep-phase syndrome. So defects in the genes that were identified in Drosophila actually are relevant to human sleep disorders. All right, I'm all set.
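The activity records described in this lecture can be analyzed for periodicity in a simple way: a rhythmic fly shows a strong autocorrelation at a lag near 24 hours, while period mutants peak at shorter or longer lags. This is a hedged sketch of the idea, not the analysis Konopka and Benzer actually used; the function and the toy activity traces are invented for illustration.

```python
import math

def dominant_period(activity, min_period=16, max_period=32):
    """Estimate the period (in hours) of an hourly activity record by
    finding the lag with the highest autocorrelation.

    `activity` is a list of hourly activity counts. Lags are searched
    between min_period and max_period hours, which loosely mirrors
    classifying flies as normal (~24 h), short-period, or long-period.
    """
    n = len(activity)
    mean = sum(activity) / n
    centered = [a - mean for a in activity]
    var = sum(c * c for c in centered)
    best_lag, best_r = None, -math.inf
    for lag in range(min_period, max_period + 1):
        # Unnormalized autocorrelation at this lag
        r = sum(centered[i] * centered[i + lag] for i in range(n - lag)) / var
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# A fly with a clean 24-hour rest/activity cycle: active 12 h, quiet 12 h.
wild_type = ([1] * 12 + [0] * 12) * 7
# A hypothetical short-period mutant cycling every 19 hours.
short_period = ([1] * 10 + [0] * 9) * 9
print(dominant_period(wild_type), dominant_period(short_period))
```

An arrhythmic mutant would show no strong peak at any lag, which is why real analyses also look at the height of the peak, not just its position.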
MIT 7.016 Introductory Biology, Fall 2018. Lecture 23: Cell Cycle and Checkpoints.
ADAM MARTIN: And so today and for the remainder of the week, the theme is going to be the cell division cycle. And so we're going to really talk about the cell division cycle in every lecture this week with the penultimate lecture talking about how dysregulation of the cell division cycle results in a pathological condition known as cancer. OK, so here is now a cell going through the cell division cycle. It's entered into mitosis right now. And these guys here are the chromosomes of the cell. And you're going to see them line up at the metaphase plate. And eventually they'll be segregated to the two poles of the cell. And then the cell will divide along its equator. OK, so I thought we could start today by just thinking about what has to happen in a cell during the cell division cycle. What has to happen during this process in order for the cell to replicate? Yes, Miles? AUDIENCE: For all the [INAUDIBLE] all those have to be duplicated so that each cell has a starting number. ADAM MARTIN: Mmm hmm. So Miles suggested the organelles have to be duplicated such that the daughter cells can inherit those organelles. And that's correct. What else has to happen? Anything else have to be duplicated? Stephen-- AUDIENCE: DNA has to be duplicated. ADAM MARTIN: The DNA, the nuclear DNA, the chromosomes, have to be duplicated. So the chromosomes have to be duplicated-- duplicated. What else has to happen every cell cycle? What would happen to the size of the cell just when it divides? Yeah, Udo? AUDIENCE: It would grow. ADAM MARTIN: So Udo is suggesting that the cell has to grow, right? Because if the cell didn't grow, then cell division would make smaller and smaller and smaller cells. And so another thing the cell has to do during the cell cycle at some point, it has to grow in size. OK, and what's the final point of the cell cycle? What happens? What's kind of the goal of the cell cycle? Yes, Stephen? AUDIENCE: Undergo mitosis. ADAM MARTIN: To undergo mitosis. 
And so the cell has to physically divide, right? The chromosomes have to be segregated, and the cell has to physically divide. OK, so you can think of the cell cycle this way: the goal of all these events is to get one cell to become two cells. So you need chromosome segregation. And you want an equal segregation of genetic material into two daughter cells after the cell divides. OK, so today, we're going to unpack the mechanisms that allow a cell to do many of these things and how it's regulated. And one thing to think about is, what is going to determine whether or not a cell enters the cell division cycle and undergoes a division? What do you think are some things that cells would care about if they're trying to decide whether or not to divide? So one thing a cell is going to care about in a multicellular setting is whether it's getting appropriate communications from other cells that are telling it to divide. And remember, Professor Imperiali told you about signaling. And one example was receptor tyrosine kinase signaling. And this is just a diagram showing you the Ras MAP kinase pathway. And one of the effects of this signaling pathway is to promote cells to enter the cell division cycle in order to divide, OK? So in a multicellular organism, this signaling is important. For unicellular organisms, cells might care about whether or not there are nutrients present or whether or not the cell is of the right size, OK? So cells have to make the decision. I'm going to focus mainly on cell communication and how that might change the cell physiology. And I want to start by just giving you a little bit of an overview of the cell division cycle. So there are four distinct phases of the cell division cycle. And I split them into two classes. There are phases where things physically happen to the cell. And those are S phase and M phase. And so during S phase-- S phase stands for DNA synthesis.
And it's during this phase when the nuclear DNA is replicated, OK? And so for each of the action phases, if you will, there's some sort of machinery that's involved in changing the cell. In the case of S phase, that would be DNA polymerase and helicases that mediate the replication. So helicases, DNA polymerase. And you remember from earlier in the semester when we talked about DNA replication all of the proteins that are involved in replicating a chromosome. OK, the other phase where something really physical happens is M phase, which is mitosis. And during M phase, this is when the sister chromatids of each chromosome are separated to the daughter cells. OK, so this is when chromosome segregation happens. And again, during this phase, there has to be some sort of machine that gets activated at this phase of the cell cycle in order to, in this case, physically separate the chromosomes from each other. And that machine in M phase is the mitotic spindle, which you'll recall is a machine that consists of microtubules. So these are microtubules. And in each cell cycle, these events have to happen. But they have to happen in order, right? You need DNA synthesis before you segregate the chromosomes, right? So there has to be an order to this. So these other phases are called gap phases. And there are two of them, G1 and G2. And events happen during these gap phases to help to ensure that things happen in the right order. And so in G1, the cell has to decide whether or not to enter into the cell cycle. So some things to consider here, the one that I mentioned before is whether or not there are growth signals present. OK, so for a metazoan cell, it's important that the cell doesn't just divide without any regard to what's going on in the surroundings. There needs to be a communication between cells such that there's the proper balance of cell division in a tissue for specific cell types. 
OK, so if the cell goes from G1 to S, this is when the cell commits to the cell cycle. If the cell passes this G1-to-S transition, then it has committed to going through the entire cell cycle. OK, the other gap phase, G2, provides a kind of quality control: it has to ensure that the DNA is replicated before the cell moves on to mitosis. So you can think of G2 as a phase where there's a quality control mechanism, and the cell cares about whether or not its DNA is replicated. OK, so now I want to tell you basically the answer as to how this system works in a eukaryotic cell. And this requires a control system. And what this control system does is to ensure that these different events that happen during a cell cycle occur in the right order. OK, so this control system is going to ensure proper order. And there are two main components to this control system. The first is called cyclin-dependent kinase, or CDK. And so cyclin-dependent kinase is a kinase, so it can post-translationally modify other proteins by adding a phosphate group to them. And so it's through that mechanism that cyclin-dependent kinase can modify events and control when they happen in the cell cycle. OK, and the other key component of the system is a protein called cyclin. And cyclin is the regulatory subunit of the CDK. So this is the regulatory subunit of CDK. And so without the cyclin, the cyclin-dependent kinase is inactive, OK? So the CDK needs the cyclin to have activity. So cyclins increase the activity of, or activate, CDK. OK, but there are different flavors of cyclins. There are actually many different cyclins, at least four classes. And these cyclins appear at different phases of the cell cycle and then go away.
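The control logic so far, that CDK is inactive without a cyclin bound and that the bound cyclin picks the substrates, can be summarized in a toy lookup. The substrate names here are illustrative placeholders reflecting the lecture's examples, not a real substrate list.

```python
# Toy lookup capturing the logic: CDK is only active when bound to a
# cyclin, and the bound cyclin determines which substrates get
# phosphorylated. Keys and substrate names are illustrative only.
CYCLIN_SUBSTRATES = {
    "G1/S cyclin": ["proteins committing the cell to the cycle"],
    "S cyclin": ["helicase", "other DNA replication machinery"],
    "M cyclin": ["mitotic spindle components"],
}

def cdk_targets(bound_cyclin):
    """Return what CDK phosphorylates given its bound cyclin.

    No cyclin bound means CDK is inactive and phosphorylates nothing.
    """
    if bound_cyclin is None:
        return []
    return CYCLIN_SUBSTRATES.get(bound_cyclin, [])

print(cdk_targets(None))        # inactive kinase: no targets
print(cdk_targets("S cyclin"))  # replication proteins, e.g. helicase
```

The point of the table structure is that one kinase reused across the cycle can drive very different events, because the oscillating regulatory subunit, not the kinase itself, carries the phase information.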
OK, so the cyclins oscillate-- that's why they're called cyclins, because they come on and off. And which cyclin is present determines what CDK is going to phosphorylate, OK? So these cyclins also determine substrate specificity of the kinase. So which cyclin is present determines what protein the CDK phosphorylates. So I've outlined three classes of cyclins here. Here is a G1/S cyclin in complex with CDK. And then here's an S cyclin in complex with CDK in red. And so what do you think the S cyclin CDK is going to phosphorylate, what kind of protein? Anyone have a guess? Miles-- AUDIENCE: Helicase. ADAM MARTIN: Yes, you're actually exactly right. It's going to phosphorylate and activate things that are involved in DNA replication. And Miles is right. Helicase is one of the proteins that gets phosphorylated by S cyclin CDK. And then similarly, M cyclin CDK, which appears here in blue during mitosis, is going to phosphorylate proteins that are involved in forming the mitotic spindle, so that it induces cell cycle events that happen specifically during mitosis. OK, so you all see that which cyclin is present determines what cell cycle events are happening at a given time. And therefore, it's important that we understand how these cyclins appear at distinct cell cycle phases and whether or not that's the mechanism for the oscillation. Yes, Miles-- AUDIENCE: I have a question about [INAUDIBLE] question about mitosis [INAUDIBLE]. So I know that microtubules make sure the chromosomes separate from the cell. How does a cell regulate having half of the organelles on each side [INAUDIBLE] split [INAUDIBLE]. ADAM MARTIN: Some organisms employ motors such that organelles are physically sort of put in daughter cells. But I think often it's just random, right?
If you dissolve the organelle and it becomes kind of a bunch of different vesicles, then if you just split in half, there's a high probability that each daughter cell will get parts of the organelle, OK? So organelles can change their morphology during the division process in such a way that they're able to be inherited by both daughter cells. OK, that's an excellent question. So it's the cyclins that really determine what's happening. And I just wanted to point out here that one of the main transcriptional targets of RTK signaling is this G1 cyclin, OK? So it's these signaling pathways that lead to the increase in G1 cyclin that start the cell on this process of entering into the cell cycle, OK? So you get these cyclins getting synthesized. And the cyclins appear in a defined order. OK, so there are different cyclins, but they appear relative to each other in a stereotypical order. So they appear in order. And it's that order of the cyclins that defines which cell cycle events happen at what time in a cell. OK, now I want to tell you a little bit about how the machinery that's involved in the control of the cell cycle was discovered. And I'm going to start by telling you a little bit about budding yeast. And by showing you how this was discovered, it will give you a sense as to how this system controls the cell division cycle. You'll recall that budding yeast can exist as a haploid cell in addition to existing as a diploid cell. So there's a haploid/diploid life cycle. Also, one nice feature of budding yeast for this particular question is that you can infer the cell cycle phase of the yeast just by looking at its morphology. So budding yeast divides by budding. And the size of the bud indicates what cell cycle phase the cell is in, OK? So you can infer cell cycle phase from the morphology of the yeast cell. So for example, if we look up here, here's an unbudded yeast cell that's probably in G1. Here's a yeast cell with a little teeny bud on it.
That's probably in S phase. Next to it over here is one that has a slightly bigger bud. And here's one with an even bigger bud. And that one might be in G2 phase. OK, so because this bud grows in size over the course of the cell cycle, you can just look at a yeast cell and infer what the cell cycle phase is. OK, so I'm going to tell you about a genetic screen that was done to look for mutants that were defective in the cell division cycle. And these are known as cell division cycle, or CDC, mutants. Now, what type of yeast cell, haploid or diploid, might you want to screen mutants with? What would be the advantages of either one or the other? Yeah, Natalie? AUDIENCE: Would you do haploid, because if it's a recessive mutation [INAUDIBLE] expressed? ADAM MARTIN: Yes, so what Natalie suggested is to start with haploid mutants, because there's only one copy of each gene, such that if you hit it, you no longer have a functional copy of that gene. If you started with a diploid cell, you'd have two copies of the gene. And you'd have to have two mutations both happening in the same gene, which would be a rare event. OK, so it's better to start with a haploid in this case. Now, what's the problem if you mutate a gene that's involved in the cell cycle? What's going to be the phenotype, the immediate visible phenotype? Is it going to be alive or dead? Carmen-- AUDIENCE: Dead. ADAM MARTIN: It's going to be dead, right? And it's hard to work with an organism that's dead. OK, so what was done is to look for a particular type of mutant, which is known as a temperature-sensitive mutant. And a temperature-sensitive mutant is a mutant where the cell or organism is alive and well at one temperature but dead at another temperature, OK? And so the screen basically involved taking yeast. Here's yeast growing in a test tube. And this is now haploid yeast. And you can treat that yeast with a mutagen.
It doesn't matter what, just something that will induce mutations at a high rate in these yeast cells. And then you can take these cells and plate them on media where individual cells will grow into colonies. OK, and if you grow it at 22 degrees C for yeast, this is the most moderate temperature you can choose. So this is what's known as the permissive temperature. OK, but you can also take this plate of yeast colonies and duplicate it and grow it at another temperature. And you might get something that looks like this, where you see this colony grew at 22 degrees, but at 37 degrees C it did not grow. And that would suggest that, then, this has a temperature sensitive mutant. And this temperature of 37 degrees is known as the restrictive temperature. OK, so that would identify a temperature sensitive mutant. Now, is every temperature sensitive mutant that you identify, is that going to be a cell division cycle mutant? Miles, you're shaking your head no. Why is that? Can you explain your logic? AUDIENCE: So there could be a couple different proteins [INAUDIBLE] there's too many mechanisms in the cell that could be dependent on temperature to narrow down to just [INAUDIBLE] mutant cell cycle. For example, if a phytoprotein mutant organism would mutate [INAUDIBLE] temperature sensitive, without that protein it would die also. ADAM MARTIN: Exactly. So Miles is suggesting that if you just mutated any old gene that was involved in viability for this yeast, and it unfolded at 37 degrees because you sort of made a mutation that made it unstable, then you would identify that as a temperature sensitive mutant. So what would be a good criterion, I guess, that we could use to select just the mutants that are affecting the cell division cycle? Might there be a way for us to do that? I guess I'm asking, can we narrow down the phenotype, right? Temperature sensitivity could be-- by affecting any process in yeast, is there a way we can gear it towards the cell division cycle? 
Diana-- AUDIENCE: Maybe [INAUDIBLE] specific phase of the cell cycle, you could look at the morphology of it. And if all of them stop at the same phase, you might assume that you [INAUDIBLE].. ADAM MARTIN: All right, very good. So what Diana is suggesting is that we look for a phenotype. And she's guessed that the phenotype, if this gene is involved in sort of mediating a change from one cell cycle phase to another, that if you mutate that, you'd have yeast that's all stuck in one phase, OK? And that's indeed the phenotype that was screened for. OK, and so if you just take a random population of yeast that's dividing, you'll see cells that are unbudded, small budded, slightly bigger budded. And if you count the number of cells, what you'll see is that most of your cells are unbudded. Some are small budded. And a larger percentage are large budded. And this just reflects the relative amount of time that yeast is in each of these phases of the cell cycle. So yeast spends most of its time in G1. Therefore, if you look at a random population of yeast, you'll see most of the cells will be in the unbudded state. OK, so this is for wild-type normal yeast. Now, what was identified is a cell division cycle mutant, CDC 28, which at the restrictive temperature causes a train wreck at a specific phase of the cell cycle. So all of the cells now are stuck in the unbudded state. And so this suggests that these cells, when they are shifted to the restrictive temperature, are still able to move through the cell cycle. But once they get to this phase, they get stuck. OK, so here there is a cell cycle arrest at G1. And they confirmed it was G1 by measuring the DNA content. And by measuring the DNA content, they were able to show that these cells did not duplicate their DNA. So they didn't even start to undergo S phase. They were stuck in G1, OK? OK, so that suggests that the CDC 28 gene is required for cells to go from G1 to S. 
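The logic of this screen, identifying temperature sensitive colonies first and then keeping only the ones with a uniform arrest morphology, can be sketched in a few lines of code. This is purely an illustrative toy (the function names and the example morphology lists are mine, not anything from the lecture or the original papers):

```python
# Illustrative sketch of Hartwell's screening logic. A colony is
# temperature sensitive (ts) if it grows at the permissive temperature
# (22 C) but not the restrictive one (37 C); a ts mutant is a CDC-mutant
# candidate if its cells all arrest with the same bud morphology at 37 C.

def is_temperature_sensitive(grows_at_22, grows_at_37):
    return grows_at_22 and not grows_at_37

def is_cdc_candidate(morphologies_at_37):
    # A uniform terminal morphology suggests arrest at one cell cycle stage.
    return len(set(morphologies_at_37)) == 1

# A generic ts-lethal mutant dies at random points in the cycle...
generic_ts = ["unbudded", "small bud", "large bud", "small bud", "unbudded"]
# ...while a cdc28-like mutant arrests uniformly unbudded, in G1.
cdc28_like = ["unbudded"] * 5

print(is_temperature_sensitive(True, False))  # True
print(is_cdc_candidate(generic_ts))           # False
print(is_cdc_candidate(cdc28_like))           # True
```

The point of the second test is exactly Diana's answer: any essential gene can give temperature sensitivity, but only a cell-division-cycle gene gives a uniform arrest phenotype.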
OK, so the wild-type CDC 28 gene is required for this transition from G1 into the S phase. And it turns out that this yeast CDC 28 gene encodes the one yeast cyclin dependent kinase, OK? So this was sort of the defining mutant for cyclin dependent kinase. And you'll recall earlier in the semester when we talked about molecular biology that we talked about work done by Paul Nurse who used functional complementation to clone the human cyclin dependent kinase by transforming human DNA into a different yeast, fission yeast. But again, that rescued the cell cycle arrest. And that's how the human cyclin dependent kinase was discovered. And I just wanted to point out here that the work I'm telling you about was awarded the Nobel Prize in Physiology or Medicine in 2001. And it was awarded to Leland Hartwell, Tim Hunt, and Sir Paul Nurse. Leland Hartwell did the screen that I just told you about there and identified CDC 28. And we already talked about Paul Nurse earlier in the semester. Tim Hunt worked on clams and sea urchins and identified cyclin. So this was the work that identified the regulatory machinery of the cell cycle. And they sort of showed that this worked in a number of different organisms. And they showed that it was evolutionarily conserved from yeast all the way to humans. OK, so this is a conserved mechanism. OK, so there's a mechanism that actively governs the transition from G1 to S. And I'll point out this transition is known as Start in yeast. And it's called the restriction point in mammalian cells. It's kind of the point of no return in the cell cycle. But the cell cycle doesn't just blindly charge through the rest of the way. And there are certain quality control mechanisms that are in place to ensure that things happen in the proper order and that the quality of events happening is good before the cell moves on to the next stage. And so I'm going to define a concept called a checkpoint. And the checkpoint is a type of quality control mechanism. 
And checkpoints operate in the cell cycle to ensure that one event doesn't occur till the preceding event happens correctly, OK? So this ensures proper order of events and ensures that events happen correctly before the next subsequent event has to occur. OK, so one example of this is, if you just consider S phase and M phase, DNA replication has to finish before the cell starts segregating chromosomes. Otherwise there's going to be catastrophic consequences, such as possibly creating a cancer cell, OK? So one example of a checkpoint is called the DNA damage checkpoint. And what the DNA damage checkpoint does is it looks to see if there's DNA damage or if the DNA is still replicating. And if either of these cases is present in a cell, then it sends a signal. And that signal, in order to influence the cell division cycle, has to interface with the cyclin CDK control machinery, OK? So this signal will then inhibit cyclin CDK. And cyclin CDK governs two major transitions in the cell cycle. So there are two major what I'll call transition points. There's the G1 to S, which I just outlined over there, which is called start. So there's G1 to S. But there's also G2 to M, OK? So these, basically, the transition out of the gap phases, those are the key transition points that can be regulated by the cell to either slow things down to halt the transition or just go right through, OK? So let me tell you about an experiment that defined the functionality of the DNA damage checkpoint. And I'm going to tell you about work done by Weinert and Leland Hartwell. And it was published in 1988. And they were interested in what the nature and the function of these checkpoints was. And so you can take budding yeast. And you can damage its DNA by irradiating the cells with X-rays. And if you irradiate the cell with X-rays in a wild-type normal yeast, so in the normal yeast that's not mutant, then this damages the DNA. And the cell stays in G2. OK, so it stays in G2. 
OK, and there's a delay. OK, so here what I'm drawing is a G2 delay. So the cell spends an abnormally long time in G2 than it normally would if you didn't damage its DNA. All right, and then over time it will continue in the cell cycle and enter into the next cell cycle. And what's interesting about this is that these cells live. OK, so one interpretation from this result is you damage the cell's DNA. It delayed the cell cycle in G2. So it didn't rush right into chromosome segregation. And that allowed the cell time to repair its DNA. And that enabled the daughter cells to live, OK? So that's an interpretation. Now, part of the evidence for that interpretation is that Hartwell and Weinert discovered a mutant called RAD 9, so the RAD 9 mutant. And RAD 9 stands for radiation sensitive. This is a radiation sensitive mutant. And this particular radiation sensitive mutant disrupted the delay. So it disrupted the checkpoint here. So what happens in a RAD 9 mutant is, again, you irradiate cells with X-rays. The cell goes from S phase to G2. But this time, there's no delay, so from G2 the poor yeast charges unsuspectingly into mitosis with damaged DNA and divides. But in this case, there's a high level of death in the resulting progeny. So here you have death. OK, so RAD 9, then, is a gene that is involved in promoting the cell cycle delay such that the yeast cell has time to repair its DNA. And if you remove that delay-- so here there's no delay. If you disrupt this delay, which defines the checkpoint process, right? The checkpoint is a process whereby if there's DNA damage, you delay the cell cycle such that the cell has time to repair it. If you don't have that, it has bad consequences for the cell and results in your cells undergoing premature mitosis before they've had a chance to repair the DNA. Let's see. I'm going to use this one here. All right, one thing you might be wondering is what causes these cyclin proteins to oscillate. 
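Before moving on, the RAD 9 result just described can be captured in a toy simulation. This is my own sketch with made-up time units, nothing from the Weinert and Hartwell paper: the checkpoint is just a loop that holds a damaged cell in G2 until repair finishes.

```python
# Toy model of the DNA damage checkpoint. With RAD 9 intact, a damaged
# cell holds in G2 until repair finishes; in a rad9 mutant there is no
# delay, so the cell enters mitosis with damaged DNA and its progeny die.

def run_cell_cycle(dna_damaged, rad9_functional, repair_steps=3):
    remaining_repair = repair_steps if dna_damaged else 0
    log = []
    for phase in ["G1", "S", "G2", "M"]:
        if phase == "G2" and rad9_functional:
            while remaining_repair > 0:   # checkpoint: delay in G2
                log.append("G2 delay")
                remaining_repair -= 1
        log.append(phase)
    survives = remaining_repair == 0      # mitosis with unrepaired damage is lethal
    return log, survives

log_wt, alive_wt = run_cell_cycle(dna_damaged=True, rad9_functional=True)
log_mut, alive_mut = run_cell_cycle(dna_damaged=True, rad9_functional=False)
print(log_wt)     # ['G1', 'S', 'G2 delay', 'G2 delay', 'G2 delay', 'G2', 'M']
print(alive_wt)   # True: the delay bought time to repair
print(alive_mut)  # False: premature mitosis, progeny die
```

Removing the `while` loop (the rad9 case) reproduces the key observation: the cell cycle itself still runs to completion, but survival collapses.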
And so the last point I want to make is I want to tell you about the mechanism that allows these protein oscillations. And it involves a mechanism that's going to be new for you, at least from the context of this class. It's a mechanism that's regulated proteolysis. OK, so there's a regulated degradation of these cyclin proteins that allows them to go up and then down, OK? So if we consider just one part of the cell cycle, well, if I plot the concentration of cyclin and we look at M cyclin from G2 to M phase-- so this here is a time axis-- the M cyclin goes up in M phase. And then it drops precipitously during mitosis, specifically at the metaphase/anaphase transition. OK, so this is for the mitotic cyclin. And I'm going to tell you that this precipitous decline in cyclin levels is due to regulated proteolysis. OK, and it involves a mechanism that you were briefly introduced to by Professor Imperiali, because it involves a small protein known as ubiquitin. What ubiquitin is is a small 76 amino acid protein. So it's a 76 amino acid protein. But this protein can get attached to other proteins. So it's a post-translational modification. OK, so ubiquitin, which I'll abbreviate UB, ubiquitin is attached to lysines on a target protein. And the attachment of ubiquitins to lysines of a protein has important consequences. And what Professor Imperiali told you about is when this happens in the case of protein misfolding. But this ubiquitination of proteins also occurs to proteins that are not denatured or misfolded. And it's a way of regulating protein levels in the cell. OK, so what happens is-- I'll show you a complicated diagram of what happens. But I'm really going to focus on this step right here. There's a series of steps that are needed to get ubiquitin to get attached to the target protein. I'm going to ignore pretty much all of that. But there is an E2 enzyme that becomes conjugated with ubiquitin. And then the ubiquitin is able to be transferred to a target protein. 
And rather than make this generic, I'll let you know the target protein is going to be cyclin. So we'll just say cyclin. And so there's an enzyme that transfers the ubiquitin from the E2 to the cyclin, OK? And it's polyubiquitinated, meaning there's a chain of ubiquitins added to the cyclin. And it's carried out by a particular type of enzyme known as an E3 ubiquitin ligase. And there are hundreds of these ubiquitin ligases in humans. And different E3 ubiquitin ligases confer different specificities. So they target different proteins, OK? So different E3's target different proteins. And so this is where the specificity comes from, OK? So when a protein in the cell, misfolded or not, is polyubiquitinated like this, this is a garbage tag on that protein, OK? So you can think of polyubiquitin as a garbage tag. And once it's polyubiquitinated, it's sent to the proteasome, which Professor Imperiali showed you. And this is the structure that degrades proteins. OK, so if the protein is targeted for degradation by putting this tag on it, it's going to be rapidly proteolyzed in the cytoplasm of the cell. All right, now I want to show you some experiments that provided the first evidence that this regulated proteolysis is what sort of causes the cyclin to oscillate, OK? And it's going to involve a new model organism. Is there anyone here that has ranidaphobia? OK, we all like frogs? OK, so this model organism is xenopus laevis, or the African clawed frog. And I want to thank my colleague at UMass Amherst, Tom [? Oreska, ?] who provided the slides of frogs for me. So what's great about these frogs, well, other than them being very cute, is that they lay a ton of eggs. And the eggs are huge, OK? So here is a mom, a mom frog. And then you see all these circular things all around the frog are the eggs. OK, so they're about 1 millimeter in size. They're huge. You can collect these eggs, put them in a test tube. And you can see all these eggs that you have in this test tube. 
And then you can spin the test tube. And by spinning the test tube in a centrifuge, you crush the eggs. And so that results in this middle layer here, which is cytoplasm, OK? And you can remove it with a syringe. And what you end up with is a concentrated cytoplasmic extract. So that's all cytoplasm. OK, so that's a lot of cytoplasm. OK, so for the xenopus system, this system allows you to get this highly concentrated-- because it's not diluted. It's the same concentration as cytoplasm almost-- cytoplasmic extract, which is known as xenopus egg extract. OK, and what's amazing about this egg extract is it can go through the cell cycle even though it's not in a cell. OK, so you can get this extract to essentially simulate the cell cycle. OK, so you have to mimic fertilization, because that's when cell divisions start to happen in the normal frog embryo. But once you do this, then you mimic the fertilization process by adding calcium, and then you'll see this extract go through the cell cycle. And you can see it by looking at the morphology of different structures in the extract. So this is a nucleus that has assembled in the extract around some DNA that was added. OK, DNA replication can happen in this nucleus. Other events that happen in the interphase of the cell cycle also happen in this extract. And if you wait, then it will go into mitosis. And you'll start to see mitotic spindles assembling in the extract. OK, so what's important about this, this is totally in vitro. OK, so this is an in vitro system. There are no cells. But you're able to see the extract go through the cell cycle. OK, and if you were able to look at cyclin, like M cyclin, you'd see that M cyclin levels go up and then down and up and down. They oscillate just like they would in a cell. And this is just a diagram showing you that here where mitotic cyclin, M cyclin, concentration is in blue and CDK activity for M cyclin CDK is in purple. So you see it goes up. And then the cell enters mitosis. 
Early mitosis is in blue. Late mitosis is in orange. So you see mitosis happens when M cyclin is high, just like it does in a cell. And then it degrades. And then it repeats. OK, so this is all outside of a cell. But you're just looking in a test tube. And you can recreate the cell cycle. Now, this, because this is a biochemical system, allowed these researchers-- in this case, the researchers who did this experiment were Andrew Murray and Marc Kirschner at Harvard. And what they did was to test the role of various components in this oscillation. The first experiment they did was to RNase treat the extract to get rid of all the mRNA. And if you degrade all of the mRNA in this extract, you no longer get the cycling. You no longer get the cell cycle. So this shows you mRNA is important or necessary. But you don't know which mRNA, right? One hypothesis might be that you need the mRNA from M cyclin in order to produce cyclin every cell cycle. And that was their hypothesis. So what they did to test that was to degrade all the mRNA, inactivate the RNase, and then add back the mRNA to one gene, that mitotic cyclin. And what they saw when they did that is they restored the cell cycle, suggesting that this one mRNA, M cyclin, is sufficient to restore the oscillation of the mitotic cycle. OK, now, the last experiment, which I think is the most important, shows you the mechanism by which the cyclin is going down. Because they added-- and instead of adding back the wild-type mitotic cyclin, they added back a cyclin mutant that was non-degradable by this E3 ubiquitin ligase mechanism. OK, so if they add back a cyclin mutant and this mutant has a deletion in the part of the protein called the destruction box-- and this is essentially the part of the protein that is recognized by the E3 ubiquitin ligase, OK? So the destruction box mutant basically blocks this such that cyclin is no longer polyubiquitinated and it can't be targeted for proteolysis. 
OK, and in this case, what happens is cyclin levels increase. And then they stay high and there's no cycle. OK, so when you have this cyclin mutant with the destruction box deleted, such that it's not degraded, you get a cell cycle arrest. And because this is M cyclin, the cell arrests in mitosis. You get a mitotic arrest. OK, any questions about this mechanism of proteolytic degradation? You all see how this-- yes, Malik? AUDIENCE: So [INAUDIBLE] cell [INAUDIBLE] what is it physically doing? ADAM MARTIN: What is it physically doing? It's basically stuck with a mitotic spindle and it's not segregating the chromosomes. Yeah, so it hasn't gone through mitosis. It's stuck in a specific phase of mitosis. In this case, it's stuck in basically a metaphase-like state. One last point I want to make about this is that the mRNA for M cyclin is just constant. It's always present. So this is constant. You have constant mRNA. CDK is constant. It's the cyclin protein that's going up and down. And it's going up and down because of this regulated proteolysis. OK, great. On Wednesday, we'll talk about stem cells and we'll talk about guts.
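As a recap of this lecture's core mechanism, here is a toy numerical model. It is my own sketch with arbitrary units and made-up rate constants, not real kinetics or the authors' code: constant cyclin synthesis (from constant mRNA) plus degradation that switches on at high cyclin levels gives oscillations, while deleting the destruction box gives accumulation and a high-cyclin arrest, as in the Murray and Kirschner extract experiment.

```python
# Toy cyclin oscillator: cyclin is synthesized at a constant rate; once
# it crosses a threshold, a mitotic E3 ubiquitin ligase activity switches
# on and cyclin is rapidly degraded, unless its destruction box is deleted.

def simulate(destruction_box, steps=60, synthesis=1.0, threshold=10.0):
    cyclin, degrading, trace = 0.0, False, []
    for _ in range(steps):
        cyclin += synthesis              # constant translation from constant mRNA
        trace.append(cyclin)             # record level before degradation this step
        if cyclin >= threshold:
            degrading = True             # ligase activity switches on in mitosis
        if degrading and destruction_box:
            cyclin *= 0.3                # polyubiquitination -> proteasome
            if cyclin < 1.0:
                degrading = False        # mitotic exit; reset for the next cycle
    return trace

wild_type = simulate(destruction_box=True)
db_mutant = simulate(destruction_box=False)
print(max(wild_type) >= 10 and min(wild_type[15:]) < 5)  # True: levels oscillate
print(db_mutant[-1])                                     # 60.0: climbs and stays high
```

Note the design: degradation here is a simple threshold switch, which is enough to produce the up-and-down trace; in a real cell the switch involves cyclin-CDK activating the ligase, a feedback loop this sketch only caricatures.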
MIT_7016_Introductory_Biology_Fall_2018
1_Introduction_Course_Organization_of_MIT_7016_Introductory_Biology_Fall_2018.txt
BARBARA IMPERIALI: OK. We're going to get going. Now, we have a small class this year because of changes in the institute with pass/fail types of things, but Professor Martin and Dr. Ray and I consider this to be a special opportunity for us to run the course a little bit differently with a few more quirks and surprises. Because we have a small number of you, we can listen to you all. We can get input from you. We can even get feedback from you of something you might like to see more of. And in general, we really want to capture the sense of you. I have looked at the registration list. We have people from every year. We have people from many, many different disciplines. So this is what we're going to do today after we I start doing some introductions and so on. We're going to talk about the nitty gritty of the organization. We need to tell you this. We need to convey this information to you clearly about when exams are, and what requirements are, and how to do well in this course without even realizing it, that kind of thing. And then I'll take you through this sort of fast track through molecules to man, all the way down to cells and organisms, to show you that there was a breakpoint in the 1950s where the structure, the non-covalent structure of DNA was elucidated. And there was an entire revolution after that which makes modern biology, the study of modern biology, so entirely different from the study of biology in the era before that. Biology used to be considered taxonomy and dissection, like listing and looking at. But now biology, modern biology, is a molecular science. So as we talk about these topics, what you will see is the blueprints for life are common across domains of life. 
And if you learn basic principles, you'll have an exponential increase in your ability to appreciate these characteristics, that modern biology is a synthesis of science, technology, engineering, where all the tools from those disciplines, different disciplines-- physics, math, computation-- funnel into modern biology to make what we know now feasible, and that's a dramatic and fantastic opportunity for all of you moving forward in your careers. Now I want to introduce the team. So I'm Barbara Imperiali. I'm a faculty member in chemistry and biology, and I'm really interested in chemical biology, glycobiology, biophysics. I love to tease apart complex pathways in organisms where you biosynthesize very unusual glycoconjugates that are very important for cell-cell communication and host cell-pathogen communication, for example. I was trained as an organic chemist. In fact, I did my PhD degree at MIT about five million years ago on a sort of current scale. So my co-instructor is Professor-- sorry about this, but they want us on video. ADAM MARTIN: Hello. I'm Professor Martin, and my lab is interested in how cells generate mechanical forces and how this is involved in sculpting tissues during development. BARBARA IMPERIALI: So what Adam hasn't told you is he's a cell biologist, a biophysicist, and he's a lot better at genetics than I am. Our instructor is Dr. Diviya Ray, who's been with this course-- this is now the sixth year-- and she is trained in immunology, cancer biology, and also cellular signaling. But what you can't tell from that is how dedicated she is to each and every one of you. If you have any trouble in the semester, just contact Dr. Ray and say, I need some help, be it a particular problem in the material, or something has just come up that makes it difficult for you to do your best in the course. She will help you. She'll work out mechanisms to get you through troubled spots. So let's get going here. 
Now, what I want to try to do is just give you sort of a flavor of where we're going within the course by starting with a few bullet points and topics just so that I can sort of pique your interest. So as I mentioned before, studying biology in the 21st century is a fabulous opportunity. No matter what discipline you come from, you can add to the expertise that will move biology forward. Biology would not be where it is today in the absence of science and engineering to promote it and to support progress in biology. So you really want to realize that, that you have an opportunity. You may say, well, I'm in this discipline or other. I don't think biology is going to have anything to do with my future career or career opportunities. But it has a lot to do with your life. It has a lot to do with understanding health and disease, understanding new scientific discoveries and developments. So it's so important that you, as a scholar of the 21st century, have a good grasp on these materials. And we're not trying to feed you anything dull and boring. This is really exciting stuff, because the level of complexity that we can study nowadays-- whole genomes, whole organisms at a molecular level-- is amazing. It's amazing. We're not just peering down a slide and looking at one cell or something. We will be able to do full descriptions. So what we'll try to give you is a view of the fundamental principles that are common to all living organisms. So the study of biology, some people are microbiologists, or eukaryotic biologists, or human biologists, or they study virology. But we're going to build for you, in the first few weeks of class, information on the common building blocks that go across all domains of life. Because once you start to learn about those molecules that build up the macromolecules of life, then you'll start to really gain an understanding how amazing it is that these same sets of molecules function all the way from bacteria to man. 
So you learn the rules for the simplest organisms. You look at the molecules and you see how form fulfills function, which is something I'm really excited about, and then you'll be able to apply it as we get ever more complex systems which demand a lot of attention. So there's a common molecular logic of very complex processes. Motivations-- I just mentioned a few. Sure, you want to understand health and disease. You want to understand what might be going on with current therapies. When you have a relative who's been diagnosed with a serious disease, what are the current opportunities? What's coming down? What sorts of opportunities for therapy might be available? Because there are so many diseases now we understand at a molecular level. We may not understand how to treat them yet, but we understand what their origin is, and that's why molecular approaches are so important. You may often hear of words like systems biology and synthetic biology. These are kind of jazzy words for fairly straightforward things. Systems biology is a little bit like treating an organism or a cell as an electrical network, a wiring diagram. What proteins talk to what proteins? What are downstream functions? Where are signals amplified? And so on. So that's systems biology at its heart, quantifying different intermediates in a complex map of the cell. Synthetic biology is about using biology to make stuff, which is really cool. Many, many important molecules can be made in the lab, but it's so much more effective to make them in an organism. People are doing what they call synthetic biology, and that's exploiting and harnessing nature to make things that are useful for mankind. And all the way through, I just want to emphasize how integrating technology and engineering with science is really what we're all about here, because we appreciate we couldn't make the progress without it. 
There are also issues that biology impacts in the social sciences, impinging on things like ethics, designer babies, cloning people, cloning your pets, all kinds of things, treating a disease through genetics or not, [INAUDIBLE] some of these new innovations. But you really need to understand ethical issues related to them to be able to explain to your parents, or your grandparents, or your sister or brother who hasn't taken biology, what the implications are of some of the things that we can do in biology, but probably shouldn't do in biology. And we will welcome your thoughts on some of that later on. OK. So where did the world start? Arguably four and a half billion years ago, which is kind of a vague estimate, but it started with the world, the earth, being a ball of fire, and it took quite a while for it to cool down to establish the hydrosphere and the globe as it's known today. There was a period of time known as the prebiotic world, where there were not living organisms that replicated, and that was basically a world where building blocks started to evolve out of fiery hot mud pits and in volcanoes and goodness knows where. People believe that the building blocks of life, just the molecules, came together from things like hydrogen cyanide, or sulfide, or other primordial components that were in the primordial soup. There was a phase known as the pre-RNA world, where the RNA building blocks were around. There are reasonable arguments in favor of the RNA world, where a lot of functions were catalyzed not by proteins, but by nucleic acids, specifically ribonucleic acids. So it's a period of time still pre-biotic that had the first pre-RNA, and then RNA world. But then things really started to get interesting when the first cells evolved. Now, I will talk a little bit about this in the next class, because the thing that's critical to be able to build a cell is to be able to build a wall around it. 
So very, very early on in life lipid bilayers, membranes, evolved in order to make compartmentalized structures where you could differentiate the in from the out. And so much of life is completely reliant on the fact that we're made of cells. We're not just one big sort of bucket of water with things floating around in it. Because so much of function becomes coordinated by cellular compartmentalization through things known as lipid bilayers, which are semi-permeable membranes. Oxygen can move across. Some small hydrophobic things can move across. But a lot of things get either stuck in or stuck out. So we'll talk a lot about that. So among the first prokaryotes were cyanobacteria. They're photosynthetic bacteria. It was quite a long time until those unicellular organisms that totally lacked a nucleus, lacked a lot of intracellular compartmentalization, evolved to eukaryotes, and those cells are different. They're 100 or 1,000 times bigger. They're complex. They're compartmentalized. They can do a lot of functions. In a full organism they're very differentiated, and they may look different in muscle, or in heart, or in skin, or in bone. And so those eukaryotes-- so that's a long gap of time, but there was a lot going on in that phase. And about a half a billion years ago, multicellular life evolved. And multicellular life now can be looked at, if we think of the evolution of Homo sapiens, can be thought of as something that we can keep track of a bit through fossil records over the last five million years, where the first humanoid life evolved. Then you got sort of to a stage-- I think it's Homo ergaster-- where this sort of Shrek-like person evolved quite early on. And then the humanoids gradually became different, evolved. In some cases there were branches of the tree of evolution and dead ends. In other places there was a branch that carried on for a while. For example, the Neanderthals and Homo sapiens kind of kept on evolving for a while. 
But there's a lot of developments that have been characterized from the fossil record. But now there's a lot of belief that if we trace things back through genomes, we might get more precise information on steps in evolution. Now, the evolution of the advanced, if you will, hominids really came along with a number of things. There was a stage at which a particular gene, the FOXP2 gene, is attributed to the ability for complex speech. And that could have been a leap forward when humanoids could communicate more, and it seems to be associated with that. But there are other sort of sociological functions, like burying the dead, or making jewelry, or making tools, that are associated with the more evolved organisms. There are other types of things like cranial capacity, standing upright, looking forward. A lot of things came through those years of the evolution of Homo sapiens. So it's fascinating to think about that and to think what light genetics can shed on those five million years of evolution. Now, the world of biology took a mega kick start with the elucidation of the human genome, but more importantly of the technology necessary to solve the map of the whole human genome. In 2001 there was a major development with the publication of the first map of the human genome. It's fascinating to think we humans have about three billion genes-- is that right? No, sorry. Base pairs, yes. Thank you very much. But across humankind there's enormous diversity, and that's accounted for by only about 0.1% of the genome. So you can see people look very, very different, but we still share 99.9% of our genome. Another very interesting thing is that genomes vary in size quite considerably. Before I move forward, I just want to quickly show you this map. 
I mentioned tracing evolution through a molecular clock, so looking back in time not by following the shape of a skull, for example, or physiologic changes, but looking at genomes, using the genome as a molecular clock based on mutation rates that are fairly constant within domains of life. You couldn't compare a human and a bacterium, but you can go back through a lot of eukaryotic evolution and see where divergence has happened. So in this map, you can see that human and Neanderthal diverged from the chimpanzee a certain time ago, which had diverged from the gorilla further back, based on the molecular clock that's available. OK. So now I want to talk a little bit more about getting into the details of the genome. So genomes differ greatly in size. Our genome includes about three billion base pairs in our 22 autosomes plus the X and Y chromosomes, but the typical genome of a model bacterium has only five million base pairs. So far, far smaller, more tangible, more easy to study, because those genomes are more limited in size, but the genome size is not necessarily proportionate to the number of genes that are expressed and made into proteins. A fascinating discovery is that of the three billion base pairs, only about 1.5% to 2% actually code for proteins, and there's a ton of interest now in what's the rest of the genome doing there. Where did it come from? What's its function? There are different functions to the rest of the genome-- what Eric Lander calls the dark matter of the genome. But the part that we focus on is the part that gets encoded into proteins that form the functions of the molecules of life. So we're going to focus ourselves in on those. But here you see differences in sizes of genomes based on base pairs. But what's fascinating is despite this huge breadth of sizes and huge differences in organisms, the building blocks are the same. 
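The molecular clock mentioned above can be sketched numerically. This is a toy calculation under the standard simplifying assumption of a constant substitution rate along both lineages; the rate and divergence values below are illustrative assumptions, not measured figures from the lecture:

```python
# Toy molecular-clock calculation. Divergence time is estimated as
# t = d / (2 * mu), where d is the fraction of sites that differ between
# two genomes and mu is the substitution rate per site per year. The
# factor of 2 appears because both lineages accumulate changes
# independently after the split. The numbers below are illustrative
# assumptions, not measured values.

def divergence_time_years(fraction_diverged, subs_per_site_per_year):
    """Years since two lineages split, assuming a constant clock rate."""
    return fraction_diverged / (2 * subs_per_site_per_year)

# e.g. ~1.2% sequence divergence at a rate of 1e-9 substitutions/site/year
t = divergence_time_years(0.012, 1e-9)
print(f"estimated split: {t / 1e6:.0f} million years ago")  # 6 million years
```

The same arithmetic, run with measured divergences, is what places the human-chimpanzee split further back than the human-Neanderthal one in the map shown in lecture.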
And that's what I think is the wonderful part of what we're able to teach you: we can take you from the 1950s when the structure of double-stranded DNA was first solved. Now, there were 60, 70, or more years of work before that where they figured out the pieces, they figured out the chemistry, the covalent bonds, and the bases, and the sugars, and the phosphodiester linkages. But they had no clue how the DNA could encode and program the synthesis of a protein. But once the three-dimensional structure of double-stranded DNA was solved-- this is this beautiful anti-parallel structure that you see here-- by Watson, Crick, and Rosalind Franklin, then the clues came pouring in. It was the non-covalent structure-- not the covalent structure, where you'll see all those building blocks-- that showed how you could zipper apart the two strands of DNA, make copies of both of them, replicate DNA, and then go forward. That was an amazing step forward, and for that, there was a Nobel Prize awarded. Unfortunately it was after Franklin's death. So it was given to Watson and Crick and a third person, Maurice Wilkins. Now, here's that structure of DNA. I could sort of watch it for hours, to be honest. The phosphodiester backbone going up the back, and the bases base pairing across. And these are the key steps that happened from the '50s. So after the definition of the double-stranded structure, it took a few years, but they cracked what's known as the genetic code. How does that DNA get converted into a protein? What happens is you make an RNA copy of the DNA. And the RNA is read to make a protein. And you will learn about all those components. But that was another real landmark. Then what was really exciting is that some technology companies started figuring things out. At first, there were very slow ways to sequence DNA, and that happened in 1977. 
But what was really important is about a decade later, where the ability to sequence DNA was not done anymore using huge agarose gels and a bucket of radioactivity. But it was done through using fluorescence, in order to allow you to read out the sequence of DNA. And you will learn about that. And in 1987, the instruments were commercialized, major, major technology and engineering. We wouldn't be anywhere without that. In 1990, the Human Genome Project began. In '01, the draft of the human genome sequence was completed. 2010, you could sequence a single strand of DNA, one molecule of DNA. And now there's so many initiatives that have come out of that. And so much amazing technology that has evolved. So things like the 1,000 Genomes Project to look at variation across man, so all people from all different parts of the world. You can look up that website. That's very cool. The Human Cell Atlas, there was quite a bit of news about that in MIT Technology News, where Aviv Regev is playing a major part in that, to actually sequence representatives from all of your trillions of cells and see how they differ. And then there's cancer genome projects and precision medicine sequence every type of cancer cell, find out what's different about it, and precisely figure out how to treat it, all very exciting things. And then of course, there's synthetic genomes, where you can literally build a cell and its genome, program it to do what you want, hopefully. And then there's one of the things that your generation will have to deal with, and that's all the data. Because we've just found ways to churn it out. But you guys are going to have to do the heavy lifting there. So DNA, then, looking at that structure, is packaged into cells. So figure this one out. Each human cell has 1.8 meters of DNA in it, yet it fits into a cell that's 10 to 100 microns in diameter. And it's bundled tightly up. 
So you'll learn how DNA in cells gets bundled up and wrapped around proteins that neutralize the negative charges of the double stranded DNA with positively charged proteins and enable packaging. So we will talk about all of this. When is DNA unraveled? What signals its unraveling? Because in order to copy it, you've got to unpack it. So these are a lot of details about DNA that you'll be able to sort of have much more sense of as we move forward. Cells are different in size. I just mentioned to you a typical eukaryotic cell is about 10 to 100 microns in diameter. A typical bacterial cell is about 1 to 10 microns. So there is a vast difference in sizes for these simple cells that have no nucleus, relative to the cells that are compartmentalized and perform a lot of functions. So we will learn to appreciate that difference in size, looking at the building blocks that go into all of them, but then understanding how big cells have to have a lot more complexity in their signaling in order to establish their functions but also interact with other cells in multicellular organisms. We're still doing fine for time, yes. The other thing that we will spend several classes on is imaging and visualization of things going on in cells. So what we'll talk to you about is the discovery of fluorescent proteins, which have provided an unparalleled opportunity to label proteins within living organisms in order to track what they do. And through the efforts of protein engineers, there is an entire panel of colored proteins that fluoresce at different wavelengths that we can use to study biology in live systems, in real time. These slides show you a little bit of that. I love these pictures, just showing a dividing cell. Where the chromosomes you see red because the histones are labeled with red fluorescent protein, and all that green fuzzy stuff are microtubules around. We can do this now. You couldn't do this 15 years ago, observe these changes. 
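The packaging problem described above can be put in numbers. A rough sketch using the lecture's own figures of 1.8 meters of DNA and a cell on the order of 10 microns across:

```python
# Rough compaction arithmetic using the lecture's figures: ~1.8 meters of
# DNA packed into a cell on the order of 10 microns across.
dna_length_m = 1.8
cell_diameter_m = 10e-6   # 10 microns

linear_compaction = dna_length_m / cell_diameter_m
print(f"the DNA is ~{linear_compaction:,.0f}x longer than the cell is wide")
```

That factor of roughly 180,000 is why the charge-neutralizing packaging proteins just described are essential, and why unpacking has to be tightly signaled before the DNA can be copied.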
We can also look at changes as cells divide and go through the cell cycle. One of my favorites is this one: going through the stages of the program for a cell to divide, a new protein gets made, and then it settles down. But then when you go to divide again, you keep making-- you cyclically make different sets of proteins. And you can observe them in real time dividing. So just think if you were trying to make a chemotherapeutic where you wanted to stop cell division, or you wanted to inhibit one of those proteins, you could literally watch it function. Does it get into the cell? Does it disrupt the normal pattern of cell division? So these are capabilities that really are available now. So I've talked to you about cells. But I'm going to pass you over to Professor Martin for a little bit-- you'll get a little bit of a sense of how he thinks. And then I'll do the wrap up. PROFESSOR MARTIN: Thank you. So this is one of my favorite model organisms. This is a fruit fly, at larger than real size. And so one topic that I'll start on when I start lecturing either at the end of this month or beginning of October is we'll talk a lot about genetics. And one thing we'll start on is pioneering research done in this system to establish the chromosome theory of inheritance. OK. And we'll talk about the importance of model organisms in discovering new biology. But in addition to that, I also want to talk about how genetics will affect you guys as you go on and graduate from MIT and go into your own careers. Because genetics is really playing an important role in all our lives. And already, you guys have the option to get your DNA genotyped, right. There are lots of companies now like 23andMe and Ancestry.com where you can get your DNA genotyped. And you can learn about your ancestry. You can learn about whether you might be predisposed towards certain diseases. 
And so in order to appreciate the data you get back from these companies, you really have to understand something about genetics. And another thing which I find very fascinating are the ethical issues that come up with the use of such sites. And you might have seen this in the news last semester. Both forensic experts and police identified a suspect in a killing that happened 40 years ago. And this was in part due to using the suspect's family tree. OK. And so they used the family tree-- this guy's relatives had done one of these Ancestry.com tests. And they used the information from DNA acquired from other individuals to track down this other individual. OK. So one thing that I find incredibly exciting about biology is that it is truly dynamic. OK. And this is a human neutrophil. And it's just bright-field microscopy. Nothing's labeled. And what you're seeing here is this neutrophil is chasing after this bacterium. And it illustrates another concept that we'll talk about in this course, which is signaling. So this neutrophil is receiving a signal from this bacterium that tells it where it is. And it's then able to chase that bacterium and track it down. And there you see it just got the bacterium. OK. So we'll talk about dynamic processes that cells do and how that's important for their function. In addition to considering single cells, we also want to understand how entire organisms and tissues work. And I want to emphasize that, yes, we have sequence-- or researchers have sequenced the human genome and the genomes of many different organisms, OK. And that's great, right. We have this data set. But we still don't understand how all the components that are in the genome are wired together and work in order to create a complicated organism like ourselves. OK. And so one aspect of that, which is mysterious, is how does the genome encode shape? OK. How do we get our shape, and how do we get the shape of our organs? 
And this is something that my lab is interested in. And so this is a fruit fly embryo. And you can see at the beginning here, this is three hours into development. You just have a smooth surface for this embryo. But during development, this changes. And I'm just showing you here a cross-section of the same embryo. And you see, it's a sheet of cells that surrounds a central yolk. OK. And this changes three hours into development, because a population of about 1,000 cells in this organism fold to form a crease. OK. So this is a dramatic shape change for this embryo. It goes from being a single layer to now having multiple layers. So this is a time course here, showing you how cells change shape in this tissue and how this leads to what's initially a single layer of cells to become two layers of cells. And this process is similar to morphogenetic events that happen in human embryos. But we can study this in fruit fly embryos or many other model systems, in order to try to understand mechanistically how this happens. So again, this is dynamic. And I want to show you a movie that shows you the dynamics of this process. So now this is an embryo that's been labeled with some of these fluorescent proteins that Professor Imperiali just introduced. One's a green fluorescent protein, and it's shown here in green. And the other is a red fluorescent protein, in red. The red fluorescent protein is marking individual cells. The green protein is a motor protein that generates force. And what you see is, where the motor protein is, this is where the tissue contracts. And this is where the tissue folds. OK. And so because we're able to see these proteins in action, we can infer how they're functioning during development to essentially program tissue shape. 
And there are many other opportunities where, even though we have the genome, we still don't understand how collectives of proteins, or collectives of cells, are sort of interacting with each other to create emergent properties that are responsible for patterning something as large as a human. Another thing that we'll talk about is how cells divide. And this is another fruit fly embryo. And it's labeling histones. So it labels the DNA. And so you're seeing nuclei here divide sequentially. There'll be one more division. And then it's going to stop. OK. And my point here is that cell division during development and in adults is under exquisite control. OK. And a breakdown of this control is important in the progression of cancer. So we're going to talk about how cells control whether or not they divide, and how this is impacted in cancer cells. I also want to point out that this video is from Eric Wieschaus, who is at Princeton University. OK. Want to just hit the lights. I have one last thing just to mention. So I just want to reinforce what Professor Imperiali said, we have a small class. So this is really an opportunity to have this be more interactive than it would be if we had like 300 people in the class. So I want to really encourage you guys to ask questions. Also if you have ideas, we would love to hear them. And I want to try one new thing this semester. So I find that students are a little hesitant to come to my office hours. So this year I want to hold what I'm calling running hours. So one thing that I really like to do is I like to run. And I've noticed that many of my students are also runners, because I'll like see them out around the river. And so I just want to hold sort of weekly running hours. I'm going to choose 3 o'clock-- not a three-hour run, all right-- 3:00 PM on Fridays. And we'll just meet in my office. And so if you like to run, you can just meet there. We'll go on a run around the Charles. And this is not a competitive event. 
I'm not some fitness nut. I ran home last week, and I ate half a bag of Swedish Fish on the way. So it's not a competition. It's just to try to get to know you guys and to try to break the ice in sort of a non-academic way. BARBARA IMPERIALI: OK. So I'm just going to wrap up here. So we bombed you with quite a lot of-- yes, over there. You want to know more about running? [INAUDIBLE] AUDIENCE: Will you still have normal office hours? PROFESSOR MARTIN: Yeah, I'll have normal office hours. Yeah, or you could join me at CrossFit if you would like as well. We will both have office hours, and we will post them. And we welcome you to come visit us and, you know, find out more, tell us more about yourselves. We are fountains of information. So basically over the first half of the course, we tend to cover foundations. And so we build on biochemistry, one of my favorite subjects, where we cover all of the molecules of life. What are all the bits it takes to make a cell: lipids, sugars, proteins, nucleic acids. Then we synthesize them all together, where we show, in molecular biology, how the genome encodes the proteome, and what happens to the proteome after that. So you'll see me for all of those lectures. Then I will hand you over to Professor Martin for genetics, for learning how to manipulate DNA. And we'll cap off this first phase of work with cell signaling and understanding much more about dynamics of cells, as opposed to static building blocks. But you've got to understand the building blocks before you can understand the complexity. That's why I really like to cover those molecules at a reasonable depth. It's kind of ridiculous, 4 classes. But nevertheless, that's how we start. For some of you, you've seen some of it before. For others, you've seen none of it before. It doesn't matter. We will give you our flavor on it. If your chemistry is a little weak, I suggest you read the textbook. 
There's a couple of sections on just chemical covalent and non-covalent bonding, that you'll need to do the first P set. If your chemistry is strong, you're fine. If your chemistry is weak and you need a little help, I'll run an extra session next week. We can take care of every eventuality because you're a smaller class. And then we'll take it from there. And then what I really want to do is encourage you to do the reading. Make sure you're in a recitation. And next time, but it's in the sidebar, I'd like you to take a look at the sliding scale which shows you the dimensions of molecules, macromolecules, and organisms, which I find rather cool, even though it's probably built for high school students. OK. That's it from us for now.
MIT_7016_Introductory_Biology_Fall_2018
26_Cancer_2.txt
ADAM MARTIN: So I guess I will start by, first of all, congratulating you all. Since the last time I've seen you, you've all regenerated your intestine. So it's like you're a brand new person. So there's that. I want to start the lecture by talking about something I've been putting off telling you about, which is the Nobel Prize in physiology or medicine that was awarded this year. And it was awarded to James Allison and Tasuku Honjo for their discovery of a way to harness the immune system to fight cancer. And we're going to talk about the immune system later in the course, and it turns out, I will lecture on it. And I've actually had immunology, and in fact, I had immunology with James Allison. So I will do my best to channel James Allison when I talk about the immune system. But what they won the prize for is, they figured out a way to essentially release the brake on the immune system in order to allow the immune system to better fight cancer. And this is a technique that's been used in the clinic, and there are currently a number of clinical trials that are also looking to see whether this can be used in a variety of different cancer types. OK. So I just wanted to point that out now because we're talking about cancer, and I'll go more into the mechanism as how this works when we start talking about the immune system. So as I mentioned in the last lecture, cancer is basically a progressive loss of tissue organization. And luckily for us, our body has a number of different barriers to cells becoming cancerous and forming a tumor. And so I wanted to start out just by having you guys tell me what you feel the barriers would be to this process. So who has an idea of a barrier? What are the barriers in place by your body to prevent cancer cells from arising and forming a tumor, and for this process to form a malignant tumor? What's that? Oh, Rachel, sorry. STUDENT: Apoptosis. ADAM MARTIN: Apoptosis is a good one, right. 
So there's a careful process in your body to limit how long certain cells are resident in the body. And all of this depends on communication between different cell types in your body-- so barriers to tumorigenesis. And one type of signal is a survival signal. And if a cell is not getting a survival signal, then the cell will undergo apoptosis, which is what Rachel was referring to. So one barrier is that there is a highly regulated system of determining whether or not cells divide and also whether or not cells live. OK. So there are growth survival signals, which means that the decision for a cell to go off its rocker and just start dividing uncontrollably is not likely for a normal cell. Something has to happen to that cell in order to perturb it. So what are some other barriers to forming a tumor and for that tumor to become invasive? Yes, Jeremy? STUDENT: You just mentioned it, but the immune system. ADAM MARTIN: Yeah. So this cancer cell can't activate the immune system. So you could have some type of immunosuppression by the cancer. And I won't talk about that today, but maybe we'll come back and talk about it when we talk about the immune system. OK. Looking at this diagram, when the tumor proceeds to invasive cancer, what do you think are some barriers to that process happening? What are cells normally doing in an organ? The cells that line your intestine, are they invading into the surrounding tissue? No. Jeremy is shaking his head no, right? So normally, cells don't do that, or certain types of cells do, but normally, if you have cells that are the lining of an epithelium, they're not going off to a blood vessel. OK. So I'm just going to review the structure of the tissue, and then we'll talk about some of the barriers that prevent this from happening. So I'm drawing a tubular organ here. This could be the tube of the intestine. There'd be a lumen inside here. It could be some ductal structure in an organ, like a mammary gland. 
It could be that this is the airway of a lung, and there's an epithelial lining around that airway. So these cells here are forming an epithelial lining. And one important aspect of epithelial biology is, in mature epithelia, they have a floor that the cells stand on. So this structure that I just drew around these cells in green is known as the basement membrane. It's the basement because it's the floor that this other thing is built on. And this basement membrane is made out of what is known as the extracellular matrix, which I'll just call matrix right now, and the extracellular matrix is basically extracellular proteins that form a meshwork that forms a rigid structure on which epithelial cells can sit. OK, now if we think about what surrounds this organ, there's also matrix proteins in the intervening areas. And this is known as connective tissue. And so if we think about the cells that are the epithelial cells here, they are in one state, and this state is known as epithelial, as I just said. So these are epithelial cells. Epithelial cells have a few properties that are really distinguishing. The first is that they have high intercellular adhesion. And the next is that, if you consider their relative mobility, they can obviously move within the epithelium because we talked about the intestine and how cells are moving from the base of the crypts up to the villus. But in general, these cells aren't moving in and out of the organ. So I would say that they have a low migratory potential. And so these are the epithelial cells that are here. Now there are also cells that are in this interstitial space, and I'll draw-- they often have this highly elongated morphology. And these cells are in a fundamentally different state. And this state is known as mesenchymal or stromal. So this area outside the organ is also known as the stroma. So they're known as stromal cells in this location. 
And these cells have very different properties than the epithelial cells in that they have low adhesion-- I should say low cell-to-cell adhesion. And in contrast to these epithelial cells, these stromal cells, as you see in this example up here-- that's a human neutrophil-- these stromal cells are highly migratory. So that neutrophil is chasing a bacterium, and it'll eventually get it, but the movie will loop, and so you'll see it constantly chasing that bacterium. So some examples of stromal cells would be immune cells, like this human neutrophil. And you probably know that immune cells have to traverse different organ systems. They have to be rapidly recruited to sites of infection or sites of injury. And so this type of cell is fundamentally different from the types of cells that line your organs, where you basically need those cells to stay put. When considering cancer, you have these different types of cells in your body. 90% of cancer comes from a cell of epithelial origin. So 90% of cancers are epithelial in origin. Some other examples of stromal cells are cells like fibroblasts. And fibroblasts are cells that secrete and remodel the matrix that's in connective tissue. And they're also important for wound healing and secreting matrix during the wound-healing process. So that's just to give you a few examples. OK. So now let's come back to this example and talk about what happens to get a cell to go all the way from being a normal epithelial cell to having this type of behavior, where initially you have growth and a loss of the control over growth and survival signaling, and eventually, the cancer can become what's known as malignant. So it's a progression, and initially, you might just get an increase in cell division and abnormal cell shape, which is known as dysplasia, and at that point, it's known as something like an adenoma or something that's benign. But it's this last point here, up here, where the cells breach this basement membrane. 
When that happens, then the cancer is known as carcinoma, and it's malignant. So malignant cancer is cancer that has breached the basement membrane. So I want to take you through the progression of tumorigenesis. And first, I'm going to start with the breakdown of this growth survival signaling. And I just wanted to remind you about what we talked about with the intestine as our model organ. And in the intestine, remember that this is all regulated by signals between stem cells and niche cells. So the niche cells are sending the stem cells signals like Wnts that control their self renewal, and then it's the loss of this signaling that allows these cells to eventually undergo apoptosis and get shed into the lumen of the intestine. So one of the first steps in cancer is this breakdown in growth survival signaling. So this first breakdown is going to enable the cells to overcome this first barrier. And so as I just pointed out, remember, it helps to think of what has to happen in a normal tissue or organ. Normally, the decision whether or not a cell divides or dies-- so cell division and also death-- is highly regulated. And it's regulated by communication with other cells. So in cancer, this regulation goes awry. And how would this regulation go awry? Yeah, Jeremy? STUDENT: Loss of function in a tumor suppressor or an over-activation of an oncogene. ADAM MARTIN: Mm-hmm. So Jeremy suggested that there could be mutations, like oncogenic mutations or tumor suppressor loss, that lead to abnormal cell division and death. And that is exactly right. So you could have oncogenic mutations, and these oncogenic mutations often hyperactivate or constitutively activate these growth-signaling pathways that are normally downstream of growth factors and receptor tyrosine kinases. And if you hyperactivate those, you reduce the dependency of the cells on these signals. So these oncogenic mutations can reduce the dependency of cells on growth factors and growth signals. OK. 
But one important point is that this is not enough. And you can think about this in the case of the intestine, because if you had an oncogenic mutation in one of these cells here, it might not be as dependent on signals for growth. But it doesn't matter, because even though you had that mutation, it's going to die, and it's going to shed out of the tissue. And actually what happens in many cases where you activate a signaling pathway that allows the cell to, in an abnormal way, go through the cell cycle, is that you actually induce a failsafe mechanism in the cell which causes the cell to undergo apoptosis. So often, you get this step happening, and the cells just undergo apoptosis, because you've evolved to protect your organs from this type of mutation. So it's the oncogenic mutation, in collaboration with loss of tumor suppression, and one of the main tumor suppressive mechanisms our body has is apoptosis, where if a cell is doing something abnormal, the cell simply dies. So loss of the tumor suppressor could be loss of a gene that promotes apoptosis, and in that case, if the cell loses that mechanism, the cell will avoid apoptosis. OK. So this is oncogenic mutations and the loss of tumor suppressors, which subverts the normal communication between cells which is required for normal tissue homeostasis. OK. Now another example of this growth and survival signaling is not one which involves, necessarily, a genetic change, but one that involves changes in expression. And it also involves interaction between the tumor and the surrounding cells of the body. So I'm going to tell you a little bit about the tumor microenvironment. And this is something that's important to consider in cancer because a tumor in a body is not in isolation. It's surrounded by other cells in your body and other things in your body, like matrix, and so here is a picture showing you a tumor, and the tumor is stained for this membrane protein, which is shown here in the rust color. So that's the tumor. 
And what's also stained in this piece of tissue is the DNA. It's stained in blue. And so you see the nuclei of the tumor cells in there. But you see all these blue nuclei surrounding the tumor. And these are stromal cells that are around the tumor. And they've actually been recruited to the tumor by the cancer cells. So what I mean by tumor microenvironment is just the region around the tumor. And so if you have cancer, and you have a tumor, the tumor cells can actually secrete signals which recruit stromal cells. So there can be an interaction where the tumor sends recruitment signals that cause stromal cells to come by. And what the stromal cells can do, what they do for the tumor and why this is beneficial for the tumor, is that stromal cells can secrete growth signals or survival signals that promote the growth of the tumor. So there can be a reciprocal interaction here where you get an abnormal conversation between cancer cells and the surrounding stromal cells. But again, this is very similar, if you think about it, to the way a normal organ works. You often have these signals going between cells. And that is involved in normal tissue homeostasis. What's happening here, though, is abnormal, in that the cancer cells are constitutively recruiting these stromal cells just to get this growth signal that they're addicted to. And just the presence of such a loop suggests that these cancer cells, even if they have oncogenic mutations and have lost tumor suppressors, are not totally independent of growth signals. They still, to some extent, rely on growth signals. So these oncogenic mutations reduce the dependency on growth signals, but this dependency, the dependency on growth signals, is not eliminated. And one experiment that showed this was an experiment that was done back in the 1950s where patients with a type of skin cancer, basal cell carcinoma, were taken, and they had their tumors excised from one part of their body. 
And then the tumor was grafted back onto them, to another part of their body, distant from the original site of the tumor. And this grafting experiment was done either with the stromal cells that surrounded the tumor or the stromal cells surrounding the tumor were removed, and just the tumor cells were grafted back onto the body. OK. Now this experiment probably would not be allowed today, but back in the '50s, I guess it was legal. And so the experimental result is that if you graft the tumor cells with the surrounding stroma, the tumor was able to establish a second tumor, or the tumor cells survived this grafting procedure, whereas tumor cells that were taken without the stroma underwent cell death when they were put in a new location. So there's something about this specialized microenvironment that the tumor creates for itself that is enabling the tumor to grow and survive. And the model is that it's because these stromal cells are secreting growth factors that these cancer cells are still dependent on. So this dependency on growth signals is important, clinically, because some types of cancer, there is an elevation of growth factor receptors that are associated with the cancer. And the famous example of that is, in 30% of breast cancers, there is a growth factor receptor, the HER2 receptor, which is over expressed in the cancer cells. So I just drew a receptor here, and I'm now talking about the HER2 receptor. HER2 stands for human epidermal growth factor, EGF, receptor 2. So this is a receptor tyrosine kinase, which is a transmembrane protein receptor. It's expressed on the surface of the cell. And so about 30% of human breast cancers are HER2 positive, which means that the cancer cells are over expressing this receptor tyrosine kinase. The fact that you have these cancer cells over expressing a growth factor receptor suggests that the cancer requires some type of growth stimulus in order for the cancer to be growing. 
And the fact that this was discovered, that 30% of breast cancers overexpress HER2, has been used by researchers to develop a treatment for this HER2-positive type of cancer. The treatment is known as Herceptin. Possibly some of you have heard of this. It was developed at Genentech. And what Herceptin is, it's an antibody that was raised against the human HER2 protein. OK. So Herceptin is an antibody. It recognizes HER2 on the surface of these cancer cells, and you can treat patients with this antibody, and the antibody binds to the HER2-positive cells. And it either blocks the function of HER2 or recruits immune cells to kill those cells off. The exact mechanism, I don't believe, is known. But what is known is that Herceptin has really changed how we're able to treat HER2-positive breast cancers. And it's been a huge success story in the fight against cancer. All right. So we've talked about this first barrier, the barrier to tumor cells becoming at least semi-independent of these growth factors. And so now I want to talk about other barriers to tumorigenesis. And if you consider an epithelial cell here, even if these cells are able to grow, they won't be able to leave the organ until something else happens. And one thing that would need to happen for these cells to become malignant and to leave the organ that they were initially part of is for there to be a breakdown in the adhesion between cells. So for the next few minutes, I want to talk about the breakdown in cell-to-cell adhesion. So how would a cancer cell basically unstick itself from the cells that surround it in order to leave an organ and go somewhere else? And to explain that, I have to remind you about some of the normal biology of these epithelial cells, which we talked about earlier, which is that they express these transmembrane proteins that are adhesion proteins. So normally, epithelial cells have adhesion proteins.
And the famous one for epithelia, or one of the famous ones, is called epithelial or E-cadherin. And E-cadherin is a transmembrane protein. It has an extracellular domain, but rather than that extracellular domain binding to some secreted ligand, this extracellular domain recognizes E-cadherin molecules on other cells, and they stick to each other. And it essentially functions like cellular Velcro. So the cells link together, and they stick to each other such that they form a coherent tissue. So the way that cancer cells subvert this mechanism of adhesion is, they have to do something that inhibits E-cadherin. And so what cancer cells can do is decrease the cell-to-cell adhesion, and I'll tell you how in just a minute. And in addition to decreasing this cell-to-cell adhesion, they can also promote expression of the genes that are involved in motility. So we have to understand, then, how it is a cancer cell would do both of these things. And I'll start with cell-to-cell adhesion. And in contrast to the growth and survival signaling we've talked about, where mutations happen in cancer cells and you basically have an irreversible change to the genome of the cell, this change in the adhesive properties of the cell is essentially a change in cell state: if a cancer cell is having less adhesion and getting more migratory, it's switching from an epithelial type of state to a mesenchymal type of state. And what this is called is an epithelial-to-mesenchymal transition, and that is EMT, for short. And this EMT is not caused by a genetic change, but it appears that EMT results from changes in gene regulation. So you can think of it as more of an epigenetic change, an epigenetic change in gene expression. So there's a change in gene expression such that this can reverse later on. And there are master regulators of this process, which alter gene expression.
And so the type of gene that alters gene expression, these are often called transcription factors. And there are several transcription factors that are master regulators of this EMT process. And their names are Twist, Snail-- another one's called Slug. There's an Escargot. You can see this got a little out of hand. And you can tell by the names that they probably were not discovered in humans. And in fact, these genes were discovered in the same genetic screen that I outlined before, where Hedgehog was discovered. OK. So it was a genetic screen in the flies, and these are genes that affect the embryonic development of the fly. And I'll show you this view of the fly embryo. So this is an embryo here, and you see, it's an epithelial sheet surrounding the yolk. And what happens is, in early stages of development, a population of cells express the Twist gene. So that's Twist in the dark here. And this Twist gene causes these cells to basically invade into the middle of the embryo. So here are the Twist cells now, going into the inside of the embryo. And then these cells undergo EMT, and they start to migrate around inside the embryo. What this is doing for the fly is that these are the cells that are going on to form the muscle. And if you look at your neighbor, you might notice that their muscles are on the inside of their body. So this process is basically putting these cells, by getting them to move, it's putting the cells in the right place for what they are going to differentiate into during embryonic development. But this process, if activated during cancer, can allow these cells to be more mobile and to leave one organ and go into another organ. So cancer isn't inventing anything new here. It's corrupting a normal program that cells have and need during development, and it's inactivating it at an inappropriate time. So these are all transcription factors. 
And what they do is, they repress the expression or function of cadherin such that the cells are no longer as sticky. So basically, when these transcription factors get turned on, it makes the cells less sticky to each other. And it also promotes their migration. So it turns on genes that are important for migration. This process of EMT, as I mentioned, is reversible, and it also involves interactions between tumor and stromal cells. I'll show you an example of-- so in the fly embryo, one thing that's nice about the flies is, you can actually watch this happen live. And so I'm going to show you a movie now, showing you the invasion process. We're going to have an embryo here. And what I'm going to show you is, we're looking at the cells that are expressing Twist and Snail, and they're on one side of the embryo. And I'm going to show you a section of the embryo that is analogous to this. So when the cells move inside, they're going to disappear in this movie. So when you see the crease in the embryo, that's when the cells are invading into the middle of the embryo. And so the cell outlines are in magenta here. And what's labeled in green is a type of motor protein that's involved in the mobility of cells and, in this case, is involved in these cells moving into the interior of the embryo. So here, you see the motor protein appearing, and that's when these cells are going to dive into the embryo. So it's almost like a waterfall. These cells are moving into the inside of the embryo through this process of invagination and, subsequently, EMT. There it goes again. You can see the cells are now on the inside. So this process of EMT also involves interactions between the cancer cells and the stroma. And that's illustrated here because what you can see, what's being labeled here, you can see the nuclei of the cells, but these cells at the edge of the tumor are up-regulating this gene, which is an alpha-beta integrin, which I'll mention in just a minute. 
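The regulatory logic just described, a master regulator that represses E-cadherin while turning on motility genes, can be captured in a toy on/off model. The gene names are from the lecture, but the boolean logic is a deliberately simplified sketch, not real expression data.

```python
# Toy model of the EMT switch: a Twist/Snail-type master regulator
# represses E-cadherin (adhesion) and induces motility genes.
# The on/off logic is a simplification for illustration only.

def expression_state(emt_regulator_on):
    """Return which programs are expressed given the regulator's state."""
    return {
        "E-cadherin": not emt_regulator_on,   # adhesion lost when on
        "motility_genes": emt_regulator_on,   # migration program induced
    }

epithelial = expression_state(emt_regulator_on=False)
mesenchymal = expression_state(emt_regulator_on=True)

# EMT is a change in gene regulation, not a mutation, so it is
# reversible: switching the regulator off restores the epithelial state.
reverted = expression_state(emt_regulator_on=False)
print(reverted == epithelial)
```

The design choice here is to model EMT as a reversible function of regulator state rather than as a permanent edit to the genome, mirroring the lecture's distinction between epigenetic regulation and oncogenic mutation.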
And this is getting up-regulated right where the tumor contacts the stroma. And this gene, alpha-beta integrin, is a gene that's associated with motility and EMT. So on that slide here, one gene involved in motility is a class of genes called integrins. And so this also results from the signaling between the tumor and the surrounding cells. And again, this is not something that cancer created. This is what happens during wound healing. So during wound healing, if you injure yourself, the wound recruits immune cells, and the immune cells signal to the surrounding skin or epithelial cells to undergo EMT in order to close and fill in the wound. So this is a natural process that happens in your body that is simply getting corrupted by cancer cells. All right. So now let's talk about this last step of motility. So this is now how the cells would break out and move away. Let's come back to this movie here. There are a few things I want you to notice about this movie. The first is, holy shit, that cell is moving. And so the first aspect of this is, because the cell is moving, it suggests that there's some sort of force being generated. So the cell is generating force. And I'm not talking about like some type of mystical Jedi force. I'm talking about the mass times acceleration force, so like a physical force, OK? The second thing I want to point out about this movie is that you see that bacterium, and it's moving around, this cell is able to follow that bacterium. So the cell, in the force generation process, is very responsive to the signals that that cell is getting. So the cell is generating force, and this force is responsive to signals. So for the remaining part of the lecture, I basically just want to tell you about how it is that the cell generates the force required to move itself around. And it involves a particular type of protein. 
It's a component of the cytoskeleton, and we've talked about microtubules and the microtubule cytoskeleton and its role in segregating the chromosomes. But there are other cytoskeletal systems that the cell has, and one is called the actin cytoskeleton, which is shown on the slide above. And actin, the actin cytoskeleton, is a system that is a biopolymer in the cell. So this is a biopolymer, like microtubules. And actin is a gene. And the actin gene encodes a protein, the actin protein, and when the actin protein is made in the cell, it starts out as being just a single globular protein. And when the actin is in this state, it is called globular or G-actin. But these subunits, these proteins, can come together and form a polymer. So they can go from being individual, isolated proteins to forming a polymer that forms a long skinny filament. And this form of the protein that's forming a biopolymer is known as filamentous or F-actin. And these are long, skinny filaments. They can be hundreds of nanometers, even microns, in length, so hundreds of nanometers long. And the filament width is about 10 nanometers. So I'm just trying to give you a sense of dimensions, that this is a long, skinny filament that the cell can assemble. And it assembles into these dense meshworks, which you can see in the slide up here. So this is the leading edge of the cell. So the cell would be migrating this way. And what you see in this cell is this densely branched network, and these are all actin filaments. So you get this huge dense forest of actin filaments that's right at the edge of the cell that is moving forward. One thing I want to point out is, like all things in biology, this is not a one-way street. And so these biopolymers are very dynamic, meaning they can undergo assembly and disassembly, and they can do so on the timescale of seconds.
One last thing I want to point out about the actin filament is, there's a polarity to the filament such that there's an end, known as the plus end, where growth is favored, and there's an end, known as the minus end, where there's often de-polymerization. So you can get a directional growth of the filament. So this is where growth happens. And so in this network that you're looking at, the way the actin is oriented-- so I'm going to draw just a cell here-- you have this dense meshwork of actin, and in this meshwork, there's a polarity to the way the actin filaments are oriented. So the cell is migrating this way, and the plus ends of the actin filaments are facing out right at the surface. And the minus ends are back here. And so what you have, when the cell is migrating, is you have this biopolymer network that's growing on this end, but shrinking on the other end. And that allows the cell to generate a constant protrusive force. So it's the growth of actin filaments, the growth of F-actin, which generates a protrusive force. Now if we consider the whole process of cell migration, what you can see is that initially, you're going to push the cell membrane forward. So you get a protrusion. It pushes it forward. That's often called a lamellipodium, or a pseudopod, and so this part of the cell moves forward. But in order for the cell to have a net motion forward, it then has to stabilize that protrusion. And it stabilizes the protrusion by adhering to the substrate. So right now, we're just considering a cell moving on a flat substrate. And so the way it attaches to the substrate is through cell matrix adhesion. And this cell matrix adhesion is mediated by another type of adhesion receptor known as an integrin. So the cell pushes forward, anchors itself on the substrate, pulls its body along, and then just repeats that cycle over and over again. So the best way I can illustrate this is, if you think about it, you get your protrusion. You get your elbow out. 
You put down. You anchor. And it's just a repeated cycle, and it's able to migrate across the substrate. So it's kind of like a front-to-tail crawling mechanism. You have cycles of protrusion. You generate traction, and then you pull, in order to translocate. And so that's how cells are migrating intuitively. This is just showing you that cells can pull on stuff. So not all cells migrate in this way. I just wanted to say that. So cells in 3D have other mechanisms to migrate that de-emphasize this traction mechanism that cells have. So you can get rid of all the integrins that a cell has, and they're still able to migrate. And that's because 2D emphasizes the role of adhesion, but if you confine the cell, then the cells are able to migrate. So what's shown here are cells that are not confined. This is a confined cell in a micropipette. And you can see, it's the same type of cell, but the cell can migrate in confinement, but it can't migrate outside of confinement. So in this sense, the cell is migrating through a different mode. And you can think of it as the cell is doing some type of chimney-ing maneuver. So it's able to get in a confined environment, push out against its surroundings, and that's how the cell then is able to generate traction in the absence of an integrin molecule. All right. Great. So we are set for now.
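The polarized filament dynamics described above, with growth favored at the plus end and depolymerization at the minus end, can be sketched as a toy simulation. The rate constants, subunit size, and starting length here are hypothetical, chosen only to show the qualitative behavior, not to match measured actin kinetics.

```python
# Toy simulation of actin filament dynamics: subunits add at the plus
# end and leave the minus end. All rates are hypothetical.

def simulate_filament(k_plus_per_s, k_minus_per_s, seconds=10.0,
                      dt=0.01, start_subunits=100, subunit_nm=2.7):
    """Track filament length (nm) given plus-end addition and minus-end
    loss rates, both in subunits per second."""
    n = float(start_subunits)
    for _ in range(int(seconds / dt)):
        n += (k_plus_per_s - k_minus_per_s) * dt
        n = max(n, 0.0)  # a filament can't have negative length
    return n * subunit_nm

# Net growth at the plus end: the filament elongates, which is what
# generates the protrusive force at the leading edge.
growing = simulate_filament(k_plus_per_s=10.0, k_minus_per_s=8.0)

# Balanced rates: length stays constant even though subunits still flux
# through the polymer ("treadmilling"), so the filament remains dynamic.
treadmilling = simulate_filament(k_plus_per_s=10.0, k_minus_per_s=10.0)
print(growing, treadmilling)
```

The point of the sketch is the polarity: because addition and loss happen at different ends, a filament can push forward at its plus end while turning over its subunits, rather than being a static rod.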
MIT 7.016 Introductory Biology, Fall 2018
Lecture 12. Genetics 1: Cell Division and Segregating Genetic Material
ADAM MARTIN: Well, first of all, nice job on the exam. We were quite pleased with how you guys did. And so far in the course, Professor Imperiali has been telling you about information flow, but information flow within the cell itself, so information flow from the DNA to the proteins that are made in the cell, which determines what that cell does. And so we're going to switch directions today. And we're going to start talking about how information flows between cells-- so from a parent cell to its daughter cells. And we're also going to talk about how information flows from one generation to the next. And this, of course, is the study of genetics. And genetics, as a discipline, is the study of genes and their inheritance. And the genes that you inherit influence what is known as your phenotype. And your phenotype is simply the set of traits that define you. So you can think of it as a set of observable traits. And this involves your genes, as you probably know. I mean, just this morning, I was dropping my son off at school, and he was comparing how tall he was compared to his classmates. And as he went in, he was like, thanks for the genes, dad. So I expect that many of you are going to be familiar with much of what we'll discuss, but we're going to lay a real solid foundation, because it's really fundamental for understanding the rules of inheritance and how that works. So genetics is the study of genes. So what is a gene? You can think about genes in different ways. And what we've been talking about up until now, we've been talking about molecular biology and what is known as the central dogma. And the central dogma states that the source of the code is in the DNA. And there's an information flow from a piece of DNA, which is a gene. And the gene is a piece of DNA that then encodes some sort of RNA, such as a messenger RNA. And many of these RNAs can make specific proteins that do things in the cells of your body.
So that's one very molecular picture of a gene. You can think of a gene as a string of nucleotides. And there might be a reading frame in those nucleotides that encodes a protein. So that's a very molecular picture of a gene. The field of genetics started well before we knew about DNA and its importance, and that DNA encoded RNA, which encoded proteins. So the concept of a gene is much older than that. And so another way you can think of a gene is it's essentially the functional unit of heredity. So it's the functional unit of heredity. I'll bump this up. So I want to just briefly pause and kind of give you an overview of why I think genetics is so important. So what you saw up here is you saw a cell divide. And I showed you this in the last lecture-- you saw the chromosomes, which are here, how they're segregated to different daughters. And this is-- basically, you're seeing the information flow from the parent cell into the daughter cells. But we saw this, so I'm just going to skip ahead. So why is this so important? I'm going to give you a fairly grandiose view of why genetics is so important. And I'm going to say that we can make a good argument that genetics is responsible for the rise of modern civilization. Humans, as a species, began manipulating genes and genetics even before we had any understanding of what was going on. So this is more of an unconscious selection. And so 10,000 years ago, humans were hunter-gatherers. They'd go out, and try to find nuts and seeds, and hunt animals. And that's how we got our food. But around 10,000 years ago was the first example of where humans, as a species, really altered the phenotype of a plant, in this case. So wild wheat and wild barley, the seeds develop in a pod. And the biology of the wild wheat is such that the pod shatters, and the seeds then spread on the ground where they can then germinate into new plants.
But 10,000 years ago, humans decided that it would be more ideal if we had a form of wheat which didn't shatter, which is known as non-shattering wheat, in which the seeds remain on the plant. And that allows it to be easily harvested at the end of the season. So 10,000 years ago is one of the first examples where humans really genetically altered the phenotype of a plant. And they selected for this non-shattering wheat, which then allowed for the rise of agriculture. In addition to wheat, about 4,000 years ago was the rise of domesticated fruit and nuts. So here are some almonds. If you would like an almond, feel free to have some. You guys want some almonds? No. If you have a nut allergy, don't eat them. Great. So wild almonds, when you chew them, there's an enzymatic reaction that results in cyanide forming. Rachel just stopped chewing. Don't worry. These are almonds that are harvested at Trader Joe's, so you're safe. And so the wild almonds, obviously, were not compatible with consumption. But 4,000 years ago, humans again selected for a form of the almond, a change involving just a single gene, that was non-bitter, known as the sweet almond, and also not toxic. So this doesn't just go for foods, but also for clothing. So humans have selected for cotton with long lint. And that served as a basis for clothing and sort of allowed us to have fabric. And I just want to end with a little story about the almond: part of the archaeological evidence for when almonds were domesticated came when King Tut's tomb was unearthed. And they found a pile of almonds next to the tomb, because in Egyptian culture, the dead were buried with food to sustain them in the afterlife. So that just gives you an idea as to how far back the importance of genetics goes. If we think about nowadays, right now you are always seeing genetics in the news. And you also have the opportunity yourself to sort of do your own genetic experiment.
And so now you guys are undoubtedly aware of all these companies that want you to send them your DNA. And they also want you to send them money, such that they can give you information about your family tree and also information about your health. So this is now a big business. But if you don't understand genetics, this is not as useful as it could be. So I'm just curious. How many people here have used one of these services and had their DNA genotyped? Cool. And do you think that really changed your view of who you are? Or was it kind of, eh? AUDIENCE: We actually-- I don't know if we even looked at where we came from. We looked for genetic disease. ADAM MARTIN: So you're looking for genetic disorders. And you don't have to tell me anything about that. Yeah, so I have not done this, but my dad has done it. And he will go find his relatives and bore them with our ancestry. So this is one example of how genetics is really in play today. And not everyone knows how this works. I've had people at Starbucks in the morning come up to me with their 23andMe profile and ask me to explain stuff, because they know who I am. It's a little awkward. So we can also use genetics for forensics. And so this is kind of a-- I had a lab manager in the lab, and he told me that people were doing this in senior homes in Florida, which I thought was kind of funny. What I find hilarious about this is the mug shot of the dog. That dog looks so guilty. But you can use DNA to-- you can use DNA to genotype poop. You can genotype your neighbor's dog. You can get evidence that they're the one that's pooping on your lawn. So that's a not-so-serious example. But there are more serious examples of where DNA genotyping is really having an effect in our society. And this is something I mentioned in the intro lecture. Just this past spring, someone was suspected as being the Golden State Killer. This is a cold case. 
The killings happened 40 years ago, but the break came from investigators getting DNA from the suspect's relatives to implicate this person in this crime. So they had DNA from the crime. And they saw that there were matches to the DNA at the crime to certain people. And then they can reconstruct who might be the person in the right place to commit the crime. So this is-- I think this is interesting, because it also leads to all sorts of privacy issues, right? Who's going to gain access to your genotype if you submitted to these companies, right? I mean, this is probably a case where I'd argue there's probably a beneficial result in that you can actually figure out if someone's committed a crime. But there are other issues in terms of thinking about insurance companies where we might be interested in having our information not publicly available to insurance companies. And maybe this is something we can discuss later on in another lecture. For today, I want to move on and go through really the fundamentals of genetics. And what I'm going to do is I'm going to start with the answer. OK? I'm going to present to you guys today the physical model for how inheritance happens. OK? So today, we're going to go over the physical model of inheritance. And this physical model involves cell division, which you saw in the last lecture and also in my opening slide. It involves cell division and the physical segregation of the chromosomes during cell division. So also chromosome segregation. OK, so this is how I'm going to represent chromosomes. And I just want to step you through what it all means. So I have these two arms that are attached to this central circle. The circle is meant to represent the centromere. So this is the centromere. And you'll remember from the last lecture on Monday, the centromere is the piece of the chromosome that physically is attached to the microtubules that are going to pull the chromosomes to separate poles. OK? So that's called the centromere. 
And usually, it's denoted as a constriction in the chromosome or a little circle. OK? These other parts are the arms of the chromosome. Now I'm drawing what's known as a metacentric chromosome. It's not important that you know that term. But it just means that the centromere is in the middle of the chromosome. There are other types of chromosomes where the centromere might be at the end. OK? So there are different types of chromosomes. All right, now, for all of us, we have cells that have different numbers of chromosomes. OK? Some of our cells are what is known as haploid. And what I mean by haploid is there is a single set of chromosomes. Now the cells that we have that are haploid are our gametes, so they're our eggs and our sperm cells. OK? So these include gametes. OK, but most of the cells in your body are what is known as diploid. And diploid means there's two complete sets of chromosomes. OK, and you get one set from one parent, the other set from the other parent. OK? So one set from each parent. OK, and I'll draw the other set like this. And what I'll do is I'll just shade in this one to denote that it's different. OK? So these two chromosomes then are what is known as homologous. They're homologous chromosomes. Homologous. OK, and what I mean by them being homologous is that, basically, these two chromosomes have the same set of genes. OK, so they have the same genes. They have the same genes. But they have different variants of those genes. OK, so different variants of these genes. And these variants are referred to as alleles. OK? So if you have the same gene but they differ slightly in their nucleic acid sequence, then they're distinct alleles of those genes. So often, the way geneticists refer to these different variants or alleles is we use a capital letter and a lowercase letter. OK, so this chromosome over here might have a gene that's allele capital A.
And then this homologous chromosome will have the same gene but a different allele, which I'll denote lowercase a. OK? So in this case, big A and little a are different alleles of the same gene. They might produce a slightly different protein, which would result possibly in a different phenotype. OK? So everyone understand that distinction? Oh, I want to make one point because this came up last semester and was one of those cases where I forgot the part about the head. So we often just have two alleles when we teach genetics. But I hope you can see that because a gene is a long sequence of DNA, there is a ton of different alleles you can have within a given gene. So one nucleotide difference in that gene would result in a different allele. OK? So we often refer to two alleles, but there can be more than two alleles for a given gene. OK? Does everyone see how that manifests itself? OK, great. Any questions up until now? Yes, Carmen? AUDIENCE: So when you say that there's more than one, more than just the two alleles, I don't have more than one on each chromosome. So they're just more than one-- ADAM MARTIN: In the population. So Carmen asked, well, can I have like five alleles of a gene? And that's a great question. And so thank you, Carmen, for asking that. What I mean is if we consider a population as a whole, right? You have two alleles of each gene, unless it's a gene that somehow duplicated. And so when we're considering the population, there can be more than-- right? I mean, I see we have people with-- hair color is not a monogenic trait. But we have people with black hair, with blond hair, with brown hair, right? There is more than just two possible alleles with possible phenotypes. OK? All right, let's go up with this. All right, now I want to start at the beginning. So most of our cells are diploid. And the origin of our first diploid cell is from the union of two gametes. OK? So I'm going to draw two gametes here. Each is one n. 
And I'm just going to draw one set of chromosomes for this here. So we might have a male gamete and a female gamete. And what I'm referring to when I say n here, n is basically referring to the number of chromosomes per haploid genome. So when you have one n, it means you're haploid because you have only one haploid set of the genome. But early in your life, we're all the result of a fusion between a male and female gamete. And so that creates a diploid cell. OK, so now, this diploid zygote, so this is referred to as the zygote, is diploid and now has a set of homologous chromosomes. OK? So I'm only drawing one set of homologous chromosomes here. So on the board, I'm going to stick to just one, so I don't have to draw them all out. In the slides, I have three. OK? So each of these represents a chromosome. These are different chromosomes. Different chromosomes are either a different color or have a different centromere position. And then these down here that are colored are going to be the homologous chromosomes. OK? Do you see how I'm representing this? OK, so once you have the zygote, right, so you guys are no longer one cell, right? You guys each are tens of trillions of cells. So this zygote cell had to reproduce itself, and your cells had to divide, so that you grew into an entire multicellular organism. I'll just quickly erase that. OK, so most of your cells are known as somatic cells. When cells of your body, like your intestine and your skin, divide, they genetically replicate themselves. And they're undergoing a type of cell division known as mitosis. OK? In mitosis, it's essentially a cloning of a cell. Or ideally, it's the cloning of a cell. So you have a diploid cell. It has to undergo DNA replication. And when a chromosome undergoes DNA replication, it will, during mitosis, look like this. OK? And these two different arms or strands, they're known as sister chromatids. OK? So that's just another term you should know.
These are sister chromatids. OK, and the sister chromatids, if DNA replication happens without any errors, should be exactly the same as each other in terms of nucleotide sequence. OK? So after DNA replication, this cell will essentially have four times the amount of DNA as a haploid cell. And it will split into two cells. And again, they'll both be diploid. OK? And I'll just point out, if we're thinking about our pair of chromosomes here, right, this parent cell has both homologs. And the daughter cells, because they should be genetically identical, also have both homologs. OK, so that's an example with just one chromosome. I'll take you through an example with these three chromosomes here-- all six chromosomes. So you have-- these are homologs. These are homologs. These are homologs. And during mitosis, all of these chromosomes initially are all over the nucleus. But during mitosis, they will align along the equator of the cell and what is known as the metaphase plate. Metaphase is just a fancy term for one particular stage in the mitotic cycle. And then what will happen is the spindle will attach to either one side or the other side of these chromosomes. And it will physically segregate them into different cells, OK? And what I hope you see here is that this has six chromosomes. This has six chromosomes. And these two daughter cells are genetically identical to the parent cell. OK, so this is known as an equational division, because it's totally equal. OK? And again, the daughter cells are both diploid, OK? So that's mitosis. Any questions about mitosis? OK. Moving on, we're going to talk now about another type of cell. And these are your germ cells. And these germ cells undergo an alternative form of cell division known as meiosis, OK? And your germ cells-- germ cells produce your egg and sperm. And so meiosis essentially is producing gametes, such as egg and sperm cells, OK? So what's the final product going to be? 
What should be the genomic content of the final product of meiosis? It should be 1n, right? Who said that? Sorry. Yeah, exactly right. What's your name? AUDIENCE: Jeremy. ADAM MARTIN: Jeremy. So Jeremy is exactly right. Right? The germ cells-- in order to reproduce sexually, they should be haploid cells, so that they can combine with another haploid to give rise to a diploid, OK? So the ultimate result that we want is to have cells that are 1n. But most of our cells to start out with are diploid, so they're 2n, OK? So what's special about meiosis is you're not just going from 2n to 2n, but you're reducing the genetic content of the cells. You're going from a 2n to a 1n content, OK? So again, meiosis starts with DNA replication. But in this case, the first division, which is meiosis I, is not equal. And it actually segregates the homologs, such that you get one cell that has one of the homologs duplicated and another cell that has the other homolog duplicated. OK? And I'll show this. I'll show it right now. So this is the same cell now. It's undergone DNA replication. As you can see, each chromosome has two copies. But instead of all the chromosomes lining up in the same position of the metaphase plate, what you see is that homologous chromosomes pair at the metaphase plate. And what happens here is that the homologous chromosomes are separated into two different cells. And now, you have two cells that are not genetically identical, OK? So because this is not equational and there's a reduction in the genetic material that's present in the cells, this is known as a reductional division, OK? So that's meiosis I. And that's a reductional division. And then-- but this is not yet haploid. And so-- here, I'll just stick another one in here. These cells then undergo another round of division, which is known as meiosis II. And during this meiosis, these sister chromatids are separated, such that you're left with one chromosome.
And in my drawing-- at least one chromosome per gamete, OK? So each of these, then, is 1n. OK? So again, you have the chromosomes. But this time, you have them aligned like in mitosis. They align. The sister chromatids are physically separated. And now, you see this cell is genetically identical to this cell. And this cell here is genetically identical to this cell, OK? So that's meiosis II. And that's an equational division much more like mitosis, OK? Because the product of the division of those two cells-- each of those is equal, OK? And finally, the result of meiosis II is that you're then left with gametes that have a haploid content of their genome. OK, I want to end the lecture by doing a demonstration. Let's see. So this could either be amazing, or it will be a complete disaster. So we're totally going to do it. So everyone come up. Right here. Here. Evelyn, you can leave when you have to go. And we'll have a chromosome loss event. OK? It has to be a multiple of four. If we have extra people left over, then those people can supervise. Go. Oops, sorry. All right. What do we got here? Here you go, Bret, Andrew. Sorry. I hope I'm not hitting anybody. AUDIENCE: [INAUDIBLE] ADAM MARTIN: What's that? Yeah, that's the advantage of these. All right. Here you go, Myles. Let's see. Here you go. Sorry. Someone take this. All right. What do we got here? Just got a little chromosome here. AUDIENCE: [INAUDIBLE] ADAM MARTIN: Oops, sorry. All right. Who doesn't have a chromosome? Everyone in the class has a chromosome? All right. One of you want to come in here? All right. We'll see how constrained we are in terms of space. I've never been this ambitious and had this many chromosomes before, so I'm excited to see how this works. So you each have a Swim Noodle. They're different colors, so different colors represent different chromosomes. And then you also have Swim Noodles that have tape on them. And these represent different alleles from your other chromosomes.
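The two meiotic divisions just described can be sketched the same way. This is a hedged toy model (invented for this note, not from the course): a random coin flip stands in for how each homolog pair orients at the metaphase plate in meiosis I, and meiosis II simply splits identical sister chromatids.

```python
import random

# Toy model of meiosis (illustrative only): chromosomes are (name, allele)
# pairs; homologs share a name but may carry different alleles.

def meiosis(cell, rng):
    # Group the homologs of each chromosome together.
    homologs = {}
    for name, allele in cell:
        homologs.setdefault(name, []).append((name, allele))
    # Meiosis I (reductional): homolog pairs separate; the random orientation
    # stands in for how each pair lines up at the metaphase plate.
    cell1, cell2 = [], []
    for a, b in homologs.values():
        if rng.random() < 0.5:
            a, b = b, a
        cell1.append(a)
        cell2.append(b)
    # Meiosis II (equational): sister chromatids separate, so each
    # intermediate cell yields two identical gametes.
    return [list(cell1), list(cell1), list(cell2), list(cell2)]

rng = random.Random(0)  # seeded for reproducibility
diploid = [("chr1", "A"), ("chr1", "a"), ("chr2", "B"), ("chr2", "b")]
gametes = meiosis(diploid, rng)
print(len(gametes), [len(g) for g in gametes])  # 4 [2, 2, 2, 2]
```

Note the end state: four gametes, each 1n with one copy of every chromosome, and the two gametes from the same meiosis II division genetically identical, just as in the blackboard version.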
So these two chromosomes would be homologs of each other, OK? Does that make sense? OK, great. All right. Now, the metaphase plate will be along the center of the room. So let's first reenact mitosis. So why don't you guys find your sister chromatid and then sort of align in the middle of the room here? Sister or brother chromatid. How are we doing? Do we have enough space there? It's a little packed. You can see how the cell-- can you imagine how packed it is inside a cell? OK, everyone found their sister chromatid. Normally, the sister chromatids-- they replicate and they get held together. So there's no finding of sister chromatids, but-- all right. Great. So segregate and we'll see how you guys did. All right. And the goal is that you guys would be genetically identical. So how-- OK, great. That looks like one short red, one short red. OK, that's good. They look genetically identical to me. All right. So that was my mitosis. Now, we're going to do meiosis. OK, why don't you guys align, like what would happen during meiosis I. OK, you guys can come back. Think about who you're going to pair with. [SIDE CONVERSATION] All right. So what were you looking for when you were pairing? Who were you looking for? AUDIENCE: Longest chromosome. ADAM MARTIN: Your longest chromosome, right? OK, great. All right. Why don't you guys segregate? All right, so that was meiosis I. Meiosis I looks successful to me. And now, we have to undergo meiosis II. So maybe what we could do is you guys can rotate. And the metaphase spindle can be sort of in this orientation. AUDIENCE: [INAUDIBLE] ADAM MARTIN: Yeah, that will-- we want a group over there, a group over there, a group here, a group here. And those will be our four gametes. [SIDE CONVERSATION] All right. You guys set? All right. Go. [SIDE CONVERSATION] OK, terrific. Everyone haploid? Looks like everyone is haploid, which is good. Right? So let's just take a minute and think about probability here. 
So what was the probability that a gamete would end up with this orange allele on the red chromosome? AUDIENCE: Half. ADAM MARTIN: Half, right? Because there are two, right? So these two gametes have that allele. These two should not, right? OK, great. And we just had a chromosome loss, so that gamete is in trouble. But maybe we could get a TA to rescue this chromosome. Either one of you is fine. There you go, David. [SIDE CONVERSATION] All right. That was great. Now, let's-- as you're doing this, you get a sense as to how things could get mixed up, right? And you think inside the cell, right? So I don't-- I've lost track of how many chromosomes. We have 1, 2, 3, 4, 5, 6, right? How many chromosomes do we have? AUDIENCE: 23. ADAM MARTIN: We are-- a haploid set for us is how many chromosomes? AUDIENCE: 23. ADAM MARTIN: 23. Exactly. Right? So it'd be even worse for a human cell to get this to go right. So why don't you guys line up in the mitosis configuration? And we'll consider some things that could go wrong. All right. Who here is good friends with their sister or brother chromatid? Is anyone very good friends with their sister or brother chromatid? [LAUGHTER] AUDIENCE: [INAUDIBLE] ADAM MARTIN: Yeah. Someone become good friends and become inseparable, OK? Would someone volunteer to be inseparable? OK, great. You guys are now inseparable, OK? Now, segregate. OK, great. Now, what happened there? AUDIENCE: [INAUDIBLE] ADAM MARTIN: What's that? AUDIENCE: He stole her. ADAM MARTIN: Yeah, that cell stole her. OK. So now, we have two-- a duplication of that chromosome. What's happened over here with this daughter cell? AUDIENCE: It's missing a chromosome. ADAM MARTIN: It's missing a chromosome, right? AUDIENCE: Right. ADAM MARTIN: So these are the types of mistakes that can be associated with a cell becoming cancerous, right? Because let's say there was a gene that suppresses growth on that chromosome. And it wasn't on that homolog.
Then you might end up with a genetic mutant or a loss of that gene that would result in uncontrolled proliferation. Also, picking up extra copies of genes that promote growth could allow that cell to have a proliferative advantage, OK? We're going to-- this is sort of foreshadowing what we're going to talk about later. But I just want to plant the seed now. OK. Why don't we go back and do meiosis? [SIDE CONVERSATION] OK. Now, anyone see any friends looking across the aisle now? All right. Great. You guys are now inseparable. Why don't you guys segregate, except the inseparable ones? Oh, but your sister chromatids still have to stay attached. There you go. See? Great. Right. So just like last time, this is known as a non-disjunction event where the chromosomes don't separate when they should, OK? Great. Now, why don't you guys do meiosis II? [SIDE CONVERSATION] All right. You can segregate. All right. Now, you see these two gametes over here are lacking an entire orange chromosome. And these two gametes here have picked up an additional copy of an orange chromosome, OK? So these two gametes are no longer haploid for the orange chromosome. And if one of these gametes were to fuse with a haploid gamete that has an orange chromosome, then now you have a zygote that has three copies of the orange chromosome, which is abnormal, OK? So if that were chromosome 21 in humans, that would result in something that's called trisomy 21, which is Down syndrome, OK? So you see how mistakes in how chromosomes segregate can result in human disease. OK. Why don't you give yourselves a hand? Good job. [APPLAUSE] OK, you can just throw the Pool Noodles on the side. And I just have one slide to show you where we're going next. [INAUDIBLE] [SIDE CONVERSATION] AUDIENCE: So I have a question. ADAM MARTIN: Yeah? AUDIENCE: When the homologous chromosomes split, can you share alleles? Are there alleles preserved in this portion? ADAM MARTIN: You're asking if there's crossing over?
AUDIENCE: Yeah. ADAM MARTIN: There is crossing over. Yes. And that will get its own entire lecture. Yes, good question. OK, so just to give you guys a preview of what's up next. So in the next lecture, we're going to talk about Mendel and Mendel's peas. And we'll talk about the laws of inheritance, OK? And realize Mendel was way before DNA or our knowledge of what a gene was, OK? Next, we'll talk about fruit flies, and Thomas Hunt Morgan, and seminal work that led to the chromosome model of inheritance and also resulted in the concepts of linkage and also genetic maps. OK, we're going to go-- well, just to sort of anchor yourself, the structure of DNA was published in 1953. So these seminal genetic studies up here were done before we knew about DNA. So geneticists were studying genes and their behavior well before we knew DNA was what was responsible. And then we'll talk about sequencing and the sequencing revolution. We'll talk about cloning, and molecular biology, and how one might go from a human disease to a specific gene that causes it. And then, finally, we'll start talking about the entire human genome and genome sequences. OK, so that's just a preview of where we're going, so have a great weekend.
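The probability points from the demonstration can be checked with a few lines of arithmetic. This is standard bookkeeping, sketched here for illustration; the chromosome labels are invented for the example.

```python
# Independent assortment: each homolog pair segregates independently, so the
# chance a gamete carries a particular allele from one pair is 1/2, and a
# human haploid set of 23 pairs allows 2**23 distinct homolog combinations
# per gamete (ignoring crossing over, which gets its own lecture).
p_one_allele = 1 / 2
combinations = 2 ** 23
print(p_one_allele, combinations)  # 0.5 8388608

# Meiosis-I nondisjunction for chromosome 21: both homologs go to one pole.
n_plus_1_gamete = ["chr21-maternal", "chr21-paternal"]  # two copies
n_minus_1_gamete = []                                   # no copy
zygote = n_plus_1_gamete + ["chr21-from-partner"]       # fuse with a normal gamete
print(len(zygote))  # 3 copies of chromosome 21 -> trisomy 21
```

Over eight million homolog combinations per gamete, before any crossing over, is one way to see why siblings from the same parents are genetically distinct.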
MIT 7.016 Introductory Biology, Fall 2018
4. Enzymes and Metabolism
PROFESSOR: So what are we going to do today? So, today we're going to continue with amino acids, peptides, and proteins. And I want to talk about a different protein variant that is the cause of sickle cell anemia. And it's a very interesting structural issue. But let me very briefly recap what we did last time and then talk to you a little bit about a process known as denaturation. So last time, we discussed how the primary sequence of a polypeptide chain defines its folded structure. The folded structure is put in place with secondary and tertiary interactions, non-covalent interactions. Secondary is just amongst the backbone, and tertiary is sort of everything else, even including backbone amides, but either with water, or a side chain, and so on. And then there are some proteins that associate into quaternary structure. So these monomer subunits, as they would be called-- and I'm going to depict this as a closed circle or an open circle-- may form dimers of some kind. The dimers may be heterodimers. Or they may be homodimers. Or you could form trimers, tetramers, and so on. And when we talk about hemoglobin, which is the protein that has a problem-- that is the cause of sickle cell anemia, you'll see that that is a heterotetrameric protein. So in this sort of rendition, you would kind of draw it like this where there are four subunits. Two are of one flavor and two are of the other. And that's the quaternary structure of hemoglobin. Now proteins fold. There are weak forces that are holding them together. But there's a lot of weak forces. But if you subject a protein to various treatments that may break up those weak forces, the protein will undergo a process of denaturation. So can anyone think of what kinds of things would cause protein denaturation? Yes. AUDIENCE: Some heat. PROFESSOR: Heat is a bad one, is a serious one, obviously. And heat-- yes, I'll write them all down. What's yours? AUDIENCE: pH. PROFESSOR: pH. So pH. Acidity.
Basicity. And we'll talk about why those things cause changes. Any other thoughts? Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Oh. Yeah. So for example, salt. Organic solvents. And a process that a lot of people don't necessarily think about, but as engineers some of you will, is shear forces. So if you're shooting a protein through a very narrow tubing and there's high shear forces, those will also denature proteins. So with heat, it's very clear. You're going to break those weak bonds. And then they can either reform. Or if you go to too high heat, the unfolded protein starts to form aggregates. And anyone who has ever scrambled an egg knows that that is an irreversible process. You don't get to cram the egg back into the shell. It's not the same anymore. Because what you're doing when you're scrambling eggs is denaturing proteins through heat treatment. So that's what heat does. It breaks the forces. The proteins stretch out into their denatured state. And instead of refolding to a compact structure, they just start aggregating with each other. And that's pretty much irreversible. pH is interesting. Why would pH break up a protein? Why would pH cause changes? Yeah. AUDIENCE: [INAUDIBLE] amino acids have a certain structure. So they're either protonated or deprotonated, and if the pH changed, that would change. PROFESSOR: OK. So pH, perfect. So pH will change the charge states of many of your side chains. And once you've changed it, you might have had a lovely electrostatic interaction. But then you go and protonate the carboxylic acid. And it can't form-- in fact, it wants to break apart as opposed to come together. So that is changing charged state, which causes denaturation. Salts and organics. For example, they may make interactions with parts of the protein. For example, organic molecules may slip into a hydrophobic core and break them up. Just push them apart. They want to be there.
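That protonation argument can be made quantitative with the Henderson-Hasselbalch relation. The sketch below uses a pKa of about 4.25 for a glutamate side chain, which is a typical textbook value and an assumption of this example, not a number quoted in lecture.

```python
# Fraction of an acidic side chain in its charged (deprotonated) form,
# from Henderson-Hasselbalch: [A-]/[HA] = 10**(pH - pKa).

def fraction_deprotonated(pH, pKa):
    ratio = 10 ** (pH - pKa)
    return ratio / (1 + ratio)

pKa_glu = 4.25  # assumed textbook value for a glutamate side chain
print(round(fraction_deprotonated(7.0, pKa_glu), 3))  # ~0.998: charged at pH 7
print(round(fraction_deprotonated(2.0, pKa_glu), 3))  # ~0.006: mostly neutral at pH 2
```

So dropping from pH 7 to pH 2 flips that side chain from essentially fully charged to essentially neutral, which is exactly the kind of change that can break an electrostatic interaction holding a fold together.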
And then too high a concentration of an organic solvent that is miscible with water. And we would say ethanol, acetonitrile, DMSO. But you don't need to worry too much about the details. Well actually, once you get above 10% or so, they'll just start denaturing proteins, sometimes reversibly but often irreversibly. So this is very important to know that proteins are stable, but you've got to treat them nicely. There are some human diseases that are a result of misfolded or aggregated proteins. So for example, all the prion diseases are proteins gone bad, pretty much, where they are not in a folded structure anymore, but they are in aggregates that cause problems with cellular processes and toxicity. So Alzheimer's disease. Mad cow disease. A lot of those are neurologic disorders caused by poorly folded or very misfolded proteins, for example. So these are the things we talked about last time with respect to the flux from primary to secondary, to tertiary to quaternary. And that's a perfect time for me to introduce to you what we'll talk about today. So last time we talked about structural proteins. And I showed you how collagen, just with a simple defect, changing a glycine to an alanine in one of its subunits, really alters the quaternary structure of the protein to make very weak collagen that's no longer supportive of bone strength. But what I'm going to talk to you about today is a defect in a transport protein that carries oxygen around the body. So we're going to talk about hemoglobin. These diseases are what are known as inborn errors of metabolism, or that's kind of a complex term. Or genetically linked diseases, because there is a single defect in a DNA strand that then gets transcribed into an RNA strand. So one base defect that then becomes an amino acid defect in your protein strand. So these are tiny changes in the protein that cause dramatic changes in the structure and function of the protein.
And what you will see with hemoglobin is it causes a real problem with the quaternary structure and causes proteins to aggregate. So hemoglobin is the dominant protein in red blood cells. Or erythrocytes. And in fact, the differentiation of the red blood cell as it comes from progenitor cells goes through a process where the red blood cell dumps out its nucleus so it can't divide anymore. And basically, the content of the cell is extremely high in hemoglobin. You've packed the hemoglobin into the red blood cell at the cost of losing the nucleus. So that's terminally differentiated-- a red blood cell can't divide anymore. And they have a half-life of about 100 days. So they turn over, and then that's it. And when red blood cells turn over, the hemoglobin has to be taken care of in order that it's not toxic. Red blood cells are red because of a particular molecule that's in the hemoglobin called the heme molecule, which is bound to iron, which provides the hemoglobin with the capacity to pick up oxygen in your lungs, travel it around the body, and then leave it where it's needed. And then replace the oxygen with CO2 and take the CO2 back to the lungs in order for you to respire it out. OK? So hemoglobin carries oxygen and CO2, oxygen from the lungs, CO2 back to the lungs. And the reason why you need the iron is that the iron is coordinated to the oxygen. So the heme molecule-- I won't draw it. If you want to see it, it's a big, complex organic structure. Very interesting structure. But something for another day here. But I want to just stress to you that the iron heme complex is red. That's why your blood cells are red. Your blood cells don't have a nucleus so they can cram in lots more hemoglobin. So it's kind of a fascinating situation. So hemoglobin is an example of a heterotetrameric protein. And it has four subunits. Two of one flavor and two of another.
So we call this an alpha 2 beta 2 protein, differentiating the alpha subunits and the beta ones. Yes. AUDIENCE: Why isn't it homotetrameric? PROFESSOR: Why isn't it homotetrameric? AUDIENCE: [INAUDIBLE] PROFESSOR: You could ask why is it? I don't know. I mean, there will be interactions amongst the subunits that favor that particular packaging. The subunits are kind of similar in shape. They have what's called a globin fold. You can more or less pick out those tubes, remember, alpha helices. They could form tetramers that are all the same, but the energetically favored form is the two and two. Hemoglobin is a tetrameric protein because that's really advantageous for picking up oxygen and dropping off oxygen in a very narrow oxygen range. So there are proteins called globins that are just one of these subunits and can bind oxygen. Hemoglobin is tetrameric because it has cooperative oxygen binding. So in a very narrow range of oxygen, it fills all four sites in the tetrameric protein with an oxygen molecule. So it's very advantageous from a physics perspective that it responds to very narrow changes in oxygen. Does that make sense to everyone? Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: OK. Anything that's cooperative means that, let's say I've got a tetramer of hemoglobin. One oxygen binds to one of them. So I'm binding an oxygen here. And then binding to the next, the next, and the next gets easier and easier. So they sort of want to come in as a team. And that's handy for maximizing oxygen transport around the body in a narrow oxygen range, which is all we can do with what's out there in the atmosphere, so we have to make this work. Does that answer your question? OK. All right. So where was I? OK. So what we're going to do today. We're going to look at hemoglobin. It's the tetramer. Those discoid structures are the hemes that I just mentioned. I've drawn them as this sort of four-leafed clover here just for simplicity.
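One way to see why cooperativity helps is the Hill equation for fractional saturation. The numbers below (Hill coefficient around 2.8, half-saturation pressure around 26 torr) are common textbook values for hemoglobin, assumed for this sketch rather than quoted in lecture; n = 1 stands in for a single-site globin with no cooperativity.

```python
# Fractional oxygen saturation from the Hill equation.

def hill_saturation(pO2, p50, n):
    """pO2: oxygen pressure; p50: pressure at half saturation; n: Hill coefficient."""
    return pO2 ** n / (p50 ** n + pO2 ** n)

for n, label in [(1.0, "single-site globin"), (2.8, "cooperative hemoglobin")]:
    lungs = hill_saturation(100, 26, n)   # roughly alveolar oxygen pressure (torr)
    tissue = hill_saturation(20, 26, n)   # roughly resting tissue pressure (torr)
    print(label, round(lungs - tissue, 2))  # fraction of sites unloaded per cycle
```

With these assumed numbers, the cooperative curve unloads about 0.65 of its sites between lungs and tissue, versus about 0.36 for the non-cooperative one: the steep, switch-like response in a narrow oxygen range is exactly the advantage being described.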
And there is a single defect in the sequence of the single monomer subunits in hemoglobin. So each of these-- let's go here. So there are four proteins-- beta globin, two copies of beta globin, and two copies of alpha globin. They are all-- let me see. What's the size? Do, do, do, do. [INAUDIBLE] You know, I can never see things when I'm up at the screen. But they're about 150. 156. OK. So they're about 146 amino acids long, each of them. And a single defect in the beta globin, where you have a change from glutamic acid at residue 6 to valine at residue 6-- one change in beta globin, which means two changes in the whole structure, because there are two beta globins-- alters the properties of the hemoglobin and causes what's called sickling of your red blood cells. So let's take a look at what that would look like at the amino acid level. Glutamic acid is one of your charged amino acids. I'm just going to draw a little bit of it as it were in a peptide. And it's at position 6 in the sequence. So it's six residues from the amino terminus because we always write things in this direction. And the change takes place to put in place a valine. And there's a pretty big change in identity and personality of those residues. You've gone from a polar charged residue to a neutral, big, fluffy, hydrophobic residue. And it's really amazing. So the beta globin is encoded on chromosome 11. It's 134 million base pairs. One base has changed. So what you have in the DNA, in the normal DNA that encodes the normal beta globin gene, there's a particular sequence of nucleic acids. This is what the double strand would look like. We're going to see way more about nucleic acids next week. When that gets converted to the messenger RNA, you get a particular codon that in the genetic code codes for glutamic acid. Everything's normal. A single change, if we change the center nucleic acid within the DNA, it makes a different messenger RNA.
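The base change being described is the classic GAG-to-GTG swap at beta-globin codon 6 (written here as coding-strand DNA). The few-entry codon table below is just a small slice of the standard genetic code, included for illustration:

```python
# Minimal slice of the standard genetic code (coding-strand DNA codons).
CODON_TABLE = {"GAG": "Glu", "GAA": "Glu", "GTG": "Val", "GAC": "Asp"}

normal_codon = "GAG"   # codon 6 of normal beta globin
mutant_codon = "GTG"   # the sickle variant: only the middle base differs, A -> T
assert sum(a != b for a, b in zip(normal_codon, mutant_codon)) == 1
print(CODON_TABLE[normal_codon], "->", CODON_TABLE[mutant_codon])  # Glu -> Val
```

One base out of a 134-million-base-pair chromosome, and the translated residue flips from charged glutamic acid to hydrophobic valine.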
And one base pair puts in valine instead of glutamic acid out of 134 million base pairs. So what happens in the normal hemoglobin, you have normal behavior. You have this tetrameric structure. It cooperatively carries oxygen. It moves around the blood no problem. Excuse me, it sits in the erythrocytes or red blood cells no problem. The minute you have that mutation, the hemoglobin molecules start to associate into fibrillar clusters, because each tetramer gets glued to another tetramer, and another one, and another one. So you have hemoglobin not behaving as this beautiful, independent quaternary structure, but rather physically sticking to other molecules. And those molecules get so large that they start to form long and inflexible chains. And it's such a dramatic change that that discoid structure that you're familiar with for red blood cells suddenly becomes a sickle shape. So that would be the normal cell with normal hemoglobin. But sickle cells, they look like this. They're kind of curved, a very odd shape. And the problem is red blood cells have evolved to move really smoothly through your capillaries. As soon as you get a different shape that's not that discoid structure, they start clogging in the capillaries. And when you have the defect where all of your hemoglobin is messed up with this variation, it's incredibly painful, because think of all your capillaries going out to the farther reaches of your joints. Those very thin blood vessels are blocked up with the sickled red blood cells that are caused by the variation in hemoglobin. So that one little defect takes us all the way to a serious disease. All right? So what I want to do very briefly is show you the molecular basis for this. All right. And the defect actually appears on the two beta globin chains, but right on the outside of the protein, not in the middle of the protein.
Because this is a defect that affects how proteins interact with other proteins, not the function of the protein on its own. It probably still carries oxygen just fine. But it's the mechanical change in the hemoglobin that causes the disease. OK. So in sickle cell anemia, the hemoglobin is now called hemoglobin S, with that mutation that I just described. And when people are heterozygous, it means they have one good copy of the gene that's normal and one copy of the gene that's the variant. And you'll learn much more about this in human genetics when we talk about that later on. So you have a mixture of the OK hemoglobin and the sickle cell hemoglobin. People who are homozygous for the defect, all of their hemoglobin is disrupted, and those are the people who really end up in hospital with a lot of transfusions, and so on. The heterozygous state, actually, you can manage quite well with. And I'm going to show you in a minute that in some parts of the world, being heterozygous-- i.e., having some of your hemoglobin with a defect and some without it-- actually confers an advantage. It's a really cool story. So what I'm going to do is quickly show you the wire structure. OK, so this is the structure that elucidated the real reason for the interaction. What happens when you have this mutation. And it was a structure that was captured of a dimer of hemoglobin molecules where you could really see what was happening at the interface and the sorts of changes that had been put in place by that variation from the charged to the neutral structure. So for any of you who want to pop by, I can start to show you how to manipulate PyMOL. We can do that separately from class. But this is a dimer of tetramers. And if I just show you just some of the subunits, I can actually show you how there's two of each subunit in each structure. So if I go, I can pick some out. Every other one. And then I can color them a different color.
You can see where the beta globins are and where the alpha globins are. That still looks like chicken wire. It's very unsatisfactory. So what I can do is I can show you everything as a cartoon and get rid of all those little lines. And then you can see perfectly the structure where you see two beta globins and two alpha globins in each structure. OK? So what we're going to do next is zoom in to see what's happening where we've done this mutation, what's going on with the placement of the valine in that structure. All right? And wherever I put a four-letter code-- so that one was 2HBS-- that's what's known as the Protein Data Bank code, and it enables you to go fetch the coordinates of that protein. So if any of you for the late project want to do a protein structure and print it, come to me and I'll explain a lot more about that. Or the TAs can also do that. So let me now move you to looking in closely at the variations. So what I've done here is I've actually colored-- the beta globin is purple, and the alpha globin is cyan colored. You can see the hemes in each of the subunits. Those are those red wire things. And now we've zoomed into the place where the mutation is, where you have a valine instead of a carboxylic acid. And what you can see from this image, which I should stop, is that the valine on one subunit in one tetramer interacts with a sticky patch on another subunit that's made up of phenylalanine 85 in the adjacent protein and leucine 88 in the adjacent protein. So this sticky patch on one surface glues onto a sticky patch on the surface of another tetramer. If you had glutamate there, would that form? No. In fact, it would be quite deterred from forming because you don't want to cram that negatively charged element into those two hydrophobic residues. So what you've gone from is a situation where this really is fine on the surface. It's hydrated. It's not sticking to anything.
To another situation where you have phenylalanine and leucine, which are both hydrophobic, providing a patch on the one tetramer where the valine from the other tetramer can bind. And because the molecule's a tetramer, on each of the subunits, there is also another valine that will go off and do that elsewhere, and another valine. And there's one you can't see that's tucked behind. So that's why the hemoglobin forms these structures, because every hemoglobin molecule has two places to stick to another hemoglobin tetramer, and so on. So think of the repercussions from one nucleic acid change. It's really quite remarkable. So what we've seen here is that that change occurs. And just a couple of moments for you to think about this, you can have variations at that site that won't cause a problem. Which ones of these do you think are least likely to cause a sickle cell type of phenomenon? So tyrosine, serine, aspartic acid, and lysine? So I'm going to change the glutamate to something else. Which one's going to have a perfectly normal hemoglobin? There's one that stands out. Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Aspartic. That's fine. No problem. You just switched it for its younger brother. Well, which one of the others? And in many cases here, you could probably argue your way to all of them. But one would be pretty bad. Which one would be pretty bad? Tyrosine, exactly. It's another. Even though it's got that OH group, it's still pretty hydrophobic because of that ring system there. What about the other two, serine and lysine? What do you think? Which one would probably be, in fact, the least detrimental of those remaining two? And give me the reason as well. Yes. AUDIENCE: Lysine. PROFESSOR: Lysine. I think it would be lysine because lysine is now positively charged. It's equally unlikely to want to do this goofy interaction because it is also charged, just charged in the other direction.
But one could also argue that serine would be OK because it's a little bit more polar, so it wouldn't cause as much of a problem. OK. Finally, on this issue with sickle cell anemia, there's some fascinating data that shows in parts of the world-- for example, during a drug trial for Plasmodium falciparum, one of the causative agents of malaria, they found that 1 out of 15 people with the sickle cell trait was infected with malaria, whereas of the people who were healthy, normal homozygotes for the right hemoglobin, 14 out of 15 were infected with Plasmodium falciparum. Now why do you think that is? How can we relate the infectivity of a parasite with the shape of a cell? We've gone from these juicy-looking red blood cells, nice and round and probably quite open, to a cell that's sort of a difficult shape. So it turns out that the parasite doesn't want to infect the sickle cell red blood cells anywhere near as well. And there are, for example, other blood studies which show the same correlation. And here's a map of Africa where you see a massive overlap of the prevalence of the sickle cell trait and the presence of Plasmodium falciparum. So there is an evolutionary advantage to having the heterozygous variant where you have some normal hemoglobin but some of the sickling hemoglobin, because it confers on you some resistance to malaria. It's not good to have both copies of the variant that causes sickling, because that's painful and it really causes a lot of health disorders. It's best when you have one of each gene encoding both variants. OK? All right. Great. OK. So now we're going to talk about enzymes. And these are the proteins that catalyze reactions. Any questions about that? So while a lot of disease states actually might be bred out because someone would be at a disadvantage with a particular disease, in this case, that trait has been maintained because it offers a very different advantage with respect to disease. OK. Let's talk about enzymes for a moment.
Or for the rest of the class, in fact. OK. So enzymes are the heavy lifters of the protein world because they catalyze all the reactions in metabolism, in biosynthesis, all kinds of transformations that make you what you are. An enzyme is a protein-based catalyst. You all know that. Terrible writing again. There are a couple of other terms I just quickly want to give you. So besides the term enzyme, there is also a term known as an isozyme. And an allozyme. You may see them. You'll see allozyme less commonly, but you'll see isozyme quite commonly. An isozyme of one enzyme is a variation on the enzyme that catalyzes the same reaction, but it's expressed from a different gene. An allozyme is the same enzyme, but with a variation in it. So it's encoded by an allele of one gene. So it's just a variation of the gene that might have happened through a mutation. Still catalyzes the reaction, but there's a slight change in the sequence. But they're coded by the same gene. Same gene, with a variation. And as I said, you will see the isozyme term more commonly than the allozyme term. Now why do we need enzymes? Well, the problem is there are physiologic reactions that we need to carry out that are just too hard to carry out at room temperature, pH 7, in water. They just don't occur. So you need enzyme catalysis for all of your metabolic reactions. Let me just give you one trivial example. This bond you already know nicely now. Peptide or amide bond. If I want to hydrolyze that, if I want to break it open, at pH 7, physiologic temperature, so 37°C, in water, it would take me-- how many years is it? The half-life of that bond would be 600 years. OK? That's pretty untenable for digesting a Big Mac, even under the best of circumstances. So we need enzymes to speed up breaking down proteins and carrying out reactions because otherwise, we just can't-- we can't do anything.
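That 600-year half-life corresponds to a vanishingly small first-order rate constant, which makes the scale of enzymatic acceleration concrete. In this sketch the half-life is the lecture's figure, while the catalyzed turnover of about 100 reactions per second is an assumed, typical protease value rather than one given in class:

```python
import math

# Uncatalyzed amide-bond hydrolysis at pH 7, 37 C: half-life ~600 years
# (the lecture's figure). First-order kinetics: k = ln(2) / t_half.
half_life_s = 600 * 365.25 * 24 * 3600   # years converted to seconds
k_uncat = math.log(2) / half_life_s
print(f"k_uncat ~ {k_uncat:.2e} per second")

# Assume a typical protease turns over ~100 substrates per second
# (an illustrative value, not from the lecture).
k_cat = 100.0
enhancement = k_cat / k_uncat
print(f"rate enhancement ~ {enhancement:.1e}-fold")
```

The trillion-fold speedup is why a Big Mac is digestible at all.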
So what I want to describe to you are some of the details of how enzymes work and then how we can control the function of enzymes. So typical enzymes take a substrate to a product. Some enzymes may take two substrates and make one product. Some enzymes maybe take one substrate and make two products. It just depends on the transformation that you're doing. Enzymes are classified into a bunch of different families. But the thing that will tell you that something you're reading about is an enzyme is the suffix -ase at the end of the name of the enzyme. So the enzyme that hydrolyzes the peptide bond or hydrolyzes proteins is called, no big surprise, a protease. And you'll see later on ribonucleases, DNases, oxidoreductases, all kinds of reactions where if you see this term at the end of the name it's telling you quite loud and clear that it's an enzyme. Just a very simple way of remembering that. Now enzymes promote reactions in order that we can have them carried out at room temperature. But we want to think about how they carry out these changes and transformations. What is it about the structure of the protein that enables these reactions? But the first thing we have to do is take a look at the thermodynamics and kinetics of a transformation. So before I go anywhere, what I want to do is describe to you how enzymes work by thinking about the physical parameters that describe the energetics of a transformation. So in thermodynamics, you all know delta G is delta H minus T delta S. And we're really only going to worry about one of these terms. We're going to worry about delta G, and I'll explain why. So delta G is the Gibbs free energy, H is the enthalpy, T is the temperature in Kelvin, and S is entropy. So those are the terms. When you're looking at an energy diagram, we generally think about reactions where we describe the y-coordinate as delta G, the change in the free energy, and the x-coordinate is your reaction coordinate.
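The Gibbs relation just written is easy to check numerically. A minimal sketch, assuming delta H and delta G in kJ/mol and delta S in kJ/(mol·K):

```python
def delta_g(delta_h, temp_kelvin, delta_s):
    """Gibbs free energy change: dG = dH - T*dS."""
    return delta_h - temp_kelvin * delta_s

# Example: a mildly exothermic reaction with a favorable entropy change at 37 C.
dG = delta_g(delta_h=-10.0, temp_kelvin=310.0, delta_s=0.02)
print(dG, "kJ/mol ->", "spontaneous" if dG < 0 else "non-spontaneous")
```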
So in going from a substrate to a product, we generally have a situation where we have a substrate at a certain energy, and then maybe a product at a different energy. And we're going to talk about the details of that. So why do we deal with Gibbs free energy, not enthalpy? Does anyone know? OK. Enthalpy describes the energies of all the bonds in a molecule. But when you're doing an enzyme-catalyzed transformation, you're not busting open all of those bonds. You're not breaking something down to carbon, hydrogen, and oxygen. You're only dealing with parts of the energetics of the molecule. You're only dealing with what's known as the free energy changes. So looking at the enthalpy changes isn't going to get you very far. It's not going to describe the reaction because the enthalpy changes would be enormous breaking down that molecule. And that's not what you want to achieve. In a chemical transformation, we care about delta G. Now the next thing to think about is what are the energetics of the reaction, and how does an enzyme-catalyzed reaction manipulate those energetics? So the key thing here is we want to talk about Gibbs free energy. I shouldn't have written quite this much stuff here because I need the Blackboard. All right. So when you describe a reaction, you want to understand how far that reaction goes and how fast that reaction goes. So when you go through a reaction, we can describe how far the reaction goes by thinking about the free energy of the substrates and the products. So in this case, the substrate is at a higher energy than the products. So you will go a long way through the reaction to make quite a lot of products in a transformation. So that describes how far the reaction goes. So that is the difference between the energy of the substrate and the product. How fast the reaction goes is described in a different part of this diagram. Does anyone know what it is? Yes. AUDIENCE: Activation rate. PROFESSOR: Yes, exactly. 
How fast the reaction goes is literally how high the mountain is that you have to get over to carry out the transformation. And that height is described as the energy of activation. So that tells you how fast, and the difference here tells you how far. The energy of activation is a really important parameter because it's actually what gets manipulated when you're dealing with catalyzed reactions. So the energy of activation-- the higher that mountain is, the slower the reaction will be because it's a much harder transformation to go through. The reactions in our bodies can be of different flavors depending on the difference in energy of the substrate and the product. So shown there, substrate going to product where the product is at lower energy than the substrate, we would call this an exergonic reaction because we're releasing energy in the transformation. So S higher than P. Exergonic. And if we have a different reaction-- and I'll sketch this one in here-- where the product is higher energy-- and this is a reaction coordinate-- then that will be an endergonic reaction. Both reactions happen in enzyme-catalyzed systems. And we'll explain why you're able to catalyze even ones that require energy. So exergonic releases energy. And endergonic requires. OK. What else have I got on here? We also, in the situations where energy is produced, the exergonic reactions, we call these catabolic processes. And if you have trouble remembering catabolic and anabolic, just join me in that because I always forget which is which. But the ones that produce energy are catabolic. The ones that require energy are anabolic. And when we think about metabolism, the catabolic reactions are when we're breaking molecules down because we need energy. We need to use it to do something. The anabolic reactions are when we want to store things. Store fats, build proteins, because they're going to be endergonic. They're going to be requiring energy to take place. 
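"How far" can be made quantitative: the standard free-energy gap between substrate and product fixes the equilibrium constant through K = exp(-dG0/RT). A quick sketch with illustrative values:

```python
import math

R = 8.314e-3  # gas constant in kJ/(mol*K)
T = 310.0     # body temperature in Kelvin

def equilibrium_constant(delta_g_standard):
    """K = exp(-dG0 / RT): how far the reaction proceeds at equilibrium."""
    return math.exp(-delta_g_standard / (R * T))

print(equilibrium_constant(-10.0))  # exergonic: products strongly favored
print(equilibrium_constant(+10.0))  # endergonic: substrate strongly favored
print(equilibrium_constant(0.0))    # dG0 = 0: K = 1, a 50/50 mixture
```

Note the symmetry: flipping the sign of dG0 inverts K, which is just the same reaction read in the opposite direction.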
I just forgot one thing that I have shamefully left out. Remember, this axis is kilocalories per mole most commonly when we're talking about delta G, or kilojoules per mole if you're in a different part of the world. But it's important to have units on these diagrams. So that tells us a little bit about enzyme-catalyzed reactions. We need the enzyme to do something about this energy of activation. Because if there weren't a high energy of activation and I brought a Snickers bar to eat during class, it would just burst into flames, right? It needs a high energy of activation to keep it stable under regular conditions, but only break down the bonds at times when you require that breakdown. All right. So what does the catalyst do? OK. Now I'll show you the simple reaction. The enzyme is a very large structure. It binds to a substrate, chemistry happens, and it releases a product. But at the same time, you can't disobey the principles of thermodynamics. So there are certain criteria we have to think about when we consider an enzyme-catalyzed reaction. So first of all, catalysts do not disobey the laws of thermodynamics. They do not change delta G. Delta G is a property of the substrate and product. You're not going to change it with a catalyst. The catalyst has a more important impact on a different parameter. Which parameter do enzymes change and help lower? Over there. AUDIENCE: [INAUDIBLE] PROFESSOR: Right. So catalysts do change and in fact lower the energy of activation. And we'll talk about how they do that at the end. And then the last rule about a catalyst is that you can recover it unchanged after a reaction. It would be a lousy catalyst if it did its chemistry and then got used up. So enzyme catalysts are the ultimate green reagents. You can keep using them thousands and thousands of times to continuously turn over a transformation. So you haven't changed the catalyst.
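Those catalyst rules show up directly in the Arrhenius law, k = A*exp(-Ea/RT): lowering the activation barrier multiplies the rate, while the substrate and product energies, and hence delta G, are untouched. A sketch with illustrative barrier heights:

```python
import math

R = 8.314e-3  # gas constant in kJ/(mol*K)
T = 310.0     # body temperature in Kelvin

def arrhenius_rate(e_activation, prefactor=1.0):
    """Arrhenius law: k = A * exp(-Ea / RT)."""
    return prefactor * math.exp(-e_activation / (R * T))

# Illustrative barriers: uncatalyzed 80 kJ/mol, enzyme-lowered to 50 kJ/mol.
speedup = arrhenius_rate(50.0) / arrhenius_rate(80.0)
print(f"lowering Ea by 30 kJ/mol speeds the reaction ~{speedup:.0e}-fold")
# Only *how fast* changes; *how far* (delta G, the equilibrium) is unchanged.
```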
So the things that we want to think about are: what are the processes that enzymes can manipulate? And I should probably just quickly run through these slides since we've talked about these entities. But I put them on the board because they're particularly important. So the energy of activation of a catalyzed reaction is lower than the uncatalyzed. And I'm not going to bore you with these questions because you can work this out quite readily. So delta G is the free energy change. And these are exergonic because the energy of the products is lower. So this is the slide I want to get to with respect to enzyme catalysis. So we always think, well, gosh, the enzyme is really large relative to the size of the substrate. That's because all the energy within the folded protein structure is very useful for lowering the energy of activation of a transformation. So let's say I have a reaction that involves two substrates coming together to make a product. Off the enzyme, it's going to take these guys a long time to bump into each other to do chemistry. The way enzymes catalyze those types of reactions is they have binding sites for both of those compounds. In fact, the enzyme acts as a stage. One substrate binds. The other substrate binds. They're binding close to each other on the enzyme. Chemistry can happen. It favors reactions that involve multiple molecules. What about another situation where you have a bond-- for example, the amide bond that proteases break? How can we make that easier? Well, amides are most stable when they are flat and planar through this arrangement of atoms. But what can happen on the enzyme is that it can twist bonds to make them less stable and then easier to hydrolyze. So the structure of that enzyme basically holds onto the substrate and twists or distorts the bond that you're trying to do chemistry on to once again lower the energy of activation.
Another way enzymes work is in a reaction where you're breaking this bond, you might make charged intermediates. The enzyme's there to hold those charged intermediates in order to stabilize them. Once again, to lower the energy of activation. So it's funny when you get the question, well, how do enzymes catalyze reactions? There is no one rule. You want to think about the reactions and then just think about the ways in which an enzyme could contribute to that. For example, orienting two substrates ready to do chemistry. Causing physical strain in a bond that you want to break. Or stabilizing electric charges that form along the reaction coordinate. So there are loads and loads of different principles, and it's a really important area of study. So finally, I think I have a couple-- oh no, I have a couple of minutes. But I want to just describe this to you. It'll also be covered in the sections, because I'm going to rush it a bit, and this last bit features a little bit on the problem set. So finally, enzymes are very commonly the targets of drugs. We like to think that some enzymes are important drug targets: if we deactivate the enzyme, we might mitigate the symptoms of a disease. Now you can't go in and heat the enzyme or denature the enzyme if you're trying to treat a person. So we do a lot of work to mitigate disease by inhibiting enzymes with small molecules. So in these slides, I describe to you the types of molecules that may alter the chemistry of a transformation. So if a substrate binds to an enzyme active site-- we often do this Pac-Man rendition-- you could design a molecule that binds there instead and basically blocks the substrate from getting there. This would be called a simple reversible inhibitor that's competitive with the active site. There are other inhibitors that will bind to the enzyme but do chemistry with it and stay stuck on the enzyme. And that would be called an irreversible competitive inhibitor. You can't get the inhibitor off.
And there's a difference in the way you can reverse these. Because for example, up here, if I add a lot more substrate-- and these are equilibria-- I can get my reaction to happen anyway. But here, I could add as much substrate as possible but it won't help. It won't reverse the transformation. OK? And there's a question here about how to restore the reaction. The answer really is, you just have to start with new enzyme, because you've covalently changed the protein structure. The last type of inhibitors that are important are the ones that bind at different sites on the enzymes. And they are called allosteric. Allo always means different. So if you have a compound that's an allosteric inhibitor, it might bind on another face of the enzyme, but it will alter the active site so it doesn't work. That's an allosteric inhibitor. And the final type of compound is an allosteric activator that may bind somewhere else on the enzyme but make it more active. So these are the ways small molecules work. I'd like to encourage the TAs to just cover this in a little bit more detail because I've rushed it. And I'll also re-mention it at the beginning of the next class. But bear in mind, we should have everything covered now for problem set 1. And if you have any questions, reach out to us. We'll cover these in section. And I'll reiterate a little bit of this in the next class. And finally, there's a little bit of reading. If you would like to prepare, we'll talk about carbohydrates next time, one of my favorite molecules. And there's also a fabulous set of videos on how enzymes work at the Protein Data Bank site. And you will see this little handout on the version of the slides that's posted.
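For anyone who wants the competitive-versus-irreversible distinction in quantitative form, the standard Michaelis-Menten rate law (not formally introduced in this lecture, so treat this as a preview with made-up parameter values) captures it nicely:

```python
def rate_competitive(s, i, vmax=1.0, km=1.0, ki=1.0):
    """Michaelis-Menten rate with a reversible competitive inhibitor:
    the inhibitor raises the apparent Km, so high [S] out-competes it."""
    return vmax * s / (km * (1 + i / ki) + s)

def rate_irreversible(s, fraction_dead, vmax=1.0, km=1.0):
    """An irreversible (covalent) inhibitor permanently kills a fraction
    of the enzyme, lowering the effective Vmax; [S] cannot rescue it."""
    return vmax * (1 - fraction_dead) * s / (km + s)

# Flood the system with substrate (S = 1000 >> Km):
print(rate_competitive(1000, i=10))                # climbs back toward Vmax
print(rate_irreversible(1000, fraction_dead=0.9))  # stuck near 10% of Vmax
```

This is exactly the lecture's point: adding substrate rescues the reversible competitive case but not the covalent one.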
MIT 7.016 Introductory Biology, Fall 2018
Lecture 34: Viruses and Anti-Viral Resistance
BARBARA IMPERIALI: What I'm going to do first of all for today, the bulk of today's lecture will be on HIV and Ebola viruses, with more time spent on HIV because it's potentially one of the mechanistically best understood of the retroviruses. And it also has offered numerous opportunities for therapeutic intervention. And there are a lot of themes and terms that I can bring up as I talk to you about the HIV virus, because it's just a great exemplar of the viruses. OK. So HIV is what is known as a retrovirus. So that sort of designates that there's something working backwards in this retrovirus. Based on its number in the Baltimore classification, it has the most recently identified mechanism of the viruses. HIV stands for Human Immunodeficiency Virus. And when you hear the term HIV, you'll often hear HIV/AIDS, where the AIDS part stands for Acquired Immunodeficiency Syndrome. So what you can start to tell from the name-- why am I hunching down-- is there's something related to the immune system with respect to this virus. And in particular, it is that this virus targets, very specifically, a population of the cells that are critical in immunity. And when the virus attacks those cells, the person infected with the virus becomes immunodeficient. So I've talked to you about different viruses. Some go to the liver. Some go for other organs. Some, like the viruses that go to the liver, really cause a deficiency in liver function, but this one very specifically causes deficiencies in immune function. And it has been a tremendous challenge developing vaccines against HIV/AIDS because of this problem with the immune system. The other thing that I mentioned to you in the last class was that HIV is often found co-infecting TB patients, people with those microbial infections. And that's because the immunodeficiency makes you more susceptible to tuberculosis. And oftentimes there's a co-infection with the two diseases.
And it makes the people with TB really even less able to combat the TB infection. So that is really what gets a lot of the TB patients. So that's an important aspect to know. And it's also an important aspect to be aware of when treating people for any infection: if there's an HIV component, the body is much less able to deal with mounting an immune response because of the fact that the immune cells are targeted. So HIV is an enveloped virus. That means it has a membrane around it. It is this sort of peculiar shape where the outside of the virus is coated with membrane. And then stuck in that membrane are particular proteins that are very critical for interaction with the human host cells. So these are often termed GP proteins. And the very important one is GP120. And that GP stands for glycoprotein. Now, an important aspect, remember, of virus lifestyle is that they basically exploit all of the human cellular machinery for their benefit to make their proteins for all of the basic needs of a cell. So these glycoproteins look like the glycoproteins that are made by humans, because they're being made by the same machinery that we would make the glycoproteins with. In bacteria, when we talked about them, it's a different machinery, so the glycoproteins and glycoconjugates look different. But in HIV, the glycoproteins look like our glycoproteins. And therefore, you could more or less consider the HIV virus particle to be coated with things that look sort of human. So that's a kind of decoy to the human system anyway with respect to recognizing a foreign entity. So often the glycans there serve as decoys to make the human body think, well, there's nothing wrong here. Now within the virus, there is RNA. And it's single-stranded, positive-strand RNA.
So what puts it into a separate category is the fact that the genetic material in the retroviruses is single-stranded RNA, and it's the positive strand. So far we've talked about two other types of virus mechanisms-- the double-stranded DNA-- remember, smallpox was the example there-- and we've also talked about the single-stranded, negative-strand RNA. And our best example there was the influenza virus, which has a segmented genome. What you'll learn about HIV is that it has a non-segmented genome, meaning that this strand of RNA is one continuous strand rather than separate pieces, as we saw in the influenza virus, which was the last example at the end of the last class. Now, let me just take a quick look. So every virion, that would be a single viral particle, will therefore contain within its structure the RNA. And it will also contain two copies of a particular enzyme that are absolutely critical. When the virus infects a cell, there's no time for it to make anything. It needs a particular enzyme to get going with copying that initial strand of RNA, to make a double-stranded DNA copy of it, and then make a messenger RNA. So the enzyme that's important-- and there are two copies of it-- is known as reverse transcriptase. So basically, the virus structure is fairly simple. There's a capsid inside with the RNA plus the reverse transcriptase that I'm just showing as a filled circle here. And then surrounding the virion is a membrane bilayer with GP proteins stuck into that membrane bilayer that are going to be important when one starts to think about the infectivity by the virus. All right. There's a second glycoprotein that's also important, which is known as GP41. And these are very specifically part of this complex, associated with the GP120. All right. So now let's take a look at the details of this virus. But first of all, I want you to take a look at T-cells. You've heard a lot about T-cells.
They're a major component of the immune system. So I'm going to home you right in on what the virus, the virion, recognizes on T-cells and what makes it such a disaster for the human immune system. OK. So here are typical T-cell receptors on the surface of T-cells. Just as a recap, I want you to remember T-cells produce unique antigen-binding proteins that are put onto the T-cell surface. And they provide cell-mediated immunity by destroying antigens such as viruses. But what the HIV virus does is it really homes in on the cell-surface receptors of the T-cells. Specifically, the HIV virus recognizes the CD4 receptor, which is a glycoprotein on the surface of some T-cells that are designated as CD4-positive T-cells. So if you have circulating T-cells, that would be the first place where the HIV virus attaches to the T-cell to cause infectivity, thus debilitating the ability to mount the type of immune responses that T-cells are involved in. So it's very important now to understand that the strategy of this virus is to knock out the very thing that's there to protect us from foreign antigens, which makes it so serious. Now, HIV first started to emerge in the early '80s. It was more or less designated as a death sentence. As you may be aware, HIV circulated around the gay community in the San Francisco area very rapidly. But it then became incredibly widespread. There's a possibility that there was a jump of the virus from primates to humans; the virus might have originated in Africa. There's a lot of work done on the origin of this virus, but it sort of came out of nowhere. It was something that was unexpected, unanticipated, and hitting people just like a cannonball. And it was literally a death sentence early on. As the mechanism of the virus became understood, a number of strategies could be put in place with respect to antiviral therapeutics to target this particular virus.
And what we will see is the four types of targets that have been named as targets for therapeutic agents, why they work, how they work. And we'll see the mechanisms of those as to how they pertain to the HIV virus. And we'll talk about things like combination therapies. So early on, there was really nothing. Gradually early, therapeutic approaches started to emerge, but they were devastating treatments because the drugs were so poor against the HIV virus, people were taking massive handfuls of medications a day to try to stay ahead of the virus. As time has progressed, these therapeutic agents have been improved and improved. And now they're relatively simple therapeutic agents where you don't have to take massive doses of fairly non-specific therapeutic agents. But they're quite sort of workable. The problem is, and you'll see why, people have to take these therapeutic agents for life. Once cells are infected with HIV, there will always be a population of the virus genetic material in the human genome. So you can't just say, I feel way better, not taking these drugs anymore, because there is the chance that in some repository in some of your cells the virus genetic material is still there. And you'll see which enzymes in HIV are responsible for that. OK? So let's take a look here at the Group VI retrovirus named HIV/AIDS. In 2015, there were 37 million people infected. Nowhere near enough in treatment, about half of those in treatment. And about 1.2 million deaths. This may account also for people coinfected with TB. I mentioned to you when we started talking about infectious disease, I really encourage you to go to the NIAID website. There is tons of really interesting information, not just from a clinical perspective but from a mechanistic perspective, from a treatment perspective. There's really important things. 
So I really encourage you if you are interested in infectious disease, the National Institute of Allergies and Infectious Diseases Component of the NIH is the one responsible for all the research on infectious disease. And in fact, for many, many years during the crisis with HIV, there were special earmarked funds for any research towards the development of therapeutic agents. So what happens? How does this virus, this little particle that I've shown you here-- I should really show you this is the capsid. And I should really show you the membrane a little bit more encircling that with the capsid inside. So the cartoon that you see on the screen is more correct. So here's the virion particle. It has inside it the capsid that includes the single-stranded RNA. It only needs to code for very few proteins, remember, because the virus just exploits everything else from the human host. And GP120-- sorry for that typo, please correct that. And GP120 is displayed on the surface of the virion. And in fact, it's GP120 that interacts with the CD4 receptor on the surface of a target host cell, a T-cell. Now there is a second receptor that's quite important. It was not discovered until quite a bit later on. It's designated as the coreceptor. And the interaction of the virion with the host cell is only good when there's both the CD4 receptor plus the coreceptor. And this was found through careful biology that realized there was another critical component for the infectious disease. And that is designated as the CCR5 receptor or CXCR4 receptor. So both receptors on the surface, the virus has evolved or selected out, to recognize cells that have both of those types of receptors on the surface which ends up making them very targeted towards the T-cells. Once that association occurs, then becomes the process of fusion. So remember, I told you that large things can get into cells by merging a membrane with the membrane on the surface of the cell. 
So what is happening as those two receptors are engaged is the virus membrane is coming close to the host cell membrane and starting to fuse through the action of GP120 along with the CD4 receptor and coreceptor. And I'm going to show you a movie that shows that part of the fusion, because it's really cool to see it in action. Once that fusion occurs, the virus membrane actually just merges with the remainder of the host cell membrane. And you drop that capsid into the cell, which then becomes unpackaged to release the single-stranded RNA that's the contents of that. And what's important to know-- whoops, going backwards-- is that the minute the single-stranded RNA gets into the cell, that needs to be processed. But it needs to be processed with an enzyme that is completely unique to the virus. And that's the reverse transcriptase. Without the reverse transcriptase being there, it's just RNA that's just going to get chewed up in the cell. So what reverse transcriptase does is reverse transcribe RNA into DNA. So that gets you on the track to the infectious process. In the absence of RT, you don't get there. You're not able to make a DNA copy to then make a copy of the RNA and so on. So that is why the reverse transcriptase has to be delivered along with the virus. It can't be just a virus with the genetic material. It's got to have this particular enzyme with it. So reverse transcriptase is responsible for converting RNA into DNA. And as you all know, that is not the usual direction with respect to the central dogma. We're always talking about DNA to RNA. But you've also heard throughout the course how handy this reverse transcriptase is. When we talk about arrays and loads of biotechnology, using reverse transcriptase is super valuable. Let's say, for example, we don't want to sequence a genome, we want to look at the transcriptome, the part of the genome that's really important for being translated into proteins.
What we do is we collect the transcriptome but reverse transcribe it into DNA, which is a much more stable, tractable material for sequencing and so on and so forth. So RT has become kind of a gift to biotechnology and biological research. And it was inspired from the RT in the HIV virus. So that's very important. So the reverse transcriptase, then, makes a DNA copy of the RNA. And then a second complementary copy of that DNA is made to make double-stranded DNA, which represents the viral genome, but in the format of a double-stranded DNA. That DNA can be, then, taken-- here's the fun part. That DNA is actually, then, taken into the nucleus and it's zippered into the host genome by an enzyme known as integrase. So it's not that you have this virus junk and then you send it back out again with the mature particles. You actually put this in the permanent copy of your DNA in the nucleus. And that is why people don't get cured from HIV/AIDS. They have to take treatment for the rest of their life, because that DNA is permanently in the copy of your genome that gets replicated every time your cells divide and so on. So you can understand the seriousness of that event. So that gets taken into the genome. But also, when it's time to make new virions, the DNA is transcribed into messenger RNA. The messenger RNA leaves the nucleus, gets translated into proteins. And those proteins form the basis of the new virion particles. Now, I mentioned somewhere here that it is a non-segmented genome. So initially, the messenger RNA is a single segment that then gets translated into a single segment of protein, which is known as a polyprotein. So there are no stops between the various gene-encoding frames within the messenger RNA. So that gets translated by the host machinery straight into a polyprotein that's all glued together in a single long piece.
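The reverse-transcription operation itself, copying an RNA template into complementary DNA, is easy to mimic at the level of base pairing. A toy sketch (a real RT needs a primer and additional machinery; this only shows the complementarity):

```python
# Watson-Crick pairing for an RNA template being copied into DNA:
# A pairs with T, U with A, G with C, C with G.
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna_5to3):
    """Return the cDNA strand written 5'->3' (antiparallel to the template)."""
    complement = "".join(RNA_TO_DNA[base] for base in rna_5to3)
    return complement[::-1]  # reverse so the new strand reads 5'->3'

print(reverse_transcribe("AUGC"))  # -> GCAT
```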
But then there is another important HIV enzyme, the protease, that chops those portions of the polyprotein up into useful and usable portions. So you also need the protease readily available to rapidly start dicing up the polyprotein into usable protein segments for the new virus assembly. So let's take a look here at what happens. The messenger leaves the nucleus. You make a polyprotein that gets digested into protein segments. And then everything accumulates at the inside surface of the membrane and starts budding off the virion. When it's first budded off, it's not really ready to go. It's not intact. It has to develop a little bit further to become a mature virion moving forward to go infect another cell. So the initial stage, that budded stage, of the virion is not completely competent to go infect. And in fact, there have been some thoughts about whether you can ensure that the immature virion doesn't develop into the mature virion, because that would be one way of targeting the system. OK. So there are a few different pieces-- the reverse transcriptase, the integrase, the protease, and then, finally, the fusion-- these are all critical processes that have been the targets of therapeutic agents. They are molecular targets, such as enzymes. And we know how to address enzymes with inhibitors. And one last feature of the virus life cycle that has been focused on with respect to therapeutics is the viral fusion. So there have also been efforts at a molecular level to try to inhibit this process where your virus particle docks down on the cell and somehow the membranes fuse and then the virus dumps its contents into the cell. OK. This is just a quick, more beautiful view. But in a minute, I'm going to show you a video. But I thought it was kind of nice because it really allows you to look at the different steps in the virus life cycle-- recognition, fusion, delivery, making the double-stranded DNA, integrating the DNA into the genome, and making new viral proteins.
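Returning to the protease step for a moment: chopping one long polyprotein into usable pieces can be pictured as cutting a string at recognition motifs. HIV protease is notable for cleaving on the amino side of proline at several junctions, so the two-letter "FP" (Phe-Pro) motif below is a simplified, illustrative stand-in for the real recognition sequences:

```python
# Toy model of polyprotein processing: cut the chain between the two
# residues of each cleavage motif. The "FP" motif is a hypothetical
# simplification of HIV protease's actual recognition sites.
def cleave(polyprotein, motif=("F", "P")):
    """Split a one-letter-code protein string at each F|P junction."""
    fragments = []
    start = 0
    for i in range(len(polyprotein) - 1):
        if (polyprotein[i], polyprotein[i + 1]) == motif:
            fragments.append(polyprotein[start : i + 1])  # cut after the F
            start = i + 1
    fragments.append(polyprotein[start:])
    return fragments

print(cleave("MAFPGAGFPK"))  # two cut sites yield three fragments
```

A protease inhibitor, in this picture, simply stops these cuts from happening, leaving the polyprotein uselessly intact.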
So those steps, they're pretty logical. It's just like step one, got to find my target. Step two, I've got to stick to my target. Step three, I've got to deliver my genomic content into my target and so on. So I don't want you to think this is a bunch of steps to memorize. They all sort of make sense in a logical progression of events to make the virus be able to go all the way through to budding off new, immature, and then mature virions. And so as I mentioned over there, these are the four targets that I'm going to describe to you with respect to therapeutic development. And this is where they all work-- the fusion early on, the reverse transcriptase taking that single-stranded RNA to a DNA copy, the integrase which zips the genomic material into the genome, and the protease that cleaves up the long polyprotein to make mature pieces of protein that are part of the virus. OK. So let's talk first of all about viral fusion. And as I mentioned, when I get to showing you-- it's a really beautiful video from the Howard Hughes Institute that really shows the fusion very well. But I want to describe the fusion to you at a molecular level. And then later, you'll see it a little bit more real life. So what I said to you is that GP120 and GP41, which are these proteins that are in the surface of the envelope then recognize the coreceptor and CD4 and form a complex. So it's a complex that involves what's on the virus. It's actually in a trimeric structure with the two receptors that are on the host cell. And they form a pretty close union. And then what happens is there are a series of events that tug the two membranes together, mechanically linked to conformational changes of the protein, and basically splosh the two membranes together for a fusion event. So it's through the conformational dynamics of that large complex happening in sequence after the first interactions that get you to a fusion event.
So these initially look like this, but then they start changing their shape, and making a fusion event. So you could look at it as, initially, you have the complex formed, but then, later on, there's been a twisting of the structures, the large macromolecular structures to fuse the membranes. So for quite a few years-- and I don't think it was the most successful of the approaches with respect to therapeutics-- people thought that maybe you could inhibit the interaction between GP120 and those receptors with small molecules that might inhibit the progress through the fusion process. So that was deemed a viable target. It's not the same as an enzyme target, but it's definitely a critical point in viral infections. And what was made were short peptides that might stabilize this complex and avoid the structure moving forward to the fusion-related complex. So that was one sort of series of events. So those peptides were designated C-peptides. They would bind to this portion up here that's pointed to with the N. They'd stick and they'd basically jam the cogs. They'd stop that event occurring by binding quite tightly to part of the fusion complex. OK? The drugs, the therapeutic agents, in that case were peptidic, kind of large. They don't diffuse into cells, but that's not a problem because this target is outside the cell. So there's no need for that to get into cells for its effect. All of the other therapeutic agents I'll describe to you are molecules that need to get into cells in order to do their job. But this particular strategy didn't involve that. Which was why it was fairly attractive early on, because it was a very different target from the others. So the next step I want to describe to you is inhibition of reverse transcriptase. So this is an RNA-dependent DNA polymerase. OK? So different from the other DNA polymerases. We have DNA-dependent DNA polymerases, but we don't have the RNA-dependent ones.
And so the types of strategies that were used initially were to use types of analogs of the nucleosides that will be used in polymerization. So the reverse transcriptase, the first types of targets-- what's going on down there-- the first types of targets were nucleoside analogs because they can inhibit the polymerization. And you're going to realize you know quite a bit about these when we first look at the structure. So I'm going to describe to you what are known as nucleoside analogs. What does this term mean? Nucleoside means it's a nucleobase plus a sugar. There are no phosphates. Because, remember, there's an s there. And it's an analog meaning it's not one of the native ones, but it-- that looks good. AUDIENCE: I'm looking for [INAUDIBLE] BARBARA IMPERIALI: OK. I'll grab her. And you get the bag. [LAUGHTER] OK. So it's a nucleoside analog meaning it isn't the standard nucleosides that you put into your DNA. And the most critical first line actually became azidothymidine. So look, that's the thymine nucleobase, with a sugar, but it's different from the regular ribose that's either in RNA or DNA. And it's 2-prime deoxy, so it looks more like a DNA building block. But check it out. On the 3-prime site, there is an azide. What do you think this thing does? Or have I already told you? Yeah. What do you think this thing does to stop the virus marching forward and turning its RNA into a DNA copy? Can it polymerize? No. So what do you think? So how does it work? AUDIENCE: It blocks [INAUDIBLE] BARBARA IMPERIALI: Yeah. Yeah. It just grinds to a halt. What technology does this remind you of? Yeah? AUDIENCE: The sequencing with the [INAUDIBLE].. BARBARA IMPERIALI: So it's really cool. I mean, we're using something like this for sequencing blocking the chain growth to get pieces so we can tell the sequence, but this is a very viable drug for reverse transcriptase for kind of similar reasons, because it stops translation. It stops polymerization.
Now, there's one important thing that I need to tell you about this AZT, as it's termed, azidothymidine. AZT is what is known in the industry as a prodrug. It's not the actual drug. Why is it a prodrug? What has to happen to it to be useful? Why is it handy to have a prodrug? I know there's a lot of questions there, but they all kind of tie together. So it's a prodrug. It can get into cells, right? What has to happen to it for it to be useful in inhibiting reverse transcriptase? Because it won't do it all on its own. Look at that structure. There's something missing in that structure. What are the building blocks in the polymerases? Are they nucleosides? No. Up there. AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: Yes, exactly. So you have to convert this neutral molecule that can sneak into a cell by kinases that put phosphate groups onto that molecule. So that molecule can get engaged in polymerization-- you can put one nucleotide onto a growing strand, but then it can't go any further because of that azido group at 3 prime, which would be the normal place you grow. So prodrug strategies are very useful. It's a way of having drugs that aren't quite drugs yet but they become drugs once they are at their location. It's not handy to use the triphosphate of AZT as a drug, because it can't get in the cell. So we deliver a cellularly available molecule. We let the promiscuous enzymes in the cell convert it to a triphosphate. And then it becomes a real drug, a mature drug. So I think that's a very important thing about these types of therapeutic agents. And what I want you to know is that many, many viruses are treated with nucleoside analogs performing similar sorts of things. But in some cases, there's a lot of different tailoring that goes on with the nucleoside analog. It doesn't always work to have the azide at C3 prime. Some of these features don't always work. So these have to be tailored in different ways to target different viruses.
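A minimal sketch of the chain-termination logic behind AZT: a polymerase can only extend from a free 3'-OH, and an azido group at the 3' position removes that handle. Everything here is a toy model, not real polymerase chemistry:

```python
# Toy chain-termination model. Each incoming building block either carries a
# free 3'-OH (extendable, like a normal dNTP) or does not (like AZT, where an
# azide replaces the 3'-OH). Once a terminator is incorporated, growth stops.

def polymerize(blocks):
    """Incorporate blocks in order; halt after a block with no free 3'-OH."""
    strand = []
    for base, has_3prime_oh in blocks:
        strand.append(base)
        if not has_3prime_oh:   # AZT-like analog: nowhere left to grow from
            break
    return "".join(strand)

# "T*" marks the AZT-like thymidine analog carrying the 3'-azide.
incoming = [("A", True), ("C", True), ("T*", False), ("G", True), ("A", True)]
print(polymerize(incoming))   # the chain grinds to a halt at the analog
```

This is the same blocking trick exploited by chain-terminator sequencing, as noted in the lecture.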
Because in this case, AZT hits reverse transcriptase. But there may be a different flavor of polymerase enzyme that you need to inhibit in other organisms. So it's a lot, lot, lot of options. Yeah. AUDIENCE: Does it only affect RT? Or could there be side effects on the cell, like our own polymerases using it by accident? BARBARA IMPERIALI: There can be, but I think the most important thing is that RT is very promiscuous. So it's much more likely to pop this building block in. And it doesn't have the mechanisms to reverse that reaction, all of the clean up that gets done with normal DNA polymerase. So our DNA polymerase is going to be a lot more tuned up to avoid these mistakes. But the viral reverse transcriptase just churns through making it. So we're protected by our own proofreading and other types of mechanisms to avoid it really messing up replication. But that's a very good point. OK. So remember the central dogma. And this is why this process confounds that. And that's azidothymidine. And we've already discussed that. So the next enzyme that became quite an important target, because it was a new target, was integrase. So I will tell you, though, that reverse transcriptase inhibitors like AZT were used early on, but people with the virus rapidly developed resistance to that therapeutic agent. It was one enzyme that was just mutating like crazy. And the organism could slip through that net. So the development of therapeutic agents didn't stop with AZT. It was, like, we need to hit other targets for us to have a therapeutically viable combination of therapies that can really stop this virus effectively. So a next enzyme that was targeted is integrase. So this is the one that's been targeted most recently. And there are currently several integrase inhibitors.
Integrase is a fascinating enzyme because it really is the enzyme that actually stitches that piece of viral DNA right into the human genome just so it's able to be replicated, transcribed, and translated at any time. So integrase has been targeted. It was a tricky target to hit. But as you can see with several of these drugs, they are successful drugs. One thing I want to remind you about, a lot of the names of the agents, if you see A-V-I-R within the name of a therapeutic agent, it means it's an antiviral. So that's a clue that all of these compounds are given names-- raltegravir, blah. I don't know who comes up with these names, but nevertheless, they're all names that end in A-V-I-R. So if you see a medicine and it ends in A-V-I-R, it's an antiviral agent. Now, the last of the really fascinating enzymes that was targeted is the HIV protease, because the HIV protease is very, very different from many proteases in mammalian biology. It's a simple dimeric structure. So it has quaternary structure. It has two monomers that are 99 amino acids each. And each monomer contributes an aspartic acid to an active site. And those are involved in the mechanism. And what generally happens when substrates or inhibitors bind is these two flaps-- one of them I've shown you in magenta-- close down and close up the active site. And ritonavir was one of the first generation protease inhibitors that sat in the place where the substrate would normally sit in the HIV protease. And remember that the HIV protease is critical for maturation of the virus because it chops up the polyprotein into the necessary enzymes. They can't be functional enzymes in the polyprotein. They have to be cut up into the appropriate pieces. So the HIV protease had unusual specificity. And as soon as it was discovered and structurally characterized, every major pharmaceutical company jumped on-- this was in the late '80s-- to make successful protease inhibitors that just got better and better.
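Side note: the A-V-I-R naming rule mentioned above lends itself to a one-line check. The drug names below are real, but the helper is just an illustration of the rule of thumb, not any official nomenclature tool:

```python
# The lecture's rule of thumb: therapeutic agents whose names end in
# A-V-I-R are antivirals (raltegravir, ritonavir, ...).

def looks_like_antiviral(name):
    return name.lower().endswith("avir")

for name in ["raltegravir", "ritonavir", "ibuprofen"]:
    print(name, looks_like_antiviral(name))
```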
Ritonavir is an example of first generation, but I think now we're on third generation-type inhibitors, which are just better and better and much more selective and specific for the HIV protease. And because they're much more potent, instead of needing grams and grams of an inhibitor, you can get away with milligrams of inhibitor, which made the therapeutic dosing for HIV much more palatable-- palatable is sort of a good word. Because when you have a very promiscuous protease inhibitor, like the early ones that were discovered, they mess up your entire GI because your GI runs on proteases that digest your food. So the original, the early protease inhibitors had a lot of side effects. The most modern third generation are very viable. And this just shows you the other topic I want to hit on next. Here's the green and cyan two monomer units of HIV protease. In yellow is an inhibitor that is bound. And just on one monomer of the two monomers, I've highlighted, in red, different resistance sites. So when any of those sites mutates, the protease stops being inhibited by the HIV protease inhibitors. Because if you think about it, one mutation in one subunit, actually, will have a corresponding mutation in the partner subunit that is in the dimer. So each mutation ends up being two mutations in the full enzyme that has quaternary structure. So these are major. They go all the way out to the outer parts of the enzyme. It's not just around the active site. It's just all over the enzyme you can get mutations that stop the typical drugs that are targeted at the protease from binding. So here I just show you these are data that show for the protease and for the reverse transcriptase. Within the length of the protease, for example, there's mutations all the way along. I don't know why this is 101. The numbering system is a little different in some variants. But you can see which mutations will add to drug resistance. And there's a similar picture here for reverse transcriptase. OK?
So this is a very serious issue. So now I'm going to give you a little break here. [VIDEO PLAYBACK] - So this is HIV. It's a typical retrovirus, meaning that it has an outer envelope. And in the center, it has two copies of RNA, as well as an enzyme here in blue that's reverse transcriptase, which will ultimately turn that RNA into DNA. The virus itself, with this outer envelope protein, actually directly infects T-helper cells. The way that it does this is that as it comes up to the cell surface, it uses receptors that are on T-helper cells and exclusive to T-helper cells, which is the CD4 molecule, which really defines T-helper cells. It's a surface receptor that binds to the envelope protein. That causes a conformational change and allows a second receptor to grab hold of the envelope. This is the chemokine coreceptor. It's also called CCR5. And we'll talk about that more. What happens now is that the stalk of the envelope protein pierces through from the virus into the host cell and starts to draw the cell membrane and the viral membrane together. And what ultimately happens is fusion of those two membranes and the viral genetic material is injected, essentially, into the cell. And the envelope protein is left at the cell surface. - So that's the fusion process that's so hard to explain. - The virus has a matrix and a capsid protein, shown here in green and red, that essentially are digested when it enters into the cell. That releases the viral enzymes and the viral RNA. And here we have reverse transcriptase, which takes the viral RNA, and using host nucleotides converts that viral RNA into a single strand of DNA. While it does that, it makes some random errors, which is characteristic of reverse transcriptase. It has very poor proofreading activity. That single-stranded DNA, now, is again reverse transcribed into a double-stranded DNA.
At that point, another enzyme that has come in with the virus in the beginning, called integrase, essentially grabs hold of that double-stranded DNA and carries it through a nuclear pore into the nucleus of the cell. Within the nucleus of the cell, it finds the host chromosome. And basically, the integrase enzyme makes a nick in the host's DNA and allows for HIV to insert itself into the host chromosome. And that right there is what establishes lifelong infection. Now, RNA polymerase comes along and makes its messenger RNA. - That's the host polymerase; it's just back to normal now. - Those messenger RNAs encode for different viral proteins. They end up associating with ribosomes at the surface of the rough endoplasmic reticulum. And here is a piece of mRNA that's making envelope protein which is directly produced into the endoplasmic reticulum. And it's shuttled, then, through the endoplasmic reticulum and taken to the cell surface, where, at the cell surface, it becomes embedded in the cellular membrane. And at this point, coalescing with other envelope proteins that have been produced, you have this cluster of envelope proteins now on the surface of this infected cell. At the same time, there are other messenger RNAs that are being produced that allow for translation of other viral proteins. So here are additional viral proteins being made which are going to be used to make up the key components that the virus, ultimately, is going to need. These are transported again to the cell surface, to the area where these envelope proteins are. And a strand of RNA, as well as some of the enzymes are part of that complex. This then buds off at the cell surface at this point, but it's still not a mature virion because the polyprotein chain needs to still be digested into its component parts. That's done by an enzyme called protease.
Protease breaks up those polyprotein chains and, ultimately, allows for them to coalesce and form the mature structures that make up the final virion. And now you have a mature infectious virion that can go on now to infect other cells. Once that happens, now, the cell can produce tons of viruses. And this is really what, then, keeps the whole process going. [END PLAYBACK] BARBARA IMPERIALI: OK. All right. So I have gone monstrously long on HIV. So sadly, I cannot get you to Ebola. But I'll post those slides anyway. This question just speaks to combination therapies. So combination therapies are very, very important, particularly in cases where resistance to individual therapeutic agents against an individual target can happen. So HIV mutates its proteins very, very rapidly. But the likelihood that it will simultaneously mutate two targets in one cell is much lower. So combination therapies basically are there to avoid drug resistance that occurs, usually, because of the high mutation rate, not because one drug has a short half life, not so that multiple strains can be targeted, or to decrease the number of side effects. Obviously, combination therapies give you more side effects, but they are really important in mitigating the disease. So I was going to talk to you about vaccine development, but I will discuss this with Professor Martin to see if he wants to take that over. But I want to leave you with one last thing. OK. Three small points. On next Wednesday, this is Fred Flintstone. I tend to channel Fred Flintstone because he wears this nice blue scarf. Next Wednesday is the last of the classes. We've got some good stuff lined up for you. We're going to have the topoisomerase demo from some people who submitted a topo demo for us. And then we're going to do 7.016 Jeopardy. Now, don't think this is going to be cheesy. It's going to be tough. And it's going to be fun. And we're all going to be involved. Professor Martin and I will be the hosts. 
He's promised to dress up. Ha ha ha. Jackie will help me run the program, because it's a real Jeopardy program. And Hannah and David will keep scores. And we'd like to encourage you who come to be in teams of two. Otherwise, there will be kind of a lot of you. So when you come in, if you sit on that side of the room, put your two names on these boards. And if you're on the other side, put them on these boards because Hannah and David will be posted there to keep score. And we will muster up some small prize for the winner here. So, yes, question. AUDIENCE: Will this count as the final? BARBARA IMPERIALI: No. That would be a heck of an incentive, wouldn't it? What this will do for you is actually kind of help you refresh a lot of things, maybe identify blind spots. It was worth asking. It was a good question, yeah. So I'll post the questions afterwards as well. So you'll be able to kind of say, gee, I don't know what this question was about at all. And you can go and review a little of the material. Nice try, though. Oh, and there was one other thing. I just want to remind you that a vote is a vote. This is a democratic society. So you should please take time to fill in the evaluations on the course. OK.
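Returning to the combination-therapy point from a moment ago: if resistance to each drug requires an independent mutation, the chance of one virus escaping every drug at once is the product of the per-target chances. The probabilities below are invented purely to show the arithmetic:

```python
# Toy arithmetic behind combination therapy: assuming resistance mutations to
# different targets arise independently, escaping all drugs at once requires
# every mutation simultaneously, so the probabilities multiply.
# The per-target probability is made up for illustration.

def escape_probability(per_target_probs):
    p = 1.0
    for prob in per_target_probs:
        p *= prob
    return p

single = escape_probability([1e-4])              # one drug: 1 in 10,000
triple = escape_probability([1e-4, 1e-4, 1e-4])  # three drugs: ~1 in 10^12
print(single, triple)
```

This is why hitting several targets at once (fusion, RT, integrase, protease) mitigates resistance even though the virus mutates each individual protein rapidly.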
MIT 7.016 Introductory Biology, Fall 2018
Lecture 31: Immunology 2 -- Memory T cells, Autoimmunity
PROFESSOR: All right. So today, we're going to talk about immunity again. And so this movie up on the screen here-- this is a cell. You can see the outline of the cells kind of around here. That's the outline of the cell. But what you can see is that there is something in the cell moving around, and that is an intracellular bacterium called Listeria. And you can see it's rocketing around in this cell. It's having a total party in this cell, and what you'll see here is you can often see the bacteria push out from the cell. So if you look here, one is going to push right now. See? There it goes. And it kind of runs into the edge of the cell and pushes out, and this enables the bacteria to spread from cell to cell without actually going into the extracellular space surrounding the cells, OK? So let's take a hypothetical situation. So Listeria is a foodborne pathogen. It causes a nasty sort of intestinal disease. So Brett, do you want these bacteria having a party in your cells? AUDIENCE: Unlikely. PROFESSOR: Hell no. OK, Malik, do you want these bacteria having a party in your cells? Hell no. Carmen, do you want these bacteria having a party in your cells? AUDIENCE: Hell no. PROFESSOR: Hell no! Yes. OK, so our body has to have some way to sort of address this type of an illness, and the problem is if you're thinking about what we discussed on Wednesday, is-- all right. So you're hosting this party, right? This is your cell. So you have a host cell-- that's your cell-- and you have an intracellular pathogen, such as a bacterium or it could also be a virus, and they're essentially using your generous host cell to reproduce themselves and spread to other cells of the body. And so you don't want that, but the problem is that-- I told you about B cells, so remember B cells-- they have an antigen receptor. It's initially on their plasma membrane. It can also be secreted, and it's secreted into the extracellular space.
The problem is that these pathogens are inside the cell, and there's a plasma membrane separating them from the antigen receptors that you need to recognize them, OK? So this presents an issue. It's also the case for T cells, because as you heard on Wednesday, T cells only have this membrane-bound form of the receptor, and the antigen recognition domains of all of these are extracellular, so there is really-- with just this system, there's no way for your immune cells to see in the cells. So today, I want to talk about how is it that the immune cells are able-- how our immune cells are able to see within the cell in order to address an infection like this one, with listeria. OK? And the first part of the answer is that it involves a process known as antigen presentation. And antigen presentation is the process by which peptides, so short sequences of amino acids, are presented and displayed on the surface of the cell for the immune system-- for immune cells to see. So here, peptides are displayed on the cell surface for immune cells to see them. And in this specific case, it's going to be for the T cells to observe what's going on inside the cell, OK? So this mechanism involves another molecule, which I briefly introduced. It's called the major histocompatibility complex, which is abbreviated MHC. So when I referred to MHC in Wednesday's lecture, I was referring to this major histocompatibility complex. And there are two classes of MHCs. Thankfully, the first one is known as class 1, so class 1 MHC, and class 1 MHC looks like this. Like many of the immune receptors that I've talked about, it has a heavy chain, which is this long polypeptide light blue, and it has a light chain in purple. So the MHC is composed of these two separate polypeptides. They're encoded by different genes, and then it assembles into this structure shown here. So this molecule has two Ig domains, and these are proximal to the plasma membrane. 
And this thing is all inserted in the plasma membrane. It's an integral membrane protein. And then distal to the plasma membrane is this structure here, and if you look at the crystal structure, it's kind of like a sheet-- a beta sheet with two alpha helices. And altogether what it does is it basically creates, like, a little cup, OK? So it's creating, like, a cup. And what sits in this cup is a peptide, so you get peptides, and the peptides sort of sit in that hand, if you will. And some of the amino acids from that peptide are sticking out and they're sort of displayed away from the MHC molecule. So this is basically a hand that holds peptides and displays them on the outside of the cell, right? So the outside of the cell here is up. This would be the exoplasm out here, and it's displaying these peptides for immune cells like T cells to observe. All right. So class 1 MHC is a class that's expressed on all nucleated cells in your body. So all of your nucleated cells are synthesizing this class 1 MHC, and then it's sort of being displayed on the surface. And the peptides that are held by this class 1 MHC-- the peptides here are being derived from a specific place in the cell, which is the cytoplasm. So the peptides are from the cytoplasm, so this is the source of the peptides, and I'll tell you how these peptides are sort of loaded on to this MHC molecule. So the MHC molecule is a membrane protein, so it's translated on the endoplasmic reticulum, and its extracellular domain is initially present in the lumen of the ER. And the peptides are from proteins that are present in the cytoplasm, and what happens to these proteins-- and this occurs for unfolded proteins, but also for proteins that might be ubiquitinated-- is that they're processed by the proteasome, which has this kind of shredder-like function for proteins, and it cuts up the proteins into little snippets, or peptides, that can then be pumped into the lumen of the ER through this transporter, TAP.
So these peptides can be taken and transported into the lumen of the ER, and that's where they're loaded onto the class 1 MHC molecule. But the source of the peptides is from proteins that are in the cytoplasm. They're processed by the proteasome. So then, once you have a peptide-MHC complex, it can then be trafficked through the normal vesicle trafficking pathway all the way out to the plasma membrane of the cell where now that peptide will be displayed for T cells to observe. And so the peptides here, they're processed by the proteasome-- processed or cut by the proteasome-- and then the type of T cell that's going to look at these class 1 molecules-- they are known as CD8-positive T cells. So this is the first class of MHC molecule. Because there is a class 1, that means there also must be a class 2, which there is. And so class 2 MHCs are fundamentally different in all of these properties. The function shared by these MHC molecules is they both display peptides on the surface of the cell. So MHC molecules do display peptides on the surface, which is known as antigen presentation. But other than that, MHC class 2 is pretty different. You see the structure of MHC class 2 is very similar to that of class 1, but you see that rather than having a heavy and a light chain, here there are two chains that are roughly of equal size. And so these are encoded by different genes than the class 1 molecule, and they encode different proteins. But the overall structure is very similar, so there are two Ig domains, they're proximal to the plasma membrane, and then there's this structure at the very end of the MHC molecule, which has this groove in it which can hold a peptide that would be displayed on the surface of the cell. And there, you see the groove and you see the peptide that is present in it. All right, so one big difference between class 1 and class 2 is that class 2 is expressed on a much more restricted set of cells.
So class 2 MHCs are expressed specifically on specialized cells known as antigen-presenting cells, and these antigen-presenting cells include cells like B cells, which are the ones that I'll focus on, but also phagocytic cells that can phagocytose foreign substances, and there's another cell type called the dendritic cell, which is also an antigen-presenting cell. I'm going to focus on the B cells. So class 1 is expressed everywhere. Class 2 is really expressed on these professional antigen-presenting cells. And the way that the peptides-- the source of the peptides and the way they're generated is also very different. So peptides for class 2 come from the extracellular space, and they are processed by lysosomal proteases. And so I'll show you how that looks in cartoon form. So for MHC class 2, the peptides are from the extracellular space. And so we've talked about ways that cells can take in material. One way is through endocytosis, right? So if this is my antigen, the antigen could be endocytosed by the cell, and now it's in a vesicle that's present in the cell. And so if you endocytose this protein, then it's now in a vesicle, and one compartment that it can go to is the lysosome, where there are these lysosomal proteases that can then chop up this protein into little snippets, or peptides. And so MHC class 2, again, is translated at the endoplasmic reticulum, like all plasma membrane proteins. But in the endoplasmic reticulum, you see the peptide groove is blocked such that peptides derived from the cytoplasm can't interact with class 2, but then it is trafficked to a unique compartment which can combine with the compartment that has the peptides that originated from outside the cell. And then those can get loaded onto this class 2 MHC molecule, and then this can be recognized by T cells. But in this case, it is a-- oh, I endocytosed my chalk. I need to get it back. Here.
So in this case, it's not a CD8 T cell that's recognizing it, but a CD4 positive T cell. OK, so let me briefly review what I just went through, and review the differences between class 1 and class 2. So class 1 MHC is expressed on all nucleated cells, whereas class 2 is much more restricted, being expressed specifically on antigen-presenting cells. The T cells that recognize these two classes are different. Class 1's recognized by CD8 positive T cells. Class 2 is recognized by CD4 positive T cells. And the source of the antigen is different in these two cases. The source of the antigen for class 1 is the cytoplasm. For class 2, it's the extracellular space. So the different MHCs are sampling different sort of pools of proteins. And where the peptide is loaded is distinct between these two, which allows these distinct classes to basically discriminate between the sources of the peptides that they're loading. So for class 1, that's in the ER. For class 2, it arises from a vesicle compartment that results from endocytosis of an antigen from outside the cell. All right. Now, the type of molecule that recognizes this MHC peptide complex is the T cell receptor, which I briefly outlined on Wednesday, but now we're going to talk about it in much more detail. So the T cell receptor, or TCR-- and I talked about its structure which is shown up on the slide, but I'll just draw more simply here. If this is the plasma membrane, this is the cytoplasm, and this is the exoplasm facing down, then this T cell receptor has two chains. One is called the alpha chain, and the second is called the beta chain. And each is comprised of two Ig domains, which you see up there. So the T cell receptor here is in pink. You can see an Ig domain there on one strand-- Ig domain there. And you have another two Ig domains on the other subunit, and this receptor, the T cell receptor, recognizes antigens through its variable domain, which is here. 
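Before going further with the T cell receptor, the class 1 versus class 2 contrasts reviewed above can be collected into one small lookup table. This is a study aid assembled from the lecture, not something from the slides:

```python
# Class 1 vs class 2 MHC, as summarized in the lecture.
MHC = {
    "class 1": {
        "expressed on": "all nucleated cells",
        "recognized by": "CD8+ T cells",
        "peptide source": "cytoplasm (cut by the proteasome)",
        "peptide loaded in": "the ER (via the TAP transporter)",
    },
    "class 2": {
        "expressed on": "antigen-presenting cells (B cells, phagocytes, dendritic cells)",
        "recognized by": "CD4+ T cells",
        "peptide source": "extracellular space (cut by lysosomal proteases)",
        "peptide loaded in": "an endocytic compartment",
    },
}

for cls, props in MHC.items():
    print(cls, "->", props["recognized by"], "| peptides from", props["peptide source"])
```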
And it's binding basically to the end of this receptor, so this is a sort of ribbon diagram of a structure for a T cell receptor. The plasma membrane would be up here. This is the end of the T cell receptor. And MHC is in green, and it's holding a peptide here in yellow. And you can see how the TCR is sort of interacting or docking to this MHC-peptide complex. So for the T cell receptor to interact and bind to MHC, you have to have a T cell receptor that recognizes the specific conformation of the peptide that is being sort of extended away from the cell. So let's say this is my T cell receptor, and I'm going around and searching for cells that might want to look at this. Then if I had a T cell receptor that was like this, it's not going to be able to stick to this MHC-peptide complex. However, if I had a T cell receptor that had the right conformation, because there are different types of T cell receptors, it might be able to dock on and stick to the peptide, and then the T cell is now stuck to the peptide-MHC complex, OK? So there are different T cell receptors. There's a diversity of T cell receptors, and they're able to discriminate between different peptides loaded onto MHC. OK, so now, we have to think about where this diversity of T cell receptors comes from. There's a diversity of TCRs, and lucky for you, the mechanism that generates the diversity of TCRs is the same that generates diversity for antibodies. Now, Georgia asked a really good and really important question at the end of lecture on Wednesday, which is-- she asked if this sort of rearrangement of gene segments in the variable domain of the antibody was due to splicing or recombination at the genomic locus. And the answer is that it's recombination at the genomic locus, and that's a very important point. So here's a diagram for the beta chain of the TCR. 
You can see that like the B cell receptor, there's a gene rearrangement in the genomic DNA that brings V, D, and J segments together to make the variable chain of the T cell receptor. So like the B cell receptor, there is a gene rearrangement, also known as VDJ recombination, and this is not splicing of the transcript. This is in the genomic DNA-- a very important point, because by having this happen in the genomic DNA, it creates an irreversible change in that genomic DNA such that all subsequent cells that are derived from that original B or T cell are going to express the identical B or T cell receptor. So it's not splicing, but it's a real sort of irreversible change to the genomic DNA. So you have a diversity of T cell receptors, but the T cell receptor is not the only thing that enables the T cell to interact with whatever cell is presenting the antigen. There are these other co-receptors which are important. So there are co-receptors on the T cell-- this is on the T cell-- and the co-receptors are CD4 and CD8, and they're expressed on different subsets of T cells. And these co-receptors matter because it's not sufficient for just the T cell receptor to interact with a specific peptide; the co-receptor also has to bind in order to get an immune response. So the logic is that if the T cell receptor and the co-receptor both bind to the MHC, then you get a particular type of response, so you need both. And CD4 cells recognize class 2 MHC. CD8 recognizes class 1 MHC. So you have these two different subsets of T cells and they recognize these distinct MHC complexes. So my question for you is what should these CD8 positive T cells do? To help with that, you might want to look at where the peptides are coming from that are presented on the class 1 MHCs, which are going to be presenting specifically to CD8. So what should these do? What does it mean if you have a class 1 MHC molecule containing a peptide that looks foreign? Well, where do the peptides come from? 
What's that, Patricia? Patricia is right. They're coming from the cytosol. So if you have foreign elements coming from the cytosol, what might that mean for that cell? Good, bad, irrelevant? What's that? AUDIENCE: [INAUDIBLE] PROFESSOR: What's that? OK, Brett's saying it needs to be dealt with, and I totally agree. Here's one scenario-- would be the scenario I showed you in the beginning of class where you have some sort of intracellular parasite that is basically using the host cell for its own evil purposes to produce more viruses or more bacteria. So if the immune cell has some sort of indication that this is going wrong, another example is in cancer, because if you have oncogenic mutations in certain genes, then those could be recognized as foreign. And so an appropriate response might be to do something to that cell that would limit the expansion of the tumor. Or in the case of an intracellular parasite, you really need to terminate the cell so that you stem the tide of viruses that are going to be produced by that cell. So the response should be to kill. So it was CD8 positive. If you have a CD8 positive T cell, it indicates there's something wrong inside that cell, and the response should be to kill it. And these CD8 positive T cells are known as killer or cytotoxic T cells. So what happens if a CD8 positive T cell recognizes a MHC class 1 peptide complex, then it releases materials from inside it that perforate that cell and lead it to undergo cell death. So it's a way of limiting an infection by killing the cells that the virus or pathogen is using to reproduce itself. OK, what about CD4 positive? What should be the response of a CD4 positive T cell? Should it also kill? Should be like the T-1000? No one gets my cultural references. Yeah. Should it be the Terminator 2? No. Yes or no? Who thinks it should terminate? OK. Steven, can you tell us your logic? Why should it not terminate? AUDIENCE: Because it's a [INAUDIBLE] B cell from the same [INAUDIBLE]. 
PROFESSOR: What are the MHC class 2 cells? AUDIENCE: Like, a B cell or [INAUDIBLE]. PROFESSOR: Yeah. It's not only a B cell, it's a B cell that recognizes the foreign agent that you're infected with. Yeah, Brett? AUDIENCE: So those B cells are antigen presenting cells. They have the information about what is bad or what is wrong in probably other cells? So like, oh, hey, we have this information. You should go and mobilize. PROFESSOR: They're binding something that it recognizes as foreign, internalizing it, and then presenting bits of that foreign element on the outside of itself. AUDIENCE: Shoots the messenger. PROFESSOR: What's that? It's shooting the messenger, exactly. Yeah. So it would be an extremely bad idea for the CD4 positive T cell to kill what's presenting the antigen, because you would kill the exact cell that you would need to fight that antigen, right? Here you have a B cell. It would be a B cell that's producing antibodies that might be able to neutralize that foreign invader, and so you don't want to kill it. You want to help it or enhance the B cell function. And so these CD4 positive cells are known as helper T cells, and they enhance B cell function in a number of different ways. Oh, I should point out where this happens. So this sort of interaction between B and T cells happens in the lymph node, because in the lymph node, you have antigen-presenting cells, or even soluble antigens, coming into these lymph nodes. And you also have B and T cells, and this is kind of like the B and T cell hangout to get sort of, like, interactions between these two distinct immune cell types. And when you get sort of a B cell that presents an antigen that's recognized by a T cell, then the T cell enhances B cell function, and it does so in a number of different ways. The first way is that it induces a response in the B cell known as affinity maturation. 
And this affinity maturation results from a hypermutation of the variable domain of the antibody such that you get even more diversity, and such that a B cell can be selected that even has a tighter binding to the antigen. So for affinity maturation, this is responsible for the transition from a weaker binding to a tighter binding, which I talked about as being a difference between the primary infection and the secondary sort of immune response, OK? So the antibodies get better because of this B and T cell interaction and this affinity maturation process. One other thing that happens is that the B cells can produce different classes or isotypes of antibodies, and this is known as isotype switching. And so this is, again, the genomic locus for the heavy chain of an immunoglobulin. You see, here's the VDJ segment, so it's undergone VDJ recombination, and then what you see are these different blue regions here. Each of these is an exon that encodes a different isotype for the antibody. So the first one is mu, and so that produces IgM when that's the one that's proximal to VDJ. So if you have IgM, that's the initial state of the antibody, and that's initially membrane bound and serves as the B cell receptor. But each of these different constant domains, even though they're not undergoing variation, they have different effector functions and can do different things for the body. So for example, if you had isotype switching and you had a recombination event that brought this gamma 2 segment together with VDJ, that would produce the isotype which is known as IgG, and IgG is a highly secreted form of the antibody that is highly effective for bacterial infections because it's secreted in the blood, and it's able to neutralize bacteria and limit the infection that way. But there are other possibilities, because you have all of these different possibilities. And so you could get VDJ together with this alpha, and that would produce an isotype known as IgA. 
And IgA promotes mucosal immunity because it's able to pass through the epithelial linings. In addition, IgE is another type of antibody, and the constant domains are constant for each of isotypes, but they recruit different effector functions. So IgG would be hitting bacteria by promoting phagocytosis of those bacteria. IgE, in contrast, is especially good at dealing with worms, right? So if you have an intracellular-- or not intracellular, but like, an intestinal worm or something like that, then IgE-- its effector functions are better at dealing with that. So this process of isotype switching sort of allows the immune system to adapt to tackle a particular type of pathogen. All right. The last way in which T cells enhance this function is by promoting the differentiation of B cells into different types of B cells. One of those types of B cells is known as a memory B cell, and the memory B cell is a B cell that can last in the body for decades, even if the antigen is not present. So this mediates sort of the memory of the immune system. And so just to summarize what I just told you, if you have a B cell and it recognizes an antigen, which could be a protein, it would internalize that protein via endocytosis and then process it so that it can display peptides from that antigen on its surface. And if that's recognized by a T cell, then that leads to an interaction between the T and B cell that will lead to these different things happening, such as affinity maturation, isotype switching, so the red here would be a different constant chain on this same variable chain. So the variable chain doesn't change with the isotype switching, so it's still always able to recognize that antigen-- it's just recruiting different effector functions. And you can also get differentiation of B cells into plasma cells, which really secrete a ton of antibody, and therefore help the body fight infection. 
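The selection logic behind affinity maturation can be sketched as a short simulation. This is a toy model, not the actual biology or any measured data: binding is reduced to a single number, hypermutation is modeled as random noise, and the round, clone, and noise parameters are all arbitrary.

```python
import random

def affinity_maturation(affinity, rounds=10, clones=50, noise=0.3, seed=0):
    """Toy sketch of affinity maturation: each round, hypermutate the
    variable domain across many B cell clones (modeled as random changes
    to a binding score) and select the tightest binder. All numbers are
    illustrative, not measured affinities."""
    rng = random.Random(seed)
    for _ in range(rounds):
        mutants = [affinity + rng.gauss(0, noise) for _ in range(clones)]
        affinity = max(mutants + [affinity])  # selection keeps the best binder
    return affinity

# Binding only ratchets upward, mimicking the improvement from the
# primary to the secondary response described in the lecture:
print(affinity_maturation(1.0) > 1.0)
```

Because selection always keeps the current best binder, the score never decreases; repeated mutate-and-select rounds drive it up, which is the whole point of the germinal center reaction.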
Now this is important because for a vaccine to be effective, you need to engage this T cell response such that you have all of these things happening. So all of these things need to happen for an effective vaccine. So for an effective vaccine, you can't just activate the humoral side of the immune system. You have to activate both the humoral and the cell-mediated sides such that they interact in order to enhance the immune response. All right. Now I'm going to move on and talk about a big problem that the immune system has, which is that it needs to somehow be able to discriminate between self and foreign, right? And so if you have your immune system recognizing an antigen that is natively part of your body, that results in an autoimmune disease. So there's a balance in the immune system between tolerating antigens or attacking them, and if it's attacking a native antigen, then it's autoimmune. And this is a huge problem because, if you think about it, we've talked about the B cell receptor, the antibody, and the T cell receptor, right? Our bodies are generating tens of millions of these receptors that are diverse and can recognize different molecules. So our body is generating tens of millions of antigen receptors, and it does this constitutively, so that means that it's just doing it automatically. You don't even need to be infected for this to happen. This is just part of the development of B and T cells. OK, so it's constitutive, doesn't require infection-- constitutive. In addition, it's totally random. Your body could generate any sort of combination of Vs, Ds, and Js, and these could mutate in such a way that it's likely that at some point during your lifetime you're going to generate a receptor that recognizes a native protein in your body. So it's totally random-- at least what the sort of rearrangement of VDJ gives. That process is constitutive and random. 
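To get a feel for that "tens of millions of receptors" figure, here is a back-of-the-envelope count of combinatorial V(D)J diversity. The segment counts below are approximate textbook values for the human T cell receptor loci, so treat the exact numbers as illustrative rather than definitive.

```python
# Rough count of how V(D)J recombination generates receptor diversity.
# Segment counts are approximate textbook figures for the human TCR loci.

def vdj_combinations(v, d, j):
    """Distinct combinations from choosing one V, one D, and one J segment."""
    return v * d * j

beta = vdj_combinations(v=52, d=2, j=13)    # beta chain: V, D, and J segments
alpha = vdj_combinations(v=70, d=1, j=61)   # alpha chain: V and J only (no D)

# Each T cell pairs one alpha chain with one beta chain, so combinatorial
# diversity multiplies, and junctional mutations add far more on top.
print(beta, alpha, beta * alpha)
```

Segment choice alone already gives millions of receptors; imprecise joining at the segment junctions pushes the real repertoire orders of magnitude higher.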
So I just want to point out several diseases that are caused by autoimmunity, and I've distinguished them based on whether the disease involves the generation of antibodies that recognize self or T cells that recognize self. So for antibodies, there's a disease, myasthenia gravis, in which individuals generate an antibody against a receptor for a neurotransmitter, acetylcholine. And acetylcholine is the neurotransmitter which is predominantly involved in sending signals from a motor neuron to a muscle, and therefore antibodies that inhibit this receptor result in muscle weakness. Now self antibodies can also result in diabetes, and individuals can develop antibodies that recognize and inhibit the insulin receptor, and this leads to insulin resistance and diabetes mellitus. Some examples of T cell mediated diseases are-- if you recall back in the beginning of the month, when we talked about electrical signaling in neurons, I told you about the myelin sheath and how this increases the speed of the action potential along that axon. And if T cells attack the myelin sheath, then it disrupts this process of electrical signaling, and that results in a devastating disease, which is multiple sclerosis. Autoimmune disease involving T cells can also cause diabetes: if T cells attack and destroy the islet cells of the pancreas, this also disrupts the body's ability to produce insulin, and that results in type 1 diabetes. So I'm sure many of you know people with these types of diseases, and they're obviously of significant impact both in this country and around the world. So the problem for the cells in our body and the immune system is that the immune system has to have some sort of way to distinguish between self and foreign. So how is it that the immune system does this? And also, it has to have different responses to self-recognition versus foreign recognition. So what should the immune system's response be if there is a self recognition? 
What should it do to a cell that recognizes a native protein? Rachel? AUDIENCE: [INAUDIBLE] PROFESSOR: It could delete that cell. What Rachel said is you should get rid of it. And so one way to think about this process is there's a bit of a Darwinian natural selection going on in the body, and if there is a self recognition, then there should be a negative selection against that cell, so there should be negative selection. This cell should be more unfit, whereas if it's obviously foreign, then there should be positive selection. This B cell should be more fit. And what Rachel suggested is to get rid of the cell, which is a great idea, because if you kill off the cell then you won't generate any more cells that have that recognition against self. So negative selection is mediated by apoptosis and cell death. And positive selection could be both the activation of the cell and also its proliferation. As you see up on the slide there, that orange cell-- if it was recognizing a foreign antigen, would get activated and it would undergo a monoclonal expansion. All the cells resulting from that expansion would express the same antibody and therefore recognize that antigen, so this would result in cell division or expansion of that population of cells. So now we know what to do with self versus foreign, but how is it that we distinguish between self and foreign? So how does the immune system distinguish self from foreign? And there are several mechanisms to do this. The first is that the organs-- the lymphoid organs-- where these B and T cells mature and undergo these genomic rearrangements are largely protected from foreign agents. So there are basically only self antigens in the generative lymphoid organs. These are the lymphoid organs where B and T cells are being generated. So the generative lymphoid organs would be the bone marrow for B cells and the thymus for T cells. 
Therefore, if a B or T cell-- if its receptor engages with something very tightly during its development, that's a signal for the immune system to delete and kill off that cell. So if you get self recognition here, you get apoptosis and deletion of that cell. The second way that the body is able to distinguish is that it responds to antigens specifically when there is an innate immune response, or it responds better when there is an innate immune response. So you can think of it like a coincidence detector, right? If you have an immune cell and it recognizes an antigen, and there's also an innate immune response, that's a strong indication that this is foreign. So this would indicate "foreign" to the immune system. If there is antigen only and the body is not mounting an innate immune response, it's much less likely that this will generate a robust immune response, and this is the immune system's signal that this is a self antigen. This is also important for vaccine development because in most vaccines, in addition to having some antigen that's a part of the infectious agent, there's also something called an adjuvant, which is basically something that activates the innate immune system. So the adjuvant activates the innate immune response, and that's important because if you just had the vaccine with just the antigen, there wouldn't be nearly as robust a response. So you need to activate not just the adaptive immune system but also the innate immune system to really get a robust response. So I want to end by talking about this year's Nobel Prize work, and it involves another mechanism that basically prevents autoimmunity and downregulates the activity of these T cells, and that involves another type of-- we've only talked about activating receptors on the T cell, right? The T cell receptor, CD4, CD8-- they're activating receptors for the T cell, but there are also inhibitory receptors that are on the surface of T cells. 
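The coincidence-detector logic above can be written out as a tiny decision function. The two-signal simplification and the return labels are illustrative; real tolerance involves several mechanisms layered on top of this.

```python
def adaptive_response(antigen_match, innate_active):
    """Sketch of the two-signal 'coincidence detector': a robust adaptive
    response requires BOTH an antigen recognition event and an ongoing
    innate immune response. Labels are illustrative, not formal terms."""
    if antigen_match and innate_active:
        return "robust response (treated as foreign)"
    if antigen_match:
        return "tolerance (treated as self)"
    return "no response"

# A vaccine supplies both signals: the antigen, plus an adjuvant that
# activates the innate immune system.
print(adaptive_response(True, True))
print(adaptive_response(True, False))
```

This also makes the role of the adjuvant concrete: antigen alone is the "self" pattern, so a vaccine without an adjuvant fails the coincidence test.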
So inhibitory-- we'll just call them receptors. One is called CTLA4 and another is called PD1. Their names are not terribly important, but what they do is they keep the immune system in check. And we've talked a lot about signaling and how signaling gets activated, and often a step in signaling is that once the signal has been sent, there is, like, a negative feedback that then turns off the signal such that there's signal termination. So you often have some type of signal termination. That way, you don't have just a constitutive activation of the signal, which in this case would be sort of inflammation and an immune response, and one or both of these is involved in sort of keeping the immune system in check and stopping it after you get that initial reaction. Now, the reason this is so important and why James Allison and Tasuku Honjo won the Nobel Prize is they had the idea to use this as a therapy for cancer. And it turns out that some cancer cells can express the ligand for these inhibitory receptors such that they can keep the immune system from recognizing the tumor. So this would be one case where the tumor cell is expressing the ligand for PD1, and that inhibits the function of this T cell so that it doesn't kill the tumor cell, and that leads to the expansion of the tumor so the tumor can expand in an unchecked way. And what James Allison and Tasuku Honjo determined is that if you block that inhibitory receptor, then you sort of uncheck the response of the immune system such that these immune cells are now able to recognize the tumor cells and kill them. So by sort of blocking the inhibitor, you now have T cells-- these are CD8 positive T cells that are killer cells-- they will now recognize these tumor cells and kill them off. 
So there's what's known as an inhibitor blockade because you're blocking the inhibitor, and these blocking agents are antibodies that recognize the inhibitory receptors, and they're now being used to treat some forms of advanced cancer. And so this is something that the cancer field and immunology fields are both really excited about. What might be one complication with this type of treatment? If you get rid of the inhibitory receptors, what might be a consequence? Yeah, Steven? AUDIENCE: Then you could recognize other self cells that inhibited [INAUDIBLE]. PROFESSOR: Yeah, you get autoimmunity. That's exactly right. So one of the downsides of this is that-- one of the side effects is that you can have patients with an autoimmune reaction. So it's not the magic bullet, but it's a step in the right direction. All right, we'll see you next week.
MIT 7.016 Introductory Biology, Fall 2018
Lecture 5: Carbohydrates and Glycoproteins
[SOUND EFFECTS] PROFESSOR: OK, I just want to just highlight your attention to things. Like every day, I get that MIT news feed. I get a couple of other news feeds. And I just thought this was really sort of a striking image and I think a great way to convey science and engineering is through sort of really eye catching imagery. This is a synapse, which is where nerves contact to other nerves or to the neuromuscular junction in order to trigger activity. And this is a piece of research out of the Cima-- whoops, go back one-- whoops, go back one more-- between the Cima and Langer labs, where they've designed little tiny microchip probes. They're about 10 microns big. They can be planted in the brain in different sites, hopefully non-invasively. And they can report on concentrations of this neurotransmitter, dopamine. And the reason you might want to be able to do that is that there is a dopamine deficit in a lot of neurologic disorders. So you would like to understand what the deficits are and pinpoint points of the brain where there may be issues. And in fact, they used it to track Parkinson's disease. Because Parkinson's disease, some of the therapeutic approaches involve deep brain stimulation. But you can't really tell if it's working unless you can measure something. And the thing that you could measure would be the absolute levels of this neurotransmitter, dopamine, which, by the way, is originated from the amino acid tyrosine. You could sort of spot some of the parts of that. That would be the carbon. Becomes a c-- there's a carboxylate and an amine. Oh, I've drawn the alcohol. This should be an NH2. Sorry, I just added this at last minute. So dopamine actually though originates from tyrosine. So I just thought you would be interested because when I talk about the highlighting things that are in the news, I mean things like this where we're all kind of interested. It's cool. It's relevant to what we do. 
And it really combines the efforts of scientists and engineers to make tools and methods and invent methods to make measurements that are quantitative enough to guide the analysis of a disorder. So I just want to wrap up a little bit on the aspects. I was showing you the energy diagrams in the last class and telling you how enzymes affect the course of a reaction by lowering the energy of activation, by stabilizing a transition state or a high energy intermediate state. So the energy of activation becomes smaller in the catalyzed reaction, which is therefore faster than the uncatalyzed reaction. And I did not give you this number last time. Enzymes accelerate reactions by about 10 to the 6 up to 10 to the 10-fold, a million to ten billion times. So these are dramatic increases in rates that we really depend on physiologically. They ensure specificity. And they're essential in all systems. And the way we discuss energetic changes in reaction diagrams is by looking at what's known as the delta G. It's the change in free energy. What I show you here is an exergonic reaction, where there is a negative delta G. I didn't reinforce that point enough last time. The energy of the products is lower than the energy of the substrates. So energy is given out at the end of the transformation. That means the reaction is favorable with respect to an equilibrium, a thermodynamic parameter. And then it's the enzyme that takes care of the kinetic aspects of the reaction. So just to sort of show you the correlate, an endergonic reaction would look like this, where the delta G is positive. The products are less stable than the starting materials, which would mean the reaction is not favorable. But it will still proceed in the presence of a catalyst. And in this next small bit where we talk about pathways and different aspects of metabolism, I'm going to tell you how we get around the unfavorable equilibrium problem. Because that's obviously a big predicament in biochemistry. 
If reactions aren't favorable, why do they go far enough to be useful to us in metabolism? So that reaction would have a positive delta G. And then I also mentioned these two terms. Anabolic refers to the endergonic reactions. I don't know why you're doing this to me-- the endergonic reactions. And catabolic refers to the exergonic reactions. OK, then this last point-- I showed you this slide last time. But I think it was important to think about, why are enzymes so big? And I want to give one example of an additional genetic mutation that causes a human disorder, where I at least show you how the mutations are spread quite a distance from the reaction center to show you that all of that structure that you see in an enzyme as it interacts with a small substrate is critical for catalysis. So we see small substrates in each case. But we have this very large enzyme, which is many, many times its size, all engaged in catalysis. But what's the proof of that? So why are enzymes so big? So phenylketonuria is a human disorder. It's one of those disorders that neonates, brand newborns are tested for. They are checked for the genetic signal that shows that the protein will have mutations in its structure. And there are many mutations associated with a defect in this particular protein. And the protein that I'm talking about is phenylalanine hydroxylase. So the disorder is related to defects in phenylalanine hydroxylase. Now what does that enzyme do? It takes phenylalanine and installs a hydroxyl group opposite to where it's attached to the amino acid. So this is the hydrophobic amino acid phenylalanine. And this is another one of the hydrophobic amino acids in a similar family that's tyrosine. And in fact, it's the precursor to dopamine physiologically. Now it turns out there can always be too much of a good thing. So if you have too much phenylalanine, it has to be converted to a different amino acid. 
Because the build up of phenylalanine gets to a certain stage where it is itself converted to a toxic byproduct that actually causes severe mental disorders and seizures. So the body needs to monitor the levels of phenylalanine. And at a certain stage, phenylalanine hydroxylase will convert it to tyrosine. So even though phenylalanine is essential, too much phenylalanine is a bad thing. So that enzyme is the one that is associated with defects-- with mutations. The lower the activity of phenylalanine hydroxylase, the more phenylalanine you end up accumulating. And so the PAH regulates the clearance from the body, converting it to tyrosine. So why-- you know, I told you I would give you some insight into how entire enzymes are in fact critical for catalysis. So what I'm showing you in this little movie is that the sites shown in magenta are all around this protein; the active site would be where this big ball is. It's actually the cofactor, iron. But the sites that are involved in the reduction of activity of phenylalanine hydroxylase are way out on the protein, out on its perimeter. So this protein is about 49 angstroms, 4.9 nanometers across. But the sites that cause a reduction in activity are a long way away, 10 angstroms or more, 15 angstroms, 16. So it turns out that enzymes as catalysts don't just use the local environment right near where chemistry happens. The entire protein collaborates to make the changes happen in catalysis. So enzymes are big, because you're not just using this inner shell of functional groups where the substrate binds. You're actually using a lot of the dynamics of the enzyme to promote catalysis. And just this sort of visual shows you how far away things can be where they suppress the activity of a protein and make it a poorer catalyst. OK. So as I said, this is one of the disorders, the genetic mutations that is tested for at birth. 
If you have one of these sets of mutations, you immediately have to be put on a diet that's low in phenylalanine, so you don't bombard your body with too much phenylalanine, not allowing the phenylalanine to build up too much. But I don't know if any of you have lately grabbed a can of Coca-Cola that's got aspartame as a sweetener. It turns out that sweetener, NutraSweet rather, is a dipeptide. See? But it contains phenylalanine. So if you drink a ton of diet drinks that include NutraSweet, you are actually bombarding yourself with high levels of phenylalanine that the body can't deal with. So people who have phenylketonuria, i.e. a defect in that enzyme, shouldn't be drinking or using NutraSweet type sweeteners. Because they actually give you too much phenylalanine at once, more than your body can clear. So I want to just tell you about these genetic disorders. There's a fairly decent set that are tested for at birth. But the ones that are tested for are ones where you can make dietary modifications or lifestyle modifications and mitigate the symptoms. And so that's a very important thing to know. You're not tested for things you don't know how to fix, because that wouldn't be appropriate. All right, any questions about that? Yeah. STUDENT: [INAUDIBLE] PROFESSOR: Yes. STUDENT: [INAUDIBLE] PROFESSOR: Oh, you know, that was me being clunky with-- of course you guys notice everything. This morning, I was making-- all right, let's go forward. One, two, and that and that. I know exactly what you're talking about. This little curve down, yeah that's me shaking when I'm doing the ChemDraw drawing. It should be flat. But thank you for noticing that. Just so there's no ambiguity, there should not be-- there could be, actually, a little dip when substrates bind to enzymes. But I didn't mean to imply it. OK, anything else? OK, so what I want to talk about now is the equilibrium problem. And what is that? 
So if we have reactions that are endergonic-- so we have one of these situations where we have a substrate going to a product that's higher energy. What that means is, at equilibrium between substrate and product, which is defined by the delta G, you have substrate going to product. But mostly, you've got a lot of substrate there; the amount of each of these is defined by this energy difference. OK. So how do we survive with these kinds of transformations when we really want the flux for an enzyme to be in the forward direction? Because if we need the product, we need to move things forward. So nature deals with this by coupling reactions to other reactions. So it finds ways around the equilibrium problem by, for example, putting reactions in a series. For example, let's go from A to B to C. So these are three intermediates, linked by enzyme 1 and enzyme 2. OK. And let's just say this has an unfavorable delta G. So we don't make-- we have a lot of A. We have very little B. And then what happens? How are we going to-- and then let's just say, for example, this reaction is favorable. So let's see. The delta G here is positive. The delta G here is negative. So by plus ve and minus ve, I mean positive and negative. So this is a favorable reaction, whereas this is an unfavorable reaction. How does putting two reactions adjacent to each other happening on the same substrate and moving through help that situation? Yeah. STUDENT: [INAUDIBLE] PROFESSOR: Perfect. So the answer here is that, as you take whatever B you have and turn it into C, A has to-- enzyme 1 has to make more B. So you solve the equilibrium problem in that way to a certain extent by just opening the tap at the other end of the series. And nature organizes a ton of reactions in sequential pathways to get around these kinds of problems. So this is done by coupling reactions. And so there are some reactions that are highly favorable, highly exothermic, where you can really sort of guarantee flux through that enzyme. 
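The equilibrium argument above can be made quantitative with the standard relation K = exp(-delta G / RT). The delta G values below are invented for illustration, though the -30 kJ/mol figure is roughly the scale of ATP hydrolysis.

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 310.0   # roughly body temperature, K

def k_eq(delta_g_kj_per_mol):
    """Equilibrium constant from the free energy change: K = exp(-dG/RT)."""
    return math.exp(-delta_g_kj_per_mol * 1000.0 / (R * T))

dG1 = 15.0    # A -> B, endergonic (positive delta G): mostly A at equilibrium
dG2 = -30.0   # B -> C, exergonic (on the scale of an ATP-driven step)

print(k_eq(dG1))        # much less than 1: very little B on its own
print(k_eq(dG1 + dG2))  # overall A -> C: K much greater than 1, favorable
```

Because delta G values add along a pathway, the equilibrium constants multiply, which is exactly why coupling an unfavorable step to a strongly favorable one pulls the overall flux forward.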
A lot of these enzymes are ATP dependent. They use and burn ATP. So you get a lot of energy out. And they drive the flux through enzyme one, with enzyme two catalyzing an exergonic reaction while enzyme one catalyzes an endergonic one. So flux is guaranteed. And a lot of pathways are set up in this way, with this coupling of the chemistries. What else? So what we find is that enzymes, first of all, work in pathways where they are co-located in certain places. They may be, for example, co-localized to certain organelles in a human cell. They may be co-localized in the mitochondria or in some other organelle. And we'll talk more about organelles and mitochondria later. They could be co-localized at a membrane. And so you ensure that the enzymes are together by putting them in the same place. And then nature also evolves ways where the enzymes physically interact with each other, either covalently or non-covalently. So they could be associated in the pathway by some kind of non-covalent interaction. Or you may actually link enzymes to make them a single species. So if this is enzyme one and this is enzyme two, here you have an enzyme one-enzyme two single long polypeptide chain that has two domains. They can't get away from each other, because they're joined covalently. And they catalyze sequential reactions. This doesn't just happen with three enzymes. It can happen with 10 enzymes or more. So there are very clever ways to ensure the flux occurs through pathways. OK. But don't forget, all along, that each of these enzymes is responsible for a single transformation. That's important. There's another phenomenon for which having flux through pathways is very useful, and that is dealing with toxic intermediates. That makes it very advantageous, again, to couple reactions. So let's say product B is toxic. We don't want it hanging around much. We don't want it released from an enzyme to go do damage somewhere in the cell.
So having flux through a pathway basically ensures that B never gets out of the game. It just goes straight to enzyme two. So that's another physical advantage of those processes being linked together. And finally, the process also provides a very nice opportunity for regulation. So here on this slide, I show you this sort of mega mess of metabolic pathways in physiology. In there would be the Krebs cycle, but all the metabolic pathways are interlinked. And steps in many of these pathways will be co-localized as clusters. So that tells you how we can solve the equilibrium problem by linking enzymes-- the flux through pathways. And one of the best examples is in aerobic glycolysis, where the early steps are not energetically favorable. You use ATP to start to break down glucose. Then you get to a certain intermediate whose conversion to a smaller molecule generates ATP through very favorable reactions. So glycolysis is a really great example of this. Now the other thing that I just want to describe to you is the issue of feedback. And I've described this here on this slide. And so we'll talk about it on this slide. So suppose we are working with a pathway that goes through multiple steps to make an end product-- five steps, three steps, or whatever. And you've made too much of that. You've already made a lot of that product. And you don't need any more. Nature also has in place a regulatory mechanism to feed back and stop flux through the pathway. You don't just stop all the enzymes. You find a way to stop the first enzyme. So this is a very important paradigm in biochemistry. And that's called negative feedback. So I'm showing it to you here just in a very simple cartoon form in the isoleucine biosynthesis pathway. Isoleucine is one of the hydrophobic amino acids. This is an intermediate that is on its way from this amino acid that is polar but not charged, threonine. So threonine gets converted to isoleucine.
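The negative-feedback scheme being introduced here (the end product throttling the first enzyme in its own pathway) can be sketched as a toy simulation. Everything numeric below, the rate constants and the inhibition constant, is invented for illustration; the point is just that as product accumulates, synthesis slows, so the level settles at a steady state instead of growing without bound:

```python
def simulate(steps, k_make=1.0, k_use=0.1, k_i=5.0):
    """Toy negative-feedback loop: the end product P inhibits the first enzyme.
    The synthesis rate falls as P rises (a simple allosteric-style damping
    term), and P is also consumed at a constant fractional rate."""
    p = 0.0
    history = []
    for _ in range(steps):
        synthesis = k_make / (1.0 + p / k_i)  # more product -> slower first step
        p += synthesis - k_use * p            # net change per time step
        history.append(p)
    return history

trace = simulate(200)
# The level rises at first, then flattens out near a steady state.
print(trace[0], trace[-1])
```

With these made-up constants the trace climbs and then levels off around 5.0, which is the point where feedback-dampened synthesis exactly balances consumption.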
We need threonine to make isoleucine. But once we've got a lot of isoleucine, it binds to the very first enzyme in the pathway and acts as an allosteric regulator that dampens activity. So in this case, I just want to point out to you that I'm showing a very common way we notate things. When we are talking about inhibiting a reaction, we draw it like this: a line capped with a perpendicular bar instead of an arrowhead, which means you've stopped the activity. So you'll see this again and again. You're going to see it a lot in signaling pathways, because things often feed back. So another thing that nature ensures, in addition to not building up toxic products and dealing with equilibria, is that you don't have a ton of enzymes working to make something you don't need anymore. So you might as well take the end product and use it to stop the first enzyme. As that end product becomes scarce, it dissociates from the first enzyme. And that turns the pathway back on again. And I think that's a really neat way of making that happen. And also, in these cases, the enzymes have to be clustered as a group. Because you wouldn't get those advantageous local concentrations if the enzymes weren't co-localized. So does that make sense? If they weren't in a really near location-- if the enzyme that produces isoleucine were in a different compartment of the cell-- that isoleucine couldn't bind back to the first enzyme in the pathway. OK, everyone good with that? Good. All right. OK, so now we are moving on to carbohydrates. It's a pretty strange word, carbohydrate. But actually, it relates to early findings that glucose is a carbohydrate, and its molecular formula is C6H12O6. So they called it a hydrate of carbon back when they knew the elemental composition but didn't know anything about the structure. There are a lot of carbohydrates that don't obey this rule. But that's where the name comes from, a hydrate of carbon.
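The "hydrate of carbon" observation is just a pattern in the molecular formula: Cn(H2O)n means the carbon count equals the oxygen count and the hydrogen count is twice the oxygen count. A couple of lines check it, including one of the sugars that breaks the rule:

```python
def fits_hydrate_of_carbon(c, h, o):
    """True if the formula C_c H_h O_o matches the Cn(H2O)n pattern,
    i.e. equal carbons and oxygens, and twice as many hydrogens."""
    return c == o and h == 2 * o

print(fits_hydrate_of_carbon(6, 12, 6))   # glucose C6H12O6 -> True
print(fits_hydrate_of_carbon(5, 10, 5))   # ribose C5H10O5 -> True
print(fits_hydrate_of_carbon(5, 10, 4))   # 2-deoxyribose C5H10O4 -> False
```

2-deoxyribose is a handy counterexample: drop one oxygen from ribose and the historical naming rule no longer holds, even though it is still very much a carbohydrate.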
All right, now carbohydrates account for-- what did I have here? 25% of the mass of macromolecules, so a good amount. Carbohydrates are very, very important in central metabolism. We use carbohydrates as a source of energy. But carbohydrates are also part of the storage of energy, in the form of polymeric structures that I'll describe to you. There are different ones in plants and in humans. One is glycogen. One is cellulose. But carbohydrates also have an increasingly important role in the extracellular matrix, in these polymers that are wrapped around your cells, and as signaling entities both inside and outside the cell. So we used to think of carbohydrates and straight away connect them with metabolism. But the story is far greater than that. And I'll try to explain to you why. And it's because of the richness of functional groups in a carbohydrate. So the simplest carbohydrate, before I go up there, is a three-carbon molecule. This would actually be called glyceraldehyde. But don't worry about that other name for it. It's a three-carbon carbohydrate. And this molecule would be called a triose. So we've got a new suffix. Because anything that is a carbohydrate ends with the suffix "ose," not to be confused with the suffix "ase," which is an enzyme. So look carefully whether it's an "a" or "o," because it's the difference between a big protein that catalyzes a reaction and a carbohydrate. So a triose would be a carbohydrate with three carbons. They have an aldehyde. Or let's just say they have a carbonyl functionality. So remember, there would be lone-pair electrons on those OHs, and similarly on these. And remember that each of these vertices corresponds to a carbon. The way you most commonly recognize carbohydrates is that they are rich in carbon-OH bonds, which are hydroxyls.
That makes them polar molecules, likely to be highly solvated in water and very different from compounds that are rich in just C-H bonds, which don't have such opportunities. So if you see a molecule and it's got a bunch of CHs but not a bunch of OHs, it's probably a lipid. If you see a compound that's rich in OHs, it's likely to be a carbohydrate. The story gets a little bit more interesting as we move up to some of the different carbohydrates, which I've shown you there on that screen. And I'm going to go forward to talk about those. Because this triose is important in primary metabolism. We break down carbohydrates that have six carbons to carbohydrates that have three carbons. And that's where these three-carbon entities crop up. But I want to focus you in on two sets of carbohydrates, the hexoses and the pentoses. So we immediately know they're carbohydrates, right? "Ose." The hexoses obviously have six carbons. And the pentoses have five carbons. And these are the most important of the carbohydrates. Yes, there are carbohydrates with four carbons. And there are ones with seven, eight, nine carbons. But these are the ones we'll totally focus on in 7.016, because these are very important in different biopolymers. And the hexoses are important components, for example, of cellulose and glycogen. But where are the pentoses? And why are they so important? Actually, I need to draw this in its straight-chain version, because otherwise I'll drive you crazy. So this sugar here is a pentose: one, two, three, four, five carbons, a bunch of OHs, an aldehyde at one end. Commonly, carbohydrates will fold up into a cyclic structure through an equilibrium. I won't worry you too much with the chemistry. But I'm just going to show you that structure. It looks like this, a five-membered ring. The interconversion of these two is an equilibrium process.
And those carbohydrates are incredibly important where? Yeah, nucleic acids. So your phosphodiester backbone is attached to sugars that are attached to purines and pyrimidines. And we'll see those structures later. But an absolutely essential feature of that polymer is the five-membered-ring carbohydrates known as ribose, which is what that guy is, and 2-deoxyribose, where one of the hydroxyls is actually a hydrogen. It's not a hydroxyl. And it's this one. So instead of being an OH, it's an H. And we number carbohydrates. And I'll reinforce this much more when we talk about nucleic acids. There's a numbering system. So 2-deoxyribose is in your DNA. And ribose itself is in your RNA. OK, so obviously, we need to worry about carbohydrates and learn a little bit about them based on that key criterion. OK, let's move now to the hexoses. I'm not going to make you keep drawing hexoses and things. I'm going to just tell you a little bit about them with respect to their structures. So I've shown you on the board the cyclic form of a pentose. This is the linear form and the cyclic form. Let me write that down. And by the way, in your DNA, it's always in the cyclic form. It's not in the linear form. And the hexoses also have a linear and a cyclic form. And I show you that equilibrium for glucose. You see six carbons. I've got them numbered. And they form favorably into the six-membered ring. This is the cyclic form of glucose. This is the linear form of glucose. When glucose associates into polymers, it becomes these things like cellulose or glycogen. And it depends a lot on the linkages in those polymers to define which one it is. Now, as I mentioned, carbohydrates aren't always just a series of OH groups. There's sometimes other functionality. So I'm just going to quickly draw those. And if you've got them on your handouts, you can play along with me here. So sometimes, there's an NH2 there.
That's called glucosamine, very creatively. Sometimes that NH2 is converted to an amide, like the bond in a peptide. That gives you N-acetylglucosamine. And sometimes-- so you can draw these on your handout, because the six-membered rings are there. This stays as an OH. But this OH here is at a different oxidation state. Don't worry about that terminology. But what's important is that it's glucuronic acid. So these variants can be negatively charged, positively charged, or neutral. So there are variations on a lot of our hexoses, basically meaning the carbohydrate molecular formula doesn't work anymore. But the term has stuck. So there's quite a variety of different carbohydrates with slight differences. And the intriguing thing is that the carbohydrates that you and I use in all of our physiologic processes are much simpler than the carbohydrates that bacteria use. There is an expansion of like 10 to the 2 or 10 to the 3 in the variety of sugars that bacteria use. And that's definitely a story for another day. So let me now move on to thinking about carbohydrates not as the monomers that you metabolize, but rather as the polymers that are involved in many other different types of processes. And when we think about the polymers, we want to think about how these polymers differ from the polymers of nucleic acids or proteins. Because this will tell you why carbohydrates are so complicated. When you take a bunch of amino acids and make a polymer, it's a linear polymer. Every unit is joined to the next by an amide bond. There's no branching there. It's just a linear polymer. So the diversity is pretty enormous, but it's not as big as it could be if it were a branched polymer with different types of side chains. We will see next week that the polymers of the nucleic acids-- here's the basic structure. There's the ribose, by the way.
The R could be an OH or an H, attached to a base, a purine or pyrimidine-- you don't need to know that yet; you'll know it next week-- and then to a phosphate. But those, again, are linear polymers. You don't have branching. You just have a single continuous chain. The crazy thing about sugars is they can branch from any of those OHs. So there's much more diversity of structure and function wrapped up in the carbohydrates, which makes them real trouble to study. OK, I want to now introduce you to another feature. The other thing is, when we join amino acids, or we join nucleic acid building blocks, nucleosides, there's no difference in the shapes that we can form. We don't have variety there. But in sugars, we can make different kinds of linkages depending on how two OH groups in a sugar are joined. So for example, if you join two glucoses in this kind of linkage, that would give you maltose. But if you join glucose to galactose in that kind of linkage, it would give you lactose. And those are different compounds. They serve different types of physiologic roles. And there are enzymes that will make these bonds and then enzymes that will break these bonds. There is a common disorder that people have as they grow older. The enzyme that breaks the lactose bond gets turned off. It doesn't work anymore. And that's the enzyme lactase. So that's why people are intolerant to sugars-- lactose is the sugar in milk. And they can't digest dairy products, because lactase doesn't work anymore. Or you can take supplements. So that's how that relates to physiology. The reaction between two sugars to make the disaccharide is actually a condensation. Because when you form that bond, you kick out water as a side product. Or when you break that bond, you consume water. So this is another one of the condensation reactions. So underline that on the slide. Because condensation means a reaction that proceeds and produces a molecule of water.
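The condensation bookkeeping can be checked with simple arithmetic: a chain built by condensation weighs the sum of its monomers minus one water per bond formed. A quick sketch, using standard average molecular weights:

```python
GLUCOSE = 180.16   # C6H12O6, g/mol
WATER = 18.02      # H2O, g/mol

def condensation_mass(monomer_masses):
    """Mass of a chain built by condensation: each new glycosidic bond
    formed between two monomers releases one molecule of water."""
    n = len(monomer_masses)
    return sum(monomer_masses) - (n - 1) * WATER

# Maltose: two glucoses joined by one glycosidic bond.
print(condensation_mass([GLUCOSE, GLUCOSE]))  # 342.3 g/mol, not 360.32

# A 10-unit glucose polymer loses nine waters.
print(condensation_mass([GLUCOSE] * 10))
```

Running hydrolysis in reverse is the same bookkeeping: breaking each bond consumes one water and gives the monomer masses back.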
And this just sums up the lactase problem, where you can't digest lactose, the milk sugar. OK, so those are monomers. Now let's think about polymers and complex structures of sugars. And I put these all on the slides, because it's just impossible for you to keep drawing them. And I'm just going to give you sort of one of my pet peeves. I draw sugars like this. A lot of other people draw sugars like that. And I don't like that. Because this is what they look like. So if you see this and you go, I haven't seen sugars looking like that before, it's because this is the way to draw them that actually represents their shape. So if this format is unusual to you, I'm not going to ask you to construct these. I just want you to be familiar with looking at them like this rather than like this. Because, to me, that's the best way to render them. OK, so polymers of sugar. I just mentioned to you, there are polymers of sugars that are important for storage. So when we have excess glucose, we store glucose as glycogen. It's often stored in the liver. And later on in the semester, we'll see how a jolt of adrenaline sets all the processes in motion to chew up glycogen, to release more glucose so you can have a lot of energy quickly. So in that polymer, the sugars are linked in a particular way. The common polymer in plants-- and, in fact, it accounts for a massive proportion of the biomass-- is a different polymer of glucose where the linkages are beta-- we would call this a beta linkage. But that's the way it looks. And that would be cellulose. And it's a linear polymer. Coming down here, glycogen is a branched polymer with different kinds of linkages and different shapes. So the polymer that we store and can break down in order to produce glucose to make energy is glycogen. We cannot break down cellulose.
We don't digest plant cellulose, because we don't have those enzymes, which is why we don't get nutritional value out of cellulose the way we can set enzymes in action to break down glycogen. So the way those bonds look is absolutely critical for how you can use the energy that's within them, by using enzymes to break those bonds. OK, so in general-- I've just said that. Glucose can be stored as cellulose or as glycogen. The process of photosynthesis converts the energy in sunlight into glucose. And then you can make the polymers of glucose. And what I'm showing you here in these polymers is a simplified view. I want to introduce to you one other term. And that term is glycan. And that basically refers to something that is more than one sugar, more than one carbohydrate. I was looking at the videos of my lectures. I realized my handwriting is horrible. So I'm really trying very hard to make it a little bit neater today. So glycan is just the name for a polymer of sugars. They can also be called polysaccharides. But glycan is now the commonly accepted term for a lot of sugars. Now, I told you about energy storage. I've told you about simple disaccharides and monosaccharides and metabolism. But what I want to do now is give you a little overview of all the different places where sugars feature in a cell. Because I think it's really important to realize that sugars form a great sort of set of molecules for communication. So I'm going to go around this sort of funny-looking square cell. And this would be a eukaryotic cell, because it's got a nucleus. And there are different compartments. So to start with, the cell sits in what's known as an extracellular matrix. It's a meshwork of sugar polymers that is important for the cells and important for trapping signals that come from cells and go to other cells. So those are all predominantly made up of carbohydrate. The next thing to look at is that there are proteins within the cell.
They can become modified with a sugar and go to the nucleus or leave the nucleus. So there is a type of signaling that is based on adding a sugar to a protein or taking it off. We'll see later in the semester how phosphorylation also does a lot of functions that look like that. There are sugars that are displayed on cell-surface polymers. And that's where signaling becomes important. Because sugars may be attached to lipids. You remember, we've talked about phospholipids being part of the membrane. Sugars can be attached to lipids that sit in the membrane and face out. And that's how cell-cell communication occurs in some instances. Or they may be attached to proteins that are also displayed on the surface of the cell. So this tells you that where we put these sugars is what's responsible for communication. Because they're on the surface of the cell with the sugars facing out. And then the final thing I want to talk about is the blood group system. So a lot of you may be aware of blood groups. There are four principal blood groups. The differences between the blood in the A, B, AB, and O blood groups are differences in just the sugars that are attached to the surface of the cell. So let me describe those. And then we'll talk about blood groups and things that are being done to enhance the supplies of red blood cells in cases of emergency. I have a nice little video I'm hoping to get to. So on the surface of the cell, you might have different sugars. This is a trisaccharide. All right, you can see one, two, three sugars. They have different linkages. But you can pick out the sugars. And they are joined by a bond called a glycosidic bond, and those join sugars. So what do you think the enzyme that cleaves the glycosidic bond is? STUDENT: Glycosidase. PROFESSOR: Glycosidase. You would say glycosidase. I would say glycosidase. But they are the same thing. And that's going to be pertinent in a minute.
So remember, if in doubt, guess and stick "ase" at the end if I ask you the name of an enzyme, because that works pretty well. So if you have the O blood group, you would have exclusively this trisaccharide on your red blood cells. If you have the A blood group, you have an extra sugar attached to that trisaccharide. And it's an N-acetylgalactosamine sugar. There's an extra carbonyl group on an amide. And if you have the B blood group, it's a sugar that has all OH groups, actually called galactose. So the differences between the O group, the A group, and the B group are just those differences in sugars. And then people who have the AB blood group have a composite. They have a mixture of the A and the B markers. So do you-- yeah, question. STUDENT: [INAUDIBLE] PROFESSOR: That's a second marker, which we won't talk about. So it's A plus another marker that's either one type or another type. But the A, B, and O are defined by the sugars. Yeah, good question. OK, so do people know their blood group? It's good to know, frankly. Because someone-- you know, you may cut yourself. And someone says, do you know your blood group? And you'll go, yes. And they'll give you an on-site transfusion. I've been watching too much Jack Bauer. But, you know, that's a story for another day as well. So it turns out that depending on blood groups, you can either be a universal donor or a universal acceptor. So the people with the O blood group can give anyone their blood: people with O, A, B, and AB. But unfortunately, the poor people with the O blood group cannot receive blood from blood group A, B, or AB. And I wanted to show this. And I hope I can get it working. Because we have just the right amount of time, which makes me really happy. Because I don't really program things that well sometimes. There was a paper-- at the latest American Chemical Society meeting-- NARRATOR: --or other disaster.
Those who are affected by crisis usually need four vital things: food, shelter, water, and blood. In particular, O-type blood, because it can be safely given to any patient. Now, scientists say they have identified enzymes from the human gut that can turn type A and B blood into type O up to 30 times more efficiently than previously studied enzymes. Type A or B blood has specific sugars on the outside of its cells. PROFESSOR: I just described-- NARRATOR: These sugars are recognized by the immune system. And if they don't match the type of blood that's already in an individual, those cells are destroyed. Because these sugars are recognized by the immune system, they're called antigens. Type AB blood has both antigens. And type O blood has none. The researchers presented these findings at the recent American Chemical Society National Meeting in Boston. Stephen Withers from the University of British Columbia has been studying enzymes that remove A or B antigens from red blood cells. If those antigens can be removed, then type A or B can be converted to type O blood. To find the enzymes more quickly, Withers and a colleague at his institution used a technique called metagenomics, which allows scientists to sample the genes of millions of microorganisms without the need for individual c-- PROFESSOR: So that tells you something about technology and engineering and understanding structures. It would be really advantageous, when you really need blood supplies, to take all the blood in the blood banks and turn everything into a universal donor. But the blood banks keep these differentiated supplies of blood. But what if you need something that you can give everyone safely? And so there are enzymes in the gut. There are bacteria in the gut that actually live off of sugars that they chop off of cells in the matrix in the GI system.
And so what the Withers group did was to screen a lot of different enzymes from bacteria in the gut and find ones that had a really good specificity for removing those sugars. Then they did protein engineering work to make them more efficient, thus taking something that was proposed a long time ago as potentially useful and now really making it useful. Because those enzymes are much more efficient. They really catalyze reactions at great speed. So these engineered enzymes are much more efficient at treating your blood cells, to make sure you get rid of all the features that are characteristic of the A, the B, and the AB antigens. Because if you don't do a good job of it, then the human immune system will recognize the little bits that sneak in and start triggering an immune response. So the efficiency of the enzymes that cleave those sugars off is absolutely paramount. And that's what I have for you today. I will see you Monday, so don't forget. The Pset's due at 3:00. For Monday, I would like for you to have a preview of the-- whoops, come on-- of the DNA reading material. I don't know if any of you do this. But I think it's really handy. So there are just some very little sections that will give you a bit of exposure to nucleic acids. It'll just make the lecture a little bit more familiar. But that's what we're up to on Monday.
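The donor/recipient rules from the blood-group discussion reduce to a subset check: a transfusion is safe when the donor's A/B antigens are all ones the recipient's immune system already tolerates. A small sketch (Rh factor is ignored, as in the lecture):

```python
# A/B antigens carried on red blood cells for each ABO group.
ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def can_donate(donor, recipient):
    """True if the donor's antigens are a subset of the recipient's,
    so none of the donated cells look foreign to the recipient."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(all(can_donate("O", r) for r in ANTIGENS))   # True: O is the universal donor
print(all(can_donate(d, "AB") for d in ANTIGENS))  # True: AB is the universal acceptor
print(can_donate("A", "O"))                        # False: O recipients reject A antigens
```

This also shows why the antigen-cleaving enzymes matter: stripping the A and B sugars moves any unit of blood into the empty-antigen set, i.e. effectively type O.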
MIT 7.016 Introductory Biology, Fall 2018. Lecture 9: Chromatin Remodeling and Splicing.
BARBARA IMPERIALI: So what we're going to do today is we're going to finish up a bit of transcription, and then I'm going to talk largely about transcription control. Because it's tremendously important that we not just know how to transcribe, but that we know how it's controlled, and how ultimately the appropriate messenger RNA is sent out for translation into proteins. So transcription control is a very important component. So let me just start with this first. So these are typical of questions. And they look like, gosh, I don't have enough information to answer this question. And it's basically about what's known as the transcription bubble, which is the portion of the double-stranded DNA where transcription is occurring. And so, transiently, the double-stranded DNA is opened up for the RNA polymerase, which remember has inbuilt helicase activity, to start transcribing the gene. So let's just say the information we give you is that transcription starts here, at this point on the double-stranded DNA. And what we want you to know is that you have all the information you need to know which of the two strands is copied. Have you had a chance to think about that? Does anyone want to give me a good answer to that? Yes, here. AUDIENCE: The bottom [INAUDIBLE]. BARBARA IMPERIALI: And why is that? AUDIENCE: Because it's [INAUDIBLE]. BARBARA IMPERIALI: Right. So that reading things 3 prime to 5 prime, and making things 5 prime to 3 prime, is all you need to know to answer this question. So you would straightaway know, OK, we're going to start somewhere here, but we're actually going to start on the lower strand, because that's the one read 3 prime to 5 prime. And we're going to make the new transcript in the appropriate direction, and know what it's going to be. So you can only read the bottom strand in this case. But watch out, because you might be reading the top strand in the appropriate direction as well.
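The reading rule just stated (read the template 3 prime to 5 prime, make the mRNA 5 prime to 3 prime) can be sketched in a few lines. The sequence below is made up for illustration:

```python
# Base pairing for transcription: the RNA transcript pairs U with A.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5):
    """Given the template strand written 3'->5' (the direction the
    polymerase reads it), return the mRNA written 5'->3'."""
    return "".join(COMPLEMENT[base] for base in template_3to5)

# Template strand, read 3'->5':  T A C G G A T
# mRNA made 5'->3':              A U G C C U A
print(transcribe("TACGGAT"))  # AUGCCUA
```

Writing the answer 5 prime to 3 prime, as recommended in the lecture, is exactly what the function returns, so there is no ambiguity about direction.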
But what we've told you already is where the start site is. So you're going to know that information. If we'd put, for example, that the start site was over here, you might have a different answer to this question. So the only singular piece of information you need is that you read 3 to 5, and you make 5 to 3. Then you can also fill in the bases to know what the transcribed sequence looks like. And I always recommend when you're filling in bases, you just write 5 prime to 3 prime. So you really are following properly which strand is being read, and the direction it's being read. And so if you were asked what the new messenger RNA sequence would be, you'd get it right. And here I just have a few questions that will help highlight the differences between the transcription process and the replication process. So can you guys just read each of these, and see whether they look like rules that apply to transcription, or they're things that are not true about transcription. So I'll just give you one second to think about that. OK, who would like to give me an answer? Someone over in this part of the room, I haven't heard as much-- yes? AUDIENCE: Makes a complete copy of the genomic-- BARBARA IMPERIALI: DNA. OK, so the correct answer is that we only transcribe about 1.5% of the genomic DNA. We're not transcribing the whole thing. We're only transcribing the bits that we need to make proteins. So the correct answer is C, because we don't make a copy of the whole of genomic DNA. But these are all right. We have a different set of nucleotides, or rather a difference in one of the nucleotide triphosphate building blocks. Remember that RNA polymerase does not require a primer. That was a complication when we looked at replication, because we had to paste in primers, made often by the RNA primase. But RNA polymerase is much cleverer than DNA polymerase, because it has the polymerase activity. It has the helicase activity.
There's no need for a topoisomerase, because we're only opening a little bit of the double strand to copy. I'll show you a movie in a second. And it also has a 3 prime exonuclease, which means that RNA polymerase is able to do its own proofreading, just like the DNA polymerase. So the error rate is similar to the error rate with the exonuclease activity for DNA, so that's about 1 in 10 to the 5 to 10 to the 6. Now, as I'll talk about in a minute, there's not the same cadre of repair enzymes that we have for DNA. And that's a curious thing, until you start thinking about what the reality is. So let me try to lead you to my thinking. So with genomic DNA, that's the copy of the DNA that's in the nucleus that has to stay good. So if there are any mistakes, we need them cleaned up. Because otherwise, when we replicate the whole of the double-stranded DNA, there will be an error in the progeny, the daughter cells. For transcribing RNA, an error rate of about 1 in 10 to the 5 to 1 in 10 to the 6 is just fine. That piece of RNA, after a few processing steps that I'll describe to you, ends up leaving the nucleus to be made into proteins. And it's not such a bad thing if you have a little bit more error in that RNA. Why is that? Over here, or up there actually. Yeah? AUDIENCE: It has a shorter life, so it's not going to mess up everything for the rest of its life. BARBARA IMPERIALI: Right, so if you have a bit of an error, maybe you make a new protein, but it's not full length, or the protein you make isn't perfect; it's OK. Because there'll be other transcripts that are correct. And then after you're done making the protein, you're just going to destroy that RNA, because it has a transient lifetime, because of the structure of the RNA and the nucleases that chew up the RNA. So this is an acceptable error rate for RNA polymerase. Remember, it's not an acceptable error rate to have in your genome. It's just too big. Any questions about that?
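The quoted error rate of roughly 1 in 10 to the 5 to 10 to the 6 can be put in perspective with a quick calculation. The transcript length used below is an arbitrary, illustrative number, and the model assumes independent per-base errors:

```python
def error_free_fraction(error_rate, length):
    """Probability that a polymer of the given length contains no errors,
    assuming an independent per-base error probability."""
    return (1.0 - error_rate) ** length

# For a ~1,500-base mRNA (an illustrative length):
print(error_free_fraction(1e-5, 1500))  # ~0.985: nearly all transcripts are perfect
print(error_free_fraction(1e-6, 1500))  # ~0.9985

# The same per-base rate applied to a ~3-billion-base genome:
print(error_free_fraction(1e-5, 3_000_000_000))  # effectively zero
```

This is the arithmetic behind the lecture's point: a rate that is perfectly tolerable for short-lived transcripts would guarantee errors in every copy of the genome, which is why DNA, and not RNA, gets the extensive repair machinery.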
Does that all make sense? OK. All right, so now we want to talk about transcription control. But before I do that, I do want to very quickly show you something, because I feel like it really caps off the transcription part. And I'm going to show you only about a minute of this, because once again, I love the sound effects here. And I can't turn down the volume. This is one of those animations showing transcription. And it basically shows, on the double-stranded DNA, a lot of things accumulating to make a decision with respect to starting. But now, the whole complex, the RNA polymerase, is just screeching through the DNA, making that messenger RNA. And you're only unraveling just a little bit of DNA around that transcription bubble. So you wouldn't need topoisomerase in this case. You're only copying one of the two strands. We saw how to identify which one. And thus, the new RNA strand is basically falling out of the complex. You can see [INAUDIBLE] it's moving a lot. So that gives you a feel for things. When I hear that music, I'm just sort of waiting for the whole thing to crash somewhere. But who would know, right? All right. So at the beginning of that animation a few things were highlighted, where there's the double-stranded DNA, and a collection of entities starts clustering around where the reading is going to take place. So this thin, black strand is the double-stranded DNA. So some of the entities accumulate quite close to the start site. There is a section of DNA known as a TATA box that we spoke about at the very end of the last lecture. But there are a number of transcription factors that are quite important. And then, if you go back and look at that animation, there are sections of DNA that are at quite a distance that also help really regulate and promote transcription. So many factors regulate where the transcription occurs. First of all, is there a promoter site right near where we need to start? 
That causes a fair amount of collecting of complex components. But then there may be other things at a distance. So the promoter region might be located quite near the transcription start site, but then there are also enhancers that can be located at quite a big distance away from the start site that also play a role. Why do we need this much control? Because we don't need to be making the RNA for every protein all at the same time. Certain times in cell cycle you'll see, I only need to make one protein or a different protein. So we need all of this control to decide when transcription occurs, when do we need to make the messenger RNA to make our favorite new protein that needs to be made. And you'll learn a lot in signal transduction and cell cycle, where we really show you how a lot of the housekeeping genes are all fine. The proteins are all there. But at a certain stage, we need to make more of a particular protein. And that's when the transcription control comes into play. And you'll commonly hear about things called transcription factors. And those may be the proteins, for example, that regulate that transcription should start. In other cases, there may be times when transcription is turned down. So we have things that activate. So we start transcription, we make that happen, or others that repress. So they turn down transcription. So it's all about when you need to start, which is controlled by external factors acting on the double-stranded DNA to send the transcription complex, making the new entity. All right, does that make sense? We can't have a dysregulated system, or if we do, then we have problems with, for example, proliferation of cells. All right, so what I want to talk about are the key things that regulate transcription, and the key things that we do to the messenger that's been made. So what you're seeing here on this slide is just where we are in the process. We've seen replication. We're now at the transcription step. 
But there are many steps to go in eukaryotes before that transcript can leave the nucleus. All right? And I want you to remember the difference between eukaryotic and prokaryotic cells. This was just a picture we saw very early on. Prokaryotic cells, like bacteria, do not have a nucleus. They have an area called a nucleoid, but they don't have a discrete membrane-encased organelle where the processes of replication and transcription occur. In contrast, in eukaryotes, there is a discrete area of the cell, the nucleus, that includes all the machinery for replication and transcription. And it also includes the machinery that takes a pre-messenger RNA into a mature messenger RNA that can leave the nucleus. If you send that pre-messenger out there, it's going to have a lot of stuff wrong with it. It's not going to be ready to face the outside of the nucleus. It's going to be readily degraded. It's not going to have the full information. So I want to talk to you about the processes that are put in place for this conversion from the pre-messenger RNA to the messenger RNA, which we don't have to think about in prokaryotes, the small organisms without organelles. All right, so this is the foundation of transcriptional control. So, we've talked about promoters and enhancers. Those happen early. That's all about making the pre-messenger RNA. But now we have to discuss some aspects that are also critical for making the initial pre-messenger RNA, and that's chromatin remodelers, or chromatin remodeling. Because in order to transcribe anything, there's a lot of ground to cover with respect to unwrapping the chromatin, the chromosomes. They're all packed up in such a way that you can't possibly do any transcription there; they're too tightly packed to be accessible for the transcription machinery to tackle. So I show you parts of that machinery up here. These are the nucleosomes that make up chromatin, which makes up the chromosomes. 
And in order for you to even be able to start transcribing, you have to unravel that part, those complex structures that are tightly wrapped up. You've got to make the double-stranded DNA accessible, otherwise you can't break into it. So there are two things that also contribute to allowing transcription, and those occur both at the DNA level and the histone level. And I'm going to talk about the histone-level changes first. I want you to recall that histones are proteins that have a lot of positive charge, by virtue of the fact that they include two of the positively charged amino acids, arginine and lysine. And if you're curious about those structures, you can go look back at the table of the amino acids and see that those guys are always positively charged. The reason we use histones as the core of the nucleosome structure is that they're very positively charged, and they neutralize the dense negative charge of the nucleic acid. Otherwise we couldn't pack it up as tightly as we do. So the changes that occur at the histone level, in the remodeling of the chromatin in order to promote transcription, are oftentimes modifications to neutralize those charges. So there's the most obvious one I'll show you, and then there are others where you add methyl groups that dampen down the charge. But I'm just going to show you the very obvious one. So let's just look at lysine in a protein. That looks like this. That is a terrible drawing. It has a positive charge, four bonds to nitrogen. In order to neutralize that charge, there are enzymes that acylate, or transfer an acyl group, to turn this positively charged amine into a neutral amide. Let me draw that side chain, because I think it makes much more sense to understand this particular thing from a chemical perspective. So we still have the N, but it is now part of an amide. So the charge has been neutralized on the nitrogen. 
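The charge argument can be made concrete with a toy count. The peptide below resembles a histone H3 N-terminal tail, and the model is deliberately crude: an acetylated lysine simply counts as neutral.

```python
# Count positive charges (K = lysine, R = arginine) in a histone-like
# tail, before and after acetylating the lysines. Crude model: an
# acetylated lysine is neutral, so only the arginines still count.
def positive_charge(peptide, lysines_acetylated=False):
    charged = "R" if lysines_acetylated else "KR"
    return sum(peptide.count(aa) for aa in charged)

tail = "ARTKQTARKSTGGKAPRKQLA"   # resembles an H3 N-terminal tail
print(positive_charge(tail))                            # 7 (4 K + 3 R)
print(positive_charge(tail, lysines_acetylated=True))   # 3 (R only)
```

Dropping the net charge from 7 to 3 is the chemical point of the lecture: less positive charge on the histone, weaker grip on the negatively charged DNA.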
If the charge is neutralized on the positively charged residues in the histones, the DNA will be encouraged to unravel from those. It makes good chemical sense that that's a good way to start unpackaging the DNA. The other modification occurs at the DNA level, and it's methylation of cytosine. A little harder to explain from a chemical sense, but up there in the corner of this slide I've shown you the pictures. So methylation on cytosine would look like this. This encourages stabilization of the chromatin. So what would that do to transcription if I've stabilized the chromatin? It represses it. It turns it down. So these are pairs of changes that act in opposite directions. So we'd down-regulate transcription. OK, so DNA methylation causes the chromatin to be a bit more compact, more stable. It's much harder to unravel the DNA. It's obviously then harder to start transcription. In contrast, modification of the histone proteins to neutralize their charges destabilizes and up-regulates transcription, because it's allowing us to open up the nucleosomes in order to make the DNA available. Does that all make sense? So we have two counterbalances that play in each direction. OK? All right, so that's chromatin remodeling. The next two transformations I'll show you may certainly look fairly complicated. But I'm going to describe them to you, and show you what the changes are, and how they would contribute to stabilizing the transcript and finishing the transcript up, to make it ready to leave the nucleus, to go out to the cytoplasm, where the machinery to translate proteins is. Because you want to remember, the ribosomes that we're going to use on Friday aren't in the nucleus. They're in the cytoplasm. So there is a variety of events. There is what's known as 5 prime capping. So that is going to be a change. Let's just say this goes down. This is the base. Remember 1, 2, 3 prime, 4 prime, 5 prime. 
It's something that's happening to this end of the messenger RNA that stabilizes it. So 5 prime capping is important for one end. And then at the other end there is polyadenylation of the 3 prime end. OK, so let's just look at these one at a time. And let me convince you that they are important changes in the transcript that will preserve its identity, and in fact, give it a little bit more information. Because indeed, the 5 prime capping modification actually is a signal later on, when the transcript leaves the nucleus for protein translation. But in general, both of these changes mechanically protect the ends of the part of the gene that you're going to want to translate. They basically leave that piece of gene in the middle, where it's not going to be nibbled up, it's not going to be degraded. Because the biggest threat to the messenger RNA is things known as exonucleases. Exonucleases chew down nucleic acids from the two ends. All right. So let's look first, and it's kind of wild and crazy chemistry, at the 5 prime capping. And then we'll discuss the other two. And then I will finish these types of changes with splicing, which is really cool and extremely important. But the transcript, the pre-messenger RNA, has to go through all of these steps before it's ready for nuclear export. So in 5 prime capping, you have this strand of pre-messenger RNA. Everything looks pretty happy. It's all in one piece. But the first thing that happens is three phosphates are added to that 5 prime OH group. So that's the start of this process. And this process actually happens while you're still transcribing the double-stranded DNA. It's actually already going on. As soon as that component of the newly transcribed pre-messenger RNA is made, things start happening at the 5 prime end to protect it. At that stage, once those three phosphates are put on there, a nucleobase is added backwards. So there's a couple of functions. This all looks pretty strange. 
The rest of this still looks quite good. It looks fairly intact. But then the next thing that happens is that the guanine that's here is-- you do not have to remember this stuff, I couldn't remember it. I just want to show you how weird and different the 5 prime end looks. It doesn't look like a strand of RNA. So the guanine is methylated. And then a couple of the riboses, those sugars that have an OH usually at 2 prime, get methylated. So we've created this entire thing at the 5 prime end, known as the 5 prime cap, that looks nothing like regular messenger RNA. And that protects that end of the messenger, and makes it safe from a variety of insults. So let's take a look. Why does it happen? So the first thing is it stops nuclease activity. Because the nuclease could look at it, and go, I don't recognize any of this happy mess. I'm not going to chop down this component. It's too foreign to me. So that's the primary thing. But then it's actually quite important for regulation of when that messenger needs to leave the nucleus. There are proteins involved in helping export the messenger RNA, when it's ready, from the nucleus. So it's an important signal or recognition element. It marks this thing as a messenger RNA that's going to be important in translation. It gives it an identity. And then finally, it can actually in the next step promote translation. So all of these things are useful. So we do a number of these unusual transformations, but they all have a reason, and they're all carried out in the nucleus, purposed to protect the 5 prime end of the messenger, and actually make it ready for its next tasks. All right? The next thing that happens is what's known as polyadenylation. So here it is. This would be the 5 prime cap. In the middle would be your gene that you're going to transcribe. But at the other end there is an enzyme that puts on a lot of adenine nucleotides, and basically adds to the other end a lot of A's. And when I say a lot, it can be hundreds. 
And it just promiscuously keeps on adenylating. And now what's this bit for? Once again, it protects from exonuclease activity from the other end of the strand. Because if you have exonuclease activity, you might start munching away at these As. But these aren't the important parts of the messenger, right? These are just add-ons. It's kind of a buffer. It's like something to do while you're waiting. Oh, I'm going to chew up some As. But you're not messing up the messenger RNA that you need at the end. So it's adding some dummy sequence that will be handy there. It contributes to stability. The tail is shortened over time, but it's non-coding, so that's OK. And actually though, when the tail is short enough, the polyadenine tail kind of acts as a bit of a timer. Because once the tail gets short, you start chewing into the transcript, and you basically end up with a degradation of the transcript. But it gives you time in the cytoplasm for the gene to be translated into protein. Does that all make sense? So it's basically like an egg timer, just watching, watching; back, back, back. OK, this transcript has been out here long enough. We've made all the protein we need. Now we're going to chew up the coding part. And then finally it's actually a good marker again for leaving the nucleus. And then finally it's kind of a cool tool for technology, because there's lots of interest in characterizing what's known as the transcriptome. OK, it's all well and good to characterize a genome. But it's a lot of work, 3.2 billion base pairs. And only a part of it, a tiny part of it, are parts of the genes that encode the proteins. So what you'd really want to know, if you want to analyze the genes that are going to become the proteins and maybe look for defects, is look at the things that are going to be transcribed. So you can use the fact that there is this poly-A tail on the transcripts that are going to leave the nucleus with something that will pull them out of the mix. 
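That selection step can be sketched as filtering a pool of RNAs by whether they end in a long poly-A stretch (the sequences and the length cutoff here are invented for illustration):

```python
# Keep only the RNAs that carry a long poly-A tail, mimicking the
# pull-down described above. Sequences and the 10-A cutoff are made up.
def has_polya_tail(seq, min_a=10):
    return seq.endswith("A" * min_a)

pool = [
    "GGCUACGU" + "A" * 50,   # a processed transcript with a poly-A tail
    "GGCUACGUCCGA",          # an RNA without a tail -> not captured
]
captured = [s for s in pool if has_polya_tail(s)]
print(len(captured))  # 1
```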
What would I use? Let's say I've got a resin bead, polystyrene or some favorable polymer. And I can attach covalently nucleic acids to that bead. What would I add there to fish this lot out? Yeah, up here? Yeah. AUDIENCE: [INAUDIBLE] BARBARA IMPERIALI: Yeah, a lot of Ts. So I'm going to make this. I'm going to put on a bunch of Ts, and I can do this a lot. This is done very, very commonly. And then I'm just going to fish out everything that is part of the transcriptome, not the genome. So I've got a much smaller job then to find out errors in the genes that encode the proteins, than if I was going to start with the entire genome. So pretty cool tool, and later on we're going to see how this can be used. It's actually used in concert with an interesting enzyme that comes from viruses called reverse transcriptase. But that's a story for a later day. But it is a cool story. Now the last thing that we do in the nucleus is arguably the most important. And that is splicing. OK, all right. Once again, this is a picture with a lot of moving parts. But I want to convey to you the point of splicing, as opposed to you knowing all the little details. It was noticed for a long time that the messenger didn't always correspond to the original transcript. And this is the case in eukaryotes, not in bacteria. But there was quite a lot of processing done, not just to cap the ends, but by cutting out chunks of the transcript, so removal of segments of the transcript. And a lot of the seminal work was done by Phillip Sharp, who is a member of our faculty in the Biology Department. So we're very proud of this. This was actually the topic of a Nobel Prize in the '90s. And so what was noticed is that if you had a gene, and let's just put it here, 5 prime to 3 prime. And I'm going to name these. And then I'll explain to you what they are. These would be called exons. These would be called introns. And this is another exon. Outside, inside; that's the way to remember them. 
And what happens in splicing is that the introns are chopped out. OK, so your gene then ends up being, if this is exon 1 and exon 2, not represented by this entire thing, but eliminating that middle component. And remember, all the time we have the cap at the 5 prime end, and we have the poly-A tail ad infinitum at the 3 prime end. So we've just changed the middle. And it was recognized through bioinformatics analysis, which basically noticed the pieces that were chopped out and the pieces that ended up being joined. And RNA splicing occurs as a sequence of events that ends you up with the exons being spliced together, through a series of rearrangements that occur on the structure of the messenger RNA. So there is an internal rearrangement. A new phosphate bond is made. And then there's another rearrangement, where there's a new bond made between one end of the pre-messenger RNA and the other end of the pre-messenger RNA, to give you the exons rejoined, ready to be read in translation. Now why is this so important? So I want to go here to a few numbers that I think are very pertinent, which I had to drill my colleagues for, because I didn't know all the numbers. So when the genome was sequenced, we were a bit stunned, because the number of coding things that are genes that are going to be turned into proteins was much smaller than we anticipated. For man, it's currently at about 20,000 genes. That's 20,000 mature transcripts that could make proteins. Fly? Not much smaller, arguably they're pretty smart, 16,000. Yeast 6,000, a bacterium at 4,000. There doesn't seem to be enough of a difference. But a huge amount of diversity is introduced into the transcripts by different splicing events. Because if you make one transcript, that would count as one gene. But if there are a lot of opportunities for different splicing activity, you can piece together new transcripts that will encode proteins differently. 
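Both points -- introns dropped with exons rejoined, and the combinatorial payoff -- can be sketched briefly (sequences, coordinates, and exon count below are invented for illustration):

```python
from itertools import combinations

# Splicing as coordinates: keep the exons, drop everything between them.
def splice(pre_mrna, exon_spans):
    """exon_spans: (start, end) half-open intervals, in genomic order."""
    return "".join(pre_mrna[s:e] for s, e in exon_spans)

pre = "AUGGCU" + "GUAAGU" + "CCAUAA"      # exon 1 + intron + exon 2
print(splice(pre, [(0, 6), (12, 18)]))    # AUGGCUCCAUAA

# Diversity: with 5 optional exons kept in order, the number of distinct
# non-empty exon combinations a gene could in principle produce is
exons = ["e1", "e2", "e3", "e4", "e5"]
isoforms = sum(1 for r in range(1, len(exons) + 1)
               for _ in combinations(exons, r))
print(isoforms)  # 31
```

So even a 5-exon gene has dozens of possible transcripts in principle, which is how 20,000 genes can stand behind a much larger proteome.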
And I think this next slide will show you an example of a transcript that has several different introns and exons. So you can see here across, there's a blue, green, red, another blue, and an orange exon. But depending on which pieces are spliced out, we'll have different proteins ending up. So it could be a way, for example, to think of a practical use, to have a protein that is either secreted as a soluble protein, or left in the membrane with a membrane association domain, or left in the cytoplasm because it has no way to be secreted. So it could be a way to make three proteins that are in completely different places of a cell, or have different functions. It's also very important in tissues, because we splice transcripts differently in muscle, or liver, or heart to achieve different outcomes in the proteins that we make. So basically, it's a source of huge diversity, and it gives us the genetic diversity that means this 20,000 can be a much bigger number. And then with the post-translational modifications I'll talk about when we come back later on, you'll see we've got lots of ways to diversify that 20,000. That means these aren't literal numbers. They're quite varied. But what you want to remember is E. coli doesn't do splicing. There's no opportunity there. It's much more limited in yeast, and so is the post-translational modification. So these numbers settle out quite differently from what they look like by looking directly at those numbers. And I wanted to give you an example. You don't have this in your slides, but I was thinking about it the other day. A colleague of mine, Professor Pentelute in chemistry, works on trying to reprogram genes to overcome a disease known as Duchenne muscular dystrophy. It's another genetic disorder. It's X-linked, so it's much more serious in males than in females. Because in females, if you have one bad copy of the gene, you can do OK with this disease, whereas in the male, you'd only have the one copy of the gene that's on the X chromosome. 
And it's a defect in RNA splicing, so directly there. So this is a biopsy of muscle, where red would be good muscle cells, whereas the white would be fat cells. And the fat cells actually weaken the integrity of the muscle, because the muscle doesn't develop to have all the well-defined muscle cells that are important for muscle tensile strength, and contractility, and everything else that muscles do. And it's all related to a protein known as dystrophin, which is a huge protein. And it's a critical structural protein that's important to maintain the cellular membranes within the muscle cells. So if dystrophin is no good, the cell membrane integrity is no good. And you just end up losing the muscle cells, at a cost of replacing them with fat cells in the muscle. And here's a really amazing number. The gene that encodes dystrophin has 79 exons. So you can picture the opportunities for things to go wrong, and it's actually a defect in the place where the splicing happens. So a splicing event cannot happen. And so the protein at the end of the day is not good. So there's a lot of efforts going on, antisense efforts, and even other gene therapy efforts, and also nowadays there's obviously a large focus on CRISPR-based gene editing. But remember, this is a serious debilitating disease of all muscle tissue. It starts to be noticed when the babies are toddlers, because that's really when they start to engage muscle strength. So around the age of four it's actually noticed. And then the life expectancy is in the 30 to 40-years-old range. But it's also a terrible quality of life, because the lungs don't work. So many things rely on muscle strength. OK, good. All right, so I believe that's the end of that. But I want to give you an introduction to translation, so you'll see what we have in store for Friday. And just to get you back to this picture, we've done everything we need. We've done all the processing. 
We're here, and then the mature messenger RNA can finally leave the nucleus to be ready for translation. And the cap-- there is a complex that binds the 5 prime cap that's tasked with helping export that transcript through the nuclear pore, which is a fairly large opening, outside into the cytoplasm. And I want to show you one other thing, and I hope I can get this reliably, because with this I'm going to actually show you some of the moving parts. There's a little thing on the sidebar now, called a short translation. And so what you're seeing here, the black line is the messenger RNA. OK, so that's the thing that's finally out there in the cytoplasm, ready. The pale green and yellow are the two components of the ribosome, which is a soluble organelle that assembles on the messenger RNA based on a few cues. The dark blue components are transfer RNAs that bring in new amino acids. And what you see threading out here is a new protein strand that is being translated. This particular translation is to make a protein that becomes membrane bound and secreted. So I'm not going to go further than that. But there's a lot of those light blue proteins that are actually helping escort the transfer RNAs to where they're needed to continue building the proteins. So we've seen the messenger. We'll learn more about the ribosome, and we'll learn about the transfer RNAs. And you can also watch this to your heart's content. I just find it a useful simple cartoon that tells us a lot. So what I really want to encourage you to do is read this section 14.5, because the next parts will make a great deal more sense. So what I want to do now is, first of all, start with describing the players in translation, and just the way we did for replication, we're going to systematically ascribe the importance to each of these players. So what you see, there are molecular players and there are key steps. So the molecular players are shown here. 
So far, we've focused very strongly on the messenger RNA. You know all about that. You know where it's from. You know it's stable in the cytoplasm for a while. You know it's got signals that tell the complexes where to go. Then we need transfer RNAs and amino acids, and we need the ribosomes. And ribosomes are made of a composite of nucleic acid and protein wrapped together. There's quite a big difference between prokaryotic and eukaryotic ribosomes, and I'll mention that when we get to it. But I want to focus for a few moments, first of all, on the landmarks that finally got us, in 2009, to the structure of the ribosome. It's a cool development for everyone. But also where we came from in the '60s, when Crick and Brenner finally cracked the genetic code, and found out that three bases on that messenger RNA will encode each new amino acid that gets put into your protein sequence. So I want to introduce you to the transfer RNAs that are really the most important element to start thinking about. And you can call these guys the decoders, because what the transfer RNA does is it can carry an amino acid on one end of its nucleic acid. But on one of the other loops within the transfer RNA is what's known as an anticodon, that recognizes a codon in the messenger RNA, and basically prescribes what amino acid gets put in. So it was always a wonder how you went from the nucleic acid world to the protein world. The answer is you use a nucleic acid that you can load with an amino acid, but it can also recognize the messenger RNA that codes for the protein sequence. And that's why I like to think of them as decoders. But what this picture of the RNA shows you is the beautiful structure of RNA, where at one end is where an amino acid would be attached. There's this cool kind of cloverleaf structure. And then at the other end is the anticodon loop that will actually recognize your messenger RNA. So I'm going to stop now. 
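The decoder idea -- three mRNA bases per amino acid -- looks like this in miniature; only a handful of the 64 codons are included here:

```python
# A tiny slice of the standard genetic code; the real table has 64 entries.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GCU": "Ala",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Read the mRNA three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGCUUAA"))  # ['Met', 'Phe', 'Ala']
```

In the cell, each lookup in that table is performed physically by a transfer RNA whose anticodon base-pairs with the codon.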
But I really want to encourage you, just skim through that small section. It'll make a lot more sense on Friday. I've been waiting to talk to you about protein translation since we started this class. But it'll make a lot more sense. And there's some super cool initiatives in chemical biology now, where people have been able to completely hijack protein translation, and not put in just 20 regular boring amino acids, but actually put in all kinds of other amino acids. So we understand the system well enough to manipulate it. And this really hearkens to the Feynman quote, "If you can build it, you can understand it." That's the level to which we understand translation nowadays. That's it for today.
MIT_7016_Introductory_Biology_Fall_2018
2_Chemical_Bonding_and_Molecular_Interactions_Lipids_and_Membranes.txt
PROFESSOR: So what I want to do today is-- I want to introduce this to you very quickly-- is-- and I was going to show you this at the end of the last class-- if we simply go to the far end of the scale, the picometer scale-- you see carbon. I'm not going to start you with carbon, that is a little dull. But over the next few weeks-- few classes, rather, because we have to do this in fast order-- we will cover details of carbohydrates, amino acids, nucleosides, and phospholipids, and how those building blocks are put together-- their properties, their ability to interact and engage in non-covalent interactions with other molecules, and the ability to make polymers out of some of these, such as the nucleosides and the amino acids and the carbohydrates, which then start to create the richness of life. We will also discuss today the supramolecular chemistry of phospholipids as they make micelles and lipid bilayers, which are the key boundary of cells. So this is very important. And then in the following week, we'll go to some of the bigger things like proteins, nucleic acid polymers-- for example, here's RNA. So the course will literally do this-- take you from one end of the scale to the other. So I want you to get a sense of these dimensions. I want to mention one sort of fairly stupid thing with respect to how chemists and biochemists talk about certain metrics, certain distances that are pertinent to biology and biochemistry. Engineers tend to talk about micrometers and nanometers. There is one unit that chemists and biologists use quite a lot; it's the Angstrom, after the Swedish physicist Ångström. And that is equivalent-- 10 Angstroms equals 1 nanometer. So when you're looking at scales, we tend to talk about Angstroms because they're a convenient number. But don't get fooled by this. It can be a little bit confusing, because it's 10 to the negative 10. So a nanometer is 10 to the negative 9; you use that quite frequently. 
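The unit bookkeeping in that passage reduces to a couple of conversions:

```python
# 1 Angstrom = 10^-10 m, 1 nm = 10^-9 m, so 10 Angstroms per nanometer.
angstrom_in_m = 1e-10
nanometer_in_m = 1e-9

print(round(nanometer_in_m / angstrom_in_m))  # 10

def nm_to_angstrom(nm):
    return nm * 10.0

print(nm_to_angstrom(2.0))  # 20.0 -> a 2 nm object is 20 Angstroms across
```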
Picometer-- 10 to the negative 12, micrometer-- negative 6. But the Angstrom is just a funny unit we use a lot, and it's 10 to the negative 10. So just to make sure there's no ambiguity about that particular detail, OK? All right. So today's lecture will focus on the molecules of life. And in particular, I'm going to, through the next few classes, introduce you to the various molecules of life. But first of all, we have to do a little bit to understand chemical bonding. And in particular, we want to look at both covalent and non-covalent bonding, because covalent bonding is important-- it's the structure, it's the framework. But non-covalent bonding is what gives us dynamics. These are much weaker forces that can be broken and remade very readily, that are essential for things like forming the DNA duplex, folding your proteins, assembling the lipid bilayer. All of those are non-covalent forces, and they are dynamic because they're weak; you can break one relatively easily as long as you're ready to make another one in its place. So I will spend a little bit of time on that. And then today, we'll talk about lipids and membranes. But first of all, let me introduce you to some of the molecules of life in this rendition that's done by David Goodsell at Scripps. So up in the top corner here, what you look at in 2.3 is the three-dimensional structure of a protein. It's folded into a globular state through non-covalent forces. I brought a little 3D model of a protein for you to look at and take a look at later. That was one of the suggestions I made. You could coordinate printing a 3D model as one of your later projects. We will learn about the forces that hold the polymer together-- the covalent forces. But then the non-covalent forces that make globular structures that are very important for function. They're not much use as unraveled spaghetti. They're way more useful in their three-dimensional structure. Down here in the corner is a carbohydrate. 
It really looks pretty pathetic in this rendition, but carbohydrates have a lot of value, particularly in energy storage but also in things like the extracellular matrix and as entities that signal information between cells. There's a lot of communication done by cell surface carbohydrates. Over here you see the canonical structure of double stranded DNA. We'll look at the covalent structure of those single strands, but then we'll focus in on the non-covalent interactions that make the double-stranded DNA and store genetic information, which is also central to life. And then lastly on this, but we'll cover this today, is a lipid bilayer. It's a fascinating supramolecular structure that really is at the heart of how all your cells are held in a compartment surrounded by a lipid bilayer. So by the time we start talking about those, you'll understand the forces that put in place that lipid bilayer that arguably-- and I've read articles that say this-- that the evolution of lipid bilayers is as important as the genetic code. Because if cells did not have a surrounding, did not have an inside where you could concentrate reagents and macromolecules and do biochemistry, life wouldn't exist in the same way. OK, so let's take a look at the composition of living systems. And remarkably, we are about 75% water. So most proteins are very hydrated. There's a lot of water in cells. There's a lot of water outside of cells in the matrix. And really, we survive in an aqueous environment. And the thing that you also want to think about is when we think about non-covalent forces, these are forces put in place in water. We don't live on a far distant planet where we're in sort of liquid methane or anything like that. So water is critical to life. The establishment of the hydrosphere when Earth first formed, the evolutionary events that happened after that, were really hand in hand with the fact that it was an aqueous environment.
Because forces are different whether they are in hydrophobic environments or hydrophilic environments. And really, you'll start to get an appreciation for that as we move forward. So this basically suggests that if I put one of you in a giant desiccator and pumped out all the water I could possibly pull out, there'd be about sort of-- depending on your weight-- 40 pounds of things left behind. Of what's left behind, the majority of it is going to be biological macromolecules. And then the rest of it, that little sliver, are things like ions and small molecules-- calcium, magnesium, iron, manganese, those small inorganic ions as well as small molecule metabolites that are involved in central metabolism. Let's now look at the macromolecules and their sort of proportions relative to each other. The smallest sliver is the lipids, which we'll talk about today. Then you have the nucleic acids that are critical for information storage. You have proteins, which make the largest piece of the pie. And carbohydrates, which are about 25%. So you can see how important carbohydrates are because of their proportion being relatively large. The lipid proportion, though, is small but absolutely critical, harking back to the membrane bilayer. Because if we didn't have the membrane bilayer, once again, we wouldn't have life in the same way that we have it now. So that gives you a sense of the relative proportions of things. And frankly, when I discuss the macromolecules, I really like to start with lipids because of the membrane bilayer, but also because their structures are comparatively simple relative to amino acids and nucleic acids. So we can get a few of the basics of the chemical structures down and how we render them on paper, and we can do that with lipids, which are a little simpler. Now, a chemist has to sort of worry about this entire mess of the periodic table.
But the good news for you is for biological systems, we deal with very focused components of the periodic table. So those biological macromolecules are made up largely of only six elements-- hydrogen, carbon, nitrogen, oxygen, phosphorus, and sulfur. So that makes the amount of stuff you need to know about basic covalent structures way more simple than it is for the average chemist, who has to worry about everything down here in the nether regions and the things that are radioactive, all kinds of other things. You don't have to worry about any of that. So the covalent bonding we will talk about is amongst those six different elements. And they make up 98% of the cellular mass. And then the other components that are important in cells are some metal ions-- the alkali and alkaline earth elements. So sodium, magnesium, potassium, calcium-- those are all quite important in life. And then these transition metal ions that are really important in enzyme catalysis, for example. But we will not cover very much of that. Those are what are known as trace elements-- transition metal elements that are very important for biochemistry. And then last of all, there are some rogue ones where there are even smaller amounts in physiologic systems. These are things like chromium, molybdenum, and tungsten, selenium, and iodine. And certain of these elements are only found in totally bizarre organisms. So for example, you and I don't have much molybdenum and tungsten, I don't think, unless it slipped in there by accident. But you and I definitely need selenium and iodine as trace elements. Does anyone know where iodine figures most prominently? Yeah? AUDIENCE: Thyroid. PROFESSOR: Thyroid, absolutely. So the thyroid hormone is a small organic molecule with several iodines in it. And we need-- absolutely need-- iodine in our diet in order to build the thyroxine hormone that deals with a lot of aspects of metabolism.
So we don't need a lot. And if we get too much, it's bad for you. But we definitely need traces of these elements. Now I will spend a very small amount of time just laying down the basics of organic chemistry bonding. Now, how many of you have either taken the chemistry GIR or had high school chemistry quite recently? Is that pretty much all of you? And now if you didn't put your hand up, don't worry. We're here to bring you up to speed if you need it. Frankly, if you just know what's on the next two or three slides, you're in great shape. All the information that you need has been condensed. But if it's a little bit unfamiliar, you could come see me in office hours and I can just run through things for you and we can just get you up to speed. There is no need for pre-knowledge, I just need an idea of how much pre-knowledge you have. So when we talk about covalent bonding and start to think about the elements that are critical for life, it's important to consider the electronic structures of these elements and why they happen to be the chosen elements, OK? The most important thing about hydrogen, carbon, nitrogen, oxygen, phosphorus, and sulfur is they love to make covalent bonds. A lot of metal ions form salts, you know-- sodium chloride or many other different salts. But covalent bonds are the main structure of all macromolecules-- strong bonds between elements, such as these six in particular, where they share electrons in covalent bonds rather than form ionic interactions where somebody gives an electron to someone else and you have a plus-minus type interaction. So these shared bonds are important for life. So it's good to understand why hydrogen, carbon, nitrogen, and oxygen, and then phosphorus and sulfur are so important. In order to understand the covalent bonding of these elements, it's useful to know the electronic configuration, but you could live without that.
The most important thing is that covalent bonds, such as the one between carbon and hydrogen here, reflect a shared pair of electrons-- one from the hydrogen, one from the carbon-- to make a stable covalent bond. Because of its electronic configuration, carbon is neutral when it has four covalent bonds. Nitrogen is neutral when it has three covalent bonds. But there's an extra lone pair of electrons that are not forming bonds in neutral nitrogen. And oxygen is neutral when it has two covalent bonds. These could be with hydrogen, they could be with carbon, they could be with several of the other elements. For carbon, we don't deal with charged states of carbon because they're pretty high energy. They may be high energy intermediates in an enzyme catalyzed reaction, but they're not sitting there as high energy intermediates in your macromolecules. The key thing you want to notice is for all of these elements, the valence shell is complete with eight electrons. But these lone pairs-- or bunny ears, as people like to call them-- really feature very prominently in biochemistry and biology because they are places for hydrogen bonding interactions. So we run a lot on electrostatic, hydrogen bonding, and hydrophobic interactions. If we know where the lone pair electrons are, we know one part of a hydrogen bonding interaction. It turns out that in biology, we're mostly at pH 7 or in that range except for a few sub cellular compartments. But at pH 7, a nitrogen lone pair of electrons will pick up a proton to become a positively charged nitrogen. And you'll mostly see that as positively charged. So the side chain of lysine, which has an NH 2 at the very end of a carbon chain, is most commonly protonated to NH 3 plus and positively charged. So it could be involved in an interaction. So we can consider both the neutral and the positively charged state of nitrogen. For oxygen, that oxygen lone pair can pick up a proton to form the hydronium ion.
So that's a positively charged OH group. So it would have an extra proton, using up a lone pair and three hydrogens, or it could give up a proton to form the hydroxide ion. And those are the states of oxygen that are most common. So in that, we've kind of dispatched those first four of the six elements. Phosphorus and sulfur are a little tricky, but there is some good news. Sulfur copies oxygen, so you don't really have to worry too much about sulfur. You'll just consider it to really be sort of an older sibling of oxygen where all the chemistry is very, very similar. Neutral sulfur, as in the sulfhydryl group, or the negatively charged sulfur anion-- both are important. Phosphorus is different. Phosphorus does not tend to show up in the version that copies nitrogen. It is capable of adopting higher oxidation states. And all of the phosphorus you meet in biochemistry for the most part-- there's a few odd things in weird organisms-- is going to be in the form of an oxidized form of phosphorus, which generally has one, two, three, four, five bonds to phosphorus. It can take on a higher oxidation state. And you will see phosphorus a lot. Phosphorus in the phosphate form is absolutely essential to life because it's the place where we store a ton of reactivity for the reactions of nucleotides, adenosine triphosphate, adenosine diphosphate, the phosphodiester backbone in nucleic acids, phosphorylation of amino acids to form phosphoproteins. It's always in this state with all the extra oxygens and that configuration of bonds, OK? If you know this, you've got a lot of the covalent bonds under control. So any questions about this? Is everyone all right? I know it might be-- it's probably a refresher for most of you. The next thing I just briefly want to mention is the most typical functional groups that occur in biological molecules. And you may say, well, what does it mean, functional group? Usually it's a place where the action happens.
If you have a large molecule that's a bunch of carbon-carbon and carbon-hydrogen covalent bonds, there's not a lot going on unless you can really rip those bonds apart, but they're high energy. But functional groups are oftentimes where chemistry happens or biochemistry happens. So there's the OH hydroxyl. We, as chemists and biochemists, will tend to use an R where we mean something else. So we don't write out a whole structure, we would just put R OH, where R equals-- I'm going to just say anything. So for example, if R was CH 3 CH 2, you would have ethanol. But I'm keeping it more generic. The next functional group that is important is the carboxylate group, or the carboxyl group. Looks like this. Now when we look at these molecules, you always want to sort of think where the lone pair electrons are. There's two on oxygen, two on oxygen, two on oxygen. So that actually shows you where the rest of the electrons are. This is the carboxyl group. But in nature, in physiologic systems, this shows up most commonly in its anionic form. That's important because when we start to think of interactions between enzymes and their substrates, or the folding of proteins, we're thinking of something with a negative charge, not a neutral. So this group loses a proton to form the carboxylate group. And if you want to know where the lone pairs are now, that's what they look like. So those are two of the key ones. Let's now go to nitrogen. That is the neutral amine. But as I just mentioned to you, that will very commonly pick up a proton and be in the positively charged state. Now when I show you both of those guys in their charged states, what you could immediately tell me is that if I have an amino acid with one of these groups and a nearby amino acid with one of these groups, they could form an electrostatic interaction between themselves-- plus and minus complementing each other.
So if you know the charge states, you're much better off because you can tell where non-covalent types of ionic or electrostatic interactions occur. So these are very important. Then there's the phosphate group-- it's often ionized-- and the sulfhydryl group. The sulfhydryl group is also called the thiol group. And I'm sure I've spelt that wrong-- they look like that. And the sulfhydryl can also appear as the anionic structure. So those are the basic functional groups. Now there are two more functional group assemblies that you will see a lot in physiologic systems that are basically composites of some of these structures. Because when we have single building blocks, we need to join them to each other through different types of chemistries. So I want to show you the types of chemistry that you get by forming a composite of a hydroxyl and a carboxyl group and a composite of a carboxyl group and an amine. Because the protein polymer has building blocks that have amines and carboxyls, but they're all put together into a polymeric structure where those groups have been joined in a condensation polymer. So let me just show you what those look like. And then we'll be done with the functional groups. So there are-- the first one-- because I've drawn them in this order, OK-- is the amide. And the other one is the ester. When you do these two reactions, if you do them in the lab, they're called condensation reactions because as you form that bond, you kick out a molecule of water. These are really important new functional groups to you because your proteins are held together by amide groups. In fact, they're so important in proteins, we often call them peptide groups. You'll see more about that on Monday. And the esters are really important. For example, in derivatives of glycerol with fatty acids that make triglycerides or phospholipids, you'll see esters occurring again and again.
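One concrete consequence of condensation chemistry is mass bookkeeping: every amide or ester bond formed kicks out one water (about 18 g/mol), so a condensation polymer weighs less than the sum of its building blocks. A minimal sketch of that arithmetic (the glycine mass below is a standard textbook value; the function itself is just illustrative, not from the lecture):

```python
WATER = 18.02  # g/mol lost per condensation (amide or ester) bond formed

def condensation_polymer_mass(monomer_masses):
    """Mass of a linear condensation polymer: the sum of the monomer
    masses minus one water per bond (n monomers form n - 1 bonds)."""
    n = len(monomer_masses)
    return sum(monomer_masses) - (n - 1) * WATER

# Two glycines (75.07 g/mol each) joined by one amide (peptide) bond:
print(round(condensation_polymer_mass([75.07, 75.07]), 2))  # -> 132.12
```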
The other composite group that you can also see is with the phosphate plus an alcohol. And what that group looks like is as follows. And you're going to see this sort of endlessly in nucleic acids. Let's keep the charges all even here. And this is what's known as the phosphate ester. OK, and that is yet another condensation where you kick out water. All right, so let's just run back to this image. And we can sum it all up. Those are all the groups that I just described to you. And if you want, you can go back and put lone pairs of electrons on everything. And then the composite groups that I want to mention to you in particular are the amide and the ester. And they're very important in physiologic systems. They are the bond that holds together the biopolymer in many cases. Not shown on this picture is the phosphate ester-- I've added that this year because it's kind of important. It's a similar condensation reaction between a phosphate and an alcohol, and that in particular is the bond you'll see that holds together nucleic acids. And now one sort of thing that we won't go into in a lot of detail-- I want you to notice that this nitrogen here has a lone pair of electrons. It picks up a proton very readily. The amide nitrogen is not so willing to pick up a proton because it messes up the rest of its chemistry. So that nitrogen in an amide tends to be observed as neutral. However, that hydrogen can be involved in hydrogen bonds. OK, any questions about that before we move on to non-covalent bonds? Is everything clear? Now I try to put everything in one place so you have it in front of you. What I've put on those two slides is what you need to know about organic covalent bonding. It doesn't go beyond it. I will say there's a tiny bit of memorization, but once you commit that stuff to memory, you're in a good place with respect to understanding how the molecules of life are put together. OK.
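The charge-state rules above-- amines mostly protonated near pH 7, carboxylates mostly deprotonated-- can be made quantitative with the Henderson-Hasselbalch equation. This is a standard relation not derived in the lecture, and the pKa values below are typical textbook numbers (about 10.5 for a lysine side-chain amine, about 4 for a carboxylic acid), so treat this as an illustrative sketch:

```python
def fraction_protonated(pKa, pH):
    """Fraction of a group in its protonated (HA) form, from
    Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# Lysine side-chain amine (pKa ~10.5): overwhelmingly NH3+ at pH 7.
print(fraction_protonated(10.5, 7.0))  # ~0.9997
# Carboxyl group (pKa ~4): overwhelmingly COO- (deprotonated) at pH 7.
print(fraction_protonated(4.0, 7.0))   # ~0.001
```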
Now what is more important to me once we've put those structures in place is non-covalent bonding. Because to me, non-covalent bonding is synonymous with dynamics-- forces that can be readily broken and reassembled, broken and reassembled. The energy, the strength of a typical bond between two carbons or a carbon and a hydrogen is on the order of 90 to 100 kilocalories per mole. It takes a lot to break those bonds. We can't break them at will to go and do some biological activity. But the energies of the non-covalent bonds are far more modest. They range from-- so this is covalent. But the non-covalent range from 1, maybe, to about 10 kilocalories per mole. So when you think about those forces, they're readily broken and made, broken and made. And what's so amazing about protein and nucleic acid structure is that you can gradually break a bond while you're making another non-covalent bond so you can have the dynamics of the structure that define a lot of its functional properties. Because structures are dynamic, an enzyme that's held together by a lot of non-covalent interactions can bind a substrate, then start changing its shape in order to go through a catalytic cycle, do chemistry, and then liberate products. That is all driven by changes in non-covalent bonding-- subtle changes that occur without the big energy barriers that would be necessary to break the covalent bonds. So shown at the top here, you see the average bond energy of covalent bonds. This small number is something, for example, between two chlorines. That's a pretty weak bond. But of course, we don't have a lot of them running around. So really, carbon-hydrogen, carbon-carbon, they're at the higher end-- about 100 kilocalories, 80 kilocalories per mole. The other important interactions, though, that make up the non-covalent interactions are as follows.
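Before going through the individual forces, it helps to see numerically why 1 to 10 kcal/mol counts as "dynamic" while ~100 kcal/mol does not. The Boltzmann factor exp(-E/RT) gives the relative thermal probability of having enough energy to break a bond. This is a rough back-of-the-envelope sketch, not from the lecture-- real bond-breaking kinetics also involve barriers and prefactors:

```python
import math

R = 0.001987  # gas constant in kcal/(mol K)
T = 310.0     # roughly body temperature, in kelvin

def boltzmann_factor(energy_kcal_per_mol):
    """Relative thermal probability of reaching energy E at temperature T."""
    return math.exp(-energy_kcal_per_mol / (R * T))

print(boltzmann_factor(2.0))    # weak H-bond: broken and remade constantly
print(boltzmann_factor(10.0))   # strong non-covalent interaction: rare but accessible
print(boltzmann_factor(100.0))  # covalent C-C bond: effectively never broken by heat
```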
So the first important one is the ionic bond. It is also called a salt bridge or an electrostatic interaction. Why we give three names for this probably comes from which type of chemist decided to define them. They are all the same things. They are basically interactions between a positively charged entity, a protonated amine; and a negatively charged entity, a deprotonated carboxylate. Those are about the strongest of the non-covalent bonds, but it's very variable because it depends a lot on their environment. If those two entities are in a hydrophobic environment, they're going to charge right for each other to form a strong electrostatic interaction. But if those are out in water, each of those groups could be solvated by water and they'd have to give up solvation in order to form a good electrostatic interaction. When we talk about protein folding, we'll go into that in a little bit more detail. So the reason why this says very variable is not to drive you crazy. It's just they're very variable. But they will still range, I would say, from 2 to 10 kilocalories. Come on. So those are important-- easy to pick out. The strongest of the set. If Dr. Ray gives you a problem set and starts asking you to pick out non-covalent interactions, that's the one you take care of straight away because it is the most obvious. The next most important, though, is the hydrogen bond. Now hydrogen bonds have been known to mystify people for years because people are like, how do I pick these things out, how do I pick these things out? I'm going to give you a foolproof way of picking out hydrogen bonds so you will never be at a loss for hydrogen bonds, OK. Well, how do we recognize them? They are between hydrogens that are on electronegative elements such as oxygen-- of course, there's other things attached here-- or on nitrogen, or on sulfur. So all of those three functional groups can serve as hydrogen bond donors. 
They can give a proton in a hydrogen bond and share that proton with a hydrogen bond acceptor, OK. So these are all going to be known as donors. So you can recognize them. This-- carbon is not a hydrogen bond donor. Carbon's got its hydrogen and it's not giving it away to anybody for love or money. It's holding on tight. So this is not a hydrogen bond donor. OK? Now the hydrogen bond acceptors are places where that hydrogen would want to sit-- yes? AUDIENCE: There's the two lines next to it-- PROFESSOR: Actually, those-- they could be double or they could be single bonds, but I was just putting them in so that you see that the nitrogen has one, two, three bonds to it. OK, yeah. It could alternatively also be the form of nitrogen-- just to confuse you-- that has an extra proton, the protonated version, because that can still be a hydrogen bond donor. OK. Now what are the hydrogen bond acceptors? They are any place where you have a lone pair. So let's just think of a carbonyl group-- two lone pairs. A hydroxyl group-- two lone pairs. A nitrogen that is not protonated-- one lone pair. Those are the hydrogen bond acceptors. So as long as you know your structures in the functional groups and you know where the lone pairs are, you can figure out where there could be a hydrogen bond. So all of these types are acceptors. OK. So in protein biochemistry, for example, those kinds of hydrogen bonds are very, very important to form the three-dimensional structures of proteins. And the reason why is because proteins are made up of amide bonds where this N-H can be a donor, this O can be an acceptor, and you can get networks of hydrogen bonding interactions to establish structures of proteins. When a small molecule binds to a protein, it may look to fit in a place where it can maximize electrostatic interactions and the hydrogen bonding interactions. So we'll ask you to start to be able to pick out hydrogen bonding.
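The "foolproof" rules just described-- a hydrogen on N, O, or S can donate, a lone pair on N, O, or S can accept, and C-H never donates-- are mechanical enough to write down as a tiny classifier. This is a sketch over simplified atom descriptions (element symbol plus bookkeeping flags), not a real cheminformatics tool:

```python
# Elements electronegative enough to participate in hydrogen bonds.
ELECTRONEGATIVE = {"N", "O", "S"}

def is_hbond_donor(element, has_hydrogen):
    """Donor: a hydrogen attached to N, O, or S (never to carbon)."""
    return element in ELECTRONEGATIVE and has_hydrogen

def is_hbond_acceptor(element, lone_pairs):
    """Acceptor: a lone pair sitting on N, O, or S."""
    return element in ELECTRONEGATIVE and lone_pairs > 0

# Amide group: the N-H donates, the carbonyl O (two lone pairs) accepts,
# and a C-H does neither.
print(is_hbond_donor("N", True), is_hbond_acceptor("O", 2), is_hbond_donor("C", True))
# -> True True False
```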
So here you saw the electrostatic interaction. Here is a typical hydrogen bonding interaction between a hydroxyl and a carbonyl group. I couldn't spot that very readily unless I remembered that there were lone pairs of electrons there, OK. The other two ty-- any questions about that? Any questions about hydrogen bonding? Are you comfortable with thinking you could derive your way to figuring out where they are? You'll see them used a lot, so they'll become more and more familiar to you as you move forward. OK, good. The last two types of interactions are the hydrophobic interactions and van der Waals forces. I never get the spelling right, but I'll get the concepts over to you. Now hydrophobic interactions are incredibly important. So when you think of folding a protein driven solely by electrostatic interactions and hydrogen bonding, you have a bit of a problem because all of those groups are hydrogen bonded to water. So you'd have to get rid of the water before they could make interactions with each other. Does that make sense? Because we are in water. We're folding in water. Hydrophobic interactions are really great because they want to form in water. If you're making, you know, a batch of salad dressing, oil and vinegar, and you shake it up, what happens? It separates. The oil goes to the top, the vinegar goes to the bottom. Why? Because of hydrophobic interactions in the oil phase. So if you have a large protein that has a bunch of hydrophobic groups, they will want to collapse out of the water to interact with each other. And then hydrogen bonding and electrostatics will fall into place. So hydrophobic interactions are a very important and vital force in nature in the non-covalent bonding. And those are literally interactions amongst molecules that have a lot of CH and CC bonds. The final force that's shown up there is the van der Waals force.
And we don't worry too much about that, but it is simply the interaction between very weakly polarized carbon-hydrogen or other types of bonds where there's a little bit of a dipole along the bond and they form little dipolar interactions. But mostly, I think you really want to focus on the electrostatic, the hydrogen bond, and the hydrophobic. These are more minor and it's a little bit of a subtlety. So let's focus on those three. All right, so with that said, the key thing for you-- what you need to be able to do is understand them and recognize them in complex systems. Lastly I'm just going to leave this. It's going to stay in your notes. We in biochemistry tend to use line angle drawings. It's kind of complicated to draw these sort of great big things with all the hydrogens and oxygens and stuff spelled out, so we use the line angle drawing. There's some shown here for different molecules. And the rules are laid out so that you can go and just do a bit of practicing and figure out the line angle drawing and what it means. Basically, every line represents a bond, every vertex represents a carbon atom. But what you do show on the drawings are the non-carbon atoms-- so for example, oxygen or nitrogen. The hydrogens that are bonded to carbon are implied, but you have to show the hydrogens that are on nitrogen or oxygen, for example, and you have to figure out what your charged state might be. So I'm going to leave you with that. All right. OK. So what we've learned so far is these basic forces in biology are critical for the assembly of the building blocks of biological macromolecules. What I want to talk to you about now-- and we'll probably, because I've spent a little bit of time on that, spill over a little more to next week-- but I'm going to talk to you about the first class of macromolecules, which are the lipids. So what makes something a lipid? These are the most sort of complicated mixture of biological molecules.
And formally, they're not really macromolecules. They're small molecules. But what's common to all of them is that they are very rich in carbon-carbon and carbon-hydrogen bonds-- the line angle drawings of all of these would suggest to you that the dominant feature of all these molecules is a bunch of CC and CH bonds, which makes the molecules quite hydrophobic. There are few functional groups there. And they behave very differently. For example, they would have a tough time dissolving in water in some cases. And so this complicated looking set of molecules can be distilled out as being very rich in carbon-hydrogen and carbon-carbon bonds. And we call those collectively lipids. And they have a lot of different functions. So for example, triglycerides, such as shown here, with three ester bonds are storage for energy. Things like estradiol-- things like steroids-- have this 6-6-6-5 arrangement of rings. All your steroid hormones kind of look like that. A lot of CH bonds. There are some vitamins. So for example, retinol is a vitamin. It's also a lipid. And then there are the phospholipids shown down here. I just briefly want to mention a little bit about retinal and retinol, which are crucial. Retinol is a critical vitamin. It comes actually from carotene, which is a molecule that you find in a lot of orange and yellow fruits and vegetables, such as carrots. But the oxidized product of retinol is this lipid called retinal, which is central to the process of vision. So retinal binds to proteins that sit in the membrane. When light shines on them, the shape of the retinal changes. It goes from a particular configuration of the double bond to a different one. The shape just changes, and that sends a signal to your brain. So lipids are important, absolutely essential, in vision and sight because they are involved in the signaling process because their shapes change and send signals.
Other types of lipids-- so these things-- we call them fatty acids mostly because they are greasy long-chained acids with a long hydrophobic tail and a hydrophilic end group here. These molecules are also what are known as amphipathic because they have a sort of split personality. They have a hydrophobic moiety and a hydrophilic moiety. Whenever you see amphi at the beginning of a word, it means both. So both hydrophilic and lipophilic. So these are important. And these are very important components. You probably heard a lot of press about some of the fatty acids and how bad trans fats are for you and how you should be careful to make sure your diet is rich in cis fats rather than trans fats because the trans fats are contributors to coronary heart disease. So you may wonder, what's the relationship between heart disease and these two types of lipophilic components which are in the body? So let me describe to you that relationship. Remember that things like the nut oils and fruit oils, such as olive oil, are rich in cis fats. So coronary heart disease is associated with trans fats. What's the linkage, what's the biology in that? So the story is related to cholesterol. Cholesterol is a critical component in our membranes. The trouble is we have to be able to move cholesterol around. But it's so hydrophobic it doesn't dissolve in water, OK? So in the body, your cholesterol is moved around in the form of lipoproteins that bind to the cholesterol and take it to the different organs where it is needed, all right? And so the lipoproteins can either be low density and associate with cholesterol, or they can be high density, and those also associate with cholesterol. The high density lipoproteins are kind of large. In fact, they're fairly agile. They don't stick to arteries and vessels, and they can be excreted in the liver or move around the bloodstream without any problem.
It's the low density ones that are problems because they're low density and they kind of stick to the walls of your arteries and start making buildups and then plaques, which contribute to heart disease. So the low density ones have cholesterol, but they're very small, sticky, and it's a physical interaction with your blood vessels and they start to clog your arteries. What's the relationship to saturated and trans fats? It's that they increase the low density lipoprotein in preference to the high density. So if you have a lot of trans fats, you make a lot of low density lipoprotein, which is trying to carry cholesterol around, but it gets stuck to your blood vessels and you start to clog your blood vessels. That contributes to heart disease. So these lipophilic molecules are important. They are places to store energy. They are critical to hormones and signaling, for example. But there are some complications with disease because certain types of fatty acids contribute to heart disease. Yeah. AUDIENCE: Is it a lower density if it doesn't have a bend in it? PROFESSOR: No, no. The density is of the entire physical particle. It's a nanoparticle that would show a different density with respect to how it floats in water. So the density is really the physical metric of the entire particle as opposed to just the molecule. It might be different because of the way it compacts, but the important thing about the trans fats is that they really contribute to making the protein that forms the low density particles. OK, all right. So I'm just going to introduce these-- not quickly, but I'll show you some cool images at the beginning of the next class. This is the last group of lipidic molecules, and they are actually esters and phosphoesters of fatty acids with glycerol. This is a small molecule that forms esters through its oxygens to these long chains and also to phosphate. And these contribute to really important functions in the body.
They are also amphipathic because they have a hydrophobic component and a hydrophilic component. And we often draw them in a shorthand form like this to represent this head group and these tails. And I want to just leave you with this wonderful image of the sorts of supramolecular structures that these kinds of phospholipids can form. So supramolecular is a very important term in biology as it is in engineering-- supramolecular. It means it's a structure that's above the molecular level. It's an aggregation of different molecules to make a super molecule with different properties from the individual components. Phospholipids self-assemble-- and that's another important term-- into supramolecular structures that are very, very important in living systems. Some of them are just useful in other sorts of engineering approaches, such as liposomes and micelles, but the most important supramolecular structure of a phospholipid is the lipid bilayer that surrounds your cells. And what happens is you simply put those molecules-- the phospholipids-- in water and they will self-assemble on their own into these supramolecular structures. Whether they form micelles or liposomes or bilayers is dependent very much on the tails of the lipids-- what sorts of shapes and structures you get. But in physiology-- in human physiology-- the phospholipids that we have want to form these bilayer structures that have incredibly important properties. Most importantly, they are semi-permeable and they can wrap around to form the boundary of cells. So I will continue with the final discussion of this on Monday before we move forward to the amino acids, peptides, and proteins. And I just quickly want to ask you for Monday to try to catch a read of section 3.2 in the text if you have a chance. It'll give you a nice preview.
MIT 7.016 Introductory Biology, Fall 2018. Lecture 16: Recombinant DNA, Cloning, and Editing.
ADAM MARTIN: All right, so we're going to switch gears again today, and we're going to move off of kind of pure genetics and start to talk about molecular genetics. And I want to start with the concept of-- let's say you want to identify a piece of DNA, purify it, and propagate it so that you have it for future use. And so the process of doing this is known as cloning. And it's the process of, if you will, purifying and propagating a piece of DNA in an organism. So sort of the goal for this lecture is for you to know how if you wanted to, let's say, identify a piece of DNA-- maybe you're interested in the piece of DNA that contains a given gene. How would you go about getting that DNA in a state that can be propagated sort of on and on? And how can you identify the piece of DNA you want? And so one tool that we're going to use is we're going to use an organism bacteria as a tool. So I'll draw my sample bacteria cell here. Could be something like E. coli, just some bacterium. And you'll recall, when I talked about cells a few weeks ago, bacteria and prokaryotic cells have a chromosome called the nucleoid that's present inside. So that's the bacterial chromosome. But bacteria can also have extra chromosomal pieces of DNA that are called plasmids that exist in their cytoplasm. These are plasmids. And these extra chromosomal pieces of DNA replicate independently of the chromosome. And they can persist in the bacterial cell and be passed on to the daughters of this bacterial cell as the bacterial cells divide. So if we focus on an individual plasmid, what a plasmid would look like, often they have a cassette or a gene that confers antibiotic resistance. And that's often the reason that these bacteria are harboring these plasmids, because it gives them a sort of advantage if they're exposed to a certain antibiotic from a predator organism. So this would confer antibiotic resistance. 
One common example is ampicillin resistance, which I'll abbreviate just amp with an R next to it. But these plasmids, for them to propagate from bacterial cell to bacterial cell, they also need to be able to undergo replication. So they also have what is known as an origin of replication, which is often abbreviated ori, which basically allows this plasmid to be replicated such that copies of the plasmid are passed on to the subsequent generation of bacteria. And we can take advantage of this plasmid system in bacteria, because we could take, let's say, a piece of foreign DNA-- and this foreign DNA could be of eukaryotic origin. We could take a piece of eukaryotic DNA and insert it inside of this plasmid. And we can basically use the plasmid as a sort of platform or a vector to carry the piece of DNA that we might be interested in and to use the bacteria to replicate that DNA and pass it on to its descendants so that you essentially have a clone of bacteria, and you have a clone of this DNA in a given bacterial population. So, again, this would have an origin and maybe an ampicillin resistance to it. So how would you determine whether or not bacteria have a plasmid in it? Can you think of an experiment you could do to determine whether the bacteria has this plasmid? Stephen? AUDIENCE: You could add ampicillin, and it'll survive with the plasmid. ADAM MARTIN: Exactly. So what Stephen suggested is that if he wanted to know whether or not this bacteria had the plasmid in it, he would add ampicillin to the media. And if the bacteria doesn't have the plasmid, it won't be able to grow. But if it has the plasmid, it encodes this gene that confers ampicillin resistance, and it will thus be able to grow. So that's exactly right. So you're able to select for bacteria with a given plasmid by simply growing it on media that contains an antibiotic. I now want to go through steps in cloning a piece of DNA.
And we'll go through it sort of as a series of ordered steps so you can see how the process works. I'm going to start with a step to cut the DNA. After cutting the DNA, we'll then mix pieces together. Once we mix the pieces together, we'll do something known as a ligation, and I'll explain that to you in just a minute. And then, finally, we'll end with selecting the plasmids that have the piece of foreign DNA that we want. And this is known as recombinant DNA, because you've recombined a piece of DNA from one organism-- it could be a eukaryotic organism, like humans-- with a piece of DNA that's from a prokaryotic cell, a bacterium. So we then have some sort of selection process. So we're going to go through this step-by-step. And we're going to start with cutting DNA, OK? So, cutting DNA. And it turns out, we've talked about-- what type of enzyme do you think would cut DNA? Just generally, not as specific as what's up on the slide. What type of enzyme would cleave DNA? Think about how enzymes are named. AUDIENCE: DNase? ADAM MARTIN: Yeah, so Stephen suggested DNase, which is a really good suggestion, right? So an enzyme that will cut DNA would be a DNase. Another word for that is a nuclease. It's some type of nuclease. And the type of nuclease we're going to talk about is going to be an endonuclease. We talked about exonucleases, which chew DNA from the ends in the context of DNA replication. But what an endonuclease is, is it's a nuclease that's going to recognize a fragment of DNA and cleave it in the middle. So it doesn't require an end. It's going to chop it right in the middle. And there's a type of endonuclease, and these are called restriction endonucleases. They are nucleases that are natively present in a lot of different bacteria. And these restriction endonucleases essentially look through the DNA sequence, and they recognize a specific sequence of nucleotides and make a cut right at that sequence. So I have a few examples to show you here.
The first is this EcoR1 restriction endonuclease. EcoR1 is from E. coli, and it recognizes the sequence GAATTC. So it recognizes this six-nucleotide sequence, and it cleaves the double-stranded DNA on the top strand between the G and the A and on the bottom strand between the G and the A, OK? And you can see that the two cuts are staggered. So when this cut is made, it leaves the DNA with two ends, and they're sticky, because there's a five-prime overhang at each end. So each end has this TTAA sequence. And these nucleotide bases can base pair with the complementary sequence. So this sequence could base pair with a sequence that has an end AATT. So these two ends that I've generated here could stick to each other, or there could be other ends that have the TTAA sequence that could stick to them. So another example is this Kpn1 endonuclease. And this is from a different bacterium. But again, it cleaves the DNA on the two strands. And this time, it cleaves the top strand farther down the sequence. And that generates what's known as a three-prime overhang. But again, you have an overhang. So this is what is known as having a sticky end. Because again, these nucleotides are available to base pair with a complementary sequence. The final type of restriction enzyme that I'll tell you about is EcoRV, which is a different enzyme from E. coli. And this generates a break, but this time, it cuts at the same position on both DNA strands. And that generates an end that's known as a blunt end, because there's no overhang, and there are no nucleotides that would sort of basically recognize a complementary sort of end like the sticky ends do, OK? So these are several of many, many different types of restriction endonucleases that are present in a wide range of bacteria. So then, now that you have a tool that allows you to cut DNA, you could then cut DNA from two different sources. And I've outlined that here. The vector is what the plasmid DNA is commonly referred to.
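The cut pattern just described, EcoR1 recognizing GAATTC and cutting between the G and the A, can be sketched in a few lines of Python. This is a toy illustration only, not part of the lecture; for real sequence work one would use something like Biopython's Bio.Restriction module.

```python
# Toy sketch of restriction digestion (illustrative only).
ECORI_SITE = "GAATTC"  # EcoR1 recognition sequence
CUT_OFFSET = 1         # EcoR1 cuts the top strand between G and A: G^AATTC

def ecori_digest(seq):
    """Cut a linear top-strand DNA string at every EcoR1 site.

    Because the cut is staggered, each downstream fragment begins with
    the single-stranded AATT overhang and each upstream fragment ends
    in ...G -- the "sticky ends" described in lecture.
    """
    fragments = []
    start = 0
    pos = seq.find(ECORI_SITE)
    while pos != -1:
        fragments.append(seq[start:pos + CUT_OFFSET])  # ends in ...G
        start = pos + CUT_OFFSET                       # next begins AATTC...
        pos = seq.find(ECORI_SITE, start)
    fragments.append(seq[start:])
    return fragments

print(ecori_digest("AAAGAATTCTTTGAATTCCCC"))
# -> ['AAAG', 'AATTCTTTG', 'AATTCCCC']
```

A sequence with two GAATTC sites yields three fragments, each internal boundary carrying a sticky end.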
So we commonly refer to this prokaryotic part of the plasmid, the vector DNA, and the part that we're trying to insert that's the foreign DNA, the insert. That's just kind of the lingo in the field. So here, I have a plasmid. It looks like a plasmid. It has an origin of replication. It has ampicillin resistance. And it has this EcoR1 site, which just means that this DNA sequence has a GAATTC, OK? So it's something that will be recognized by this restriction endonuclease. And then if you cut this plasmid with EcoR1, you end up with a linear piece, right? So I, at 6:00 AM, started engineering this here. So let's say I have my plasmid DNA, and I cut it at the EcoR1 site. Then I cleave it. If you cleave a circle, now you have a linear piece of DNA, OK? But it has sticky ends, right? And these sticky ends-- so pretend that this is a foreign piece of DNA. This is my eukaryotic DNA. Let's pretend it carries the gene elastin. And this has ends, too. And if they're EcoR1 ends, then they will be able to stick to the sticky ends of the plasmid. And if you just get one sticking, now you have this piece of DNA which is two different fragments in tandem. But it's going to be moving around in space in the cytoplasm. And at some point, it might be recognized and stick to the other EcoR1 end. And then you have now a circular piece of DNA again, but now your circular piece of DNA has this piece of foreign DNA that's present inside the vector, which is the sort of poster tape here. So I just wanted to show you that. So you can kind of-- when you're doing molecular biology, you have to imagine the sort of ends sticking to each other and how they're going to sort of wrap around and connect for the final product. OK, so let's say you get DNA, and your DNA could be eukaryotic DNA from, let's say, humans or flies or whatever your favorite eukaryote is. And in the genome of that organism, there will be many restriction sites.
But if you chop it up, you will get various fragments that have sticky ends for EcoR1 on both sides. And then if you mixed the vector and the insert together, you have some probability of getting that insert to be incorporated into the vector. And then once this is present and ligated together, you can then put this into bacteria, and it will be replicated. So we focused on cutting, but if you mix these together, like I showed you at the tape, you're going to have sticky ends that come together and stick together. And you'll eventually get a situation where you have your insert-- so you have your insert here that might have your gene of interest and your vector DNA here. But when these things initially stick together, you don't have a single molecule where everything is covalently attached, right? You just have these base pair interactions between the two overhangs as they stick to each other through base pair interactions. So if we think about what's going on right here, you have, initially, if we're thinking about EcoR1, a sequence that is-- oh, sorry, this should be a C. This is the nucleotide sequence, but when it's sticking together, the top strand will have a free five-prime phosphate on this adenosine, and there will be no covalent bond between this adenosine and this guanosine. So that's what the top strand would look like. The bottom strand would look like this, where there are covalent bonds along the DNA backbone. Sorry. Had a mutation there. And this is incorporated in a broader sequence. And this bottom strand will have a free five-prime phosphate here and a free three-prime hydroxyl here, but there's no covalent bond here. There's no covalent bond here, right? So, at this stage, you don't have a single piece of DNA. You have two pieces of DNA that are interacting with each other through base pair interactions. So, eventually, you want it to be a single piece of DNA. 
And so you have to perform a step that is known as-- sorry, my phosphate got in the way. But you want to perform what's known as a ligation, where you take something that's just sticking together through nucleotide base pair interactions and you add a type of enzyme, which is called DNA ligase, to catalyze the formation of covalent bonds here and here. So DNA ligase is an enzyme that if you have a free five-prime phosphate here and a free three-prime hydroxyl, it'll catalyze the formation of a phosphodiester bond between these two bases in the DNA. So this DNA ligase forms a phosphodiester bond. And you go from having no bond there to having a bond. So then you eventually would get this sequence now, where you have covalent bonds between each of the base pairs. And what you see here is you've recreated the EcoR1 site. So when you get two EcoR1 sticky ends sort of recognizing each other and sticking to each other and then you ligate them together, you recreate that nuclease site so that you can cleave it again with the EcoR1 enzyme. So now, moving on. I'll move on right here. So the last step is that once we have pieces of DNA with this insert-- and let's say we're trying to find a piece of DNA from a eukaryotic organism. We might start with an animal. We have to extract its chromosomal DNA, digest it with a restriction enzyme, and then we digest the vector with the same restriction enzyme. And then we're going to make-- we're going to randomly insert these pieces of DNA into different vectors such that each bacteria who gets one of these plasmids will be replicating a different piece of DNA that's of eukaryotic origin. And this is what is known as a DNA library. So this is making a DNA library. And a DNA library is essentially a collection of different pieces of DNA that are from some source, OK? But different bacterial clones will be replicating a different piece of that DNA. So you can see the challenge now is to find the needle in the haystack, right?
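The sticky-end annealing and ligation described above can be modeled in a toy Python sketch. The function names and example sequences here are my own illustrations, not a real molecular biology API; the point is to show that joining two EcoR1 ends regenerates the GAATTC site, as the lecture notes.

```python
# Toy model of sticky-end annealing and ligation (illustrative only).
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def can_anneal(overhang_a, overhang_b):
    """Two 5' overhangs anneal if one is the reverse complement of the other.

    EcoR1 overhangs (AATT) are self-complementary, so any two EcoR1
    ends can stick to each other.
    """
    revcomp_b = "".join(COMP[b] for b in reversed(overhang_b))
    return overhang_a == revcomp_b

def ligate(upstream_top, downstream_top):
    """DNA ligase seals the nicks, giving one covalently joined top strand."""
    return upstream_top + downstream_top

vector_end = "TTTG"    # vector top strand up to the EcoR1 cut (...G)
insert = "AATTCGGGG"   # insert top strand, starting with the AATT overhang

assert can_anneal("AATT", "AATT")  # EcoR1 ends are mutually sticky
product = ligate(vector_end, insert)
print(product)  # TTTGAATTCGGGG -- the GAATTC site is regenerated
```

Because the rejoined product contains GAATTC again, the ligated junction can be re-cut with EcoR1, exactly as stated in the lecture.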
You're trying to find that one piece of DNA which is the one you're interested in. And I'll talk about several strategies that you can use to find the piece of DNA that you're interested in. I'll focus on selection. But first, I just want to differentiate between two different types of ways you could search for a piece of DNA. You could do a screen. And this is similar to what we talked about on Monday, where you look through a whole population of individuals, and you look for a given phenotype. So in the case of flies, we talked about how Morgan's lab was looking for differences in eye color. And that was a screen, because they looked through a ton of normal flies to find the one they want. Another strategy would be to do what's called a selection, where you kill off everything that's not what you want by making the organism grow in some sort of selective condition. And then you only allow the organisms to grow that are the ones that you want. So this is called a selection. And I'm going to illustrate several examples of selections, just so that you get an idea of how this works. So the first example I'll give is antibiotic resistance. And as Stephen so kindly pointed out earlier in the lecture, the way we can select for bacteria that have taken up this plasmid is to select the bacteria that grow in the presence of the antibiotic. So let's say you had a population of bacteria and let's say this started out being sensitive to an antibiotic. You could transform them with DNA from a strain that's resistant. And maybe that resistant strain has a plasmid that has a gene that confers antibiotic resistance, in which case, if it's taken up by this sensitive bacteria, if you then grow it on a plate that has the antibiotic, you might get a colony or a clone of cells that has taken up the piece of DNA that you're interested in-- in this case, the piece of DNA that confers antibiotic resistance. Everyone see how that works? 
So you're selecting only the cells to grow here that have taken up this antibiotic resistance gene. I'm going to use another example now from yeast, and it involves functional complementation. And I'm going to start with something that involves the biosynthesis of an essential amino acid. And then I'm going to go to a more interesting case, which is a case that involves an experiment that involved the identification of the master regulator of cell division in humans. But we'll start with just amino acid biosynthesis. And there are mutants of yeast known as auxotrophs. And these are mutant yeast, or mutant microorganisms, that fail to produce an essential nutrient. So an auxotroph is a mutant that fails to make a nutrient that's essential. So fails to make a nutrient. And the opposite of an auxotroph is called a prototroph. A prototroph is essentially a normal-functioning microorganism that's able to produce all of the essential nutrients that it needs in order to grow and survive, OK? So this produces all nutrients. And so let's say you had an auxotroph for leucine. So you had a strain that if you didn't provide leucine to it, it would fail to grow. So we'll start with a leucine auxotroph. And let's say you want to identify the gene that's defective in this leucine auxotroph. You'd perform a similar strategy to this one, where you'd have your auxotroph that you're transforming, so your auxotroph down here. And you would transform that strain with DNA from what organism? If you're trying to identify the functional gene, what organism would you use to produce the DNA you're going to transform into that organism? AUDIENCE: Prototroph? ADAM MARTIN: Javier's exactly right. You'd use the prototroph, right? So in this case, you would use DNA from the prototroph, because the prototroph has a functional copy of that gene. You know it does, because it's able to grow without adding leucine. 
And then you could take the auxotroph mutant that's transformed with DNA from the prototroph, and you plate it on media. And what should be a property of the media? Should leucine be present or absent? Carmen? AUDIENCE: Absent. ADAM MARTIN: It should be absent, exactly right. So you'd look on plates that lack leucine or minus for leucine. And you'd select for colonies that now are all of a sudden able to grow without leucine. So you've restored the function of leucine biosynthesis in this clone, and you've made it into a prototroph again. OK, this is what's known as functional complementation, because you're taking a cell that is defective in some function, and you're complementing it. You're complementing or rescuing the phenotype, OK? Now, even as a former yeast geneticist, I don't find amino acid biosynthesis and functional complementation in the context of leucine all that exciting. So I want to present one last example that involves an experiment that is going to involve the yeast cell cycle mutants. And I'm going to tell you about the experiment that led to the cloning of the master regulator of cell division in humans. And it involves a yeast mutant, and specifically, a yeast cell cycle mutant. And these yeast cell cycle mutants are what are known as conditional mutants. They are isolated as conditional mutants, meaning that these mutants are able to grow under certain conditions, but not others. And specifically, the condition they used is temperature, so they're temperature-sensitive mutants. The yeast cells can grow at 25 degrees, but not at 37 degrees. So this is known as a temperature-sensitive mutant, where you can propagate the mutant at one temperature, but then you can see if you raise the temperature, then it stops growing. And so you can see the mutant phenotype, because normal wild type functional yeast can grow at both temperatures. So this is a special type of mutant.
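The selections described above, whether ampicillin resistance or growth without leucine, all boil down to the same filtering logic: only clones carrying the functional gene survive the selective condition. A toy simulation (the fragment names and numbers are made up for illustration):

```python
# Toy simulation of a genetic selection (illustrative only).
# Each transformant takes up one random library fragment; only clones
# with the functional LEU gene grow on plates lacking leucine.
import random

random.seed(0)  # reproducible toy example

library_fragments = ["LEU", "fragment_A", "fragment_B", "fragment_C"]
transformants = [random.choice(library_fragments) for _ in range(1000)]

def grows_without_leucine(insert):
    # Functional complementation: only the LEU insert restores biosynthesis.
    return insert == "LEU"

survivors = [t for t in transformants if grows_without_leucine(t)]
print(f"{len(survivors)} of {len(transformants)} clones grow on -leu plates")
```

The selective plate does the filtering for you, which is what distinguishes a selection from a screen: you never have to inspect the clones that die.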
And I'm going to tell you about an experiment done by Paul Nurse, who's an excellent yeast geneticist. And what he did was he used these yeast cell cycle mutants to identify the human gene for what's now known as cyclin-dependent kinase, or CDK for short. This is the master regulator of cell division in organisms ranging from yeast all the way up to humans, OK? But he used yeast as a model system to identify this gene. And the process was he took yeast cells-- and Paul Nurse worked on fission yeast cells, which are rod-shaped cells. And he identified yeast mutants. Yeast mutants. And he had a mutant in the CDK gene of yeast. He didn't really know it at the time. But the yeast CDK mutant-- what he knew was that this mutant was critically involved in the cell cycle in numerous types of yeast. So he knew this is an important gene. And what he wanted to do was to identify if humans had an equivalent gene that could function in the same way. So if you just have this CDK mutant and do nothing to it, it will not grow at 37 degrees, OK? But what he did was to take a DNA library-- similar to what I showed you before, where you just chop up DNA from an organism. In this case, he's using a human DNA library. And he used a particular type of library, but I'm going to skip over that for now and come back to it later. So he used a human DNA library. That's just a collection of pieces of DNA from a human source, OK? So he's taking human DNA, putting it into a yeast plasmid, and transforming yeast with that human DNA. And he's looking for a piece of DNA that's able to complement the CDK mutant, meaning the yeast cells would then be able to grow at the non-permissive temperature of 37 degrees. So he then looks for, on a plate, colonies of yeast that are growing at the non-permissive temperature of 37 degrees. So if you didn't do anything with this mutant, if you didn't transform in the DNA, nothing would grow. 
But he identified pieces of human DNA that rescued the phenotype of this mutant, OK? And so these are yeast that have the human gene for CDK, and they now grow. And this is a functional complementation experiment, because you're rescuing the growth of this yeast now not with a yeast gene, but with a human gene. And this human CDK gene is so conserved across eukaryotes that it's able to still function in a yeast cell, which is pretty amazing. So this just outlines the experiment here. At 25 degrees, these yeast mutants can grow and form colonies. And at that temperature, you can transform the yeast with different pieces of human DNA. Most of the human DNA is not going to be what you want. You're looking for the needle in the haystack. So most of these are not going to grow at 37 degrees. But you're looking for this guy here that gets the human CDK, and that restores growth to this mutant strain. So voila, you get a colony of cells that are growing. And boom, Paul Nurse wins a Nobel Prize, along with a number of other people who were working on cell cycle mutants. This is one of the experiments that led to the 2001 Nobel Prize for a bunch of yeast cell cycle geneticists. All right, so I've told you about how to find the needle in the haystack. And this was more common when we didn't know the genome sequence of an organism. But now I want to tell you how knowing the genome sequence of an organism would allow you to replicate and amplify a piece of DNA in vitro. So I'm going to tell you about an approach known as Polymerase Chain Reaction, or PCR. And what PCR is, is it's an in vitro method. So it's an in vitro approach to essentially amplify DNA. And so let's say you have a piece of DNA-- it could be a piece of DNA in the genome-- and you know the sequence of this DNA. And it has base pairs between the two strands. So, normally, for DNA replication to occur, what do you need? What needs to happen here? Can a polymerase get in now? No? Why not?
Miles? AUDIENCE: The DNA's going to-- because they'll try and [INAUDIBLE] away [INAUDIBLE] from each other, so you have to [INAUDIBLE].. ADAM MARTIN: Yeah, you have to unwind it, right? So you have to denature it first. So if you denature it, now you have two single-stranded pieces of DNA, right? Now what would a polymerase need to replicate that? Yeah, Jeremy? AUDIENCE: A primer. ADAM MARTIN: A primer. Exactly, right? And if you know the sequence, you can have a company synthesize a primer that's the exact sequence here and base pairs here. And I'll just draw the five-prime end of the primer right there. And now this primer has a free three-prime hydroxyl here. And if you added a polymerase, it would synthesize this bottom strand here. So this is known as the forward primer. And on the other strand, you can design a primer that's complementary to these bases here. Again, the five-prime end is out. This would be known as the reverse primer. And then you could have the DNA polymerase synthesize the opposite strand. All right, so the step here will be to first denature. So the first step would be to melt or denature the DNA, double-stranded DNA. So you denature the double-stranded DNA. This is commonly done above 90 degrees Celsius. And then the next step is once you have these single-stranded pieces of DNA, you can have a primer present that anneals to the opposite strands. So you can have primer annealing. And this is commonly done between 50 and 60 degrees Celsius. You have to cool it down so that the primer can now base pair, such that not everything is denatured. So you have to cool it down for these primers to recognize their cognate sequence and base pair with it. And then once you have the primer annealed to the template, then you can add DNA polymerase to synthesize a new strand. So DNA polymerase for new strand synthesis. And this is commonly done at around 70 degrees Celsius. And then you can repeat this process over and over again.
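The three thermal-cycling steps just listed (denature, anneal, extend) can be sketched in Python. The template and primer sequences here are invented for illustration; a real PCR design would also consider melting temperature and primer specificity.

```python
# Minimal PCR sketch (toy model; sequences are invented).
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return "".join(COMP[b] for b in reversed(seq))

def amplicon(template_top, fwd_primer, rev_primer):
    """Region amplified between a forward and reverse primer.

    The forward primer anneals to the bottom strand, so it matches the
    top strand directly; the reverse primer anneals to the top strand,
    so its reverse complement is what appears in the top-strand sequence.
    """
    start = template_top.find(fwd_primer)
    end = template_top.find(revcomp(rev_primer)) + len(rev_primer)
    return template_top[start:end]

template = "GGGGATCGATCGTTTTTAACCGGTTAAGGGG"
fwd = "ATCGATCG"
rev = revcomp("AACCGGTTAA")  # reverse primer, written 5'->3'

print(amplicon(template, fwd, rev))  # ATCGATCGTTTTTAACCGGTTAA

# Each cycle (denature, anneal, extend) roughly doubles the product,
# so starting from one molecule, n cycles give about 2**n copies:
print(2 ** 30)  # roughly a billion copies after 30 cycles
```

The exponential doubling is why PCR works for forensics: even trace amounts of template DNA can be amplified into easily detectable quantities.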
And at each step, you're going to double the amount of DNA that you have between these two primers. So let me just-- this is just a figure illustrating this. It's on the handout and online. Basically, you have your original double-stranded piece of DNA. You denature it and allow the primers to anneal. New strand synthesis. Then you take these new pieces of double-stranded DNA, denature them. The primers anneal to those new strands, and now you get new strands. And you just keep doing this cycle over and over again, and you essentially amplify the piece of DNA that's between the two primer sequences. So this is often used in forensics, because you can have very little DNA, and just by adding primers, you can really amplify the number of pieces of DNA you have between these two primers. So you go from having very little DNA to a lot of DNA. OK, any questions about PCR? I'm going to move on to something that-- all right. I've really been focused on discovery up to this point. But I know that a number of you are engineers, and you probably want to engineer something. And so I've had to-- I'm going to tell you about a field that is moving so rapidly, I'm going to probably have to totally revamp my lecture for next year, OK? And I'm going to tell you about genome editing. So the last part of this story, genome or DNA editing. And I'm going to tell you about a specific type of system called CRISPR-Cas9, which has been in the news a lot, and there's a lot of excitement about this approach in the context of editing the human genome and possibly curing genetic diseases. Who here has heard of CRISPR-Cas9? OK, good. That's good. Our media is doing its job. So who knows what it is? OK, some of us know what it is. I just want to just give you a very superficial overview of what it is and why it's important.
And I'm going to keep coming back to it during the course of the semester, because I think it raises a lot of ethical questions, and especially in the context of stem cells. I need you to know the foundation before we get into the really good stuff. So, let's see. So we're going to engineer something. So we're going to talk about repairing DNA. And if we want to edit the genome, the way this is most often done is by making a double-stranded break, OK? So if you make a double-stranded break in a piece of DNA, it can be repaired one of two ways. One is by non-homologous end joining, where the two pieces of DNA are basically just shoved back together again. And this results, often, in mutations. So if you're trying to fix something, unless you're just trying to break it, that's probably not what you want. But an alternative approach to DNA repair that organisms have is something called homology-directed repair. In this case, you can break a piece of DNA and add a piece of DNA that has a different sequence, but with homology near where the double-stranded break is. And in that case, you can replace the original sequence with what you provide as donor DNA. So it gives you an ability to essentially change the DNA sequence at a given locus if you're able to cleave the DNA at a specific locus. So, first, I want to start with just a thought experiment, right? You're all thinking, OK, we need to cleave double-stranded DNA. And boy, I just gave you a perfect tool for that. I gave you all these restriction endonucleases, right? So what's the problem with those? Well, let's think about the human genome. The human genome is 3 billion base pairs. And an EcoR1 site looks like this-- GAATTC. So it's six nucleotides long. And if you think of just a random sequence of 3 billion base pairs, you would get this sequence randomly one out of every four to the sixth times. So that's going to be one every 4,096 times. 
So if you get this in random DNA 1 every roughly 4,000 times, if you use it to cleave the human genome, it's going to cleave hundreds of thousands of places in the human genome. So we need much more specificity if we want to select, let's say, a given gene that has a disease-causing allele and try to fix it. Because if we use a restriction endonuclease, we just chop up the whole genome, and that would be bad. So specificity is the name of the game here. This is not specific, and we need a tool that's more specific. And that tool is going to be CRISPR-Cas9. And what CRISPR-Cas9 is, is it's essentially an RNA-guided endonuclease. So it's RNA guided, and it's an endonuclease. Restriction enzymes, right, they have nothing to do with RNA. They don't use RNA to recognize the nucleotide sequence. It's just a protein, and the protein recognizes the nucleotide sequence. In CRISPR-Cas9, you have an endonuclease, which is Cas9. Let's bump this up. So the endonuclease is the-- the Cas9 is the protein. That's the endonuclease. But its selection of a target depends on an RNA molecule that it's bound to, OK? So the specificity comes, at least in part, from what's known as a guide RNA, or single guide RNA. That's what's most often used in genome editing. So this guide RNA basically allows this enzyme to find a specific sequence. And the guide RNA is 20 nucleotides, or looks for homology for a 20-nucleotide base pair sequence. So you can see, already, we're doing way better than the six base pair recognition motif. We have 20 nucleotides. And there are other components of the system which increase the specificity. Then you have your Cas9 in blue, which is the endonuclease, your RNA, the guide RNA, in black, and the template here is in gray. And what you see is this RNA sort of exhibiting complementarity to this target sequence. And only if there's complementarity between the RNA and the target will this endonuclease get activated and cleave at this site. 
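The specificity arithmetic above (a 6 bp restriction site versus a 20 nt guide RNA match, in a 3 billion bp genome) works out as follows, assuming random sequence with equal base frequencies. Real genomes are not random, so these are only rough expectations.

```python
# Back-of-the-envelope specificity math (assumes a random genome with
# equal base frequencies).
GENOME_SIZE = 3_000_000_000  # ~3 billion bp, roughly the human genome

def expected_sites(recognition_length, genome=GENOME_SIZE):
    # A specific k-base sequence occurs about once every 4**k bases.
    return genome / 4 ** recognition_length

print(f"6 bp site (EcoR1-like): ~{expected_sites(6):,.0f} expected sites")
print(f"20 nt guide RNA match:  ~{expected_sites(20):.4f} expected sites")
```

The 6 bp site turns up on the order of 700,000 times, matching the "hundreds of thousands" in the lecture, while a specific 20 nt match is expected less than once per genome, which is why the RNA guide gives Cas9 so much more specificity than a restriction enzyme.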
So the RNA is sort of serving like a guide dog for this endonuclease to guide it to a certain location to cleave. So the idea, then, is if you want to edit the genome-- and why people are so excited about this these days is you now have a system that might allow you to generate a double-stranded break in one specific place in the genome. And if you can do that, then if you provide donor DNA that maybe has a different sequence-- if you consider a disease allele, right? Let's say you know there's a gene that when there's a certain allele causes an inherited form of a disease. You could then take donor DNA from an unaffected individual and take cells from the affected individual and cut the locus that's problematic and get a repair of the defective allele using a normal allele of the gene. And that would essentially rescue the function of that gene if it were then reintroduced into the patient, OK? Do you see sort of roughly how this works? So this is a very sort of broad and general sort of conceptual framework for how this might happen. Let's say you have an individual with a blood disorder-- let's say sickle cell anemia or beta thalassemia. Those are inherited blood disorders, which lead to anemia. You could remove cells, and what might be the best are the stem cells from a patient. And you could then take those stem cells and use CRISPR-Cas9 in vitro in cell culture to edit that individual's cells to repair the genetic defect. And you could then reintroduce those to the patient, where if they're stem cells, they'd reoccupy the stem cell niche and produce functional blood cells that would then essentially cure the individual of the disease. This is how scientists are thinking about the use of the system nowadays. And this hasn't really been successful yet, but there are several clinical trials that are currently underway, where people are trying to show that this can be used to treat human genetic diseases. 
So in the next year, you are going to hear more about this, almost undoubtedly, as we start to hear the results of some of these patients. There are concerns about this, as well. I don't want to overblow it. There are certainly concerns. We don't know this is going to work. I mean, people have been talking about this type of stuff since I was a student 20 years ago. But I feel like we're getting-- we're much more advanced now, and the tools are more advanced. And so I feel like we're kind of getting to the point where there's a much greater chance that this will be successful these days than it was 20 years ago. I just want to point out where this system-- how it was discovered and where it came from. And I like this as an example. Much like for the fly genes that defined major signaling pathways, this is a discovery that came from fundamental research on, basically, the ecology of bacteria. So this CRISPR-Cas9 system essentially evolved in bacteria as a form of an arms race between bacteria and their predators, bacteriophage, which are viruses that infect bacteria. So this is an arms race between bacteria and their vicious predators, bacteriophage. And what CRISPR is, where these enzymes and this system evolved from, is this is a form of an adaptive immune system for bacteria, which is pretty wild in and of itself. So CRISPR is an adaptive immune system for bacteria. If you haven't gotten your flu shot, you should get it. We'll talk about human immunity later in the semester. But this is where bacterial immunity kind of-- I'm sneaking it in. So the way this works in bacteria-- what CRISPR stands for is Clustered Regularly Interspaced Short Palindromic Repeats. So you can see already, thank god they gave it an acronym. Otherwise, it wouldn't be getting nearly as much buzz, because no one can say that.
And so where this CRISPR came from is on the chromosomes of many bacteria, there are these clusters of regularly interspaced short palindromic repeats, and the repeats are interrupted by spacers. And what researchers discovered is that these spacers have sequence similarity and identity to sequences that are from bacteriophage. So each of these colored sequences here has some type of complementarity to some type of bacteriophage. And so when a phage infects bacteria, or some bacteria, what happens is that there's a system to recognize that foreign genetic element and take a piece of it and insert it in the genome. And that serves as a memory for the bacteria to remember that it got infected by that particular phage. And then, later on, what the bacteria does is it transcribes this region and forms what are known as mature CRISPR RNAs, where you can see there's some sequence that would recognize a foreign genetic element. So, therefore, in the future, if this phage came around again, what would happen is one of these CRISPR RNAs would recognize the foreign genetic element through base pair complementarity, and it would know to cut it. And after the target is cut, it's then degraded by the bacterial cell. So this is a way for bacteria to remember what viruses have infected them and to have a defense mechanism against it. So it's a pretty cool system. You know, what's also cool about this system is it's an adaptive immune system, similar to how we sort of recognize foreign pathogens. What's different about it is this is heritable. It's incorporated into the genome. And the more phage the descendants of this bacteria see, the more of these repeats you see. So this is a heritable immune system, which, unfortunately, we don't have. So you should still get your flu shot. We'll talk about vaccination later on. Have a good few days, and I will see you on Friday.
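As a toy illustration of the repeat-spacer architecture just described: splitting the locus on the repeat recovers the spacers, each one a record of a past phage infection. All sequences below, including the repeat, are invented for illustration only.

```python
# Toy CRISPR array: repeat-spacer-repeat-spacer-repeat. The repeat and
# spacer sequences here are made up; real repeats and spacers differ.

REPEAT = "GTTTCAGAGCTA"  # hypothetical direct repeat

def extract_spacers(locus, repeat=REPEAT):
    # The pieces left between consecutive repeats are the spacers,
    # i.e. the bacterium's "memory" of phage it has encountered.
    return [piece for piece in locus.split(repeat) if piece]

array = REPEAT + "AAACCCGGGTTT" + REPEAT + "TTTGGGAAACCC" + REPEAT
spacers = extract_spacers(array)
# spacers == ["AAACCCGGGTTT", "TTTGGGAAACCC"]
```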
MIT 7.016 Introductory Biology, Fall 2018
Lecture 29: Cell Imaging Techniques
ADAM MARTIN: OK. So I'm going to start out today's lecture on the wrong foot by quoting somebody that you guys probably don't know and who was a New York Yankee. So Yogi Berra, the famous Yankee catcher once said, "You can observe a lot by watching," OK? And that is very appropriate for biology because a lot of things in biology have been discovered simply by watching for them in cells or watching for them to happen at the molecular level. And so our ability to visualize and see what's going on inside cells and at the molecular level is really critical for the process of biological discovery. So today I'm going to tell you about tools, both sort of older tools but also kind of the cutting edge, for how biologists are really observing what's going on in living cells and in life in general. OK. So let me start by just having you guys think a little bit. What do you require of me to see what I write on the board? Yeah, Rachel. AUDIENCE: Light. ADAM MARTIN: What's that? AUDIENCE: Light. ADAM MARTIN: You need light. And what does the light help you to do? What's that? AUDIENCE: [INAUDIBLE] ADAM MARTIN: You need it to see the board. And so let's say the light's on, OK? Is that, can you read this? No, what's the problem? What's that? Size, right? So Natalie suggested size. Right, so one thing that you need is some amount of magnification, right? But let's take another-- let's say I do magnify this. What if I magnify it? And I'm going to start writing my notes on the board, right? How is this? Helpful? Jeremy, what's wrong? AUDIENCE: Differentiate [INAUDIBLE].. ADAM MARTIN: Yeah. You have to be able to distinguish different objects, in this case, these letters, right? So in addition to just magnifying it, you also need the structures to be far enough apart such that you can distinguish them. So you need what is known as resolution. OK. This was resolution. But this is resolution where the letters are actually resolved, OK? 
So structures need to be far enough apart so that you can resolve them. Now let's come back to Rachel's point, right? Why is it that we need light to see what's on the board? Right? What if I draw without pressing? Right? Is that-- yeah, Orey. AUDIENCE: You need contrast. ADAM MARTIN: You need contrast. Exactly, right? The light sort of gives you contrast between the chalk and the black part of the board. So you also need contrast. And contrast is the ability to-- the structures need to be differentiated from the background, OK? So structures need to be different from background. What else do you need to read my writing? Right? What if I were just to-- everyone can read that? What's wrong? Carlos? AUDIENCE: Needs to be clear and legible-- ADAM MARTIN: What's that? Yeah. I need to have, like, good handwriting, right? So I like to think of this as this is an aspect of sample preservation, OK? So there's a sample preservation issue. I can't butcher the letters and the words. OK. So in the process of doing all these other things, right, magnifying your image, resolving things in your image, and generating contrast, you can't destroy your sample such that it's illegible, basically, OK? So in this case, structure must be preserved while doing one through three on this list. OK. So I'm going to start with resolution. We'll talk about what the limits are to resolving things in biological specimens. And in biology, the one instrument we use a lot is a microscope, OK? And a microscope is basically a collection of lenses that allow you to do many of the things I just drew on the board. I'll point out a couple of broad types of microscopy. So the human eye up here can resolve down to about 100 to 200 microns, if you're looking at something at reading distance, right? But cells are like way smaller than that, right? So we need some sort of instrument that allows us to see things that are smaller than the resolution limit of a human eye.
And so one way is to use a light microscope where you're using visible light to observe your sample. And many of the images that you're seeing that we're showing you are from visible light microscopes. To see smaller things, the type of microscope that's often used is an electron microscope. And electron microscopy allows us to observe structures all the way down to the sub nanometer level of resolution, OK? Now one limitation to the electron microscope is, you have to kill the sample, OK? So that can lead to artifacts and problems. And we'll discuss a way at the end that light microscopy is being extended down to limits that approximate that of an electron microscope. OK. So what determines, then, the resolution of a microscope? And so I'm going to sort of define a measurement of resolution which I'll call the d-min, or minimum distance. And this will be the minimum distance between two points that can be resolved. OK. And what I showed you on that past slide is basically the limit on the right here is the d-min for these different types of microscopy techniques, OK? And what that means is, if your minimum distance is 200 nanometers and two points are greater than 200 nanometers apart from each other, then you can distinguish them as two different objects. However, if they are closer than 200 nanometers together, you wouldn't be able to see that these are two different objects. They would be overlapping each other, OK? And typically, the d-min for a light microscope is around 200 nanometers, OK? And if I'm to tell you what determines this minimum distance, we have to think about a microscope. So here, I'm drawing a specimen here. I've just drawn my specimen. It's on a slide or a cover slip. Here's your specimen here. And you might have a light source to generate the contrast. And then there'd be some sort of objective lens underneath the slide and the specimen.
So this would be an objective lens. Sorry about my sample preservation here. And so the light is going to be hitting the sample. And the objective lens will be collecting a cone of light that's going into the lens here, OK? And maybe I'll magnify this a bit so you can see it better. So I'm just going to magnify this region over here. So if this is my specimen, I'm going to draw the objective a little farther away this time. This is the objective. And the objective is able to capture a range of different angles of light that come from the specimen, OK? So it's collecting angles. And I'll just define here an angle theta, which is like the 1/2 angle of this whole cone of light, OK? So what determines the resolution limit in this type of system is, first of all, the wavelength of the light that's used. So if you're using white light, that might be from 400 to 800 nanometers. If you're exciting GFP, you're going to excite with a wavelength that's 488 nanometers. So it's usually around maybe between 450 and 550 nanometers for many different fluorescent proteins. OK. So lambda here is the wavelength of the excitation light you're using. And this is all then divided by 2 times the NA, which is a property of the objective. And what NA is, NA stands for numerical aperture. And what that is, is basically the range of angles that this objective can collect, OK? So it's N sine theta, where theta is this angle here. So you get the best performance if the objective can collect all of the light that comes from this side of the sample. OK, and then N refers to the refractive index of the media that this light is going through. OK. And so if you have an objective and you have your sample in here and there's a slide and a cover slip-- I'll extend this out-- you often have immersion oil. There'd be some sort of immersion media here. And I don't know if you've ever used a microscope that's meant to be used with immersion media and you don't add that immersion media.
But your image quality, if you don't add that media, is like really bad, right? And that's because you're affecting the numerical aperture of what this lens can collect. And therefore, you degrade your image quality, OK? But basically, the more light, the more angles of light that you collect, the higher the numerical aperture. And therefore, the lower this d-min is going to be. And so the greater you'll be able to resolve objects that are near each other in space. OK. So the take home message from all of this is that you notice that magnification is not a part of this. But the wavelength of the light is really critical, OK? So usually, this minimum distance ends up being the wavelength of light that you're using divided by 2. And this usually ends up being about 200 nanometers. So that's the diffraction limit of a light microscope, as you see up there. And this resolution is basically limited by the diffraction or behavior of light, OK? So light microscopy is limited by the diffraction of light. And it was thought for a long time that no matter what you do, you'd never be able to break this limit of about 200 nanometers. But at the end of the lecture, I'll tell you about some very smart people who figured out a way to actually break this limit. And we'll talk about how they were able to do that. Now I want to talk about a few other limitations of microscopy. And I'm starting by showing you this electron micrograph of the endoplasmic reticulum. And one important consideration you have to make is two dimensional versus three dimensional structure. So for electron microscopy, you basically cut the sample so you have a very thin slice of it. It's like slicing bread except these slices are on the order of 30 to 60 nanometers in height. And then you pass an electron beam through the sample after it's stained in order to visualize your specimen. And one thing you have to keep in mind is that, you're looking at a slice through this. 
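Stepping back to the resolution formula for a moment, it can be checked numerically. The wavelength, refractive index, and half-angle below are typical illustrative values, not numbers from any specific instrument.

```python
import math

def d_min_nm(wavelength_nm, n, theta_deg):
    """Minimum resolvable distance: d_min = lambda / (2 * NA),
    where NA = n * sin(theta)."""
    numerical_aperture = n * math.sin(math.radians(theta_deg))
    return wavelength_nm / (2 * numerical_aperture)

# 488 nm light, oil immersion (n ~ 1.515), a wide collection half-angle:
limit = d_min_nm(488, n=1.515, theta_deg=67)
# comes out a bit under 200 nm, consistent with the lecture's rule of thumb
```

Dropping the immersion oil (n back to 1.0) makes the numerical aperture smaller and the minimum distance larger, which is exactly why skipping the immersion media degrades the image.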
And it doesn't give you three dimensional information, OK? So if we were to think about the endoplasmic reticulum, you might have an endoplasmic reticulum. And if you take a slice through this, then you would see something like this where you see each of the stacks individually. And so this might lead you to conclude that the way that the endoplasmic reticulum is structured is it's kind of like a stack of pancakes, where each of these, you have a lipid bilayer surrounding a lumen of the ER. So right, the lumen would be inside like this for each of these. And they're just stacked on top of each other. And this is the textbook model for endoplasmic reticulum structure. But actually, if you don't consider this in 3D, you might miss something. And what was missed was reported in 2013 in this paper, where rather than just taking a single slice, what they did is they made lots of slices. And they kept track of where they are. So they basically did a three dimensional reconstruction of the endoplasmic reticulum. And by imaging this other dimension, they came to a very different conclusion about how the endoplasmic reticulum is organized. And instead of being stacks of membranes on top of each other like pancakes, instead, it's a helicoid. So this is an ER from a professional secretory cell, like a salivary gland cell. And you can see in 3D, you get a very different picture of the organization of this organelle. It's actually membrane stacks wrapped around in a spiral, OK? So their model is that, basically, the endoplasmic reticulum in some cell types has a parking garage-like structure, OK? So in this case, in these cells, it seems like the ER is basically a parking garage for ribosomes. OK. And you don't get that information unless you consider the three dimensional structure of the thing that you're looking at. So in addition to electron microscopy, there are techniques that involve light microscopy that involve optical sectioning.
And so normally, if you're looking at fluorescence, if you're doing fluorescence microscopy, you'd be exciting the whole volume of your sample and exciting all of the fluorophores such that fluorescence from out of the focal plane would be getting into your image. And that would give you a much more hazy, unclear image. But there are techniques such as confocal fluorescence microscopy that allow you to exclude the out of focus light such that you're basically getting an optical section through your sample. And that can give you a much cleaner and better resolved image, OK? Now I want to talk a little bit about another dimension, which is time. And again, you're seeing images in textbooks. And usually, you're just seeing a single image. And whenever you see a single image, you have to think about how things might be changing in time in order to understand the system. So one example that I like here is shown here, where these are different proteins that are labeled in a yeast cell. And you see that these proteins form patches at the edge of the yeast cell. And some of these patches just contain the green protein, which is SLA1. And other patches contain just the red protein which is ABP1. And there's another class of patch which contains both, OK? So how might you interpret this fixed image over here? What might be one model you would conclude? Well, what was initially concluded from this type of experiment is you have three different types of patches that are distinct from each other in the cell because they have different molecular compositions, OK? And that was what was initially thought. But it was wrong because researchers had to really consider the aspect of time in this problem. And I'm going to show you a movie now over here where you're going to see this yeast cell. And now you have these different proteins tagged with different fluorescent proteins. And we can watch them in time as they progress through a stereotypic cycle. 
So what you're going to notice in this movie is that you see these green patches appear. And every single green patch at some point is joined by red. And then the green disappears and the red stays around. And then the thing disassembles, OK? So what was initially thought to be three different structures in the cell, eventually, it was found out that there was a dynamic process where this patch sort of matured over time and eventually disappeared into the cell. And what this process is, is actually endocytosis in yeast. And you're seeing different proteins getting recruited to endocytic vesicles as they bud from the plasma membrane of this yeast. OK. So that's just my caution in interpreting fixed images, because you have to think about how they might be changing in time. All right. So now we have to consider contrast. Bright field microscopy basically involves white light as your light source. And so you'd have a microscope that has a white light source. You might have your specimen here. Here's your specimen. Then you'd have some sort of detector at the end of your system. And there would be some objective lens in between, which I'm going to ignore for now. And so, for bright field microscopy, you're taking a sample and shining light on it. The light that doesn't interact with your sample will go right through to the detector. And that's your background. But some of this light, the light that's going and hitting your sample, could be absorbed or it could be refracted. And it's the refraction or absorption of this light which generates the contrast for bright field microscopy. So in bright field, native structures in the cell absorb or refract light. And this is what generates the contrast. OK. So the images shown up here are bright field images of cells. And in each of these cases, there's no dye. There's no fluorescent protein. But you're able to see the outline of the cell.
And you're able to see even individual organelles or structures within the cell that are interacting with the white light and generating contrast, OK? So that's one way to generate contrast is just hope that whatever is native in your cell generates the contrast. But there are also-- you can increase contrast in specimens by adding dyes. And if these dyes bind to specific structures like a membrane, then that will increase your contrast. So the electron microscopy images that I showed you-- so for electron microscopy, you generate contrast by adding a dye that is an electron-dense dye, which will bend the electron beam. And that's what allows you to get an image from an EM. So an EM contrast is from an electron-dense dye such as uranyl acetate or some other type of dye. Now, fluorescence microscopy, as Professor Imperiali showed you, involves taking a fluorescent molecule and attaching it to your protein of interest. So you're actually getting protein-specific contrast, which is very useful. OK? And the way a fluorescence microscope works is just shown up here where you might have a light source that has a range of wavelengths. And you can use a filter to select one. In the case of GFP, it would be blue light or 488 nanometer light. And that would then be shined onto your specimen. And then the light is absorbed by fluorophores in your specimen. Some energy is lost, such that the light that's emitted from GFP is a longer wavelength, in this case, green. And then you can filter again to make sure only the green light is what goes to the detector. So this is a very efficient way of generating contrast because you can use filters to select only the wavelength of light that is emitted from your fluorescent molecule. OK. Any questions about that and about my very short version of how fluorescence microscopy works? Yeah, Rachel? AUDIENCE: [INAUDIBLE] dichroic mirror? ADAM MARTIN: The dichroic mirror reflects certain wavelengths that are below a certain wavelength. 
And it will pass wavelengths that are above that cutoff. So it will basically reflect the excitation light. But it will pass the emitted light, OK? And so, there are tons of these mirrors. Some are not dichroic, but they can reflect four different wavelengths and pass all other wavelengths. And so, this allows you to image multiple fluorophores at the same time, OK? The specifics aren't as important as the general concept of how this works. OK. Now I want to come back to the resolution limit. And I want to tell you about how we can beat it. So, beating. We all like winning. So beating the diffraction limit. And this is going to involve a type of microscopy that's really sort of been developed in the past decade, which is known as super resolution microscopy. So super resolution microscopy. And remember, I mentioned to you before that yes, electron microscopy can get you nanometer resolution. But you have to kill the cell. And also, it's hard to get protein specific contrast, right? So that kind of sucks because as biologists, usually we're interested in how things are functioning while they're alive. So wouldn't it be great if we could somehow use light microscopy to get down into this nanometer range so that we can see how individual proteins are interacting with each other and organized at the nanometer level, OK? And so in the past decade, there's really been a revolution that's enabled us to do light microscopy with a resolution that gets down to 10 or even single nanometers. And there's a number of different super resolution techniques. I'm going to talk about just one of them. But both these techniques basically use the same concept, which is that they enable whoever's doing it to identify single molecules and define where those molecules are very precisely. And to turn fluorescent molecules on and off so that you can select individual molecules such that you can see them. So these are two different techniques. They're conceptually very similar.
I'm going to focus on this one here. But it's pretty much similar to this one up here. And I just want to point out that one of our colleagues here at MIT, Ibrahim Cisse who's in the physics department, his lab builds these super resolution microscopes. And they're using super resolution microscopy to study the collective behaviors of proteins, in his case, during the function of gene expression. OK, so this is research that's actively being pursued at MIT. So let's just do a thought experiment again. OK. I'm drawing a single molecule or what you would see in an image if you were looking at a single molecule GFP. Great. Where is GFP here? Carmen? AUDIENCE: It's right there on the board. ADAM MARTIN: It's right there on the board, right? Is it here? What's that? AUDIENCE: I don't know. ADAM MARTIN: Who thinks GFP is right here? Who does not think GFP is right there? You have to be thinking one or the other. Yeah, Rachel. AUDIENCE: [INAUDIBLE] ADAM MARTIN: OK. So what Rachel says is that it's probably not right here. It's probably in the middle of this thing, right? And so if you're seeing a diffraction limited spot, you're going to get some sort of Gaussian of intensity, which I didn't draw well here. But it might be a little bit brighter in the center and drop off as you go towards the edge, right? So if I were to take a image intensity profile along the line here, you'd see something that kind of looked like a Gaussian, OK? And GFP, if there's a single molecule that you're imaging, should be right in the center of this Gaussian, OK? And so even though we're not seeing a spot, we're seeing a spot that its width here is diffraction limited. So this width is 200 nanometers. But if we can estimate where the molecule is in this region with nanometer precision, we could get a very accurate view of where this fluorescent molecule is, OK? So it relies on certain assumptions. The first assumption is that you're assuming we can see single fluorescent molecules. 
So that we visualize single fluorescent molecules. And that we can then estimate with some amount of precision the location of the molecule based on this diffraction limited sort of image that we get. So then we have to estimate the location based on the image. OK. And then our resolution is basically the error in fitting this curve, OK? So the error in the fitted position is equal to the standard deviation of this Gaussian divided by the square root of the number of photons that you collected to get that image. So the square root of the number of photons. OK. And I just told you in the beginning of the lecture that this standard deviation is limited by the diffraction of light. So the standard deviation is going to be around 200 nanometers, right? But if you collect a lot of photons, you can accurately figure out where the fluorescent molecule is here if you know that it's a single molecule. OK? So the number of photons in a typical experiment is going to be around 10 to the fourth, OK? And so if you take 200 nanometers divided by the square root of 10 to the fourth, you're going to have nanometer-scale resolution if you do the experiment right. OK? So you really need to see single fluorescent molecules, however, in order to do this, OK? And the real breakthrough came with the realization that you could combine this type of fitting to estimate the position of single molecules with a certain type of fluorescent protein where you can turn the protein on and off stochastically, OK? OK. So we need one more component, which is a photo-activatable fluorescent protein. In this case, the first one was photo-activatable GFP, or PA-GFP. And PA-GFP is a fluorescent protein like GFP. It's genetically encoded. But when it matures, it's not fluorescent. It's in a dark state, OK? So when it matures, it's dark. It has a dark state. And it starts out in this dark state. But you can turn it on. And you can turn it on with light.
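Backing up to the fitting-error relation for a moment, the arithmetic works out like this, assuming the lecture's ballpark numbers (a ~200 nm spot width and roughly 10^4 collected photons):

```python
import math

def localization_error_nm(sigma_nm, n_photons):
    """Precision of the fitted center of a single-molecule spot:
    roughly sigma / sqrt(N_photons)."""
    return sigma_nm / math.sqrt(n_photons)

# Diffraction-limited spot width ~200 nm, but 10,000 photons collected:
error = localization_error_nm(200, 10_000)  # 200 / 100 = 2 nm
```

So even though the spot itself is 200 nm wide, its center, and hence the molecule, can be pinned down to a couple of nanometers.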
So that's where the photo activation is, because you're able to photo activate this fluorescent molecule. And you can photo activate with sort of UV light or 405 nanometer light. And so that's not normally the excitation wavelength. But if you shine your sample with 405 nanometer light, it will convert some set of your molecules into the now fluorescent state, OK? So this then causes it to be fluorescent. And now it's going to be lighting up. OK. And I want to thank Professor Cisse because he gave me the next slide which I think nicely shows how this technique works. So the way you can get super resolution is you can't be looking at all your fluorophores at once because they're not far enough apart and they'll all bleed together so that you get a bad image, right? So this would be your conventional diffraction limited image where all the fluorophores-- there's about 20 fluorophores here. And you can see, you can't see individual fluorophores and you can't see what that says, OK? But if you take a divide and conquer approach with this, if you have a photo-activatable GFP, you don't need to look at them all at once. You can just look at three to start, OK? So now if you only activate a small subset and you ensure that you're activating it at a frequency such that they're well resolved from each other, then you can distinguish that there are three single molecules here. You can fit where they are. And now you know where they are with nanometer precision. So you know where those are. And then you want to look at other molecules. And so you have to get rid of these. And so what you would do is to bleach them. And bleaching is to use light to basically damage the fluorophores and get it to no longer fluoresce, OK? So this process is going to involve an iterative photo activation followed by measuring and fitting the image so that you can basically determine where each single molecule is in your image. 
And then ending with bleaching to get rid of the fluorophores you just turned on so that everything is now dark again. And then you repeat this process iteratively to collect all of the single molecules that you can, OK? So in this case, we just got these three molecules. We would then want to bleach them so that we're now going to look at different fluorescent molecules. And we'll turn on a certain number of other fluorescent molecules. Here you see four. Here are two. They're a little close together, but you can see that there are two. Here are another two. They're close together, but you see two clear intensity peaks. And so you can fit those four. Now you have four more molecules to make up your image. You bleach them, excite or activate five more. Here are five fluorescent molecules. You can fit those. Determine their positions. And you just do this iteratively over and over again till you get as many molecules as you can. And at the end, you basically add all these images together to get the final super resolution image, OK? So this is an iterative process where the photo activation allows you to image single molecules such that you can see where they are with nanometer precision. And then you add them all together to get a super resolution image. Here is an example of this in practice. And this is the storm technique which doesn't involve a photo activatable fluorescent protein, but involves organic dyes blinking. And the concept is basically the same. And here you see a conventional image of an axon. And it's labeled with this beta-II spectrin. And you see beta-II spectrin is continuous. And it's staining in this axon. But if you look at the super resolution image, what you see is the beta spectrin actually has this repeated periodic pattern along the axon. And this is a cytoskeletal element that's basically present in rings up and down the axons of neurons. 
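Coming back to the activate-fit-bleach cycle: it can be simulated in one dimension to show the idea. Everything below, the molecule positions and photon counts, is made up for illustration, and for simplicity one molecule is activated per round rather than a sparse subset.

```python
import random

random.seed(7)

SIGMA_NM = 200.0   # diffraction-limited spot width
PHOTONS = 10_000   # photons collected per molecule

def localize(true_position_nm):
    """Fit the molecule's position as the mean of many photon positions
    drawn from the diffraction-limited Gaussian; precision is roughly
    SIGMA_NM / sqrt(PHOTONS), i.e. ~2 nm here."""
    samples = [random.gauss(true_position_nm, SIGMA_NM) for _ in range(PHOTONS)]
    return sum(samples) / PHOTONS

# Invented molecule positions in nm; the first two are only 50 nm apart,
# well below the diffraction limit.
true_positions = [100.0, 150.0, 400.0]

remaining = list(true_positions)
localizations = []
while remaining:
    # photo-activate one molecule, fit its position, then "bleach" it
    # so it never fluoresces again (removal from the pool)
    active = remaining.pop(random.randrange(len(remaining)))
    localizations.append(localize(active))
```

After the loop, each estimate lands within a few nanometers of a true position, so the two molecules 50 nm apart are cleanly separated in the reconstructed image even though they could never be resolved in a single conventional frame.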
And you can kind of think of this as like the axon is a vacuum hose, where you have these rigid sort of rings that are aligned all along the axon. But because it's repeated and you have intervening areas without much cytoskeleton, you can kind of think of it as a way for the axon to be both rigid but also flexible and maneuverable. I just wanted to point out that several super resolution techniques were recognized in the 2014 Nobel Prize in Chemistry. Eric Betzig on the left here developed the approach using the photo activatable GFP that I described to you. And two others, Stefan Hell and W.E. Moerner, were awarded it for other types of super resolution techniques. If you get a chance, you should go to the Nobel Prize website and listen to Eric Betzig's Nobel lecture. He has a very interesting story. And part of it involved how he managed to develop this technique. And he actually developed it in the living room of his friend. So this is actually one of the first super resolution microscopes. Here's the microscope and here you see-- I love this chair. But you see, you basically have this microscope in this guy's living room. So if you want to hear more about this story, listen to his Nobel lecture. He's a really funny guy and you get a sense of how science really works where you get this unemployed guy like building a microscope in his friend's living room and then wins a Nobel Prize. And just one reminder to end today, remember your news brief is due this Friday, November 30th. If you need help on selecting a topic, please see a member of our staff, including Professor Imperiali or myself. And so, good luck with that. Thank you. I'm all set.
Myths_from_Around_the_World
The_myth_of_Irelands_two_greatest_warriors_Iseult_Gillespie.txt
Cú Chulainn, hero of Ulster, stood at the ford at Cooley, ready to face an entire army singlehandedly— all for the sake of a single bull. The army in question belonged to Queen Meadhbh of Connaught. Enraged at her husband’s possession of a white bull of awesome strength, she had set out to capture the fabled brown bull of Ulster at any cost. Unfortunately, the King of Ulster had chosen this moment to force the goddess Macha to race her chariot while pregnant. In retaliation, she struck him and his entire army down with stomach cramps that eerily resembled childbirth— all except Cú Chulainn. Though he was the best warrior in Ulster, Cú Chulainn knew he could not take on Queen Meadhbh’s whole army at once. He invoked the sacred rite of single combat in order to fight the intruders one by one. But as Queen Meadhbh’s army approached, one thing worried him more than the grueling ordeal ahead. Years before, Cú Chulainn had travelled to Scotland to train with the renowned warrior Scáthach. There, he met a young warrior from Connaught named Ferdiad. They lived and trained side-by-side, and soon became close friends. When they returned to their respective homes, Cú Chulainn and Ferdiad found themselves on opposite sides of a war. Cú Chulainn knew Ferdiad was marching in Meadhbh’s army, and that if he succeeded in fending off her troops, they would eventually meet. Day after day, Cú Chulainn defended Ulster alone. He sent the heads of some of his adversaries back to Meadhbh’s camp, while the rushing waters of the ford carried others away. At times, he slipped into a trance and slayed hundreds of soldiers in a row. Whenever he saw the queen in the distance, he hurled stones at her— never quite hitting her, but once coming close enough to knock a squirrel off her shoulder. Back at the Connaught camp, Ferdiad was lying low, doing everything he could to avoid the moment when he’d have to face his best friend in combat.
But the Queen was impatient to get her hands on the prize bull, and she knew Ferdiad was her best chance to defeat Cú Chulainn. So she goaded him and questioned his honor until he had no choice but to fight. The two faced off at the ford, matching each other exactly in strength and skill no matter what weapons they used. Then, on the third day of their fight, Ferdiad began to gain the upper hand over the exhausted Cú Chulainn. But Cú Chulainn had one last trick up his sleeve: their teacher had shared a secret with him alone. She told him how to summon the Gáe Bulg, a magical spear fashioned from the bones of sea monsters that lay at the bottom of the ocean. Cú Chulainn called the spear, stabbed Ferdiad to death, and collapsed. Meadhbh seized her chance and swooped in with the rest of her army to capture the brown bull. At last, the men of Ulster were recovering from their magical illness, and they surged out in pursuit. But they were too late: Queen Meadhbh crossed the border unscathed, dragging the brown bull with her. Once home, Meadhbh demanded another battle, this time between the brown bull and her husband’s white bull. The bulls were well matched, and struggled into the night, dragging each other all over Ireland. At long last, the brown bull killed the white bull, and Queen Meadhbh was finally satisfied. But the brown bull’s victory meant nothing to him. He was tired, injured, and devastated. Soon after, he died of a broken heart, leaving behind a land that would remain ravaged by Meadhbh’s war for years to come.
Myths_from_Around_the_World
The_Hawaiian_story_of_the_wind_keepers_Sydney_Iaukea.txt
Long ago, La’amaomao, the Hawaiian wind goddess, wielded a gourd that housed the winds of the Islands. It came to hold her bones, along with the life force they carried, and was eventually passed to her grandson, Paka’a. He learned the hundreds of distinct winds that wafted and whipped around his homeland. Chanting their names, he could stir the skies and raise the waves. Like his father before him, he became the most trusted attendant to King Keawenuia’umi of Hawaii Island. But his privileged status also made him a target. Two of the king's seafaring navigators were especially envious. They knew Paka’a’s skills and responsibility to the king were divinely inherited, but they coveted his position. So they whispered rumors and eventually turned the king against his most loyal companion. Paka’a watched bitterly as he was stripped of his land and privileges. He fled, escaping the navigators who plotted to drown him as he sailed away, and took refuge on Molokaʻi, where he married a young chiefess. They brought a son into the world, but Paka’a never stopped imagining his return. He taught his son, Kuapaka’a, the way of the winds until Kuapaka’a was poised to avenge his father and restore his rightful place beside the king. Back on Hawaii Island, as the two navigators revealed their selfishness, the king realized how easily he’d been deceived and longed for Paka’a. Some of his more trustworthy attendants divined that Paka’a was still alive and told the king to construct canoes for a journey. However, Paka’a could not return so easily. First, the king’s loyalty and dedication had to be tested. As the king rallied his attendants, Paka’a’s ancestral spirits arrived in the form of two birds and rotted the trees he was using for canoe-building. Though exhausted, the king had his best archers shoot the birds, and he started again. Later, as Paka’a dreamed, the king’s spirit announced his search. 
However, Paka’a’s own spirit misled the king, saying he was on Ka’ula— not Molokaʻi. The king’s fleet soon set sail. As they passed Molokaʻi, Paka’a’s son, Kuapaka’a, greeted them, warning that a storm was brewing. He chanted the names of the winds, but kept his identity a secret, as per Paka’a’s plan. The king’s navigators dismissed the young boy’s claims. But as they sailed off, Kuapaka’a unleashed a vicious storm and all were forced to shelter on Molokaʻi. For four months, Kuapaka’a maintained the storm. With Paka’a’s secret supervision, he earned the king’s trust, and, after clearing the sky, Kuapaka’a agreed to join the king’s search. At sea, the two navigators continuously discredited Kuapaka’a. Finally, he readied himself for revenge and called the winds. As waves crashed, Kuapaka’a anchored the canoe and passed provisions to everyone— except the two navigators. They grew cold and weak, eventually falling overboard. But Kuapaka’a’s work wasn’t done. While everyone slept, he brightened the sky and sailed towards Hawaii Island instead of Ka’ula. Though the king regretted not finding Paka’a, everyone was glad to be home and forgot about Kuapaka’a— until the day he proposed a canoe race. He wagered his catch of flying fish against that of eight fishermen who had been appointed by the two treacherous navigators. They agreed, figuring it'd be an easy win. But Kuapaka’a called to La’amaomao, and a great wave whisked him ahead of his opponents. Enraged and convinced this was a fluke, the fishermen asked for a rematch. But this time, they demanded Kuapaka’a wager his bones against theirs. At first, the men paddled fiercely, with Kuapaka’a gliding effortlessly in their wake. As they tired, Kuapaka’a hurtled himself to victory. Hearing that eight of his fishermen were to die, the king asked Kuapaka’a to have mercy on them. But the time had come for Kuapaka’a to reveal his identity and have the King prove his commitment to Paka’a. 
Overcome, the king agreed to their deaths and asked to welcome Paka’a home, promising his lands and position would be restored. At last, the king and Paka’a were back at each other’s sides. Wielding the sacred wind gourd, Paka’a and Kuapaka’a ensured the names of the winds would never be lost, and those who understood them never undermined or forgotten.
Myths_from_Around_the_World
Japans_scariest_ghost_story_Kit_Brooks.txt
Looking at her father’s brutally murdered body, Oiwa was sick with despair. Her father had been Oiwa’s only hope for ending her marriage to the cruel and dishonorable samurai Iemon. And now, while her husband and brother-in-law vowed to find the culprit, Oiwa was trapped in her unhappy home with only the household servant Kohei to witness her suffering. What the grieving woman couldn’t guess, however, was just how close the killer was. After Oiwa’s father tried to end the marriage, it was Iemon who murdered him in cold blood. Hearing of her troubles, Oiwa’s wealthy doctor neighbor sent some medicine to soothe her. However, when Iemon went to offer thanks, the doctor revealed his gift was part of a sordid scheme. His beautiful young granddaughter was madly in love with Iemon, and if the samurai left Oiwa for her, the doctor would offer him great riches. Iemon happily accepted this bargain, and eager to marry his new bride, he sent a man called Takuetsu to dispose of his poisoned wife. But when Takuetsu arrived in Oiwa’s room, he was appalled. The poison had swollen her eye and her hair fell to the floor in bloody clumps. Taking pity, Takuetsu told Oiwa about the doctor’s scheme. Furious, Oiwa lunged for a sword. Takuetsu wrestled it away and flung the blade across the room. But when Oiwa ran to confront her husband, she stumbled, falling against the sword. Wounded and poisoned, Oiwa cursed Iemon’s name as the life left her body. At the discovery of his wife's demise, Iemon arranged to remarry that very night— but not before killing his servant Kohei, who heard Oiwa’s death. While Iemon celebrated his wedding, his friends nailed both corpses to a heavy door and sunk them in a nearby river. That night, Iemon reveled in his successful scheme. But suddenly his bride’s sleeping face shifted into Oiwa’s tortured features. Iemon acted on his violent instincts, slashing her throat. But when his fear subsided, he realized that he’d killed his new wife. 
He stumbled out of the room and into another monstrous figure wearing the face of his deceased servant. The samurai ran his sword through the man— only to discover he’d slain his new grandfather-in-law as well. Iemon fled the house, running frantically until he came upon a moonlit river. Here, he stopped to plot his next move, fishing as he thought. Soon his fishing rod began to twitch, but the harder he pulled, the heavier his catch became. Finally, a wooden door broke the river’s surface— with Oiwa’s writhing body on one side and Kohei’s on the other. Iemon ran for days, finally taking shelter in a mountain hermitage. Over the following months, he tried to convince himself these horrible visions were just illusions— but his nightmares never relented. One night, as he attempted to walk off another bad dream, a nearby lantern began to crackle and tear. The paper stretched larger and larger until Oiwa’s ghost appeared in a blaze of fire. Iemon begged for mercy, but Oiwa had none to offer. Over just 24 hours, the spirit slaughtered his parents and friends, and tortured the samurai with ravenous rats. Only when Iemon was truly hopeless did Oiwa enlist her brother-in-law to secure bloody justice for her and her father. In the 19th century, Oiwa’s quest for vengeance was one of the most popular kabuki theater performances, renowned for its grisly narrative and groundbreaking special effects. To depict Oiwa’s iconic transformation, designers hid bags of fake blood in her wig. And for her grand, ghostly entrance, Oiwa’s actor really would emerge from a flaming lantern, doing an assisted handstand to look as though she’s descending from above. Today, Oiwa is considered Japan’s most famous ghost, and her image continues to inspire counterparts in film and television. But those who retell her story still tread carefully, often asking her spirit’s permission at her rumored grave in Tokyo. 
In this way, modern storytellers continue to give Oiwa the respect— and fear— she so rightfully deserves.
Myths_from_Around_the_World
The_myth_of_Cupid_and_Psyche_Brendan_Pelsue.txt
"Beauty is a curse," Psyche thought as she looked over the cliff's edge where she'd been abandoned by her father. She'd been born with physical perfection so complete that she was worshipped as a new incarnation of Venus, the goddess of love. But real-life human lovers were too intimidated even to approach her. When her father asked for guidance from the Oracle of Apollo, the god of light, reason, and prophecy, he was told to abandon his daughter on a rocky crag where she would marry a cruel and savage serpent-like winged evil. Alone on the crag, Psyche felt Zephyr the West Wind gently lifting her into the air. It set her down before a palace. "You are home," she heard an unseen voice say. "Your husband awaits you in the bedroom, if you dare to meet him." She was brave enough, Psyche told herself. The bedroom was so dark that she couldn't see her husband. But he didn't feel serpent-like at all. His skin was soft, and his voice and manner were gentle. She asked him who he was, but he told her this was the one question he could never answer. If she loved him, she would not need to know. His visits continued night after night. Before long, Psyche was pregnant. She rejoiced, but was also conflicted. How could she raise her baby with a man she'd never seen? That night, Psyche approached her sleeping husband holding an oil lamp. What she found was the god Cupid, who sent gods and humans lusting after each other with the pinpricks of his arrows. Psyche dropped her lamp, burning Cupid with hot oil. He said he'd been in love with Psyche ever since his jealous mother, Venus, asked him to embarrass the young woman by pricking her with an arrow. But taken with Psyche's beauty, Cupid used the arrow on himself. He didn't believe, however, that gods and humans could love as equals. Now that she knew his true form, their hopes for happiness were dashed, so he flew away.
Psyche was left in despair until the unseen voice returned and told her that it was indeed possible for her and Cupid to love each other as equals. Encouraged, she set out to find him. But Venus intercepted Psyche and said she and Cupid could only wed if she completed a series of impossible tasks. First, Psyche was told to sort a huge, messy pile of seeds in a single night. Just as she was abandoning hope, an ant colony took pity on her and helped with the work. Successfully passing the first trial, Psyche next had to bring Venus the fleece of the golden sheep, who had a reputation for disemboweling stray adventurers. But a river god showed her how to collect the fleece the sheep had snagged on briars, and she succeeded. Finally, Psyche had to travel to the Underworld and convince Proserpina, queen of the dead, to put a drop of her beauty in a box for Venus. Once again, the unseen voice came to Psyche's aid. It told her to bring barley cakes for Cerberus, the guard dog of the Underworld, and coins to pay the boatman, Charon, to ferry her across the river Styx. With her third and final task complete, Psyche returned to the land of the living. Just outside Venus's palace, she opened the box of Proserpina's beauty, hoping to keep some for herself. But the box was filled with sleep, not beauty, and Psyche collapsed in the road. Cupid, now recovered from his wounds, flew to his sleeping bride. He told her he'd been wrong and foolish. Her fearlessness in the face of the unknown proved that she was more than his equal. Cupid gave Psyche ambrosia, the nectar of the gods, making her immortal. Shortly after, Psyche bore their daughter. They named her Pleasure, and she, Cupid, and Psyche, whose name means soul, have been complicating people's love lives ever since.
Myths_from_Around_the_World
신화에_숨어_있는_과학_호머의_오디세이_Matt_Kaplan.txt
Homer's "Odyssey", one of the oldest works of Western literature, recounts the adventures of the Greek hero Odysseus during his ten-year journey home from the Trojan War. Though some parts may be based on real events, the encounters with strange monsters, terrifying giants and powerful magicians are considered to be complete fiction. But might there be more to these myths than meets the eye? Let's look at one famous episode from the poem. In the midst of their long voyage, Odysseus and his crew find themselves on the mysterious island of Aeaea. Starving and exhausted, some of the men stumble upon a palatial home where a stunning woman welcomes them inside for a sumptuous feast. Of course, this all turns out to be too good to be true. The woman, in fact, is the nefarious sorceress Circe, and as soon as the soldiers have eaten their fill at her table, she turns them all into animals with a wave of her wand. Fortunately, one of the men escapes, finds Odysseus and tells him of the crew's plight. But as Odysseus rushes to save his men, he meets the messenger god, Hermes, who advises him to first consume a magical herb. Odysseus follows this advice, and when he finally encounters Circe, her spells have no effect on him, allowing him to defeat her and rescue his crew. Naturally, this story of witchcraft and animal transformations was dismissed as nothing more than imagination for centuries. But in recent years, the many mentions of herbs and drugs throughout the passage have piqued the interest of scientists, leading some to suggest the myths might have been fictional expressions of real experiences. The earliest versions of Homer's text say that Circe mixed baneful drugs into the food such that the crew might utterly forget their native land. As it happens, one of the plants growing in the Mediterranean region is an innocent sounding herb known as Jimson weed, whose effects include pronounced amnesia. 
The plant is also loaded with compounds that disrupt the vital neurotransmitter called acetylcholine. Such disruption can cause vivid hallucinations, bizarre behaviors, and general difficulty distinguishing fantasy from reality, just the sorts of things which might make people believe they've been turned into animals, which also suggests that Circe was no sorceress, but in fact a chemist who knew how to use local plants to great effect. But Jimson weed is only half the story. Unlike a lot of material in the Odyssey, the text about the herb that Hermes gives to Odysseus is unusually specific. Called moly by the gods, it's described as being found in a forest glen, black at the root and with a flower as white as milk. Like the rest of the Circe episode, moly was dismissed as fictional invention for centuries. But in 1951, Russian pharmacologist Mikhail Mashkovsky discovered that villagers in the Ural Mountains used a plant with a milk-white flower and a black root to stave off paralysis in children suffering from polio. The plant, called snowdrop, turned out to contain a compound called galantamine that prevented the disruption of the neurotransmitter acetylcholine, making it effective in treating not only polio but other diseases, such as Alzheimer's. At the 12th World Congress of Neurology, Doctors Andreas Plaitakis and Roger Duvoisin first proposed that snowdrop was, in fact, the plant Hermes gave to Odysseus. Although there is not much direct evidence that people in Homer's day would have known about its anti-hallucinatory effects, we do have a passage from 4th century Greek writer Theophrastus stating that moly is used as an antidote against poisons. So, does this all mean that Odysseus, Circe, and other characters in the Odyssey were real? Not necessarily. But it does suggest that ancient stories may have more elements of truth to them than we previously thought.
And as we learn more about the world around us, we may uncover some of the same knowledge hidden within the myths and legends of ages past.
Myths_from_Around_the_World
The_myth_of_Sisyphus_Alex_Gendler.txt
Whether it’s being chained to a burning wheel, turned into a spider, or having an eagle eat one’s liver, Greek mythology is filled with stories of the gods inflicting gruesome horrors on mortals who angered them. Yet one of their most famous punishments is not remembered for its outrageous cruelty, but for its disturbing familiarity. Sisyphus was the first king of Ephyra, now known as Corinth. Although a clever ruler who made his city prosperous, he was also a devious tyrant who seduced his niece and killed visitors to show off his power. This violation of the sacred hospitality tradition greatly angered the gods. But Sisyphus may still have avoided punishment if it hadn’t been for his reckless confidence. The trouble began when Zeus kidnapped the nymph Aegina, carrying her away in the form of a massive eagle. Aegina’s father, the river god Asopus, pursued their trail to Ephyra, where he encountered Sisyphus. In exchange for the god making a spring inside the city, the king told Asopus which way Zeus had taken the girl. When Zeus found out, he was so furious that he ordered Thanatos, or Death, to chain Sisyphus in the underworld so he couldn’t cause any more problems. But Sisyphus lived up to his crafty reputation. As he was about to be imprisoned, the king asked Thanatos to show him how the chains worked – and quickly bound him instead, before escaping back among the living. With Thanatos trapped, no one could die, and the world was thrown into chaos. Things only returned to normal when the god of war Ares, upset that battles were no longer fun, freed Thanatos from his chains. Sisyphus knew his reckoning was at hand. But he had another trick up his sleeve. Before dying, he asked his wife Merope to throw his body in the public square, from where it eventually washed up on the shores of the river Styx. Now back among the dead, Sisyphus approached Persephone, queen of the Underworld, and complained that his wife had disrespected him by not giving him a proper burial. 
Persephone granted him permission to go back to the land of the living and punish Merope, on the condition that he would return when he was done. Of course, Sisyphus refused to keep his promise, now having twice escaped death by tricking the gods. There wouldn’t be a third time, as the messenger Hermes dragged Sisyphus back to Hades. The king had thought he was more clever than the gods, but Zeus would have the last laugh. Sisyphus’s punishment was a straightforward task – rolling a massive boulder up a hill. But just as he approached the top, the rock would roll all the way back down, forcing him to start over …and over, and over, for all eternity. Historians have suggested that the tale of Sisyphus may stem from ancient myths about the rising and setting sun, or other natural cycles. But the vivid image of someone condemned to endlessly repeat a futile task has resonated as an allegory about the human condition. In his classic essay The Myth of Sisyphus, existentialist philosopher Albert Camus compared the punishment to humanity’s futile search for meaning and truth in a meaningless and indifferent universe. Instead of despairing, Camus imagined Sisyphus defiantly meeting his fate as he walks down the hill to begin rolling the rock again. And even if the daily struggles of our lives sometimes seem equally repetitive and absurd, we still give them significance and value by embracing them as our own.
Myths_from_Around_the_World
The_myth_of_Pegasus_and_the_chimera_Iseult_Gillespie.txt
Shielded from the gorgon’s stone-cold gaze, Perseus crept through Medusa’s cave. When he reached her, he took a deep breath, and in one sudden movement, drew his sickle and brought it down on her neck. Medusa’s head rolled to the ground and from her neck sprung two children. One of them was Chrysaor, a giant wielding a golden sword; The other was the magnificent, white, winged horse, Pegasus. He was swifter than any other steed, and with the stomp of his hooves, he could alter mountains and draw streams from dry rock. No bridle could contain him— until one fateful day. Bellerophon, prince of the Greek city-state of Corinth, seemed to have it all. But his ambitions exceeded his earthly circumstances. What he truly wanted was to be a hero so great that the gods would welcome him on Mount Olympus. Bellerophon believed that Pegasus would be key in helping him reach such heights. One night, he visited the temple of Athena, the goddess of war and wisdom, and prayed for the power to appease the mighty animal. When Bellerophon woke, he found a magical golden bridle, and sped to the fountain that Pegasus drank from. As soon as the horse bent towards the water, Bellerophon jumped on his back and slipped the bridle on. Finally, Pegasus was subdued. With this conquest, Bellerophon felt that he was on his way to becoming a legendary hero. He trained for battle day and night. But one training session went horribly wrong, and Bellerophon mortally injured his brother, Deliades. Disgraced, he was exiled to Argos, where King Proetus purified him. Bellerophon was resolved to repair his reputation, but the Queen of Argos had her eye on him. And when Bellerophon rebuffed her advances, she accused him of trying to seduce her, further tarnishing his honor. King Proetus soon devised a plan to exact revenge. He banished Bellerophon and Pegasus and sent them to the kingdom of Lycia, carrying a note to Iobates, Lycia’s king. 
But unbeknownst to Bellerophon, he was carrying a decree for his own death. Iobates considered how to dispose of the youth and picked just the right monster for the job: the fire-breathing lion-goat-dragon Chimera that had long been terrorizing his kingdom. Bellerophon— eager to achieve greatness— jumped at the challenge. He mounted Pegasus, and the two shot into the sky. Swooping above the earth, they saw the Chimera surrounded by its charred victims. Soon, they too were facing its firepower. In a sequence of agile aerial acrobatics, Pegasus dodged every blast from the Chimera as Bellerophon launched his arrows. Finally, Pegasus closed in on the beast at just the right angle, and Bellerophon dealt it a deadly blow. Iobates was incredulous. He was glad to be rid of the monster, but still needed to deal with Bellerophon. So, he set forth more challenges, putting Bellerophon up against fearsome warriors, highly skilled archers, and, ultimately, Lycia’s best soldiers. Every time, Pegasus’ power turned the tide in Bellerophon’s favor. Finally, Iobates had no choice but to concede that Bellerophon was a true hero. He even offered him his daughter’s hand in marriage. But Bellerophon’s sights were set far beyond the land of mortals. He was certain he must now be entitled to a place on Mount Olympus. So, he jumped onto Pegasus and urged him higher and higher. Zeus watched as Bellerophon, buoyed by hubris, neared his palace. To punish the youth, he released a single gadfly, which beelined towards Pegasus and bit into his flesh. This was as high as Bellerophon would ever get. As Pegasus flinched, he flung his rider into the air, and Bellerophon fell careening back to Earth. Pegasus, on the other hand, ascended with Zeus’s blessing. The gods welcomed him into the halls of Mount Olympus and immortalized him in a constellation. There in the night sky, Pegasus can be seen soaring, unfettered and free.
Myths_from_Around_the_World
The_Chinese_myth_of_the_immortal_white_snake_Shunan_Teng.txt
The talented young herbalist named Xu Xian was in trouble. It should have been a victorious moment– he had just opened his very own medicine shop. But he bought his supplies from his former employer, and the resentful man sold him rotten herbs. As Xu Xian wondered what to do with this useless inventory, patients flooded into his shop. A plague had stricken the city, and he had nothing to treat them. Just as he was starting to panic, his wife, Bai Su Zhen, produced a recipe to use the rotten herbs as medicine. Her remedy cured all the plague-afflicted citizens immediately. Xu Xian’s former boss even had to buy back some of the rotten herbs to treat his own family. Shortly after, a monk named Fa Hai approached Xu Xian, warning him that there was a demon in his house. The demon, he said, was Bai Su Zhen. Xu Xian laughed. His kindhearted, resourceful wife was not a demon. Fa Hai insisted. He told Xu Xian to serve his wife realgar wine on the 5th day of the 5th month, when demons’ powers are weakest. If she wasn’t a demon, he explained, it wouldn’t hurt her. Xu Xian dismissed the monk politely, with no intention of serving Bai Su Zhen the wine. But as the day approached, he decided to try it. As soon as the wine touched Bai Su Zhen’s lips, she ran to the bedroom, claiming she wasn’t feeling well. Xu Xian prepared some medicine and went to check on her. But instead of his wife, he found a giant white serpent with a bloody forked tongue in the bed. He collapsed, killed by the shock. When Bai Su Zhen opened her eyes, she realized immediately what must have happened. The truth was that Bai Su Zhen was an immortal snake with formidable magical powers. She had used her powers to take a human form and improve her and her husband’s fortunes. Her magic couldn’t revive Xu Xian, but she had one more idea to save him: an herb that could grant longevity and even bring the dead back to life, guarded by the Old Man of the South Pole in the forbidden peaks of the Kun Lun Mountains. 
She rode to the mountains on a cloud, then continued on foot past gateways and arches until she reached one marked “beyond mortals” hanging over a silver bridge. On the other side, two of the Old Man’s disciples guarded the herb. Bai Su Zhen disguised herself as a monk and told them she’d come to invite the Old Man to a gathering of the gods. While they relayed her message, she plucked some leaves from the herb and ran. The servants realized they had been tricked and chased her. Bai Su Zhen coughed up a magic ball and threw it at one. As the other closed in on her, she put the herb under her tongue for safekeeping, but its magic forced both of them into their true forms. As the crane’s long beak clamped around her, the Old Man appeared. Why, he asked, would she risk her life to steal his herb when she was already immortal? Bai Su Zhen explained her love for Xu Xian. Even if he didn’t want to be with her now that he knew she was a demon, she was determined to bring him back to life. The two had a karmic connection dating back more than a thousand years. When Bai Su Zhen was a small snake, a beggar was about to kill her, but a kind passerby rescued her. Her rescuer was Xu Xian in a past life. Touched by her willingness to risk her life for him, the Old Man permitted her to leave the mountain with the immortal herb. Bai Su Zhen returned home to revive Xu Xian. When he opened his eyes, the terrified look frozen on his face became a smile. Demon or not, he was still happy to see his wife.
Myths_from_Around_the_World
The_myth_of_Hades_and_Persephone_Iseult_Gillespie.txt
Every year before the ancient Greeks sowed their seeds of grain, they celebrated Demeter, the goddess of agriculture. On Earth, each morsel of food was sweet sustenance, while in the land of the dead, it ensured a permanent stay. Demeter tended to Earth’s fields with her beloved daughter, Persephone, who inherited her mother’s passions and grew into a bright young woman. But all the while, a shadowy figure watched from below. One day, Persephone was frolicking in a meadow with a freshwater nymph, Cyane. As they admired a blooming narcissus flower, they noticed it tremble in the ground. Suddenly, the earth split, and a terrifying figure arose. It was Hades, god of the dead and the underworld. He wrenched Persephone from Cyane, dragged her into his inky chariot, and blasted back through the earth. Cyane wept so hard she dissolved, becoming one with the river. By the time Demeter arrived at the scene of the abduction, the crater in the meadow had closed— and Cyane and Persephone had vanished. Demeter sped to Mount Olympus for help. Many of the gods had witnessed the scene. And they knew about the deal that paved the way for it: Zeus, Persephone’s father, had granted Hades her hand in marriage without her consent— or Demeter’s. But when faced with Demeter’s pleas, the other gods stayed silent. So, she searched alone. In her grief and desperation, she neglected her usual tasks. Crops withered, and a great famine plagued the Earth. As mortals began to perish, the gods grew wary. Who would worship them and offer tributes if the humans disappeared? So, Zeus ordered Demeter to stop her crusade and return to her duties. But she refused. Deep below, across the frigid river Styx, and through the halls of the underworld, Persephone was waging her own protest. Hades expected her to serve as his wife and queen. But Persephone rebuffed the god’s advances and refused food. 
As she longed for her mother’s company, her friends’ laughter, and the sun’s warmth, Persephone grew colder and lonelier. And she was starving. She hungered for satisfying grains, crisp vegetables and fresh fruit. Wandering the ghostly gardens, she contemplated the pomegranates that hung heavily on their branches... Meanwhile, Demeter continued her hunt. She appealed to the all-seeing sun god, Helios, whose rays had long warmed her crops. Indeed, when Helios drew his golden chariot across the sky that fateful day, he saw what happened— and he knew of the deal. Out of respect and sympathy for Demeter, Helios told her of Hades’ demands, Zeus’ betrayal and Persephone’s abduction. Furious and heartbroken, Demeter sped to Mount Olympus and confronted Zeus, demanding their daughter’s return. But Zeus declined: in her ravenous hunger, Persephone had eaten a few seeds from the pomegranate that grew in the underworld. Though a meager amount, it was enough to ensnare her in Hades forever. Demeter wouldn't accept this fate. She swore that if she wasn’t reunited with Persephone, the fields would never be fertile again, and the distinction between the Earth and the underworld would soon dissolve. So, they made a pact. For two thirds of every year, Persephone would return to the land of the living. But for the remainder, she would stay in the world of the dead. When Persephone ascended to Earth, she and her mother rejoiced. Together, they showered the fields with rain and nurtured them with sun. For mortals, Persephone’s arrival came to herald the start of spring. But her descent always came too soon. Each time she returned to Hades, Demeter mourned, and the earth grew cold, dark, and unyielding, ushering in the winter months. Knowing Demeter couldn’t be roused from her grief, the humans stored their crops, stoked their fires, and awaited Persephone’s safe return. 
And so it was that her transit marked the gradual turning of the seasons and the bittersweet compromise between life and death.
Myths_from_Around_the_World
The_myth_of_Anansi_the_trickster_spider_Emily_Zobel_Marshall.txt
As the sun sets on a plantation in Jamaica, children flock to Mr. Kwaku for a story. They all know he’s full of tales from Ghana, the land of their ancestors. But what they don’t know, Kwaku winks, is how their ancestors got those stories in the first place. Long ago, all stories belonged to Nyame, the all-seeing Sky God. People on Earth were bored and knew nothing about their history. But one creature decided enough was enough. Anansi, the tricky, shapeshifting spider, resolved to bring the stories down to Earth. He spun a web that stretched into the clouds and climbed up to confront the Sky God. Crouching at Nyame’s feet, Anansi shouted at the top of his lungs that he had come to take ownership of the world’s stories. Looking down from his golden stool, Nyame hooted with laughter at the spider’s absurd request. Nyame told Anansi that he could have all the stories he wished— but only if he could complete an impossible task. If Anansi brought him Osebo the Leopard, Onini the Python, Mmoboro the Hornet, and Mmoatia the Forest Spirit, then he could take the stories. Anansi humbly accepted. Nyame didn’t see him grinning as he scuttled away. Back on Earth, Anansi grabbed his magic bag and set to work. Anansi found Onini the Python bathing in the sun. Anansi scoffed that Onini couldn’t be the longest animal, saying he looked no longer than a piece of bamboo cane. Enraged, Onini stretched himself across the bamboo to prove his lengthiness. Anansi quickly bound him tight-tight to each end and placed him in his bag. Next, Anansi dug a great pit in the middle of the path Osebo the Leopard usually prowled, and covered it with banana leaves. Sure enough, mighty Osebo soon fell in. Anansi scolded Osebo for his carelessness, but offered to rescue him. As he helped Osebo out of the pit, Anansi swiftly jabbed him with his knife. Osebo fell back to the ground where Anansi wound him up tight-tight in spider thread. Then, Anansi heard Mmoboro and his hornets buzzing. 
He cautiously approached them. This would be tricky— their stings could make someone swell up and die— but Anansi knew they hated rain. He filled his mouth with water and spat it at the swarm. As they panicked, Anansi urged the hornets to shelter in his gourd, where they found themselves trapped. Anansi had one more task: to capture Mmoatia the elusive and mischievous Forest Spirit. She usually hid herself deep in the woods, but Anansi knew she was lonely. So, he made a little doll covered in sap and left it in her path. When she came upon it, Mmoatia spoke to the doll but became enraged when it didn’t answer. She hit the disrespectful doll and her small fists stuck to its sticky surface. Anansi wrapped Mmoatia up tight-tight and scooped her into his bag along with the other creatures. Triumphantly, he climbed his web back into the clouds. When the Sky God saw that Anansi had completed the impossible task, he was amazed. Nyame told Anansi that he had earned the world’s stories. Dancing for joy, Anansi gathered them up, stuffed the stories into his bag, and descended to Earth. There, he scattered the stories throughout the world for people to share. And they did, Kwaku tells the children. Generations have continued telling and reimagining Anansi’s stories, even after their tellers were stolen from Africa and enslaved. Anansi may be small, but “cunning’s better than strong,” Kwaku says, and tells the children to take Anansi’s stories with them wherever they go. Looking at his audience, Kwaku knows that Anansi will persist as a symbol of resourcefulness and resistance in the face of oppression, and a testament to the enduring power of storytelling.
Myths_from_Around_the_World
The_myth_of_Icarus_and_Daedalus_Amy_Adkins.txt
In mythological ancient Greece, soaring above Crete on wings made from wax and feathers, Icarus, the son of Daedalus, defied the laws of both man and nature. Ignoring the warnings of his father, he rose higher and higher. To witnesses on the ground, he looked like a god, and as he peered down from above, he felt like one, too. But, in mythological ancient Greece, the line that separated god from man was absolute and the punishment for mortals who attempted to cross it was severe. Such was the case for Icarus and Daedalus. Years before Icarus was born, his father Daedalus was highly regarded as a genius inventor, craftsman, and sculptor in his homeland of Athens. He invented carpentry and all the tools used for it. He designed the first bathhouse and the first dance floor. He made sculptures so lifelike that Hercules mistook them for actual men. Though skilled and celebrated, Daedalus was egotistical and jealous. Worried that his nephew was a more skillful craftsman, Daedalus murdered him. As punishment, Daedalus was banished from Athens and made his way to Crete. Preceded by his storied reputation, Daedalus was welcomed with open arms by Crete's King Minos. There, acting as the palace technical advisor, Daedalus continued to push the boundaries. For the king's children, he made mechanically animated toys that seemed alive. He invented the ship's sail and mast, which gave humans control over the wind. With every creation, Daedalus challenged human limitations that had so far kept mortals separate from gods, until finally, he broke right through. King Minos's wife, Pasiphaë, had been cursed by the god Poseidon to fall in love with the king's prized bull. Under this spell, she asked Daedalus to help her seduce it. With characteristic audacity, he agreed. Daedalus constructed a hollow wooden cow so realistic that it fooled the bull. With Pasiphaë hiding inside Daedalus's creation, she conceived and gave birth to the half-human half-bull minotaur. 
This, of course, enraged the king who blamed Daedalus for enabling such a horrible perversion of natural law. As punishment, Daedalus was forced to construct an inescapable labyrinth beneath the palace for the minotaur. When it was finished, Minos then imprisoned Daedalus and his only son Icarus within the top of the tallest tower on the island where they were to remain for the rest of their lives. But Daedalus was still a genius inventor. While observing the birds that circled his prison, the means for escape became clear. He and Icarus would fly away from their prison as only birds or gods could do. Using feathers from the flocks that perched on the tower, and the wax from candles, Daedalus constructed two pairs of giant wings. As he strapped the wings to his son Icarus, he gave a warning: flying too near the ocean would dampen the wings and make them too heavy to use. Flying too near the sun, the heat would melt the wax and the wings would disintegrate. In either case, they surely would die. Therefore, the key to their escape would be in keeping to the middle. With the instructions clear, both men leapt from the tower. They were the first mortals ever to fly. While Daedalus stayed carefully to the midway course, Icarus was overwhelmed with the ecstasy of flight and overcome with the feeling of divine power that came with it. Daedalus could only watch in horror as Icarus ascended higher and higher, powerless to change his son's dire fate. When the heat from the sun melted the wax on his wings, Icarus fell from the sky. Just as Daedalus had many times ignored the consequences of defying the natural laws of mortal men in the service of his ego, Icarus was also carried away by his own hubris. In the end, both men paid for their departure from the path of moderation dearly, Icarus with his life and Daedalus with his regret.
Myths_from_Around_the_World
The_myth_of_Zeus_test_Iseult_Gillespie.txt
It was dark when two mysterious, shrouded figures appeared in the hillside village. The strangers knocked on every door in town, asking for food and shelter. But, again and again, they were turned away. Soon, there was just one door left: that of a small, thatched shack. An elderly couple, Baucis and Philemon, answered the thunderous knock. Although there was something off about these visitors, it was in the pair’s nature to care for those in need. Philemon invited them to rest, and the cottage flooded with warmth as Baucis teased the fire back to life. When they were young, Baucis and Philemon had fallen in love, married, and settled in the humble cottage. Decades later, their home was still standing— and they were more devoted to each other than ever. The strangers watched intently as Baucis nestled twigs under a battered pot filled with vegetables. The couple could rarely afford meat, but in honor of their guests, Philemon cut strips from an aging shank for the stew. They made cheerful conversation and offered their visitors hot baths. Baucis used a chip of broken clay to balance the wobbly table’s legs and rubbed its surface with mint until it smelled sweet and fresh. Weaving around each other with care, the couple transformed what they had into a feast. Soon, the tabletop overflowed with food and the last of their sweet wine. Privately, Baucis and Philemon worried that their provisions would run out. Yet, as the night wore on, and their strange guests took hearty gulps of wine, the clay vessel never ran dry. The couple, at first relieved, grew terrified. Their guests weren’t humble peasants traveling the countryside. They were almost certainly gods in disguise— but which gods, they didn’t know. Panicked that their preparations were inadequate, Baucis and Philemon searched for another offering. The only precious thing left was the goose that guarded their home. The couple repeatedly lunged after the bird, but they were too worn out for the chase. 
So, they prepared to receive the wrath of the gods. Their guests rose up, shedding their rags and mortal masks. Looming before them was Zeus, the storm-brewing ruler of the gods, and his son, Hermes, the fleet-footed messenger who shepherded mortals to the underworld. The gods told the old couple that, unlike the other townspeople, they had shown true xenia, or loving hospitality to strangers. They alone had passed the test. The gods commanded the couple to follow them, and the group ascended the nearest mountain. Nearing the summit, Baucis and Philemon looked back— but were shocked to see a murky swamp where their village stood just moments before. As punishment for refusing to shelter the gods, Zeus and Hermes had cast the townspeople underwater, leaving only their hosts’ home intact. Recalling their friends and neighbors, Baucis and Philemon couldn’t hide their terror and mournful tears, even as their house transformed below. It grew larger and sprouted marble pillars and steps. Legends etched themselves onto its grand doors. Their rickety cottage had metamorphosed into a gleaming temple for the gods. Hermes commended the couple and gently asked if there was anything they desired. After a brief discussion, Philemon requested that he and Baucis be permitted to care for the new temple. And he asked if, when their time came, they could die together, so neither would have to face life without the other. Tending to the temple and one another, they lived many more years. Until, one day, Baucis noticed leaves fluttering from her husband’s hands and looked down to find her own skin hardening. They embraced, becoming rooted in place. Vines wound around their legs and canopies flourished overhead. They bid each other a loving, last farewell as humans. And where Baucis and Philemon had just stood, bent with age, there towered a linden and an oak tree, their branches intertwined for eternity.
Myths_from_Around_the_World
The_myth_of_Jason_Medea_and_the_Golden_Fleece_Iseult_Gillespie.txt
In the center of Colchis in an enchanted garden, the hide of a mystical flying ram hung from the tallest oak, guarded by a dragon who never slept. Jason would have to tread carefully to pry it from King Aeetes’ clutches and win back his promised throne. But diplomacy was hardly one of the Argonauts’ strengths. Jason would have to navigate this difficult task alone. Or so he thought. Leaving most of his bedraggled crew to rest, Jason made for the palace with some of his more even-tempered men. His first instinct was to simply ask the king for his prized possession. But Aeetes was enraged at the hero’s presumption. If this outsider wanted his treasure, he would have to prove his worth by facing three perilous tasks. The trials would begin the following day, and Jason was dismissed to prepare. But another member of the royal family was also plotting something. Thanks to the encouragement of Jason’s guardians on Mount Olympus, Medea, princess of Colchis and priestess of the witch goddess Hecate, had fallen in love with the challenger. She intended to protect her beloved from her father’s tricks — at any cost. After a sleepless night, Jason somberly marched to the castle— but was intercepted. The princess armed him with strange vials and trinkets, in exchange for a promise of eternal devotion. As they whispered and planned their victory, both hero and princess fell deeply under each other’s spell. Unaware of his daughter’s scheming, the king confidently led Jason to face his first task. The hero was brought to a huge field of oxen that lay between him and the fleece, and told that he had to plough the land around the crowds of oxen. A simple task— or so Jason thought. But Medea had concocted a fire-proof ointment, and so he plowed the flickering fields unscathed. For the second task, he was given a box of serpent’s teeth to plant into the scorched earth. As soon as Jason scattered them, each seed sprouted into a bloodthirsty warrior. 
They burst up around him, barricading his way forward, but Medea had prepared him for this task as well. When Jason hurled a heavy stone she had given him into their midst, the fighters turned on each other as they scrabbled for it, letting him slip by the fray. For the third task, Jason was finally face to face with the guardian of the Fleece. Dodging sharp claws and singeing breath, Jason scrambled up the tree and sprinkled a sweet-smelling concoction over the dragon. As the strains of Medea’s incantations reached its ears and the potion settled in its eyes, the dragon sank into a deep sleep. Elated, Jason climbed to the top of the tallest oak, where he slipped the gleaming fleece off its branch. When the king saw the hero sprinting away— not only with the fleece, but his daughter in tow— he realized he had been betrayed. Furious, he sent an army led by his son Absyrtus to bring the ill-gotten prize and his conniving daughter home. But all the players in this tale had underestimated the viciousness of these disgraced lovers. To the horror of the Gods, Jason ran his sword through Absyrtus in cold blood. Medea then helped him scatter pieces of the body along the shore, distracting her grieving father while the Argonauts escaped. As Colchis and their pursuers grew smaller on the horizon, a solemn silence fell aboard the Argo. Jason could now return to Thessaly victorious— but his terrible act had tarnished his crew’s honor, and turned the Gods against them. Buffeted by hostile winds, the wretched crew washed up on the island of Circe the sorceress. Medea begged her aunt to absolve them of wrongdoing— but bloody deeds are not so easily forgotten, and fallen heroes not so rapidly redeemed.
Myths_from_Around_the_World
The_Japanese_myth_of_the_trickster_raccoon_Iseult_Gillespie.txt
On the dusty roads of a small village, a travelling salesman was having difficulty selling his wares. He’d traversed the region just a few weeks ago, and most of the villagers had already seen his supply. So he wandered the outskirts of the town in the hopes of finding some new customers. Unfortunately, the road was largely deserted, and the salesman was about to turn back, when he heard a high-pitched yelp coming from the edge of the forest. Following the screams to their source, he discovered a trapped tanuki. While these raccoon-like creatures were known for their wily ways, this one appeared terrified and powerless. The salesman freed the struggling creature, but before he could tend to its wounds, it bolted into the undergrowth. The next day, he set off on his usual route. As he trudged along, he spotted a discarded tea kettle. It was rusty and old— but perhaps he could sell it to the local monks. The salesman polished it until it sparkled and shone. He carried the kettle to Morin-ji Temple and presented it to the solemn monks. His timing was perfect— they were in need of a large kettle for an important service, and purchased his pot for a handsome price. To open the ceremony, they began to pour cups of tea for each monk— but the kettle cooled too quickly. It had to be reheated often throughout the long service, and when it was hot, it seemed to squirm in the pourer’s hand. By the end of the ceremony, the monks felt cheated by their purchase, and called for the salesman to return and explain himself. The following morning, the salesman examined the pot, but he couldn’t find anything unusual about it. Hoping a cup of tea would help them think, they set the kettle on the fire. Within moments, the metal began to sweat. Suddenly, it sprouted a scrubby tail, furry paws and a pointed nose. With a yelp, the salesman recognized the tanuki he’d freed. The salesman was shocked. 
He’d heard tales of shape-shifting tanuki who transformed by pulling on their testicles. But they were usually troublesome tricksters, who played embarrassing pranks on travellers, or made it rain money that later dissolved into leaves. Some people even placed tanuki statues outside their homes and businesses to trick potential pranksters into taking their antics elsewhere. However, this tanuki only smiled sweetly. Why had he chosen this unsuspecting form? The tanuki explained that he wanted to repay the salesman’s kindness. However, he’d grown too hot as a tea kettle, and didn’t like being burned, scrubbed, or polished. The monk and salesman laughed, both impressed by this honourable trickster. From that day on, the tanuki became an esteemed guest of the temple. He could frequently be found telling tales and performing tricks that amused even the most serious monks. Villagers came from far away to see the temple tanuki, and the salesman visited often to share tea made from an entirely normal kettle.
Myths_from_Around_the_World
The_Egyptian_myth_of_the_death_of_Osiris_Alex_Gendler.txt
It was a feast like Egypt had never seen before. The warrior god Set and his wife, the goddess Nephtys, decorated an extravagant hall for the occasion, with a beautiful wooden chest as the centerpiece. They invited all the most important gods, dozens of lesser deities, and foreign monarchs. But no one caused as big a stir as Set and Nephtys’s older brother Osiris, the god who ruled all of Egypt and had brought prosperity to everyone. Set announced a game— whoever could fit perfectly in the chest could have it as a gift. One by one, the guests clambered in, but no one fit. Finally, it was Osiris’s turn. As he lay down, everyone could see it was a perfect fit— another win for the god who could do no wrong. Then Set slammed the lid down with Osiris still inside, sealed it shut, and tossed it into the Nile. The chest was a coffin. Set had constructed it specifically to trap his brother and planned the party to lure him into it. Set had long been jealous of his brother’s successful reign, and hoped to replace him as the ruler of all Egypt. The Nile bore the coffin out to sea and it drifted for many days before washing ashore near Byblos, where a great cedar grew around it. The essence of the god within gave the tree a divine aura, and when the king of Byblos noticed it, he ordered the tree cut down and brought to his palace. Unbeknownst to him, the coffin containing Egypt’s most powerful god was still inside. Set’s victory seemed complete, but he hadn’t counted on his sisters. Set’s wife Nephtys was also his sister, while their other sister, the goddess Isis, was married to their brother Osiris. Isis was determined to find Osiris, and enlisted Nephtys’s help behind Set’s back. The two sisters took the shape of falcons and travelled far and wide. Some children who had seen the coffin float by pointed them to the palace of Byblos. Isis adopted a new disguise and approached the palace. 
The queen was so charmed by the disguised goddess that she entrusted her with nursing the baby prince. Isis decided to make the child immortal by bathing him in flame. When the horrified queen came upon this scene, Isis revealed herself and demanded the tree. When she cut the coffin from the trunk and opened it, Osiris was dead inside. Weeping, she carried his body back to Egypt and hid it in a swamp, while she set off in search of a means of resurrecting him. But while she was gone, Set found the body and cut it into many pieces, scattering them throughout Egypt. Isis had lost Osiris for the second time, but she did not give up. She searched all over the land, traveling in a boat of papyrus. One by one, she tracked down the parts of her husband’s dismembered body in every province of Egypt, holding a funeral for each piece. At long last, she had recovered every piece but one— his penis, which a fish in the Nile had eaten. Working with what she had, Isis reconstructed and revived her husband. But without his penis, Osiris was incomplete. He could not remain among the living, could not return to his old position as ruler of Egypt. Instead, he would have to rule over Duat, the realm of the dead. Before he went, though, he and Isis conceived a son to bear Osiris’s legacy— and one day, avenge him.
Myths_from_Around_the_World
The_myth_of_Thors_journey_to_the_land_of_giants_Scott_A_Mellor.txt
Thor—son of Odin, god of thunder, and protector of mankind— struggled mightily against his greatest challenge yet: opening a bag of food. It’d all started when Thor, along with his fleet-footed human servant Thjalfi and Loki, the trickster god, set out on a journey to Jotunheim, land of the giants. Along the way, they’d met a giant named Skrymir, who offered to accompany them and even carry their provisions in his bag. But when they made camp, Skrymir dozed off and Thor couldn’t untie the sack. Frustrated and hungry, Thor tried to wake the giant three times by striking his head with his hammer Mjolnir as hard as he could. But each time, Skrymir thought it was only a falling acorn and went back to sleep. The next morning, Skrymir departed and eventually, the travelers reached a massive fortress called Utgard. Inside the long hall, they met the king of giants, Utgard-Loki, who greeted his guests with a challenge: each of them was to prove they were the best at some particular skill. Loki went first, declaring himself the world’s fastest eater. To test him, the king summoned his servant Logi and the two were placed at either end of a long trough stuffed with food. Loki ate his way inward with blinding speed. But when the contestants met in the center, Loki saw that his adversary had not only eaten just as much food, but also the bones and even the trough itself. Next was Thjalfi, who could outrun anything in the wild. The king summoned an ethereal-looking giant named Hugi, who outraced Thjalfi easily. But the boy would not give up and demanded a rematch. This time, Thjalfi finished close behind and the king admitted he’d never seen a human run so fast. Thjalfi tried a third time, running like his life depended on it, but Hugi was even faster than before. Finally, it was Thor’s turn. The king offered him a drinking horn, saying all his men could drain it in two gulps. 
Thor raised it to his lips and drank the surprisingly cold and salty mead in the longest gulp he could muster. Then a second. Then a third. But the level of the mead in the horn was only slightly lowered. To test Thor’s renowned strength, the king offered a seemingly easy challenge: lift his pet cat off the ground. But this cat was as tall as Thor. Every time he tried to lift it, it arched its back, and straining with all his godly might, he only managed to lift one paw. Enraged, Thor demanded to wrestle any of the giants. The king summoned the giants’ old nursemaid, Elli. Though the woman looked frail, Thor couldn’t overpower her and grew weaker the longer he struggled, until he was brought to one knee. The three companions prepared to leave, disappointed and humbled. But as the king escorted them out, he revealed that nothing in the castle had been what it seemed. Loki lost the eating contest because his opponent Logi was wildfire itself, devouring everything in its path. Thjalfi couldn’t outrun Hugi because Hugi was the embodiment of thought, always faster than action. And even Thor couldn’t defeat Elli, or old age, which weakens everyone eventually. As for the other challenges, they had also been illusions. The drinking horn was filled with the ocean, and Thor had drained enough to cause the tides. The cat was the serpent that encircles the world, and Thor’s efforts shifted the Earth. And Skrymir had been Utgard-Loki in disguise, deflecting Thor’s hammer-blows to form valleys in the surrounding mountains. The giant congratulated them on their prowess, which so frightened him he would never allow them in his land again. Thor and his companions failed the challenges presented to them. But in trying to achieve the impossible, they’d pushed themselves harder than ever before and changed the world in ways no one had expected.
Myths_from_Around_the_World
A_tour_of_the_ancient_Greek_Underworld_Iseult_Gillespie.txt
Achilles, welcome! I’m the Sibyl of Cumae, prophetess and avid reader of leaves. To clarify, you were just slain in the Trojan War. Sorry about that. It’s normal to feel mixed emotions right now. But you will be immortalized as one of the greatest warriors ever. And you’ll have endless distractions down here. So, pros and cons. It gets a bad rap, but the Underworld is actually a lovely place to “live.” It boasts historic charm and eccentric neighbors with eternal ties to the area. The community even has its own guard dog, Cerberus. Heel, boy! Oh. Sorry, I know that’s a sensitive spot. Anyway, with Cerberus, you get three for the price of one! He’s just not a big fan of anyone leaving. And who would want to leave anyway? This is the Styx— it’s like the subterranean riviera. But you’ve been here before; it was the source of your almost complete invulnerability, of course! The Underworld also features four other waterways: Acheron, the river of woe; Cocytus, river of wailing; Lethe, river of oblivion; and Phlegethon, river of fire, a great source of natural light. Now, on your left, you’ll see the Mourning Fields, inhabited by souls tormented by love. Quite an attractive place, really, when you’re not in the throes of endless heartbreak. And without further ado: Elysium, the Underworld’s exclusive VIP section— and your permanent home. Here, you'll join the ranks of royalty and heroes. Cadmus over there once slayed a dragon! And Patroclus is around here somewhere, along with lots of other friends and foes. I'll let you two get reacquainted soon, but our tour wouldn’t be complete without a quick whirl through the heart of Hades: Tartarus. Tisiphone here guards the portal. She's one of the legendary Furies and is particularly passionate about avenging murder. She never sleeps. So, if you need anything, just ask! Tartarus is reserved for a select few who some might call the greatest sinners of all time. Take Ixion. He was once a king. 
When he didn’t pay his wedding dowry, his father-in-law, Deioneus, stole his horses to get even. In retaliation, Ixion pitched Deioneus into a pit of fire. Ixion was banished, but Zeus miraculously took pity on him, and invited him to a Mount Olympian feast. There, however, it soon became clear that the disgraced king was trying to seduce Zeus’s wife, Hera. So, Zeus contrived a trap: a fluffy cloud that resembled Hera exactly. When Zeus had proof of Ixion having his way with the cumulus, well, you could say it was all nimbus from there. That landed Ixion on the flaming wheel. Poor thing. Oh, and don’t mind Tantalus here. He was part of the first generation of mortals, enjoying privileges like dining with the gods. Some say Tantalus stole ambrosia from Zeus, others that he doubted the omniscience of the gods and cooked his own son into a stew to see if they would notice. Naturally, they did. And as eternal punishment, when Tantalus reaches for food, the branches grow taller. And when he stoops to quench his thirst, the water recedes. And here we have the Danaids. At their father’s order, they beheaded their husbands on their wedding night. They must fill this basin with water. But, the trick is, their jars are cracked, so it always... just... leaks away. Oh, but don’t worry! No leaky appliances for you. Finally, our last stop on the tour is one of our loveliest vistas. From here, you can see the hill where Sisyphus pushes his boulder day after day, only for it to roll back down again— all for trying to cheat death. As you can see, Achilles, the Underworld is full of exciting amenities. Here, you don’t have to worry about brutal wars or painful cycles of revenge. You can finally just put your feet up and relax.
Myths_from_Around_the_World
The_Maya_myth_of_the_morning_star.txt
Chak Ek’ rose from the underworld to the surface of the eastern sea and on into the heavens. His brother K’in Ahaw followed. Though Chak Ek’ had risen first, K’in Ahaw outshone him, and the resentful Chak Ek’ descended back to the underworld to plot against his brother. In Mayan mythology, Chak Ek’ represents Venus and K’in Ahaw represents the sun. Known as both the morning and the evening star, Venus moves through the sky, sometimes visible before sunrise, sometimes after sunset, and occasionally not at all. The ancient Maya identified this roughly 584-day cycle more than a thousand years ago, and it still accurately predicts when and where Venus will appear in the sky around the world. Five of these cycles make up almost exactly eight years, and the Maya also recognized this larger cycle. They assigned Chak Ek’ five different forms, one for each cycle of Venus, that were repeated every eight years. Within the 584-day cycle, Venus is visible in the evening sky for 250 days, then disappears for 8 days before reappearing as the Morning Star. The ancient Maya ascribed particular significance to this point in the cycle: the first time Venus appears before sunrise after being invisible. On this day, Chak Ek’ rose again from the underworld, wielding a spearthrower and darts. To bring discord to the world, he decided to attack his brother and his brother’s allies. His first target was K’awiil, god of sustenance and lightning. Rising in the late rainy season, Chak Ek’ aimed his spear and struck K’awiil, causing damage to the food supply and a period of chaos in the social order until K’awiil was reborn. 584 days after attacking K’awiil, Chak Ek’ turned his attention back to his brother, the Sun. Each night, the Sun took the form of a jaguar and journeyed through the underworld. Chak Ek’ speared the jaguar sun as it rose at dawn towards the end of the dry season. The Sun was wounded, plunging the world into a period of chaos and warfare. 
Chak Ek’s third victim was the god of maize, who provided sustenance for all humankind. Chak Ek’ speared him at the time of the harvest. He was buried in the underworld, and maize—the staple of life— was no longer available to Earth’s inhabitants. But the maize god emerged after three months in the place of new beginnings– the eastern cave known as Seven Water Place– bringing food once again to earth. When the turtle Ak Na'ak rose in the sky to mark the summer solstice, Chak Ek’ claimed his fourth victim. With the death of this good omen, the Sun, the food supply, and the people were buried within the earth, and the forces of chaos reigned. But out of the chaos rose a new order established by Hun Ajaw, one of the hero twins known to all for having vanquished the lords of the underworld. A new race of humans was created, made from maize. This state of balance was not to last, however. Chak Ek’s fifth and final victim was a mysterious stranger from the west, and his death in the heart of the dry season shook the order established by Hun Ajaw. The gods, the lords, and the maize were buried in the underworld. But this victory for Chak Ek’ would also prove temporary. The two brothers, Venus and the Sun, were caught in an endless cycle as they battled for supremacy, re-enacting the same five struggles, while the world alternated between order and chaos with the rising of the Morning Star.
Myths_from_Around_the_World
The_epic_of_Gilgamesh_the_king_who_tried_to_conquer_death_Soraya_Field_Fiorio.txt
In 1849, in the ancient city of Nineveh in northern Iraq, archaeologists sifted through dusty remains, hoping to find records to prove that Bible stories were true. What they found instead was one of the oldest libraries in the world. Inscribed on crumbling clay tablets was a 4,000-year-old story so riveting the first person to translate it started stripping from excitement. Called the epic of Gilgamesh, the story starts with Gilgamesh, king of the city of Uruk, crashing every wedding and sleeping with the bride before she has a chance to sleep with her husband. To tame Gilgamesh, the goddess Aruru created a rival called Enkidu. Enkidu lived beyond the walls of the city, where chaos reigned and wild animals, invaders, and evil spirits prowled. After a priestess of the goddess Ishtar seduced Enkidu, the wild animals beyond the wall rejected him and he ventured into the city. There, he encountered Gilgamesh up to his usual tricks. Enkidu stepped in to stop him. Almost perfectly matched, the two men wrestled all through the city streets until Gilgamesh won the fight by a hair. Afterwards, they were inseparable. With his new friend, Gilgamesh turned his attention from the brides of Uruk to proving his strength in combat. They set out to slay Humbaba, a creature with a thousand faces who guarded the trees of the Forest of Cedar. They tracked Humbaba and ambushed him. Cornered, he begged for his life, then cursed them as Gilgamesh dealt the final blow. Back home in Uruk, the goddess Ishtar took a romantic interest in Gilgamesh. Knowing she tended to lose interest and curse her former flames, Gilgamesh refused her advances. So Ishtar unleashed the Bull of Heaven on Uruk to destroy crops and kill people. When Gilgamesh and Enkidu slayed the creature defending the city, the gods killed Enkidu. He entered the House of Dust, the shadowy Mesopotamian underworld where the spirits of the dead knelt eternally on the ground, eating dirt and drinking stone. 
Grieving for Enkidu and terrified of meeting this fate himself, Gilgamesh set off beyond the cosmic mountains to seek immortality. He passed scorpion people and groves of gemstone trees, travelled beneath the mountains and outran the rising sun, until he finally came to the end of the world, where he found a bar. The bartender was a goddess named Shiduri, who urged Gilgamesh to give up his quest. She told him all mortals must die, but until death comes, he should enjoy his life. But Gilgamesh refused to give up. Reluctantly, Shiduri gave him directions to cross the Waters of Death and meet the immortal man Utanapishti. The gods had granted Utanapishti immortality following a great flood, during which he built a boat, loaded two of every animal onto it, and landed on a mountain peak. Utanapishti also encouraged Gilgamesh to accept that death comes for everyone. But Gilgamesh still would not budge. So Utanapishti told him that if he could conquer sleep, the gods might grant him immortality. Gilgamesh intended to stay awake for seven days, but fell asleep immediately. Utanapishti then told him about a magical plant that grew at the bottom of the ocean and granted eternal youth. Though Gilgamesh successfully retrieved the plant, a snake stole it on his way home. But when Gilgamesh laid eyes on his beautiful city again, he made peace with his mortality and vowed to spend his lifetime doing great deeds. He wrote his story on a lapis lazuli tablet and buried it under the city walls for future generations to find and learn from. The tablets uncovered in Nineveh were part of the library of the Assyrian king, Ashurbanipal. Though the story is mythical, Gilgamesh was probably a real king of Uruk. Versions of his tale date to 2000 BCE and perhaps even longer ago, and still echo through literature today.
Myths_from_Around_the_World
Greek_mythologys_greatest_warrior_Iseult_Gillespie.txt
Achilles was a demigod destined for greatness. He was born to a sea nymph and a king. And like the legendary Heracles before him, he was trained by the centaur Chiron in hunting, music, and medicine. Meanwhile, his closest companion since boyhood was Patroclus, a mortal with no divine parentage or lofty prophecies tied to his name. Despite these differences, the two loved one another unconditionally. But when Greece declared war on Troy, Achilles was called upon as a crucial weapon. Helen, the wife of a Greek king, had vanished to Troy with Paris, a Trojan prince. An army of Greeks assembled, determined to retrieve her. And as war loomed, the gods themselves took sides and argued over the mortals’ fates. Achilles knew the war was written into his destiny. And with horses born from the west wind and a spear wrought from a mountain peak, he readied himself. But he wouldn’t be alone: Patroclus was by his side. They sailed to Troy along with 1,186 ships and surged into battle. The Trojans were led by the formidable Prince Hector, brother of Paris and son of King Priam. But they were no match for Achilles, who held the upper hand for the Greeks with his striking skill. Some said Achilles was invincible because his mother dipped him into the Styx; others said that she bathed him in ambrosia, the nectar of immortality. Despite his talent, the war wore on for nine years and internal conflicts crystallized. Early on, Achilles took a woman named Briseis captive. But the Greek army’s leader, King Agamemnon, had grown jealous of Achilles and seized Briseis for himself. Incensed, Achilles went on strike and the situation became dire without him. Patroclus witnessed the carnage firsthand. But still Achilles refused to fight. Panicked at the sight of the Trojans entering the Greek encampment, Patroclus urged Achilles to lend him his armor. The sight of Achilles alone, he argued, would drive the Trojans back. 
Achilles agreed— provided Patroclus avoid the gates of Troy, from which the god Apollo protected the city. Suiting Patroclus in the armor, Achilles prayed for his safe return. Leading a swarm of Greeks, the disguised Patroclus drove the Trojans away. And for a few precious moments he felt as untouchable as Achilles himself. He hurtled towards Troy— until Apollo struck him down, knocking away his armor. Hector seized the opportunity, claiming Patroclus’ life— and Achilles’ armor. Overcome by guilt and grief, Achilles vowed not to bury his beloved until he was avenged. He threw himself into battle, leaving a trail of bodies in his wake. Soon, all Trojans had fled or perished— all but Hector, clad in the armor that had failed to protect Patroclus. Their spears clashed, but Achilles knew the armor's weak spot. With a deadly strike, he took his revenge. And yet, his grief and fury weren’t satisfied. Achilles seized Hector’s body. Denying him burial was a heinous offense, but nothing felt sacred anymore. He dragged the body behind his chariot, jeering at the Trojans all the while. At night, the ghost of Patroclus appeared to Achilles, warning that his death was imminent and asking that their bones be laid to rest together. Achilles agreed and tried to embrace him, but the apparition disappeared. Meanwhile, Priam, the Trojan king, was also tormented by grief. He finally resolved to go to Achilles and ask him for mercy. He kissed the hands that killed his son and offered payment for Hector's body. Together, they wept and shared a meal. Achilles returned Hector’s corpse, praying to Patroclus for forgiveness. And with little left to lose, he returned to battle, defeating even the most skilled warriors. But, just as Patroclus had predicted, Achilles wouldn’t live long. Paris struck his heel with an arrow that some say was guided by Apollo. The remains of Achilles and Patroclus were mingled for eternity. And the Greeks went on to win the war. 
But, in the course of battle, each side lost some of the greatest heroes of their time— their zeal turning into heartbreak, even as their stories hardened into legend.
Myths_from_Around_the_World
The_myth_of_the_moon_goddess_Cynthia_Fay_Davis.txt
The moon goddess, Ix Chel, patiently watched a spider at work. She could make use of its skills, she thought. Through careful observation and imitation, she soon became a skilled weaver. The sun god, Kinich Ahau, was impressed with her work, and admired her from afar. But the goddess’ grandfather was very possessive and would not let the sun god anywhere near his beloved granddaughter. To get past the grandfather, the sun god disguised himself as a hummingbird. As he took a drink of tobacco flower honey, the moon goddess spotted him and asked her grandfather to capture the bird for her. The grandfather shot the disguised sun god with a blow dart, stunning him. Ix Chel nursed the wounded bird back to health, and soon, he was able to spread his wings and fly again. He transformed back into the sun god and invited the moon goddess to escape with him. The two rowed away in a canoe, but the grandfather called upon the powerful storm god to help him stop them. Sensing danger, the moon goddess jumped from the canoe into the water below and transformed into a crab. But the storm god had already thrown a lightning bolt, which hit the crab and pierced her through her heart, killing her. Hundreds and hundreds of dragonflies gathered, buzzing songs and fluttering their transparent wings. They formed a thick, magical cloud over the moon goddess’ body. For thirteen days the dragonflies cut, cleaned, and hollowed thirteen logs. On the thirteenth night, the logs burst open, and the moon goddess emerged— alive and more brilliant than ever. The sun god wasted no time in proposing marriage. The moon goddess happily agreed. Side-by-side, they were ready to light up the sky with their powerful rays. Unfortunately, the story doesn’t end there. The sun god’s brother visited often. Sensing he was also in love with Ix Chel, the sun god grew jealous, and began to mistreat her. One day, Ix Chel was sitting on the riverbank, furious at her husband. 
A huge bird came gliding down and offered to take her to the high mountain peaks. To get away from the cruel sun god, she agreed. There, she met the king of the vultures. The vulture king was kind and fun-loving— a much better partner than the violent sun god. The moon goddess made a new home with him in the mountains. When the sun god heard, he was distraught. He hid inside a deer carcass until a hungry vulture came swooping down, then hopped onto its back and rode to the mountain kingdom where the moon goddess now lived with the vulture king. He begged her to come home with him, apologizing for how he had treated her. The kind and forgiving goddess took pity on him and agreed to go back. But Kinich Ahau soon began to show his true nature again. He struck her, scarring her face and dimming her bright rays. Ix Chel flew off into the dark. From then on, she vowed to appear only at night. She befriended the stars, and combined her pale blue rays with their light to guide night travelers to safety. She used her healing gift, which she had once used on the wounded sun god, to cure people who were ill. Today, Ix Chel is so widely known that she’s become a symbol of Maya culture. But archeological evidence suggests that for the ancient Maya Ix Chel and the moon goddess were separate deities. In the retellings of Maya people and the records of anthropologists, the two have merged so that Ix Chel’s story extends beyond the limits of the historical record. Her story, like all myths, isn’t just one story: the variations, ancient and modern, speak to what people value, and how they see themselves in their mythological heroes.
Myths_from_Around_the_World
The_myth_of_Oisín_and_the_land_of_eternal_youth_Iseult_Gillespie.txt
In a typical hero's journey, the protagonist sets out on an adventure, undergoes great change, and returns in triumph to their point of origin. But in the Irish genre of myth known as Eachtraí, the journey to the other world ends in a point of no return. While there are many different versions of the otherworld in Irish mythology, the most well-known example occurs in the story of Oisín. Oisín was the son of Fionn mac Cumhaill, the leader of a group of pagan warriors known as the Fianna. As Oisín rode with his companions one day, he was visited by the immortal princess Niamh. The two fell instantly in love and Niamh put Oisín onto her white horse and rode with him to the edge of the Irish sea. As they made for the horizon, the riders sank into a golden haze. They came to the shores of the gleaming kingdom called Tír na nÓg. This was the home of the Tuatha Dé Danann, the people who ruled Ancient Ireland long before Oisín's time. From the point of his arrival, Oisín's every need was met. He married Niamh in a grand ceremony and was welcomed into her family. When he wished to hear music, his ears filled with bewitching tones. When he hungered, golden plates appeared laden with fragrant food. He admired scenes of great beauty, and colors that he had no name for. All around him, the land and the people existed in a state of unmoving perfection. But what Oisín didn't know was that Tír na nÓg was the land of youth, in which time stood still and the people never aged. In his new home, Oisín continued to hunt and explore as he had in Ireland. But in the land of youth, he possessed a strange, new invincibility. At the end of each day of adventuring, Oisín's wounds magically healed themselves as he slept in Niamh's arms. Although glory and pleasure came easily to Oisín in the land of youth, he missed the Fianna and the adventures they had in Ireland. After three years in Tír na nÓg, he was struck by a deep yearning for home. 
Before he embarked on his journey back, Niamh warned him that he must not alight from his horse to touch the earth with his own feet. When Oisín reached the shores of Ireland, it felt as if a shadow had fallen over the world. On the hill where his father's palace lay, he saw only a ruin strewn with weeds. His calls for his friends and family echoed from derelict walls. Horrified, Oisín rode until he came upon a group of peasants working in the fields. They were struggling to remove a boulder from their land, and forgetting Niamh's warning, Oisín leapt from his horse and rolled it away with his superhuman strength. The crowd's cheers soon turned into shrieks. In place of the youth was an old man whose beard swept the ground and whose legs buckled under him. He cried out for Fionn and the Fianna, but the people only recognized these names from the distant past of 300 years before. Time had betrayed Oisín and his return to mortal lands had aged him irreversibly. Throughout Irish folklore, sightings of the land of youth have been reported in the depths of wells, on the brink of the horizon, or in the gloom of caves. But those who know the tale of Oisín tell of another vision, that of a shining princess carried upon the distant waves by a white horse, still hoping for the return of her doomed love.
Myths_from_Around_the_World
The_Greek_myth_of_Talos_the_first_robot_Adrienne_Mayor.txt
Hephaestus, god of technology, was hard at work on his most ingenious invention yet. He was creating a new defense system for King Minos, who wanted fewer intruders on his island kingdom of Crete. But mortal guards and ordinary weapons wouldn’t suffice, so the visionary god devised an indomitable new defender. In the fires of his forge, Hephaestus cast his invention in the shape of a giant man. Made of gleaming bronze; endowed with superhuman strength, and powered by ichor, the life fluid of the gods, this automaton was unlike anything Hephaestus had forged before. The god named his creation Talos: the first robot. Three times a day, the bronze guardian marched around the island's perimeter searching for interlopers. When he identified ships approaching the coast, he hurled massive boulders into their path. If any survivors made it ashore, he would heat his metal body red-hot and crush victims to his chest. Talos was intended to fulfill his duties day after day, with no variation. But despite his robotic behavior, he possessed an internal life his victims could scarcely imagine. And soon, the behemoth would encounter a ship of invaders that would test his mettle. The bedraggled crew of Jason, Medea, and the Argonauts were returning from their hard-won quest to retrieve the Golden Fleece. Their adventure had taken many dark turns, and the weary sailors were desperate to rest in a safe harbor. They’d heard tales of Crete’s invulnerable bronze colossus, and made for a sheltered cove. But before they could even drop anchor, Talos spotted them. While the Argonauts cowered at the approach of the awesome automaton, the sorceress Medea spotted a glinting bolt on the robot’s ankle— and devised a clever gambit. Medea offered Talos a bargain: she claimed that she could make Talos immortal in exchange for removing the bolt. Medea's promise resonated deep within his core. Unaware of his own mechanical nature, and human enough to long for eternal life, Talos agreed. 
While Medea muttered incantations, Jason removed the bolt. As Medea suspected, the bolt was a weak point in Hephaestus’ design. The ichor flowed out like molten lead, draining Talos of his power source. The robot collapsed with a thunderous crash, and the Argonauts were free to travel home. This story, first recorded in roughly 700 BCE, raises some familiar anxieties about artificial intelligence— and even provides an ancient blueprint for science fiction. But according to historians, ancient robots were more than just myths. By the 4th century BCE, Greek engineers began making actual automatons including robotic servants and flying models of birds. None of these creations were as famous as Talos, who appeared on Greek coins, vase paintings, public frescoes, and in theatrical performances. Even 2,500 years ago, Greeks had already begun to investigate the uncertain line between human and machine. And like many modern myths about artificial intelligence, Talos’ tale is as much about his robotic heart as it is about his robotic brain. Illustrating the demise of Talos on a vase of the fifth century BCE, one painter captured the dying automaton’s despair with a tear rolling down his bronze cheek.
Myths_from_Around_the_World
How_Sun_Wukong_escaped_the_underworld_Shunan_Teng.txt
In the depths of their underwater kingdom, the mighty Dragon Lords quaked with fear. Before them pranced Sun Wukong, the Monkey King. The legendary troublemaker had been hatched from a stone, schooled in divine magic, and was currently brandishing the Dragon Lords’ most treasured weapon. This magical staff, originally large enough to measure the depth of a great flood, now obeyed the Monkey King’s will and shrank at his touch. Terrified of this bewildering power, the Dragons graciously allowed Sun Wukong to keep the staff. The Monkey King stowed the weapon away, and gleefully sped back to his kingdom to show this treasure to his tribe of warrior monkeys. After a lavish celebration, Sun Wukong fell into a deep sleep. But just as he began to dream, the Monkey King quickly realized two things. The first was that this was no ordinary slumber. The second was that he wasn’t alone. Suddenly, he found himself caught in the clutches of two grisly figures. At first the Monkey King didn’t know who his captors were. But as they dragged him toward their city’s gates, Sun Wukong realized his deathly predicament. These were soul collectors tasked with transporting mortals to the Realm of the Dead. This was the domain of the Death Lords, who mercilessly sorted souls and designed gruesome punishments. From here, the kingdom of death was laid out before him. He could see the Death Lords’ palaces, and the fabled bridge across the river Nai He. Manning the bridge was an old woman who offered worthy souls a bowl of soup. After drinking, the spirits forgot their previous life, and were sent back to the world of the living in a new form. Further below were the souls not worthy of reincarnation. In this twisting maze of chambers, unfortunate spirits endured endless rooms of punishment— from mountains spiked with sharp blades, to pools of blood and vats of boiling oil. But Sun Wukong was not about to accept torture or reincarnation. 
As the soul collectors attempted to drag him through the gates, the Monkey King whipped out his staff and swung himself out of their clutches. His battle cries and the clang of weapons echoed throughout the underworld. Sensing a disturbance, the ten Death Lords swooped upon him. But they had never met such resistance from a mortal soul. What was this unusual creature? And was he a mortal, a god— or something else? The Lords consulted the Book of Death and Life— a tome which showed the time of every living soul’s death. Not knowing what category this strange being was under, the Death Lords struggled to find Sun Wukong at first; but the Monkey King knew just where to look. Unfortunately, the records confirmed the Death Lords’ claim— Sun Wukong was scheduled to die this very night. But the Monkey King was not afraid. This was far from the first time he’d defied destiny in his quest for wisdom and power. His past rebellions had earned him the power to transfigure his body, ride clouds at dizzying speeds, and govern his tribe with magic and martial arts. In this crisis, he saw yet another opportunity. With a flash of his nimble fingers, the Monkey King struck his own name from the Book. Before the Death Lords could respond, he found the names of his monkey tribe and swept them away as well. Liberated from the bonds of death, Sun Wukong began to battle his way out of the underworld. He deftly defeated endless swarms of angry spirits— before tripping on his way out of the kingdom. Just before he hit the ground, Sun Wukong suddenly awoke in his bed. At first he thought the journey might have been a dream, but the Monkey King felt his new immortality surging from the top of his head to the tip of his tail. With a cry of triumph, he woke his warriors to share his latest adventure— and commence another round of celebration.
Myths_from_Around_the_World
The_tale_of_the_boy_who_tricked_the_Devil_Iseult_Gillespie.txt
In the sun-dappled streets of a small town, a proud mother showed off her newborn son. Upon noticing his lucky birthmark, townsfolk predicted he would marry a princess. But soon, these rumors reached the ears of the wicked king. Enraged, the king stole the child away, and sent him hurtling down the river. But the infant’s luck proved greater than the king’s plan. Years later, the king was traveling his realm, when he spotted a strapping young man with an uncanny birthmark. After confirming the child’s origins, the sly king entrusted the boy with a letter for the queen. The youth eagerly set out to deliver the message— not knowing he was carrying his own death sentence. That night, roaming bandits stumbled upon his camp. Yet when they read the brutal letter, they were filled with pity. Deciding to make trouble for the king instead, they scribbled a new note. As soon as the youth arrived at the palace, he locked eyes with the princess. The two felt destined for each other. And when the queen read that the king approved this union, she joyfully organized a whirlwind wedding. When the king returned, he was furious. But he couldn’t execute his daughter’s beloved without reason. So he devised a diabolical trial. He ordered the youth to travel to Hell itself, and return with three golden hairs freshly plucked from the Devil’s head. Only upon succeeding could he return to his bride. The youth searched across the land for the entrance to Hell, until he finally reached an eerie village. Here, he saw some villagers gathered around a well. They closed in on the youth, refusing to let him pass until he answered their question: why was the well dry? The youth replied, “I will answer when I return.” They directed him further into town, where he came across another set of villagers contemplating a gnarled tree. They refused to let him pass until he answered their question: why was the tree barren? Again, the youth responded, “I will answer when I return." 
These villagers guided him to the dock, where an elderly ferryman awaited. As he paddled through the black water, the ferryman rasped a third question: how can I escape my interminable task? Once more, the youth promised, “I will answer when I return.” At last, they reached a hut sinking into the swampy banks of Hell. Reluctantly, the youth knocked on the rotting door. The devil’s grandmother answered his call. She was known to help some visiting souls, and harm others. The youth had just finished his story when they heard the devil’s footsteps. Without warning, the boy’s world appeared to shrink. The devil’s grandmother lifted him into the folds of her sleeve, and welcomed her grandson. The old woman set to work, lavishing the devil with food and drink. When he fell asleep, she deftly plucked three gleaming hairs from his head. With each plucked hair, the Devil briefly awoke and complained about his dreams, full of nearby villagers and their problems. The next morning, the youth departed— armed with three golden hairs, and three pieces of information. He shared the devil’s first dream with the ferryman. If the boatman could hand his oars to a willing passenger, he would be free from his task. Back at the village, the youth declared that there was a mouse gnawing at the root of the tree, and an enormous toad blocking the well. The villagers rewarded him handsomely for his help. Back from his journey, the youth thrust the devil’s hairs at the king— but his greedy father-in-law only had eyes for the gold. The sly youth told the king that even greater wealth awaited him across the river. Immediately, the king hastened to the riverbank. Eager to claim his riches, he held out his hands impatiently to the grinning ferryman— who happily handed over his oars.
Myths_from_Around_the_World
The_Irish_myth_of_the_Giants_Causeway_Iseult_Gillespie.txt
On the coast of Northern Ireland, a vast plateau of basalt slabs and columns called the Giant’s Causeway stretches into the ocean. The scientific explanation for this is that it’s the result of molten lava contracting and fracturing as it cooled in the wake of a volcanic eruption. But an ancient Irish myth has a different account. According to legend, the giant Finn MacCool lived happily on the North Antrim coast with his wife Oonagh. Their only disturbance came from the taunts and threats of the giant Benandonner, or the red man, who lived across the sea in Scotland. The two roared insults and hurled rocks at each other in dramatic shows of strength. Once, Finn tore up a great clump of land and heaved it at his rival, but it fell short of reaching land. Instead, the clump became the Isle of Man, and the crater left from the disturbed earth filled with water to become Lough Neagh. The giants’ tough talk continued, until one day Benandonner challenged Finn to a fight, face to face. And so the Irish giant tossed enough boulders into the sea to create a bridge of stepping stones to the Scottish coast. Finn marched across in a fit of rage. When Scotland loomed before him, he made out the figure of Benandonner from afar. Finn was a substantial size, but at the sight of his colossal enemy thundering towards him, his courage faltered. With one look at Benandonner’s thick neck and crushing fists, Finn turned and ran. Back home, with Benandonner fast approaching, Finn trembled as he described his enemy’s bulk to Oonagh. They knew that if he faced Benandonner head on, he’d be crushed. And so Oonagh hatched a cunning plan - they needed to create an illusion of size, to suggest Finn was a mountain of a man whilst keeping him out of sight. As Benandonner neared the end of the bridge, Oonagh stuffed her husband in a huge cradle. Disguised as an enormous baby, Finn lay quiet as Benandonner pounded on the door. The house shook as he stepped inside. 
Oonagh told the enraged visitor that her husband wasn’t home, but welcomed him to sit and eat while he waited. When Benandonner tore into the cakes placed before him, he cried out in pain for he’d shattered his teeth on the metal Oonagh had concealed inside. She told him that this was Finn’s favorite bread, sowing a seed of doubt in Benandonner’s mind that he was any match for his rival. When Finn let out a squawk, Benandonner’s attention was drawn to the gigantic baby in the corner. So hefty was the infant swaddled under piles of blankets, Benandonner shuddered at the thought of what the father would look like. He decided he’d rather not find out. As he fled, Benandonner tore up the rocks connecting the shores, breaking up the causeway. What remains are two identical rock formations: one on the North Antrim coast of Ireland and one at Fingal’s Cave in Scotland, right across the sea.
Myths_from_Around_the_World
The_Greek_myth_of_Demeters_revenge_Iseult_Gillespie.txt
Mestra, Princess of Thessaly, was far from home. She had watched her father, King Erysichthon, plunge into a ruin of his own making. Now, to save himself, he sold his own daughter to the highest bidder. But Mestra refused to accept this fate. Finding herself momentarily alone, she began to plan her escape. Months earlier, Erysichthon had decided to build himself a gleaming new hall, declaring that only the finest wood would suffice. The king was well known for spurning the gods, as he was more interested in honoring himself. But in an unprecedented act of disrespect, he marched his men into the sacred grove of Demeter, goddess of food and agriculture. Ignoring the prayer offerings that hung from the trees, Erysichthon headed straight for the most magnificent oak. As he swung his axe, the tree trembled and turned pale. Blood gushed from the wound, and a strangled cry rang out. It was the voice of one of Demeter’s wood nymphs who resided in the tree. With her last breaths, she called out to her patron for revenge. Erysichthon, though, was unfazed. He decimated the rest of the forest and dragged the wood back to his palace. Upon learning of the loss and destruction, Demeter quaked the earth with her anger. Swiftly, she ordered a mountain nymph to go and enlist the help of another fearsome goddess. In a dragon-drawn chariot, the mountain nymph soared over barren lands and icy seas. At last, she reached the remote lair of Hunger, goddess of famine. She found her picking through weeds with her rotten nails and teeth, clutching her hollow stomach and twisting her knotted limbs. Not daring to come too close, the nymph called for Hunger and shared Demeter’s vengeful plan. Hunger usually kept to her lair— but she relished this gruesome mission. Under the cover of night, she crept into the palace and released her famished breath into the sleeping king. Erysichthon immediately began to dream of a lavish feast, gulping air and grinding his teeth.
He awoke to a ravenous hunger, which only seemed to increase as he ate. As Mestra looked on in horror, her father devoured all the food in the palace, before calling for the city’s crops and goods. But no matter how many feasts he devoured, he felt empty and weak. Before long, Erysichthon had sold his entire estate for food— with only Mestra left by his side. But not even his loyal daughter could escape the depths of his greed, and he shamelessly sold her into slavery. As she set sail with her captor, Mestra stared at the sea. This wasn’t the first time she’d suffered at the hands of men— years before, she’d been violently pursued and assaulted by the god Poseidon. Now, she demanded his help. As an act of repentance, Poseidon granted her the power to change her shape at will. With this, Mestra immediately transformed into a fisherman. And distracting her captor with a bounty of fish, she escaped. For the first time, Mestra was in control, able to freely adapt and slip away from any situation. But she felt compelled to return to her tortured father. However, when Erysichthon discovered Mestra’s new powers, he only saw an opportunity for himself. He exploited his talented daughter, selling her again and again for food. Each time, she gracefully transformed herself— morphing into a swift-footed mare, a soaring bird, or an elusive deer to steal more meals while evading capture. But as her father continued to sell her at higher and higher prices, Mestra was left with little hope. One day, when arriving home in one of her many forms, Mestra entered the hollow palace only to discover the king’s lifeless body— Erysichthon’s hunger had grown so great that he had consumed his own limbs. Gazing upon her wasted father, Mestra’s hope returned. She was no longer unfairly burdened with the wrath of the gods that the king had courted. Untethered from her father’s selfish agenda and buoyed by her ability to transform herself at will, Mestra was finally free.
Myths_from_Around_the_World
The_Chinese_myth_of_the_forbidden_lovers_Shannon_Zhao.txt
In the celestial court of the Jade Emperor lived seven princesses. Each had their chosen place in court, but the youngest princess had a special skill. She could pluck clouds from the sky and spin them into the softest robes. Her work was so precise, not even the most expert eye could find a seam. But her craft was the same day after day, and she longed for new inspiration. Finally, the Queen Mother granted the weaver permission to visit Earth. The other princesses would accompany her to protect their sister from earthly dangers. Dressed in special robes that allowed them to fly between Heaven and Earth, the sisters soared down from the sky. The weaver was in awe of the rolling hills and rivers, and the sisters decided to swim in one of the glittering streams. As the weaver floated, she dreamt about staying forever. Meanwhile, a lone cowherd approached the riverbank. He came here often to sweep his parents’ grave and speak with his only companion— a stoic bull who listened patiently to the cowherd’s sorrows. But upon seeing the weaver’s beauty, the cowherd forgot his routine. While he longed to introduce himself, his lonely lifestyle had made him timid. Thankfully, the bull saw his friend's plight and offered some advice. He told the cowherd of the swimmer’s celestial origins and of her dream to stay on Earth; but also that she could only remain if she lost her ticket back to Heaven. As the cowherd approached, the princesses flew away in fear— leaving their dreaming sister behind. While keeping her magic robes hidden, the cowherd offered his own garment as a substitute. And after gaining her trust, the pair began exploring the countryside. She was struck by his caring nature, and he learned to see the world’s wonder through her eyes. Before long, the two had fallen deeply in love. The weaver and the cowherd built a prosperous life. Their farm flourished, and the weaver taught her skills to local villagers.
As time marched on, the pair was blessed with two healthy children, but their bull was growing old. Before he died, the bull implored the family to keep his hide and use its magic at their time of need. While the husband grieved for his friend, the weaver’s mind turned to her other family. Dusting off her magical robe, she decided to pay a visit to the heavens. But when the weaver swept into her old home, no one seemed surprised to see her. With a start, she realized that barely any time had passed— for a year on Earth was merely a day in Heaven. When her family learned of her new life, they were enraged. How dare she waste her love on a human? The weaver tried to escape back to Earth, but the Queen Mother plucked a golden hairpin from her head and tore through the sky. A great gulf opened, forming a river of stars between Heaven and Earth. Below, the cowherd trembled, but he also remembered the bull’s final words. Hastily placing each child in a basket, he draped the bull’s pelt over his back and hurtled upwards. Above the clouds, each lover attempted to wade through the surging stars. But no matter how hard they struggled, the gulf between them only grew wider. Day after day, the Queen Mother watched without pity. Years passed, and the weaver and the cowherd had no one, except the passing magpies to cheer them on. Finally, their love moved the Queen Mother’s heart. While she couldn’t forgive her granddaughter entirely, the Queen Mother would allow the weaver to meet her earthly family once a year. And so, in late summer, the magpies form a bridge across the Milky Way, reuniting the weaver and the cowherd. At this time of year, millions of people in East and Southeast Asian countries tell similar tales of these star-crossed lovers, celebrating their annual reunion.
Myths_from_Around_the_World
The_Japanese_folktale_of_the_selfish_scholar_Iseult_Gillespie.txt
In ancient Kyoto, a devout Shinto scholar lived a simple life, but he was often distracted from his prayers by the bustling city. He felt that his neighbors were polluting his soul, and he sought to perform some kind of personal harae— a purification ritual that would cleanse his body and his mind. He decided to travel to the revered Hie Shrine. The trip was an arduous climb that took all day. But he was glad for the solitude it afforded him, and the peace he felt upon returning home was profound. The scholar was determined to maintain this clarity for as long as possible, and resolved to make this pilgrimage another 99 times. He would walk the path alone, ignoring any distractions in his quest for balance, and never straying from his purpose. The man was true to his word, and as days stretched into weeks, he walked through driving rain and searing sun. Over time, his devotion revealed the invisible world of spirits which exists alongside our own. He began to sense the kami, which animated the rocks underfoot, the breeze that cooled him, and the animals grazing in the fields. Still he spoke to no one, spirit or human. He was determined to avoid contact with those who had strayed from the path and become polluted with kegare. This taboo of defilement hung over the sick and deceased, as well as those who defiled the land or committed violent crimes. Of all of the threats to the scholar’s quest for spiritual purity, kegare was by far the greatest. After paying his respects for the 80th time, he set out for home once more. But as darkness fell, he heard strained sobs in the night air. The scholar tried to push forward and ignore the moans. But the desperate cries overwhelmed him. Grimacing, he left his path to follow the sound to its source. He soon came to a cramped cottage, with a woman crumpled outside. Filled with pity, the scholar implored the woman to share her sorrow. She explained that her mother had just died— but no one would help her with the burial. 
At that news, his heart sank. Touching the body would defile his spirit, draining his life force and leaving him forsaken by the kami. But as he listened to her cries, his sympathy soared. And so, they buried the old woman together, to ensure her safe passage into the spirit world. The burial was complete, but the taboo of death weighed heavily on the scholar. How could he have been so foolish, to shirk his most important rule and corrupt his divine journey? After a tormented night, he resolved to go back to the shrine to cleanse himself. To his surprise, the usually quiet temple was filled with people, all gathering around a medium who communicated directly with the kami. The man hid himself, not daring to approach in case anyone glimpsed his polluted soul. But the medium had other ways of seeing, and called him forward from the crowd. Ready to be forsaken, the scholar approached the holy woman. But the medium merely smiled. She took his impure hand in hers, and whispered a blessing only he could hear— thanking him for his kindness. In that moment, the scholar discovered a great spiritual secret: contamination and corruption are two very different things. Filled with insight, the scholar set himself back on his journey. But this time, he stopped to help those he met. He began to see the beauty of the spirit world everywhere he went, even in the city he'd previously shunned. Others cautioned that he risked kegare— but he never told them why he so freely mingled with the sick and disadvantaged. For he knew that people could only truly understand harae through a journey of their own.
Myths_from_Around_the_World
The_myth_of_the_Sampo_an_infinite_source_of_fortune_and_greed_HannaIlona_Härmävaara.txt
After a savage seafaring skirmish and eight long days of being battered by waves, Väinämöinen— a powerful bard and sage as old as the world itself— washed up on the shores of distant Pohjola. Unlike his home Kalevala, Pohjola was a dark and frozen land, ruled by Louhi, “the gap-tooth hag of the North." The cunning witch nursed Väinämöinen back to health but demanded a reward for returning him home. Not content with mere gold or silver, Louhi wanted what did not yet exist— the Sampo. To be forged from “the tips of white-swan feathers," “the milk of greatest virtue," “a single grain of barley," and “the finest wool of lambskins," this artifact was said to be an endless font of wealth. But Väinämöinen knew that only Seppo Ilmarinen, the Eternal Hammerer who forged the sky-dome itself, could craft such an object. So he convinced Louhi to send him home and fetch the smith. Though the journey was far from easy, the bard finally made it back to Kalevala. But Ilmarinen refused to go to the gloomy North— a land of witches and man-eaters. Keeping true to his word, though, Väinämöinen tricked Ilmarinen into climbing a giant tree, before summoning a mighty storm to carry the smith all the way to Pohjola. Ilmarinen was well received in the North. Louhi lavished her guest with extravagant hospitality and promised him the hand of her beautiful daughter— if he could craft what she wished. When she finally asked if Ilmarinen was capable of forging the Sampo, the powerful smith declared he could indeed accomplish the task. But try as he might to bend the forge to his will, its fires only produced other artifacts— beautiful in appearance but ill-mannered in nature. An elegant crossbow that thirsted for blood and a gleaming plow that ruined cultivated fields among others. Finally, Ilmarinen summoned the winds themselves to work the bellows, and in three days’ time he pulled the Sampo, with its lid of many colors, from the forge’s flames.
On its sides the smith carefully crafted a grain mill, a salt mill, and a money mill. Louhi was so delighted with the object’s limitless productive power that she ran off to lock her treasure inside a mountain. But when Ilmarinen tried to claim his prize, the promised maiden refused to marry him, and the smith had to return home alone. Years passed, and while Pohjola prospered, Ilmarinen and Väinämöinen were without wives or great wealth. Bitter about this injustice, the bard proposed a quest to retrieve the Sampo, and the two sailed north with the help of Lemminkäinen— a beautiful young man with a history of starting trouble. Upon arrival, Väinämöinen requested half the Sampo’s profits as compensation— or they’d take the artifact by force. Outraged at this request, Louhi summoned her forces to fight the heroes. But as her army readied for war, the bard played his magic harp, Kantele, entrancing all who heard it and sending Pohjola into a deep slumber. Unimpeded, the three men took the Sampo and quietly made their escape. Lemminkäinen was ecstatic at their success, and demanded that Väinämöinen sing of their triumph. The bard refused, knowing the dangers of celebrating too early. But after three days of traveling, Lemminkäinen’s excitement overwhelmed him, and he recklessly broke out in song. His awful singing voice woke a nearby crane, whose screeching cries roused the Pohjolan horde. The army gave chase. As their warship closed in, Väinämöinen raised a rock to breach their hull. Undeterred, Louhi transformed into a giant eagle, carrying her army on her back as they attacked the heroes’ vessel. She managed to grab the Sampo in her claw, but just as quickly, it dropped into the sea, shattering into pieces and sinking deep beyond her talon’s reach. Buried on the ocean floor, the remnants of this powerful device remain in the realm of Ahti, god of water— where they grind salt for the seas to this very day.
Myths_from_Around_the_World
What_really_happened_to_Oedipus_Stephen_Esposito.txt
Though Oedipus would dodge death, vanquish the monstrous Sphinx, and weather wrathful plagues, the truth would prove his greatest challenger. When Oedipus’ mother, Queen Jocasta of Thebes, gave birth to him, a grim air seized the occasion. Her husband, King Laius, had received a prophecy from Apollo’s oracle foretelling that he would die at the hands of his own son. Determined to escape this fate, Laius had the newborn’s ankles pierced, and Jocasta ordered a shepherd to abandon him on Mount Cithaeron to perish. But divine prophecies can be quite stubborn. The shepherd took pity on the baby and gave him to another shepherd— this one from Corinth. He decided to bring the baby to the childless Corinthian king and queen, Polybus and Merope. They called the boy Oedipus, or “swollen-foot,” and raised him as their own, never revealing his true origin. Years passed, till one night, a drunken reveler told Oedipus that he was not Polybus and Merope’s son by birth— an allegation they staunchly denied. But the seeds of doubt burrowed into Oedipus’ mind. He left to seek counsel from Apollo’s oracle at Delphi, who instead delivered a deeply disturbing prophecy: Oedipus would murder his father and have children with his mother. Horrified, Oedipus determined to stay far from Corinth and the only parents he’d ever known. He ventured towards Thebes— and thus, unwittingly, towards the city where his birth parents reigned. At a crossroads on the way, a fancy carriage threatened to run Oedipus off the road, and a lethal fight ensued. Little did Oedipus know, one of the casualties was King Laius of Thebes, his own birth father. In killing him, Oedipus had fulfilled the first half of Apollo’s prophecy. When Oedipus reached the gates of Thebes, he was met by the treacherous Sphinx. She’d ravaged the city, posing a bewildering riddle to those she encountered and mercilessly devouring all who answered incorrectly.
But when she fixed her keen, expectant gaze on Oedipus, he gave the correct response. Thebes celebrated the Sphinx's defeat, and Oedipus married the city's recently widowed queen, Jocasta. They had four children, neither realizing they were, in fact, mother and son— or that they’d completed the second half of Apollo’s prophecy. Eventually, a devastating plague descended on Thebes. To save the city, Oedipus sent his brother-in-law to consult Apollo’s oracle. She declared that the divine plague would only relent if the killer of Thebes’ previous king, Laius, was finally revealed, then driven out or avenged with blood. Oedipus hastily opened an investigation. He interrogated Tiresias, a blind prophet, who stayed silent before suggesting that Oedipus himself was the killer. Oedipus denied and deflected the accusation. But it stuck with him. Jocasta likewise insisted that Laius’ killer couldn’t have been Oedipus, for she'd heard that Laius was killed at a crossroads by robbers. Yet, through conversations with a messenger from Corinth and, finally, the shepherd who’d rescued him as an infant, the truth came bearing down upon Oedipus. In searching for Laius’ murderer, he’d been looking for himself, and Apollo’s prophecy had come to pass, in all its dreadful detail. Full of fury, resentment, and shame, Oedipus rushed to kill Jocasta— but she too had realized the truth and taken her own life. Using brooches from her dress, Oedipus blinded himself in anguish, expunging his deceitful sense of sight, which had kept him from truly seeing so much. Oedipus begged for exile, but was led back into the castle to await word from Apollo’s oracle. Thus ends Sophocles’ first play centering Oedipus. But it wouldn’t be his final word on the tragic hero. Decades later, a roughly 89-year-old Sophocles wrote its sequel, set in Colonus, his own birthplace. It finds Oedipus, now aged and exiled, confronted with accusations of incest and patricide. 
Oedipus, having accepted the truth and released himself from its shame, proclaims his innocence and maintains that he committed these deeds unwittingly— and unwillingly. Finally, Oedipus knows it’s time to go— and a divine voice urges him on. Having said his loving farewells, Oedipus then transcends— peacefully and marvelously— into death.
Myths_from_Around_the_World
The_twins_who_tricked_the_Maya_gods_of_death_Ilan_Stavans.txt
Day after day, the twin brothers, Jun and Wuqub, ran back and forth playing ball. One day, their vigorous game disturbed the lords of the underworld, who challenged the twins to a match. But when the brothers arrived, the lords trapped and killed them, hanging Jun’s head from a tree as a trophy. The tree soon sprouted massive fruit, which caught the attention of one of the lords’ daughters. When she reached for it, the skull of Jun spat on her hand and impregnated her. Fleeing her father’s wrath, she sought refuge with her mother-in-law and gave birth to twin sons: Junajpu and Ixb’alanke. The second generation of twins discovered their father’s ballgame equipment, which their grandmother had hidden, and began to play. Soon enough, the messengers from the underworld arrived to issue another challenge. Knowing what had happened to their fathers, the twins nevertheless answered the call, trekking through deep caverns and across rivers filled with scorpions, blood, and pus, until they reached the great city from which the lords of the underworld controlled every aspect of nature and caused suffering for humans. The twins pressed on, searching for the lords who had challenged them. The lords had hidden among statues of themselves to confuse their guests, but the brothers sent a mosquito ahead of them. When it stung the figures, the lords cried out, revealing themselves. They forced the twins to spend the night in the House of Darkness. They gave them a torch, but warned they must return it unburnt or face death. As the darkness closed in, the quick-thinking brothers adorned the torch with red macaw feathers and fireflies. Come morning, the lords were shocked to see the torch lit, but unburnt. They insisted on playing the game with their own ball. The twins agreed, only to find that the lords had hidden a weapon inside the ball, which chased them around the court, trying to kill them.
The twins survived that first round, but by now they were sure this would be no ordinary match. They played many more rounds, and each time the twins scored no better than a tie, leaving them to face whatever supernatural trial the lords set for them before picking up the game again. They survived the House of Cold by lighting a fire, and the House of Jaguars by feeding bones to the beasts. But in the House of Bats, a bat bit off Junajpu’s head. Certain they now had the advantage, the lords called for another round of the ballgame, hanging Junajpu’s head over the court. The quick-thinking Ixb’alanke called the animals to him. A turtle brought him a chilacayote squash, and he carved it into the likeness of a head. While the lords chased the ball, he swapped it with the head. With Junajpu’s head back on his body, the twins played harder than ever. Finally, they won the game, hitting the hanging squash so it shattered on the ground. The twins knew their treacherous hosts would not take the loss well. To protect themselves, they enlisted a pair of seers. Sure enough, the lords burned the brothers in an oven, but the seers made sure their remains were thrown in the river, which restored them to life. The brothers then came before the lords disguised as two disheveled children and began to dance and perform miracles. For their final trick, Ixb’alanke pretended to kill Junajpu, then resurrect him. The lords were so delighted that they demanded the same trick be performed on them. Still in disguise, the brothers were only too happy to oblige, and began killing the lords one by one. As the surviving lords realized who stood before them and that no resurrection was forthcoming, they begged for mercy, and the twins spoke their curse. Henceforth, the lords would have no sacrifices and no power over the surface world. Their days of terrorizing humans were over.
Myths_from_Around_the_World
The_tragedy_of_the_one_guy_who_was_right_about_the_Trojan_Horse_Noah_Charney.txt
For ten grueling years, the Greeks laid siege to Troy, scattering ships and encampments across the city's shores. But as the Trojans awoke for another day of battle, they found their enemies had vanished overnight— leaving behind only an enormous wooden horse. Seeing this as a symbol of the Greeks’ surrender, the soldiers dragged their prize into the city and began to celebrate. But one Trojan wasn't happy. Laocoön, a seer and priest, was deeply suspicious of the Greek gift. He reminded his fellow Trojans of their enemy’s reputation for trickery, and cautioned them not to accept this strange offering. The crowd jeered at his warning, but Laocoön was undeterred. He forced his way to the wooden beast and thrust his sword into its belly. Yet his blade drew no blood. And if there were men shifting inside, Laocoön couldn't hear them over the crowd. Still grim with foreboding, Laocoön retreated home and enlisted his sons in preparing a sacrifice to the gods. But his fate— and that of his fellow Trojans— was already sealed. The gods had decided to grant the Greeks victory by ensuring the success of their scheme to infiltrate Troy. And Poseidon sought to punish the priest for threatening that plan. Two great serpents emerged from the sea’s rolling waves and descended on Laocoön and his sons. The seer’s violent death went unnoticed amidst the celebrations. But, that night, when tragedy struck, the Trojans finally remembered the old priest’s warning. Laocoön's tragic tale inspired countless retellings across the ancient world. Virgil describes the seer’s demise in his epic poem “The Aeneid,” and Sophocles composed an entire play about the ill-fated priest. However, his most famous and influential depiction is a marble statue called “Laocoön and His Sons.” Likely carved by a trio of artists from Rhodes, the exact origins of this piece remain mysterious, with current theories dating its creation anywhere from 200 BCE to 68 CE.
Whenever it was made, this sculpture remains the epitome of the Hellenistic Baroque style. But even within a tradition known for its dramatic facial expressions and contorted figures, no other piece in this style comes close to the intensity of “Laocoön and His Sons.” The nearly life-sized figures are writhing in agony, straining to untangle massive snakes from their limbs. Their faces are packed with desperation and hopelessness, yet Laocoön’s expression remains fiercely determined to resist. The scene is also uniquely brutal— paused precisely as the serpent’s venomous jaws are about to bite down. Displayed as the centerpiece of Emperor Nero’s Domus Aurea palace complex, this gruesome sculpture was one of the most talked about artworks of its time. Renowned Roman writer Pliny the Elder even went so far as to call it “preferable to any other production of the art of painting or of statuary.” Unfortunately, the statue was lost when Domus Aurea was consumed by fire in 109 CE. But Laocoön's tale was far from finished. In 1506, Michelangelo Buonarroti— then the most famous sculptor in Rome— received a message that Pope Julius II had unearthed something marvelous. Even caked with dirt, “Laocoön and His Sons” astonished Michelangelo. The dramatic musculature was over-the-top, but all the more powerful for being so extreme. And the curving shapes of its serpent and human figures drew his eyes in constant motion. Pope Julius prominently displayed the piece at the Vatican, but its influence on Michelangelo is what made the statue truly famous. The sculpture's emotive, exaggerated elements transformed his approach to representing the human body. His paintings and sculptures began to feature contorted poses, referred to as “figura serpentinata,” meaning snake-like shapes. And his celebrated work in the Sistine Chapel centered on muscular, hyperextended figures.
Soon, Michelangelo’s new style sparked an entire artistic movement called Mannerism— influencing artists throughout the 1500s to exaggerate and twist human bodies for dramatic effect. Since artists of the Renaissance revered ancient Greco-Roman art above all else, perhaps it’s not surprising that “Laocoön and His Sons” made such a large impact. But not even the real Laocoön could have predicted that his likeness would become one of the most influential sculptures ever made.
Myths_from_Around_the_World
The_myth_of_Narcissus_and_Echo_Iseult_Gillespie.txt
Hera, queen of the gods, was on the edge of her throne. A mountain nymph named Echo, renowned for her charm and chatter, was regaling her with a sensational story. But what Hera didn’t know was that Echo was merely distracting her while her husband, Zeus, was frolicking about with the other nymphs. Unfortunately for Echo, Zeus got sloppy, and Hera realized what was going on. Enraged by Echo’s duplicity— and powerless to stop her husband’s adultery— Hera decided to silence the nymph for good. From then on, Echo could no longer enrapture listeners with her stories; she could only repeat the last words another said. As her conversations became dull and her company undesirable, Echo grew dispirited. One day, while Echo was drifting through the woods, she spotted a young man hunting deer. It was Narcissus, the stunningly beautiful son of a river god and water nymph. After his birth, a seer had given his mother a cryptic prophecy: Narcissus would live a long life— but only if he never really knew himself. No one was sure exactly what to make of this. And, in the meantime, Narcissus grew into a proud youth. His good looks attracted many admirers. But he preferred to amble through life on his own and left a trail of broken hearts in his wake. Seeing Narcissus there, Echo was filled with longing. Unable to initiate a conversation, she walked after him. Soon, Narcissus heard a rustle, and called out, “Who goes there? Who are you?” Echo revealed herself, but only repeated the word “you,” making her tone as endearing as possible as she went to hold him. Agitated, Narcissus said, “Let me go, I can’t stay.” Echo could only counter with a plea for him to do so. Freeing himself from her embrace, Narcissus snapped, “I’d rather die than have you love me!” To which Echo could only cry, “Love me... love me.” Narcissus told Echo once more to leave him alone, then faded from her gaze. Echo wandered to a cave. 
And gradually, her heart grew heavy and her body frail until all that was left of her was her voice, which the wind carried to vast, empty places. Forever after, it could be heard reverberating through hollow caves and rebounding across lonely clearings. But this wasn’t even the first time heartbreak over Narcissus had proven fatal. A young man named Ameinias had also been cruelly rejected by Narcissus. Before his death, he prayed to Nemesis, the goddess of revenge, that Narcissus would also one day know the pain of love. She heard Ameinias’ pleas and, upon witnessing Echo’s fate, decided that it would be the final affront. It was time for retribution. So, Nemesis set Narcissus towards a clear, glassy pool. As he bent towards the water to drink, he caught sight of a hauntingly beautiful young man. Never before had Narcissus seen himself with such clarity. He spent the day acquainting himself with every glinting angle and glowing curl then passed the evening gazing at his reflection by moonlight and sleeping with his fingers grazing the water. Days wore on, and Narcissus never parted from his one true love. When he reached out, his double reached for him; and when he leaned in to bestow a kiss, it also tilted its face. But when he tried to hold the bewitching figure, it disappeared. At last, Narcissus knew the agony of unrequited love. Eating and drinking nothing, Narcissus too wasted away. His neck ached from bending over the lake, and his legs became rooted to the grass. When the wood nymphs finally passed by, all that was left of him was a white and yellow flower bending towards its reflection. From then on, it was known as narcissus.
Myths_from_Around_the_World
The_Egyptian_myth_of_Isis_and_the_seven_scorpions_Alex_Gendler.txt
A woman in rags emerged from the swamp flanked by seven giant scorpions. Carrying a baby, she headed for the nearest village to beg for food. She approached a magnificent mansion, but the mistress of the house took one look at her grimy clothes and unusual companions and slammed the door in her face. So she continued down the road until she came to a cottage. The woman there took pity on the stranger and offered her what she could: a simple meal and a bed of straw. Her guest was no ordinary beggar. She was Isis, the most powerful goddess in Egypt. Isis was in hiding from her brother Set, who murdered her husband and wanted to murder her infant son, Horus. Set was also a powerful god, and he was looking for them. So to keep her cover, Isis had to be very discreet— she couldn’t risk using her powers. But she was not without aid. Serket, goddess of venomous creatures, had sent seven of her fiercest servants to guard Isis and her son. As Isis and Horus settled into their humble accommodation, the scorpions fumed at how the wealthy woman had offended their divine mistress. They all combined their venom and gave it to one of the seven, Tefen. In the dead of night, Tefen crept over to the mansion. As he crawled under the door, he saw the owner’s young son sleeping peacefully and gave him a mighty sting. Isis and her hostess were soon awakened by loud wailing. As they peered out of the doorway of the cottage, they saw a mother running through the street, weeping as she cradled her son. When Isis recognized the woman who had turned her away, she understood what her scorpions had done. Isis took the boy in her arms and began to recite a powerful spell: "O poison of Tefen, come out of him and fall upon the ground! Poison of Befen, advance not, penetrate no farther, come out of him, and fall upon the ground! For I am Isis, the great Enchantress, the Speaker of spells. Fall down, O poison of Mestet! Hasten not, poison of Mestetef! Rise not, poison of Petet and Thetet! 
Approach not, poison of Matet!" With each name she invoked, that scorpion’s poison was neutralized. The child stirred, and his mother wept with gratitude and lamented her earlier callousness, offering all her wealth to Isis in repentance. The woman who had taken Isis in watched in awe— she had had no idea who she’d brought under her roof. And from that day on, the people learned to make a poultice to treat scorpion bites, speaking magical incantations just as the goddess had.
Myths_from_Around_the_World
The_tragic_myth_of_Orpheus_and_Eurydice_Brendan_Pelsue.txt
It was the perfect wedding, the guests thought. The groom was Orpheus, the greatest of all poets and musicians. The bride Eurydice, a wood nymph. Anyone could tell the couple was truly and deeply in love. Suddenly, Eurydice stumbled, then fell to the ground. By the time Orpheus reached her side, she was dead, and the snake that had bitten her was slithering away through the grass. Following Eurydice’s funeral, Orpheus was overcome with a grief the human world could not contain, and so he decided he would journey to the land of the dead, a place from which no living creature had ever returned, to rescue his beloved. When Orpheus reached the gates of the underworld, he began to strum his lyre. The music was so beautiful that Cerberus, the three-headed dog who guards the dead, lay down as Orpheus passed. Charon, the ferry captain who charged dead souls to cross the River Styx, was so moved by the music that he brought Orpheus across free of charge. When Orpheus entered the palace of Hades and Persephone, the king and queen of the dead, he began to sing. He sang of his love for Eurydice, and said she had been taken away too soon. The day would come when she, like all living creatures, dwelled in the land of the dead for all eternity, so couldn’t Hades grant her just a few more years on Earth? In the moment after Orpheus finished, all hell stood still. Sisyphus no longer rolled his rock up the hill. Tantalus did not reach for the water he would never be allowed to drink. Even the Furies, the demonic goddesses of vengeance, wept. Hades and Persephone granted Orpheus’s plea, but on one condition. As he climbed back out of the underworld, he must not turn around to see if Eurydice was following behind him. If he did, she would return to the land of the dead forever. Orpheus began to climb. With each step, he worried more and more about whether Eurydice was behind him. He heard nothing— where were her footsteps? 
Finally, just before he stepped out of the underworld and into the bright light of day, he gave in to temptation and turned around. He caught only a glimpse of Eurydice before she was drawn back into the land of the dead forever. Orpheus tried to return to the underworld, but was refused entry. Separated from Eurydice, Orpheus swore never to love another woman again. Instead, he sat in a grove of trees and sang songs of lovers. There was Ganymede, the beautiful boy who Zeus made drink-bearer to the gods. There was Myrrha, who loved her father and was punished for it, and Pygmalion, who sculpted his ideal woman out of ivory, then prayed to Venus until she came to life. And there was Venus herself, whose beautiful Adonis was killed by a wild boar. It was as if Orpheus’s own love and loss had allowed him to see into the hearts of gods and people everywhere. For some, however, poetry was not enough. A group of wild women called the Maenads could not bear the thought that a poet who sang so beautifully of love would not love them. Their jealousy drove them to a frenzy and they destroyed poor Orpheus. The birds, nature’s singers, mourned Orpheus, as did the rivers, who made music as they babbled. The world had lost two great souls. Orpheus and Eurydice had loved each other so deeply that when they were separated, Orpheus had understood the pain and joys of lovers everywhere, and a new art form, the love poem, was born. While the world wept, Orpheus found peace, and his other half, in the underworld. There, to this day, he walks with Eurydice along the banks of the River Styx. Sometimes, they stroll side by side; sometimes, she is in front; and sometimes, he takes the lead, turning to look back at her as often as he likes.
The_myth_of_Loki_and_the_master_builder_Alex_Gendler.txt
Asgard, a realm of wonders, was where the Norse Gods made their home. There Odin’s great hall of Valhalla towered above the mountains and Bifrost, the rainbow bridge, anchored itself. But though their domain was magnificent, it stood undefended from the giants and trolls of Jotunheim, who despised the gods and sought to destroy them. One day when Thor, strongest of the gods, was off fighting these foes, a stranger appeared, riding a powerful gray horse. The visitor made the gods an astonishing offer. He would build them the greatest wall they’d ever seen, higher than any giant could climb and stronger than any troll could break. All he asked in return was the beautiful goddess Freya’s hand in marriage— along with the sun and moon from the sky. The gods balked at this request and were ready to send him away. But the trickster Loki concocted a devious plan. He told the gods they should accept the stranger’s offer, but set such strict conditions that he would fail to complete the wall in time. That way, they would lose nothing, while getting most of the wall built for free. Freya didn’t like this idea at all, but Odin and the other gods were convinced and came to an agreement with the builder. He would only have one winter to complete the wall. If any part was unfinished by the first day of summer, he would receive no payment. And he could have no help from any other people. The gods sealed the deal with solemn oaths and swore the mason would come to no harm in Asgard. In the morning, the stranger began to dig the foundations at an astonishing speed, and at nightfall he set off towards the mountains to obtain the building stones. But it was only the next morning, when they saw him returning, that the gods began to worry. As agreed, no other people were helping the mason. But his horse Svadilfari was hauling a load of stones so massive it left trenches in the ground behind them. Winter came and went. 
The stranger kept building, Svadilfari kept hauling, and neither snow nor rain could slow their progress. With only three days left until summer, the wall stood high and impenetrable, with only the gate left to be built. Horrified, the gods realized that not only would they lose their fertility goddess forever, but without the sun and moon the world would be plunged into eternal darkness. They wondered why they’d made such a foolish wager— and then remembered Loki and his terrible advice. Suddenly, Loki didn’t feel so clever. All of his fellow gods threatened him with an unimaginably painful death if he didn’t find some way to prevent the builder from getting his payment. So Loki promised to take care of the situation, and dashed away. Outside, night had fallen, and the builder prepared to set off to retrieve the final load of stones. But just as he called Svadilfari to him, a mare appeared in the field. She was so beautiful that Svadilfari ignored his master and broke free of his reins. The mason tried to catch him, but the mare ran deep into the woods and Svadilfari followed. The stranger was furious. He knew that the gods were behind this and confronted them: no longer as a mild-mannered mason, but in his true form as a terrifying mountain giant. This was a big mistake. Thor had just returned to Asgard, and now that the gods knew a giant was in their midst, they disregarded their oaths. The only payment the builder would receive— and the last thing he would ever see— was the swing of Thor’s mighty hammer Mjolnir. As they set the final stones into the wall, the gods celebrated their victory. Loki was not among them, however. Several months would pass before he finally returned, followed by a beautiful gray foal with eight legs. The foal would grow into a magnificent steed named Sleipnir and become Odin’s mount, a horse that could outrun the wind itself. But exactly where he had come from was something Loki preferred not to discuss.
The_myth_of_Prometheus_Iseult_Gillespie.txt
Before the creation of humanity, the Greek gods won a great battle against a race of giants called the Titans. Most Titans were destroyed or driven to the eternal hell of Tartarus. But the Titan Prometheus, whose name means foresight, persuaded his brother Epimetheus to fight with him on the side of the gods. As thanks, Zeus entrusted the brothers with the task of creating all living things. Epimetheus was to distribute the gifts of the gods among the creatures. To some, he gave flight; to others, the ability to move through water or race through grass. He gave the beasts glittering scales, soft fur, and sharp claws. Meanwhile, Prometheus shaped the first humans out of mud. He formed them in the image of the gods, but Zeus decreed they were to remain mortal and worship the inhabitants of Mount Olympus from below. Zeus deemed humans subservient creatures vulnerable to the elements and dependent on the gods for protection. However, Prometheus envisioned his crude creations with a greater purpose. So when Zeus asked him to decide how sacrifices would be made, the wily Prometheus planned a trick that would give humans some advantage. He killed a bull and divided it into two parts to present to Zeus. On one side, he concealed the succulent flesh and skin under the unappealing belly of the animal. On the other, he hid the bones under a thick layer of fat. When Zeus chose the seemingly best portion for himself, he was outraged at Prometheus's deception. Fuming, Zeus forbade the use of fire on Earth, whether to cook meat or for any other purpose. But Prometheus refused to see his creations denied this resource. And so, he scaled Mount Olympus to steal fire from the workshop of Hephaestus and Athena. He hid the flames in a hollow fennel stalk and brought it safely down to the people. This gave them the power to harness nature for their own benefit and ultimately dominate the natural order. With fire, humans could care for themselves with food and warmth. 
But they could also forge weapons and wage war. Prometheus's flames acted as a catalyst for the rapid progression of civilization. When Zeus looked down at this scene, he realized what had happened. Prometheus had once again wounded his pride and subverted his authority. Furious, Zeus imposed a brutal punishment. Prometheus was to be chained to a cliff for eternity. Each day, he would be visited by a vulture who would tear out his liver and each night his liver would grow back to be attacked again in the morning. Although Prometheus remained in perpetual agony, he never expressed regret at his act of rebellion. His resilience in the face of oppression made him a beloved figure in mythology. He was also celebrated for his mischievous and inquisitive spirit, and for the knowledge, progress, and power he brought to human hands. He's also a recurring figure in art and literature. In Percy Bysshe Shelley's lyrical drama "Prometheus Unbound," the author imagines Prometheus as a romantic hero who escapes and continues to spread empathy and knowledge. Of his protagonist, Shelley wrote, "Prometheus is the type of the highest perfection of moral and intellectual nature, impelled by the purest and the truest motives to the best and noblest ends." His wife Mary envisaged Prometheus as a more cautionary figure and subtitled her novel "Frankenstein: The Modern Prometheus." This subtitle suggests the dangers of corrupting the natural order, and it remains relevant to the ethical questions surrounding science and technology today. As hero, rebel, or trickster, Prometheus remains a symbol of our capacity to capture the powers of nature, and ultimately, he reminds us of the potential of individual acts to ignite the world.
The_myth_of_Loki_and_the_deadly_mistletoe_Iseult_Gillespie.txt
Baldur— son of All Father Odin and Queen Frigg, husband of Nanna the Peaceful, and God of truth and light— was the gentlest and most beloved being in all of Asgard. In his great hall of Breidablik, Baldur’s soothing presence eased his subjects’ woes. But lately, he was plagued by troubles of his own. Every night, Baldur had gruesome visions foretelling his own imminent death. Determined to protect her son from these grim prophecies, Queen Frigg travelled across the nine realms, begging all living things not to harm Baldur. Her grace moved each being she encountered. Every animal and element, every plague and plant, every blade and bug gladly gave their word. Frigg returned to Breidablik, and threw a great feast to celebrate. Wine flowed freely, and soon the gods took turns testing Baldur’s immunity. Lurking in the corner, Loki rolled his eyes. The trickster god had never cared for Baldur the Bright, and found his new gift profoundly irritating. Surely there was a flaw in Frigg’s plan. Taking the form of an old woman, Loki crept to Frigg’s side and feigned confusion. Why were the gods attacking sweet Baldur, whom they all loved so dearly? Frigg told her of the oaths, but the old woman pressed on. Surely you didn’t receive a vow from everything, she asked. Frigg shrugged. The only being she hadn’t visited was mistletoe. After all, what god could fear a trifling weed? At this, Loki dashed outside to find a sprig of mistletoe. When he returned, the festivities had grown even rowdier. But not every god was enjoying the party. Baldur’s brother Hodur, who was blind and weaponless, sat dejected. Seeing his opportunity, the trickster slyly offered Hodur a chance to participate. Loki armed him with mistletoe, guided his aim towards his brother, and told Hodur to hurl with all his might. The mistletoe pierced Baldur’s chest with deadly force. The god’s light quickly flickered out, and despair swept over the crowd. 
Within moments, the impact of Baldur’s death could be felt across the nine realms. But from the weeping masses, Hermod the Brave stepped forward. The warrior god believed that with the help of Odin’s mighty steed, there was no plane he could not reach. He would travel to the halls of Hel herself, and bring Baldur home. The god rode for nine days and nine nights, past halls of corpses and over paths paved with bone. When he finally reached the Queen of the Underworld, Hermod begged her to return Baldur to his family. Hel considered taking pity, but she wanted to know the extent of the gods’ mourning. She agreed to relinquish Baldur’s soul— if Hermod could prove that every living thing wept at Baldur’s death. Hermod shot back to the land of the living. He met with every creature that Frigg visited earlier— all of which cried for Baldur and begged for his return. Meanwhile, Loki watched Hermod’s mission with disdain. He would not let his work be so easily undone, but if he interfered too boldly it might reveal his hand in Baldur’s murder. Disguising himself as a ferocious giantess, he hid himself at Hermod’s final stop. When the warrior came, the howling wind and craggy rocks each declared their love for Baldur. But the giantess within spewed only contempt for the deceased. No matter how much Hermod begged, she would not shed a single tear. With his last hope dashed, the god began to mourn Baldur a second time. But an echo from the cave rang out above his sobs. Loki’s twisted cackle was well-known to every Asgardian, and Hermod realized he’d been tricked. As he leapt to accost the trickster, Loki took the form of a salmon and wriggled into the waterfall. His escape was guaranteed, until Thor arrived at the scene. Dragging Loki back to the cave, the gods bound him with a poisonous serpent. Here, Loki would remain chained until the end of days— the serpent dripping venom on his brow as punishment for dousing Asgard’s brightest light.
The_myth_of_Lokis_monstrous_children_Iseult_Gillespie.txt
Odin, the king of Asgard, was plagued by nightmares. Three fearsome figures haunted his dreams: a massive, writhing shadow; a shambling, rotting corpse; and worst of all, a monstrous beast with a deadly bite. Night after night, the creatures besieged the king. And although their true forms were unknown to him, he could tell they were related to Asgard’s most persistent problem: Loki. Despite having settled down with his wife and sons, Loki had been sneaking off to visit the giantess Angrboda. And when the king learned this affair had produced three children, he was filled with unease. Odin summoned Thor and Tyr, two of his bravest warriors, to travel to Jotunheim to capture Loki’s secret children. Upon arriving at Angrboda’s home, the pair were immediately accosted by Loki's first child, a serpent named Jörmungandr. The God of Thunder dodged the snake’s venom and swiftly bound him to a pine tree. The second child, Hel, appeared as a glowing young woman from the right and a moldering corpse from the left. Her flesh flaked onto the ground as she silently submitted to her captors. Finally, the third child leapt at Tyr. The small wolf was fierce but harmless. Tyr playfully cuffed its claws and stowed the cub in his pocket. Back in Asgard, the warriors presented their prisoners and fearful recognition seized Odin's heart. Though these three were meager reflections of his dark dreams, the king was determined to dispose of them before his visions came true. First, he banished Jörmungandr to the sea at the edge of the world. Then he sent Hel deep below the earth to join her fellow corpses. But the wolf, named Fenrir, presented a challenge. He’d already grown strong enough to threaten the gods, so Odin took a more patient approach. For months, he supervised the creature, watching Fenrir grow from a cub to a wolf to a beast who spoke with the voice of a God. Tyr visited frequently and found Fenrir to be strong and clever. But as their bond deepened, Odin's fear only grew. 
One day, Odin forged his heaviest chains and hauled them to Fenrir with a challenge. He would bind the wolf to test his growing strength. Fenrir eagerly accepted the challenge and splintered the metal like old wood. Odin returned to the forge, crafting shackles that no man could lift alone. These sturdy chains gave Fenrir pause. But with an encouraging wink from Tyr, he accepted the challenge. The beast strained for a moment and then shattered his restraints into a thousand pieces. Desperate, Odin sought help from the most skilled makers of all: the Dwarves. Rather than metal, they sought the rarest ingredients; from feline footsteps and fish breath to the sinews of mountains and mighty bears. With these, the Dwarves crafted Gleipnir, an unbreakable chain in the guise of fine thread. When Odin challenged Fenrir a third time, the wolf laughed. But as he examined the thread more closely, Fenrir sensed Odin’s trickery and began to feel some fear himself. Fenrir struck a deal. He would accept the challenge, but only if a god kept their hand in his mouth throughout. With a heavy heart, Tyr volunteered. The gods bound the wolf and as he strained Gleipnir only grew tighter. Fenrir felt the agony of betrayal— not only from Odin, but from his reluctant friend. With a howl of fury, he bit through Tyr’s wrist and vowed to destroy Odin for tricking him. Watching his nightmare come to life, Odin thrust Tyr’s blade between Fenrir’s jaws, releasing a torrent of saliva that became a furious river. While the beast was not dead, he was bound, and Odin celebrated his victory over fate. But in truth, his actions had only sealed his doom. Beneath the waves, Jörmungandr grew to encircle the world. Hel rose to rule the dead as queen of the Underworld. And every day, Fenrir strained a little more against his chains, inching ever closer to his bloody revenge.