| playlist | file_name | content |
|---|---|---|
TedEd_History | 아르키메데스의_유레카_그_뒤에_숨겨진_실화_아르만드_당구어.txt | When you think of Archimedes' "Eureka!" moment, you probably think of this. As it turns out, it may have been more like this. In the third century BC, Hieron, king of the Sicilian city of Syracuse, chose Archimedes to supervise an engineering project of unprecedented scale. Hieron commissioned a sailing vessel 50 times bigger than a standard ancient warship, named the Syracusia after his city. Hieron wanted to construct the largest ship ever, which was destined to be given as a present for Egypt's ruler, Ptolemy. But could a boat the size of a palace possibly float? In Archimedes's day, no one had attempted anything like this. It was like asking, "Can a mountain fly?" King Hieron had a lot riding on that question. Hundreds of workmen were to labor for years on constructing the Syracusia out of beams of pine and fir from Mount Etna, ropes from hemp grown in Spain, and pitch from France. The top deck, on which eight watchtowers were to stand, was to be supported not by columns, but by vast wooden images of Atlas holding the world on his shoulders. On the ship's bow, a massive catapult would be able to fire 180 pound stone missiles. For the enjoyment of its passengers, the ship was to feature a flower-lined promenade, a sheltered swimming pool, and bathhouse with heated water, a library filled with books and statues, a temple to the goddess Aphrodite, and a gymnasium. And just to make things more difficult for Archimedes, Hieron intended to pack the vessel full of cargo: 400 tons of grain, 10,000 jars of pickled fish, 74 tons of drinking water, and 600 tons of wool. It would have carried well over a thousand people on board, including 600 soldiers. And it housed 20 horses in separate stalls. To build something of this scale, only for that to sink on its maiden voyage? Well, let's just say that failure wouldn't have been a pleasant option for Archimedes. So he took on the problem: will it sink? Perhaps he was sitting in the bathhouse one day, wondering how a heavy bathtub can float, when inspiration came to him. An object partially immersed in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the object. In other words, if a 2,000 ton Syracusia displaced exactly 2,000 tons of water, it would just barely float. If it displaced 4,000 tons of water, it would float with no problem. Of course, if it only displaced 1,000 tons of water, well, Hieron wouldn't be too happy. This is the law of buoyancy, and engineers still call it Archimedes' Principle. It explains why a steel supertanker can float as easily as a wooden rowboat or a bathtub. If the weight of water displaced by the vessel below the keel is equivalent to the vessel's weight, whatever is above the keel will remain afloat above the waterline. This sounds a lot like another story involving Archimedes and a bathtub, and it's possible that's because they're actually the same story, twisted by the vagaries of history. The classical story of Archimedes' Eureka! and subsequent streak through the streets centers around a crown, or corona in Latin. At the core of the Syracusia story is a keel, or korone in Greek. Could one have been mixed up for the other? We may never know. On the day the Syracusia arrived in Egypt on its first and only voyage, we can only imagine how residents of Alexandria thronged the harbor to marvel at the arrival of this majestic, floating castle. 
This extraordinary vessel was the Titanic of the ancient world, except without the sinking, thanks to our pal, Archimedes. |
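The buoyancy rule stated in the transcript above boils down to a single comparison. Below is a minimal Python sketch of that comparison, using the transcript's own hypothetical tonnages for the Syracusia; the function name and figures are illustrative, not historical measurements.

```python
# Minimal sketch of Archimedes' principle as stated in the transcript:
# a vessel floats when the weight of water it can displace is at least
# its own weight. The tonnages are the transcript's hypothetical figures.

def floats(ship_weight_tons: float, displaced_water_tons: float) -> bool:
    """Buoyant force equals the weight of displaced water; compare it to the ship's weight."""
    return displaced_water_tons >= ship_weight_tons

print(floats(2000, 2000))  # True  -- displaces exactly its weight, just barely floats
print(floats(2000, 4000))  # True  -- floats with no problem
print(floats(2000, 1000))  # False -- Hieron wouldn't be too happy
```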
TedEd_History | The_Akune_brothers_Siblings_on_opposite_sides_of_war_Wendell_Oshiro.txt | There are many stories that can be told about World War II, from the tragic to the inspiring. But perhaps one of the most heartrending experiences was that of the Akune family, divided by the war against each other and against their own identities. Ichiro Akune and his wife Yukiye immigrated to America from Japan in 1918 in search of opportunity, opening a small grocery store in central California and raising nine children. But when Mrs. Akune died in 1933, the children were sent to live with relatives in Japan, their father following soon after. Though the move was a difficult adjustment after having been born and raised in America, the oldest son, Harry, formed a close bond with his grand uncle, who taught him the Japanese language, culture, and values. Nevertheless, as soon as Harry and his brother Ken were old enough to work, they returned to the country they considered home, settling near Los Angeles. But then, December 7, 1941, the attack on Pearl Harbor. Now at war with Japan, the United States government did not trust the loyalty of those citizens who had family or ancestral ties to the enemy country. In 1942, about 120,000 Japanese Americans living on the West Coast were stripped of their civil rights and forcibly relocated to internment camps, even though most of them, like Harry and Ken, were Nisei, American or dual citizens who had been born in the US to Japanese immigrant parents. The brothers not only had very limited contact with their family in Japan, but found themselves confined to a camp in a remote part of Colorado. But their story took another twist when recruiters from the US Army's military intelligence service arrived at the camp looking for Japanese-speaking volunteers. Despite their treatment by the government, Harry and Ken jumped at the chance to leave the camp and prove their loyalty as American citizens. Having been schooled in Japan, they soon began their service, translating captured documents, interrogating Japanese soldiers, and producing Japanese language propaganda aimed at persuading enemy forces to surrender. The brothers' work was invaluable to the war effort, providing vital strategic information about the size and location of Japanese forces. But they still faced discrimination and mistrust from their fellow soldiers. Harry recalled an instance where his combat gear was mysteriously misplaced just prior to parachuting into enemy territory, with the white officer reluctant to give him a weapon. Nevertheless, both brothers continued to serve loyally through the end of the war. But Harry and Ken were not the only Akune brothers fighting in the Pacific. Unbeknownst to them, two younger brothers, the third and fourth of the five Akune boys, were serving dutifully in the Imperial Japanese Navy. Saburo in the Naval Air Force, and 15-year-old Shiro as an orientation trainer for new recruits. When the war ended, Harry and Ken served in the allied occupational forces and were seen as traitors by the locals. When all the Akune brothers gathered at a family reunion in Kagoshima for the first time in a decade, it was revealed that the two pairs had fought on opposing sides. Tempers flared and a fight almost broke out until their father stepped in. The brothers managed to make peace and Saburo and Shiro joined Harry and Ken in California, and later fought for the US Army in Korea. 
It took until 1988 for the US government to acknowledge the injustice of its internment camps and approve reparations payments to survivors. For Harry, though, his greatest regret was not having the courage to thank his Japanese grand uncle who had taught him so much. The story of the Akune brothers is many things: a family divided by circumstance, the unjust treatment of Japanese Americans, and the personal struggle of reconciling two national identities. But it also reveals a larger story about American history: the oppression faced by immigrant groups and their perseverance in overcoming it. |
TedEd_History | 베르메르의_진주귀걸이를_한_소녀가_걸작인_이유는_무엇일까요ㅣ제임스_얼James_Earle.txt | Is she turning towards you or away from you? No one can agree. She's the mysterious subject of Dutch master Johannes Vermeer's "Girl with the Pearl Earring," a painting often referred to as the ‘Mona Lisa of the North.’ Belonging to a Dutch style of idealized, sometimes overly expressive paintings known as tronies, the "Girl with the Pearl Earring" has the allure and subtlety characteristic of Vermeer's work. But this painting stands apart from the quiet narrative scenes that we observe from afar in many of Vermeer's paintings. A girl reading a letter. A piano lesson. A portrait artist at work. These paintings give us a sense of intimacy while retaining their distance, a drawn curtain often emphasizes the separation. We can witness a milkmaid serenely pouring a bowl of milk, but that milk isn't for us. We're only onlookers. The studied composition in Vermeer's paintings invokes a balanced harmony. With the checkered floor in many of his works, Vermeer demonstrates his command of perspective and foreshortening. That's a technique that uses distortion to give the illusion of an object receding into the distance. Other elements, like sight lines, mirrors, and light sources describe the moment through space and position. The woman reading a letter by an open window is precisely placed so the window can reflect her image back to the viewer. Vermeer would even hide the leg of an easel for the sake of composition. The absence of these very elements brings the "Girl with the Pearl Earring" to life. Vermeer's treatment of light and shadow, or chiaroscuro, uses a dark, flat background to further spotlight her three-dimensionality. Instead of being like a set piece in a theatrical narrative scene, she becomes a psychological subject. Her eye contact and slightly parted lips, as if she is about to say something, draw us into her gaze. Traditional subjects of portraiture were often nobility or religious figures. So why was Vermeer painting an anonymous girl? In the 17th century, the city of Delft, like the Netherlands in general, had turned against ruling aristocracy and the Catholic church. After eight decades of rebellion against Spanish power, the Dutch came to favor the idea of self-rule and a political republic. Cities like Delft were unsupervised by kings or bishops, so many artists like Vermeer were left without traditional patrons. Fortunately, business innovation spearheaded by the Dutch East India Company transformed the economic landscape in the Netherlands. It created a merchant class and new type of patron. Wishing to be represented in the paintings they financed, these merchants preferred middle class subjects depicted in spaces that looked like their own homes surrounded by familiar objects. The maps that appear in Vermeer's paintings, for example, were considered fashionable and worldly by the merchant class of what is known as the Dutch Golden Age. The oriental turban worn by the “Girl with the Pearl Earring” also emphasizes the worldliness of the merchant class, and the pearl itself, a symbol of wealth, is actually an exaggeration. Vermeer couldn't have afforded a real pearl of its size. It was likely just a glass or tin drop varnished to look like a pearl. This mirage of wealth is mirrored in the painting itself. In greater context, the pearl appears round and heavy, but a detailed view shows that it's just a floating smudge of paint. Upon close inspection, we are reminded of Vermeer's power as an illusion maker. 
While we may never know the real identity of the "Girl with the Pearl Earring," we can engage with her portrait in a way that is unforgettable. As she hangs in her permanent home in the Mauritshuis Museum in The Hague, her presence is simultaneously penetrating and subtle. In her enigmatic way, she represents the birth of a modern perspective on economics, politics, and love. |
TedEd_History | 무엇이_경제_거품을_일으킬까요_프라틱_싱.txt | How much would you pay for a bouquet of tulips? A few dollars? A hundred dollars? How about a million dollars? Probably not. Well, how much would you pay for this house, or partial ownership of a website that sells pet supplies? At different points in time, tulips, real estate and stock in pets.com have all sold for much more than they were worth. In each instance, the price rose and rose and then abruptly plummeted. Economists call this a bubble. So what exactly is going on with a bubble? Well, let's start with the tulips to get a better idea. The 17th century saw the Netherlands enter the Dutch golden age. By the 1630s, Amsterdam was an important port and commercial center. Dutch ships imported spices from Asia in huge quantities to earn profits in Europe. So Amsterdam was brimming with wealthy, skilled merchants and traders who displayed their prosperity by living in mansions surrounded by flower gardens. And there was one flower in particularly high demand: the tulip. The tulip was brought to Europe on trading vessels that sailed from the East. Because of this, it was considered an exotic flower that was also difficult to grow, since it could take years for a single tulip to bloom. During the 1630s, an outbreak of tulip breaking virus made select flowers even more beautiful by lining petals with multicolor, flame-like streaks. A tulip like this was scarcer than a normal tulip and as a result, prices for these flowers started to rise, and with them, the tulip's popularity. It wasn't long before the tulip became a nationwide sensation and tulip mania was born. A mania occurs when there is an upward movement of price combined with a willingness to pay large sums of money for something valued much lower in intrinsic value. A recent example of this is the dot-com mania of the 1990s. Stocks in new, exciting websites were like the tulips of the 17th century. Everybody wanted some. The more people who wanted the tulip, the higher the price could go. At one point, a single tulip bulb sold for more than ten times the annual salary of a skilled craftsman. In the stock market, the price of stock is based on the supply and demand of investors. Stock prices tend to rise when it seems like a company will earn more in the future. Investors might then buy more of the stock, raising the prices even further due to an increased demand. This can result in a feedback loop where investors get caught up in the hype and ultimately drive prices far above intrinsic value, creating a bubble. All that is needed for a mania to end and for a bubble to burst is the collective realization that the price of the stock, or a tulip, far exceeds its worth. That's what happened with both manias. Suddenly the demand ended. Prices were pushed to staggering lows, and pop! The bubbles burst, and the market crashed. Today, scholars work long and hard trying to predict what causes a bubble and how to avoid them. Tulip mania is an effective illustration of the underlying principles at work in a bubble and can help us understand more recent examples like the real estate bubble of the late 2000s. The economy will continue to go through phases of booms and busts. So while we wait for the next mania to start, and the next bubble to burst, treat yourself to a bouquet of tulips and enjoy the fact that you didn't have to pay an arm and a leg for them. |
TedEd_History | The_controversial_origins_of_the_Encyclopedia_Addison_Anderson.txt | Denis Diderot left a dungeon outside Paris on November 3, 1749. He'd had his writing burned in public before, but this time, he'd gotten locked up under royal order for an essay about a philosopher's death bed rejection of God. To free himself, Denis promised never to write things like that again. So he got back to work on something a little like that, only way worse, and much bigger. In 1745, publisher André le Breton had hired Diderot to adapt the English cyclopedia, or a universal dictionary of arts and sciences for French subscribers. A broke writer, Diderot survived by translating, tutoring, and authoring sermons for priests, and a pornographic novel once. Le Breton paired him with co-editor Jean le Rond d'Alembert, a math genius found on a church doorstep as a baby. Technical dictionaries, like the cyclopedia, weren't new, but no one had attempted one publication covering all knowledge, so they did. The two men organized the French Enlightenment's brightest stars to produce the first encyclopedia, or rational dictionary of the arts, sciences, and crafts. Assembling every essential fact and principle in, as it turned out, over 70,000 entries, 20,000,000 words in 35 volumes of text and illustrations created over three decades of researching, writing, arguing, smuggling, backstabbing, law-breaking, and alphabetizing. To organize the work, Diderot adapted Francis Bacon's "Classification of Knowledge" into a three-part system based on the mind's approaches to reality: memory, reason, and imagination. He also emphasized the importance of commerce, technology, and crafts, poking around shops to study the tools and techniques of Parisian laborers. To spotlight a few of the nearly 150 philosophe contributors, Jean Jacques Rousseau, Diderot's close friend, wrote much of the music section in three months, and was never reimbursed for copy fees. His entry on political economy holds ideas he'd later develop further in "The Social Contract." D'Alembert wrote the famous preliminary discourse, a key statement of the French Enlightenment, championing independent investigative reasoning as the path to progress. Louis de Jaucourt wrote a quarter of the encyclopedia, 18,000 articles, 5,000,000 words, unpaid. Louis once spent 20 years writing a book on anatomy, shipped it to Amsterdam to be published uncensored, and the ship sank. Voltaire contributed entries, among them history, elegance, and fire. Diderot's entries sometimes exhibit slight bias. In "political authority," he dismantled the divine right of kings. Under "citizen," he argued a state was strongest without great disparity in wealth. Not surprising from the guy who wrote poetry about mankind strangling its kings with the entrails of a priest. So Diderot's masterpiece wasn't a hit with the king or highest priest. Upon release of the first two volumes, Louis XV banned the whole thing but enjoyed his own copy. Pope Clement XIII ordered it burned. It was "dangerous," "reprehensible," as well as "written in French," and in "the most seductive style." He declared readers excommunicated and wanted Diderot arrested on sight. But Diderot kept a step ahead of being shut down, smuggling proofs outside France for publication, and getting help from allies in the French Regime, including the King's mistress, Madame de Pompadour, and the royal librarian and censor, Malesherbes, who tipped Diderot off to impending raids, and even hid Diderot's papers at his dad's house. 
Still, he faced years of difficulty. D'Alembert dropped out. Rousseau broke off his friendship over a line in a play. Worse yet, his publisher secretly edited some proofs to read less radically. The uncensored pages reappeared in Russia in 1933, long after Diderot had considered the work finished and died at lunch. The encyclopedia he left behind is many things: a cornerstone of the Enlightenment, a testament to France's crisis of authority, evidence of popular opinion's migration from pulpit and pew to cafe, salon, and press. It even has recipes. It's also irrepressibly human, as you can tell from Diderot's entry about a plant named aguaxima. Read it yourself, preferably out loud in a French accent. |
TedEd_History | 중국의_황도_12궁에_얽힌_전설_메간_캠피시_펜펜_첸Megan_Campisi_and_PenPen_Chen.txt | What's your sign? In Western astrology, it's a constellation determined by when your birthday falls in the calendar. But according to the Chinese zodiac, or shēngxiào, it's your shǔxiàng, meaning the animal assigned to your birth year. And of the many myths explaining these animal signs and their arrangement, the most enduring one is that of the Great Race. As the story goes, Yù Dì, or Jade Emperor, Ruler of the Heavens, wanted to devise a way to measure time, so he organized a race. The first twelve animals to make it across the river would earn a spot on the zodiac calendar in the order they arrived. The rat rose with the sun to get an early start, but on the way to the river, he met the horse, the tiger, and the ox. Because the rat was small and couldn't swim very well, he asked the bigger animals for help. While the tiger and horse refused, the kind-hearted ox agreed to carry the rat across. Yet, just as they were about to reach the other side, the rat jumped off the ox's head and secured first place. The ox came in second, with the powerful tiger right behind him. The rabbit, too small to battle the current, nimbly hopped across stones and logs to come in fourth. Next came the dragon, who could have flown directly across, but stopped to help some creatures she had encountered on the way. After her came the horse, galloping across the river. But just as she got across, the snake slithered by. The startled horse reared back, letting the snake sneak into sixth place. The Jade Emperor looked out at the river and spotted the sheep, the monkey, and the rooster all atop a raft, working together to push it through the weeds. When they made it across, the trio agreed to give eighth place to the sheep, who had been the most comforting and harmonious of them, followed by the monkey and the rooster. Next came the dog, scrambling onto the shore. He was a great swimmer, but frolicked in the water for so long that he only managed to come in eleventh. The final spot was claimed by the pig, who had gotten hungry and stopped to eat and nap before finally waddling across the finish line. And so, each year is associated with one of the animals in this order, with the cycle starting over every 60 years. Why 60 and not twelve? Well, the traditional Chinese calendar is made up of two overlapping systems. The animals of the zodiac are associated with what's called the Twelve Earthly Branches, or shí'èrzhī. Another system, the Ten Heavenly Stems, or tiāngān, is linked with the five classical elements of metal, xīn, wood, mù, water, shuǐ, fire, huǒ, and earth, tǔ. Each element is assigned yīn or yáng, creating a ten-year cycle. When the twelve animals of the Earthly Branches are matched with the five elements plus the yīn or the yáng of the Heavenly Stems, it creates 60 years of different combinations, known as a sexagenary cycle, or gānzhī. So someone born in 1980 would have the sign of yáng metal monkey, while someone born in 2007 would be yīn fire pig. In fact, you can also have an inner animal based on your birth month, a true animal based on your birth date, and a secret animal based on your birth hour. It was the great race that supposedly determined which animals were enshrined in the Chinese zodiac, but as the system spread through Asia, other cultures made changes to reflect their communities. 
So if you consult the Vietnamese zodiac, you may discover that you're a cat, not a rabbit, and if you're in Thailand, a mythical snake called a Naga replaces the dragon. So whether or not you place stock in what the zodiac says about you as an individual, it certainly reveals much about the culture it comes from. |
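The 60-year stem-branch pairing described in the transcript above can be computed mechanically. The Python sketch below is only an illustration: it assumes the conventional alignment in which 1984 begins a cycle (Yang Wood Rat), ignores the fact that the zodiac year actually turns at the lunar new year rather than January 1, and the function name is made up for the example.

```python
# Hypothetical sketch of the sexagenary (ganzhi) cycle described above:
# pair a year with one of the Twelve Earthly Branches (the animals) and
# one of the Ten Heavenly Stems (five elements, each yang then yin).
# Assumes the common alignment where 1984 = Yang Wood Rat; the lunar
# new year boundary is ignored for simplicity.

ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Sheep", "Monkey", "Rooster", "Dog", "Pig"]
ELEMENTS = ["Wood", "Fire", "Earth", "Metal", "Water"]  # two stems per element

def zodiac_sign(year: int) -> str:
    stem = (year - 4) % 10      # index into the Ten Heavenly Stems
    branch = (year - 4) % 12    # index into the Twelve Earthly Branches
    polarity = "Yang" if stem % 2 == 0 else "Yin"
    return f"{polarity} {ELEMENTS[stem // 2]} {ANIMALS[branch]}"

print(zodiac_sign(1980))  # Yang Metal Monkey, matching the transcript's example
print(zodiac_sign(2007))  # Yin Fire Pig, matching the transcript's example
```

Because the stems repeat every 10 years and the branches every 12, the combinations repeat every lcm(10, 12) = 60 years, which is the sexagenary cycle the transcript describes.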
TedEd_History | 시저에_대항한_위대한_음모_캐서린_템페스트Kathryn_Tempest.txt | What would you do if you thought your country was on the path to tyranny? If you saw one man gaining too much power, would you try to stop him? Even if that man was one of your closest friends and allies? These were the questions haunting Roman Senator Marcus Junius Brutus in 44 BCE, the year Julius Caesar would be assassinated. Opposing unchecked power wasn’t just a political matter for Brutus; it was a personal one. He claimed descent from Lucius Junius Brutus, who had helped overthrow the tyrannical king known as Tarquin the Proud. Instead of seizing power himself, the elder Brutus led the people in a rousing oath to never again allow a king to rule. Rome became a republic, based on the principle that no one man should hold too much power. Now, four and a half centuries later, this principle was threatened. Julius Caesar's rise to the powerful position of consul had been dramatic. Years of military triumphs had made him the wealthiest man in Rome. And after defeating his rival Pompey the Great in a bitter civil war, his power was at its peak. His victories and initiatives, such as distributing lands to the poor, had made him popular with the public, and many senators vied for his favor by showering him with honors. Statues were built, temples were dedicated, and a whole month was renamed, still called July today. More importantly, the title of dictator, meant to grant temporary emergency powers in wartime, had been bestowed upon Caesar several times in succession. And in 44 BCE, he was made dictator perpetuo, dictator for a potentially unlimited term. All of this was too much for the senators who feared a return to the monarchy their ancestors had fought to abolish, as well as those whose own power and ambition were impeded by Caesar's rule. A group of conspirators calling themselves the liberators began to secretly discuss plans for assassination. Leading them were the senator Gaius Cassius Longinus and his friend and brother-in-law, Brutus. Joining the conspiracy was not an easy choice for Brutus. Even though Brutus had sided with Pompey in the ill-fated civil war, Caesar had personally intervened to save his life, not only pardoning him but even accepting him as a close advisor and elevating him to important posts. Brutus was hesitant to conspire against the man who had treated him like a son, but in the end, Cassius's insistence and Brutus's own fear of Caesar's ambitions won out. The moment they had been waiting for came on March 15. At a senate meeting held shortly before Caesar was to depart on his next military campaign, as many as 60 conspirators surrounded him, unsheathing daggers from their togas and stabbing at him from all sides. As the story goes, Caesar struggled fiercely until he saw Brutus. Despite the famous line, "Et tu, Brute?" written by Shakespeare, we don't know Caesar's actual dying words. Some ancient sources claim he said nothing, while others record the phrase, “And you, child?” fueling speculation that Brutus may have actually been Caesar's illegitimate son. But all agree that when Caesar saw Brutus among his attackers, he covered his face and gave up the fight, falling to the ground after being stabbed 23 times. Unfortunately for Brutus, he and the other conspirators had underestimated Caesar's popularity among the Roman public, many of whom saw him as an effective leader, and the senate as a corrupt aristocracy. Within moments of Caesar's assassination, Rome was in a state of panic. 
Most of the other senators had fled, while the assassins barricaded themselves on the Capitoline Hill. Mark Antony, Caesar's friend and co-consul, was swift to seize the upper hand, delivering a passionate speech at Caesar's funeral days later that whipped the crowd into a frenzy of grief and anger. As a result, the liberators were forced out of Rome. The ensuing power vacuum led to a series of civil wars, during which Brutus, facing certain defeat, took his own life. Ironically, the ultimate result would be the opposite of what the conspirators had hoped to accomplish: the end of the Republic and the concentration of power under the office of Emperor. Opinions over the assassination of Caesar were divided from the start and have remained so. As for Brutus himself, few historical figures have inspired such a conflicting legacy. In Dante's "Inferno," he was placed in the very center of Hell and eternally chewed by Satan himself for his crime of betrayal. But Swift's "Gulliver's Travels" described him as one of the most virtuous and benevolent people to have lived. The interpretation of Brutus as either a selfless fighter against dictatorship or an opportunistic traitor has shifted with the tides of history and politics. But even today, over 2,000 years later, questions about the price of liberty, the conflict between personal loyalties and universal ideals, and unintended consequences remain more relevant than ever. |
TedEd_History | Who_was_Confucius_Bryan_W_Van_Norden.txt | Most people recognize his name and know that he is famous for having said something, but considering the long-lasting impact his teachings have had on the world, very few people know who Confucius really was, what he really said, and why. Amid the chaos of 6th Century BCE China, where warring states fought endlessly among themselves for supremacy, and rulers were frequently assassinated, sometimes by their own relatives, Confucius exemplified benevolence and integrity, and through his teaching, became one of China's greatest philosophers. Born to a nobleman but raised in poverty from a very young age following the untimely death of his father, Confucius developed what would become a lifelong sympathy for the suffering of the common people. Barely supporting his mother and disabled brother as a herder and account keeper at a granary, and with other odd jobs, it was only with the help of a wealthy friend that Confucius was able to study at the Royal Archives, where his world view would be formed. Though the ancient texts there were regarded by some as irrelevant relics of the past, Confucius was inspired by them. Through study and reflection, Confucius came to believe that human character is formed in the family and by education in ritual, literature, and history. A person cultivated in this way works to help others, guiding them by moral inspiration rather than brute force. To put his philosophy into practice, Confucius became an advisor to the ruler of his home state of Lu. But after another state sent Lu's ruler a troop of dancing girls as a present and the ruler ignored his duties while enjoying the girls in private, Confucius resigned in disgust. He then spent the next few years traveling from state to state, trying to find a worthy ruler to serve, while holding fast to his principles. It wasn't easy. In accordance with his philosophy, and contrary to the practice of the time, Confucius dissuaded rulers from relying on harsh punishments and military power to govern their lands because he believed that a good ruler inspires others to spontaneously follow him by virtue of his ethical charisma. Confucius also believed that because the love and respect we learn in the family are fundamental to all other virtues, personal duties to family sometimes supersede obligations to the state. So when one duke bragged that his subjects were so upright that a son testified against his own father when his father stole a sheep, Confucius informed the duke that genuinely upright fathers and sons protected one another. During his travels, Confucius almost starved, he was briefly imprisoned, and his life was threatened at several points. But he was not bitter. Confucius had faith that heaven had a plan for the world, and he taught that a virtuous person could always find joy in learning and music. Failing to find the ruler he sought, Confucius returned to Lu and became a teacher and philosopher so influential, that he helped shape Chinese culture and we recognize his name worldwide, even today. For the disciples of Confucius, he was the living embodiment of a sage who leads others through his virtue, and they recorded his sayings, which eventually were edited into a book we know in English as "The Analects." 
Today, millions of people worldwide adhere to the principles of Confucianism, and though the precise meaning of his words has been debated for millennia, when asked to summarize his teachings in a single phrase, Confucius himself said, "Do not inflict upon others that which you yourself would not want." 2,500 years later, it's still sage advice. |
TedEd_History | How_to_understand_power_Eric_Liu.txt | Every day of your life, you move through systems of power that other people made. Do you sense them? Do you understand power? Do you realize why it matters? Power is something we are often uncomfortable talking about. That's especially true in civic life, how we live together in community. In a democracy, power is supposed to reside with the people, period. Any further talk about power and who really has it seems a little dirty, maybe even evil. But power is no more inherently good or evil than fire or physics. It just is. It governs how any form of government works. It determines who gets to determine the rules of the game. So learning how power operates is key to being effective, being taken seriously, and not being taken advantage of. In this lesson, we'll look at where power comes from, how it's exercised and what you can do to become more powerful in public life. Let's start with a basic definition. Power is the ability to make others do what you would have them do. Of course, this plays out in all arenas of life, from family to the workplace to our relationships. Our focus is on the civic arena, where power means getting a community to make the choices and to take the actions that you want. There are six main sources of civic power. First, there's physical force and a capacity for violence. Control of the means of force, whether in the police or a militia, is power at its most primal. A second core source of power is wealth. Money creates the ability to buy results and to buy almost any other kind of power. The third form of power is state action, government. This is the use of law and bureaucracy to compel people to do or not do certain things. In a democracy, for example, we the people, theoretically, give government its power through elections. In a dictatorship, state power emerges from the threat of force, not the consent of the governed. The fourth type of power is social norms or what other people think is okay. Norms don't have the centralized machinery of government. They operate in a softer way, peer to peer. They can certainly make people change behavior and even change laws. Think about how norms around marriage equality today are evolving. The fifth form of power is ideas. An idea, individual liberties, say, or racial equality, can generate boundless amounts of power if it motivates enough people to change their thinking and actions. And so the sixth source of power is numbers, lots of humans. A vocal mass of people creates power by expressing collective intensity of interest and by asserting legitimacy. Think of the Arab Spring or the rise of the Tea Party. Crowds count. These are the six main sources of power, what power is. So now, let's think about how power operates. There are three laws of power worth examining. Law number one: power is never static. It's always either accumulating or decaying in a civic arena. So if you aren't taking action, you're being acted upon. Law number two: power is like water. It flows like a current through everyday life. Politics is the work of harnessing that flow in a direction you prefer. Policymaking is an effort to freeze and perpetuate a particular flow of power. Policy is power frozen. Law number three: power compounds. Power begets more power, and so does powerlessness. The only thing that keeps law number three from leading to a situation where only one person has all the power is how we apply laws one and two. 
What rules do we set up so that a few people don't accumulate too much power, and so that they can't enshrine their privilege in policy? That's the question of democracy, and you can see each of these laws at work in any news story. Low wage workers organize to get higher pay. Oil companies push to get a big pipeline approved. Gay and lesbian couples seek the legal right to marry. Urban parents demand school vouchers. You may support these efforts or not. Whether you get what you want depends on how adept you are with power, which brings us finally to what you can do to become more powerful in public life. Here, it's useful to think in terms of literacy. Your challenge is to learn how to read power and write power. To read power means to pay attention to as many texts of power as you can. I don't mean books only. I mean seeing society as a set of texts. Don't like how things are in your campus or city or country? Map out who has what kind of power, arrayed in what systems. Understand why it turned out this way, who's made it so, and who wants to keep it so. Study the strategies others in such situations used: frontal attack or indirection, coalitions or charismatic authority. Read so you may write. To write power requires first that you believe you have the right to write, to be an author of change. You do. As with any kind of writing, you learn to express yourself, speak up in a voice that's authentic. Organize your ideas, then organize other people. Practice consensus building. Practice conflict. As with writing, it's all about practice. Every day you have a chance to practice, in your neighborhood and beyond. Set objectives, then bigger ones. Watch the patterns, see what works. Adapt, repeat. This is citizenship. In this short lesson, we've explored where civic power comes from, how it works and what you can do to exercise it. One big question remaining is the "why" of power. Do you want power to benefit everyone or only you? Are your purposes pro-social or anti-social? This question isn't about strategy. It's about character, and that's another set of lessons. But remember this: Power plus character equals a great citizen, and you have the power to be one. |
TedEd_History | 문법이_중요한가_안드레아_S_칼루드Andreea_S_Calude.txt | You're telling a friend an amazing story, and you just get to the best part when suddenly he interrupts, "The alien and I," not "Me and the alien." Most of us would probably be annoyed, but aside from the rude interruption, does your friend have a point? Was your sentence actually grammatically incorrect? And if he still understood it, why does it even matter? From the point of view of linguistics, grammar is a set of patterns for how words are put together to form phrases or clauses, whether spoken or in writing. Different languages have different patterns. In English, the subject normally comes first, followed by the verb, and then the object, while in Japanese and many other languages, the order is subject, object, verb. Some scholars have tried to identify patterns common to all languages, but apart from some basic features, like having nouns or verbs, few of these so-called linguistic universals have been found. And while any language needs consistent patterns to function, the study of these patterns opens up an ongoing debate between two positions known as prescriptivism and descriptivism. Grossly simplified, prescriptivists think a given language should follow consistent rules, while descriptivists see variation and adaptation as a natural and necessary part of language. For much of history, the vast majority of language was spoken. But as people became more interconnected and writing gained importance, written language was standardized to allow broader communication and ensure that people in different parts of a realm could understand each other. In many languages, this standard form came to be considered the only proper one, despite being derived from just one of many spoken varieties, usually that of the people in power. Language purists worked to establish and propagate this standard by detailing a set of rules that reflected the established grammar of their times. And rules for written grammar were applied to spoken language, as well. Speech patterns that deviated from the written rules were considered corruptions, or signs of low social status, and many people who had grown up speaking in these ways were forced to adopt the standardized form. More recently, however, linguists have understood that speech is a separate phenomenon from writing with its own regularities and patterns. Most of us learn to speak at such an early age that we don't even remember it. We form our spoken repertoire through unconscious habits, not memorized rules. And because speech also uses mood and intonation for meaning, its structure is often more flexible, adapting to the needs of speakers and listeners. This could mean avoiding complex clauses that are hard to parse in real time, making changes to avoid awkward pronunciation, or removing sounds to make speech faster. The linguistic approach that tries to understand and map such differences without dictating correct ones is known as descriptivism. Rather than deciding how language should be used, it describes how people actually use it, and tracks the innovations they come up with in the process. But while the debate between prescriptivism and descriptivism continues, the two are not mutually exclusive. At its best, prescriptivism is useful for informing people about the most common established patterns at a given point in time. This is important, not only for formal contexts, but it also makes communication easier between non-native speakers from different backgrounds. 
Descriptivism, on the other hand, gives us insight into how our minds work and the instinctive ways in which we structure our view of the world. Ultimately, grammar is best thought of as a set of linguistic habits that are constantly being negotiated and reinvented by the entire group of language users. Like language itself, it's a wonderful and complex fabric woven through the contributions of speakers and listeners, writers and readers, prescriptivists and descriptivists, from both near and far. |
TedEd_History | 플라톤의_동굴의_비유_l_알렉스_젠들러.txt | What is reality, knowledge, the meaning of life? Big topics you might tackle figuratively explaining existence as a journey down a road or across an ocean, a climb, a war, a book, a thread, a game, a window of opportunity, or an all-too-short-lived flicker of flame. 2,400 years ago, one of history's famous thinkers said life is like being chained up in a cave, forced to watch shadows flitting across a stone wall. Pretty cheery, right? That's actually what Plato suggested in his Allegory of the Cave, found in Book VII of "The Republic," in which the Greek philosopher envisioned the ideal society by examining concepts like justice, truth, and beauty. In the allegory, a group of prisoners have been confined in a cavern since birth, with no knowledge of the outside world. They are chained, facing a wall, unable to turn their heads, while a fire behind them gives off a faint light. Occasionally, people pass by the fire, carrying figures of animals and other objects that cast shadows on the wall. The prisoners name and classify these illusions, believing they're perceiving actual entities. Suddenly, one prisoner is freed and brought outside for the first time. The sunlight hurts his eyes and he finds the new environment disorienting. When told that the things around him are real, while the shadows were mere reflections, he cannot believe it. The shadows appeared much clearer to him. But gradually, his eyes adjust until he can look at reflections in the water, at objects directly, and finally at the Sun, whose light is the ultimate source of everything he has seen. The prisoner returns to the cave to share his discovery, but he is no longer used to the darkness, and has a hard time seeing the shadows on the wall. The other prisoners think the journey has made him stupid and blind, and violently resist any attempts to free them. Plato introduces this passage as an analogy of what it's like to be a philosopher trying to educate the public. Most people are not just comfortable in their ignorance but hostile to anyone who points it out. In fact, the real life Socrates was sentenced to death by the Athenian government for disrupting the social order, and his student Plato spends much of "The Republic" disparaging Athenian democracy, while promoting rule by philosopher kings. With the cave parable, Plato may be arguing that the masses are too stubborn and ignorant to govern themselves. But the allegory has captured imaginations for 2,400 years because it can be read in far more ways. Importantly, the allegory is connected to the theory of forms, developed in Plato's other dialogues, which holds that like the shadows on the wall, things in the physical world are flawed reflections of ideal forms, such as roundness, or beauty. In this way, the cave leads to many fundamental questions, including the origin of knowledge, the problem of representation, and the nature of reality itself. For theologians, the ideal forms exist in the mind of a creator. For philosophers of language viewing the forms as linguistic concepts, the theory illustrates the problem of grouping concrete things under abstract terms. And others still wonder whether we can really know that the things outside the cave are any more real than the shadows. As we go about our lives, can we be confident in what we think we know? Perhaps one day, a glimmer of light may punch a hole in your most basic assumptions. 
Will you break free to struggle towards the light, even if it costs you your friends and family, or stick with comfortable and familiar illusions? Truth or habit? Light or shadow? Hard choices, but if it's any consolation, you're not alone. There are lots of us down here. |
TedEd_History | 쿠바_미사일_위기의_역사_매튜_A_조던Matthew_A_Jordan.txt | It's not hard to imagine a world where at any given moment, you and everyone you know could be wiped out without warning at the push of a button. This was the reality for millions of people during the 45-year period after World War II, now known as the Cold War. As the United States and Soviet Union faced off across the globe, each knew that the other had nuclear weapons capable of destroying it. And destruction never loomed closer than during the 13 days of the Cuban Missile Crisis. In 1961, the U.S. unsuccessfully tried to overthrow Cuba's new communist government. That failed attempt was known as the Bay of Pigs, and it convinced Cuba to seek help from the U.S.S.R. Soviet premier Nikita Khrushchev was happy to comply by secretly deploying nuclear missiles to Cuba, not only to protect the island, but to counteract the threat from U.S. missiles in Italy and Turkey. By the time U.S. intelligence discovered the plan, the materials to create the missiles were already in place. At an emergency meeting on October 16, 1962, military advisors urged an airstrike on missile sites and invasion of the island. But President John F. Kennedy chose a more careful approach. On October 22, he announced that the U.S. Navy would intercept all shipments to Cuba. There was just one problem: a naval blockade was considered an act of war. Although the President called it a quarantine that did not block basic necessities, the Soviets didn't appreciate the distinction. In an outraged letter to Kennedy, Khrushchev wrote, "The violation of freedom to use international waters and international airspace is an act of aggression which pushes mankind toward the abyss of world nuclear missile war." Thus ensued the most intense six days of the Cold War. While the U.S. demanded the removal of the missiles, Cuba and the U.S.S.R. insisted they were only defensive. And as the weapons continued to be armed, the U.S. prepared for a possible invasion. On October 27, a spy plane piloted by Major Rudolph Anderson was shot down by a Soviet missile. The same day, a nuclear-armed Soviet submarine was hit by a small depth charge from a U.S. Navy vessel trying to signal it to come up. The commanders on the sub, too deep to communicate with the surface, thought war had begun and prepared to launch a nuclear torpedo. That decision had to be made unanimously by three officers. The captain and political officer both authorized the launch, but Vasili Arkhipov, second in command, refused. His decision saved the day and perhaps the world. But the crisis wasn't over. For the first time in history, the U.S. Military set itself to DEFCON 2, the defense readiness one step away from nuclear war. With hundreds of nuclear missiles ready to launch, the metaphorical Doomsday Clock stood at one minute to midnight. But diplomacy carried on. In Washington, D.C., Attorney General Robert Kennedy secretly met with Soviet Ambassador Anatoly Dobrynin. After intense negotiation, they reached the following proposal. The U.S. would remove their missiles from Turkey and Italy and promise to never invade Cuba in exchange for the Soviet withdrawal from Cuba under U.N. inspection. Once the meeting had concluded, Dobrynin cabled Moscow saying time is of the essence and we shouldn't miss the chance. And at 9 a.m. the next day, a message arrived from Khrushchev announcing the Soviet missiles would be removed from Cuba. The crisis was now over. 
While criticized at the time by their respective governments for bargaining with the enemy, contemporary historical analysis shows great admiration for Kennedy's and Khrushchev's ability to diplomatically solve the crisis. But the disturbing lesson was that a slight communication error, or split-second decision by a commander, could have thwarted all their efforts, as it nearly did if not for Vasili Arkhipov's courageous choice. The Cuban Missile Crisis revealed just how fragile human politics are compared to the terrifying power they can unleash. |
TedEd_History | 자연사_박물관에_숨겨진_비밀의_세계_조슈아_드류.txt | When you think of natural history museums, you probably picture exhibits filled with ancient lifeless things, like dinosaurs, meteorites, and gemstones. But behind that educational exterior, which only includes about 1% of a museum's collection, there are hidden laboratories where scientific breakthroughs are made. Beyond the unmarked doors, and on the floors the elevators won't take you to, you'd find windows into amazing worlds. This maze of halls and laboratories is a scientific sanctuary that houses a seemingly endless variety of specimens. Here, researchers work to unravel mysteries of evolution, cosmic origins, and the history of our planet. One museum alone may have millions of specimens. The American Museum of Natural History in New York City has over 32,000,000 in its collection. Let's take a look at just one of them. Scientists have logged exactly where and when it was found and used various dating techniques to pinpoint when it originated. Repeat that a million times over, and these plants, animals, minerals, fossils, and artifacts present windows into times and places around the world and across billions of years of history. When a research problem emerges, scientists peer through these windows and test hypotheses about the past. For example, in the 1950s, populations of predatory birds, like peregrine falcons, owls, and eagles started to mysteriously crash, to the point where a number of species, including the bald eagle, were declared endangered. Fortunately, scientists in The Field Museum in Chicago had been collecting the eggs of these predatory birds for decades. They discovered that the egg shells used to be thicker and had started to thin around the time when an insecticide called DDT started being sprayed on crops. DDT worked very well to kill insects, but when birds came and ate those heaps of dead bugs, the DDT accumulated in their bodies. It worked its way up the food chain and was absorbed by apex predator birds in such high concentrations that it thinned their eggs so that they couldn't support the nesting bird's weight. There were omelettes everywhere until scientists from The Field Museum in Chicago, and other institutions, helped solve the mystery and save the day. America thanks you, Field Museum. Natural history museums' windows into the past have solved many other scientific mysteries. Museum scientists have used their collections to sequence the Neanderthal genome, discover genes that gave mammoths red fur, and even pinpoint where ancient giant sharks gave birth. There are about 900 natural history museums in the world, and every year they make new discoveries and insights into the Earth's past, present and future. Museum collections even help us understand how modern threats, such as global climate change, are impacting our world. For instance, naturalists have been collecting samples for over 100 years from Walden Pond, famously immortalized by Henry David Thoreau. Thanks to those naturalists, who count Thoreau among their number, we know that the plants around Walden Pond are blooming over three weeks earlier than they did 150 years ago. Because these changes have taken place gradually, one person may not have noticed them over the span of a few decades, but thanks to museum collections, we have an uninterrupted record showing how our world is changing. So the next time you're exploring a natural history museum, remember that what you're seeing is just one gem of a colossal scientific treasure trove. 
Behind those walls and under your feet are windows into forgotten worlds. And who knows? One day some future scientist may peer through one and see you. |
TedEd_History | 배심원_제도에_일어나고_있는_일들_수자_토마스Suja_A_Thomas.txt | Dating back at least to the time of Socrates, some early societies decided that certain disputes, such as whether a person committed a particular crime, should be heard by a group of citizens. Several centuries later, trial by jury was introduced to England, where it became a fundamental feature of the legal system, checking the government and involving citizens in decision-making. Juries decided whether defendants would be tried on crimes, determined whether the accused defendants were guilty, and resolved monetary disputes. While the American colonies eventually cast off England's rule, its legal tradition of the jury persisted. The United States Constitution instructed a grand jury to decide whether criminal cases proceeded, required a jury to try all crimes, except impeachment, and provided for juries in civil cases as well. Yet, in the US today, grand juries often are not convened, and juries decide less than 4% of criminal cases and less than 1% of civil cases filed in court. That's at the same time as jury systems in other countries are growing. So what happened in the U.S.? Part of the story lies in how the Supreme Court has interpreted the Constitution. It's permitted plea bargaining, which now occurs in almost every criminal case. The way it works is the prosecutor presents the accused with a decision of whether to plead guilty. If they accept the plea, the case won't go in front of a jury, but they'll receive a shorter prison sentence than they'd get if a jury did convict them. The risk of a much greater prison sentence after a trial can frighten even an innocent defendant into taking a plea. Between the 19th century and the 21st century, the proportion of guilty pleas has increased from around 20% to 90%, and the numbers continue to grow. The Supreme Court has permitted the use of another procedure that interferes with the jury called summary judgement. Using summary judgement, judges can decide that civil trials are unnecessary if the people who sue have insufficient evidence. This is intended only for cases where no reasonable jury would disagree. That's a difficult thing to determine, yet usage of summary judgement has stretched to the point where some would argue it's being abused. For instance, judges grant fully, or in part, over 70% of employers' requests to dismiss employment discrimination cases. In other cases, both the person who sues and the person who defends forgo their right to go to court, instead resolving their dispute through a professional arbitrator. These are generally lawyers, professors, or former judges. Arbitration can be a smart decision by both parties to avoid the requirements of a trial in court, but it's often agreed to unwittingly when people sign contracts like employment applications and consumer agreements. That can become a problem. For example, some arbitrators may be biased towards the companies that give them cases. These are just some of the ways in which juries have disappeared. But could the disappearance of juries be a good thing? Well, juries aren't perfect. They're costly, time-consuming, and may make errors. And they're not always necessary, like when people can simply agree to settle their disputes. But juries have their advantages. When properly selected, jurors are more representative of the general population and don't have the same incentives as prosecutors, legislators, or judges seeking reelection or promotion. 
The founders of the United States trusted in the wisdom of impartial groups of citizens to check the power of all three branches of government. And the jury trial itself has given ordinary citizens a central role in upholding the social fabric. So will the jury system in the U.S. survive into the future? |
TedEd_History | The_exceptional_life_of_Benjamin_Banneker_RoseMargaret_EkengItua.txt | Sometime in the early 1750s, a 22-year-old man named Benjamin Banneker sat industriously carving cogs and gears out of wood. He pieced the parts together to create the complex inner working of a striking clock that would, hopefully, chime every hour. All he had to help him was a pocket watch for inspiration and his own calculations. And yet, his careful engineering worked. Striking clocks had already been around for hundreds of years, but Banneker's may have been the first created in America, and it drew fascinated visitors from across the country. In a show of his brilliance, the clock continued to chime for the rest of Banneker's life. Born in 1731 to freed slaves on a farm in Baltimore, Maryland, from his earliest days, the young Banneker was obsessed with math and science. And his appetite for knowledge only grew as he taught himself astronomy, mathematics, engineering, and the study of the natural world. As an adult, he used astronomy to accurately predict lunar and solar events, like the solar eclipse of 1789, and even applied his mathematical skills to land use planning. These talents caught the eye of a local Baltimore businessman, Andrew Ellicott, who was also the Surveyor General of the United States. Recognizing Banneker's skills in 1791, Ellicott appointed him as an assistant to work on a prestigious new project, planning the layout of the nation's capital. Meanwhile, Banneker turned his brilliant mind to farming. He used his scientific expertise to pioneer new agricultural methods on his family's tobacco farm. His fascination with the natural world also led to a study on the plague life cycle of locusts. Then in 1792, Banneker began publishing almanacs. These provided detailed annual information on moon and sun cycles, weather forecasts, and planting and tidal time tables. Banneker sent a handwritten copy of his first almanac to Virginia's Secretary of State Thomas Jefferson. This was a decade before Jefferson became president. Banneker included a letter imploring Jefferson to "embrace every opportunity to eradicate that train of absurd and false ideas and opinions" that caused prejudice against black people. Jefferson read the almanac and wrote back in praise of Banneker's work. Banneker's correspondence with the future president is now considered to be one of the first documented examples of a civil rights protest letter in America. For the rest of his life, he fought for this cause, sharing his opposition to slavery through his writing. In 1806 at the age of 75, Banneker died after a lifetime of study and activism. On the day of his funeral, his house mysteriously burned down, and the majority of his life's work, including his striking clock, was destroyed. But still, his legacy lives on. |
MIT_2087_Engineering_Mathematics_Linear_Algebra_and_ODEs_Fall_2014 | 4_SecondOrder_Equations.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: This week is my second pair of lectures. Last week the two lectures were about first order differential equations, and this week second order. Those are the two big topics in differential equations. Let me start with most basic second order equation. We see the second derivative and the function itself, and we don't see yet the first derivative term. This is the nice case, when I just have y double prime and y. In general, I-- I'm taking constant coefficients today. Because if the coefficients depend on time, the problem gets much, much harder now. So let's stay with constant coefficients, meaning we have a mass, for example, we have a spring. The stiffness of the spring is k, the mass is m, and the y, the unknown displacement, tells us the movement of the mass. The classical problem. You will have seen it before. Because you have an exam this afternoon, I wanted to start with things that-- they are about second order equations, but they're still close to the exam idea, particularly the idea of exponentials. With constant coefficients, that's the fundamental message. Exponentials in, exponentials out. But it's not quite so clear when we had first order, y prime equal ay, we knew that the exponent was a. The solution was e to the at. Now we've got second derivatives coming in, and it won't be so much e to the at type thing. Either the at was growth for a positive, decay for a negative. Now we're going to see oscillation. It's still exponentials, but oscillation. Things going up and down, things going around. Harmonic motion, you call it. Sines and cosines. And sines and cosines connect to complex exponentials. So that instead of e to the at-- so now oscillations-- they're going to be coming from e to the i omega t. In other words, instead of an a, we're going to have an i omega. Or, if we like to stay real, we can stay with cos-- cosine-- and sine. And actually, I've written two real guys there, so I better have two complex ones. And it will turn out to be plus or minus. There are two frequencies there. Plus i omega, minus i omega, and they turn into cosine and sine. So in this case, with no damping term, we can stay entirely real without creating any problems. We can work with cosines and sines. The first question is, what's omega. What is the frequency of oscillation. And of course, another similar picture would be a pendulum, a linear pendulum, swinging side to side, keeping time, because that frequency will stay constant. Always I'll start with zero on the right hand side. Just look at these equations. Constants there. I'm looking for solutions. And I'm looking for null solutions, looking for the natural motion of this spring, the natural up and down motion of this spring. Classical problem. Won't be brand new, but it's the right starting point for the full second order equation. It'll get a little complicated on Wednesday, when damping gets in there. The formula got a little messy, because you've got a mass-- you've got an m-- and a k, still, but you also will have a damping constant. Then complex numbers really come in. Here they're optional. So this is my equation to solve. 
Because we don't have a first derivative, a cosine will solve that. So let me look for-- I could look for exponentials. Maybe I should do that first, look for an exponential solution. Yeah, that's a good idea. And let me not jump ahead to know that the exponent has that i omega form. Let me discover it. So I look for no solutions-- because I have that zero there-- no solutions of the form e to the st, some exponent. Plug it in. That's the message with constant coefficients. Look for exponentials, substitute them in, discover what s will be. So let's just do that. This is the most basic step. For null solutions will be exponentials, I substitute into the equation. I get ms squared from two derivatives. It will bring s down twice. This is just ke to the st, and I'm looking for null solutions. Zero. No forcing. So this is undamped, unforced. Undamped, unforced. Natural motion. What do I do now? Plugged in an exponential, got this equation. And the beauty is that the exponentials cancel. An exponential is never zero, so I can safely divide by it. So I cancel those, and I get ms squared plus k equals zero. The key equation-- and it's so simple-- it's just we're doing algebra now. The calculus, the derivative we took when we plugged it in, but now it's an algebra question. And of course, solving that system is easy. There's no s term, no damping term. So the frequency, s, is-- put k on the other side, divide by n. s is-- s squared, let's say-- is k over m-- is minus k over m. Critical point. That tells me, with that minus sign there, that s is an imaginary number. A complex number has a real part and an imaginary part. In this case, all imaginary. No real part at all. It's natural to think-- s is the square root of that, so I'm going to write-- everybody writes-- s, the frequency s, is i omega. So if I plug that in, I have omega squared equal k over m. i squared and the minus 1 deal with each other. So the frequency omega-- here is the great fact-- square root of k over m. That's-- yes? AUDIENCE: What difference does having imaginary parts to answer affect the oscillation? PROFESSOR: To-- OK. Oscillation, just pure oscillation-- which is what we would see here with no damping-- is the frequency, e to the-- the solution-- the displacement, I could write-- the displacement up and down, y, will involve e to the i omega t, and e to the minus i omega t. We've got second order equation. Let me just go back to that key point. When we have second order equations, we look for two-- we expect and we want and we need two-- solutions. There will be two. I didn't put it here, and I should. s, the frequency, is plus or minus i omega, because in both cases when we square, it comes out right. So we get two frequencies, and here they are. So let's see how to answer your question. The presence of this i is only telling me that, essentially, I've got sines and cosines. That's really what-- when it's a pure imaginary number-- I would call that a pure imaginary number, there's no real part at all-- then equally cos omega t and sine omega t. I can now, if I want, go real. I can say, OK, these were the general null solutions. Let me put this down, then. The null solution-- I'm looking only right now at null solutions-- is some combination of e to the i omega t and e to the minus i omega t. That's what we got from plugging in e to the st, discovering that s was an imaginary number, and we got these guys. But equally-- equally-- yn is a combination of cos omega t and sine omega t. And maybe you'll like those better. 
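Collected as equations, the step just carried out reads as follows (a recap of the derivation above, nothing new assumed):

```latex
\[
m\,y'' + k\,y = 0, \qquad y = e^{st} \;\Rightarrow\; (m s^{2} + k)\,e^{st} = 0
\;\Rightarrow\; s^{2} = -\frac{k}{m}, \qquad s = \pm\, i\,\omega, \quad \omega = \sqrt{\tfrac{k}{m}},
\]
\[
y_{n}(t) = c_{1} e^{i\omega t} + c_{2} e^{-i\omega t} = C_{1}\cos\omega t + C_{2}\sin\omega t .
\]
```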
I think everybody practically likes those better. Do you see that these guys are the same as these guys? The c's are a little different because, well, we know that we can switch from one to the other. We remember that basic fact that e to the i omega t is cos omega t plus i times sine omega t. You're used to maybe seeing that omega t as theta, e to the i theta is cos theta plus i sine theta. And e to the minus i omega t, of course, is cos omega t minus i sine omega t. I hope you won't think I'm filling the blackboard with formulas, because I'm really just writing down-- well anyway, they're beautiful formulas. So if I have these guys, then I have these and vice versa. If I have cos-- how would you write cos omega t using the exponentials? I want to just see totally clearly that I can go back and forth between complex imaginary exponentials and cosines and sines. So how would I, I want to go in the opposite direction and write the cosine and the sine as combinations of these, just to show if I've got combinations of one, I've got combinations of the other. Combinations of these are the same as combinations of those. So what is cos omega t in terms of these guys? AUDIENCE: [INAUDIBLE] some of them divided by 2? PROFESSOR: Exactly. If I add those two, this part cancels. I've got two of these, so I have to divide by 2, as you say. It's a half of the first plus a half of the second. And how about sine omega t? Sine omega t is always slightly more annoying, because it's the one-- it's the imaginary part that brings in an i. What would be the same formula? How could I produce sine omega t out of that? Yes? AUDIENCE: The difference divided by 2i. PROFESSOR: Yes. If I take the difference, that'll cancel the cosines. So I'm going to take e to the i omega t minus e-- minus e to the minus i omega t. Take the difference. But then I've got 2i multiplying this sine. Up here I had a 2, but now I've got-- when I take the subtract, these i's are in there, so I divide by 2i. So this just tells me that I can go either way. Next time, we'll see what happens when there is damping and there are complex numbers instead of pure i omegas. We're golden here. We've found the great quantity with the right units. The right units of omega are 1 over time. Actually the units are radians per second, would be the typical appropriate unit. Radians per second. I'll use the word frequency for that, but there's another definition of frequency, cycles per second. I just want to think about steady motion around a circle. So this tells me how many radians per second. And if this is 2pi-- if omega happened to be 2pi-- then I would go once around the circle. If omega was 2pi, then when t reached one, I would be around the circle. Let me draw a circle in a minute. So there's a 2pi here hiding behind the word radians. And in many cases, you'll want also a definition in cycles per second. So f is omega divided by the 2pi, and that's in cycles per second. Full revolutions per second. And that's hertz. I think I misspoke last time in confusing these two, so let's get them straight here. There's no complicated math in here, it's just a factor 2pi, but of course that factor is important. So a typical frequency in everyday life would be like f, 60 cycles per second, 120pi radians per second. So I'm going around in a circle. Now I'm ready to have initial conditions. This connects, again, to the afternoon exam. We found the general solution with some constants, like here. Let's keep that real form. 
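Written out once, the conversions and the frequency conventions just discussed are the standard ones:

```latex
\[
e^{\pm i\omega t} = \cos\omega t \pm i\sin\omega t, \qquad
\cos\omega t = \frac{e^{i\omega t} + e^{-i\omega t}}{2}, \qquad
\sin\omega t = \frac{e^{i\omega t} - e^{-i\omega t}}{2i}, \qquad
f = \frac{\omega}{2\pi}\ \text{(cycles per second, hertz)} .
\]
```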
And now those constants get determined by the initial conditions. Conditions plural, because we have an initial position, like I stretch it-- maybe I stretch it and let go. Maybe I stretch the spring and then I let go. What happens? By stretching it, I'm giving it an initial displacement. And I'm giving it zero initial velocity, because I stretched it and just let go. Another possibility would be to strike it. If I hit that mass, that would be a different initial condition. What would be the initial condition if it's sitting there in equilibrium quietly minding its own business and I hit it? Then I've given it an initial velocity, with initial displacement zero. So those would be the two extreme possibilities. Pull it down, let go, or strike it when it's sitting in equilibrium. Anyway, we've got two initial conditions. You see why-- y double prime is showing up because essentially we've got Newton's Law. This is Newton's Law. Mass times acceleration is equal to minus ky-- that's the, with the minus sign, and the all-important minus sign, that's the acceleration. That's a force, sorry. Mass, acceleration, this thing with a minus sign is the force, and the force is pulling back. If y is stretching, the force is restoring. Let me just go ahead with what you know. The initial conditions. And I want to solve m y double prime plus ky equals 0. So I'm still talking about the unforced equation with given y of 0 and y prime of 0. Just think for a moment. Could you do that? This is the most basic second order equation. We know what the solutions look like. Let's do this one in a box, cosines and sines. We know what omega is. Omega had to be square root of k over m. Then the equation was solved. All I've got left is to get c1 and c2. All I have left is to match-- choose c1 and c2 to match the two initial conditions. So let me just do that. What are c1 and c2? At time zero, I have to match an initial displacement. So at time zero, this is a 1, cosine of zero is a one, and that's a zero. So at t equals 0, I have y of 0. The displacement matches c1 times cosine of omega times 0, which is a 1, plus c2 times 0. I'll put it in there plus c2 times 0. c1 cos 0 plus c2 sine 0. I've learned c1. And also-- what do I do next? I want to get c2. And where is c2 coming from? Now I would like to know what's the coefficient of the-- the initial conditions are supposed to determine that coefficient. It'll be that initial condition that determines it. y prime of 0. The initial velocity should match the derivative. OK, so what's the derivative? y prime. So the derivative of the cosine will be a sine. And that will disappear at t equals 0. The derivative of this sine will be a cosine with a factor omega. So I'll have y prime of 0 will be the c2 omega cos of 0. Which is what? That tells me c2. You could do all this without my pointing the way. I'm solving this equation. I have the solution in general form with two constants. Now I'm determining those constants, and cosine and sine just determine them perfectly because cosine is 1 and sine is 0 at the start. So we've got the answer. The solution is y of t is: c1 is y of 0, so y of 0 cos omega t, and c2 is-- now you'll notice, c2 is y prime at 0 divided by omega. So y prime of 0 divided by omega, times sine omega t. There we go. Finished. Finished. Unforced problem solved. Everybody in this room could get to that point. Let me make some comments about that. It's a combination of cosine and sine. They're both running at the same frequency, omega. 
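As a sanity check, here is a small numerical experiment comparing that closed-form solution with a direct ODE solve. This is only an illustrative sketch: it assumes NumPy and SciPy are available, and the values of m, k, and the initial conditions are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 2.0, 8.0                      # illustrative mass and stiffness
omega_n = np.sqrt(k / m)             # natural frequency sqrt(k/m)
y0, v0 = 1.0, 0.5                    # initial displacement and velocity

def spring(t, state):
    y, v = state
    return [v, -(k / m) * y]         # m y'' + k y = 0  =>  y'' = -(k/m) y

t = np.linspace(0.0, 10.0, 200)
numeric = solve_ivp(spring, (0.0, 10.0), [y0, v0],
                    t_eval=t, rtol=1e-9, atol=1e-9).y[0]
closed = y0 * np.cos(omega_n * t) + (v0 / omega_n) * np.sin(omega_n * t)
print(np.max(np.abs(numeric - closed)))   # tiny, up to solver tolerance
```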
I'm going to give a special name to that frequency, omega, this famous formula, all-important. Lots of physics in that formula. I call that the natural frequency, because the next step will be to drive the system by a driving frequency, which would be different from omega. So we need to-- we've got 2 omegas. Actually when I first wrote the book I thought, we've got to keep these two separate. Everybody has to keep them separate. My first attempt was to use little omega and big omega for the two. I concluded after looking at it for a while that it was better to be more conventional. People had figured out a good way to do it. And the good way is to call this the natural frequency and put a subscript, n. So all the omegas that you see on this board should be omega n. I can change them all, but let me just change it here. I'll change omega to omega n. So that's omega n. We've only got that one omega right now, because we don't have a driving term yet. So natural frequency has the advantage, which kind of made me smile, that the n stands for natural, and everybody calls it the natural frequency. And n also stands for null, and we're talking here about the null solution, because there's no forcing. So I could have subscripts on all the y's. Eventually I'll need subscripts on the y to separate what we've done. This is really yn of t, the null solution. Good? Now we could take one more step. This is a combination of cosine and sine. And we learned last time that that could be put in a polar form, but I don't plan to do this. Let me just say I could do it. This would be some amplitude, some gain-- maybe g for gain-- no, a for amplitude is good-- times-- what is this second optional form, which I'm just going to write here, say that we could do it, remember a little about it, but not make a big deal-- what is it I'm after here? I'm looking to write this combination of cosine and sine, which is two oscillations, a cosine curve and a sine curve, but with the same frequency. Then I can combine them into a single cosine, a single cosine of omega n t. But now what else have I got in this form? There's a phase shift, minus phi. Thanks. So there's a and phi, two constants, or there's y of 0, y prime of 0, two constants. Let me not write again the formula for a or for phi, I don't plan to do anything with it. It just could be done. In other words, what we've done so far is just to see that the single spring oscillates with the frequency omega n. That's really what we've done. A single spring oscillates with a frequency omega n. Saying that makes me think, let me look ahead to the linear algebra part of the course. So where is linear algebra going to come in? It's going to come in for a system of springs. When I have another spring. Can I draw another spring and another mass and another spring? Say six springs, six masses. Then-- and they could be different k's, different m's, or not. Then we've got six displacements-- six differential equations-- coupled together, because the whole system is coupled together. So what happens at that point? That's the point where linear algebra, where matrices are coming in. You want to see what's the point of matrices. It's not a separate course by any means. It's a most necessary part, because a single spring happens in reality but also systems today are coupled. Big systems, actually, there are many, many things. You have an electric circuit with thousands or tens of thousands of elements. You have a coupled system with many gears, many oscillations going on. 
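Returning for a moment to the polar form mentioned above, it is the standard amplitude-phase identity, written here only for reference:

```latex
\[
C_{1}\cos\omega_{n}t + C_{2}\sin\omega_{n}t = A\cos(\omega_{n}t - \varphi),
\qquad A = \sqrt{C_{1}^{2} + C_{2}^{2}}, \qquad \tan\varphi = \frac{C_{2}}{C_{1}} .
\]
```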
So we need matrices at that point. Can I even just add one more word about the language? When we had-- here we have a frequency of motion for one spring. What are we going to have for two springs or six springs? The motion will be a combination of six different frequencies. And so you'll see that it's a much more interesting, much more not so simple motion. A combination of six pure frequencies. And those frequencies are determined from the six eigenvalues of the matrix. I'm just using that word looking ahead. We will have a 6x6 matrix to describe the coupled system. That matrix will have six eigenvalues. It will tell us six natural frequencies, and our solution will be a combination of all six oscillations. Here, it's 1. Here it's 1. That spring is not there. So the problem we've solved now is the fundamental, basic problem, and I have to-- next step is forcing. I now want to add a force that drives the motion. In general, it could be any function of time. Calling it f of t. So that's what I'm going to put in now. But in reality, very, very, very often f of t is also a simple harmonic motion. It's also a cosine. But at a different frequency, at a driving frequency. So I'm going to-- the next equation to solve is to put in cosine-- let's stay real for now-- at another, driving frequency. At a driving frequency. And of course, it could have an amplitude. But let me take that amplitude as 1 to keep things simple. So now I'm talking about forced motion. Can we solve it? How can we solve this equation? Let me take out the 0 or-- take out the 0-- equals cosine omega t. With a different omega. If the two omegas were the same, if the driving frequency is the same as the natural frequency, the formulas have to be slightly adjusted. There's still an answer, but it's a case of resonance and you have to look separately. But let's say, no. Let's say omega d is different from omega n. How are you going to solve this? I have to think myself. How do I solve that. Let's start a fresh board. my double prime plus ky equals cosine of omega dt-- or often, I won't put the d. I don't have to put the d anymore. Omega will now represent the driving frequency, because I've got omega n, the natural frequency, as the square root of k over m. What am I looking for now? I found the null solution. I'm looking for a particular solution. I'm trying to keep the whole thing systematic. Null solutions are now dealt with. Took a little more time than just ce to the at for first order equations, because we've now got a two-dimensional collection of null solutions, but we've got them. Now I'm taking a forcing term. So I'm looking for a particular solution. I'm looking for any solution to this equation. I'm looking for a particular guy. What do you suggest? Again, it's a neat problem because of that particular forcing term, a cosine, an oscillation. So I'm going to look for yp is some gain times [INAUDIBLE]. This is the next and, fortunately, a highly, highly important case, in which the particular solution has the same form as the forcing term. It's just a multiple of the forcing term. That's best possible. That's best possible, is to have the forcing term reveal to me-- the forcing term immediately reveals a particular solution. Once I know what I'm looking for, what do I do? Substitute it in. So I substitute that particular solution in here. And notice everything is going to be a cosine, mgy double prime. So what do I get when I plug this in for that guy? 
I want to-- you can do it quickly, but let's stay together and do it together, because we can with this case. What happens when I plug that in and take its second derivative? I get the g. And then what's the second derivative? AUDIENCE: [INAUDIBLE]. PROFESSOR: We have a negative, because two derivatives of the cosine bring out a minus omega d, will come out twice. And I'll keep writing omega d for a moment, but then I'll give up on the d. Cosine of omega dt. And then k times this, g cosine of omega dt, equals the forcing term, cosine of omega dt. It worked. This is one of that small family of nice functions where the solution has the same form as the function. Actually that list of what you could call best possible forcing functions, where the form of the forcing function tells you the form of the solution. That's a small family. But it's fortunately a very important one. Cosines, sines are included, and we'll see all the other guys that are included. Most forcing functions we couldn't just assume that the solution had the same form. It's only these nice ones. But cosines are nice. So what do I do now? Everything is multiplying cosines, so I just look at-- I have minus m omega squared g-- g is going to factor out-- minus m omega squared and a k times g. Let me remove that off for the moment. I'm canceling cosine omega, so my right hand side is 1. That's it. We looked for a solution with that simple format, and we found it. Now we know g, the gain. So the solution is-- this is g is 1 over k minus m omega squared times cosine of omega t. And omega is omega d. Omega is omega d now. Does that look good to you? This is the periodic solution going at the driving. This is what the-- this g is the gain, the driving force. The driving force is 1 times cosine omega d, then that 1 gets multiplied by this number. This is, you could say, the amplifying factor. I guess frequency response would be the right word. Can I bring in that word, response, again? Response is a word for a solution. It's what comes out. When the input is this, a pure frequency, the output, the response, is a pure frequency-- same frequency, of course-- multiplied by that. That is the frequency response factor. Notice we could write that a cool way, by remembering that omega squared-- that's wrong as it stands. What have I forgotten in writing k minus m omega squared in that denominator? I forgot a subscript, which is n. Which is n. This is n. This is-- is that right? No. Is it? Or is it d? Maybe I didn't make a mistake. Is it d? You're seeing a kind of critical moment. Which is it? AUDIENCE: [INAUDIBLE]. PROFESSOR: It's d, isn't it? Yeah. It's d. Sorry. It's d. But when I see this and remember what omega n squared is-- omega n squared is k over m-- I can see that I can get an omega-- I can use this in here to make it even more interesting. So it'll be equals-- let me get this box ready-- cosine of omega dt divided by-- now I just want to rewrite that. I want to take out an m. I'm going to write this as m times k over m. m times k over m. Safe to do that. Now I have a factor, m, that I can bring out. And what is m multiplying? That's the neat thing. What is m multiplying? k over m is-- omega n squared. And this is minus m omega d squared. Minus omega d squared. That's pretty terrific. The gain is this multiplier, 1 over m, times that. And we see that the gain is bigger and bigger when the frequency is near the natural frequency. And of course everybody has seen the pictures of that bridge-- wherever the heck was that bridge? 
Somewhere in the Northwest, I think. You know the bridge I'm talking about? AUDIENCE: [INAUDIBLE] Tacoma, Washington. PROFESSOR: Yeah, I think Tacoma, that's right. The Tacoma Narrows Bridge. Right. Tacoma, Washington. Where the natural-- when you build a bridge, you've built in a natural frequency. And then when traffic comes, it's doing a driving frequency. And if you haven't got those two well-separated, you're in trouble, as this shows. Or similarly, when an architect designs a skyscraper, there's going to be a frequency of oscillation, a natural frequency, at which that skyscraper swings. And then there's wind. Actually I talked yesterday to the-- by chance, the math department is not a very party-going department, but once a year we let it out. And so we had our party at Endicott house out in the suburbs, and all the usual people-- that's all the professors I know-- came, of course. But also, there was a really cool person. He's the key architect for Building 2. You've noticed that Building 2 is under wraps and we're moved out. And we move back in January 2016. So we've been out a year and a quarter and we have another year and a quarter to go. It's going to be cool. And you may say, well, who cares. But the key point is Building 1 is next, and Building 1 is going to have the same cool addition of a fourth floor. We're putting in a fourth floor, which all the-- Buildings 3, 4, 5, 6 go up to four, but Buildings 2 and 1 stopped at the third floor. But there's a lot of space up there under the roof. And they've discovered they could put a fourth floor up there. Here was one interesting thing, though. These buildings that we're sitting in are sinking. You know that MIT was built on marshy land, just the way the Back Bay-- which is like the greatest idea in the history of Boston, the Back Bay and the dam that makes the Charles River beautiful-- was built by bringing in trainloads of earth from Needham. So whole mountains and hills in Needham have come into Boston and come here. So anyway, we're sinking. You may say something like 3/16 of an inch a year is not something to worry about, but now it's been more than 100 years that these buildings have been here. Anyway, not good to sink faster. So the weight had to be controlled. So by putting in a fourth floor, that put in a lot a new weight, and faster sinking, probably by some formula here. Probably there. So the weight had to get subtracted out. It turns out that the ceiling, the roof to Building 1-- Building 2 and no doubt to Building 1-- was more than a foot thick of concrete. Really heavy. And some more asbestos probably, which we don't want to think about. That's much reduced. A whole lot of weight came out of the roof. I think they probably did the calculation right, so we won't get rain coming through, but it won't weigh as much and the fourth floor is acceptable. All this was a big decision by MIT to pay for that, or to raise money and pay for the new fourth floor. But it's going to be fantastic. And it'll be fantastic in Building 1 also. So all that is discussion of that formula. That's the frequency response, this factor to frequency, omega d, or omega, is this factor. I guess I should say something about resonance. What happens when that formula breaks down? When the driving force equals the natural frequency, then we're dividing by 0, and something is different. The formula isn't right anymore. What enters in the formula-- let me just tell you what enters, and then we'll see it in a simple example. 
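Before that resonant case, here is a quick numerical look at the gain factor derived above, 1 over m times (omega n squared minus omega d squared). It is only a sketch with made-up values of m and k; the point is how the factor grows as the driving frequency approaches the natural frequency.

```python
import numpy as np

m, k = 1.0, 4.0
omega_n = np.sqrt(k / m)             # natural frequency, here 2.0

def gain(omega_d):
    # frequency response factor 1 / (m (omega_n^2 - omega_d^2))
    return 1.0 / (m * (omega_n**2 - omega_d**2))

for omega_d in (0.5, 1.5, 1.9, 1.99, 1.999):
    print(omega_d, gain(omega_d))    # grows without bound as omega_d -> omega_n
```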
When I have this repeated thing, two things are equal, what tends to happen is a factor, an extra factor, t, appears. So an extra factor, t, will appear in the case omega n equal omega d. The solution, y, will be some factor, I'll still call it g-- no, I don't want to call it g, let me call it a. There'll be a factor, t, times cosine of omega t. So in this case, there's really only one frequency. We're driving it. So the oscillation grows. As you know, when you push a child on a swing, the whole point of pushing that child is to push at the natural frequency. You wait for this swing to swing back naturally and you drive it again with that-- at that-- maintain that frequency. And of course you see the amplitude-- the child swing higher and higher. Presumably you stop pushing before disaster for that child. But that's a case of resonance. And it's what happened in the Tacoma Narrows Bridge, and there was nothing to-- nobody stopped, traffic just kept coming. The movie is amazing, because there's one car that shows up after it's already swinging wildly, some crazy person still driving across. And you might think, OK, that's ancient history. But you know the bridge in London, the pedestrian bridge, the Millennium Bridge-- it's just a walking bridge across the Thames-- a big feature of modern London, and it had the same problem. It was swaying. People could not walk across. They couldn't keep their balance. So they had change it. So it's not trivial to anticipate. So now we've solved it-- we've solved the the null equation with no force, and we've solved the driving force equal to a cosine. And of course, we could do a sine. What other driving force should we do? I think we should do a delta function. I think we have to understand the fundamental solution is the case when, if we can solve it-- there's always this general rule, if we can solve with a delta function, that will give us a formula for every driving force, because every function is some combination of delta functions. So if we could do it with a delta-- really the great right hand sides are-- well, cosines and sines I'll include as great right hand sides. Those are the exponentials in disguise. So the great right hand sides are really exponentials at different frequencies and delta functions. Delta of impulses. So now I want to find the impulse response. That's the next-- that's really a job. At this point, in these last 20 minutes when I solve my double prime plus ky equal a delta function-- well, what I was going to say was I'm now taking you to something that you won't see on the exam this afternoon. But maybe you will. Delta function, right hand side. I haven't seen it yet. Or I haven't looked recently. You won't see second derivatives, I guess. So what is it? So now this is of the form with an f of t, a very special f of t. And that very special f of t makes that extremely easy to solve. That's really my point here, is it it's going to be a cinch to solve that, and we practically have done it already to solve that with a delta function. And the reason is sort of physical. We have here our spring. And what am I doing with that force? I'm hitting the mass. I'm striking the mass. Let me say, and I'll write it on the board, the point I want to make about this. That point is that this equation with a delta function force starting from 0-- say, y of 0 equal y prime of 0 equals 0, let's give it starting from rest-- it starts from rest by hitting it. And that hit, that impulse, is in no time at all. It's not stretched out. 
It's hit over one second. So this has the same solution. This is the beauty. This is why we can solve it so easily. Same solution as-- let me write it and see what you think-- as my double prime plus ky equal 0. We know how to solve those. With-- it's still, when I hit it-- when I hit it, what happens in that split second? In that split second, it doesn't have time to move. It doesn't move. It still has y of 0 equals 0. But in that split second, we've given it a velocity. We've given it a velocity. And that velocity will be y prime. The initial velocity is 1-- because here I had a 1-- over an m. We have to have the units right. So here's a point, and we will stay with it. We'll come back to this point next time. Maybe the first thing for you to take in is the fact that it's such a nice thing. We have this equation with this mysterious delta function, and I'm saying that the solution is the same as this equation with no force, but starting from a mass. I'm tempted to take an example to make this point. Let me take an example where the whole thing is a lot simpler. y double prime equal delta of t. I've taken the spring away, so the k is gone, the mass is 1. What's the solution to y double prime equal delta of t? If we concentrate on this example, we're good for today. So my point is the same solution as-- now, what's the other problem? I'm just repeating here, but making it simple by taking k equals 0 and m equal 1. So the same solution as y double prime equals 0, with y of 0 equal what, and y prime of 0 equal what. I just wanted to repeat here what I've said there, and then we'll solve it and we'll see that it's all true. If I look for a solution to y double prime equal delta starting from 0-- this was starting from 0-- if I say that's the same as this, what should y of 0 be here? AUDIENCE: Zero. PROFESSOR: Zero, right. It hasn't had time to move. It hasn't had time to move. But in that instant, what happened to y prime? It jumped to 1. That's right. That's right. Exactly. Now just solve that equation for me. Solve this example for me. Suppose y double prime-- yeah. Here we go. What's the solution if y double prime is 0? What are the solutions to y double prime equals 0? AUDIENCE: [INAUDIBLE]. PROFESSOR: Constant and linear. a plus bt, right, have second derivative 0. Now what's the solution that starts from 0 that kills the a and has slope 1? What's the answer to that question? AUDIENCE: [INAUDIBLE]. PROFESSOR: t. t. The solution to this equation is a ramp. It's zero everything in this course, is zero up until time 0. At time 0, in this example, all the action happens. Everything happens. And what happens is it gets a velocity of 1, and the solution is y equal to t. y is 0 here, of course. At that point, that's the key point, t equals 0, right there-- it gets a slope. We don't have a step function. There's no jump in y. The jump is in y prime, the y prime the velocity jumped from 0 to 1. That's exactly-- I think when I introduced delta functions and drew a picture. What is the derivative, the first derivative, y prime, for that guy? Let's just review, because this is what we've seen already. The first derivative is-- AUDIENCE: [INAUDIBLE] step. PROFESSOR: A step. And the second derivative is delta. The second derivative of this is the first derivative of a step. The derivative of a step is 0 everywhere except at the step, at the jump when it jumps to 1. So that's the solution in this example. And now to end the lecture, let's solve it in this example. 
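For reference, the ramp example just worked out, written as formulas:

```latex
\[
y'' = \delta(t), \quad y(0) = 0, \; y'(0) = 0 \;\Rightarrow\;
y(t) = \begin{cases} 0, & t < 0 \\ t, & t \ge 0 \end{cases}
\quad\text{(a ramp)}, \qquad
y'(t) = \text{step}(t), \qquad y''(t) = \delta(t).
\]
```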
Again, let me just say-- why do I like this forcing term? Mathematically, I like it because, if I can solve that guy-- as we're doing, we are solving it-- if I can solve that one, I can solve all forces. Over here, I could solve when I had a very happy f of t, a perfect f of t, where I could guess the answer and push through. Now with a delta, I can build everything out of delta functions. That's why I like it mathematically. Why do I like it physically? Because it's a very physical thing to have an impulse. That happens in real time, in real things. And by the way, let's just, before I write down any more formula, what would-- I would like to be able to solve it for a step function. [? Heavy thud. ?] I would like to be able to do that one. I'm going to have to erase something, or I'll write it right above just for the moment. I would also like to solve my double prime plus ky equal a step function. So I would call the solution, y, the step response. And what would be a step function start? A step function start would be like turning a switch. Suddenly things happen. That's forcing by a step, so I'm looking for the step response. And how do you think these two are related? I look at the relation at the right hand sides. What's the relation of this step to the delta? Yeah? AUDIENCE: One's a derivative. PROFESSOR: One's a derivative of the other. And we've got linear equations. So the right hand sides. The step response, y step, and the delta response, y delta-- I'll use a different letter for this because it's so important. One is the derivative of the other. The great thing about linear equations is we have linear equations, differentiation, integration. Those are linear operations. The step function is just like a steady-- anyway. I was going to-- I won't-- is the integral of the delta. Step function is the integral of the delta, so the step response is the integral of the delta response. I guess to finish the lecture, why don't we solve this problem, which looks tricky because it's got a delta. Instead, we'll solve this problem, which doesn't look tricky at all. It's exactly what we started the lecture with. Zero forcing and some initial conditions. So let me just finally make space for the big deal from today's lecture, which would be the fundamental solution with a force by a delta. I'm just going to write down the answer when you tell me what it is. What's the answer to that? What's the solution to this second order constant coefficient unforced equation with those initial conditions? We probably had it here. I may just have erased it. But now let's get it. So y is y delta. This is the impulse response. y of t-- and I'll give it later another name. So here's a perfect review question. What's the solution to this problem? Everybody remembers-- what are the solutions, what's the general form for the solution to the equation? I'm reviewing today's lecture. The solution to that equation looks like what? AUDIENCE: [INAUDIBLE]. PROFESSOR: It's a cosine and a sine, right. And then how much of a cosine do we have and how much of a sine do we have? The initial condition will tell me how much of a cosine we have. And what's the answer? None? No cosine. This condition, this initial velocity, will tell me how much of a sine we have, because the sines are the things that have initial velocities. So it would be a sine of-- the sine of what? Square root of k over m, right? omega nt, right? And what's the number? What's the number so this has the right-- let me write again what I want. 
I want y prime at 0 to be 1 over m. What's the number that I put in there? I've got something, its derivative, at zero. This is some number-- I'll call it little a for the moment, but I want to find out what it is. Are we right? Yeah? I think we're right. Yeah. The derivative is at zero, so I just plug that into here, take the derivative at zero-- of course that makes it a cosine, which will be 1-- but it also brings out that factor. So a times-- well, a times that factor will be 1 over m, and that tells me what a has to be. Well, that factor is omega. So a is-- this is omega a equals 1 over m. So a is 1 over m omega. And that's omega. Sorry I'm erasing stuff which I-- this is the formula I'm after. Sine omega t over m omega. I think we're good. Are we? Yeah? Yeah. I'll come back to this in-- Wednesday is my day to move to damping terms. I've intentionally stayed with undamped equations here, because you're thinking about that level of equation. Damping is going to bring in new stuff, and that should wait till Wednesday. Shall I recap today? I'll just recap today, and then we're done. Today started with the unforced equation. We solved it by assuming-- by not thinking ahead, just assume I have an exponential, because the beauty of exponentials is, when I plug it in, the exponential cancels. And that told me that s was pure imaginary. It told me that it had this form, e to the i omega t. And there were two s's. Two possible s's, plus and minus. I get to make a little comment about this example here. What was omega? What's the natural frequency in this problem? What's the natural frequency here? I guess this is a case where-- what's the natural frequency? I guess this is a case where m is 1 and k is 0, is that right? This does fit into that pattern, but it's a little special. This is a case where m is 1 and k is 0. So what's the natural frequency in this? AUDIENCE: Zero. PROFESSOR: Zero. Zero. This is a crazy case of resonance. It's a case in which the natural frequency and the driving frequency, say in this-- I'll have to do it here-- this simplest of all equations is, in a way, special. It's a case when the natural frequency is zero and the driving frequency is zero and they're equal. And what happens with resonance? What's the new formula, the new term that comes in with resonance? It's t. You saw it happen for this example, and we didn't have to use the word resonance. We knew that we had a ramp. We just used the word ramp, not resonance. But this is a case of resonance. When omega n is zero and omega d is zero. And the factor t up here. Anyway. Just that small comment there. And now, just going back to the recap. The recap was, we tried exponentials. We learned that they were pure oscillations. We realized that we could do cosines and sines instead, and we did. And we took off. We got the formula. Then of course the-- so this is section 2.1 of the book. And it goes through all those steps carefully. Section 2.2 of the book tells us about complex numbers, and section 2.3 brings damping in. So that's what's coming next time. So the recap again. We found the null solution, we found a particular solution-- oh there's just one comment I want to make, and then I'm done. Where was our particular solution? Yeah. This was our particular solution. Here's my comment. Here's my comment. Suppose I want to solve this basic equation starting from a given y of 0 and a y prime of 0. I'm going to do it in two parts, I think. I've got the null solution, and I've got this particular solution. Now here's my point. 
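Before that last point, here are the two responses just discussed, collected as formulas: the impulse response found above, and the step response obtained from it by integration (using the fact that m times omega n squared equals k).

```latex
\[
m\,y'' + k\,y = \delta(t):\qquad
y_{\delta}(t) = \frac{\sin\omega_{n}t}{m\,\omega_{n}}, \qquad t \ge 0,
\]
\[
m\,y'' + k\,y = \text{step}(t):\qquad
y_{\text{step}}(t) = \int_{0}^{t} y_{\delta}(\tau)\,d\tau
= \frac{1 - \cos\omega_{n}t}{m\,\omega_{n}^{2}} = \frac{1 - \cos\omega_{n}t}{k}.
\]
```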
If I want to get y of 0-- how shall I say this. You can't just put together-- it's an easy mistake to make-- solve the null equation with the initial conditions and then add in the particular solution. You'd think, I just followed all the rules. But this particular solution that you added in has a-- at t equals 0, it's not zero. So you have to change. So the correct thing, the correct yp plus yn-- let me make that point. Just a warning. So in words the warning is, remember that the particular solution has some initial condition-- in that case, g-- and then that is going to affect the right null solution. So again, y is y null plus y particular-- plus y of particular-- so it's some c1 cos omega nt plus some c2 sine omega nt plus this particular guy, g, cosine of omega dt. All correct. All correct. But now, put in the initial conditions. y of 0 is given. And what do I get on the right hand side when I put in t equals 0? I get c1 here. What do I get when I put t equals 0 in there? Nothing. What do I get when I put t equals 0 in here? g. So it's not c1 equal y of 0 anymore. That's the easy mistake that I'm correcting. When you put in this particular solution, it has an initial value. That initial value is going to come in here. So c1, then, the correct c1 is y of 0 minus g. End of story. Just don't be too quick to just add the two pieces and think you can do them completely separately, because you're putting them together. And then you have to put them together in the initial condition. |
MIT_2087_Engineering_Mathematics_Linear_Algebra_and_ODEs_Fall_2014 | 3_FirstOrder_Equations_continued.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Monday's lecture was all linear equations. And I thought I would start today with nonlinear equations, still first order. And we can't deal with every nonlinear equation. That's too much to ask. These are going to be made easier by a property called "separable." So these will be separable nonlinear equations. And let me start with a couple of examples and then you'll see the whole idea. So one example would be the simplest nonlinear equation I can think of, with a y squared. So how to get there? Here's the trick. This is the separable idea. You're going to see it in one shot. We can separate, put the Ys on one side and the Ds on the other. So I write this as dy over y squared equal dt. I put the dt up and brought the y squared down. So now they're separated, in a kind of hookie way with infinitesimals. But I'll makes sense out of that by integrating. I'll integrate both sides. I'll integrate time from 0 to t. And I have an initial condition, y of 0, always. And since this one is about y, when t starts at 0, this guy starts at y of 0, up to, this ends at t, so this ends at y of t. OK. Now the point is, also the problem was nonlinear, we've got two separate ordinary integrals to do. And we can do them. We can certainly do the right hand side. I get t. And on the left hand side, what do I get? Well, that maybe I better leave a little space to figure out this one. But the point is we can integrate 1 over y squared. And I guess we get minus 1 over y. So I get minus 1 over y between y of 0 and y of t. In other words, I'm getting let's see, so what the right, the derivative of the integral of 1 over y squared is minus 1 over y because I always check the derivative gives me that back. So now I'm ready to plug-in those limits. So I'll do the bottom limit first because it comes with a minus sign, canceling that minus, 1 over y of 0 minus 1 over y of t. Got it. And that equals the other integral, which is just t. So that's the answer as it comes directly from integration. And we can do more. You can see that finding the solution when these things are separable has boiled down to two integrals. And we could have a function of t here, too. And that would be allowed, a function of t multiplying this guy, because then I would leave the function of t on that side. And I would have to integrate that. And I would bring the y. You see, I've just separated the y. In general, these equations look like dy, dt is some function of t divided by some function of y. Maybe the book calls the top one g, I think, and the bottom one f. And everybody in this room sees that I can put the f of y up there. I can put the dt up there. And I've separated it. OK. So that's sort of the general situation. This is a kind of nice example, nice example, dy, dt equals y squared. Can we just play with this a little bit? Let me take y of 0 to be 1, just to make the numbers easy. So if y of 0 is 1, then I have, I'll just keep going a little bit. You do have to keep going a little bit because when you finish the integral right there, you haven't got y equal. You've got some equation that involves y, but you have to solve for y. 
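The general separable recipe described above, written out once (using the book's names g for the top and f for the bottom, as mentioned); the last step, as in the example, is to solve the resulting relation for y(t):

```latex
\[
\frac{dy}{dt} = \frac{g(t)}{f(y)} \;\Longrightarrow\; f(y)\,dy = g(t)\,dt
\;\Longrightarrow\; \int_{y(0)}^{y(t)} f(y)\,dy = \int_{0}^{t} g(s)\,ds .
\]
```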
So I have to solve that equation for y. Let me just do it. So how would I solve it? And let me take y of 0 to be 1. So now, if I just write it below, I'm at 1 minus 1 over y equals t. Good? So I'm going to put the 1 over y of t on that side and the t on that side. So if I just continue here, I've got 1 over y of t on this side and, do I have 1 minus t on that side? Yeah. Looking good. So the solution starting from y of 0 equal 1 is y of t equal 1 over 1 minus t. You could do that. You could do that. And I can always, like, mentally I check the algebra at t equals 0. That gives me the answer, 1. But let's step back and look at that answer. I mean, that's part of differential equations is to do some algebra, if possible, and get to a formula. But if we don't think about the formula, we haven't learned anything. Right there, yes. Good. So what happens? I want to compare with the linear case that was like e to the t. This was y prime equal y, right? And that led to e to the t. Y prime equals y squared leads to that one. So first observation. I haven't got exponentials anymore in that solution. Exponentials are just like perfection for linear equations. For nonlinear equations, we get other functions. Professor Fry had a hyperbolic tangent function in his first lecture. Other things happen. OK. Now, how do those compare if I graph those? It's just like, why not try? So they both started at 1. And e to the t went up exponentially. E to the t. And, I don't know, we use the word exponential. In our minds, we think, that's pretty fast growth. I mean, that's the common expression, grew exponentially. But here, this guy is going to grow faster because y is going to be bigger than 1. So y squared is going to be bigger than y. That one's going to grow faster. Faster than exponential. This has the exponential growth. Pretty fast. Polynomial, of course, some parabola or something would be hanging way down here, left behind in the dust. But this 1 over 1 minus t, that's going to grow really fast. And what's more, it's going to go to infinity. So that y prime equal y squared, the solution to that doesn't just-- e to the t goes to infinity at time infinity. At any finite time, we get an answer. Eventually, at t equal infinity, it's gone above every bound. But this one, 1 over 1 minus t is what I want to graph now. I believe that that takes off and at a certain point, capital T, it's going to infinity. It's blown up. So it's blow up in finite time. Blow up in finite time. And what is that time? What's the time at which the y prime equal y squared has taken off, gone off the charts? T equal-- AUDIENCE: 1. PROFESSOR: --1. Because when t reaches 1, I have 1 over 0, and I'm dividing by 0, and so that's the blow up. Finite time blow up. OK. So this can happen for some nonlinear equations. It wouldn't happen for a linear equation. For a linear equation, exponentials are in control. OK. So that's one nice example. Oh, another nice thing about that example. Well, I say nice if you're OK with infinite series. I just want to compare. The book mentions the infinite series for these guys, because an old way to solve differential equations is term-by-term in an infinite series. It's sort of fun to see the two series. Well, because they're the two most important series in math. Actually, they're the two series that everybody should know. The power series, Taylor series-- whatever word you want to give it for those two guys. So let me do them. E to the t, I'll put that one first, and 1 over 1 minus t. 
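Before the series, a quick numerical check of that blow-up, as a sketch (NumPy and SciPy assumed; the integration stops short of t = 1, where the solution is infinite):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return y**2                          # y' = y^2 with y(0) = 1

t = np.linspace(0.0, 0.9, 10)            # stop before the blow-up time t = 1
sol = solve_ivp(rhs, (0.0, 0.9), [1.0], t_eval=t, rtol=1e-9, atol=1e-9)
print(np.max(np.abs(sol.y[0] - 1.0 / (1.0 - t))))   # matches 1/(1 - t) closely
```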
These are the great series of math. Shall I just write them down and sort of talk through them? Because this is not a lecture on infinite series by any means. But having these two in front of us, coming from these two beautiful equations, y prime equal y squared and y prime equal y, I can't resist seeing what they look like this way. So e to the t, do you remember e to the t? It starts at 1. What's the slope of e to the t? At t equals 0. So I'm doing everything-- this series is going to be, both of the series are going to be, around t equals 0. That's my, like, starting point. So this e to the t thing has a tangent. It has a slope there. And what's the slope of e to the t at t equals 0? AUDIENCE: 1. PROFESSOR: 1. Its derivative. The derivative of e to the t is e to the t. The slope is 1. So that tangent line has coefficient 1. That's how it starts. That's the linear approximation. That's the heart of calculus, is this. But we're going to do better. We're going to get the next term. So what's the next term? That gave us the tangent line. Now I'm going to move to the tangent parabola. So the parabola has got another term and is still going to be below the real thing. Can I squeeze in the words "line" and "parab," for "parabola?" Parabola has bending. I'm really explaining the Taylor series in what I hope is a sensible way. Here is the starting point. This has the slope. The next term has the bending. The bending comes from what derivative? What derivative tells us about bending? Second derivative. Second derivative tells us how much it curves. Well, the second derivative of e to the t is still e to the t. So the bending is 1. The bending is also 1. Now that comes in with a factor of a 1/2. There is the tangent parabola. And you will see what these numbers become. Let me just go to, the third derivative would be responsible for the t cubed term. And its coefficient would be 1 over 3 factorial. So 2 is the same as 2 factorial. 3 factorial is 3 times 2 times 1, which is 6. So the numbers here go 2, 6, 24, 120, whatever the next one is. 720 or something. They grow fast. So that series always gives a finite answer. It does grow with t. But it doesn't spike with t. Now compare that famous series. And of course, this is 1 over 1 factorial, everything consistent. Compare that with the series for 1 over 1 minus t. That's the other famous series that they learned in algebra. I'll just write it. That's 1 plus t plus t squared plus t cubed plus so on, with coefficient 1. This had 1 over n factorials. Those make the series converge. These don't have the n factorials. This is 1, 1, 1, 1. And, well, I could check that formula. But do you remember the name for that series? 1 plus t plus t squared plus t cubed plus so on? Algebra is taught differently in many high schools now. And maybe that never got a name. I guess I would call it the Geometric series. Geometric series. And you see, it's beautiful. It's the other important series. But it's quite different from this one because, what's the difference about this series? Yeah? AUDIENCE: It goes to infinity. PROFESSOR: It's a-- AUDIENCE: It goes to infinity. PROFESSOR: It goes to infinity. But where? At what time? At what value of t is this sum going to fall apart? Blow up? At t equal 1. When I have 1 plus 1 plus 1 plus 1, I'm getting infinity. So this blows up. And of course, we see that it should because this blows up. Left side blows up at t equal 1, the right side blows up at t equal 1. 
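The two series just put on the board, side by side for reference:

```latex
\[
e^{t} = 1 + t + \frac{t^{2}}{2!} + \frac{t^{3}}{3!} + \cdots \quad\text{(converges for every } t\text{)},
\qquad
\frac{1}{1 - t} = 1 + t + t^{2} + t^{3} + \cdots \quad\text{(converges only for } |t| < 1\text{)}.
\]
```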
Where the exponential series, which is the heart of ordinary differential equations, never blows up because of these big numbers in the denominator. OK, I'm good for this first simple example, y prime equals y squared. It has so much in it, it's worth thinking about. I'm ready, you OK for a second example? A second important separable equation. I'm going to pick one. So I'm going to pick an equation that starts out with our familiar linear growth. This could be, you know, last time it was growth of money in a bank. It could be growth of population. The number of, to a sum first approximation, the rate of growth of the population comes from, like, births minus deaths. And with modern medicine, births are a larger number than deaths. So a is positive, and that grows. But if we're talking about the, I mean, the United Nations tries to predict, everybody tries to predict, population of the world in future years. And so this could be called the Population Equation. But just to leave it as pure exponential is obviously wrong. The world can't grow forever. The population can't grow forever. And the, I guess I hope it doesn't grow like 1 over 1 minus t. So this is at least a little slower. But somehow competition for space, competition for food, for oil, for water-- which is going to be the big one-- is in here. Competition here, of people versus people, a reasonable term, a first approximation, is a y squared, with a minus, is a y squared and with some coefficient. That's a very famous equation. A first model of population is it grows. But this is a competition term, y against y. And so, the same would be true if we were talking about epidemics. That's a big subject with ordinary differential equations, epidemiology. Or say, flu. How does flu spread? And how does it get cured? So partly, people are getting over the flu. But then y against y is telling us how many infected, how many new infections. So we would like to solve that equation. And it's separable. I can do what I did before, dy over ay minus by squared equal dt. And I can integrate, starting from year of 0. Well, why don't we start from year 2014, with the population y at now-- the present population? That would be a model that the UN would consider using. That other people with very important interest in measuring population and measuring every resource would need equations like this. And then they would put on more terms, like a term for immigration. All sorts, many improvements have to go into this equation. Let me just look at this as it is. Well, I've got two choices here. Well, it's this integral that I'm looking at. That is a doable integral. It's the type of integral that we saw in the Rocket problem. The Rocket problem was more constant minus. This was a drag term, when we were looking at rockets. And this was a constant, say, gravity. So it was still a second degree. Still second degree, but a little different. This has the linear in second degree terms. If you look up that integral, you'll find it. Or there's a systematic way to do it. That's in 1801, I guess, called partial fractions. It's not a lot of fun. I don't plan to do it. It's in the book. Has to be because that's the way you can integr-- you can integrate polynomials over polynomials by partial fractions. That's what they're for, but there's a neat way to do this one. There's a neat trick that Bernoulli discovered to solve that equation, to turn it into a linear equation. And of course, if we can turn it into a linear equation, we're on our way. 
So the neat trick is let z be 1 over y. You can put this in the category of lucky accidents, if you like. So now I want an equation for z. So I know that dz, dt if I take the derivative of that, that's y to the minus 1. So it's minus 1 y to the minus 2 dy, dt. That's the chain rule. Take the derivative of 1 over 1, you get minus 1 over y squared. Multiply by the derivative of what's inside. That's the chain rule. OK. So I plan to substitute those in here. So dy, dt, let's see. Can you see me? You can probably do it better than me. So dy, dt is minus. I'll bring that up. Dy, dt I'm going to put-- I hope this'll work all right-- for dy, dt, I'm going to put in dz. Using this, I'm going to put minus y squared dz, dt. Did that look right? I don't think I'm necessarily doing this the most brilliant way. But dy, dt-- I put this up here and I got that-- equals ay. So that's a over z. Oh, y Is 1 over z. So get this, I want all Zs now. So that's this part. And ay is over z minus by squared is minus b over z squared. Would you say OK to that? I've got Zs now, instead of Ys. I just took every term and replaced y by 1 over z. Y is 1 over z and dy, dt I can get that way. OK. Yeah. Now what? Now look what happens, if I multiply through by z squared or by minus z squared. Let me multiply through by minus z squared. I get dz, dt. Multiplying by minus z squared gives me a minus az. And what do I get when I multiply this one by minus z squared? AUDIENCE: Plus b. PROFESSOR: I get plus b. Look what happened. By this, like, some magic trick. You could say, all right. That was just a one time shot. But it was a good one. We ended up with a linear equation for z. A linear equation for z. And we solved that equation last time. So let me squeeze in the solution for z, and then elsewhere. So what was the solution for z of t? It was some multiple of, no, yeah. This is perfect review of last time. We have a constant times z. And so that's going to go into the exponential. This will be the, it's a minus a, notice. That will be the, what part of the solution is that one called? That's the null solution. The null solution, when b is o. And now I add in a particular solution. A particular solution. And one good particular solution is choose the z to be a constant. Then that'll be 0. So I want that to be 0. So what constant z makes that 0? I think it's b over a, don't you? I think b over a. Does that work good? That's every null solution plus one particular solution. Let me say now, and I'll say again, looking for solutions which are steady states, b over a-- of this particular solution, that particular solution made this 0. So it made this 0. So it's a solution that's not going anywhere. It's a constant solution. It's a solution that can live for all time. OK. B over a. Let me put that word there, steady state. OK. And now I would want to match the initial conditions using c. Yeah. I'd better do that. OK. And I have to get back to y. I have y is 1 over z. So I'm going to have to flip this upside down. I'm going to have to flip this upside down is what will actually happen. Let me make it easy to flip. Let me, I'll change c, which is just some constant to some constant d over a. So then it's a is everywhere down below. And I just write it here in the middle. That makes it easier to flip. So finally I get their solution. Solution to the population equation. But that's the famous word for it, the Logistic equation. This is section 1.7 of the text on the differential equations in linear algebra. 
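Collecting the substitution in one place: with z = 1/y, the chain rule and the logistic equation give a linear equation for z, and its general solution is a null part plus the constant steady state b/a.

```latex
\[ z = \frac{1}{y}, \qquad
   \frac{dz}{dt} = -\frac{1}{y^{2}}\,\frac{dy}{dt}
                 = -\frac{1}{y^{2}}\bigl(ay - by^{2}\bigr)
                 = -\frac{a}{y} + b = -az + b, \]
\[ \frac{dz}{dt} = -az + b
   \quad\Longrightarrow\quad
   z(t) = c\,e^{-at} + \frac{b}{a} . \]
```

That is the z-form of the logistic equation; flipping it back to y = 1/z is the next step.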
It's a very, very much studied example. It's a great example. It fits the growth of human population with some, it's our first level approximation to growth of or other populations or other things. It's a linear term giving us exponential growth, and a quadratic term of competition slowing it down. And let's see that slow down. So now that was a bit of algebra. Much nicer than partial fractions. The bit of algebra just came from this idea of going to z. And now I want to go back to y. So y is 1 over z. So it's a over d e to the minus at plus b. That's our solution. A and b came out of the equation. And d is going to be the number that makes the initial value correct. So at t equals 0, I would have y of 0, whatever the initial population is, is a over d. T Is 0, so that's just 1 plus b. So that tells me what d is. D equals something. It comes from y of 0. So the answer, let me circle that answer. That answer has three numbers in it, a, b, and d. a and b come from the equation. D also involves the initial starting thing, which is exactly what it showed. So you could say we've solved it. But if you ever solve an equation like this, you want to graph it. You want to graph it. So let me draw its graph. This is important picture. So here is time. Here is population. Here's, maybe it started there. This is times 0. And now I want to graph this. I want to graph that function. Really, this is where we're going somewhere. What happens for a long time? At t equal infinity, what happens to the population? Does it grow, like e to the t? Just remember the examples here. We had a growth like e to the t. We had a growth faster than e to the t that actually blew up. What about this guy? What will happen as t goes to infinity with that population? It goes to? AUDIENCE: A over b. PROFESSOR: A over b. A over b. That's the key number in the whole thing. It keeps growing, but it never passes a over b. This is y at infinity. That's the final population. So how does it do this? If I draw this graph-- and what about negative time? Let's go backwards in time. What is it at t equals minus infinity? Then you really see the whole curve. At t equal minus infinity, what is this doing? AUDIENCE: 0 infinity. PROFESSOR: It's 0. Good. Good. Good. T equal minus infinity, this is enormous. This is blowing up. It's in the denominator. We're dividing by it. So the whole thing is going to 0. So here's what the logistic curve looks like. It creeps up. And it's beautifully, there's a point of symmetry here. The growth is increasing here. And then, as a point of inflection you could say, growth is bending upwards for a while. At this point, it starts bending downwards. From that point on, ooh, let's see if I can draw it. It'll get closer, and exponentially close. That wasn't a bad picture. The population here is half way. Here, the population, the final population, is a over b. And just by beautiful symmetry, the population here is a 1/2 of a over b. At this point. If this was the actual population of the world we live in-- I think we're pretty close to this point. I believe, well, of course, nobody knows the numbers, unfortunately, because the model isn't perfect. If the model was perfect, then we could just takes the census and we would know a and b. But the model isn't that great. But it's sort of, we're at a very interesting time, close to a very interesting time. I believe that with reasonable numbers, this a over b might be maybe 12 billion. And we might be, I think we're a little above six billion. I think so. 
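Stepping back from the population numbers for a moment, here is the solved formula with the constant made explicit, written with c = d/a as in the lecture, together with the two limits that shape the S curve (assuming a > 0).

```latex
\[ z(t) = \frac{d\,e^{-at} + b}{a}, \qquad
   y(t) = \frac{1}{z(t)} = \frac{a}{d\,e^{-at} + b}, \qquad
   y(0) = \frac{a}{d + b} \;\Longrightarrow\; d = \frac{a}{y(0)} - b , \]
\[ t \to \infty:\; e^{-at} \to 0,\; y \to \frac{a}{b},
   \qquad\qquad
   t \to -\infty:\; e^{-at} \to \infty,\; y \to 0 . \]
```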
So we're a little bit past it. This is now. This is halfway. That's the halfway point. It's perfectly symmetric. It's called an S curve. And many, many equations in math biology involve S curves. So math biology often gives rise, with simple models, to a kind of problem we've had here with a quadratic term slowing things down. Enzymes, all kinds of. Ordinary differential equations are core ideas in a lot of topics, lot of areas of science. OK. Do I want to say more about the logistic equation? I guess I do want to distinguish one thing. Yeah. One thing about logistic equations and will of course come back to this. OK. Let me look at that logistic equation. Here's my equation. So I've managed to solve it. Fine. Great. Even graph it. But let me come back to the question, suppose I just look at. I can see two constant solutions, two steady states, two solutions where the derivative is 0. So nothing will happen. So in other words, I want to set this thing set to 0 equal to 0 to find steady solutions. Steady means the derivative is 0. So this side has to be 0. So what are the two possible steady states where, if y of 0 is there, it'll stay there? AUDIENCE: 0. PROFESSOR: 0. Y equals 0 is one. And the other? AUDIENCE: A over b. PROFESSOR: Is a over b. So steady equal to 0. And I get two steady states. Let me call them capital Y equals 0 because that's certainly 0 of, if we have 0 population, we'll never move. Or setting this to 0, ay is by squared cancel y's divide by b a over b. So the two steady states are here. That's a steady state and that's a steady state. Those are the only two in this problem. You see how easy that was to find the steady states? That's an important thing to do. And then the other important thing to do is to decide, are those steady states stable? When the population's near a steady state, does it approach that, does it go toward that steady state or away? So what's the answer? For this steady state, that steady state, y is a over b. Is that stable or unstable? So I'll write the word stable. And I'm prepared to put in "un," unstable, if you want me to. This is a key, key idea. And with nonlinear equations, you can answer this stability stuff without formulas. Without formulas. That's the nice thing. And then that comes in a later class. But here's a perfect example. So do we approach this answer or do we leave it? We approach it, the solutions. This is stable, yes. And here's the other stationary point, capital Y. The other steady state is that nothing happens. So now if I'm close to that, if y is a little number, like 2, will that 2 drop to 0, will it approach this steady state, or will it leave it? AUDIENCE: Leave it. PROFESSOR: Leave it. So this steady state is. AUDIENCE: Unstable. PROFESSOR: Unstable. Unstable. Right. Right. With linear equations, we really only had one steady state, like 0. Once it started, it took off forever. Here, it doesn't go infinitely high. It bends down again to that limit, that carrying capacity is what it's called, a over b. I guess I hope you think a nonlinear equation like got a little more to it. Little more interesting, but a little more complicated, than linear equations. Yep. Yep. Yep. And similarly, the rocket equation, we could at the right time soon in the course, ask the same thing, a rocket equation was something like that. What are the steady states? Are they stable? Are they unstable? Can you find a formula? Here. This. We got a formula. And there are other nonlinear equations, which we'll see. OK. 
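One standard way to confirm the stable/unstable conclusions without any formula for y(t), not carried out in the lecture, is to look at the sign of the derivative of the right-hand side at each steady state.

```latex
\[ f(y) = ay - by^{2}, \qquad f'(y) = a - 2by, \]
\[ f'(0) = a > 0
   \;\Longrightarrow\; \text{small populations grow away from } Y = 0 \text{ (unstable)}, \]
\[ f'\!\left(\frac{a}{b}\right) = a - 2b\cdot\frac{a}{b} = -a < 0
   \;\Longrightarrow\; \text{populations near } Y = \frac{a}{b} \text{ move toward it (stable)}. \]
```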
I could create more separable equations, but I guess I hope that you see with separable equations, you just separate them and integrate a y integral and a t integral. Is that OK any question on this nonlinear separable stuff? Differential equations courses and the subject tends to be types of equations as can solve. And then there are a hell of a lot of equations that are not on anybody's list, where you could maybe solve them by an infinite series, but not by functions that we know. OK. I'm ready to do the other topic for today. It's the topic that I left incomplete on Monday. So I'm staying with first order equations, but actually this topic is essential for second order equations. So I'm going to topic two for today. So topic two will involve complex numbers. So we have to deal with complex numbers. And the purpose of introducing these complex numbers is to deal with what we met last time when the right hand side, the forcing term, was a cosine. Typical alternating current, oscillating, rotating, rotation. All these things produce trig functions. Maybe rotation is more of a mechanical engineering phenomenon. Alternating current more of an EE phenomenon. But they're always there. And what was the point? The point was we had some linear equation, and we had some forcing by something like cos omega t. Or it could be A cos omega t and B sine omega t. Either just cosine alone, or maybe these come together. And then the solution was y equals some combination of those same guys. In other words, what I'm saying is cosines are nice right hand forcing functions. Fortunately, because we see them all the time. But they do lead to cosines and sines. I emphasized that last time. If we just have cosines in the forcing function, we can't expect that there's any damping, we can't expect only cosines. We have to expect some sines. In other words, we have to deal with combinations of them. And the question is, how do you understand cos omega t plus 3. Or let me take a first example. Example-- cos t plus sine t. That's a perfect example. So what is omega here in this example that I'm starting with? AUDIENCE: 1. PROFESSOR: 1. So I just read off the coefficient of t is 1, 1 hertz here. But we have got this combination. And the question is, how do we understand that cosine plus sine? Two very simple functions, but they're added, unfortunately. And there's a much better way to write this so you really see it. You really see this. That's called a sinusoid. And the rule that want to focus on now is that everything of that kind, of this kind, of this kind, of a cosine plus a sine, can be compressed into one term. One term. Of course, it's got to have two constants to choose because that had an a and a b. This had an m and an n. This had a 1 and a 1. But the term I'm looking for is some number R times a pure cosine of omega t, but with a phase shift. So you see there are two numbers here to choose. It's really like going from rectangular to polar. Say in complex numbers, let's just remember the first fact about a complex number. If the real part is 3, and the imaginary part is, let's say 2, then here's a complex number, 3 plus 2 i. So this was the real axis. This was the imaginary axis. I went along 3, I went up 2, I got to that number. There it is. I plotted the number 3 plus 2 i in the complex plane. And for me, that number 3 plus and so on, really saying something important. And maybe it's not entirely new. I'm saying something important about complex numbers, this is their rectangular form. 
Something plus something. That form is nice to add to another complex number. If I added 3 plus 2 i to 1 plus i, what would I get? AUDIENCE: 4 plus 3 i. PROFESSOR: 4 plus 3 i. But if I multiply, multiply, 3 plus 2 i times, let's say I square it. I multiply 3 plus 2 i by 3 plus 2 i. What do I get? If I do it with this rectangular form, I get a mess. I can't see what's happening. It's the same over here. This is like having a 1 and a 1, with an addition. This is like a polar form where it's one term. OK. So let me answer the question here and then let me answer the question there. And then you've got a good shot at what complex numbers can do, and why we like the polar form for squaring, for multiplying, for dividing. What's the polar form? Well, I'm using that word "polar" in the same way we use polar coordinates. What are the polar coordinates of this point? They're the radial distance, which is what? So what's that distance? That's the R you could say. It corresponds to this R here. So I'm just using Pythagoras. That hypotenuse is what? AUDIENCE: Square root of 13. PROFESSOR: Square root of 13. Thanks. 9 plus 4, square root of 13. And what's the other number that's locating this in polar coordinates? The angle. And the angle. What can we say about that angle? Let's call it phi is-- what's the angle? Well, it's some number. It's between 0 and pi over 2, I'm sure of that. What do I know about that angle? I know that this is 2 and this is 3. So that's telling me the angle. Well, what is that really telling me immediately? It's telling me the. AUDIENCE: Tangent. PROFESSOR: Tangent of the angle. So the tangent of the angle is 2 over 3. And the magnitude is square root of 13. OK. So those beautiful numbers, 2 and 3, have become a little weirder. Square root of 13, inverse tangent of 2/3. You could say, well, that's not so nice. What was I going to do? I was going to try squaring that number. So if I square 3 plus 2 i, or if I take the 10th power of 3 plus 2 i, or the exponential, all these things, then I'm happy with polar coordinates. Like, what would be the magnitude of the square? And where will the square of that number, so I want to put in 3 plus 2 i squared, which I can figure out in rectangular, of course-- a 9, and 6 i, or 12 i, or 4 i squared, stuff like that. It's not pleasant. What's the magnitude, what's the R for this guy? What's the size of that number squared? Yes? Say that again. AUDIENCE: 13. PROFESSOR: 13. Right. I just have to square this square root so I get 13. And the angle will be, what's the angle for the square there? I don't want a number. I guess I'm just doing this. R e to the i phi squared is R squared. And what's the angle here? E to the i phi squared is e to the 2 i phi. It's the angle doubled. E to the 2 i phi. The angle just went from phi to 2 phi. The lengths went from square root of 13 to 13. Squaring, multiplying is nice with complex numbers. Maybe can I before I go on and on about complex numbers, I should ask you, how many know all this already? Complex numbers are familiar? Mostly. Correctly, with a wiggle. OK. I won't go more about complex numbers. Let me come back to my question here. Let me come back to the application. So here it is with complex numbers. Here it is with sinusoids. And the little beautiful bit of math is that the sinusoid question goes completely parallel to the complex number question. So you have an idea on those complex numbers. We'll see them again. Let me go to this. So I want this to be the same as this, OK. 
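As a side check on the complex-number warm-up above, here is the squaring example carried out both ways, confirming the polar rule (square the length, double the angle) against the rectangular arithmetic the lecture calls unpleasant.

```latex
\[ (3 + 2i)^{2} = 9 + 12i + 4i^{2} = 5 + 12i, \qquad
   |5 + 12i| = \sqrt{25 + 144} = 13 = \bigl(\sqrt{13}\bigr)^{2}, \]
\[ 3 + 2i = \sqrt{13}\,e^{i\varphi},\ \ \tan\varphi = \tfrac{2}{3}
   \quad\Longrightarrow\quad
   (3 + 2i)^{2} = 13\,e^{2i\varphi}, \qquad
   \tan 2\varphi = \frac{2\cdot\frac{2}{3}}{1 - \frac{4}{9}} = \frac{12}{5} . \]
```

And 5 + 12i does have imaginary part over real part equal to 12/5, so the doubled angle checks out.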
Maybe I'm going to have to use a new board for this. Can I start a new board? So I want cos t plus sine t to be some number R times cosine of t 1. I can see omega's 1, so I just put to 1 minus some angle. OK. And I want to choose R and phi to make that right. You see what I like about it? This tells me the magnitude of the oscillation. It tells me how loud the station is. When I see cos t and, separately, sine t, or I might see 3 cos t and 2 sine t. 3 cos t is a cosine curve. 2 sine t is a sine curve shifted by 90. I put them together, it bumps, it bumps, bumps. Not completely clear. It seems to me just beautiful that if I put together a cosine curve that we know, that starts at 0, with a sine curve that starts at 0, the combination is a cosine curve. Isn't that nice? I mean, you know, that sometimes math gets worse and worse whatever you do. But this is really nice that we can put the two into one. But you see, it's going to-- well, let's do it. What would R and phi be here? So I'll use a trig fact here. A cosine of a difference of angles, so this is R times cosine t, do you member this? This was the whole point of going to high school. Plus sine t sine phi. So now, how do I get R and phi? I use the same idea that worked last time. I match the cosine terms and I match the sine terms. So the cosine t has a 1. 1 cosine t is R cos phi. That's what's multiplying cosine t. And the sine t has a 1. And that has to agree with R sine phi. So I'm in business if I solve those two equations. And well, they're not linear equations. But I can solve them. Of course, the one fact that you never forget is that sine squared plus cosine squared is 1. Right? So if I square that one, and square that one, and add, what will I get? 1 squared and 1 squared will be 2, on the left hand side. On the right hand side, I'll have R squared cos squared, R squared cos squared, and plus R squared sine squared. And what's that? What's R squared cosine squared plus r squared sine squared? AUDIENCE: R squared. PROFESSOR: It's just R squared. So all that added up to R squared. In other words, it's just like polar coordinates. R is the square root of 2. That's telling us the magnitude of the response. Square root of 2. You see, it's just like complex numbers. It's like the cosine gave us a real part and the sine gave us an imaginary part. And R was the hypotenuse. And that's really nice. So R is the square root of 2. OK. Now, the angle is never quite as nice. But how can we get something about an angle out of there? All we could get in this case here was the tangent of the angle. And I'll be happy with that again here because it's a totally parallel question. How am I going to get the tangent of the angle? What do I have? From these two equations, I want to eliminate R. So how do I eliminate R? What do I do? Divide. Divide. I guess if I want tangent sine over cosine, I'll divide this one in the top by this one in the bottom. So I take the ratio. That'll cancel the Rs perfectly. It'll leave me with 10 phi. And here it happens to be 1. OK. So what have I learned? I've learned that when these two add up together, they equal what? R square root of 2. You see how easy it is. Square root of 2 came from the square root of 1 plus. It's like Pythagoras. Pythagoras going in circles, really. Times the cosine of t minus. And what is phi? Its tangent is 1, so what's the angle phi? AUDIENCE: Pi over 4. PROFESSOR: Pi over 4. Right. So that's the sinusoidal identity when the numbers are 1 and 1. But you saw the general rule. 
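To have the matching steps in one place, here is the 1-and-1 case written compactly, exactly as derived above.

```latex
\[ \cos t + \sin t = R\cos(t - \varphi)
   = R\cos\varphi\,\cos t + R\sin\varphi\,\sin t, \]
\[ R\cos\varphi = 1,\quad R\sin\varphi = 1
   \;\Longrightarrow\; R^{2} = 1^{2} + 1^{2} = 2,\quad \tan\varphi = 1, \]
\[ \cos t + \sin t = \sqrt{2}\,\cos\!\left(t - \frac{\pi}{4}\right). \]
```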
Let me just take it. Suppose this is the output, and cos omega t plus n sine omega t. What is the gain? What's the magnitude, the amplitude, the loudness of the volume in this when I'm tuning the radio? What's the R for this guy? What's this R? If we just follow the same idea. So if we have m times a cosine and n times a sine, what's your guess? What's your guess for R, the magnitude? I'm guessing a square root of what? Yeah? You got. What is it? n squared-- [INTERPOSING VOICES] PROFESSOR: Plus n squared. Way to go. M squared plus N squared. And the angle is like the phase shift. I'm not great at graphing, but let me try to go back to my simple example. If I tried to add up on the same graph cosine t, which would start from 1 and drop to 0, go like that, right? Something like that would be cosine. And now I want to add sine t to that. So that climbs up to 1 back to 0, down. And now if I add those two, this formula is telling me that it comes out neat. Neatly. That one plus that one is another sinusoid with height square root of 2. If I had different chalk, I've got at least a little bit different. But does it start here? Of course not. It starts here, I guess. But it goes up, right? Because this comes down, but this is going up. All together, it's up to, where is the peak? Where is the peak on the sum? So I'm adding, everybody sees what I'm doing? I'm adding a cosine curve and a sine curve. And it goes up. And where does it peak? What angle is it going to peek at? What's the biggest value this gets to? AUDIENCE: [INAUDIBLE]. PROFESSOR: At pi over 4, it'll peak. At pi over 4, it'll be the cosine of 0, which is 1. It's height'll be the magnitude, the gain, square root of 2. So it'll peak at pi over 4, which is probably about there, right? Peak at pi over 4 and, I don't know if I got it right frankly. I did my best. That's the sum. That right there. The first key point is it's a perfect cosine. The second key point is it's a shifted cosine. The third key point is its magnitude is the square root of 1 squared plus 1 squared, or n squared plus n squared, or a squared plus b squared. So that's the sinusoidal identity. A key identity and being able to deal with forcing terms, source terms, that are sinusoids. OK. Now, I'm going to take one more step since we have just like 10 minutes left, and let the number i get in here properly. Get a complex number to show up here. OK. Before I start on this, let me recap. Let me recap today's lecture. It started with nonlinear separable equations. And a great example was the logistic equation up there, with the S curve. That took half the lecture. The second half of the lecture has started with things real with sinusoids that are combinations of cosine and sine and has written them in a one term way. And now I want to get the same one term picture from using complex numbers. OK. OK. And everything I do would be based on this great fact from Euler that e to the i omega t. The real part is cosine omega t. And the imaginary part is sine omega t. That's a central formula. Let me draw it rather than proving it. Let me draw what that means. I'm in the complex plane again. Real part is the cosine. The imaginary part is the sine. That number there is e to the i omega t because it's got that real part and that imaginary part. And what's its magnitude? What's the R, the polar distance for cos omega t plus, for this number, which is for this number? What's the hypotenuse here? Everybody knows. AUDIENCE: 1. PROFESSOR: 1. Hypotenuse is 1. 
Cos squared plus sine squared is 1. So e to the i omega t is on a circle of radius 1. That's the most important circle in the complex world, the circle of radius 1. And all these points are on it. And their angles are omega t. And as t increases, the angle increases, and you go around the circle. You've seen it. Physics couldn't live without this model. OK. So that's basically what we have to know. And now, how do we use it? Well, the idea is to deal with the equation. Like, the equation I had last time was dy, dt equals y plus cos t. That gave us some trouble because the solution didn't just involve cosines, it also involved sines. Yeah. So I want to write that equation differently, in complex form. And this is the key point here. So I'm going to look at the equation dz, dt equals z plus e to the i t. Well, I'll make that cos omega t just to have a little more, the units are better, everything's better if I have a frequency there. Units of this are seconds and the units of this are 1 over seconds. Now, question. What's the relation between the solution z to that complex equation and the solution y to that equation? Of course, they have to be related, otherwise it was stupid to move to this complex one. My claim is that complex equation is easy to solve. And it gives us the answer to the real equation. And what's the connection between y and z? AUDIENCE: So y's the real part. PROFESSOR: Y is, exactly, say it again. AUDIENCE: The real part-- PROFESSOR: Of z. Y is the real part of z. So that gives us an idea. Solve this equation and take it's real part. If I can solve this equation without getting into cosine and sine separately and matching, I can stay real. I solve the equation by totally real methods up to now. Now I'm going to say, here's another approach. Look at the complex equation, solve it, and take the real part. You may prefer one method. You may like to stay real. In a way, it's a little more straightforward. But the complex one is the one that will show us, it brings out this R, it brings out the gain, it brings out the important-- engineering quantities are important, if I do it this way. Now, I believe that the solution to that is easy. Actually, it is included in what I did last time. It's a linear equation with a forcing term that's a pure exponential. And what kind of solution do I look for? I'm looking for a particular solution. If I see an exponential forcing term, I say, great. The solution will be an exponential. So the solution will be sum. Z is sum capital Z e to the i omega t. Plug it in. What happens if I plug that in to find capital Z, which is just a number? Right. This is my method. This is a linear equation, with one of those cool right hand sides, where the solution has the same form with a constant, and I just have to find that constant. So I plug it in. Dz, dt. Take the derivative of this, z i omega will come down. E to the i omega t. Z is just this, z to the i omega t. And this is just 1 e to the i omega t. So I plugged it in, hoping things would be good. And they are because I can cancel e to the i omega t, that's the beauty of exponentials, leaving just a 1 there. So what's capital Z? What's capital Z then? I've got a z here. I better bring it over here. And I've got the 1 there. I think the z is 1 over. When I bring that z over here, do you see what I'm getting? It's all multiplying this e the i omega t. It's a number there. Z times i omega and comes over as a minus z. What do I have multiplying z here? I see the i omega. 
And what else have I got multiplying the z? AUDIENCE: Minus. PROFESSOR: A negative 1 because it came over with a minus sign. Done. Equation solved. Equation solved. Complex equation solved. So the point is, the complex equation was a cinch. We just assumed the right form, plugged it in, found the number, we're done. But there's one more step, which is what? Take the real part. So I have to take the real part of this. So the correct answer is y is the real part of that number, 1 over i omega minus 1 times e to the i omega t. I'm tempted to stop there, but just with a little comment. How am I going to find that real part? And what form will it have? What form will that real part have? Yeah, maybe just to say what form will it have? The real part, it's going to be a sinusoid. But I have a complex number multiplying this guy. The real part is going to be exactly of the form we-- well, of course, it had to be the form because that was another way to solve the equation. It's going to be some number. And I'll call it g, for gain, times the real part. And so the real part will be a cosine. Yeah, it's just perfect. A cosine of omega t. And there'll be a phase. Yeah. i haven't taken that step fully. I got to that fully. And then I said that that, if I use some complex arithmetic, will come out to be this. And you see the beauty of that answer, which was way better than a sum of sines and cosines. We see the gain. We see the amplitude. And we see the phase shift. Yeah. So I don't know, that would be a good exercise in complex numbers. Find g and find phi, in taking the real part of this thing. Yeah. It's a pure exercise in using complex numbers. I don't feel like doing it today. If we do it, you just see a lot of formulas. Here, you see the point. The point was that the complex equation could be solved in one line. We just did it. But that left us the problem of taking the real part. That was the e to the i omega t there. Left us the problem of taking the real part. And that's a practice with complex arithmetic. So you've got the choice. Either stay real-- sign plus cosine. And then use the sinusoidal identity, polar form. Or get the polar form from here. Same answer both ways. |
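Since the lecture leaves the real part as an exercise, here is one way to finish it for the equation as stated, dz/dt = z + e^(iωt): put iω − 1 into polar form and read off the gain and phase. A quick cross-check by the all-real method from earlier in the lecture (trying M cos ωt + N sin ωt) gives M = −1/(1 + ω²), N = ω/(1 + ω²), and the same gain √(M² + N²) = 1/√(1 + ω²).

```latex
\[ Z = \frac{1}{i\omega - 1}, \qquad
   i\omega - 1 = \sqrt{1 + \omega^{2}}\;e^{i\theta},
   \quad \theta = \pi - \arctan\omega, \]
\[ y_{p}(t) = \operatorname{Re}\frac{e^{i\omega t}}{i\omega - 1}
            = \frac{1}{\sqrt{1 + \omega^{2}}}\,\cos(\omega t - \theta),
   \qquad g = \frac{1}{\sqrt{1 + \omega^{2}}},\quad \varphi = \theta . \]
```

And a small numerical sketch, not from the lecture, checking that this formula really satisfies dy/dt = y + cos ωt; omega here is an arbitrary sample value.

```python
import numpy as np

# Check that y_p(t) = cos(omega*t - theta) / sqrt(1 + omega^2)
# satisfies dy/dt = y + cos(omega*t).
omega = 2.0
theta = np.pi - np.arctan(omega)
g = 1.0 / np.sqrt(1.0 + omega**2)

t = np.linspace(0.0, 10.0, 200_001)
y = g * np.cos(omega * t - theta)

dydt = np.gradient(y, t)                     # numerical derivative of y
residual = dydt - (y + np.cos(omega * t))    # should be ~0 everywhere

print(np.max(np.abs(residual)))              # small (limited only by finite differencing)
```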
MIT_2087_Engineering_Mathematics_Linear_Algebra_and_ODEs_Fall_2014 | 2_FirstOrder_Equations.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Well, OK, Professor Frey invited me to give the two lectures this week on first order equations, like that one, first order dy dt. And the lectures next week will be on second order equation. So we're looking for, you could say, formulas for the solution. We'll get as far as we can with formulas, then numerical methods. Graphical methods take over in more complicated problems. This is a model problem. It's linear. I chose it to have constant coefficient a, and let me check the units. Always good to see the units in a problem. So let me think of this y, as the money in a bank, or bank balance, so y as in dollars, and t, time, in years. So we're looking at the ups and downs of bank balance y. The rate of change, so the units then are dollars per year. So every term in the equation has to have the right units. So y is in dollars, so the interest rate a is percent per year, say 6% a year. So a could be 6%-- that's dimensionless-- per year, or half a percent per month if we change. So if we change units, the constant a would change from 6 to a half. But let's stay with 6. And then q of t represents deposits and withdrawals, so that's in dollars per year again. Has to be. So that's continuous. We think of the deposits and the interest as being computed continuously as time goes forward. So if that's a constant-- and I'll take that case first, q equal 1-- that would mean that we're putting in, depositing $1 per year, continuously through the year. So that's the model that comes from a differential equation. A difference equation would give us finite time steps. So I'm looking for the solution. And with constant coefficients, linear, we're going to get a formula for the solution. I could actually deal with variable interest rate for this one first order equation, but the formula becomes messy. But you can still do it. After that point, for a second order equations like oscillation, or for a system of several equations coupled together, constant coefficients is where you can get formulas. So let's go with that case. So how to solve that equation? Let me take first of all, a constant, constant source. So I think of q as the source term. To get one nice formula, let me take this example, ay plus 1, let's say. How do you find y of t to solve that? And you start with some initial condition y of 0. That's the opening deposit that you make at time 0. How to solve that equation? Well, we're looking for a solution. And solutions to linear equations have two parts. So the same will happen in linear algebra. One part is a solution to that equation, so we're just looking for one, any one, and we'll call it a particular solution. And the associated null equation, dy dt equal ay. So this is an equation with q equals 0. That's why it's called null. And it's also called homogeneous. So more textbooks use that long word homogeneous, but I use the word null because it's shorter and because it's the same word in linear algebra. So let me call yn the null solution, the general null solution. And y, I'm looking here for a particular solution yp, and I'm going to-- here's the key for linear equations. 
Let me take that off and focus on those two equations. How does solving the null equation, which is easy to do, help me? Why can I, as I plan to do, add in yn to yp? I just add the two equations. Can I just add those two equations? I get the derivative of yp plus yn on the left side. And I have a times yp plus yn. And that is a critical moment there when we use linearity. I had a yp a yn, and I could put them together. If it was y squared, yp squared and yn squared would not be the same as yp plus yn squared. It's the linearity that comes, and then I add the 1. So what do I see from this? I see that yp plus yn also solves my equation. So the whole family of solutions is 1 yp plus any yn. And why do I say any yn? Because when I find one, I find more. The solutions to this equation are yn could be e to the at, because the derivative of e to the at does bring down a factor a. But you see, I've left space for any multiple of e to the at. This is where that long word homogeneous comes from. It's homogeneous means I can multiply by any constant, and I still solve the equation. And of course, the key again is linear. So now I have-- well, you could say I've done half the job. I've found yn, the general yn. And now I just have to find one yp, one solution to the equation. And with this source term, a constant, there's a nice way to find that solution. Look for a constant solution. So certain right hand sides, and those are the like the special functions for the special source terms for differential equation, certain right hand sides-- and I'm just going to go down a list of them today. The next one on the list-- can I tell you what the next one on the list will be? y prime equal ay. I use prime for-- well, I'll write dy dt, but often I'll write y prime. dy dt equal ay plus an exponential. That'll be number two. So I'm just preparing the way for number two. Well, actually number one, this example is the same as that exponential example with exponent s equal 0, right? If s is 0, then I have a constant. So this is a special case of that one. This is the most important source term in the whole subject. But here we go with a constant 1. So we've got yn. And what's yp? I just looked to see. Can I think of one? And with these special functions, you can often find a solution of the same form as the source term. And in this case, that means a constant. So if yp is a constant, this will be 0. So I just want to pick the constant that makes this thing 0. And of course, their right hand side is 0 when yp is minus 1 over a. So I've got it. We've solved that equation, except we didn't match the initial condition yet. Let me if you take that final step. So the general y is any multiple, any null solution, plus any one particular solution, that one. And we want to match it to y of 0 at t equals 0. So I want to take that solution. I want to find that constant, here. That's the only remaining step is find that constant. You've done it in the homework. So at t equals 0, y of 0 is-- at t equals 0, this is C. This is the minus 1 over a. So I learn what the C has to be. And that's the final step. C is bring the 1 over a onto that side, so C will be y of 0 e to the at minus 1 over a e to at. And here we had a minus 1 over a. Well, it'll be plus 1 over a e to the at. So now I've just put in the C, y of 0 plus 1 over a. y of 0 plus 1 over a has gone in for C. And now I have to subtract this 1 over a. Here, I see a 1 over a, so I can do it neatly. Got a solution. We can check it, of course. 
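Collecting the constant-source solution in one formula (the same steps, just assembled):

```latex
\[ y(t) = C e^{at} - \frac{1}{a}, \qquad
   y(0) = C - \frac{1}{a} \;\Longrightarrow\; C = y(0) + \frac{1}{a}, \]
\[ y(t) = \Bigl(y(0) + \frac{1}{a}\Bigr)e^{at} - \frac{1}{a}
        = y(0)\,e^{at} + \frac{e^{at} - 1}{a} . \]
```

The last form separates what the initial deposit grows into from what the steady deposits grow into; it is the piece that reappears later when the source switches on at a time T.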
At t equals 0, this disappears, and this is y of 0. And it has the form. It's a multiple of e to the at and a particular solution. So that's a good one. Notice that to get the initial condition right, I couldn't take C to be y of 0 to get the initial condition right. To get the initial condition right, I had to get that, this minus 1 over a in there. Good for that one? Let me move to the next one, exponentials. So again, we know that the null equation with no source has this solution e to the at. And I'm going to suppose that the a in e to the at in the null solution is different from the s in the source function, which will come up in the particular solution. So you're going to see either the st in the particular solution and an e to the at in the null solution. And in the case when s equals a, that's called resonance, the two exponents are the same, and the formula changes a little. Let's leave that case for later. How do I solve this? I'm looking for a particular solution because I know the null solutions. How am I going to get a particular solution of this equation? Fundamental observation, the key point is it's going to be a multiple of e to the st. If an exponential goes in, then that will be an exponential. Its derivative will be an exponential. I'll have e to the st's everywhere. And I can get the number right. So I'm looking for y try. So I'll put try, knowing it's going to work, as some number times e to the st. So this would be like the exponential response. Response, do you know that word response? So response is the solution. The input is q, and the response is Y. And here, the input is e to the st, and the response is a multiple of e to the st. So plug it in. The timed derivative will be Y. Taking the derivative will bring down a 1. e to the st equals aY. A aY e to the st plus 1 e to the st. Just what we hoped. The beauty of exponentials is that when you take their derivatives, you just have more exponential. That's the key thing. That's why exponential is the most important function in this course, absolutely the most important function. So it happened here. I can cancel e to the st, because every term has one of them. So I'm seeing that-- what am I getting for Y? Getting a very important number for Y. So I bring aY onto this side with sY. On this side I just have a 1. Maybe it's worth putting on its own board. Y is, so Ys aY comes with a minus, and the 1, 1 over-- so Y was multiplied by s minus a. That's the right quantity to get a particular solution. And that 1 over s minus a, you see why I wanted s to be different from a. I If s equaled a in that case, in that possibility of resonance when the two exponents are the same, we would have 1 over 0, and we'd have to look somewhere else. The name for that-- this has to have a name because it shows up all the time. The exponential response function, you could call it that. Most people would call it the transfer function. So any constant coefficient linear equation's going to have a transfer function, easy to find. Everything easy, that's what I'm emphasizing, here. Everything's straightforward. That transfer function tells you what multiplies the exponential. So the source was here. And the response is here, the response factor, you could say, the transfer function. Multiply by 1 over s minus a. So if s is close to a, if the input is almost at the same exponent as the natural, as the null solution, then we're going to get a big response. So that's good. 
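Here is the exponential-response calculation collected, together with the full solution once the initial condition is matched; the matching step is standard and is included for completeness (assuming s ≠ a, as in the lecture).

```latex
\[ y_{p} = Y e^{st}:\quad sY e^{st} = aY e^{st} + e^{st}
   \;\Longrightarrow\; Y = \frac{1}{s - a},
   \qquad y_{p}(t) = \frac{e^{st}}{s - a}, \]
\[ y(t) = C e^{at} + \frac{e^{st}}{s - a}, \qquad
   y(0) = C + \frac{1}{s - a}
   \;\Longrightarrow\;
   y(t) = \Bigl(y(0) - \frac{1}{s - a}\Bigr)e^{at} + \frac{e^{st}}{s - a} . \]
```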
For a constant coefficient problem second order, other problems we can find that response function. It's the key function. It's the function if we have, or if we were to look at Laplace transforms, that would be the key. When you take Laplace transforms, the transfer function shows up. Then when you take inverse Laplace transforms, you have to find what function has that Laplace transform. So did we get the-- we got the final answer then. Let me put it here. y is e to the st times this factor. So I divide by s minus a. A nice solution. Let me also anticipate something more. An important case for e to the st is e to the i omega t. e to the st, we think about as exponential growth, exponential decay. But that's for positive s and negative s. And all important in applications is oscillation. So coming, let me say, coming is either late today or early Wednesday will be s equal i omega, so where the source term is e to the i omega t. And alternating, so this is electrical engineers would meet it constantly from alternating voltage source, alternating current source, AC, with frequency omega, 60 cycles per second, for example. Why don't I just deal with this now? Because it involves complex numbers. And we've got to take a little step back and prepare for that. But when we do it, we'll get not only e to the i omega t, which I brought out, but also, it's real part. You remember the great formula with complex numbers, Euler's formula, that e to the i omega t is a combination of cosine omega t, the real part, and then the imaginary part is sine omega t. So this is looking like a complex problem. But it actually solves two real problems, cosine and sine. Cosine and sine will be on our short list of great functions that we can deal with. But to deal with them neatly, we need a little thought about complex numbers. So OK if I leave e to the i omega t for the end of the list, here? So I'm ready for another one, another source term. And I'm going to pick the step function. So the next example is going to be dy dt equals ay plus a step. Well, suppose I put H of t there. Suppose I put H of t. And I ask you for the solution to that guy. So that step function, its graph is here. It's 0 for negative time, and it's 1 for positive time. So we've already solved that problem, right? Where did I solve this equation? This equation is already on that board. Because why? Because H of t is for t positive. That's the only place we're looking. This whole problem, we're not looking at negative t. We're only looking at t from 0 forward. And what is H of t from 0 forward? It's 1. It's a constant. So that problem, as it stands, is identical to that problem. Same thing, we have a 1. No need to solve that again. The real example is when this function jumps up at some later time T. Now I have the function is H of t minus T. Do you see that, why the step function that jumps at time T has that formula? Because for little t before that time, in here, this is-- what's the deal? If little t is smaller than big T, then t minus T is negative, right? If t is in here, then t minus capital T is going to be a negative number. And H of a negative number is 0. But for t greater than capital T, this is a positive number. And H of a positive number is 1. Do you see how if you want to shift a graph, if you want the graph to shift, if you want to move the starting time, then algebraically, the way you do it is to change t to t minus the starting time. And that's what I want to do. So physically, what's happening with this equation? 
So it starts with y of 0 as before. Let's think of a bank balance and then other things, too. If it's a bank balance, we put in a certain amount, y of 0. We hope. And that grew. And then starting at time, capital T, this switch turns on. Actually, physically, step function is really often describing a switch that's turned on, now. This source term act begins to act at that time. And it acts at 1. So at time capital T we start putting money into our account. Or taking it out, of course. If this with a minus sign, I'd be putting money in. Sorry, I would start with some money in, y of 0. I would start with money in. Yeah, actually, tell me what's the solution to this equation that starts from y of 0? What's the solution up until the switch is turned on? What's the solution before this switch happens, this solution while this is still 0? So let's put that part of the answer down. This is for t smaller than T. What's the answer? This is all common sense. It's coming fast, so I'm asking these questions. And when I asked that question, it's a sort of indication that you can really see the answer. You don't need to go back to the textbook for that. What have we got here? Yeah? AUDIENCE: Is it the null solution [INAUDIBLE]? PROFESSOR: It'll be this guy. Yeah, the particular solution will be 0. Right, the particular solution is 0 before this is on. I'm sorry, the null solution is 0, and the particular solution, well, the particular solution is a guy that starts right. I don't know. Those names were not important. And then the question is-- so it's just our initial deposit growing. Now, all I ask, what about after time T? What about after time T? For t after time T, and hopefully, equal time T, what do you think y of t will be? Again, we want to separate in our minds the stuff that's starting from the initial condition from the stuff that's piling up because of the source. So one part will be that guy. I haven't given the complete answer. But this is continuing to grow. And because it's linear, we're always using this neat fact that our equation is linear. We can watch things separately, and then just add them together. So I plan to add this part, which comes from initial condition to a part that-- maybe we can guess it-- that's coming from the source. And how do we have any chance to guess it? Only because that particular source, once it's turned on, jumps to a constant 1, and we've solved the equation for a constant 1. Let me go back here. I think our answer to this question-- so this is like just first practice with a step function, to get the hang of a step function. So I'm seeing this same y of 0 e to the at in every case, because that's what happens to the initial deposit. I'll say grow, assuming the bank's paying a positive interest rate. And now, where did this term comes from? What did that term represent? AUDIENCE: The money that [INAUDIBLE]. PROFESSOR: The money that, yeah? AUDIENCE: They had each of [INAUDIBLE]. PROFESSOR: The money that came in and grew. It came in, and then it grew by itself, grew separately from that these guys. So the initial condition is growing along. And the money we put in starts growing. Now, the point is what? That over here, it's going to look just like that. So I'm going to have a 1 over a. And I'm going to have something like that. But can you just guess what's going to go in there? When I write it down, it'll make sense. 
So this term is representing what we have at time little t, later on, from the deposits we made, not the initial one, but the source, the continuing deposits. And let me write it. It's going to be a 1 over a e to the a something minus 1. It's going to look just like that guy. When I say that guy, let me point to it again-- e to the at minus 1. But it's not quite e to the at minus 1. What is it? AUDIENCE: t minus [INAUDIBLE]. PROFESSOR: t minus capital T, because it didn't start until that time. So I'm going to leave that as, like, reasonable, sensible. Think about a step function that's turned on a capital time T. Then it grows from that time. Of course, mentally, I never write down a formula like that without checking at t equal to T, because that's the one important point, at t equal capital T. What is this at t equal capital T? It's 0. At t equal capital T, this is e to the 0, which is 1 minus 1 altogether 0. And is that the right answer? At t equal capital T is 0, should I have nothing here? Yes? No? Give me a head shake. Should I have nothing at t equal capital T? I've got nothing. e to the 0 minus 1, that's nothing? Yes, yes that's the right thing. Because at capital T, the source has just turned on, hasn't had time to build up anything, just that was the instant it turned on. So that's a step function. A step function is a little bit of a stretch from an ordinary function, but not as much of a stretch as its derivative. In a way, this is like the highlight for today, coming up, to deal with not only a step function, but a delta function. I guess every author and every teacher has to think am I going to let this delta function into my course or into the book? And my answer is yes. You have to do it. You should do it. Delta functions are-- they're not true functions. As we'll see, no true function can do what a delta function does. But it's such an intuitive, fantastic model of things happening over a very, very short time. We just make that short time into 0. So we're saying with the delta function, we're going to say that something can happen in 0 time. Something can happen in 0 time. It's a model of, you know, when a bat hits a ball. There's a very short time. Or a golf club hits a golf ball. There's a very short time interval when they're in contact. We're modeling that by 0 time, but still, the ball gets an impulse. Normally, for 0 time, if you're doing things continuously, what you do over 0 time is no importance. But we're not doing things continuously, at all. So here we go. You've seen this guy, I think. But if you haven't, here's the time to see it. So the delta function is the derivative of-- so I've written three important functions up here. Let me start with a continuous one. That function, the ramp is 0, and then the ramp suddenly ramps up to t. Take its derivative. So the derivative, the slope of the ramp function is certainly 0 there. And here, the slope is 1. So the slope jumped from 0 to 1. The slope of the ramp function is the step function. Derivative of ramp equals step. Why don't I write those words down? Derivative of ramp equals step. So there is already the step function. In pure calculus, the step function has already got a little question mark. Because at that point, the derivative in a calculus course doesn't exist, strictly doesn't exist, because we get a different answer 0 on the left side from the answer, 1 on the right side. We just go with that. I'm not going to worry about what is its value at that point. 
It's 0 up for t negative, and it's 1 for t positive. And often, I'll take it 1 for t equals 0, also. Usually, I will. That's the small problem. Now, the bigger problem is the derivative of the-- so this is now the derivative of the step function. So what's the derivative of this step function? Well, the derivative along there is certainly 0. The derivative along here is certainly 0. But the derivative, when that jumped, the derivative, the slope was infinite. That line is vertical. Its slope is infinite. So at that one point, you have an affinity, here, delta of 0. You could say delta of 0 is infinite. But you haven't said much, there. Infinite is too vague. Actually, I wouldn't know if you gave me infinite or 2 times infinite. I couldn't tell the difference. So I'll put it in quotes, because it sort of gives us comfort. But it doesn't mean much. What does mean much? Somehow that's important. Can I tell you how to work with delta functions, how to think about delta functions? It's the right way to think about delta function. So here's some comment on delta function. Giving the values of the function, 0, and infinity, and 0, is not the best. What you can do with a delta function is you can integrate it. You can define the function by integrals. Integrals of things are nice. Do you think in your mind when you take derivatives, as we did going left to right, we were taking derivatives. The function was getting crazy. When we go right to left, take integrals, those are smoothing. Integrals make functions smoother. They cancel noise. They smooth the function out. So what we can do is to take the integral of the delta function. We could take it from any negative number to any positive number. And what answer would we get? What would be the right, well, the one thing people know about the delta function is-- and actually, it's the key thing-- the integral of the delta function. Again, I'm integrating the delta function from some negative number up to some positive number. And it doesn't matter where n is, because the function is 0 there and there. But what's the answer here? Put me out of my misery. Just tell me the number I'm looking for, here, the integral of the delta function. Or maybe you haven't met it. AUDIENCE: [INAUDIBLE]. PROFESSOR: It's? It's the one good number you could guess. It's 1. Now, why is it 1? Because if the delta function is the derivative of the step function, this should be the step function evaluated between N and P. This should be the step function, , here, minus the step function, there And what is the step function? You have to keep it straight. Am I talking about the delta function? No, right now, I've integrated it to get H of t. So this is H of P at the positive side, minus H of N. That's what integration's about. And what do I get? 1, because H of P, the step function here, H is 1. And here, it's 0, so I get 1. Good, that's the thing that everybody remembers about the delta function. And now I can make sense out of 2 delta function, 2 delta of t. That could be my source. So if 2 delta of t was my source, what's the graph of 2 delta of t? Again, it's 0 infinite 0. You really can't tell from the infinity what's up, but what would be the integral of 2 delta of t, the integral of 2 delta of t or some other? Well, let me put in the 2, here? What's the integral of 2 delta of t, would be 2H of t. Keep going. What do I get here? AUDIENCE: 2. PROFESSOR: It would be 2 of these guys, 2 of these, 2 of these, 2. All right? 
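Gathering the chain of three functions and the integral facts in one place (the delta statements are in the usual formal sense, which is all the lecture needs):

```latex
\[ R(t) = \begin{cases} 0, & t \le 0 \\ t, & t \ge 0 \end{cases}
   \qquad
   \frac{dR}{dt} = H(t), \qquad \frac{dH}{dt} = \delta(t), \]
\[ \int_{N}^{P} \delta(t)\,dt = H(P) - H(N) = 1
   \quad (N < 0 < P),
   \qquad
   \int_{N}^{P} c\,\delta(t)\,dt = c . \]
```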
So we made sense out of the strength of the impulse, how hard the bat hit the ball. But of course, we need units in there. We have to have units. And here, the value for that unit was 2. Now, I'm going to-- because this is really worth doing with delta functions. I didn't ask at the start have you seen them before. But they are worth seeing. And they just take a little practice. But then in the end, delta functions are way easier to work with than some complicated function that attempts to model this. We could model that by some Gaussian curve or something. All the integrations would become impossible right away. We could model it by a step function up and a step function down. Then the integrations would be possible. But still, we have this finite width. I could let that width go to 0 and let the height go to infinity. And what would happen? I'd get the delta function. So that's one way to create a delta function, if you like. If you're OK with step functions, then one way to create delta is to take a big step up, step down, and then let the size of the step grow and the width of the steps shrink. Keep the area 1, because area is integral. So I keep this, that little width, times this big height equal to 1. And in the end, I get delta. Now again, my point is that delta functions, that you really understand them. What you can legitimately do with them is integrate them. But now in later problems, we might have not a 1 or a 2, but a function in here, like cosine t, or e to the t, or q of t. Can I practice with those? Can I put in a function f of t? I didn't leave enough space to write f of t, so I'm going to put it in here. f of t delta of t dt. And I'm going to go for the answer, there. My question is what does that equal? You see what the question is? I got my delta function, which I only just met. And I'm multiplying it by some ordinary function. f of t gives us no problems. Think of cosine t. Think of e to the t. What do you think is the right answer for that? What do you think is the right answer? And this tells you what the delta function is when you see this. What do I need to know about f of t to get an answer, here? Do I need to know what f is at t equals minus 1? You could see from the way my voice asked that question that the answer is no. Why do I not care what f is at minus 1? Yeah? AUDIENCE: Because you're multiplying by [INAUDIBLE]. PROFESSOR: Because I'm multiplying by somebody that's 0. And similarly, at f equal minus 1/2, or at f equal plus 1/3, all those f's make no difference, because they're all multiplying 0. What does make a difference? What's the key information about f that does come into the answer? f at? At just at that one point, f at? AUDIENCE: [INAUDIBLE] PROFESSOR: 0, f at 0 is the action. The impulse is happening. The bat's hitting the ball. So we're modeling rocket launching, here. We're launching in 0 seconds instead of a finite time. So in other words, well, I don't know how to put this answer down other than just to write it. I guess I'm hoping you're with me in seeing that what it should be. Can I just write it? All that matters is what f is at t equals 0, because that's where all the action is. And that f of 0, if f of 0 was the 2 that I had there a little while ago, then the answer will be 2. If f of 0 is a 1, if the answer is f of 0 times 1-- and I won't write times 1. That's ridiculous. Now we can integrate delta functions, not just a single integral of delta, but integral of a function, a nice function times delta. And we get f of 0. 
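A quick numerical illustration of the sifting rule, the integral of f(t) times delta being f(0), using the box approximation described above (width eps, height 1/eps, area 1, centered at 0 here); a small sketch, not from the lecture, with cosine as the test function.

```python
import numpy as np

def box_delta(t, eps):
    """Box approximation to delta(t): height 1/eps on [-eps/2, eps/2), area 1."""
    return np.where((t >= -eps / 2) & (t < eps / 2), 1.0 / eps, 0.0)

f = np.cos                         # test function, f(0) = 1
t = np.linspace(-1.0, 1.0, 2_000_001)
dt = t[1] - t[0]

for eps in (0.1, 0.01, 0.001):
    integral = np.sum(f(t) * box_delta(t, eps)) * dt   # Riemann sum of f * delta_eps
    print(eps, integral)           # approaches f(0) = 1 as eps shrinks
```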
So can I just, while we're on the subject of delta functions, ask you a few examples? What is the integral of e to the t delta of t dt? AUDIENCE: It's 1. PROFESSOR: Yeah, say it again? AUDIENCE: It's 1. PROFESSOR: It's 1. It's 1, right. Because e to the t, at the only point we care about, t equal 0 is 1. And what if I change that to sine t? Suppose I integrate sine t times delta of t? What do I get now? I get? AUDIENCE: 0. PROFESSOR: 0, right. And actually, that's totally reasonable. This is a function, which is yeah, it's an odd function. Anyway, sine, if I switch t to negative t, it goes negative. 0 is the right answer. Let me ask you this one. What about delta of t squared? Because if we're up for a delta function, we might square it. Now we've got a high-powered function, because squaring this crazy function delta of t gives us something truly crazy. And what answer would you expect for that? AUDIENCE: 1. PROFESSOR: Would you expect 1? So this is like? I'm just getting intuition working, here, for delta functions. What do you think? I'm looking at the energy when I square something. OK, so we had a guess of 1. Is there another guess? Yeah? AUDIENCE: A third? PROFESSOR: Sorry? AUDIENCE: 1/3. PROFESSOR: 1/3, that's our second guess. I'm open for other guesses before I-- OK, we have a rule here for f of t. And now what is the f of t that I'm asking about in this case? It's delta of t, right? If f of t is delta of t, then that would match this. And therefore, the answer should match. Do you see what I'm shooting for, yeah? AUDIENCE: It'd be infinity? PROFESSOR: It'd be infinity. It would be infinity. That's delta of t squared is that's an infinite energy function. You never meet it, actually. I apologize, so so write it down there. I could erase it right away because you basically never see it. It's infinite energy. Well, I think you'd see it. I mean, we're really going back to the days of Norbert Wiener. When I came to the math department, Norbert Wiener was still here, still alive, still walking the hallway by touching the wall and counting offices. And hard to talk to, because he always had a lot to say. And you got kind of allowed to listen. So anyway, Wiener was among the first to really use delta functions, successfully use delta functions. Anyway, this is the big one. This is the big one. Now, so what's all that about? I guess I was trying to prepare by talking about this function prepare for the equation when that's the source. So dy equal ay plus a delta function. Let me bring that delta function in at time T. So how do you interpret that equation? So like part of this morning's lecture is to get a first handle on an impulse. So let me write that word impulse, here. Where am I going to write it? So delta is an impulse. That's our ordinary English word for something that happens fast. And y of t is the impulse response. And this is the most important. Well, I said e to the st was the most important. How can I have two most important examples? Well, they're a tie, let's say. e to the st is the most important ordinary function. It's the key to the whole course. Delta of t, the impulse, is the important one because if I can solve it for a delta function, I can solve it for anything. Let's see if we can solve it for a delta function, a delta function, an impulse that starts at time T. Again, I'm just going to start writing down the solution and ask for your help what to write next. So what do you expect as a first term in the solution? So I'm starting again from y of 0. 
Let's see if we can solve it by common sense. So how do I start the solution to this? Everybody sees what this equation is saying. I have an initial deposit of y of 0 that starts growing. And then at time capital T I make a deposit. At that moment, at that instant, I make a deposit of 1. That's an instant deposit of 1. Which is, of course, what I do in reality. I take $1 to the bank. They've got it now. At time T, I give them that $1, and it starts earning interest. So what about y of t? What do you think? What's the first term coming from y of 0? So the term coming from y of 0 will be y of 0 to start with, e to at. That takes care of the y of 0. Now, I need something. It's like this, plus I need something that accounts for what this deposit brings. So up until time T, what do I put? So this is for t smaller than T and t bigger than T. So what goes there? For t smaller than T, what's the benefit from the delta function? 0, didn't happen yet. For t bigger than T, what's the benefit from the delta function? AUDIENCE: [INAUDIBLE]. PROFESSOR: For t bigger than T, well, that's right. OK, but now I've made that deposit at time capital T. Whatever's going there is whatever I'm getting from that deposit. At time capital T, I gave them $1, and they start paying interest on it. What's going to go there? So if I gave them $1 at that initial time, so that $1 would have been part of y of 0. What did I get at a later time? e to the at. Now I'm waiting. I'm giving them the dollar at time capital T, and it starts growing. So what do I have at a later time, for t later than capital T? What has that $1 grown into? e to the a times the-- right, it's critical. It's the elapsed time. It's the time since the deposit. Is that right? So what do I put here? AUDIENCE: t minus capital T? PROFESSOR: t minus capital T, good. Apologies to bug you about this, but the only way to learn this stuff from a lecture is to be part of it. So I constantly ask you, instead of just writing down a formula. I think that looks good. So suddenly, what does this amount to at t equal capital T? Maybe I should allow t equal capital T. At t equal capital T, what do I have here? AUDIENCE: 1. PROFESSOR: 1. That's my $1. At t equal capital T, we've got $1. And later it's grown. So we have now solved. We have found the impulse response. We have found the impulse response when the impulse happened at capital T. That was good going. Now, I've given you my list of examples with the pause on the sine and cosine. I pause on the sine and cosine because one way to think about sine and cosine is to get into complex numbers. And that's really for next time. But apart from that, we've done all the examples, so are we ready? Oh yeah, I'm going to try for the big thing, the big formula. So this is the key result of section 1.4, the solution to this equation. So I'm going back to the original equation. And just see if we can write down a formula for the answer. So let me write the equation again. dy dt is ay plus some source. I think we can write down a formula that looks right. And we could then actually plug it in and see, yeah, it is right. So what's going to go into this formula? We got enough examples, so now let's go for the whole thing. So y of t, first of all, comes whatever depends on the initial condition. So how much do we have at a later time when our initial deposit was y of 0? So that's the one we've seen in every example. Every one of these things has this term growing out of y of 0. So let me put that in again. 
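Before the general formula, here is a quick numerical check of the impulse response just found (a sketch, not from the lecture; a = 0.5, T = 1, y(0) = 2 are arbitrary illustrative values): the formula y(0)e^(at) plus, after time T, e^(a(t-T)) is compared against simply marching the growth forward and adding the $1 deposit at time T.

```python
import numpy as np

# Impulse response for y' = a*y + delta(t - T); a, T, y(0) are illustrative values.
a, T, y0 = 0.5, 1.0, 2.0

def y_formula(t):
    # y(0) e^{a t}, plus e^{a (t - T)} once the unit deposit at time T has happened
    return y0 * np.exp(a * t) + np.where(t >= T, np.exp(a * (t - T)), 0.0)

# Crude check: march y' = a*y forward in small steps and add the $1 deposit at t = T.
dt, t, y = 1e-4, 0.0, y0
while t < 2.0:
    y += a * y * dt                    # ordinary growth at interest rate a
    if t < T <= t + dt:
        y += 1.0                       # the impulse: an instant deposit of 1 at time T
    t += dt

print(y, "  vs  ", y_formula(2.0))     # the two numbers should be close
```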
So the part that grows out of y of 0 is y of 0 e to the at. That's OK. So that's what comes from the initial deposit. So our money is coming from two sources, this initial deposit, which was easy, and this continuous, over time deposit, q of t. And I have to ask you about that. That's going to be like the particular solution, the particular solution that comes from the source term. This is the solution that comes from the initial condition. So what do you think this thing looks like? I just think once we see it, we can say, yeah, that makes sense. So now I'm saying what? If we've deposited q of t in varying amounts, maybe a constant for a while, maybe a ramp for a while, maybe whatever, a step, how am I going to think about this? So at every time t equal to s, so I'm using little t for the time I've reached. Right? Here's t starting at 0. Now, let me use s for a time part way along. So part way along, I input. I deposit q of s. I deposit it at time s. And then what does it do? That money is in the bank with everybody else. It grows along with everything else. So what's the growth factor? What's the growth factor? This is the amount I deposited at time s. And how much has it grown at time t? This is the key question, and you can answer it. It went in at time s. I'm looking at time t. What's the factor? AUDIENCE: Is it e to the a t minus s? PROFESSOR: e to the a t minus s. So that's the contribution to our balance at time t from our input at time s. But now, I've been inputting all the way along. s is running all the way from here to here. So finish my formula. Put me out of my misery. Or it's not misery, actually. It's success at this moment. What do I do now? I? AUDIENCE: Integrate. PROFESSOR: I integrate, exactly. I integrate. I integrate. So all these deposits went in. They grew that amount in the remaining time. And I integrate from 0 up to the current time t. So you see that formula? Have a look at it. This is a general formula, and every one of those examples could be found from that formula. If q of s was 1, that was our very first example. We could do that integration. If q of s was e to the-- anyway, we could do every one. I just want you to see that that formula makes sense. Again, this is what grew out of the initial condition. This is what grew out of the deposit at time s. And the whole point of calculus, the whole point of learning 18.01, the integrals part, is that integrals just add up. This term just adds up all the later deposits, times the growth factor in the remaining time. And as I say, if I took q of s equal 1-- the examples I gave are really the examples where you can do the integral. If q of s is e to the i omega s, I can do that integral. Actually, it's not hard to do because e to the at doesn't depend on s. I can bring an e to the at out in this case. That formula is just worth thinking about. It's worth understanding. I didn't, like, derive it. And the book does, of course. There's something called an integrating factor. You can get at this formula systematically. I'd rather get at it and understand it. I'm more interested in understanding what the meaning of that formula is than the algebra. The algebra is just a tool; understanding is the goal, and that's what I shot for directly. And as I say, the book also, in an early section of the book, uses this as practice in calculus. Substitute that into the equation. Figure out what dy dt is. And check that it works. It's worth actually doing; that's about the end of what you need to know from calculus. 
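Here is a sketch of exactly that check, not from the lecture: the formula y(t) = y(0)e^(at) + integral from 0 to t of e^(a(t-s)) q(s) ds is evaluated numerically for one illustrative source q(t) = cos t, and compared against a black-box ODE solver; a and y(0) are arbitrary, and SciPy's solve_ivp is used only as an independent reference.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check y(t) = y(0) e^{a t} + integral_0^t e^{a (t - s)} q(s) ds
# for one illustrative source q(t) = cos(t); a and y(0) are arbitrary.
a, y0 = 0.5, 1.0
q = np.cos

def y_formula(t, n=200_000):
    s  = np.linspace(0.0, t, n)        # deposit times s running from 0 up to t
    ds = s[1] - s[0]
    growth = np.exp(a * (t - s))       # growth factor for a deposit made at time s
    return y0 * np.exp(a * t) + (q(s) * growth * ds).sum()

# Independent reference: hand dy/dt = a*y + q(t) to a black-box ODE solver.
sol = solve_ivp(lambda t, y: a * y + q(t), (0.0, 3.0), [y0], rtol=1e-9, atol=1e-12)
print(y_formula(3.0), "  vs  ", sol.y[0, -1])   # the two values should agree closely
```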
You should be able to plug that in for y and see that it solves the equation. Right, now I have enough time to do cosine omega t. But I don't have enough time to do it the complex way. So let me do as a final example, the equation. Let me just think. I don't know if I have enough space here. I'm now going to do dy dt-- can I call that y prime to save a little space-- equal ay plus cosine of t. I'll take omega to be 1. Now, how could we solve that one? I'm going to solve it without complex numbers, just to see how easy or hard that is. And you'll see, actually, it's easy. But complex numbers will tell us more. So it's easy, but not totally easy. So what did I do in the earlier example if the right hand side was a 1, a constant? I look for the solution to be a constant. If the right hand side was an exponential, I look for the solution to be an exponential. Now, my right hand side, my source term, is a cosine. So what form of the solution am I going to look for? I naturally think, OK, look for a cosine. We could try y equals some number M cosine t. Now, you have to see what goes wrong and how to fix it. So if I plug that in, looking for M the same way I looked for capital Y earlier, I plug this in, and I get aM cosine t plus cosine t. But what do I get for y prime? A sine t. And I can't match. I can't make it work. I can't make a sine there match a cosine here. So what's the solution? How do I fix it? I better allow my solution to include some sine, plus N sine t. So that's the problem with doing it, keeping things real. I'll push this through, no problem. But cosine by itself won't work. I need to have sines there, because derivatives bring out sines. So I have a combination of cosine and sine. I have a combination of cosine and sine. So the complex method will work in one shot because e to the i omega t is a combination of cosine and sine. Or another way to say it is when I see cosine here, that's got two exponentials. That's got e to the it and e to the-- anyway. Let's go for the real one. So I'm going to plug that into there. So I'll get sines and cosines, right? When I plug this into there, I'll have some sines and some cosines, and I'll just match the two separately. So I'm going to get two equations. First of all, let me say what's the cosine equation? And then what's the sine equation? So when I match cosine terms, what do I have? What cosine terms do I get out of y prime, here? The derivative. Well, the derivative of cosine is a sine. That's not a cosine term. The derivative of sine is cosine. I think I get, if I just match cosines, I think I get an N cosine. N cosine t equal ay. How many cosines do I have from that term? ay has an M cosine t. I think I have an aM, and here I've got 1. That was a natural step, but new to us. I'm matching the cosines. I have on the left side, with this form of the solution, the derivative will have an N cosine t. So I had N cosines, aM cosines, and 1 cosine. Now, what if I match sines? What happens there? We're pushing more than an hour, so hang on for another five minutes, and we're there. Now, what happens if I match sines, sine t? How do I get sine t in y prime? So take the derivative of that, and what do you have? AUDIENCE: Minus [INAUDIBLE]. PROFESSOR: Minus M sine t. That tells me how many sine t's are in there. And on the right hand side, a times y, how many sine t's do I have from that? AUDIENCE: You have N t's. PROFESSOR: N, good thinking. And what about from this term? None, no sine there. So I have two equations by matching the cosines and sines. 
Once you see it, you could do it again. And we can solve those equations, two ordinary, very simple equations for M and N. Let's see if I make space. Why don't I do it here, so you can see it. So how do I solve those two equations? Well, this equation gives me-- easy-- gives me M as minus aN. So I'll just put that in for N. So I have N equals aM. But M is minus aN. I think I've got minus a squared N plus that 1. All I did was solve the equation, just by common sense. You could say by linear algebra, but linear algebra's got a little more to it than this. So now I know M, and now I know N. So now I know the answer. y is M, so M is minus aN. Oh, well, I have to figure out what N is, here. What is N? This is giving me N, but I better figure it out. What is N from that first equation? And then I'll plug in. And then I'm done. AUDIENCE: [INAUDIBLE]. PROFESSOR: 1 over, yeah. AUDIENCE: 1 plus a squared. PROFESSOR: 1 plus a squared, good. Because that term goes over there, and we have 1 plus a squared. So now y is M cosine t. So M is minus aN. So minus aN is minus a over 1 plus a squared, cosine t. Is that right? That was the cosines. And we had N sine t. But N is just 1-- I think I just add the sine t. Have I got it? I think so. Here is the N sine t, and here is the M cos t. It was just algebra. Typical of these problems, there's a little thinking and then some algebra. The thinking led us to this. The thinking led us to the fact we needed sines in there, as well as cosines. But then once we did it, then the thinking said, OK, separately match the cosine terms and the sine term. And then do the algebra. Now, I just want to do this with complex. So y prime equals ay plus e to the it. To get an idea, you see the two. And then I have to talk about it. You see, I'm only going to go part way with this and then save it for Wednesday. But if I see this, what solution do I assume? This is like an e to the st. I assume y is some Y e to the it. See, I don't have cosines and sines anymore. I have e to the it. And if I take the derivative of e to the it, I'm still in the e to the it world. So I do this. I plug it in. Uh-huh, let me leave that for Wednesday. We have to have some excitement for Wednesday. So we'll get a complex answer, and then we'll take the real part to solve that problem. So we've got two steps, one way or the other way. Here, we had two steps because we had to let sines sneak in. Here, we have two steps because I could solve it, and you could solve that right away. But then you have to take the real part. I'll leave that. Are there questions? Do you want me to recap quickly what we've done? AUDIENCE: Yes. PROFESSOR: I try to leave on the board enough to make a recap possible. Everything was about that equation. We have only solved-- I shouldn't say only-- we have solved the constant coefficient, model constant coefficient, first order equation. Wednesday comes the nonlinear equation. This one today was strictly linear. So what did we do? We solved this equation, first of all, for q equal 1; secondly, for q equal e to the st; thirdly, for q equal a step; fourthly for q equal-- where is it? Where is that delta of t? Maybe it's here. Ah, it got erased. So the fourth guy was y prime equal ay plus delta of t, or delta of t minus capital T. So those were our four examples. And then what did we finally do? So if we're recapping, compressing, we're compressing everything into two minutes. We solved those four examples, and then we solved the general problem. 
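As a postscript to that recap (not from the lecture): a short symbolic check that the solution just assembled, with M = -a/(1 + a^2) and N = 1/(1 + a^2), really satisfies y' = ay + cos t, plus a peek at the complex route saved for Wednesday. SymPy is used here purely as an illustrative tool.

```python
import sympy as sp

t, a = sp.symbols('t a', real=True)

# The particular solution assembled by matching cosines and sines:
#   M = -a/(1 + a**2),   N = 1/(1 + a**2)
y_p = (-a * sp.cos(t) + sp.sin(t)) / (1 + a**2)

# It should satisfy y' = a*y + cos(t); the difference simplifies to zero.
print(sp.simplify(sp.diff(y_p, t) - a * y_p - sp.cos(t)))      # -> 0

# The complex route saved for Wednesday: try Y*e^{i t} in y' = a*y + e^{i t},
# which forces i*Y = a*Y + 1, and then keep only the real part.
Y = 1 / (sp.I - a)
y_complex = sp.re(sp.expand_complex(Y * sp.exp(sp.I * t)))
print(sp.simplify(y_complex - y_p))                            # -> 0, the same solution
```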
And when we solved the general problem, that gave us this integral, and my whole goal was that this should seem right to you. This is adding up the value at time t from all the inputs at different times s. So to add them up, we integrate from 0 to t. And finally, we returned to the question of cos t, an all-important question, but an awkward question, because we needed to let sine t in there too. |
US_Government_and_Politics | Freedom_of_the_Press_Crash_Course_Government_and_Politics_26.txt | Hi, I'm Craig and this is Crash Course Government and Politics, and today we're gonna finish up our discussion of the First Amendment, finally, by talking about everybody's favorite: the press. The First Amendment is pretty clear that Congress can't make any laws abridging the freedom of the press, and since you understand the basics of free speech because you were paying attention, the reasons for this should make a lot of sense. But as with any discussion of the First Amendment, things aren't as straight forward as we might think, and the freedom of the press, just like the freedom of speech, is not absolute. [Theme Music] The main thing to know about the First Amendment and the press is that it prevents the government from censoring the press. For the most part, this means preventing the press from publishing some information in the first place, although it can also mean punishing a news agency after they published something. Let's deal with pre-publication freedom of the press first. Let's go to the Thought Bubble. Censorship of the press before a story is published in print, broadcast on television, radio or the internet, is called prior restraint, and the supreme court ruled that it was not allowed in a case called Near v. Minnesota. In that case, a newspaper called The Saturday Press was gonna publish a story that the city of Minneapolis was under the secret control of a cadre of Jewish gangsters, in particular the mayor and chief of police. City officials obtained an injunction to stop the publication of this story, and they gave The Saturday Press editors the opportunity to go before a judge to prove that the story was true. I'll get to this question of truth in a minute. The judge ordered the injunction and said that if the newspaper violated it, they would be punished for contempt of court. Instead, the newspaper counter-sued, claiming that Minneapolis and Minnesota were violating their freedom of the press. The supreme court agreed that no government was allowed to censor the press because a free press is essential for the political system to work. They based their decision on a lot of history, including Blackstone - the British legal authority which explained "The liberty of the press is indeed essential to the nature of a free state; but this consists in laying no previous restraints upon publications and not in freedom from censure from criminal matter when published." And they also relied on an important American authority on the constitution: James Madison - heard of him? - who derived a lot of his constitutional expertise from the fact that he wrote the thing. He said, "This security of the freedom of the press requires that it should be exempt not only from previous restraint by the executive as in Great Britain, but from legislative restraint also." Citizens need a free press to be able to criticize the government and to expose government wrongdoing because otherwise the government can get away with all sorts of things that we don't want it to, like say spying on us, and reading our email, and reading our spy's email! Of course, even with a free press, the government can do this, and what constitutes a press in the age of the internet is a debatable question. WikiLeaks, anyone? But the basic proposition that the press must be able to protect us against an over-reaching government still stands. Thanks Thought Bubble. 
There's another reason why the Court put the kibosh on prior restraint, and that's because if a newspaper prints something that is untrue about the government, or more practically, about a government official, there's a remedy for this. The person or agency about whom the untrue thing was said or written and published can sue the publisher for libel, and if he proves his case, can get monetary damages. This is supposed to prevent newspapers from flat out lying about public officials, but libel suits can cause another problem, in that they can basically end up being after the fact censorship. If a newspaper is so afraid of a libel suit that it decides not to publish a story, then it effectively censors itself. Sometimes courts call this a "chilling effect" and it applies to speech that people are afraid to make because of potential lawsuit or other punishment, as well as articles and news stories that go unpublished out of fear of potential punishment. Tell you what, I ain't afraid of punishment for that. I can do what I want! Freedom of speech! Luckily for us, the Court dealt with the libel issue in another landmark case, New York Times v. Sullivan from 1964. This case involved an advertisement in the Times that included some inaccurate statements about the way Alabama law enforcement was treating Civil Rights protesters including Martin Luther King Jr. The Montgomery Public Safety Commissioner, L.B. Sullivan thought these mis-statements amounted to libel and sued the Times. He lost at the Supreme Court, and they ruled that the standard for libel of a public figure was actual malice, which was my nickname in high school. This means that in order to win a libel case, you must prove that the publisher of the libelous statement knew that the statement was false and acted with reckless disregard, my friend's nickname in high school, for the truth. This is an almost impossible standard to prove, and what it means is that public figures almost never win libel cases. This goes a long way toward explaining some of outlandish things you read about politicians and celebrities in print, and I'm not even gonna begin to talk about some of what you can find on the Internet, like a bearded dude talking about government and punching eagles. Some argue that we shouldn't feel too bad about celebrities, and we should remember that they are celebrities and are usually doing alright for themselves. Unflattering publicity might simply be considered the price of fame. I'd point out that celebrities are human, too, except for Lil Bub, the only non-human celebrity, and probably don't like being libeled. I guess Jar-Jar Binks is another non-human celebrity, and he gets a lot of bad press, but he truly is terrible, so it's not libel. So it sounds like the First Amendment protection of a free press is pretty much absolute, but there are always exceptions that make things complicated. One of these exceptions is the question of national security. There are some security issues that are so important that the government is allowed to censor the press before they can print stories about them. The best example of this is that the government can prevent the press from printing detailed descriptions of troop movements during a war, because this would help the enemy and put soldiers' lives at risk. It's kinda like in the spy movies when the bad guys learn all the names and aliases of the secret agents, except it's real. Knowing this, most newspapers wouldn't print this sort of thing, at least while it's happening. 
But what about after the fact? Well, it gets complicated, but another Supreme Court case gives us some guidance about what to expect. In New York Times v. US -- why is it always the New York Times? -- the issue was whether or not the Times could publish the Pentagon Papers. These were secret documents, stolen from the government by Daniel Ellsberg, who had worked at the Defense Department. They showed that much of the government's reasoning behind the Vietnam War was untrue or at least highly questionable, hmm, I'm gonna go with untrue. The government tried to stop the Times and the Washington Post, too, from publishing these papers, because it would make the government look bad and perhaps turn public opinion against the war. Now, this was 1971, and a good deal of public opinion was kind of already against the war, so much so that Lyndon Johnson had decided not to run for re-election just a few years before in 1968. But the government said that publication of this classified report would cause irreparable harm to America's ability to defend itself, and they tried to stop the publication. The Court ruled against this prior restraint, further strengthening the First Amendment protection of the free press. It also slapped down the executive branch, which was trying to claim its privilege to keep state secrets. But we already mentioned this when talking about Nixon and his attempts to hold on to the Watergate tapes. Anyway, as you can see, the First Amendment offers a lot of protections to citizens in the press, especially when they're criticizing the government or its policies, or even when they're making fun of celebrities. This is really, really important, because American democracy relies on its citizens having enough information to make good decisions and hold elected officials accountable. We rely on the press to tell us what the government is doing so that we can decide whether or not we want to let them keep doing it. If the government can keep us from getting important or even not so important information by censoring the press or by preventing us from speaking out against what we see as wrong, it will be able to keep doing this that might be bad, and this is the kind of tyranny that the Framers of the Bill of Rights were most worried about. So the more you're concerned about tyranny, the freer you want speech and the press to be. This is something to think about when you engage in arguments about Edward Snowden and his NSA disclosures, or Julian Assange and WikiLeaks. Thanks for watching. I'll see you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course is made with the help of all of these free speakers. Thanks for watching. That guy speaks a little too freely, if you ask me. |
US_Government_and_Politics | How_a_Bill_Becomes_a_Law_Crash_Course_Government_and_Politics_9.txt | This episode of Crash Course is brought to you by Squarespace. Hi, I'm Craig, and this is Crash Course: Government and Politics, and today, I've got my work cut out for me because I'm going to try to do something that every single social studies teacher in the U.S. has tried to do, even though there is a perfectly good cartoon you could just show. It's from the '70s. It's catchy. It's fun. That's right, today we're going to learn how a bill becomes a law. But we're not going to be able to license the Schoolhouse Rock song. I'm just a bill, yes, I'm only a - you know what has a bill? An eagle. [Theme Music] Okay, I think the only way we're going to possibly be able to compete with Schoolhouse Rock is to jump right into the Thought Bubble with our own cartoon. And to stop talking about Schoolhouse Rock. So let's start at the very beginning, which in this case is a Congressman or a Senator introducing a bill. The real beginning is when he or she has an idea for a law. And even this might come from an interest group, the executive branch, or even the constituents. But the formal process begins with the legislator introducing the bill. After it's introduction, bill is referred to a committee. Although most bills can start in either house, except for revenue bills, which must start in THE House, let's imagine that our bill starts in the Senate, because it's easier. Congress has the power to make rules concerning the Armed Forces, so let's say this is a bill about naming helicopters. Anywho, this bill would be referred to the Senate Armed Services Committee, which would then write up the bill in formal, legal language, or markup, and vote on it. If the markup wins a majority in the committee, it moves to the floor of the full Senate for consideration. The Senate decides the rules for debate - how long the debate will go on and whether or not there will be amendments. An open rule allows for amendments and a closed rule does not. Open rules make it much less likely for bills to pass because proponents of the bill can add clauses that will make it hard for the bill's proponents to vote for. If opponents of our helicopter name bill were to add a clause repealing the Affordable Care Act or something, some supporters of the bill probably wouldn't vote for it. If a bill wins the majority of the votes in the Senate, it moves onto the House. Thanks Thought Bubble. We're going to have to go the rest of the way without fancy animation. But I could sing it. Laaaa- I'm not going to sing it. I'm not going to use a funny voice. The Senate version of the bill is sent to the House. The House has an extra step, in that all bills before they go out to the floor of the House must go to the Rules Committee, which reports it out to the House. If a bill receives the majority of votes in the House, 238 or more to be exact, it passes. YAY! Now, this is important. The exact same bill has to pass both houses before it can go to the president. This almost never happens though. Usually the second house to get the bill will want to make some changes to it, and if this happens, it will go to a conference committee, which is made up of members of both houses. The conference committee attempts to reconcile both versions of the bill and come up with a new version, sometimes called a compromise bill. Okay, so if the Conference Committee reaches a compromise, it then sends the bill back to both houses for a new vote. 
If it passes, then it's sent to the President. And then the President signs the bill, boom, done. That's the only option. Oh, no, there's two other options, actually. Option 2 is for him to veto the bill and we've gone through all of this for nothing. The 3rd option is only available at the end of a congressional term. If the President neither signs nor vetoes the bill, and then in the next 10 days, Congress goes out of session, the bill does not become a law. This is called a pocket veto, and is only used when the President doesn't want a law to pass, but for political reasons, doesn't want to veto it either. Congress can avoid this altogether by passing bills and giving them to the President before that 10 day period. If the President neither signs nor vetoes a law and Congress remains in session for more than 10 days, the bill becomes a law without the President's signature. So that's the basic process, but there is one wrinkle, or if you want to be all Madisonian about it, check, on the president's power. If Congress really wanted a bill and the President has vetoed it, they can override the veto if it gets a two-thirds majority in both houses on a second vote. Then the bill becomes a law over the President's veto. Aw snap! This is really rare, but it does happen once in a great while. The Taft-Hartley Act of 1947 passed over Truman's veto. I like to call it the Tartley Act. Shorten it. It's a portmanteau. It doesn't happen that often because if the President knows that two thirds of the Congressmen supported the bill, he won't veto it. And if Congress knows that they don't have two thirds support, they won't try to override the veto. Nobody wants to try something and fail in public, right? Except for me obviously, if you look at my other YouTube channel, WheezyWaiter. Eh. So there you have it, how a bill becomes a law. I'll admit, the process is a little cumbersome, but it's designed that way so that we don't get a lot of stupid or dangerous laws. Still this doesn't quite explain why so few laws get passed. Bills have a very high mortality rate, and it's way more common for a bill not to become a law than to become one. The main reason is that there are so many places where a bill can die. The first place that a bill can die is at the murderous hands of the speaker or majority leader, who refuses to refer it to committee. Then the committee can kill the bill by not voting for it at all. And if they do vote and it doesn't get a majority then the bill doesn't go to the floor, and it's dead. In the Senate the murderous leadership can kill a bill by refusing to schedule a vote on it. And any senator can filibuster the bill which is when he or she threatens to keep debating until the bill is tabled. It's a bit more complex than that, but the filibuster rules have changed recently, so hopefully we won't have as many filibuster threats in the future. The House doesn't have a filibuster but it does have a Rules Committee that can kill a bill by not creating a rule for debate. The entire House can also vote to recommit the bill to committee, which is a signal to drop the bill or change it significantly. And of course if either house fails to give a bill a majority of votes, then it dies. This applies to compromise bills coming out of conference committees too. Even if a bill gets a majority in both houses then there's that whole veto thing that the President can do. Remember? So, there are many more ways for a bill to be killed than to become a law. 
These hurdles are sometimes called veto gates. They can't call 'em Bill Gates because that's a person. Veto gates make it very difficult for Congress to act unless there's broad agreement or the issue is uncontroversial like naming a post office or thanking specific groups of veterans for their service, which are two things that Congress actually does pretty efficiently. Think of all the post offices that aren't named. You can't think of one, can you? Name it. You can't. It's not named. Veto gates are purely procedural, which means they don't draw a lot of attention from the media. The easiest way for Congress to kill bills is to simply not vote on them or even schedule votes for them. This way they don't have to go on record as being for or against a bill, just whether they support having a vote. And constituents rarely check up on this sort of thing. So I hope I managed to do a good job of both explaining how a bill becomes a law and why it's difficult for most bills to pass. And I hope I looked good doing it, as well. This might be frustrating but it's strangely comforting to consider that Congress and the government as a whole were designed to make it difficult to get things done. A single super-powerful executive like a king can be very efficient, but also tyrannical. We don't like tyrannical around here. The founders set up these structural hurdles of the bicameral Congress and the presidential role in legislation to reduce the likelihood that authoritarian laws would pass. Congress added procedural hurdles like committees and filibusters for the same reason. You can argue that Congress has become dysfunctional, but looking at the process of lawmaking, it's hard to argue that this isn't by design. So next time someone accuses you of being difficult, you just say, "I was behaving in a senatorial manner." Thanks for watching. I'll see you next episode Crash Course: Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with all of these nice people. Thanks for watching. |
US_Government_and_Politics | Public_Opinion_Crash_Course_Government_and_Politics_33.txt | Hello, I'm Craig, and this is Crash Course Government and Politics, and today we're going to begin our discussion of politics, rather than government. Aren't they the same thing, Stan? Aren't they the... They're not the same? Oh... I know some of you are saying that we've been talking about politics all along, and in a sense, that's true. But for the rest of the series we'll be looking more closely at policies and the factors that influence how they're made, rather than the institutions and structures that make them. One way to think about this is that "government" describes the what, the who, and the how of policies. And "politics" describes the why. Don't ask me about the where or the when journalism students. Actually, just don't ask me anything. Because I won't hear you. This is a YouTube video. Another way that I like to think about politics is that following it is like following sports. With any political event, whether an election, or a congressional vote, or a Supreme Court decision, you can spend time analyzing and predicting what might happen and then, after the fact, you can analyze why your prediction was correct, or way off base. Just like what happens before and after a big game, or race, or whatever you choose to follow. This is getting very conceptual, and today we're going to focus on one particular aspect of politics that looms large in America: Godzilla. No! Public opinion. [Theme Music] Public opinion can refer to a lot of things, but one useful definition is that it refers to "How a nation's population collectively views vital policy issues and evaluates political leaders." Public opinion matters in America, especially because it's a democracy, which classicists out there will know comes from the Greek word "Demokratia", which means ruled by the people. It's not a drug for balding men? No, that's something else. And anyone who's been forced to learn the Gettysburg Address knows, like Abraham Lincoln, America's is a government "of the people, by the people, for the people." So what the people think, especially about how the government should govern, matters. But it also raises some important questions. Namely: "How do the people express what they want?" "How does or should the government respond to the people?" And, the one we'll start with: "What if the people don't know what they want or are just plain ignorant?" The framers of the Constitution were somewhat skeptical of the ability of the average American to understand and influence public policy, so they gave Americans direct influence over only one part of the government: the House of Representatives. This view that the ignorant masses were not to be fully trusted with the hard work of governing won out over the Anti-Federalist view that more popular participation was better, but is it justified? Many people, including a lot of political scientists, say it's justified. Public issues are complicated, and many people, most of the time, are either uninterested or confused by them. This isn't necessarily a bad thing, especially for those who see disengagement from politics as an example of "rational ignorance." Given the high cost of being informed, it makes good sense to stay less informed. And there have been a number of books that show us just how uniformed Americans can be. 
The most notable was "The American Voter" in 1960, which showed us how little most Americans knew, or cared, about politics, and suggested that people's opinions were so changeable and random, that the authors concluded that "most people don't have real opinions at all." Wow. I have no opinion about that. Oh, and if you're thinking: "Well that's fine, but in 1960 Americans had so much less information available to them." "They didn't even have color then. And everyone wore hats. Everyone wore hats then!" Today we have the Internet and 24 hour TV news, but here's a statistic: In 1960, 47% of people were unable to name the member of the House who represented them. In 2010, it was 59%. On the other hand, there are political scientists who argue that looking at individual voters and their responses to questionnaires is the wrong way to go. For writers like Benjamin Page and Robert Shapiro, authors of "The Rational Public," the key is to look at collective opinion. If you take large numbers of Americans and aggregate their opinions you find that they are much more coherent and stable, and reflect reasonable judgements about politics and government. Next time you disagree with me and call me crazy, Stan, just aggregate my opinion. You will find it doesn't vary so much. Closely related to this idea of large groups of people basically getting things right about politics is Condorcet's Jury Theorem, which demonstrated that while one juror had only a slightly better chance of determining a defendant's guilt or innocence than a coin flip, a larger group of jurrors would produce a majority that would be more likely than not to get the case right. James Surowiecki summed it up well in his book "The Wisdom of Crowds", arguing that "Even if one voter does not have clear political views, a larger group, taken together, adds up to a rational public." So assuming, that like Lincoln, we actually want public opinion to influence government, we need to take into account a few things. First, we should have a reasonably good idea that the people know what they want. Second, the people should be able to communicate what they want to government officials. And third, the government should pay attention to the public's desires and respond accordingly. All three of these conditions can provide interesting problems of their own. Even if you agree with the rational public idea, and assume that the population as a whole does have coherent political views, the chances are good that what the public wants consists mostly of generalities, and are difficult to turn into actual policies. For example: after the 2008 financial crisis, there was a general anger with Wall Street banks, but different polls on the issue revealed no consensus about what to do about things like executive compensation, or regulating complex financial transactions. It's difficult to say that the resulting Dodd-Frank Bill represented an expression of the popular will. The public communicates what it wants in a number of ways. Most obviously: voting. But let's just say that people have other ways than election results of letting their voices be heard. Or their punches. But don't do that. That was just... that's a fake eagle. Don't worry about it. Even though politicians often claim that winning an election gives them a "mandate to govern," a quick look at the unpopularity of Obamacare suggests that an election win doesn't often translate into solid support for a candidate's policies. 
Sometimes its lack of support is due to the fact that politicians don't exactly respond to public opinion. National campaigns spend around 1 billion a year on polling, but it doesn't mean that politicians do exactly what the polling suggests, and they often deny that polls influence their decisions. Even as poll conscious a politician as President Clinton didn't always do exactly what the American people said they wanted. For example: in 1994 the public was solidly against a plan to bail out Mexico with a multi-billion dollar loan. But Clinton pushed through an executive order making the loan anyway, because his advisers said this was good economic policy. More often politicians use public opinion polling to shape their responses to issues, rather than defining the issue for the politicians, the polls are used to help them craft a message that will be more acceptable to the public. And public opinion polling certainly has a role in setting the policy agenda by informing politicians of the issues that seem to matter to Americans in the first place. So, in addition to voting and election results, polling is also a way Americans can let politicians know what they want. For instance: whether or not they approve of the President's performance, or of specific policies, like whether the government should allow an oil pipeline to be built. Politicians, and especially journalists, rely on these polls, but before you go jumping on that bandwagon there are a few things you should know about public opinion polling. And don't just go jumping on strange bandwagons. Let's go to the Thought Bubble. The first thing you remember when you hear or read some polling data is that there are lots of ways that polls can be wrong. So there are some questions you should ask before you accept the data. There are a lot of things that can skew the results of polls, some of which are obvious, and others which are more obscure. The biggest questions to ask about a poll is "How many respondents were there and how were they chosen?" It's impossible to get responses to any questions from all 320 million Americans, so pollsters rely on statistical sampling. In order to get a reliable sample, the magic number for pollsters is somewhere between 1,000 and 1,500. The smaller the number, the less reliable the results are likely to be. A poll that's based on a sample that's too small may suffer from a "sampling error". You can sometimes deduce the size of a poll sample from its margin of error. A poll with a small sample will have a large margin of error. In general, for national public opinion polls the margin of error will be plus or minus three points. This means that if the poll says that 53% of people support "Policy X," it's better to say that between 50 and 56% of respondents supported it. But that's just a little math. For fun! Polling organizations like Harris, Pew, and Gallup also strive to make sure that the respondents are a representative sample, free from "selection bias." Selection bias occurs when the people polled are not a representative sample of the population. Say if they're disproportionately white, or rich, or Bronies. The classic example of a selection bias error was the 1936 Literary Digest poll that predicted Alf Landon would defeat F.D.R. It turns out that Literary Digest's readership were disproportionately wealthy and Republican. 
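Before more selection-bias examples, the "little math" behind that plus-or-minus three points is worth one quick sketch (not from the video; it assumes simple random sampling and the usual 95 percent confidence convention).

```python
import math

# Rough 95% margin of error for a simple random sample of size n,
# evaluated at the worst case p = 0.5 (all of this assumes simple random sampling).
def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 1500):
    print(n, "respondents -> about +/-", round(100 * margin_of_error(n), 1), "points")

# n = 1000 gives roughly +/- 3.1 points, which is why "53% support" is better read
# as "somewhere between roughly 50% and 56%".
```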
Another more recent source of selection bias is that polls which rely on random digit dialing of land line phones tend to under count younger people, many of whom have only cell phones. Selection bias is a particular problem with online polls. Anyone who takes an online poll has by definition logged into a website and is therefore not randomly selected. Although news organizations like to report their own polling, CNN, I'm looking at you... you should take these poll numbers with a boulder of salt. Thanks, Thought Bubble. In addition to demographic factors like age, ethnicity, race, and income level, all of which can influence polls, when the questions are asked matters a lot. Sometimes these two factors interact. A poll taken on a Friday evening is likely to include a lot fewer young people responding to it. Especially me, because every Friday night I like to go out and get my swerve on. Which implies that I don't go out, and I haven't gone out since 2003. More significant in terms of election polling is how close the poll was to the actual election. The closer the poll, the more accurate. Polls taken immediately after the election, called "exit polls," can be very unreliable. And polls taken a few days after the election have limited predictive value. In fact, just get over it. The elections over. Just stop polling. One of the most important ways that polls can skewed is through the questions themselves. Ambiguous or poorly worded questions can result in a failure to identify the true distribution of opinion in a target population. Quick poll: do you not, not, not, not unlike Crash Course? Or me as a host? Let me know in the comments. The way questions are framed can change the results of polls. For instance, respondents are much more favorable to policies that "promote free trade" than those which "destroy American jobs". So I want to leave you with the question we started with: In an American democracy, how much should public opinion matter in terms of the way the country is actually governed? Has your answer changed now that you have more of a sense of how informed, or uninformed, Americans are about politics? Did you even have an answer before? Are you even listening? And if you think that politicians are right to respond to the public's desires, are you convinced that our public leaders have a good sense of what Americans really want? I'd be interested to know if your own opinions on these questions change over time. But polling's expensive, so just let us know in the comments. Thanks for watching. See you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of all these pollsters. Thanks for watching. |
US_Government_and_Politics | Legal_System_Basics_Crash_Course_Government_and_Politics_18.txt | Hi, I'm Craig and this is Crash Course Civics and today we're gonna look at the basics of a system that affects all our lives: the law. And no, we're not going to be talking about the laws of thermodynamics. That's Hank's show. Though we will be bringing the heat, ha! The law affects you even if you never committed a crime because there's so much more to the legal system than just criminal justice, and even though we're going to focus mainly on courts, the law is everywhere. If you don't believe me, read the user license on your next new piece of software, or if you fly anywhere read the back of your plane ticket. Hopefully it won't be more entertaining than what you're watching now, but those are examples of the law. [Theme Music] In general, courts have three basic functions, only one of which you probably learned about in your history class. The first thing that courts do is settle disputes. In pre-modern history (which is way easier to understand than post-modern history), kings performed this function, but as states got bigger and more powerful it became much easier to have specialized officials decide important issues like who owned the fox you caught on someone else's land. Or what does the fox say, which was disputed a lot back then. The second thing the courts do is probably the one you heard about in school, or on television, or perhaps while studying for the standardized test, and that's interpret the laws. This becomes increasingly important when you actually try to read laws, or when you realize that legislators are often not as careful as they might be when writing laws in the first place. Take a look at the Affordable Care Act. There are a few famous careless errors in that. Finally courts create expectations for future actions. This is very important if you want to do business with someone. If you know that you'll be punished for cheating a potential business client, you're less likely to do it. Still you might, 'cause there are a lot of jerks out there who would. Are you one of them? Don't be! At the same time if you know that people will be punished for cheating you, you're more likely to do business. And it's courts that create the expectation that business will be conducted fairly. Interpreting the laws can help this too, since the interpretations are public and they set expectations that everyone can understand and know what the law means and how it applies and then world peace. No more law breaking ever. The first thing to remember about courts in the U.S. is that most legal action, if it occurs in court at all, occurs in state court. And if it occurs at night, it occurs in Night Court. Because this is mainly a series about federal government, and not Indiana government or sitcoms about court in New York, I'm going to focus mainly on the federal court system which has four main characteristics. One, the federal court system is separate from the other branches of government. The executive could do the job, just like kings used to, but we have separation of powers so we don't have to be at the mercy of kings. Have you seen Game of Thrones? Two, the federal courts are hierarchical, with the Supreme Court at the top and turtles all the way down. Nope -- not turtles -- sorry I meant lower courts. What this means is that when a lower court makes a decision it can be appealed to a higher court that can either affirm or overturn the lower court's decision. 
The third feature of federal courts is that they are able to perform judicial review over laws passed by Congress and state legislatures, and over executive actions. And the fourth aspect of federal court system is that you should know that the federal judges are appointed for life, and their salaries can't be reduced. This is to preserve their independence from politics. Sounds like a pretty sweet deal. Remember when I told you that the legislature makes the laws? Well, that was true, but it's also not the whole story. Legislatures both state and national make laws and these written laws are called statutes. In continental Europe those are pretty much all the laws they have. Statutes. Statutes everywhere! And statues. That place is filled with art. They had the Renaissance there, y'know? But in the U.S. and England, which is where we got the idea, we have something called common law, which consists of the past decisions of courts that influence future legal decisions. The key to common law is the idea that a prior court decision sets a precedent that constrains future courts. Basically if one court makes a decision, all other courts in the same jurisdiction have to apply that decision, whether they like it or not. The collection of those decisions by judges becomes the common law. I don't have to have a reason to punch the eagle. I should probably point out what courts actually do and explain that there are two different types of courts that can make civil law. What differentiates the two types of courts is their jurisdiction, which basically means the set of cases that they're authorized to decide. Trial courts are also called courts of original jurisdiction. These are the ones you see on TV and they actually do two things. First, they hear evidence and determine what actually happened when there's a dispute. This is called deciding the facts of the case. Not everything that happened or that may be important qualifies as a fact in a court case. Those are determined by the rules of evidence, which are complicated and would really slow down an episode of Law and Order. After the trial court hears the facts of a case it decides the outcome by applying the relevant law. What law they apply will depend on statutes and in some cases what other courts have said in similar situations. In other words the common law. You might have noticed that I've been referring to courts, not judges or juries, because not all trials have juries. Bench trials have only a judge who determines the facts and the law. Besides, who decides what in a court case isn't really that important. More than 90% of cases never go to court by the way, they just get settled by lawyers out of court. But say you actually go to court and you lose. Naturally, you'd be upset. Especially if you're a sore loser, like me. Shut up. You have a choice. You can give up and go back to your normal, loser life or you can appeal the trial court decision to a higher court. An appeals court that has, you guessed it, appellate jurisdiction. Did you actually guess that? That'd be amazing. Appeals courts don't hear facts -- who wants those -- they just decide questions of law so you don't have to bring witnesses or present evidence, just arguments. In most cases, if you want to bring a successful appeal, you need to show that there was something wrong with the procedure of your trial. Maybe the judge allowed the jury to hear evidence they shouldn't have heard, maybe one of the jurors was a cyborg. 
Here's the way that these courts connect to what I was saying before about common and statutory law. Most common law is made by appeals courts. And because appeals courts have larger jurisdiction than trial courts, appeals decisions are much more important than trial court decisions. So now I'm going to talk about the three types of law, and it's gonna get confusing. We should probably go to the Thought Bubble for some nice, compelling, intriguing animations. The two main types of law are basically the Bruce Banner of law. They're the criminal law and civil law, but they can sometimes morph into the Incredible Hulk of laws: public law. "Public law, smash abuse of government authority!" If you watch TV or movies, or read John Grisham novels, you're probably familiar with criminal law. Criminal laws are almost always statutes written by legislatures, which means that there is an actual law for you to break. In most states the criminal laws are called the penal codes. In a criminal dispute -- and it's a dispute because the government says you broke the law and you will say you didn't -- the government is called the prosecution and the person accused of committing the crime is called the defendant. Almost all criminal cases happen at the state level, and for this reason it's hard to know exactly what is or what is not a crime in each state. Although murder is a crime everywhere. There are also some federal crimes like tax evasion, mail fraud, and racketeering. If you're suing someone or being sued, you're in the realm of civil law. Civil cases arise from disputes between individuals, or between individuals and the government, when one party, the plaintiff, claims that the other party, the defendant, has caused an injury that can be fixed or remedied. If the plaintiff proves his or her case, the defendant must pay damages. If you lose a civil case you don't go to prison or jail in most circumstances, but you may end up losing lots of money, and that sucks. I love money. Cases about contracts, property, and personal injuries, also called torts, are examples of civil law. So under certain circumstances a civil or criminal case can become public law. This happens when either the defendant or plaintiff can show that the powers of government or the rights of citizens under the Constitution or federal law are involved in the case. Also if the law gets exposed to gamma rays. "Law, smash!" For example, in a criminal case where the defendant claims that their civil rights were violated by the police, the decision can become public law. Thanks, Thought Bubble. So those are the basics of the court system in the U.S. And you can see that there's a lot to keep straight. There are types of courts, basically trial courts and appeals courts, on both the state and federal level. And there are types of laws, basically statutory and common laws. The fact that we have both state and federal statutory law is an example of federalism in action. The U.S., unlike most other nations, has both statutory and common law, but most of the time when we're talking about federal laws we're in the realm of statutes, or maybe the Constitution. When you study American government, most of the cases you read about are examples of appeals and of public law. How this all works in practice is even more complicated. And the adaptability of the American legal fabric allows statutes to stretch to fit the growing and changing American society. Much like Bruce Banner's incredibly elastic pants. Thanks for watching. I'll see you next time. 
I'm getting angry! Oh no! Ahhhh! I'm not wearing elastic pants! Oh no! Ahhhhh! Crash Course: Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course Government comes from Voqal. Voqal supports non profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course is made with the help of these Incredible Hulks. Thanks for watching. Rarrrr! |
US_Government_and_Politics | Introduction_Crash_Course_US_Government_and_Politics.txt | Hi, I’m Craig. I’m not John Green, but I do have patches on my elbows, so I seem smart. And this is Crash Course Government and Politics, a new show, hurray! Why are fireworks legal or illegal? We might find out. Will we find out Stan? Anyway, I have a question for you. Have you ever wondered where your tax dollars go or why people complain about it so much? Or who pays for the highway that runs past your house? Or why you use the textbooks you use in science class? Or why you need a license to drive, or to hunt or to fish or to become a barber? I’ve always wanted to cut my own hair, back when I had it. Have you ever wondered why you have to be 21 years old to drink alcohol but only 18 to vote? Or gamble. Sometimes voting is a gamble - actually always. Do you get confused when you hear people talk about news about Wall Street regulations, or Obamacare, or the national debt? Do you wonder why there are so few cell phone carriers and cable companies? How about why it’s ok for student groups to lead prayers in schools but not for the principal to do so? Have you ever wondered if there are any limits on when, where, and how the police can search your home, or your car, or your locker, or you, or your friend, or your grandma, or your grandma’s friend? And do you know why you can stand outside a government office with a sign and a bullhorn complaining about military action that you think is unfair and the police can’t stop you, but you can be fired from your job for doing the exact same thing? Have you ever been sued? Or fined? Ever wonder what the difference is between being sued and being fined? Have you ever wondered why the government does the things it does and why it doesn’t do other things? Have you ever wondered what it would be like if we had no government at all? That would be anarchy. Can we play the Sex Pistols, Stan? That’s probably illegal. Why is it illegal? And probably the most important, have you ever thought about how you can change the things that seem unjust or unfair or that you just don’t like? Ok so that was more than one question, and obviously there isn’t a single answer to all of those questions, except in a way, there is. The study of government and politics. And that’s what we’re going to talk about today, and this whole series: Crash Course Government and Politics - aptly titled. [Theme Music] So let’s start by doing what human beings do when confronted with complicated questions they can’t answer. We’ll answer a simpler one. In this case, what are government and politics and why do I need to learn about them. Government is a set of rules and institutions people set up so they can function together as a unified society. Sometimes we call this a state, or a nation, or a country, or Guam. And I’ll use these terms somewhat interchangeably - except for Guam, that might be a little confusing. So, we study government in order to become better citizens. Studying government enables us to participate in an informed way. Anyone can participate, but doing so intelligently that takes a little effort, and that’s why we need to learn about how our government works. Politics is a little different. Politics is a term we used to describe how power is distributed in a government. And in the U.S it basically describes the decisions about who holds office and how individuals and groups make those decisions. 
Following politics is a lot like following sports in that there is a winner and a loser and people spend a lot of time predicting who will win and analyzing why the winner won and the loser lost. The outcome of an election might affect your life more than the outcome of a sports game though. Unless you’re gambling - which might be illegal. Government is really important. Everyone born in America is automatically a citizen, and many people choose to become citizens every year so that they can have a say in the government. The USA is a republic, which means that we elect representatives to govern us, and a democracy, which means that citizens are allowed to participate. This ability to participate is something we take for granted, but we shouldn’t. History tells us that that citizen participation is the exception rather than the rule. But we’re not going to look at history. Who has time? That’s what history courses are for with that other guy. So one way people can participate in government is through voting. And many people will tell you that that’s pretty much the only way we can participate in government and politics, but THEY’RE WRONG. And I love pointing out when people are wrong. Let’s go to the Thought Bubble. Sure, when you mark a ballot, you are participating in the political process, but there are so many other things you can do to be an active citizen. You can contact your representatives and tell them what you think about a political issue. People used to do this by writing letters or sending telegrams, but now they tend to call or send email, although there’s nothing like a good old-fashioned angry letter. People can work for campaigns or raise money or give money. They can display yard signs or bumper stickers. They can canvass likely voters, try to convince them to vote or even drive them to the polls on election day. You participate in politics when you answer a public opinion poll. Or when you write a letter to the editor or comment on an online article. You participate in politics when you blog, or tumbl, or make a YouTube video, or tweet. I guess even YouTube comment counts. First! Ever been to a march or a rally or held a sign or worn a t-shirt with a slogan on it, or discussed an upcoming election at the dinner table and tried to convince your parents who to vote for? You’ve participated in the political process. And if you’ve actually run for office you’ve participated, even if you didn’t win, and if you did win, congratulations, now get back to work. You should already know this. But probably the most important thing you can do to participate in government and politics is both the easiest and the most challenging. Become more educated! Anyone can be a citizen, but to be a good citizen requires an understanding of how government works, and how we can participate. It requires knowledge and effort and we have to do it because otherwise we end up being led rather than being leaders. We learn about politics because knowledge is our best defense against unscrupulous people who will use our ignorance to get us to do things that they want rather than what we think should be done. Thanks, Thought Bubble. That was my first Thought Bubble narration! Hurray! You guys are fun. This is fun. So that’s where we come in. Over the course of this series we will be looking in depth at American government and politics. 
We’ll be talking about stuff like the structure and function of the branches of government, the division of power between the national government and the state governments, what political parties are, what they do, and how they are different from interest groups. We’ll examine the role the media plays in government and politics, how the legal system and the courts work and how they protect civil rights and civil liberties. We’ll look at political ideologies: what it means when you say you are a liberal or a conservative or a libertarian or a socialist or an anarchist – okay we probably won’t talk about anarchy because that’s sort of the rejection of government. Again, Sex Pistols, Stan? Can’t... copyright issue. I’ll take care of it. ANARCHY - WOOO! I’ve been known to do that from time to time. We’ll try to understand the forces that are shaping American government and politics today. And we’ll work towards becoming more involved and developing our knowledge so that we make our government more responsive and our politics more inclusive. By the end of this series – and actually before the end – you will understand how our government works and how you can make it work better for you and your community. Not only will you be able to answer most of the questions I started this episode with, but you will become, if you pay attention and think for yourself, a more engaged and active citizen. And you might have a beard - if you don’t shave. Next week we’ll talk about Congress, how it works, and what it does, when it does anything. Thanks for watching, I’ll see you next week. And that’s my first Crash Course episode! Are we out of poppers Stan? I’ll just throw ‘em… wooohoo! Bang! Wooo! Bang! Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course U.S. Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course was made by all of these nice people. Thanks for watching. Can we call Craig Course, Stan? No? Crash Course Craig? ...Can't. |
US_Government_and_Politics | Government_Regulation_Crash_Course_Government_and_Politics_47.txt | Hello, I’m Craig and this is Crash Course Government and Politics and today I’m going to talk a bit more about economic policy. Ran into the table there a little bit. Whoo! Economic policy can be dangerous. Specifically, we’re going to look at some of the broad goals of economic policy and some of the things that the government does to try to accomplish those goals. And we may even provide some examples of times when the government DID accomplish them, so take that, skeptics. But, I have to admit, a lot of the time the goals are just goals. [Theme Music] So all people have goals and aspirations (except me) and the government, since it’s made up of people, is no different. Well, I do have one goal: to punch the eagle again. And I did it. Accomplished. Well, actually the government's different because its economic goals are much bigger and more important than, say, my goal of punching the eagle again. Although I would argue my goal is pretty important. So what are these goals of economic policy? The first goal is promoting stable markets. We talked about how the government structures the market system in the last episode, so I probably don’t need to repeat it. At least I hope I don’t. You should’ve been paying attention. But since nobody wants a malfunctioning market, most of the things the government does to create a market system also work to make the system stable and predictable. Maintaining law and order and minimizing monopolies are examples of government actions that make the market system stable. I didn’t know the government maintained Law and Order – oh, not the TV show, OK. One of the more interesting ways – OK, interesting to me – that the government keeps markets predictable is through national regulations of things like automobile fuel efficiency standards. If there were no national regulations, and states were allowed to set the rules, then it might be possible for car makers in Detroit to build cars that live up to the mileage standards in Michigan, but not in California, and that would be anarchy. Well, maybe not anarchy exactly, but it wouldn’t be good, and it’d make it much more difficult for manufacturers to know what kind of cars to make. Also, do you really want California, the state with the biggest population, making rules for the rest of us? Of course you don’t. The second major goal of economic policy is promoting economic prosperity. Here’s another example of a situation where many people will tell you that the best way for the government to promote prosperity is to get out of the way, and they may have a point, but the government doesn’t stop trying. So what does the government do to promote prosperity? For one thing, it tries to keep a positive investment climate and build confidence in the economy. One way the federal government can accomplish this is by regulating financial markets through the Securities and Exchange Commission, since people won’t want to invest in the securities markets if they think the game's fixed. Another thing the government can do, if it’s feeling particularly Keynesian, is to spend money on public investment in things like highways and the internet. While not actually built by Al Gore, it did begin with a government program out of the Defense Department. 
The government also pays for research through the National Institutes of Health and the National Science Foundation, and enhances the workforce through education policy and immigration policy, all of which contribute to national prosperity. Another, and by no means the last, way that the government can try to make the country more prosperous is by keeping inflation low. You can find out more about inflation from Crash Course: Economics, but the main tool the government uses to control inflation is the Federal Reserve, which is so complicated that it gets its own episode. A third goal of government economic policy, one closely related to the first two, is promoting business development. Many people would probably argue that promoting business development and promoting prosperity are the same thing, but policies aimed at helping businesses are slightly different and more focused than those targeting the broader goal of promoting prosperity. The main ways that the federal government promotes business development are through tariffs and subsidies. Since the Great Depression, the U.S. has pretty much pursued a policy of free trade, which means lowering tariffs on most things, which, by forcing them to compete, can hurt businesses, at least in the short run. In the past, however, high tariffs allowed American businesses to develop free from foreign competition, and this helped to make the U.S. the most powerful industrial nation in the world! Can we use that Libertage from US History? I think Yes! [Libertage] Subsidies are very controversial and they come in two forms. Grants-in-aid for things like transportation – building those superhighways again – provide an indirect subsidy to businesses, which don’t have to pay for the roads they use to ship the goods they make. Most people don’t complain about this type of subsidy, because they can also be looked at as a public good. Direct subsidies are another issue. These include direct assistance to businesses through the Small Business Administration and government investment in firms like Sematech and, more recently and more controversially, Solyndra. Many people don’t think that the government should be in the business of investing in business and that these subsidies provide the businesses that receive them with an unfair advantage. Farm subsidies are probably just as controversial. They were put in place to help farmers during the Great Depression, but these days, critics worry that most of the subsidies go to corporate farms. The fourth goal of government economic policy is to protect consumers and employees. A lot of people will tell you that the federal government doesn’t do much to protect employees these days, and those people are probably right, but in the past it certainly did. The government made unionization easier with the National Labor Relations Act and set labor standards, especially overtime rules, with the Fair Labor Standards Act. Both of these were passed in the 1930s, by the way. Probably the most notable thing that the government does to protect workers these days is set the federal minimum wage, but since that topic is being hotly debated as this episode is being produced in 2015, I can’t really comment on how it’s going to turn out. On the other hand, the Occupational Safety and Health Administration does set up regulations to prevent workers from breathing in hazardous fumes and protect them from other potentially life-threatening workplace conditions, and that’s a good thing. 
As far as consumers are concerned, there are thousands of regulations that protect us to make sure that the things we buy don’t kill or maim us. The Food and Drug Administration makes sure that our medicines aren’t poison, and the Department of Agriculture inspects meat, which I think is a really good idea, actually. The National Traffic and Motor Vehicle Safety Act of 1966 made cars safer, and the Consumer Product Safety Commission helps keep lead paint out of our toys and saves us from exploding toasters. I like explosions as much as the next guy, but not with breakfast. All of these goals of economic policy – promoting stable markets, promoting economic prosperity, fostering business development, and protecting employees and consumers – are interrelated and important. I’ll leave it up to you to decide if one is more important than the other three, because that makes for excellent dinner conversation. If your dinner parties are mostly about the role the government plays in our economy. Please invite me to those dinner parties. I’m hungry, for roast beef and political debate. So, to shift gears a little, let’s talk history, and how the government’s role in regulating the economy has changed in the last 240 years or so. So you probably remember from back when we talked about the transition from congressional to presidential government that began with Teddy Roosevelt and really came into its own with Franklin Roosevelt, that before the 20th century the federal government didn’t really do that much. A lot of that has to do with fiscal policy and taxation, which we’re going to discuss in another episode, and maybe that dinner you’re going to invite me to, but some of it was certainly because of the way that the Supreme Court had interpreted the Commerce Clause to mean that government regulation was suspect, and by suspect, I mean generally not allowed. But by the end of the 19th century the Federal government’s regulatory power had begun to change, and a lot of that has to do with one of my favorite subjects - no, not Star Wars. And no, not the protection of endangered species. (punches eagle) I’m talking about railroads (Yeah!). Let’s go to the Thought Bubble. So, with the completion of the transcontinental railroad in 1869, travel and communication across the U.S. became much easier, and it was possible for the first time to have a national market for goods. If you raised cattle in Kansas, you could now easily ship beef to New York or San Francisco. Railroads were, almost by definition, interstate entities, so it was pretty clear that Congress could regulate them. And they needed regulation because railroads had a nasty habit of discriminatory pricing, charging some shippers much, much more than others. Something had to be done, and Congress stepped in with the Interstate Commerce Act in 1887, which created the Interstate Commerce Commission to regulate railroads. The period of time around the turn of the 20th century in the U.S. is known as the Gilded Age and is associated with runaway capitalism and the creation of modern corporate structures and industrial capitalists like Andrew Carnegie – or Carnegie, if you will – and John D. Rockefeller, who are heroes to some and villains to others. In response to some of the abuses of the Gilded Age, Congress passed its first wave of regulatory legislation. In addition to the ICC, Congress created the Federal Trade Commission to regulate trade and passed the Sherman and Clayton Acts to try to counter the problem of monopolies. 
These anti-trust laws are the basis of modern anti-trust regulation and have been used against Standard Oil and Microsoft. This first wave of economic regulation didn’t have huge effects on the economy, certainly not greater than the effects of, say World War I. In the 1920s the federal government returned to a more traditional laissez faire approach, which lasted until the Great Depression swept Herbert Hoover and the Republicans out of office and Franklin Roosevelt into it. And with Franklin Roosevelt came the New Deal and the advent of what law schools sometimes like to call the administrative and regulatory state. Thanks Thought Bubble. We’re not going to get into details about the various laws and regulations of the New Deal here, but luckily I think John talked about them in Crash Course: U.S. History. John, he talks about stuff. But in general, those regulations meant that the federal government would take an active role in regulating certain sectors of the economy, like agriculture and transportation. Sometimes technology played a part. There really wasn’t a need for a Federal Aviation Administration until there were airplanes. The next big wave of government regulation happened in the early 1970s under, of all people, president Nixon. These new regulatory laws were different from their New Deal predecessors in that they focused on the economy as a whole. For example the Occupational Safety and Health Administration dealt with ALL occupations, or at least most of them, and the EPA was created to protect the whole country’s environment. Beginning in the 1980s with Ronald Reagan, or actually before him under Carter, the federal government has undertaken various initiatives to de-regulate the economy, but we already talked about deregulation in our episode on taming the bureaucracy so we don’t need to re-hash that here. The point to remember is that, despite attempts at deregulation, the administrative regulatory state appears to be here to stay. So why do we have an administrative regulatory state now, even though so many people complain about it? Part of the reason has to do with the remarkable staying power of bureaucracies, which are harder to kill than Wolverine. Nowadays the federal government not only has economic goals, goals like increasing prosperity that most of us agree upon, it also has a sense, maybe even a belief that it should try to achieve those goals. This is a long way from the view of the federal government that persisted through the 19th century, one which many people say was handed down by the framers. But times change, and the world and the U.S. has gotten much more complex. Economic concerns take up an increasingly large part of our lives and many of them, especially big macroeconomic policies require big solutions. And for many Americans, but certainly not all of them, the best solution we have is government. Thanks for watching. See you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course: U.S. Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course was made with the help of all these occupational safety and health hazards. Thanks for watching. |
US_Government_and_Politics | Affirmative_Action_Crash_Course_Government_and_Politics_32.txt | Hi, I'm Craig and this is Crash Course Government and Politics, and today, I'm gonna finish up our episodes on civil rights by talking about affirmative action. There's a few things I'm not gonna do in this episode, though. First, I'm not gonna try to defend all aspects of affirmative action; I admit it's a problematic concept. Second, I'm not gonna say that affirmative action isn't necessary or that it's racism; I'm pretty sure that that debate will go on in the comments. What I am gonna do is define affirmative action, describe how the courts have dealt with it, and try to explain why it has existed and continues to exist. [Theme Music] So let's start with the easy part and define affirmative action. Affirmative action is a government or private program designed to redress historic injustices against specific groups by making special efforts to provide members of these groups with access to educational and employment opportunities. I like this definition because it also explains why affirmative action exists - to redress historic injustices, which means discrimination. The key aspect of affirmative action is that it provides special access to opportunities, usually in education and employment, to members of groups that have been discriminated against. Now, where affirmative action gets controversial is when you look at the two ideas of access and opportunity. When you poll Americans, they generally favor equality of opportunity, although they usually don't like it when the government tries to promote equality of outcomes, usually by redistributing wealth, but I'm getting ahead of myself. This means that Americans generally think that other Americans should have an equal shot at success even though they don't imagine that all Americans will be equally successful. Not all of us can be Donald Trump, although not all of us want to be. Since we tend to believe in the USA that education and jobs are the keys to success, equality of opportunity is tied up in access to these two things, and that's why they are the focus of affirmative action efforts. Here's where it gets tricky. In order to increase access to education and job opportunities for members of groups that are historically discriminated against, affirmative action programs try to ensure that they get extra special access to jobs and schools, which, to many people, is not equality of opportunity. Legal types often will use the metaphor of a thumb on the scale to describe the added benefits that affirmative action programs supposedly provide, but we could also see it as a head start in a foot race, which is the metaphor I prefer for reasons I'll explain in a bit. But first, let's go to the Thought Bubble. So while affirmative action started with LBJ ordering government agencies to pursue policies that increase the employment of minorities in their own ranks and in soliciting contracts, the first time it made a splash at the supreme court was over the issue of university education. Specifically, in the landmark case of Regents of the University of California versus Bakke in 1978, the court ruled on the issue of racial set-asides, or quotas, in admissions at the University of California, Davis medical school. Of the 100 slots available to incoming med students, 16 were set aside for racial minorities. Bakke claimed that this meant that some people who were less qualified than he was, at least he felt so, got into Davis med school and Bakke didn't. 
So he sued, claiming that the quotas discriminated against him because he was white. The supreme court ruled in Bakke's favor, saying that racial quotas were not allowed since they didn't provide equal opportunity, but they also ruled that affirmative action programs were allowed if they served a compelling government interest, and were narrowly tailored to meet that interest. In other words, if they'd passed the test of strict scrutiny. One of the more interesting things about this decision is the kind of stuff the court said constitutes a compelling government interest. They rejected the idea that righting historical wrongs was something that the government should undertake, probably because it opens up all kinds of historical cans of worms, especially the question of who decides when and if a historical wrong has been redressed. What they did say was that compelling government interest was ensuring diversity in university admissions. This is true in general, and as long as we can imagine there being universities, the state has an interest in seeing that their classes represent diverse viewpoints. Diversity benefits both the members of the minority and majority groups, at least in the minds of the court. Thanks, Thought Bubble. This is just a pretty serious video I don't know when I was gonna get that eagle punch in so I just did it there. The early 1970's were the high tide of affirmative action in the U.S, and ever since then the courts have looked less favorably at affirmative action claims. Because they apply strict scrutiny, most affirmative action claims are struck down. This was clarified in the case of Adarand Constructors Inc. versus Peña in 1995 which dealt with racial preferences in the hiring of subcontractors on government projects. Although this case meant that the government was not supposed to give preferential treatment to minority-owned businesses, or those that employed a large number of minorities, a government report from 2005 found that at least as far as the federal agencies were concerned, the practice was still widespread. In most of the cases it hears, the court has struck down affirmative action provisions because they fail one or another of the strict scrutiny tests, but the basic idea that universities can create programs to build and maintain a diverse student body has been upheld. Two relatively recent cases involving the University of Michigan show how complicated it can be. In the 2003 case of Gratz versus Bollinger, the court ruled that Michigan's undergraduate admissions policy, which awarded extra points to people in racial minority groups, was unconstitutional because it was not narrowly tailored to meeting the goal of student body diversity. In the same year, in the case of Grutter versus Bollinger, Bollinger just keeps showing up to the supreme court because he was the President of the University of Michigan at the time, lucky. The court ruled that the admissions policy of Michigan's law school was narrowly tailored to meet the goal of promoting diversity although it said that in 25 years such a program might not be necessary. So at the time we're making this episode, the idea that universities can take race into account in their admissions so that they can create a diverse learning environment for their students is still constitutional, but the supreme court looks very carefully at the actual policy that the university has in place, and if it looks anything like a quota, they'll strike it down. Turns out there was another place to punch the eagle. 
Two times! Affirmative action remains controversial, and it looks like eventually it's going to disappear, but maybe not right away. In 1996, Californians passed a ballot initiative - Proposition 209 - that effectively outlawed affirmative action in public employment, public contracting, and public education, especially university admissions. After this initiative, also known as the California Civil Rights Initiative, passed over vocal and organized opposition, the graduation rate among African Americans in some California universities went up. On the other hand, the enrollment rate of African Americans at many UC schools declined, and it only returned to 1996 levels in 2010. Other states like Michigan have passed laws similar to California's Proposition 209, making it harder and harder for affirmative action programs to flourish. But as is often the case in politics, people's response to affirmative action differs depending on how you ask the question. When phrased as an anti-discrimination measure, ballot measures like Prop 209 are quite popular, but when people are asked if they want to get rid of affirmative action, their responses are not always so positive. Support for affirmative action remains, and I suspect that this is because many people still recognize that some form of support for minority groups is needed in the U.S. And this brings me back to the reason why we have affirmative action in the first place. While the courts have ruled that attempting to correct the historical injustices of slavery and Jim Crow laws is not a compelling enough interest to justify affirmative action, for many people, it is. Minority groups, and in particular African Americans, have suffered from horrible treatment and legal disability from the time they began arriving as slaves in 1619. Even after the Civil Rights Act passed in 1964, full equal opportunity was still not a reality. Opinions vary on whether affirmative action is still necessary today, and your point of view depends a lot on your personal history and your politics, which, as we'll see in the next few episodes, are deeply intertwined. Thanks for watching, see you next week. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course U.S. Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of all these nice people. Thanks for watching. |
US_Government_and_Politics | Social_Policy_Crash_Course_Government_and_Politics_49.txt | Hello, I’m Craig and this is Crash Course Government & Politics and today we’re going to talk about social policy. I have a lot of social policies, which include not staying out past 3AM on weeknights, and avoiding social gatherings where velveeta sausage cheese dip is served. Both of these are pretty loosely enforced, though. Actually, we’re talking about government social policy, which deals with things like social security, education, and healthcare. And hopefully velveeta sausage cheese dip. But… probably not. [Theme Music] In talking about policy, it’s really hard to separate social policy or foreign policy from economic policy, primarily because they’re all paid for with money. One way to distinguish between them is to look at a policy’s goals. Social policy has a number of goals, none of which is the outright promotion of social-ism. Glad that’s out of the way and no one is going to comment on it at all in the comments. Peace on Earth. In America, social policy consists of programs that seek to do at least three things. Some social programs protect against risk and insecurity, like from job loss, health problems or disability. Other social programs seek to promote equal opportunity. Finally, some social programs attempt to assist the poor. Of these three goals, there’s general agreement that promoting equal opportunity is a good thing, less agreement on whether the government should protect us from risk, and widespread skepticism about helping the poor. Americans traditionally haven’t cared much for social policy, and part of the reason for this has to do with Americans’ strong faith in individualism that is suspicious of government action, and generally favors private charity and pull-yourself-up-by-your-bootstraps self-reliance. I don’t think I’ve ever worn bootstraps, Stan. Does that make me a true American? As you might have guessed, the history of the American government social policy pretty much starts, as most government programs do, with the New Deal. Prior to the 1930s there were some attempts on the state level to protect workers and limit exploitation, but often these were struck down by the courts, and the Federal government’s role in protecting people from risk was minimal. The government did provide pensions to veterans’ widows, but except for a relatively brief period after the Civil War, the numbers of pension recipients were never very large. The Great Depression changed the way that Americans came to view their government, and also modified how many of them felt about poverty. The suffering caused by the Depression was so great and so widespread that many Americans came to feel that it was part of the government’s job to do something about it. Private charities, which had been the primary way that Americans had helped the poor before the Depression, could not handle the numbers of needy people. In addition, not all of these people could be considered to have become poor due to their own personal failings. The Great Depression helped solidify the idea that people could sometimes be victims of economic forces beyond their control, and that it was the government’s duty to help them. Basically, the Great Depression changed people’s question from “if the government should help” to “how should the government help?” The answer to that question came in the form of the New Deal. You’ve probably heard about the New Deal; it’s a big deal. 
But we’ve only got 12 minutes, so we’re going to focus on two specific programs: Social Security and Aid to Families with Dependent Children, or AFDC. And if you judge by public opinion polls -- and who doesn’t -- then Social Security is one of the most successful New Deal programs ever. Let’s go to the Thought Bubble. Started in 1935, the Social Security Act was a reaction to the fact that many elderly people in the U.S. were poor, largely because they had no work, little savings, and no pensions. Social Security provided monthly payments to people over age 65, and while no one was getting rich, it was enough money to prevent people from falling into abject poverty. A couple of things about Social Security. First, it’s not a savings program; you pay into it when you are working but that money doesn’t go into an account for you to access when you retire. So how does it work? Well, when you are working and on a payroll, taxes are deducted from your wages and the amount is matched by your employers. The total amount that gets taken out is 7.65% with 6.2% going to Social Security and the other 1.45% going to Medicare, which provides health coverage for older people. This money goes into a pot, which is then paid out to people over the age of 65. In other words, today’s workers are paying today’s older Americans. The benefits are indexed, which means that they go up with inflation. This program redistributes wealth from younger working people to older retired people. Because the more you make, the more you pay -- at least up to a point because there’s a cap on the amount of your salary that’s subject to the payroll tax – Social Security also redistributes wealth from richer people to poorer ones. In general, Americans are suspicious of programs that redistribute wealth, but Social Security is very popular with both liberals and conservatives. Conservatives tend to like it because it is funded by a regressive payroll tax that phases out at higher incomes, rather than a more progressive one that would hit high earners harder. Liberals like it because it provides automatic benefits for the elderly. Thanks, Thought Bubble. Whether Social Security is in crisis depends a lot on what numbers you look at and whether you believe that there are political solutions to potential problems. The number of people receiving benefits is rising – approximately 50 million Americans receive Social Security and that number is increasing as baby boomers get older – and the number of people paying into it is falling. Eventually, if these trends continue, there will come a time when there might not be enough money paid in to Social Security to pay out benefits to those who qualify. This shouldn’t be an issue since Social Security spending is controlled by Congressional legislation, and they can always raise the payroll tax or raise the benefit age above 65. Should be easy. Uncontroversial. Since older people tend to vote, there’s a strong incentive for Congress to fix any problems and keep the benefits coming. Also, it would be a national embarrassment for Congress to let it go bankrupt. Medicare, which is also paid for by payroll taxes, is probably in more trouble, partly because of the same demographics that are putting pressure on Social Security, but mainly because of rising medical costs which Medicare can only do so much to control. Medicare is a third party payer for its medical benefits, it doesn’t actually provide doctors or medicine or stuff that makes people healthy. 
Since it does cover more than 45 million Americans, Medicare has some leverage over costs, but, at least until recently, those costs have been rising rapidly. Social Security is generally popular, but I’ll tell you what was unpopular: Aid to Families with Dependent Children. In fact, it was so unpopular that we don’t even have it anymore! Like imagine this eagle as the AFDC (punches eagle)... metaphor. AFDC is what Americans tend to think of when we talk about “welfare.” It was a system that paid benefits to women with children, and the amount of the payments went up or down depending on how many children you had. AFDC was what is called a non-contributory program, which means what it sounds like: you didn’t need to have contributed through taxes to be eligible or to receive benefits. There are still some non-contributory social welfare programs, most notably free school lunches, federal housing assistance programs, and the Supplemental Nutrition Assistance Program, also known as SNAP or food stamps. Another is the successor to AFDC, Temporary Aid to Needy Families, or T.A.N.F. or TANF. In the 1980s, conservatives argued that these AFDC checks created dependency or at the very least an incentive to not work, and increasing welfare payments were pointed to as a criticism of liberalism in general. But conservatives weren’t able to reform welfare in the '80s, because even though a majority of Americans didn’t like it, passing laws is difficult, especially when Congress is hostile to you. It took a Democratic president, Bill Clinton, to push welfare reform through Congress, which in 1996 passed the Personal Responsibility and Work Opportunity Reconciliation Act, better known as the 1996 Welfare Reform Act. This law got rid of Aid to Families with Dependent Children and replaced it with Temporary Aid to Needy Families, which emphasized that any aid to needy families was going to be TEMPORARY, by putting that as the first word in its title. There are now work requirements that recipients must meet in order to get benefits, and there are time restrictions. You can only receive benefits for two years in a row and five years total. All of this was supposed to encourage people to get off welfare, and as the name of the law tells us, exercise greater personal responsibility. So did it work? It kind of worked. The number of people receiving welfare did decrease, and more people did look for and find work. On the other hand, the law didn’t reduce poverty, although to be fair that wasn’t what it was supposed to do -- it was supposed to reduce welfare. Also, during economic downturns, as in 2001 and 2009, welfare caseloads rose again, suggesting that the work that people did find might not be such a stable solution to relieving poverty. So this episode has focused mainly on the more controversial aspects of social policy, those that involve redistribution of wealth from richer to poorer Americans, and I’m sure all of you commenters are fine with that. Actually, probably not. For a lot of reasons, some economic, but many cultural, Americans have generally been suspicious of these redistributive programs. Remember that I said one goal of social policy, one that is not very controversial, is increasing opportunity. And for most of us, the key to increasing opportunity is education. Which is what we’re doing right here! 
Education is one social policy that almost everyone agrees on, under the theory that if everyone is educated they will be able to find good, high paying jobs that will enable them to achieve greater economic stability and mitigate the risks in their own lives without the government having to do it for them. Whether it works or not, and just how much the government should be involved, are questions that you will have to think about and argue over with your friends and families and teachers and teacher’s teachers and teacher’s grandmas and the guy at McDonalds…maybe the guy standing next to you at the Velveeta sausage cheese dip platter. But it’s important to remember that social policy isn’t just redistribution of wealth or income, it’s also education and programs that help people who really can’t help themselves. Thanks for watching. See you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course: U.S. Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course was made with the help of all these Velveeta sausage cheese dips. Thanks for watching. |
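The payroll-tax split quoted in the Social Security segment of the transcript above (6.2% to Social Security plus 1.45% to Medicare, 7.65% total, matched by the employer) can be illustrated with a minimal Python sketch. This is only an illustration of that arithmetic: the wage-base cap figure below and the choice to apply the cap only to the Social Security share are assumptions added here, since the transcript just notes that a cap exists, and the function and variable names are hypothetical.

```python
# Minimal sketch of the payroll-tax split described above: 6.2% of wages to
# Social Security and 1.45% to Medicare (7.65% total), matched by the employer.
# The wage-base cap value is an assumed placeholder, and applying it only to the
# Social Security share is also an assumption; the transcript just says a cap exists.

SOCIAL_SECURITY_RATE = 0.062
MEDICARE_RATE = 0.0145
ASSUMED_WAGE_BASE_CAP = 118_500  # hypothetical cap on wages subject to the SS share


def payroll_tax_split(annual_wages):
    """Return the employee's share of payroll tax; the employer matches each amount."""
    ss_wages = min(annual_wages, ASSUMED_WAGE_BASE_CAP)
    social_security = ss_wages * SOCIAL_SECURITY_RATE
    medicare = annual_wages * MEDICARE_RATE
    return {
        "social_security": round(social_security, 2),
        "medicare": round(medicare, 2),
        "employee_total": round(social_security + medicare, 2),
    }


# A $50,000 salary: 7.65% comes out of the paycheck, and the employer pays the same again.
print(payroll_tax_split(50_000))
# {'social_security': 3100.0, 'medicare': 725.0, 'employee_total': 3825.0}
```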
US_Government_and_Politics | Gerrymandering_Crash_Course_Government_and_Politics_37.txt | Hi, I'm Craig and this is Crash Course Government and Politics and today I'm gonna talk about a topic in American politics that tends to drive people crazy! Ahhh! No, it's not partisanship, or horse race journalism, or the state of political punditry, although we could easily do episodes on all three of those, and we might. Nope, today we're gonna look at election districts and how they shape electoral outcomes, and that means - you guessed it - we're gonna talk about Gerrymandering. Clone: Thank goodness, Gerrymandering is a blight on our American election system. It completely thwarts the will of the majority, and it's responsible for our lopsided house of representatives. Second Clone: Not so fast, my left-wing sore loser friend! Gerrymandering is not nearly as responsible for the 2014 republican congress as the fact that people like you self-segregated into urban enclaves of socialism. Craig: All right, calm down, clones. Gerrymandering is a little more nuanced than that. Let's talk it out. [Theme Music] Congressional Apportionment - how many representatives each state gets - is super exciting! Even though it only changes every 10 years. Since the number of representatives each state gets is based on population, it's important to know how many people are in each state. That's one reason, at least in the constitution, that we have a census every 10 years. The most populous state, California, has the largest number of representatives - 53 - and the least populous states have only one. Sorry, Alaska, Delaware, the Dakotas, Vermont, and Wyoming, and Montana, and the state of loneliness. One is the loneliest number. In those sparsely populated states, figuring out the election district, which geographic area is represented by a congressman, is easy because there's only one district. This makes elections in these states effectively at-large elections, like a state's choice for senator. Even though there are two senators from each state, they represent the entire state at large rather than only a part of it like representatives are supposed to do. The electoral college, the system through which Americans choose their president, is also a type of at-large election. The rest of the states are divided into what are called single-member districts. This means that each election district chooses one representative. Now you might think it would be simple to divide a state into as many pieces as it has representatives, but why would you think that? Nothing is simple! Districts are required to be equal - or almost equal - in population, and in most states populations are not evenly distributed across the entire region. The notion that election districts must encompass equal population is the essence of the idea of one person, one vote - a principle that was cast into law by the 1962 supreme court decision in Baker vs Carr. It means that a person's vote counts equally no matter where they live, at least as far as the house of representatives goes. In the senate it doesn't actually work out because the resident of a small state like Delaware has the same number of senators - 2 - as a resident of California. To put it another way, in 2014 two senators represented 897,934 Delawareans and the same number of senators represented approximately 38 million Californians. 
In the house, each representative is responsible for about seven to eight hundred thousand people, which is still a lot but much better than one senator for nineteen million Californians or thirteen million Texans. The idea that people should be equally represented in congress shouldn't be controversial, and for the most part it's not. What is controversial is the way that minority groups are represented. One of the problems with single-member districts is that they can make it easier to cut minority groups out of the political landscape. After all, if in a given state only 15% of the residents are minorities, it'll be more difficult for them to elect a member of their own group, even under a plurality rule, unless that person can appeal to a large number of non-minority people. Congress and the supreme court have tried to remedy this problem by mandating that there be majority-minority districts, which is a confusing way of saying districts where the majority of voters are members of a minority group. This is a little like affirmative action in the realm of voting, and as you might have guessed, there is a fair amount of disagreement among people who think a lot about it. Although, I'd bet that number itself is a pretty small...minority. This idea of majority-minority districts leads us into a really fun aspect of congressional districting - the way that the districts themselves are drawn, a process known as Gerrymandering, after the 19th century political cartoon that depicted one particular Massachusetts district that looked like a reptile. Oh! There it is. Looks like a dragon or something. And we all know dragons are reptiles. The man responsible for this twisted district - the name of my band in high school - was Elbridge Gerry, hence the name Gerrymander. So districts have to be drawn in such a way that they contain roughly equal populations, so why does it matter if they look convoluted or even somewhat ridiculous like this? Well, states don't just draw districts to make them look equal in population, they draw them to capture certain population characteristics so that one party has a greater chance of electing a member from a particular district. In the district pictured here, the Illinois 4th, Chicago has been carved up to capture a certain population - me. That's the district I live in. Usually districts are drawn so that they can capture my vote, or a significant majority of one party or the other, virtually ensuring that a particular district will elect only a democrat or a republican, as the case may be. You might have noticed that thin strip in the Illinois 4th's western edge connecting the upper half and the lower half. Look carefully and you'll see that it runs along the interstate, which I'm sure means that it has a huge population. Why do we do this? Because one of the requirements according to federal election law is that districts not only be roughly the same size in terms of population, but also that they be contiguous, meaning that they can't be divided completely by other districts. This requirement results in some pretty weird configurations. So who draws these cockamamie districts anyway? Well, they're done by state legislatures. Well, not legislatures themselves, but by people working at the behest of legislatures. If one party has a majority of the state legislature, say the democrats, they usually want to draw the districts so that democrats have a better chance of winning; republicans do the same thing. This is why state legislature elections matter so much in census years. 
Whoever wins that year gets to re-draw the districts. A couple of things to note here. First, there's no rule saying that states can't re-draw their districts whenever they want. Texas tried to do this in 2003 - not a census year - prompting its democrats to run away to Oklahoma for a spell. Second, it's possible for a state to hand the task over to a less biased expert district drawing person, or group, that might make districts more fair. Hand it over to me! I'll make 'em all look like little bunnies. But wait, you might ask yourself, what's wrong with this system and why do people think it's unfair? Let's go to the Thought Bubble. So imagine a state that's 60% republican and 40% democrat, and has 5 electoral districts like this one. Let's call it Clonesylvania. You could draw districts so that there were 3 republican districts and 2 democratic ones, accurately reflecting the state's population, like this. Or you could re-draw it so there were 3 democratic districts and 2 republican districts, which would be an inaccurate reflection of the party composition of the state's population. Or you could simply draw the districts so you had 5 republican districts and zero democratic ones, like this. So you can see, especially in the second and third examples, how Gerrymandering can result in districts that don't actually reflect the political makeup of a state at all. By now you might be fuming at the injustice of state legislatures re-drawing districts to make sure that the opposing party has no chance of winning national congressional elections, and you may have read a number of articles blaming Gerrymandering for the composition of the current congress and for making congressional elections generally less competitive. There are a lot of people who feel the same way. But there's a counterargument that it's not the state legislatures that result in solidly republican or solidly democratic districts, but the fact that democratic voters tend to cluster in cities where they often outnumber republicans by a lot. So in states like Ohio, even though the numbers of democrats and republicans are pretty even, with a slight edge perhaps going to democrats, the democrats tend to concentrate in urban areas around Cleveland and Columbus, so the overwhelming majority of the state's districts are won by republicans. Thanks, Thought Bubble. Congressional districting is fascinating and really, really important for determining the composition of congress, but is also quite complicated, which, as with most things, makes it difficult to understand. But unlike some other complicated issues concerning policy, Gerrymandering is one that's easy to criticize because the visual results are so striking and because it can result in numbers that just look unfair. This is probably why, come election time, you'll hear a lot about it. Now at least you'll have a better idea what those pundits are talking about and you'll be better equipped to make your own decision about the issue. Luckily for you, there's more and more data about this stuff every election and always more to learn. Thanks for watching, I'll see you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course U.S. Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of these less biased expert district drawing people. 
Thanks for watching. |
US_Government_and_Politics | Market_Economy_Crash_Course_Government_and_Politics_46.txt | Hello. I’m Craig and this is Crash Course Government and Politics and today we’re going to turn to a topic that is near and dear to our wallets at Crash Course: economics. Now, I know that dedicated fans are saying: “Hold on Craigers, you have a whole series about economics. Tell me about government.” To those fans, I say: “you’re right…and don’t call me Craigers.” But this episode is going to be about the role that government plays in the economy, specifically, the way that government creates the market economic system that we know and love. [Theme Music] Before I get into the ways that government creates a market economy, let me be right up front and say that we’re going to posit that without some government, it wouldn’t be possible for a market economy to exist. [gasp] Whaaaaa? I realize that this is a bit controversial, with many people believing that markets are natural phenomena that follow laws like “supply and demand” that are analogous to real physical laws like, say, gravity. Which is also a movie starring George Clooney - he aged so well. This is an interesting construct and one that has important political ramifications, because if you believe in it, then basically there’s nothing that the government can, or should, do to improve the economy. I’ll leave it to commenters to argue this point, but I stand by my statement: We wouldn’t have a market economy without government. So economically-minded political scientists, AND politically-minded economists, will tell you that there are a number of ways that government structures the economy in the U.S. I’m going to go over eight of them, although there might be more. So, in no particular order, here it goes. The government creates and maintains a market economy by: establishing law and order; defining rules of property; governing rules of exchange; setting market standards; providing public goods; creating a labor force; ameliorating externalities; and promoting competition. I think most of us can agree that a big part of the government’s job is to establish law and order. This idea goes back at least as far as the Enlightenment and Thomas Hobbes, but since this is not Crash Course: Political Philosophy, I’m going to move on. Law and order helps to structure the economy by providing predictability. It is much harder to engage in trade or production for profit if you suspect that what you have to trade or sell may be taken away by bandits, like the Hamburglar. But -- only -- in that case only if it’s burgers that you are actually trading. But it’s not just that the government, if it’s doing its job, can protect us from being robbed in the literal sense of the Hamburgler stealing our delicious, delicious burgers. The government creates a legal system that can punish people who commit fraud, and knowing that they can be punished prevents people from committing fraud. Or at least I hope it does. Most of the time it does. Don’t do fraud kids. The second way that the government structures the economy is by defining rules of property. Now there are many people who will tell you that property is an inalienable right, sort of like something given by God. I’m looking at you John Locke. And John Locke would respond, “don’t tell me what I can’t do” but I would suggest that without government what you think of as your property might not be as “yours” as you think or want it to be. But isn’t this sweet polka dot button-up I’m wearing mine? 
Well, it is because I paid for it and we have laws that say that payment for a good confers a title to it – we see this especially with land, or, as it's known to the law, "real property" or perhaps "real estate." We don't actually receive written titles when we buy most things, but according to the law, if I can establish ownership by proving I paid for this shirt or somebody left it to me in their will or something, then it's mine. And if someone takes it from me, I can bring the law down on them - the courts, the legal system, or maybe the sheriff will help me get it back. A really concrete example of the way the laws create and protect property rights is trespass laws, which allow you to tell those noisy kids to get off your lawn. Without trespass, who's to say it's not their lawn? Basically ownership of anything is a bundle of rights establishing what you can do with that thing, whether it's your car, or your house, or your eagle. And without legally established ownership rules, we can't buy or sell or punch anything. And speaking of buying and selling, another way that the government structures the economy is through setting and governing rules of exchange. Let's go to the Thought Bubble. In most states there are complex rules that explain how and when, or even if, you can sell something. For example, some localities (like Indiana) have so-called "blue laws" that prevent you from buying or selling alcohol on certain days. Some counties in some states are completely dry, meaning that you can't buy or sell alcohol at all, and for a brief (terrible) period in the US – prohibition – the Eighteenth Amendment to the Constitution prohibited the "manufacture, sale, or transportation of intoxicating liquors." Manufacture, sale, and transportation sound like the three main ingredients in an economy to me. Some exchanges are still flat-out forbidden by laws in the U.S. Many drugs are called controlled substances for a reason, and that reason is that they are subject to government control. Some drugs are prohibited outright, and if you make or sell or buy them you can be punished by the government. There are also laws preventing you from selling yourself into slavery, or from selling your body through prostitution, or selling parts of your body like your kidneys. Some economists may question the wisdom of these rules, but they exist, and by making and enforcing them the government can exert powerful control over what can and cannot be exchanged. Thanks, Thought Bubble. Probably less controversial than the rules governing exchange is the government's role in setting market standards. This is something governments have been doing for a very long time, and you've probably learned about it in history class as the government's setting up weights and measures. This may not seem like such a big deal until you consider that if you are paying someone for a pound of chickpeas, you need to know what a pound is... if you're going to get the right amount for that sweet hummus. This goes for measures too. If I am buying an acre of land, I want to make sure that I'm getting 4,046.86 square meters of land, or 43,560 square feet. And if I buy an acre in Scotland, I'm going to get even more, since a Scottish acre is the equivalent of 1.27 U.S. acres. Plus no one will look at me funny when I'm eating my haggis. Basically, this means that the government ensures that buyers and sellers are operating on the same playing field. 
This used to be even more important when currency contained precious metals, but I don’t want to get into a big argument about pennies and nickels -- that's John Green's thing, and we've all established that I'm not John Green. This brings us to public goods. Public goods are things and services that the government provides that can be enjoyed by everyone and, once provided, cannot be denied to a particular subset of the population. One example is public transportation: in many places the government provides bus or subway services to residents, not for free, but at highly subsidized costs, although if you’ve ridden the New York Subway recently it doesn’t always seem like the subsidies are big enough. In many cases the government steps in to provide public goods when markets wouldn’t. It’s not likely that private companies would provide an air-traffic control system, and even if they did, it would have to be highly regulated by the government anyway because you don’t want different cities and states enacting different rules about air-travel. That would be a literal disaster. Also, if it were up to unregulated markets, there wouldn’t be any flights to places with small populations because they wouldn’t be profitable. A really good example of the government providing a public good where the market wouldn’t step in is the rural electrification projects of the New Deal, the most famous of which sprang from the Tennessee Valley Authority. It wouldn’t have been profitable for power companies to provide electricity to rural towns and farms, so the government stepped in and provided it. And since without electricity it’s pretty hard to watch Crash Course, I’m glad they did. We'd have to do, like, a Crash Course Live Play. And I'm not good at live theater. You might have heard that the government is not a “job creator” and in some ways that’s true, except for government jobs like firefighters and public school teachers and, if we’re talking the federal government, soldiers and sailors. But there are other ways that government efforts help to create a labor force. The main way this happens is through compulsory education laws. States require that kids go to school up to a certain age and this is to ensure, or at least try to ensure, that when they become adults they will have a level of competence that will enable them to be productive workers. Of course, employers could provide the necessary training at their own expense, but why would they do it if the government provides it for them? Government also helps create the workforce by providing student loans, which help people pay for college. And that's why college is so easy to pay for now. Right? Wink. There are government-run training programs and, I suppose, the potential for the government to employ more people, like it did during the Great Depression with programs like the Works Progress Administration and the Civilian Conservation Corps. Now if you’ll allow me to put on my economist’s hat – Stan, do we have budget for an economist’s hat? No. Apparently economists wear very expensive hats. I will try to explain what the government does to ameliorate negative externalities. I love my externalities ameliorated. Especially the negative ones. An externality is an external effect that is a byproduct of a market transaction. They can be positive or negative and can also be seen as the difference between the private cost and the social cost of economic behavior. Here’s an example. Driving is an economic behavior. 
Back in the 1970s gasoline included lead, which made engines run better but also polluted the air with lead, which, as we now know is very bad. Very, very bad. Buying leaded gasoline and running your car on it was a private economic transaction but air pollution was a very public cost that neither the seller of the gasoline nor the purchaser had to pay. And air pollution was very costly in terms of public health. So the government ameliorated this by outlawing lead in gasoline and creating regulations that limited air pollution generally. What this did was force companies and, by extension, purchasers to pay for these negative external costs. Regulation is one way to deal with negative externalities. Another is through taxes, which we’ll deal with it in another episode. The last way that the government creates our market economy, at least the last way I’m going to talk about, is by promoting competition. According to our old friend Adam Smith, the essence of a functioning market system is competition, and in a perfect world competition would ensure that people got the best products at the best prices. But history has shown that corporations and individuals have often tried to stifle competition and create monopolies. If there’s only one firm selling a product, that firm can charge whatever it wants, and this monopoly condition doesn’t usually benefit consumers. At least not as much as it benefits monopolists. So government can and has stepped in to create laws to regulate monopolies. The best known of these are the anti-trust laws, which are sometimes used against big corporations, like Standard Oil or more recently, Microsoft. And the government can also grant anti-trust exemptions that allow monopolies, as it did for Major League Baseball. Either way, the government, under the Commerce Clause in the Constitution can pass laws that promote or inhibit competition, although usually it tries to make the marketplace more, rather than less, competitive. So that's why I say the government has a big role to play in making a free market economy. You may not be convinced that without government a free market system wouldn’t be possible, and that’s ok. You can think what you want. It's a free market. Thanks for watching. See you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course: U.S. Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course was made with the help of all these free marketeers. Thanks for watching. |
US_Government_and_Politics | Sex_Discrimination_Crash_Course_Government_and_Politics_30.txt | Hi, I’m Craig, and this is Crash Course Government and Politics and today I am going to talk to you about something the affects almost everybody, jobs. Unless you are very lucky or very unlucky, at some point in your life, you will probably have a job and more likely than not, you will be employed by someone else. The big boss. The person who tells you what to do. Stan, are you my boss? I’m more of a contractor. The rules about what employers can and can’t do are very complicated and changing all the time, but one thing they are not allowed to do is discriminate against certain groups of people. Probably the largest group protected from discrimination is the one we are going to talk about today, women. [Theme Music] So, before we get into the nitty gritty of the employment discrimination against women, we need to go back a little and explain the middle level of Supreme Court Review. Helpfully called intermediate scrutiny, not Mitzy scrutiny, as I would like to call it. It’s kind of hard to define, but as the name suggests intermediate scrutiny is more stringent than rational basis review, where the government usually wins and its actions are allowed to stand, and strict scrutiny where the government usually loses. So that’s about as helpful as I can get in terms of letting you know what the outcome of a case will be when courts apply intermediate scrutiny. It’s more useful for you to know when intermediate scrutiny applies and that’s mainly in cases involving women. Now, hold on. I know many of you are saying “I know that women are often discriminated again for being women so what makes them different from other groups that face discrimination like black people or Jewish people or at least in the past, Irish people.” All of the groups I just mentioned have one common characteristic, at least where the courts are concerned. And this is that the thing that makes them a discrete group is something that they can’t change. Now, current ideas about sex and gender make this characterization more problematic than the Supreme Court likes to think, but Supreme Court justices weren’t always the most progressive. Also problematic is religion, since we are free to adopt or discard religion as we want. But, I guess that since religion is specifically mentioned in the first amendment and that when the court decided on its categories, religious discrimination was more prevalent than it is now. That’s why religion is included as a category that will trigger the court to take a closer look. But, given the way that the court tends to look at these things, you’d think that sex, by which I mean male and female, would be the kind of thing that would put you in a specific group that might be subject to discrimination based on that group identity, right? Well, probably, but the court’s key reasoning here has to do with the fact that racial, religious and ethnic groups are almost always minorities. And women statistically, at least, are not. For the courts, majority groups have a good chance of winning in the legislative process and therefore they don’t need the same level of judicial protection as minority groups. Still, there’s been some recognition, that despite there non-minority numbers, women have still historically been treated unequally to men. Let’s just come right out and say that they have been given inferior status. 
And because of this a law or government action that specifically mentions or is aimed at women will cause the court to look more carefully than when women aren’t mentioned but less carefully than when religious, ethnic, or racial minorities are mentioned and that’s intermediate scrutiny. So, the 14th amendment guarantees equal protection of the laws but most of the actual rules against discrimination come out of the federal civil rights act of 1964 and various state anti-discrimination statutes. This is one of the most far reaching and important pieces of federal legislation ever and its history is fascinating, but we’re not going to get too much into it here, because this isn’t a History class, this is Government. Sometimes, we talk about history, but not now, ok? The important thing is that it outlawed discrimination against race, religion, ethnicity, or sex in a whole bunch of situations, including public accommodations and transportation and most important employment. The key section of the civil rights act dealing with employment is title 7, if you’ll excuse the legal language, the most relevant part of the statute is this: [A] EMPLOYER PRACTICES. It shall be an unlawful employment practice for an employer [1] To fail or refuse to hire or to discharge any individual, or otherwise to discriminate against any individual with respect to his compensation, terms, conditions, or privileges of employment, because of such individual's race, color, religion, sex, or national origin; or [2] To limit, segregate, or classify his employees or applicants for employment in any way which would deprive or tend to deprive any individual of employment opportunities or otherwise adversely affect his status as an employee, because of such individual’s race, color, religion, sex, or national origin. Despite all the legal language, that seems pretty straight forward. Unfortunately, it’s a lot easier to say what an unlawful employment practice is than it is to prove that your employer is doing it. This is where we again have to get legalistic and explain how discrimination claims work their way through the courts. Let’s also get Thought Bubbleistic. So, let’s say you feel you’ve been discriminated against at work by your employer. What can you do? At least under federal law. First, you have to be in a protected class as defined by the law, which means that you’ll need to show that the discrimination was based on your race, color, religion, sex, or national origin. Now, sometimes this’ll be easy to prove. Like in a case where you’re employer says, I’d give you a promotion if you weren’t black, or gee, I’m sorry to let you go, but you know you’re a woman and we can’t have too many women working here. This happens almost never because most people aren’t that bigoted or that stupid, but it does happen and if you have this kind of statement and witnesses to back it up, you have a pretty good chance of winning. The more common cases are those where nobody who is a member of a minority group or a woman gets promoted or members of those groups are disproportionately fired. Say if the company has 90 white employees and 10 black employees, and when they lay off 10% of the work force, 9 black workers are fired and only 1 white one is. This is called a disparate impact and if this happens, new court procedures kick in. 
If you are in a protected class and feel that you are a victim of disparate impact discrimination, and you can show that your employer’s action has the effect of exclusion, then the burden of proof, which normally is with the party making the complaint (you, in this case), shifts to your employer, who then has to prove that his actions were caused by a business necessity. I can’t imagine there would be a business necessity for firing 90% of your black work force. If the employer is able to show that he was forced by business necessity to fire most of his black employees, then the burden shifts again back to the plaintiff to show that the employer’s reasons are untrue. That they are just pretext and the action was really taken because the employees were in the protected group. Much of the evidence to show this will probably be statistical and it may be hard to get, which points out a crucial thing about discrimination claims. They are hard to prove. Thanks Thought Bubble. By now, I’ll bet many of you are saying, Craig! I thought you said you were going to focus mainly on women, but the discrimination you’ve been describing applies to all sorts of protected groups! Eagles are a protected species, but that’s different. So, women are protected against adverse employment actions by federal and state legislation, but they are also protected against sexual harassment in the workplace. This might not seem like discrimination right away, but if you think of discrimination as negative treatment based on one’s membership in a specific group, then it starts to make sense. It makes even more sense when you read about some of the things that women have had to go through at work that have led to discrimination cases. I’m not going to go into graphic detail, but it’s pretty horrible. You should know that there are two types of sexual harassment, quid pro quo and hostile workplace environment. Quid pro quo harassment is when an employer offers or withholds workplace benefits like promotions in exchange for sexual favors. This is obviously wrong and terrible. Hostile Work Environment is a bit trickier because it can be the result of other employees and not necessarily an employer, but courts have ruled that it is an employer’s responsibility to ensure that the workplace is friendly to all employees. I said I wasn’t going to get graphic, but I think one example might help to understand what sorts of things constitute workplace sexual harassment. In the case of Burlington vs. Ellerth, Kim Ellerth was subject to numerous unwanted advances from her supervisor. In one of her conversations with the supervisor, he denied her request on a relatively inconsequential business matter, but added, “are you wearing shorter skirts yet Kim, because it would make your job a whole heck of a lot easier.” That’s just disgusting and no one should have to endure those kinds of remarks at work. Ellerth won her suit against Burlington and I’m going to stop on that relatively cheerful note. It’s important that we have an understanding of workplace discrimination, because most of us will spend time working, and since some of us will be employers, we should have an idea of how to behave and what that is about. Women do get some special treatment under the law, a reflection of the fact that they have historically been, and continue to be, singled out for mistreatment. The laws and courts have recognized this, which is why women receive legal protections from discrimination. 
But women have made some gains, which is probably a result of their increasing presence in the workplace and power as voters. And if their strides for greater equality on the job and elsewhere continue, I’d say that’s a very good thing. It’d be nice if someday there were no need for a heightened level of scrutiny when it comes to laws concerning women, but we’re not there yet, so the fact that anti-discrimination laws and intermediate scrutiny exist is also a good thing. Thanks for watching, I’ll see you next week. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course is made with the help of all these nice women and men. Thanks for watching. |
US_Government_and_Politics | Supreme_Court_of_the_United_States_Procedures_Crash_Course_Government_and_Politics_20.txt | Hi, I'm Craig, and this is Crash Course Government and Politics and today, finally, we are stepping into the big leagues. That's right, I'm trying out for the Cubs. No, we're gonna talk about how the Supreme Court of the United States actually works! I could try out for the Cubs right, Stan? Sometimes people refer to it by the unfortunate nickname S.C.O.T.U.S but I'm not gonna do it, I'm gonna call it the supreme cocoa, or cocoa supreme. Now, let's just be respectful. So strap in and get ready for some highly technical discussion of procedure as we learn how you, yes you, probably not you, can bring a case to the Supreme Court. [Theme Music] The first thing you need to take a case to the Supreme Court is a case, or controversy, and except in certain rare situations where the court has original jurisdiction, that case has to have already heard and decided by a lower court and appealed. And not just once; before a case gets to the Supreme Court you have to have exhausted your appeals at lower levels of the state or federal system. If you've lost your previous appeals but still think that you have an issue worthy of the court's attention, you can petition for a writ of certiorari, which people in the know call "the cert" 'cause they're keepin' it cash...which is short for casual. For a look at how the court chooses its cases, let's go to the Thought Bubble, or the thobub. Lot of nicknames today, Stan! Or ST. Certiorari is a formal request that the Supreme Court hear your case, but petitioning for a writ is no guarantee of anything. The federal government's chief lawyer, the solicitor general, is basically like a bouncer at a hot club, if you're old enough to get into a hot club. They screen out a lot of petitions because those cases don't raise a lot of federal law questions or because they've already been decided in other cases, or they're not wearing good enough shoes to get into the club. If, and it's a big if, your petition is granted, it goes into the cert pool - the first round in which the justices decide which cases they're actually going to decide. The list of cases that will be decided is called the discussion list. For the judges to actually hear the case, called granting certiorari, 4 of the 9 justices have to agree to hear it. This is called the rule of 4. The discussion of the discussion list and decision about whether or not to grant certiorari happens at the conference, which is like the back of the club where the really well-dressed people go. So the judges have read your petition and 4 of them have decided that your case is one of about 80 that they will hear, congratulations! Now you, and the side that disagrees with your position, have to submit briefs. Briefs are not underwear; briefs are written legal arguments from each side explaining why the law favors their position. The party bringing the case seeking to overturn the lower court decision is the petitioner. The party that wants the court to uphold or affirm the lower court's decision is called the respondent. The petitioner also files a reply, which attempts to rebut the respondent, which is not a euphemism. After filing all this, you're finally on your way out of the Thought Bubble. I mean you're on your way to court. Thanks thobub. You might think that there would only be two briefs in a case, one from each side, and it's true that there must be at least two. 
But often there are many, many more briefs, and even boxer briefs! That's what Stan wears. Stan put your pants on! All undergarments aside, individuals or groups who are not actually parties to the case, but have an interest in the outcome can also file amicus curiae, or friend of the court briefs. Amicus briefs often contain different legal, economic, or historical arguments that can sometimes persuade justices and appear in their opinions. They are also one way that interest groups can attempt to influence the Supreme Court. After the briefs have been filed, the court schedules oral arguments, giving them time to read and consider the briefs. Each side gets half an hour to make its case, but this time includes questions from the justices, so most of the time it's usually spent answering questions. Imagine a presentation with the most intense teacher you've ever had bombarding you with questions, except that there 9 teachers! Well, 8 because Clarence Thomas never speaks. After oral arguments, you wait for a decision. The justices then meet in another conference which is held on a Wednesday or a Friday, 'cause there's good TV the other days. In order for the court to render an official decision, 5 of the 9 justices, a majority must agree on at least one of the legal arguments that either affirms or overturns the lower court's decision. Although they can also send a case back down to the lower court for another decision, which is called a remand. Although, you might call it... a punt! Woo! That was like 30 yards. The chief justice presides over the conference and assigns the task of writing the court's decision, called the majority opinion. The opinions are given in writing, although sometimes justices will read them from the bench. Sometimes the court will issue a single majority opinion which is a very strong statement of unified agreement. In the key civil rights case of Brown v. Board of Education, the court issued a single opinion that was even stronger because it was unanimous. But sometimes the court will issue multiple opinions on the same case. The decision of the court either to affirm or overturn the lower court's ruling is called 'the holding', and this is the first thing you need to know in any Supreme Court decision. The second thing that matters is the legal reasoning, or rationale, behind the holding. If a justice agrees with the holding in the majority opinion, but for different legal reasons, they write a concurring opinion. The rationale in this concurrence is cool and everything, but the lower courts do not need to follow it. Only the holding of the majority and its rationale are binding on lower courts. A single justice writes a concurrence, but other justices can sign onto it if they agree with its logic. For instance, the eagle and I both agree that fish are delicious, but I would write a concurrence that the scales and the eyeballs are gross. It's unlikely this will go to the Supreme Court though. Let's solve it now. Problem solved. Many Supreme Court cases are not unanimous. In fact, in an ideologically divided court, you are likely to find a lot of cases decided by 5 to 4 margins. The judges who are on the losing side who didn't support the majority decision can write a dissenting opinion. A dissent does not set a precedent for a lower court and has no force of law, but often dissents are very eloquent and they can provide arguments that might persuade later courts in similar decisions. Sometimes, as with the famously bad case of Plessy v. 
Ferguson, the arguments in a dissent can form the foundation for the majority opinion in a later case, even though it can take 50 years to get from a case like Plessy to Brown v. Board of Education. So that's the nuts and bolts of how Supreme Court decisions are made. But before we wrap this up, here are a few key things to remember. First, there are a lot of hurdles you need to jump over before a court makes a decision in a case. Most certiorari petitions (there are usually about 8,000 each year) don't make it past the clerks or the solicitor general, and don't get granted. It takes 4 judges to agree to hear a case, but 5 to render a majority opinion. Only the holding and the rationale supported by at least 5 of the 9 justices become binding precedent for lower courts. Dissents and concurrences may be fun and interesting to read, especially if there are pictures, and they may include important legal ideas, but lower courts don't need to follow them. So that's how the court works procedurally, but there's another way to think about Supreme Court decision-making. To really understand the Supreme Court, we need to consider the thinking behind judicial decisions, but that's for another episode. Thanks for watching. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course U.S. Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of these cocoa supremes. Thanks for watching. |
US_Government_and_Politics | Congressional_Decisions_Crash_Course_Government_and_Politics_10.txt | This episode of Crash Course is brought to you by Squarespace. Hello, I'm Craig again and this is Crash Course: Government and Politics and today were gonna look at why Congress acts the way it does. More specifically we're gonna try to figure out as much as we can without being mind readers, the factors that influence congressmen when they make decisions. And then after that we'll be mind readers and then we'll -- we'll see if we were right. This should be a welcome change of pace from the last couple episodes where we delved into the gory details of how Congress works or is supposed to work anyway. [shudders] [Theme Music] So, to over simplify greatly, but also to help those of you who studying for tests there are three main factors or agents that influence congressmen in making their decisions: their constituency, interest groups, and political parties. And they vary in importance depending on the situation that a congressmen is in. Our basic understanding of democracy and representative government suggests that constituents would matter most to representatives and senators and fortunately, this is sometimes the case. Unfortunately, this is sometimes the case. If a congress person ignores what the voters in his or her district want they're probably not going to be in office for very long, representatives pay the most attention to their constituents when they are actually voting on bills because votes are a record that constituents can easily check, say right before an election. If this is the case then the relative lack of important congressional votes in recent years tells us something. Nowadays, congressmen are more likely to depend on direct service to constituents, what is sometimes called case work, to build up their record. This might be why congressmen tend to spend much more time in their home states and districts than in Washington, they might also want to check up on their lawn, you know grass grows you gotta mow it. Constituent's views can affect congressmen without the threat of unseating them in an election though, because congressmen can anticipate what the voters will want and respond to this. They manage this through public opinion polling. The more sophisticated polling is, the better representatives are at crafting their message, and maybe even their votes to what their constituents want. We're going to devote a number of episodes to interest groups in the future, explaining what they are and where they come from. I know this because I'm psychic. But for now, it is important to recognize that they are incredibly important to congressmen although not for the reasons you might think. Let's go to the Thought Bubble. OK, when I mention interest groups or say the phrase "special interests" you probably imagine some guy in a suit -- maybe even a fedora -- surreptitiously handing a suitcase full of money to a congressmen in return for his vote on some issue of supreme importance to the interest group that the suit guy represents. Or maybe you think interest groups are more subtle than this, buying votes with campaign contributions, this stereotypical view presents a dramatic story and paints picture that sticks in your head but there is no empirical evidence that it's true. I hope the fedora part is true though. That's probably true. 
The main thing that interest groups provide to congressmen is information that they can use in writing a bill or making a policy case to their constituents. One of the big things in American government is that information is very important and very valuable. On the other hand, interest groups do give an awful lot of money to campaigns. They also provide a lot of research and assistance in the writing of bills. Interest groups are most influential at the committee stage of legislation, rather than when congressmen are casting floor votes, and their influence tends to be mostly negative. This means that rather than inserting items into legislation, it's much easier and more effective to exclude potential provisions from laws. Plus, this practice -- and maybe the fedoras a little bit -- makes it easier to obfuscate special interest influence on laws. It's harder to show that interest groups have kept something out of a law than that they put something into it. Thanks, Thought Bubble. That brings us to our third big influencer, political parties. Whoo Hooo [popper pops]. Oh, not that kind of party. The way that political parties affect lawmakers is even more complex than the role of interest groups. A disciplined party leadership can put pressure on a congressman to vote a certain way. They call them whips for a reason. But this only works when the party is unified and strong. The weaker the party, the more freedom the representative has to go rogue on some issues and votes. If there are many different factions within a party, there's less of a consequence for not voting along the party line. This is why I don't have friends. Freedom. The clearest example of this is the so-called Hastert Rule, named after former speaker Dennis Hastert, who would only bring a bill to the floor of the house for a vote if a majority of the majority party, in his case, Republicans, supported it. Side note, if you've got the majority and the party unity to pull off a stunt like that, you really end up looking like an effective speaker. Parties also help to organize logrolling, which is relatively straightforward quid-pro-quo bargaining. You vote for my farm bill, senator, and I'll support your banking bill. You vote for my not punching eagles bill, Eagle and I won't punch you. Not voting for it? [clacks to ground]. You've been logrolled. Is that how that word works? Logrolling occurs most obviously at the voting stage but can also be part of the writing of legislation in committees. When we talk about parties we talk about me. But when we talk about political parties we can't leave out the president, who is the de facto leader of his party and its most influential member. I'm pretty sure you're aware of that. The president has the most power when his party and the majority party in congress are the same. When this happens, Congress usually follows the president's lead and allows him to set the policy agenda. That way they can take some credit if the policy is a winner and avoid some blame if it turns out not so great. We saw this most recently with the creation of The Affordable Care Act (Obamacare), which was written and passed during the first few years of the Obama presidency, when his party, the Democrats, also had the majority in both houses. Divided government, when the president and the congressional majority are in opposite parties, works well for Congress too, because it makes it super easy to set a policy agenda: they just oppose whatever the president wants. 
This type of obstructionism is unfortunately pretty common in Congress today, just look at the years from 2010 to 2012 when Congress's program could be summed up in four words, "Repeal ObamaCare and replace it." Wait, that's not true, that's five words. To sum up, political parties are most influential over Congress when a single party controls both houses and the presidency and when the party leadership is strong enough to exert discipline and a degree of uniformity of policy. So that's about it for the factors that influence congressional decision making. Really Stan, that's it? That's all? I'm going on break. Well, obviously there are other factors like the personal lives of individual congressmen and maybe congressional history but since this is broad survey of American government and politics we can't easily get into that without taking less breaks. And, I'm gonna go on break. For my money, it's the structures of congress and most of all which party has the majority and thus controls the leadership and the committees that makes the most difference. Even though I want to say and believe that constituents matter most because I don't want to feed into this cynicism that seems to come so naturally to discussions of Congress. But I think we should try to avoid any cynicism and conspiracy theories when we try and figure out why a congressperson acted a certain way and recognize that any congressional decision is the product of the complex interaction of a number of factors, only some of which will be apparent. Each of these decisions will be conditioned and constrained by the structures and procedures of Congress itself. Thanks for watching. I'll see you next time. Crash Course: Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course: US Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course was made with the help of all these nice people, thanks for watching. |
US_Government_and_Politics | Types_of_Bureaucracies_Crash_Course_Government_and_Politics_16.txt | Hi I'm Craig and this is Crash Course: Government & Politics and today we're going to do something we try to do a lot at Crash Course, punch eagles. Also help you understand the news better. Now we're not going to explain it like Ezra Klein, well we sort of will, or break it down into graphs and charts like Nate Silver, or slow jam it like Jimmy Fallon. We're not going to slow jam it, Stan? Instead Bureaucrat Jimmy and I are going to give you some tools you need to better understand news stories and opinions about government and politics by describing the various types of bureaucracies that affect our lives. Ain't that right, BJ? Yeah. [Theme Music] There are a number of ways we can try to make sense of the vast federal bureaucracy, and one of the most straightforward is to categorize the different agencies by type. Now labeling 'em this way doesn't actually tell us what they do but you'll see them labeled this way in books and articles so you should be familiar with the terms that political writers use. The first type of bureaucracy is the cabinet-level agency, also called the executive department. Each of the fifteen departments is usually headed by a secretary, except the Justice Department, which is run by the Attorney General. You can find a list of the executive departments in any good textbook and I bet there might be a list on the internet somewhere too. Maybe. But the ones you hear the most about are the State Department, the Department of Defense, and the Treasury Department. Others, like the Department of the Interior or Housing and Urban Development you usually only hear about when there's a new secretary, or a scandal. Executive departments mostly provide services through sub-agencies. For example the FBI is technically part of the Department of Justice and the FDA is part of the Department of Health and Human Services. There are also independent agencies that are very similar to executive departments because their heads require Senate confirmation. Well their whole body requires -- their -- I'm talking about the head like they're the boss -- the head of the agency. The best example here is the CIA: Central Intelligence Agency, but NASA is another independent agency. Next we have the independent regulatory commissions which are supposed to be further removed from presidential oversight, which makes them independent. You can recognize them because they're usually called commissions, like the Federal Communications Commission, the Federal Trade Commission, and the Securities and Exchange Commission. They all have rule-making authority and the power to punish violations of the rules, often through fines. If you pay attention to stories about banking, especially banking malfeasance, you'll find plenty of stories about SEC fines. Last, and pretty much least, frankly, are the government corporations that are supposed to make profits but in fact tend to rely on government subsidies to stay afloat. The U.S. Postal Service and Amtrak are the best known, and for most Americans these are the agencies with which we have the most contact, especially the post office. A more useful way to think about bureaucracies is in terms of what they actually do, their functions. Although the problem here is that many bureaucracies have more than one function. Maybe the Thought Bubble can help us out. Thought Bubble! Let's do this! Some bureaucracies primarily serve clients. 
Many of the sub-agencies of the cabinet departments fit this bill, with the most obvious being the Food and Drug Administration, which serves the public by testing and approving new drugs; the Centers for Disease Control, which tries to do exactly what the name suggests; and the National Institutes of Health, which, among other things, sponsors research that improves citizens' health. All of these agencies are under the auspices of the Department of Health and Human Services. Another good example of a client-serving agency is the Department of Agriculture, which, in addition to rating meat, administers the Supplemental Nutrition Assistance Program, which is the snappy new name for food stamps. A second function that many agencies perform is to maintain the Union. One way agencies maintain the Union is by collecting revenue, because without money the country doesn't function. The main agency in charge of collecting revenue is the IRS. A second form of maintaining the Union is providing security for its citizens. The Department of Justice, which prosecutes federal crimes and protects civil rights, and the Department of Homeland Security, which, among other things, is in charge of airport security, are the main agencies that ensure internal security. Bureaucracies also keep Americans safe from external threats. This is the job of various intelligence services like the CIA and NSA, and especially the Department of Defense. A third function of bureaucracies is to regulate economic activity, primarily by creating and enforcing rules and regulations. Some of the agencies primarily charged with enforcing regulations are housed within executive departments, like OSHA, within the Department of Labor. Others, like the FCC and SEC, are independent. The fourth major function of bureaucracies is closely related to regulating economic activity. Some bureaucracies have the primary function of redistributing economic resources. Agencies concerned with fiscal and monetary policy handle the inflow and outflow of money in the economy through taxes, spending and interest rates. Providing direct aid to the poor, or welfare, is another function of bureaucracies that is even more complex and controversial. Most of these agencies, like the Social Security Administration, provide direct services, so we can see the overlap between the functions of agencies. Thanks Thought Bubble. I've been suggesting that even though they aren't mentioned in the Constitution, bureaucracies are pretty powerful, so I should probably explain where that power comes from. Basically, Thor's hammer. Actually no, it doesn't come from that at all. It comes from Congress, which, as we've seen, delegates power to executive agencies in varying degrees. But once the agencies exist, they create powers for themselves by maximizing their budgets. Bureaucracies lobby for their own interests, and the bigger and more important they are, the more money they get from Congress. We tend to think that the nation's defense is important, so the Department of Defense is able to convince Congress to give it lots of money. Although mo' money can lead to mo' problems, as Biggie helpfully reminded us. Money is also probably the most important lever of power in the U.S. In addition to getting money for themselves, another source of bureaucratic power is the expertise of bureaucrats themselves. The President, and especially Congress, will often rely on bureaucratic experts to tell them how a policy will be implemented. 
The source of their power is the expert's command of useful information. You shouldn't underestimate this, as any number of technology companies will tell you. So those are two ways of thinking about bureaucracies. I hope that they're helpful and at least when you hear about the FCC issuing a fine for Janet Jackson's "wardrobe malfunction" or something, you'll understand who's doing the punishing. And when you read about Congress cutting SNAP funding you'll be like "Oh snap! That's tied up with the farm bill!" Thanks for watching. I'll see you next episode. Crash Course: Government & Politics is produced in association with PBS Digital Studios. Support for Crash Course: Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of all these wardrobe malfunctions. Thanks for watching. |
US_Government_and_Politics | Congressional_Leadership_Crash_Course_Government_and_Politics_8.txt | Hi, I'm Craig and this is Crash Course Government and Politics, and today we're going to examine the leadership structure of Congress! I know, pretty exciting stuff! Now calm down, let me explain. [Theme Music] Are you ready to talk about Congressional leadership? You better be. So, the Congressional leadership are the Congresspersons with titles like Majority Leader and Minority Whip, and they have a lot to do with political parties, so we're going to talk about what the political parties do in Congress as well. Even if you don't follow politics, you probably have heard of the name and titles, if not the functions, of the various leaders. I'm going to need some help on this one, so... Let's go the Clone Zone! In the Clone Zone today I've got House Clone and Senate Clone to help me explain Congressional leadership. House Clone in the house! Take it away. The leader of the House of Representatives is the Speaker of the House, and he or she is the third most powerful person in the country. The speaker is always elected by whichever party is in the majority. These elections take place every two years, because the whole House is elected every two years. That's a lot of elections! At the time of the shooting of the episode the Speaker of the House is John Boehner from Ohio, known for his tan, tears, and tacos. Yeaah, he's oddly really good at making tacos. I had the barbecue pork at his house one time.... Yeah, I had the beef taco! He called it la lengua. Interesting choice. Yeah. The speaker has two assistants to help run the house. The Majority Whip has the primary task of counting votes on important pieces of legislation, and making the party members vote along with their party. Whipping them into line, I suppose. (whipping noise) The third in line is the House Majority Leader, who helps the majority and probably does other stuff, but mainly he's chosen by the speaker because he's popular with particular factions within the party. The Minority Party, that's the one with fewer members elected in a term, duh (scoffs), also has a Minority Leader, and a Minority Whip, but no speaker. The Minority Leader is the de facto spokesperson for the minority party in the House, which is why you often see him or her on TV, or on your phone, or, your iPad, or your pager. I don't think you can see it on your pager. Hey, that was some pretty good stuff you said there House Clone. What's the deal with the Senate, Senate Clone? Things are simpler over in the Senate because we have only 100 august members and not the rabble of 435 to try to "manage." The leader of the Senate is the Majority Leader and he (so far it's always been a he) is elected by the members of his party, which by definition is the majority party, the one with 51 or more members. There's also a Minority Leader, which, like the Minority Leader in the House, is the party's spokesperson. The Vice President presides over the Senate sessions when he doesn't have anything better to do, even though it's one of his few official constitutional duties. When the veep is off at a funeral, or undermining the president with one of his gaffes, the President pro tempore presides. The President pro tem is a largely ceremonial role that is given to the most senior member of the majority party. Senior here means longest serving, not necessarily oldest, although it can be the same thing. 
No one would want to be a Congressional leader if there was no power involved, so it's important to know what powers these folks have, and how they exercise them. Also, I'm not supposed to do this, but let's go to the Thought Bubble. I love saying that! The primary way that leaders in both the House and Senate exercise power is through committee assignments. By assigning certain members to certain committees, the leadership can ensure that their views will be represented on those committees. Also, leaders can reward members with good committee assignments, usually ones that allow members to connect with their constituents, or stay in the public eye, or punish wayward members with bad committee assignments. Like the committee for cleaning the toilets or something. The Speaker of the House is especially powerful in his role assigning Congressmen to committees. Congressional leaders shape the agenda of Congress, having a huge say in which issues get discussed and how that discussion takes place. The Speaker is very influential here, although how debate happens in the House is actually decided by the House Rules Committee, which makes this a rather powerful committee to be on. The Senate doesn't have a rules committee, so there's no rules! Aw, yeah! There's rules. The body as a whole decides how long debate will go on, and whether amendments will be allowed, but the Majority Leader, if he can control his party, still has a lot of say in what issues will get discussed. Agenda setting is often a negative power, which means that it is exercised by keeping items off the agenda rather than putting them on. It's much easier to keep something from being debated at all than to manage the debate once it's started, and it's also rather difficult for the media to discuss an issue that's never brought up, no matter how much the public might ask, "But why don't you talk about this thing that matters a lot to me?" Thanks, Thought Bubble. Speaking of the media, Congressional leaders can also wield power because they have greater access to the press and especially TV. That's the thing people used to watch. Instead of YouTube. This is largely a matter of efficiency. Media outlets have only so many reporters, and they aren't going to waste resources on the first-term Congressman from some district in upstate New York. No one even goes to upstate New York. Is there anyone in upstate New York? Has anyone ever gone to upstate New York? When the Speaker calls a press conference reporters show up, and the Majority Leader can usually get on the Sunday talk shows if he wants. Media access is a pretty handy way to set an agenda for the public. Finally, Congressional leaders exercise a lot of power through their ability to raise money and to funnel it into their colleague's campaign. I want colleagues like that. Each House of Congress has a special campaign committee and whoever chairs it has the ability to shift campaign funds to the race that needs it most, or to the Congressperson he or she most wants to influence. The official leadership has little trouble raising money since donors want to give to proven winners who have a lot of power, and get the most bang for their buck. Since the leaders usually win their races easily, this is more true in the House than the Senate. They frequently have extra campaign money to give. Often the donations are given to political action committees, or PACs, which we'll talk about in another episode. 
We're going to spend a lot of time talking about political parties, and probably having parties of our own in later episodes, especially their role in elections, but they are really important once Congress is in office too. One way that parties matter is incredibly obvious if you stop to think about it. It's contained in the phrase "majority rules." This is especially true in the House, where the majority party chooses the Speaker, but it's also the case in the Senate. This is why ultimately political parties organize and raise so much money to win elections: if one of the parties controls both houses and the presidency, as the Democrats did in 2008 through 2009, that party is much more likely to actually get things done. The party that's the majority in each house is also the majority on all of that house's committees, or at least the important ones, and, as we saw in the last episode, committees are where most of the legislative work in Congress gets done. Gets did. As you probably figured out, the majority party chooses the committee chairs, too, so it's really got a lock on that sweet legislative agenda. Parties also can make Congress more efficient by providing a framework for cooperation. The party provides a common set of values, so a Republican from Florida and one from Wyoming will have something in common, even if their constituents don't. These common values can be the basis of legislation. Sometimes. But sometimes -- [punches eagle] -- that happens. Political parties also provide discipline in the process. When a party is more unified it's easier for the leader to set an agenda and get the membership to stick to it. Right? Unified. Lack of party unity can make it difficult for the leadership. In 2011 a large group of very conservative newbie Congressmen associated with the Tea Party Movement made it difficult for Speaker Boehner to put forward an agenda. The Tea Party caucus felt Boehner compromised too much with the Democrats, even though his agenda was, by some standards, pretty conservative. As a result, Congress wasn't able to get much done, except make itself unpopular. So, if you combine all this with the stuff we learned about Congressional committees, you should have a pretty good understanding of how Congress actually works. Yay! Understanding! As this course progresses and you fall in love with politics, and myself, be on the lookout for how the leadership sets the agenda and pay attention to what issues might be floating around that aren't getting discussed in Congress. Understanding who the Congressional leaders are, and knowing their motivations, can give you a sense of why things do and don't get done by the government. And, if you're lucky, you live in a district represented by a member of leadership. In that case, the person you vote for will be in the news all the time, which is kind of satisfying, I guess. Yeah, I voted for that guy! Yeah! And now he's on the TV! Yeah! Thanks for watching. We'll see you next week. What do you think, can we be unified? Can we get things done? We can't. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made by all of these nice people. Thanks for watching. Someday, maybe the eagle and I will get along. Not today. Not today. |
US_Government_and_Politics | Media_Institution_Crash_Course_Government_and_Politics_44.txt | Hello. I'm Craig, and this is Crash Course Government and Politics, and today, we're going to talk about one of my favorite subjects: the media and its role in politics. At last, we're finally going to hear a fair and balanced account of how the lamestream media is distorting the American public's understanding of current affairs! Yeah, sure, except for the segment of the American public that gets all of their information from right wing media sources. Ok, so the media can be a thorny issue, but, we are talking about politics here, and technically we are a part of the media, so we should probably say something about it. Other than how awesome we are at it. [Theme Music] (deep, growly voice) I am the media! Ha ha ha ha ha! (regular voice) So, in terms of politics, the main function of the media is to provide information so people can make decisions and get involved in politics. (deep voice) Ha ha ha ha! (regular voice) Sorry. For the economically inclined, the media lowers information costs. Rather than going out and researching what we might want to know, which takes time and effort, various media outlets tell us stuff that they think we will find useful. We probably have a sense of what we mean by media, but it's a good idea to break it down into types, because each of the forms of media work slightly differently, and the role of the different types has changed a lot over the years. There's a lot more beards, for instance. The oldest form of media, at least, that we are going to talk about today, is print, which means newspapers and magazines. Print is no longer the main source of information for Americans, but it sure used to be. Especially in the days when some large cities had papers that would put out a morning and afternoon edition. But just because fewer people are reading print media doesn't mean that they aren't very important. For one thing, most of the other news media organizations rely on print for their news. Newspapers like the New York Times and the Washington Post still break most major news stories and provide a lot of information for television and Internet news. Print media also tend to offer more detail and comprehensive news stories, although this is changing quickly. One aspect of print media that is often overlooked is that it's still the main source of news for educated elites, and these are the people whose opinions tend to matter a lot in making policy. If you're skeptical about this, watch a morning news program or check out an article on a news aggregator website and see how often the program or article references the Times or the Post. You might be surprised. The second oldest, and in some ways still most important source of political information comes from broadcast media. As a Youtuber, I hate to say this, but television still reaches more Americans than any other form of media, and remains an important source of political information. Radio is less important at reaching a diverse mass audience, although talk radio, especially conservative talk radio, matters a lot in the political media landscape. But despite its massive reach, broadcast media has a significant drawback in shaping public opinion: television stories are very short, usually less than two minutes long, and therefore less informative. A third major media force in politics is, what's that called? The Internet! You probably already know that, though, since you're watching this video. 
It's a little bit tricky to write about how the Internet affects politics because it's changing so rapidly. But there are a few things we can say. As a news source, the primary advantage of the Internet is that it can update so quickly. This is great for breaking news, although there's an argument that it pushes news organizations towards creating more stories and hot takes (also my nickname in high school), rather than deep reporting. In the early days of the web, Internet news was mostly just online versions of print newspapers, but that landscape has shifted, a lot. First came blogs about politics, and then sites dedicated to politics. Which, this being America, tended to polarize into right wing and left wing sites. The growth of social media provided new avenues for politicians, campaigns, and parties to get their information to the masses, and now every candidate has at least one Twitter profile and Facebook page. And they've probably got a Snapchat and a Tumblr, and a, uh, maybe a Tinder, and staff dedicated to maintaining a social media presence. This can be great for lots of information about a candidate or their policies, but it's hardly unbiased news, so if this is your only source of information, you probably aren't going to get the full story. For a sense of how the media landscape has changed over the past two decades, check out this chart. That's right, we got charts here. Cause we are video media. The surprising thing to me about this data is not that so many more people are getting their news from online sources, but that such a high percentage still relies on television news, especially if you combine local, national, and cable news programs. I, sometimes I forget I have a TV. I guess you can chalk it up to information costs. Without any research on your part, watching a nightly news program will keep you decently informed, and it only takes twenty-two minutes of your time, without the commercials. So just sit there and let the TV do the thinking for ya. I should probably talk about those commercials a little bit. One of the really great things about the Internet is that it opens up the possibility of a lot more non-commercially supported information becoming available. There's no commercials on the Internet... none. A serious complaint about broadcast and print journalism is that, because they are primarily financed by advertising, news organizations have an incentive not to report on stories that are critical of their parent organizations or advertisers. This doesn't stop us from getting negative reports about News Corp or the Washington Post group, but they're unlikely to report on themselves. So this question of how much we can trust the news comes down largely to issues of bias, because it's pretty rare that news organizations lie outright. Without the public trust, readers and viewers will just go somewhere else. This doesn't mean that newspapers, and to a much lesser extent, television companies, are without bias, though. The New York Times and Washington Post do tend to be more liberal than conservative, but overall they're probably balanced out by the Wall Street Journal, Fox News, and talk radio, which tend to be conservative. Putting political bias aside, the most persistent bias in the news seems to be towards conflict and scandal! And these are not really liberal or conservative issues. If anything, the news media is most biased towards conflict, which may explain why you don't see a lot of stories about compromise. 
Two politicians smiled and shook hands today, and then walked away, happy. That doesn't sound interesting. Let's look at the three main factors that affect news coverage in the Thought Bubble. The first factor influencing the news is the journalists who make it. The journalists are even more important than their bosses, the publishers, because they have the discretion to report and interpret the news. If you think the news is just the facts, then it's useful to remember that the New York Times slogan used to be: all the news that's fit to print. Do reporters have a bias towards one political ideology or the other? Probably. More journalists identify as liberal and as Democrats than say they are conservative or Republican. The next factor to consider is the source of political news: the politicians themselves. Politicians do a lot of things to create a positive media image for themselves, and this goes beyond shaking hands and kissing babies. They show up at important events, like opening day of the baseball season, or a natural disaster, and make the most of these photo opportunities. They also cultivate relationships with reporters, because if a journalist likes you, they might be more likely to write something nice about you. Or at least something less mean. One of the best ways to cultivate a good relationship with a journalist is by leaking information to them. A leak is a disclosure of confidential information to a journalist, and politicians can use them to cement relationships with news organizations, and to make sure that a story is reported the way that they want it reported. Reporters have a hard time refusing a scoop, so if a politician gives inside information, they can usually influence the way the reporter will tell the story. Thanks, Thought Bubble. Even more important than leaks are press releases. These are stories written by politicians, or more likely their staff, that are released to the press. Politicians hope that stories will be reported with minimal revisions, and they often are, especially since there's so much pressure for news organizations to put out content as quickly as possible. News organizations like them a lot, because they lower the cost of producing information. But advocates of responsible journalism worry a lot about them because, coming directly from politicians, they're certain to be biased. And when they're reported as straight news, they can be misleading. The third factor influencing the media is us, the consumers of news. Why do we matter if news is just a matter of reporting what happens? We matter because producers of news want us to read and watch it, so they make news that we will want to read and watch. In practice, this means that the news will be tailored to the groups of people most likely to consume it. And those people are not always a good cross-section of Americans as a whole. People who watch and read the news tend to be better educated and wealthier than those who don't, and media producers respond to this. What this means in practice is that certain segments of the population, and their concerns, are under-reported. Among the large groups that don't get media attention that is proportional to their size are the working class, especially union workers, religious groups, veterans, and various minority groups. So the media plays an important role in American politics as the filter through which politicians can make information available to the public. 
The media, as the name suggests, mediates this information and shapes it in powerful ways. In the sense that it doesn't actually create or change the structures of government, you could argue that the media isn't all that important to the American political system. But if you believe that information is key to understanding why and how American politicians act, then we start to see media in a new light. In many ways the most important thing about media is what it doesn't cover. It's really hard for voters and other citizens to formulate opinions and try to influence their elected representatives if they don't even know something is an issue. Even in the twenty-first century, when there are so many more sources of information to choose from, there are still stories we don't get to hear. The first step to hearing them is probably a better understanding of the media and its importance as a political institution. Thanks for watching, see you next time. Crash Course is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course is made with the help of all of these unbiased journalists. Thank you. Except for that guy. He's pretty biased. That one's not even a journalist. I don't even know who that is. |
US_Government_and_Politics | Congressional_Committees_Crash_Course_Government_and_Politics_7.txt | Hi, I'm Craig and this is Crash Course Government and Politics and today we're going to get down and dirty wallowing in the mud that is Congress. Okay, maybe that's a little unfair, but the workings of Congress are kind of arcane or byzantine or maybe let's just say extremely complex and confusing, like me, or Game of Thrones without the nudity. Some of the nudity, maybe. However, Congress is the most important branch, so it would probably behoove most Americans to know how it works. I'm going to try to explain. Be prepared to be behooved. [Theme Music] Both the House of Representatives and the Senate are divided up into committees in order to make them more efficient. The committees you hear about most are the standing committees, which are relatively permanent and handle the day-to-day business of Congress. The House has 19 standing committees and the Senate 16. Congressmen and Senators serve on multiple committees. Each committee has a chairperson, or chair, who is the one who usually gets mentioned in the press, which is why you would know the name of the chair of the House Ways and Means Committee. Tell us in the comments if you do know, or tell us if you are on the committee, or just say hi. Congress creates special or select committees to deal with particular issues that are beyond the jurisdiction of standing committees. Some of them are temporary and some, like the Senate Select Committee on Intelligence, are permanent. Some of them have only an advisory function which means they can't write laws. The Select Committee on Energy Independence and Global Warming has only advisory authority which tells you pretty much all you need to know about Congress and climate change. There are joint committees made up of members of both houses. Most of them are standing committees and they don't do a lot although the joint Committee on the Library oversees the Library of Congress, without which we would not be able to use a lot of these pictures. Like that one, and that one, and ooh that one's my favorite. Other committees are conference committees, which are created to reconcile a bill when the House and Senate write different versions of it, but I'll talk about those later when we try to figure out how a bill becomes a law. So why does Congress have so many committees? The main reason is that it's more efficient to write legislation in a smaller group rather than a larger one. Congressional committees also allow Congressmen to develop expertise on certain topics. So a Congressperson from Iowa can get on an agriculture committee because that is an issue he presumably knows something about if he pays attention to his constituents. Or a Congressperson from Oklahoma could be on the Regulation of Wind Rolling Down the Plain Committee. Committees allow members of Congress to follow their own interests, so someone passionate about national defense can try to get on the armed services committee. Probably more important, serving on a committee is something that a Congressperson can claim credit for and use to build up his or her brand when it comes time for reelection. Congress also has committees for historical reasons. Congress is pretty tradish, which is what you say when you don't have time to say traditional. Anyway, it doesn't see much need to change a system that has worked, for the most part, since 1825. That doesn't mean that Congress hasn't tried to tweak the system. 
Let's talk about how committees actually work in the Thought Bubble. Any member of Congress can propose a bill, this is called proposal power, but it has to go to a committee first. Then to get to the rest of the House or Senate it has to be reported out of committee. The chair determines the agenda by choosing which issues get considered. In the House the Speaker refers bills to particular committees, but the committee chair has some discretion over whether or not to act on the bills. This power to control what ideas do or do not become bills is what political scientists call "Gatekeeping Authority", and it's a remarkably important power that we rarely ever think about, largely because when a bill doesn't make it on to the agenda, there's not much to write or talk about. The committee chairs also manage the actual process of writing a bill, which is called mark-up, and the vote on the bill in the committee itself. If a bill doesn't receive a majority of votes in the committee, it won't be reported out to the full House or Senate. In this case we say the bill "died in committee" and we have a small funeral on the National Mall. Nah we just put it in the shredder. Anyway, committee voting is kind of an efficient practice. If a bill can't command a majority in a small committee it doesn't have much chance in the floor of either house. Committees can kill bills by just not voting on them, but it is possible in the House to force them to vote by filing a discharge petition - this almost never happens. Gatekeeping Authority is Congress's most important power, but it also has oversight power, which is an after-the-fact authority to check up on how law is being implemented. Committees exercise oversight by assigning staff to scrutinize a particular law or policy and by holding hearings. Holding hearings is an excellent way to take a position on a particular issue. Thanks Thought Bubble. So those are the basics of how committees work, but I promised you we'd go beyond the basics, so here we go into the Realm of Congressional History. Since Congress started using committees they have made a number of changes, but the ones that have bent the Congress into its current shape occurred under the speakership of Newt Gingrich in 1994. Overall Gingrich increased the power of the Speaker, who was already pretty powerful. The number of subcommittees was reduced, and seniority rules in appointing chairs were changed. Before Gingrich or "BG" the chair of a committee was usually the longest serving member of the majority party, which for most of the 20th century was the Democrats. AG Congress, or Anno Gingrichy Congress, holds votes to choose the chairs. The Speaker has a lot of influence over who gets chosen on these votes, which happen more regularly because the Republicans also impose term limits on the committee chairs. Being able to offer chairmanships to loyal party members gives the Speaker a lot more influence over the committees themselves. The Speaker also increased his, or her -- this is the first time we can say that, thanks Nancy Pelosi -- power to refer bills to committee and act as gatekeeper. Gingrich also made changes to congressional staffing. But before we discuss the changes, let's spend a minute or two looking at Congressional staff in general. There are two types of congressional staff, the Staff Assistants that each Congressperson or Senator has to help her or him with the actual job of being a legislator, and the Staff Agencies that work for Congress as a whole. 
The staff of a Congressperson is incredibly important. Some staffers' job is to research and write legislation while others do case work, like responding to constituents' requests. Some staffers perform personal functions, like keeping track of a Congressperson's calendar, or most importantly making coffee - can we get a staffer in here? As Congresspeople spend more and more time raising money, more and more of the actual legislative work is done by staff. In addition to the individual staffers, Congress as a whole has specialized staff agencies that are supposed to be more independent. You may have heard of these agencies, or at least some of them. The Congressional Research Service is supposed to perform unbiased factual research for Congresspeople and their staff to help them in the process of writing the actual bills. The Government Accountability Office is a branch of Congress that can investigate the finances and administration of any government administrative office. The Congressional Budget Office assesses the likely costs and impact of legislation. When the CBO looks at the cost of a particular bill it's called "scoring the bill." The Congressional reforms after 1994 generally increased the number of individual staff and reduced the staff of the staff agencies. This means that more legislation comes out of the offices of individual Congresspeople. The last feature of Congress that I'm going to mention, briefly because their actual function and importance is nebulous, is the caucus system. These are caucuses in Congress, so don't confuse them with the caucuses that some states use to choose candidates for office, like the ones in Iowa. Caucuses are semi-formal groups of Congresspeople organized around particular identities or interests. Semi-formal in this case doesn't mean that they wear suits and ties, it means that they don't have official function in the legislative process. But you know what? Class it up a little - just try to look nice. The Congressional Black Caucus is made up of the African American members of the legislature. The Republican Study Group is the conservative caucus that meets to discuss conservative issues and develop legislative strategies. Since 2010 there is also a Tea Party caucus in Congress. There are also caucuses for very specific interests like the Bike Caucus that focuses on cycling. There should also be a Beard Caucus, shouldn't there? Is there a Beard Caucus Stan? No? What about an eagle punching caucus? The purpose of these caucuses is for like minded people to gather and discuss ideas. The caucuses can help members of Congress coordinate their efforts and also provide leadership opportunities for individual Congresspeople outside of the more formal structures of committees. There are a lot of terms and details to remember, but here's the big thing to take away: caucuses, congressional staff, and especially committees, all exist to make the process of lawmaking more efficient. In particular, committees and staff allow individual legislators to develop expertise; this is the theory anyway. Yes it's a theory. Committees also serve a political function of helping Congresspeople build an identity for voters that should help them get elected. In some ways this is just as important in the role in the process of making actual legislation. When Congress doesn't pass many laws, committee membership, or better yet, being a committee chair is one of the only ways that a Congressperson can distinguish him or herself. 
At least it gives you something more to learn about incumbents when you're making your voting choices. Thanks for watching. I'll see you next week. Crash Course is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org Crash Course is made with all of these lovely people. Thanks for watching. Staffer! Coffee! Please. Thank you. |
US_Government_and_Politics | Civil_Rights_Liberties_Crash_Course_Government_Politics_23.txt | Hi, I'm Craig, and this is Crash Course Government and Politics, and today we're finally, at long last, moving on from the structures and branches of government and onto the structures and branches of trees. This is a nature show now. Okay, we're not moving on completely, because we're still talking about courts, but today we'll be discussing actual court decisions, and the kind of things that courts rule on, rather than how they do it. That's right, we're moving onto civil rights and civil liberties. [Theme Music] Okay, first I want to talk about something that I find confusing: the difference between civil rights and civil liberties. Usually in America, we use the terms interchangeably, which adds to the confusion, but lawyers and political scientists draw a distinction, so you should know about it. Then you can go back to calling civil liberties "rights" and civil rights "liberties," and most people won't care. But I'll care. I'll be disappointed in you. So civil liberties are limitations placed on the government. Basically, they are things the government can't do that might interfere with your personal freedom. Civil rights are curbs on the power of majorities to make decisions that would benefit some at the expense of others. Basically, civil rights are guarantees of equal citizenship, and they mean that citizens are protected from discrimination by majorities. Take, for example, same sex marriage. You could think of it as a liberty, except that not everyone is free to marry at any given time. Six year olds can't get married, and you can't marry your sibling. But same sex marriage is a civil rights issue because in the states that don't allow it, the majority of voters is denying something to a minority, creating inequality in the way that the laws work. Now, just to make things more confusing, lawyers often talk about the difference between substantive and procedural liberties, but they usually call them rights instead of liberties. That's a lawyer eagle. A legal eagle. Substantive liberties are limits on what the government can do. For example, the first amendment says that congress shall make no law establishing religion. So this means that they cannot create a national church or declare that Christianity or Islam or Hinduism is the official religion of the US. Procedural liberties are limits on how the government can act. For example, in America in courtroom dramas, there is a presumption that someone is innocent until proven guilty. This presumption means that in criminal cases, juries and judges have to act as though the accused is innocent until the prosecution convinces them otherwise. If they are not convinced, the accused person doesn't go to prison. So now that we understand the difference between civil rights and civil liberties perfectly because of my amazing explanation, let's focus on liberties and try to figure out what they are and where they come from, with some help from Thought Bubble. So civil liberties are contained in the incredibly unhelpfully named "Bill of Rights," which isn't even called that in the Constitution. It's just a name that we give to the first 10 amendments. The 9th amendment is included to remind us that the list of liberties and/or rights in the other amendments isn't exhaustive. There might be other rights out there, but the constitution doesn't specifically say what they are. Thanks constitution. In some cases, it's pretty clear. 
The first amendment, for example, says that "congress shall make no law respecting the establishment of religion, or abridging the free exercise thereof, or abridging the freedom of speech or of the press to assemble or to infringe the right to petition the government for redress of grievances." Pretty straight forward. But other cases are not so clear. The second amendment says "the right to keep and bear arms shall not be infringed," but it doesn't say by whom. Same thing with the 5th amendment guarantees against self incrimination. Could congress force you to incriminate yourself? How would they do that? And the 8th amendment prohibits cruel and unusual punishments, like presumably shock pens, but it doesn't say who is forbidden from cruelly and unusually punishing. My mom wasn't forbidden from keeping me from playing video games. As usual, we might expect the Supreme Court to sort out this mess, but initially they were no help at all. In a case that you've probably never heard of, called Barron vs. Baltimore, decided in 1833, the court said that the Bill of Rights applied to the national, meaning federal government, not to the states. They said that every American has dual citizenship, but not the good kind. They meant you are a citizen of the US and of the state in which you reside, and basically that the constitution only protected you from the federal government. In other words, if the state of Indiana wanted to punish me cruelly or unusually, they could. Thanks, Thought Bubble. So Barron vs. Baltimore left Americans in a bit of a civil liberties pickle, and not the good kind of pickle. They were protected from the national government doing terrible things, like quartering troops in their homes, but not from the state doing the same thing. And since the state was close to home and the national government was far away and, compared with today, tiny and weak, these protections were pretty weaksauce, so what happened to change this? I hope something, because I like a zesty government sauce. The 14th amendment & the Supreme Court happened. After the Civil War, as part of the reconstruction, the 13th, 14th, and 15th amendments were added to the constitution. Of these, the 14th is the most important, probably the most important of all amendments. What does it say? Well the first section, which is the one that really matters, and I'm not going to read the whole thing okay? It reads "all persons born or naturalized in the United States and subject to the jurisdiction thereof, are citizens of the United States and of the state wherein they reside. No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States. Nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny any person within its jurisdiction the equal protection of the laws." What this means is that the federal government's like: "Listen states, you can't be dumb. Just stop it. Okay? We're all in this together. Alright?" It means states can't deny equal protection, civil rights, or due process, which in this case encompasses civil liberties. This in theory makes it impossible for states to infringe upon the liberties and the Bill of Rights. But the legal system being what it is, it's not quite that simple. Did you think it'd be simple? The Supreme Court could have just ruled that all the rights and liberties in the Bill of Rights applied to the states, which seems to be what the 14th amendment implies, but they didn't. 
Instead they ruled that each of the rights or liberties had to be incorporated against the states on a case-by-case basis. This is a concept called selective incorporation, and it supposedly reserves more power to the states. What it really means is that when people felt that the states were violating their liberties, they had to go to the Supreme Court, which by now has incorporated almost every clause in the Bill of Rights against the states. You want examples? We've got them. In the famous case of Gitlow vs. New York, the court ruled that the first amendment protection of the freedom of speech could not be violated by a state. In this case, it was New York, but once a liberty is incorporated against one state, it's incorporated against all of them. In Mapp vs. Ohio, the court ruled that states couldn't use evidence gathered from warrantless searches. In Benton vs. Maryland, the right against Double Jeopardy, being tried for the same crime twice, was incorporated against the states. By now, almost all the rights and liberties mentioned in the first ten Amendments have been incorporated against the states. This means that individuals are protected from all their governments taking away their liberties, and that's a good thing. I loves my liberties. So we'll be talking about civil rights and civil liberties for a number of episodes, and this topic, while confusing, can be lots of fun. We might play liberties bingo, or civil rights kickball. I don't know what those things are, but they sound like fun. The main thing to remember is that going all the way back to the framers, Americans have been concerned about a too powerful government taking away citizens' freedoms. Yes, these liberties apply mostly to citizens, although some do apply to non-citizens, too. In order to put limits on government, the Bill of Rights was added to the Constitution in 1789, but this didn't mean that those limits applied to the states, probably because the founders expected states to be the main protectors of rights, and in fact, many state constitutions have provisions that copy or in some ways, go beyond what's in the US Constitution. Only after the 14th Amendment was passed, following the Civil War, did the national government get around to addressing this issue of states denying people's liberties. Even then, it took numerous court cases for us to get to the point that most civil liberties that we assume cannot be taken away by the government have actually been guaranteed through the process of selective incorporation. It's taken a long time to get where we are, and there's still a long way to go. Protecting civil liberties requires vigilant citizens to be aware of the ways that government is overstepping its bounds, but that's only half the equation. It's also vital that our majority pay attention the civil rights of others, and that we ensure that everyone is afforded the same protections and benefits promised by our system of law. Thanks for watching. I'll see you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course is made with the help of these nice people who are innocent until proven guilty. Thanks for watching. |
US_Government_and_Politics | Freedom_of_Speech_Crash_Course_Government_and_Politics_25.txt | Hi, I'm Craig, and this is Crash Course Government and Politics, and today, we're talking about free speech. Other Craig: Finally, today we can let loose and establish the kinds of things we can say to criticize our government, like the crazy idea that money and speech are the same thing. Other Other Craig: Not so fast, Clone, the Supreme Court has ruled that spending money, at least in the political context, is speech. You do have the right to criticize that decision though. Unless your boss or YouTube says that you can't. Craig: All right, we're trying to talk about free speech, shut up. Let's get started and see if we can figure out what the limits of free speech are, assuming that there are some. Other Other Craig: There aren't. Craig: That's a lie. But I'm free to say that. [Theme Music] Craig: There are two really important things to remember about the First Amendment protection of free speech. The primary reason we have freedom of speech is to allow for public criticism of the stupid government. Stupid government. That's the sort of thing that can land you in jail in countries that don't have strong free speech protections, or should I say, you would be Putin jail, heh, don't put me in jail. Oh, that's right, I'm in the US, it doesn't matter. The stories of oversensitive kings and dictators silencing people who question their rule or even make jokes at their expense are too numerous to recount, but for the most part, that kinda thing doesn't happen in the US, which is why no one gets arrested for carrying around a giant picture of Obama as Hitler, or former President Bush as a monkey. Well, that's stuff's okay, as far as the First Amendment is concerned, but that doesn't mean it's respectful or in good taste. The second thing to remember is that the First Amendment protects you from the government doing things that try to deny your speech, but not anyone else. What this means is that you don't have an absolute right to say whatever you want, wherever you want, to whomever you want and not suffer any consequences. Isn't that right, Stan, you dingus? I'm fired? I was just kidding; it was a joke. If you work for a private company, your boss can certainly fire you for saying mean things about them or revealing company secrets, and you don't have any First Amendment claim against them. Unless, of course, your boss is the government, or a branch of the government, in which case, you might be able to claim a First Amendment right. See, like most things, it's complicated. Among the speech that is protected, not all of it has the same level of protection under the First Amendment. Now, let's exercise our right to free Thought Bubble. The speech that gets the strongest protection is political speech. Criticism of, but also praise for particular officials, their parties, or their policies is usually protected. It's given what is called preferred position, which means that any law or regulation or executive act that limits political speech is almost always struck down by courts. The big case that made pretty much the final decision on political speech was Brandenburg v. Ohio in 1968. In this case, a Ku Klux Klan leader was making a speech that, as you can imagine, was offensive to a lot of people and could have been considered threatening, too. The court ruled that because the speech was political, it was protected by the First Amendment, no matter how outrageous it was. 
The court said, "The Constitutional guarantees of free speech and free press do not permit a state to forbid or proscribe advocacy of the use of force or law violation except where such advocacy is directed to inciting or producing imminent action and is likely to produce such action." According to the court, the First Amendment protects speech even if it advocates the use of force or encourages people to violate the law. So you can advocate overthrowing the government or not paying your taxes as much as you want, unless what you say is likely to produce the thing you're advocating. Overthrowing the government, say. And it is likely to happen imminently, meaning very soon after you make the statement. This case limited an older standard regarding free speech that was put forward in the case US v. Schenck in 1917. In that case, Schenck distributed pamphlets urging people to avoid the draft for World War I. This was a violation of the Espionage Act, which made it a crime to obstruct the draft or the war effort. The law was more complicated than that, but that's the basic gist. In his decision on this case, Oliver Wendell Holmes wrote that, "When that speech presents a clear and present danger, the state can then abridge that person's speech." Memorably, he explained that the First Amendment does not protect a person who shouts "fire" in a crowded theater. In later cases, Holmes limited this idea, largely because it gives the government a lot of leeway to say what kind of speech creates danger, especially during a war, as was the case with Schenck. Thanks, Thought Bubble. Political speech isn't the only type of speech that the courts have addressed. Symbolic speech can also be protected by the First Amendment, and if that symbolic speech has political content, it usually is protected. Symbolic speech includes wearing armbands, carrying signs, or even wearing a jacket with an obscene word directed at the military draft. Symbolic speech also includes burning an American flag, which pretty much is always a political message. Not all symbolic speech is protected, though. For example, if you're a high school student who holds up a banner that reads, "Bong hits 4 Jesus" at a school-sponsored function, don't expect that the First Amendment will prevent the school, a government agent, from suspending you. And yes, that really happened. Also, this is not symbolic speech. That's violence. Even hate speech is protected. Even if it's really hateful, like burning a cross on a person's lawn, although this might be prosecuted as vandalism or trespassing. Public universities that try to punish hate speech have seen their discipline code struck down. Commercial speech might not be protected, but if it's a political commercial, it will be, and as we've pointed out before, spending money on political campaigns has been determined to be speech that is protected by the First Amendment, although we shall see donations to political campaigns are still treated differently, at least for now. Pretty much the only kind of speech that's not protected, other than speech that's likely to incite immediate violence, is what's called fightin' words. In the actual case that dealt with fighting words, Chaplinsky v New Hampshire, the defendant uttered what seemed more like insults than a call to engage in fisticuffs. What'd you call me? Still, the court ruled that some words were so insulting that they were more than likely to result in a fight, so fighting words are not protected speech. 
One thing to note, though, the fighting words free speech exception is almost never used. So as you can see, the First Amendment pretty much protects you from the government throwing you in jail or otherwise punishing you for what you say in most instances, but it's important to remember than the First Amendment is not unlimited. Most important, it only protects you from government action, not the action of private people, especially your employers. One final example might make this clear. In Pickering v. Board of Education, a public school teacher wrote a letter to the editor of his local paper complaining about the way that the school board was spending money on the schools. He didn't write it on school time or using school paper or email, especially since it was 1968 and there was no email. The school board, or his principal, fired him. He brought the case to the Supreme Court, claiming that he was fired for his speech, which was political in nature criticizing local government and not for anything related to his job performance, and he won. But the only reason he was able to get his job back is that his employer was the government, so it was the government that punished him for speaking out. For most of us, complaining about our employer's policies may get us fired, and unless we are government employees, we can't claim that it violated our First Amendment rights. The First Amendment, like all of the Amendments, is meant to protect us from an overreaching government. There are other types of laws that help us deal with individuals who do things that we think are wrong, but we'll talk about those in another episode. Thanks for watching. See ya next time. Mmmph! Third eagle punch in the video. Is that too much? It doesn't matter. I'm free to do it. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports nonprofits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course was made with the help of all of these free speakers. Thanks for watching. |
US_Government_and_Politics | Political_Parties_Crash_Course_Government_and_Politics_40.txt | Hi, I'm Craig and this is Crash Course Government and Politics and today we're gonna talk about parties. Woo! Yeah! No, not those kind of parties. We're talking about political parties, which can be a lot less fun. Woo. [Theme Music] So, today we're talking about why we have political parties and the role of parties in American politics. But before we dive into the pool - some would say a cesspool - that is political parties, let's have a definition. Political party: a team of politicians, activists and voters whose goal is to win control of government. So kind of an important point: the goal of a party is to control government and in the U.S. that means electing people who agree with and usually are members of the party. So above everything else, parties exist to win elections. Parties don't mainly focus on influencing policies, although particular policies are often associated with particular parties. Influencing elected officials is mainly the job of interest groups, who we'll talk about soon. For now, let's keep in mind that political parties and interest groups are not the same thing. So let's look at three reasons why we have political parties. One: I dunno. Two: I dunno. Three: I dunno. I do know, I'll tell you in a second. First, we create political parties to facilitate collective action in the electoral process. Given that parties exist to win elections, this is probably the main reason we have them. But what does facilitate collective action in the electoral process mean exactly? Basically, it means that parties make it easier for voters to form groups that will vote in certain ways. Here's an example, albeit one that overgeneralizes a little bit. Just come on, just go with it. In general, republican candidates support policies that are more friendly to business, so if you're a businessman, you know that affiliating yourself with the republican party is probably going to benefit you. The second reason given for forming political parties is that they facilitate policy making. This reasoning applies to elected members who belong to political parties, not to voters. So membership in a party allows politicians to work together. It's easier for democrats to form alliances with other democrats and sometimes these alliances have the added benefit of strengthening the party. Party affiliation can help legislators from different places work together. For example, common republicanness should make it easier for a republican from rural Kansas to work with another republican from suburban Florida. Sometimes though, party ideology can prevent even members of the same party from working together as happened in 2008 when republicans couldn't agree on whether the government should bail out struggling banks. A third, and I must say not altogether convincing reason why we have political parties is to deal with the problem of politicians' ambition. According to this idea, parties provide a structure, maybe even a career ladder for politicians so that they're not always acting in their own self interest. The fact that the party provides different leadership possibilities and some sense of discipline prevents ambitious politicians with largely similar views from competing against each other, like say 16 candidates running for president all in the same party. Just wouldn't happen. Ever. So that's why political parties exist, but what do they do? Well, they have five main functions in the U.S. 
and I'll leave it up to you to decide which - if any - is the most important. Eagle doesn't get to decide. Eagle doesn't get to decide anything. So here's the list: 1. Recruit candidates; 2. Nominate candidates; 3. Get out the vote; 4. Facilitate electoral choice; 5. Influence national government. The first thing that parties have to do if they want to win elections is find candidates. This is a two-step process involving recruiting and nominating. We've already mentioned that in order to be a good candidate for office, you generally have to have an unblemished personal record - like me - or at least be really good at heartfelt apologies. I don't have an unblemished record and I'm very sorry about that. Also, you need the ability to raise money. Of course, in order to avoid any problems with campaign financing, it's helpful to have money yourself, but why spend your own money if you can convince people to give money to your campaign? Maybe print out some hats. Merch works, merch helps raise money. There are lots of people who want to run for office, although there's some debate about whether we're really getting the best candidates. The pay isn't great and neither is the prestige anymore, and then there's the scrutiny that a run for office puts you and your family through. Parties play an important role in sifting through all the people who want to run and picking those who have the best chance of winning. Nomination is the process through which a potential candidate is actually chosen to represent a particular party in an election. When we talk about nominations in the US, we're mostly talking about the presidency because that's the only office that goes through the formal nomination process. But technically congressman and senators are nominated by their parties to run as well. There are three ways that a candidate for president can be nominated. In the old days, presidential candidates were nominated at a convention or caucus, which are gatherings of party members governed by rules. Conventions still occur every four years but they're largely ceremonial these days because presidential candidates are actually nominated during the primaries. Let's go to the Thought Bubble. Primary elections are held to choose candidates who will then run in the later general election. Political parties decide when and how primaries will be held and who the candidates will be. These are the elections that pit democrat against democrat and republican against republican to see who will face off in November. Primaries can either be open or closed. Most states have closed primaries, which means that only registered voters of a particular party can vote in that election. So, in a state with closed primaries, like New York, only democrats can vote in the democratic primary. And since in many districts one party is overwhelmingly dominant, the primary winner is very likely to win the general election too. In states with open primaries, members of any party can vote in the primary, which sounds great because it encourages more participation but it also opens up opportunities for mischief *evil laughter*. For example, if there's a strong republican candidate up against a weak republican candidate in a state with open primaries, democrats can turn up and vote for the weak republican in the hopes that if he wins he will have less of a chance in the general election running against a democrat. Sneaky. In presidential elections, the winner of a primary election will be assigned a certain number of party delegates. 
Delegates are non-elected party members who actually nominate the candidates at the convention. The delegates are usually pledged to vote for the candidate who won the primary in their state, at least on the first ballot, and majority rules in nominating. This is why we see so much election coverage of primaries and why some states like New Hampshire try so hard to have their primaries early. Once a candidate has sewn up enough delegates, he or she becomes the nominee, and the convention serves largely as a formality. Although the primary system is more democratic than the convention, it still has problems. Even though there's more opportunity for participation, that doesn't mean people actually participate. In fact, only about 25% of those eligible to vote in primary elections actually do, and these tend to be the more ideologically extreme members of the parties. Because to them, winning elections matters most. So, if only partisan voters show up, we tend to get uber-partisan candidates. And because they have to win bruising primaries before they even get to the general election, these candidates tend to be aggressive and uncompromising. That's good when you're competing in an election but not so good when you're trying to work with other people to craft policies or, in very rare cases, legislation. This is why many people think that primaries add to political polarization in the US. Thanks, Thought Bubble - you got my vote. There's a third way that a person can become a candidate, but it's a long and dangerous path. Hey, Stan, zoom the camera in as I say that. It's a long and dangerous path *evil laughter*. A person can run as an independent and if they get enough signatures on a petition, they can become a candidate. You're more likely to see this in congressional races but even then it's not super common. It's also really not that long and dangerous as we implied in that last shot. The third thing that parties do is mobilize voters, also known as getting out the vote. This is pretty obvious because you can't elect a candidate if you don't get people out to vote for them - duh! Parties get out the vote through direct mail, email and advertisements, and they can also help with voter registration drives. The main thing the party does in terms of getting out the vote is coordinate volunteers to help encourage voting. If you want to help on a campaign or with an election effort, your local party office is a good place to start. Another good place to start is getting out of bed. Getting out the bed is a campaign we should have. Parties also help to facilitate electoral choice. Basically, a political party acts sort of like a brand. So, knowing which party a candidate represents acts as a kind of shorthand for voters in the same way that seeing, say, a Netflix logo lets you know that you're about to chill. I'm not going to go into what each party stands for right now but let's just say that knowing that a candidate is a republican or a democrat allows you to figure out pretty much what they stand for even if you don't know anything about the candidate. Political parties even help non-partisan voters by narrowing down political choices and making things easier. If you want to, you can choose a candidate by answering two relatively simple questions: which party better represents my interests and values, and which candidate belongs to that party? Finally, believe it or not, political parties have a role in the way the national government actually works. 
Party membership is really important in Congress. Parties determine who the Speaker is since he or she always comes from the majority party and is chosen by a vote of members of that party. Parties also determine the composition of the committees and party leaders assign members to those committees. And parties help determine who the chairs of the committees are and they, along with the Speaker and the majority leader in the Senate, largely shape Congress' agenda. The president and his party have a reciprocal relationship - that's the best kind of relationship and the most fun to say. Reciprocal. The president is the leader of his party and his personal character and popularity helps to shape the party's brand - for better or worse - and can be used to raise money. On the other side, the party throws its support behind the president's initiatives and helps to elect candidates that support him in Congress. So, at their most basic level, parties exist to elect political candidates and thus gain control of the government. In order for them to do this well, they need to provide voters with clear electoral preferences and encourage them to act on those preferences. In a way, this branding function - helping voters to choose between Candidate A and Candidate B - is what parties are all about. But you're free to disagree and if you do, go form your own party and do whatever you want. It's a party! Woo! Thanks for watching, see you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course is made with the help of all these hard-line partiers. Thanks for watching. |
US_Government_and_Politics | Political_Ideology_Crash_Course_Government_and_Politics_35.txt | Hi, I'm Craig, and this is Crash Course Government and Politics. And today, we're gonna get personal. Not personal in the sense that I'm gonna tell you I'm a bed wetter cause I'm not...gonna tell you that. We're gonna talk about people's personal political views and where they come from. This is what political scientists sometimes call Political Socialization. But before we get into the forces that help create our political outlooks, we should probably define what political ideologies look like in America. [Theme Music] In America, there are a number of ways people characterize themselves politically. Typically, they identify with a political party, although as we'll see when we talk more about parties, this has become less likely over time. And although there's a lot of overlap between political party and political ideology, there's not 100% correspondence between the two. But there is 100% overlap between my fist and this eagle's beak! Politics. So, now I should probably say what I mean by political ideology. Basically, I'm talking about whether you identify as liberal or conservative or libertarian or socialist or anarchist or nihilist or craigist - people who just love me. I'm one of those. You're probably familiar with the idea that liberals are on the left, and conservatives are on the right. And this can be a helpful shorthand, but what political views do these terms represent? Let's go to the Clone Zone! What? The Clone Zone - it's right here now? I'll just...I'll leave then. This way? I'll go this way. Taking a cue from anti-federalists, American conservatives believe that a large government poses a threat to individual liberty, and we prefer our national government to be as small as possible. (Scoffs) We have this in common with libertarians. There are some basic functions like national defense that government can best take care of. But especially since the New Deal, our government has taken on too much. What government we need is best handled by states and localities. For the most part, American conservatives believe in the free market and that it will provide the greatest economic opportunity and benefit to the greatest number of people. American conservatives usually support a strong defense. This is one place where we generally don't think spending should be cut. Most other programs, the things that fall under discretionary spending, can and should be left up to the private sector. And this will allow the government to reduce its spending. Lower spending, in turn, will mean lower taxes. Ahh, delicious lower taxes. This means that we don't like flag burning, and we favor prayer in schools because these reflect traditional religious and patriotic values. Just like the eagle. I love the eagle. You're my friend. (Kisses eagle) Many conservatives, as strong adherents to religious faiths, are against abortion. But I'd say there's a greater diversity in conservative views on social issues than on economic ones. The social sphere is where we differ significantly from our libertarian friends who don't see any role for government in people's personal lives. This means that libertarians often support things like marijuana legalization that more traditional conservatives do not support. If there's one value that American conservatives privilege above others, it's liberty. America is a country of freedom, and in most cases, government is more of a threat to liberty than a protector of it. 
And I'm out. (Kisses hand and touches eagle's head) Not bad, conservative clone. But bad, here's why. Sometimes liberals in the U.S. are called New Deal Liberals because the policies that we support grew out of the New Deal. And lately, a number of us are trying to re-brand ourselves as progressives. Although this is a little tricky given that, historically, progressives and liberals aren't the same thing. In general, American liberals believe that government can help solve problems, and a bigger government, like a glorious soaring eagle, can solve bigger problems and more of them. We support government intervention in the economy, both in the form of regulations and higher taxes, especially when that intervention benefits historically marginalized groups like minorities, women, and the poor. We like the government to step in on behalf of consumers and to protect the environment because in general, we don't trust that the free market will be fair to everyone. We know that protection of the environment, aiding the poor, and expanding civil liberties all cost money, so we usually favor a progressive tax system with higher taxes on the wealthy and corporations. Although not all American liberals are anti-business, as a rule we don't have a lot of faith that big businesses have the average American's best interest at heart, and so we prefer to see them regulated. Although we still see national defense as important, most American liberals feel that the country spends more than enough on the military and that the defense budget should be cut, leaving more money for necessary social programs. In the debate over guns or butter, we like butter. Although we're also fine with the government telling us not to eat so much of it. If conservatives value liberty, we liberals cherish equality as our primary political virtue, and we see government as a necessary agent in promoting equality. (Kisses hand and pats eagle's head) We're equals, me and that eagle. Thanks, clones. So, for the most part, this is what most American liberals and conservatives believe, and these are the basic foundations upon which they build their political opinions. But where do they come from? (Punches eagle and eagle tumbles to the floor) Political scientists sometimes refer to the process by which individuals establish their personal political ideologies as political socialization. And they have identified four main agents that contribute to our political identities. Let's go to the Thought Bubble! The first and most important source of our politics is family. This makes a lot of sense since kids either want to emulate their parents or reject their ideas. And parents are usually the first people that express political opinions to kids. As I suggested, family can influence your political outlook in negative and positive ways. If you respect your parents and admire them, it's likely you will adopt their political ideology. On the other hand, adopting an opposing political view can be a form of rebellion. Still, for the most part, liberal parents breed liberal children, and conservative parents create new generations of conservatives. The second major influence on political ideology is social groups, which, in this case, refer to one's race, gender, religion, or ethnicity. Obviously, these are generalizations, but certain groups tend to fall predictably into liberal or conservative camps. African Americans and Jewish people are among the most liberal Americans while white Catholics tend to be conservative.
Latinos are an interesting case because many identify as Catholic, but they tend to be more liberal politically. One of the reasons that many use to explain why African Americans and Latinos tend to be liberals is that these groups are disproportionately poor and receive a significant share of government benefits. To this way of thinking, economic self-interest is a prime determiner of where one stands politically, and it also explains why white conservatives, especially those who are wealthy, favor policies of lower taxes and less government intervention. One problem with this purely self-interested view of political ideology, though, is that there are a large number of low-income, low-wealth white voters who also do or would gain from more government benefits. But they tend to be conservative. In other words, be careful when you try to define a person's politics by looking at their bank account. Gender also tends to be statistically significant in terms of political ideology. The gender gap refers to the fact that women tend to be more liberal overall than men. This is especially true on the issue of national defense, where they tend to favor spending reductions rather than increases. Thanks, Thought Bubble. The third agent of political socialization in the U.S. is education, namely the primary and secondary school system. This is the most formal way that our political views are shaped since almost all American students take at least one year of American history, and many states require courses in civics. In these courses, students learn about political values like liberty and equality and may come to align themselves with a liberal or conservative view. And maybe you're watching me in one of these classes right now... Conservatives tend to think that American schools and textbooks skew towards a liberal outlook, but this might be because most public school teachers are members of unions, and teachers' union membership correlates highly with a liberal viewpoint. Whether or not most teachers and textbooks are liberal, it's a bit of a leap to assume that most students will automatically adopt the ideology of their teachers. Education does relate to political ideology in at least one measurable way, though, in that the higher the level of education one attains, the more likely one is to profess liberal views on issues such as women's rights or abortion. On the other hand, higher education levels also correlate with more conservative views on issues like government support of national health insurance or affirmative action programs to help African Americans. So, as with many things we look at, things aren't so clear cut, and it's important to have some kind of data to back up our generalizations. One final agent of political socialization is the political conditions one lives through. Example - if you grew up during the Great Depression and saw FDR's New Deal programs benefit you and your family, it's likely that you'd develop and maintain pro-government, liberal views. If you came of age during the Reagan era, when popular politicians were singing the praises of self-reliance and calling government the problem rather than the solution, it's likely that you'd develop conservative political views. It remains to be seen whether people who form their political identities during the Great Recession will be liberal or conservative. But don't worry - pollsters are busy trying to figure it out.
So there you have a very broad outline of what the words conservative and liberal mean in American politics and some of the factors that turn people into liberals or conservatives. More than probably anything I've said in this series, these are generalizations that you need to look at critically. This doesn't mean that if you find someone or if you are someone who doesn't fit either description 100%, that we're completely wrong, only that political ideologies are complex and change over time. But we need to understand the outlines of these generalities because they get used all the time in our discussions of American politics. In fact, it would be hard to talk about politics without them. Thanks for watching - see you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course U.S. Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of all these political ideologues. Thanks for watching! |
US_Government_and_Politics | Interest_Group_Formation_Crash_Course_Government_and_Politics_43.txt | Hello. I'm Craig, and this is Crash Course Government & Politics, and today I'm gonna take a deep dive into political science theory and try to figure out why interest groups form. So grab your snorkel, no, this is a deep dive, grab your scuba gear. I know, I know, you're probably thinking that interest groups form to influence the government with their piles and piles of cash, so the way they form is by amassing piles of cash, and then, well, who really cares once they have all that cash? Maybe that's true, but why we have some interests that get together in organized groups to influence policy and others that don't is an interest-ing question. [Theme Music] So as you remember, pluralist theory proposes that every group with a particular interest should be able to form an organization that will pursue policies to further that interest. So if I like birds, which clearly I don't, I should be able to join up with other bird fanciers to work with the government, so that it protects birds' wildlife habitats. I hate this analogy. And in fact there are a number of interest groups out there that can do this, including the Sierra Club and the World Wildlife Fund. I respect those clubs. I don't hate all eagles, just that eagle. But this doesn't mean that everyone with an interest can form a group. Like, I like video games, I'm not going to go form a group about it. Why doesn't every interest form a group? Well, political scientists would cite collective action problems. These are the problems that occur because people should work together, but don't. The classic example is building a road that many people could use but no one individual could build on their own because, you know, building a road is hard. Although there are obviously some situations where people can coordinate to get things done without any government intervention, most of the time if you want to accomplish big things like the Hoover Dam or building the highway systems, you need some sort of government action to make sure it gets done. And usually it gets done when the government collects taxes from everybody to build the road. Now some people may use the road a lot, more than their taxes paid for, and some may not use it at all, but the government collects taxes to pay for it all the same because otherwise it wouldn't get built. One special type of collective action problem is called free riding. This is when people who stand to get a big benefit from a project either don't pay a big enough share, or they don't pay at all because they know that the project is so important that it will get done whether they contribute or not. Then they will get the full benefit of the project - say a pothole-free road - without paying a dime. Get it? Free ride? (Singing) "Come on and take a free ride." This becomes a problem when other people, not wanting anyone to get a free ride, also refuse to contribute to the project and, you guessed it, it doesn't get done. The free rider problem is worse in larger groups, and this partly explains why some interests are better represented in American politics. Because large groups are more anonymous, it's easier to free ride; there are lots of other people who probably care more than a single individual and, knowing this, the single individual rationally chooses to free ride.
Also, the larger the group, the easier it is for an individual to claim that their efforts don't matter, which is a smaller version of the voter paradox. Finally, with a large number of members, it's much harder for a group to enforce its rules against slackers, who don't pull their weight, young man! Or woman - young person! What this means in practice is that smaller groups are more successful in forming and pushing their agenda. This is why producers are more successful at forming interest groups than consumers, and why business owners are usually more successful than workers. Now, many people will say that in the US labor is well represented through unions, but over the last forty years at least, union membership and the dues that come with it have been shrinking, and unions are less able to get legislation passed. So how do large groups solve the collective action problem and actually coordinate to get things done? According to political scientist Mancur Olson, they do this by providing selective benefits to their members. In other words, they build membership by providing perks. These can be material things like special services or discounts on things like insurance, or smaller things like baseball caps or bumper stickers. One of the largest organized interest groups in the US, the American Association of Retired Persons, or AARP, provides a number of material benefits to their members including discounts on a number of useful products and services. AARP also provides many of the second type of benefits, informational benefits. Interest groups can inform their members of policies that may affect them, and can provide guidance about what to do in order to influence these policies. The third type of benefit is called solidary, and it refers to the friendship and comradery that can come from being a member of a group. Ah, that's my favorite benefit. Solidary benefits also include networking opportunities, and networking probably has a lot to do with why people join professional groups. Gotta do the schmoozin'. The final type of benefit that can help build interest groups is called the purposive benefit. This is a feeling that by being a member of a group, you're helping to make a difference. Purposive benefits partially explain why so many people joined up with groups during the civil rights movement - it certainly wasn't for the SNCC keychains. Another way that interest groups can be formed is by political entrepreneurs. These are specific individuals who make extraordinary efforts to bring people together for the purpose of changing policy. Often political entrepreneurs are politicians who recognize the latent potential of groups that haven't yet organized. When successful, they benefit electorally. One of the most famous examples in American politics was Claude Pepper, who realized there were a lot of older Americans in Florida and that if he became their champion and organized them, they would vote for him. Even more well known than Pepper was salt (Chuckles). No, Robert Wagner, whose sponsorship of the National Labor Relations Act was so important that it's often still called the Wagner Act. This helped create political power for labor unions, and unions helped keep Wagner in Washington. I guess we also have to talk about the elephant, or if you're a Democrat, the donkey in the room: lobbying. Lobbying is an attempt to influence policy by persuading a government policy-making official.
As we said when we talked about iron triangles, this can be done by providing officials with information that they can use. But most people still think of it as providing campaign contributions in return for a favorable policy outcome. There's no real evidence of this quid pro quo bargaining, probably because it's really close to bribery and therefore doesn't happen that much. But it has happened in the past and this idea is pretty powerful, especially if you think all politicians are corrupt. But there is more to lobbying than this, especially in the eyes of political scientists, who like to divide lobbying into insider and outsider strategies. Insider strategies include directly trying to persuade elected officials, and also using the courts. Actual direct lobbying of congressmen has become more difficult over the years. Congress has passed laws that have limited the ability to deduct lobbying expenses from taxes, and restricted how much lobbyists can pay for officials' travel expenses. This has cut down a lot on the famous political junkets, which often looked like a vacation paid for by lobbyists. Congress has also passed laws limiting the gifts that lobbyists can give to officials, which now have a value of no more than fifty dollars. So no free Apple Watch for you, congressman, sorry. You could probably get Grand Theft Auto V at this point though. While it's mostly elite groups that are able to lobby congressmen, executive department heads or even the President directly, other groups have been more successful using the courts as an insider strategy. One way to do this is through direct lawsuits like the one that led to the Brown vs Board of Education decision. Another way is by finding plaintiffs and funding their lawsuits, the old "find and fund," they call it. No one calls it that. A third thing that interest groups can do is file amicus curiae, or "friend of the court" briefs to get their legal ideas into Supreme Court decisions. Groups on both sides of the political spectrum pursue all these avenues, but environmental and civil rights groups are known for going after the courts. In particular, the courts are often the focus of minority groups, since they're the ones least likely to be successful in electoral politics. The other types of lobbying strategies are called outsider strategies. Let's explain that in an animated way by going to the Thought Bubble. Outsider strategies are those that involve interest groups mobilizing the public. Sometimes these strategies are called grassroots lobbying. Mobilizing public opinion is sometimes called "going public", but this is very confusing because it's the same term used to describe the President's making direct appeals to public opinion. And also it happens when a company floats its stock on an exchange for the first time. When interest groups go public, the main things they do are organize advertising campaigns, organize protests and engage in grassroots efforts to get their membership to lobby officials. Advertising campaigns are expensive, so you don't see a lot of advertising on behalf of things like poverty relief. The best-known recent example of an interest group-sponsored advertising campaign was the Harry and Louise campaign, sponsored by healthcare groups and doctors, which was designed to stymie President Clinton's attempts at healthcare reform. Interest groups organize protests to get the attention of politicians and the media, and protests can be effective, as we saw during the civil rights movement.
Protests can also help to form interest groups, like the Occupy movement during the height of the financial crisis after 2008. Some protests, such as labor strikes, can impose costs on business owners, and push them to lobby office holders more directly. Grassroots lobbying occurs when an organized group encourages its membership to contact elected officials, often through letter writing, emails or telephone calls. This is becoming increasingly prominent because of the new rules that limit traditional lobbying, and because technology makes it so easy for groups to reach large numbers of people, and to get them to respond directly to elected officials. But beware, sometimes technology makes it easy for well financed groups to give the appearance of being a large-scale grassroots organization, when they really aren't. These bogus attempts at grassroots organizing have been called astroturf lobbying, and one of the few examples where political scientists have demonstrated their superiority at naming things. Thanks Thought Bubble. So interest groups are all about changing policies, and they pursue both insider and outsider strategies to influence policy makers. I hope that we've given you a balanced view of interest groups, one that's not so cynical about the way they function in our politics. Thanks for watching, I'll see you next time. Crash Course Government and Politics is made in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course is made with the help of all of these collective actions. Thanks for watching! |
US_Government_and_Politics | Freedom_of_Religion_Crash_Course_Government_and_Politics_24.txt | Hi, I'm Craig, and this is Crash Course Government and Politics, and I'm excited. I'm excited because today, we start delving into Supreme Court jurisprudence, with the totally controversial topic of freedom of religion. Now, other than being fun to say, jurisprudence means all the important cases on a particular topic, but unfortunately, I'm only going to be talking about a couple of them, because they demonstrate how the Supreme Court reasons its way through a tricky issue. Jurisprudence. Jurisprudence. [Theme Music] So the Constitution deals with religion right there in the First Amendment, which is also the one that deals with speech and the press and assembly and petitions. Here's what it says: "Congress shall make no law respecting an establishment of religion or prohibiting the free exercise thereof." It's the first clause in the First Amendment of the Bill of Rights, so it's pretty darn important. Notice it has two parts, and each one creates a separate religious liberty or freedom. The first part, "no law respecting an establishment of religion" is called the establishment clause; can you guess what the second religious liberty is? If you said free exercise, you're right. What do these two freedoms mean, though? Establishment of religion means that the US can't create an official state church, like England has with the Church of England. This means that the First Amendment ensures that the US does not have any state-endorsed religion nor does it write its laws based on any religious edicts, and it's also the clause in the Constitution that deals with religious monuments and school prayers and stuff like that. The free exercise clause in a way is more straightforward: it means you can't pay for exercise. Gym memberships are illegal. But freedom isn't free. You're gonna pay with pain! No pain, no gain. Actually, none of that is what we're talking about. What it means is you can't be prohibited from being part of a certain religion, although it doesn't mean that any religious practice is okay. For example, if your religion requires human sacrifice, because you're an Aztec, state, local, and federal law could prevent you from practicing that aspect of religion, for obvious reasons, although it couldn't prevent you from believing that human sacrifices were necessary to make the sun rise every day. We are gonna anger a lot of Aztecs with this video, Stan. There are a number of cases that establish this distinction between religious belief and religious practice, but my personal favorite is Church of Lukumi Babalu Aye vs. Hialeah, because I love saying Lukumi Babalu Aye. You probably figured out that what these two clauses mean in practice has been determined to some degree by Supreme Court decisions. There's a bunch of them, but probably the most important one is called Lemon v. Kurtzman, from 1971. Right off the bat, the Lemon decision is a little complicated because it combines two sets of facts, although they both involve public money and parochial schools. In one case in Rhode Island, the state was using taxpayer funds to pay teachers in parochial schools in an effort to educate Rhode Island children, which is generally a good goal. In the other case in Pennsylvania, the state was paying teachers in private schools to provide secular education services, but enough with the set-up, let's go to the Thought Bubble. The Supreme Court in Lemon vs.
Kurtzman devised a three-prong test to see if the state law violates the First Amendment religious freedom clauses. Under the first prong, the Court looks to see whether the law in question has a secular legislative purpose. In this case, the purpose of the law was educating children, which, you remember, is one of the powers reserved to the states, and for the most part, is a secular purpose. Under the second prong, the Court examines whether or not the law's principal or primary effect neither enhances nor inhibits religion. Here again, the Court found that paying private school teachers or using private school facilities did not necessarily promote religion or prevent students from worshipping as they wanted to. The third prong requires that the law under consideration does not create excessive entanglement between a church and the state. This is the one where both the Rhode Island and Pennsylvania laws got into trouble. In Rhode Island, the school buildings where the children were learning were full of religious imagery, and 2/3 of the teachers were nuns. The Court paid close attention to the fact that the people involved were kids, ruling, "This process of inculcating religious doctrine is, of course, enhanced by the impressionable age of the pupils in primary schools particularly. In short, parochial schools involve substantial religious activity and purpose." In Pennsylvania, the problem was different. The Court ruled that in order to make sure that the teachers were NOT teaching religion, the state would have to monitor them so closely that it would be excessive entanglement and give the state way too much control. They ruled that, "The very restrictions and surveillance necessary to ensure that teachers play a strictly non-ideological role give rise to entanglements between church and state." Thanks, Thought Bubble. So it's pretty complicated, and I'm not 100% sure that I find it convincing. First of all, the Justices engaged in some slippery slope reasoning about the Pennsylvania case. The Court argued that even if, in this situation, the secular purpose was a good one, there's a tendency for states to take more and more power for themselves. But my bigger concern is that all three prongs in this case were given equal weight, and I'm not sure that they always should be. I mean, you got the one round one and then the two like, you know, long ones, and you can pull that round one, it's just for grounding. What the ruling in this case meant was that the secular purpose, educating children, was not gonna happen, or at least would be made more difficult. Also, you could argue that it was kind of paternalistic, assuming that kids wouldn't be able to block out religious imagery, but since they are kids, maybe a little paternalism is okay. You spit that gum out, Junior. So Lemon vs. Kurtzman built on an earlier case, Engel vs. Vitale, which ruled that prayer in schools violated religious freedom. You would think that, taken together, this issue would be pretty much put to bed, yet every few years, a case comes along involving prayer in school, and now they apply the old three-prong Lemon test. For example, one state adopted a statute mandating a moment of silence at the beginning of each school day. One of the purposes of this statute is to provide students with an opportunity to pray in school. Another purpose is to create a calming atmosphere in the classroom to better promote learning.
The first purpose doesn't look so secular, and as for the second prong, doesn't necessarily advance or inhibit a particular religion. Students can choose not to pray at all. Is this excessive entanglement? That's always gonna be difficult to say, especially since 'excessive' is pretty subjective, but if you go on the standard of the Pennsylvania case in Lemon, almost any religious practice in school could be excessively entangling, because the state is going to have to step in and monitor it. Some school systems have tried to get around this by having the prayers led by students, because they aren't agents of the state. But then you have the issue of how much a student-led prayer is really led by a student, and how do you find out without more monitoring and more state entanglement? The Lemon test is an attempt by the Court to set up a framework for analyzing future situations where religion and the state might get mixed up. It's probably better than having what legal scholars like to call "a bright line rule" about religion in public spaces like schools and courthouses, but it does leave a lot of wiggle room and it seems that it encourages future cases because we keep seeing them. The funny thing is, religious freedom is one of the less controversial protections found in the First Amendment, if you don't believe me, wait until our next episode on free speech. Just wait. You just -- you just wait. Did you guys hear what he said? See ya next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of all these jurisprudences, am I using that word right? Thanks for watching. |
US_Government_and_Politics | Political_Campaigns_Crash_Course_Government_and_Politics_39.txt | Hi I'm Craig and this is Crash Course Government and Politics. And today we're going to try and untangle the mess that is the American political campaign. One of the things about the American political system that often confuses people who don't live in America is the way that our politicians run for office. There are two aspects in particular that stand out about American political campaigns: their length and their expense. We're going to look at both of these today and see that they're related, but before we do we are going to answer a burning question: why do we need political campaigns anyway? [Theme Music] If you ask one hundred people about the reason why we have political campaigns, you'll get, well, not a hundred, but at least more than one answer. And you might work for Family Feud. Probably the best answer to this question, though, is that we have political campaigns to provide voters with information they need to choose a candidate to represent them. So how do political campaigns provide information? And what is a political campaign anyway? Let's go to the Thought Bubble. A campaign is an organized drive on the part of a candidate to get elected to an office. It's also the way we refer to the organization itself. For example, in 2012 we had the Obama campaign and the Romney campaign. And each consisted of a campaign organization made up of thousands of staffers and volunteers and all of their activities. Most campaigns are temporary, geared towards an election, although both parties do have permanent professional campaign organizations. At the top level are the national committees, the DNC and the RNC. Can you guess what they stand for? These organizations coordinate all national campaigns, especially those for President. Each house of Congress has a Republican and Democratic campaign committee. The individual Senate and Congressional committees are headed up by sitting members of the Senate and the House, and because these committees give money to candidates, their leaders are very popular. I find that I'm popular when I make it rain at parties. Campaigns provide information in a number of ways. The main thing they do is communicate with the public, usually through the media, which we'll discuss in greater depth in future episodes. The main stage of political campaigns is the organized event where candidates can present information about themselves and their policies directly to voters through speeches. These are known as stump speeches, although only rarely these days do candidates actually speak on stumps; they have podiums and stages now. In addition to these events, candidates present the information by appearing on the TV, in debates, at town meetings, and in "impromptu" photo opportunities. They like to appear with military hardware, too, although sometimes this can backfire, as in the case of Michael Dukakis in 1988. Campaigns can spread their messages through direct mail, press releases, news coverage, and through advertisements, often on the TV, which is like the internet, only less interactive and has a lot of Real Housewives on it. Thanks, Thought Bubble. Nowadays, there are many more ways that candidates can reach out to voters. One way is through email. If you've ever given money to a candidate or a campaign, you can expect emails in ever-increasing numbers as election day approaches, and we all love that.
Candidates now take to Twitter to blast out information and individual candidates and their campaigns often have Facebook pages. There are even campaign ads made specifically for YouTube, although how their advertising algorithm works is beyond me. It's weird to get a campaign ad for the Michigan Senate if you don't live in Michigan. One other way that campaigns communicate information is through raising money. Of course, they need money to pay for all the campaign ribbons and buttons and PA systems and folding chairs and tour buses and stump speeches and axes to chop down trees so they have stumps to speak on. These things ain't cheap. Even more expensive are advertisements on the TV. A sitting president has an advantage here in that he can usually get on TV whenever he wants and he'll have a chance to clarify his positions in the State of the Union Address. But even he has to spend money on ads. And raising money is another way to present voters with information because campaign solicitations usually come with some policy piece attached to them. Almost every solicitation you get will be somewhat targeted to one of your interests and tell you, or try to tell you, where the candidate asking for your money stands on that issue. So you may have gotten a campaign solicitation and wondered, "Hey, why you need my money?" The unhelpful answer is that they need your money because campaigns are expensive. But then you might ask, "why are they so expensive?" Good question. Campaigns are expensive because they're huge, especially presidential campaigns; they need to reach 220 million people of voting age. Another reason they're expensive is because they're super long. Democrat and Republican candidates raise money, give speeches and create political action committees years before the election. It's ridiculous. I blame the eagle. Campaigns are also expensive because Americans expect them to be personal and this takes time and money. We like to see our candidates in person and have them show up in small towns in Iowa and New Hampshire, even though those states don't matter all that much in the grand electoral picture. Another reason campaigns are so expensive is that they rely increasingly on the TV and other visual media that cost a lot of money to produce. Gone are the days when William McKinley could sit on his porch in Ohio and have reporters come to him. Nowadays, even when candidates get free exposure by appearing on nightly comedy shows, like The Daily Show, it still costs the campaign in terms of time, travel and probably wardrobe and makeup so that they can look as good as I do. No makeup. Minimal wardrobe: no pants. Sorry, Stan. How expensive are campaigns anyway? Eh...very! In the 2008 presidential campaign both candidates together spent three billion dollars. In 2012 the candidates spent about a billion dollars each, and outside groups spent a further four billion. And congressional elections weren't much cheaper, except when you consider that there were a lot more of them. Combined, congressional races in 2008 cost about one billion dollars. All the money that gets spent on campaigns leads us inevitably to campaign finance rules, which were set up by Congress after 1970 and refined by the courts. We have campaign finance legislation because all that money pouring into campaigns sure looks like it raises the potential for corruption. 
Whether or not an individual's campaign contributions can sway a congressman's vote is highly debatable, but it certainly gives the appearance of impropriety when a congressman who receives millions of dollars from the oil industry then works hard to weaken regulations on oil companies so that they can make more profit. Campaign contributions are not bribes, but they sure look like them to lots of people. Recognizing that campaign contributions could potentially influence the political process, Congress passed the Federal Election Campaign Act of 1971. This was the first law that put limits on campaign spending and donations. It was further refined by the McCain-Feingold campaign law in 2002, and by court decisions that refined the rules for campaign spending and donations and provided a legal rationale for these limits. Until recently, the most important case on campaign finance was Buckley v. Valeo. This case established the idea that limits on campaign spending were problematic under the First Amendment because limiting the amount someone could spend on politics was basically limiting what that person could say about politics. Freedom of speech, y'all! According to the rules, individuals were allowed to donate up to $2,500 per candidate, and there was a total limit to the amount an individual could give. Donations to a party committee, which seem less like bribes because they don't go to a specific candidate, were limited to $28,500. Individual donors were also allowed to give up to $5,000 to a political action committee, or PAC. But it gets more complicated. Individuals and PACs are allowed to give unlimited funds to a 527 group, named after its designation in the tax code, that focuses on issue advocacy. The most famous 527 group in recent political memory is probably Swift Boat Veterans for Truth, which spent more than 22 million dollars to raise awareness around the issue of whether 2004 presidential candidate, and later Secretary of State, John Kerry was completely honest about his Vietnam War record. If this sounds like it was more of an organization against the candidate himself, well, you can see why the line between "issue advocacy" and support for a political campaign can be kind of blurry. Now here's something important: these limits are on contributions to candidates and campaigns, not on spending by candidates and campaigns. What this means is that a candidate and their campaign can spend however much they raise. So if a candidate running for office has one billion dollars, they can spend one billion trying to win. There's no concern about self-funded candidates bribing themselves, and you often see very rich people spending a lot of their own money trying to win office. So Buckley v. Valeo set up the basic distinction between campaign donations, which could be limited, and campaign spending, which couldn't. This distinction was undercut by the Supreme Court in the case of Citizens United v. the Federal Election Commission in 2010. This reaffirmed the idea that money is the equivalent of speech and struck down many of the limitations on campaign donations. The Citizens United decision cleared the way for Super PACs. These organizations are allowed to raise and spend unlimited amounts of money to promote a candidate or publicize a cause, but they may not directly contribute to a candidate or coordinate with a campaign. In the 2012 election, there were over 500 registered super PACs and 41 of them spent over half a million dollars.
The largest seven had spent over 256 million by the end of August, one of the reasons that the 2012 election was the most expensive ever, clocking in at around 6 billion. Now this sounds like a lot of money, right? It is. Gimme it. But a little context: the total spent on house and senate races was around 3.6 billion dollars, which was less than half of what Americans spend annually on potato chips. So when you look at it this way, the amount we spend on elections doesn't seem like so much, which may make us rethink the idea that money is corrupting American politics. Or maybe not. Maybe potato chips are corrupting American politics. Certainly corrupting my belly. American political campaigns are big and high stakes and raise questions about the influence of money in politics that are tough to answer. On the one hand, it does seem like there's the potential for very rich people to have a lot of influence on the elections. On the other hand, limiting a person's ability to register his or her preference of a candidate through spending on that candidate does seem like a limitation on their political speech. One of the arguments for limits on campaign contributions is that forcing candidates to raise money in small amounts from a large number of donors will make them reach out to larger numbers of constituents, and appealing to large numbers is the essence of Democracy. But it's also time consuming for a politician to reach out to all those potential donors and congressmen already spend a considerable amount of time raising money when they should be legislating. And watching Real Housewives. And eating Little Caesar's. There's a lot to do. But this is the system we have, and unless congress passes a law limiting campaign expenditures, or shortening the campaign season, we can expect campaigns to remain long and get more and more expensive. Thanks for watching, I'll see you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course was made with the help of all of these campaign financiers. Thanks for watching. |
US_Government_and_Politics | Equal_Protection_Crash_Course_Government_and_Politics_29.txt | Hi I'm Craig and this is Crash Course Government and Politics, and today we're going to finally get into why many people, including me, think that the Fourteenth Amendment is the most important part of the Constitution. At the same time, we will attempt – successfully, I hope – to unravel the difference between civil liberties and civil rights, and also try to figure out how the Supreme Court actually looks at civil rights and civil liberties cases. So that's a lot. Let's get this out of the way because we're not gonna have time later. Let's get started. [Theme Music] So we've been talking a lot in the past few episodes about civil liberties, the protections that citizens have against the government interfering in their lives. Civil rights are different in that they are primarily about the ways that citizens, often through laws, can treat other groups of citizens differently, which usually means unfairly. Civil rights protections grow out of the "equal protection" clause of the Fourteenth Amendment, which reads: "No State shall make or enforce any law which shall … deny to any person within its jurisdiction the equal protection of the laws." This may seem straightforward, and in some of the landmark cases that we'll get to like Brown v. Board of Education, it is, but when you think about it, unequal treatment of specific groups is usually done by private citizens or institutions – like your employer or your landlord, and most people, believe it or not, are NOT employed by the government, either federal or state, and they don't live in government housing. And initially the Supreme Court interpreted the clause to apply only to the state government, not to private discrimination. In the Civil Rights Cases, the Court ruled that the law, "could not have been intended to abolish distinctions based on color, or to enforce social, as distinguished from political equality," and they confirmed that as long as the state provided equal accommodations for people of different races, segregation was fine. This is the infamous "separate but equal" doctrine that was formulated in the case Plessy v. Ferguson. The distinction between social and political equality is an important one, and it provides a principle for looking at discrimination that the courts still use. Unfortunately, it's pretty complicated and it means we have to look at something that's kind of confusing, levels of scrutiny and protected classes. And we'll start with protected classes because they are easier to understand. Let's go to the Thought Bubble. So when state law or executive action mentions a protected class, the Supreme Court will almost automatically become suspicious. So what are protected classes? Broadly speaking, they are what we might think of as "minorities" and this is an important way to conceptualize them. The Court defined protected classes in one of the most important footnotes in their jurisprudence.
Here’s the relevant passage: “Nor need we enquire whether similar considerations enter into the review of statutes directed at particular religious … or national, …, or racial minorities,… whether prejudice against discrete and insular minorities may be a special condition, which tends seriously to curtail the operation of those political processes ordinarily to be relied upon to protect minorities, and which may call for a correspondingly more searching judicial inquiry.” So here it lays out the categories where the Court is going to pay special attention: when a statute deals with “discrete and insular minorities,” such as religious, or national or racial minorities. It’s automatically suspect and the courts are going to look at it closely. Why? Well this is in the footnote too. It’s because minorities, by definition, are at a huge disadvantage in the democratic political process – their numbers are too small to pass laws that might favor them, and it is easy for groups in the majority to pass laws that will disadvantage groups that are not in the majority. And this gets at the heart of the distinction between civil liberties, which deal with government actions, and civil rights, which deal with majority groups making life hard for minority groups. You may not like this distinction, but it does have the virtue of being based on a principle. Basically the courts will step in to protect groups that are unable to protect themselves in the legislative process because it will be too hard for them to pass laws in their favor. The way politics works in the U.S. will complicate this, as we’ll see, but as a principle it does make some sense. Thanks, Thought Bubble. That footnote above talks about situations that call for a “more searching judicial inquiry.” This is known as the level of scrutiny that the courts will apply, and it’s not strictly limited to equal protection cases, but this is where I’m going to try to make sense of it. So the highest level of scrutiny is called strict scrutiny. I'd call it super scrutiny or mega-monster scrutiny, but they didn't ask me today. And this means that the government will have a heavy burden to prove that the law or action in question is allowable. When government action concerns a protected class, strict scrutiny kicks in. There’s a five-step process that the courts go through in examining what the government has done. First they look to see if there’s a protected liberty at stake. Sometimes this is easy, as with religious freedom, but other times it’s hard, as with certain property rights or privacy issues. Second they look at whether the liberty is fundamental, which again can be complicated or not, depending on what the government is doing. Freedom from incarceration is a fundamental liberty, actually, it’s basically what we mean by liberty, so a law that specifically incarcerated one group based on nationality would get strict scrutiny. Unfortunately this did happen, during World War II when Japanese Americans were interned, but it’s a bad example of strict scrutiny since in that case the court, ruling in the case of Korematsu v. US let the government’s action stand. Third, they look at whether the law or executive action places an undue burden on the person or group in question. Let's say a state requires literacy tests for voting which can be burdensome or not, depending on the test and how it is administered. 
Fourth, assuming that the first three qualifications are met, the courts look to see if the law in question furthers a compelling government interest. In the literacy test example, the government interest might be seen as creating an educated pool of voters, although I’m not sure this would qualify as compelling. Fifth, if the court finds that the law meets all the other criteria, it looks to see if the government action in question follows the least restrictive means of achieving the government’s interest. In other words, is there a less burdensome way that the government could accomplish what it says the law accomplishes? If the answer is yes, then the law is struck down. So you can see, this five-part test is pretty, well, strict, and it’s hard for the government to pass it. In practice, this means that if the Court applies strict scrutiny, it means that the governmental action or law in question is probably going to be deemed unconstitutional. So that’s strict scrutiny -- not mega-monster scrutiny -- but what about those cases where the government isn’t dealing with a protected class, which is much of the time? Usually the Court applies what is called the “rational basis” standard for review. This is the lowest level of court scrutiny, and what it means is that if the government can show that it has a rational basis for its actions, the courts will say they are ok. As you might expect, this gives the government a lot of leeway with its laws. In between strict scrutiny and rational basis review is something called midzi scrutiny -- NOPE -- intermediate scrutiny. It’s a harder standard to meet than rational basis, but it doesn’t mean that the government usually loses, like with strict scrutiny. Why doesn't the government consult me about naming things? Ok, so now we have a sense of what civil rights are, and why the courts look at civil rights cases in the way that they do. It seems like a good time for an example to help make sense of all this. And there’s no better example than the famous decision in Brown v. Board of Education of Topeka Kansas. Although it was not the first case to take on the issue of discrimination in education, Brown v. Board is the most important, because it dealt with public schools. The issue was that Topeka had separate schools for black students and white students. Linda Brown was black and her parents wanted her to attend the white school because it was closer to where they lived and because it was better. The schools were supposed to be equal in quality under the “separate but equal” doctrine, but they weren’t. So after all I’ve told you about how the court decides cases where protected classes are involved – in this case black people who certainly qualify as a discrete and insular minority – the interesting thing about Brown v. Board of Education is that the Court pretty much ignored all of it. Their reasoning wasn’t legal or historical, it was sociological, based on the idea that separate facilities are inherently unequal because they make the minority group feel inferior to the majority group. Although the case didn’t immediately bring about the end of segregated schools – many states engaged in what they called “massive resistance” to prevent school integration, Brown v. Board of Education is still a landmark Civil Rights case. 
It showed that the federal government could intervene in something as local as public education when racial discrimination was involved, and, more important, it showed that states couldn’t use race as a criterion for setting up public schools. It was the legal basis of what we know as the American civil rights movement, and provided the foundation for the federal civil rights legislation of the 1960s. So I got a little into the history there, sorry about that. I know this is Crash Course Government and not Crash Course History. But with civil rights it's kind of hard not to. That’s because, unlike with civil liberties which are pretty much defined by the bill of rights, the question of civil rights comes out of the Fourteenth Amendment equal protection clause, which itself came about because of the Civil War and from the very beginning was a contested concept, and one whose meaning has changed over time. Because civil rights and equal protection almost by definition involve political activity and protection of minority rights, what constitutes civil rights changes over time. That’s why, in 2015 people talk about same sex marriage as a defining civil rights issue when 30 years earlier it was hardly mentioned. What’s really important is that we understand that civil rights, and their denial, have as much, if not more, to do with us and how we treat each other, as they have to do with how the government acts. Thanks for watching, I’ll see you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of these mega-monster scrutineers. Thanks for watching. |
US_Government_and_Politics | 양원제_의회_Crash_Course_정부와_정치_2.txt | Hi. I'm Craig, and this is Crash Course Government. Uh. It's been a dream of mine to be on Crash Course since I was a little kid. Speaking of acting like a little kid, today, we're gonna talk about the U.S. Congress, which, according to the Constitution, is the most important branch of government. That was probably written by Congress. It wasn't. They didn't. So when I say that Congress is supposed to be the most important branch of government, I'm talking about the national government, not the state government. There's a difference, okay? I know this, because the Constitution, which consists of seven articles and 27 amendments, mentions Congress first. In fact, right after the preamble, the very first section of the very first article, which is helpfully labeled Article I, Section I, says this: "All legislative powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and a House of Representatives." So, right away, the Constitution sets up a two-house legislature, with a Senate and a House of Representatives. The Latin word for this is bicameral, and I promise I'll quit with the Latin now. I didn't really say much Latin, but, just once, but I'll pr -- I won't say anymore. [Theme Music] That's pretty catchy. [whistles theme] So let's start with the House of Representatives, because it's a little easier. In order to serve in the House, you have to be 25 years old, a citizen for seven years, and a resident of the state that you hope to represent. I'd like to think that I represent a state of enjoyment. Vote for me 2015. Representation is determined by population. No state has fewer than one: Vermont, North and South Dakota, Wyoming, and Alaska each have one, and the most populous state, California, has 53. Right now, there are 435 members of the House of Representatives. The Senate has two senators from each state for a total of 100. To be a senator, you must be at least 30 years old, a citizen for nine years, and a resident of the state you hope to represent. Originally, senators were chosen by the state legislatures, which meant that they tended to be politically important members of a state's elite class. But this changed with the 17th Amendment, and now senators are elected by the people, just like representatives. I'm gonna explain how the two houses of the legislature actually legislate in a later episode--I'll have a bigger beard, probably--but now, I'm going to point out a few of the ways that they are different. Ultimately, the houses do the same thing, make laws, but the Constitution grants certain specific powers to each house. Let's look at those powers in the Thought Bubble. The House of Representatives is given the power to impeach the president and other federal officials. This can be confusing because people tend to think that impeaching means kicking the official out of office, but it doesn't. The House impeaches an official by deciding that that person has done something bad enough to bring him to trial. An impeachment is like a criminal indictment. Once the official is impeached, the trial happens in the Senate. If it's the President who's been impeached, the Chief Justice of the Supreme Court presides. Otherwise, it's the Vice President. You don't let the VP preside over a presidential impeachment, because he has a vested interest in seeing the president removed. Then the VP would become president. Duhhh. 
The second power that the House has is that they decide presidential elections if no candidate wins the majority of the electoral college. I'll explain this later, but for now, remember that this barely ever has happened ever. The third power that belongs specifically to the House is found in Article I, Section 7: "All bills for raising revenue shall originate in the House of Representatives." This is pretty important, because it means that any bill that raises taxes starts in the House, and if you know anything about America, you know that we care about taxes, a lot. So this power is huge and is sometimes called "The Power of the Purse". The Senate has some important powers, too. The first one I've already mentioned is that they hold impeachment trials. That doesn't happen very often at all. Another power the Senate has is to ratify treaties. This requires a 2/3rds vote of the Senate. Most treaties you don't hear much about, except when the Senate refuses to ratify them, as it did or didn't do with the Treaty of Versailles. I totally would have ratified that treaty, just sayin'. The last significant power that belongs only to the Senate is the confirmation power. The Senate votes to confirm the appointment of executive officers that require Senate confirmation. Some of these, like the cabinet secretaries, are obvious, but there are over 1,000 offices requiring Senate confirmation, including federal judges, and this is probably too many. Thanks, Thought Bubble. Uh, I love saying that, YES! So those are the major differences between the two houses of the legislature, but why do we have two, and why did the framers of the Constitution make them different anyway? There are two categories of reason here: historical and practical. The historical reason for the two houses is that when the Constitution was being written, the framers couldn't agree on what type of legislature to have, because they came from states with different interests. Delegates from states with large populations wanted legislatures to be chosen based on the state's population, so that their states would have, wait for it, more legislators and more power. This is called proportional representation, states with small populations understandably didn't want proportional representation. They favored equal representation in the legislature, which would give them equal power. Large states supported what was called the Virginia Plan, and small states wanted the New Jersey Plan, and they argued over it until a compromise was reached. Since it was brokered by Connecticut's Roger Sherman, it was called the Connecticut Compromise, or, more usually, The Great Compromise, because historians are really bad at naming things. Hey, this war is nine years long. Let's call it the Seven Years War. That's actually genius. If you guessed that the compromise was an upper house with equal representation and a lower house with proportional representation, congratulations, you understand the Great Compromise! You don't win anything if you guessed it right. Actually, if you guessed it right, click here and watch me punch an eagle. So that's the historical reason for the two houses, but what about the practical reasons? One of the main reasons to divide the legislature and to give the two houses power is to make it so that the legislature doesn't have too much power. How do we know that the Framers wanted this? Because one of them, James Madison, told us that in one of the Federalist Papers. 
In Federalist 51, Madison wrote "In republican government, the legislative authority necessarily predominates. The remedy for this inconveniency is to divide the legislature into different branches; and to render them, by different modes of election and different principles of action, as little connected with each other as the nature of their common functions and their common dependence on the society will admit." James Madison may not have sounded like Foghorn Leghorn. But that's one of the theories. My theory. I say, I say. Anyways, the idea that one house of the legislature can limit the power of another house is called an intrabranch check. We'll look at this in more detail when we talk about checks and balances. In general, the Framers of the Constitution were kind of obsessed with the idea that the government might have too much power. So we'll be seeing lots of examples of how they try to deal with this. So let's finish up by looking at the reasons why the specific powers were given to each house. To do this, let me introduce my assistants. By assistants, I really mean clones. Let's go to The Clone Zone! So I made these clones to help us understand these multi-sided issues. This is Senate clone and this is House clone, and they're quite good-looking, I might add. Senate clone: So you may have noticed, according to the Constitution, Senators are expected to be older than Representatives, and although 30 isn't all that old today, it was in 1787 when the Constitution was written. This was because older people are wiser, or at least more experienced, and the Framers wanted the Senate, which is sometimes called the Upper House, to be more serious, more dignified, and above all more deliberative than the House. It was supposed to be more immune from the desires of the public, which the Framers were kind of afraid of because of their unfortunate propensity to riot. One of the ways that the Framers hoped to ensure this was by giving Senators a 6-year term, which really would mean that they could ignore the rantings and ravings of their constituents for at least, like, 5 years at a time. Because the Senate is supposed to be the more deliberative body and the one that is more insulated from public opinion, they are the ones given the power to confirm public ministers and to ratify treaties. I guess they thought that being older and wiser, Senators would be better judges of character and better able to govern based on their sense of what is in the public interest. Sometimes the idea that a representative should govern based on what he thinks is best for the people rather than what they say they want is referred to as a representative acting as a trustee. House clone: Haha, which is another way of saying that the Senate is full of elitist snobs who don't care what their constituents want at all. In the House of Representatives, we're supposed to take into consideration the desires of the people in our districts, who voted for us, acting in the role of delegates. So the main way that the Framers tried to ensure that Representatives could be more responsive to their voters, other than having them directly elected by the voters instead of by state legislatures, was to give them 2-year terms. This means that they have to be responsive to the changing opinions of voters in their districts, otherwise they could easily be voted out of office. You don't want that, no way. Oh boy. 
Why they would be given the power of impeachment is beyond me, but it totally makes sense to give the power of the purse to the branch of government that is closest to the people. After all, one thing that the government does that is directly related to almost everybody is taxes. So you want the most democratic body making the decisions that have the most direct effect on people. Craig: Huh, thanks clones. So there you have it, that's the basics of our bicameral Congress, including the differences between the two Houses and why they are that way. Oooh, I used Latin again. I'm sorry. Mea culpa. We'll be going into much greater detail about how the two houses work together, or don't, in future episodes. But that's enough for now, thanks for watching Crash Course. I'll see you next week. Crash Course Government and Politics was produced in association with PBS Digital Studios. Support for Crash Course US Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at Voqal.org. Crash Course was made with the help of these nice people. Thanks nice people. And thanks for watching. You're nice people, I assume. |
US_Government_and_Politics | Election_Basics_Crash_Course_Government_and_Politics_36.txt | Hi, I'm Craig, and this is Crash Course Government and Politics, and today I'm gonna talk about an aspect of American elections that is probably most familiar to you, at least if you're an American and you sometimes watch TV, or look at the internet, or read a newspaper, or breathe air. I'm talking about elections, which get a lot of attention here in the US, and on Crash Course, possibly because they present a relatively straightforward narrative, and it's easy for the media to cover. But we're not going to focus on media coverage today. No, instead, we're going to look at why we have elections in the first place, and the institutions and procedures that structure the way elections work in America. We might even compare them to elections in other places, but I can't make any promises. [Theme Music] Before we get into the nitty-gritty of how elections work in the U.S., it might be a good time to ask a question that rarely gets asked, "Why do we have elections in the first place?" A simple answer is, "Complexity." America's too big and complex to hold public referendums on individual issues, although some states, like California, try to do it. So, instead, we choose representatives. In other words, we vote for people, not policies. Elections are as good a system for holding these representatives accountable as any. Well, at least they're better than violence or public shaming. Political scientists and economists have a more complicated way of describing this in terms of "adverse selection." Because why would we want a simple answer when we have political scientists and economists around? Well, they gotta do somethin'. Adverse selection is a problem that can arise when we make a choice but do not necessarily have all the information we need to make that choice. Kind of like when you buy a used car. Elections help to solve this problem because they are ideally competitive. The competition creates incentives for candidates to provide information about themselves and to make most of that information accurate, since their opponent will call them out for any statements that are less than truthful. At least, that's what we hope will happen. Elections also supposedly make candidates more accountable, since they provide voters a chance to get rid of bad actors. Of course, this only works when elections are competitive, and, as we'll see in a later episode, many elections in the U.S. really aren't. You might think that since elections are so important to our politics, they would be featured prominently in the Constitution, but yeah, no. The Constitution does set up a few basic guidelines that structure American elections, but most of the important rules that define the way elections are carried out come out of state laws, legal decisions, and local administrative practices. So what does the Constitution say about elections? Not a lot, as it turns out, except when it comes to choosing the president. President just gets everything... President's so important. The Constitution does lay out the qualifications for running for federal office--which we already went over in our episodes on Congress and the President -- and it describes the number of Representatives and Senators. But mostly the Constitution leaves elections up to the states. 
Article 1, Section 4 says, "The times, places and manner of holding elections for Senators and Representatives, shall be prescribed in each state by the legislature thereof; but the Congress may at any time by law make or alter such regulations, except as to the places of choosing Senators. " And the Constitution was later changed to allow for direct election of Senators with the Seventeenth Amendment, so that last clause doesn't matter so much anymore. The Constitution does say more about the way the President is chosen indirectly through the Electoral College, but the framers messed that up so badly that they had to amend the Constitution after the election of 1800. The Twelfth Amendment, which basically means that the President and Vice-President come from the same political party -- although it doesn't actually say that -- fixed the electoral process. So now it's flawless. But it's still indirect and the qualifications for the electors who choose the president are still left up to the states. Some Constitutional amendments also help to structure American elections. The Twenty-Fourth Amendment outlawed poll taxes, which made it easier for poor people to vote, and the Twenty-Sixth Amendment lowered the voting age from twenty-one to eighteen. In general, when Congress addresses voting issues, it's to try to expand the pool of voters. Although the Constitution doesn't specify when elections happen, it does give Congress the power to do so, and it requires that the day on which the electors choose the president has to be one single day. This is in Article Two: "The Congress may determine the time of choosing the electors, and the day on which they shall give their votes; which day shall be the same throughout the United States." Congressional laws also help structure elections by making them more fair. The Voting Rights Act of 1965 set up a number of systems to increase voter participation by minority groups, especially African Americans. And Congress also set up the Federal Election Commission, which has some say over elections. I'll tell you who should never be allowed to vote: eagles! You're gerrymandered out of here. But generally, following the Constitution, most aspects of elections are under the control of the states. State laws define how candidates are nominated and get on the ballot, and they can influence the operation of political parties. State laws also determine registration requirements for voting and set up the location and hours of polling places, which vary a lot from state to state. Probably most important for federal elections, state decide the boundaries of Congressional election districts, although not the number of representatives each state has, which is determined by the state's population. We'll talk more about election districts in a future episode. That's what gerrymandering has to do with. Remember when I gerrymandered the eagle? Yeah, that's a preview for what's coming. Although this is not always true in every case, as a general rule of thumb, the federal government is more likely to pass laws that expand voting, and states are the government that restrict voting, especially through registration requirements and taking the vote away from people convicted of felonies. One important aspect of American elections that has been set up by state laws is the way that winners have been decided. We like to say that in America, majority rules. But for the most part, this isn't really true, at least as far as elections are concerned. 
In most states, and in most elections, we follow the Plurality Rule, and this has important consequences for American politics. Let's go the Thought Bubble. Under the Plurality Rule, the candidate with the most votes wins. The number does not have to be a majority, and the more candidates in the race, the less likely anyone will get the majority. Suppose your election has four candidates: A, B, C, and D. Candidate A gets 20% of the vote, Candidate B gets 30%, candidate C gets 25%, and Candidate D also gets 25%. That should add up to 100%. It does? Thank goodness! Okay, so no one has a majority here, so who wins? Candidate B, of course, because she has the most votes, 30%. Now you'll notice something about this election that may be a bit of a paradox: the significant majority of voters in this election, 70% in fact, have chosen Not B. Yet, B is that one that wins. This is why we need to be very careful when we say that majority rules, because in many cases, it doesn't. But in some cases, it does. Some states do have a majority rule in their elections. In these states, if no candidate gets more than 50% plus 1 of the vote, then the top two vote-getters go on to what's called a run-off election. In this second election, you almost always get a majority. In many cases, we also say that American elections are "Winner take all." This is the case in forty-eight out of fifty states when it comes to electoral votes. What this means is that the winner of the election gets 100% of the state's electoral votes, even though it's likely they wouldn't have carried 100% of the votes. It is possible for a state to decide to award its electoral delegates proportionately, based on the percentage of votes that a candidate receives, or even by electoral district, although the latter rule causes some problems, as we'll see in another episode. Thanks, Thought Bubble. So, the Plurality Rule can result in the majority of people being represented by someone they voted against. This seems like a bad system, so why do we have it? The main reason is efficiency. Under plurality rule, you get a definite winner that you might not have under a majority rule. It also allows for a greater variety of candidates to win, at least potentially. And it has one key result for America's political system: it pretty much ensures that we will only have two viable political parties. The concept that plurality rules create two-party systems is explained by something called Duverger's Law. Here's how it works. Imagine political parties on a continuum from extreme right to extreme left. Most voters will not fall into either extreme, so the masses of party followers will coalesce around the center-right and center-left. In these conditions, there's no incentive to form a third party because it's likely to take votes away from the centrist party, and thus throw the election to the other party. Let's say that you're on the right of the political spectrum. You like the ideas of the center right party, but you think they're a little bit weak, and you'd like to see someone speak up more for your right-most ideas. You could vote for the candidate whose ideology and policies are more to your liking, but they're not likely to win. Remember, most people prefer center-right ideas over extreme-right ideas. That's why they're extreme. So, the candidate you would most like to support isn't going to win, but what's worse for you is that by voting for them, you take away votes from the candidate you partially agree with. 
Since people know that third parties almost never win, we're left with only two parties in the U.S. Now, Duverger's Law is important for political scientists, and it explains broadly why we have two parties, but a look at American politics in the second decade of the twenty-first century suggests that parties are more extreme than the model would lead us to believe. The polarization of parties is the subject of another episode on the composition of parties and how they reflect political ideologies. But for now, it's still useful to understand how elections themselves work to shape the party system we have in the U.S. This is what we sometimes call a structural or institutional view of politics, and it's the kind of thing political scientists really, really like. We'll look closely at the actual political parties and who votes for which one in other episodes. But I hope we've provided a little bit of insight into how elections work in the U.S. Thanks for watching. See you next time. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course U.S. Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. Crash Course was made with the help of this plurality of people. Thanks for watching! |
US_Government_and_Politics | Judicial_Review_Crash_Course_Government_and_Politics_21.txt | Hi. I'm Craig, and this is Crash Course Government and Politics, and today we're going to talk about the most important case the Supreme Court ever decided ever. No, Stan, not Youngstown Sheet and Tube Company vs. Sawyer. Although, that is one of my favorites. Loves me some sheet and tube. And no, it's not Ex parte Quirin. Although I do love me some inept Nazi spies and submarines. And no, it is not Miller v. California. Get your mind out of the gutter Stan. We could play this game all day, but this episode is about judicial review: the most important power of the Supreme Court and where it came from. Don't look so disappointed. This is cool! [Theme Music] When you think of the Supreme Court, the first thing you think about, other than those comfy robes, is the power to declare laws unconstitutional. The term for this awesome power, the main check that the court has on both the legislative and executive branches, is judicial review. Technically, judicial review is the power of the judiciary to examine and invalidate actions undertaken by the legislative and executive branches of both the federal and state governments. It's not the power to review lower court decisions. That's appellate jurisdiction. Most people think of judicial review as declaring laws unconstitutional, and that definition is okay. The legal purist will quibble with you since judicial review applies to more than just laws. Appellate courts, both state and federal, engage in some form of judicial review, but we're concerned here with the federal courts especially the U.S. Supreme Court. The Court has the power to review the following: One, Congressional laws a.k.a. statutes! Statutes. Since judicial review is a form of appellate activity, it involves upholding or affirming the validity of laws, or denying it, invalidating the law in question. You might think that the Supreme Court does this a lot, but it doesn't and historically it almost never happened before the twentieth century. If the court were always striking down congressional statutes, it would be hard for people to know which laws to follow, and you'll remember that one of the main things that courts do is create expectations and predictability. For instance, you could predict that I would eventually be punching this eagle! Another reason why they don't invalidate laws often is that if the Court frequently overruled Congress, the Court would seem too political and people would stop trusting its judgment. If the Court has any power at all, it largely stems from its prestige and reputation for being impartial and above politics. No one has any problems with the Supreme Court decisions, at all. Two, the Court can also overturn state actions which include the laws passed by state legislatures and the activities of state executive bureaus, usually the police. The power to review and overturn states comes from the Supremacy Clause in the Constitution. Most of the time that the Supreme Court extends civil rights, it comes out of a state action. A good example is Brown vs. Board of Education where the Court struck down the idea of separate accommodations being equal in the context of state public schools. Three, the Court can review the actions of federal bureaucratic agencies. Although, we usually defer to the bureaucrat's expertise if the action is consistent with the intent of the legislature which the Court usually finds it is. 
The Court almost never strikes down Congressional delegation of power to the executive. Although, you might think that it should. The fourth area where the Court exercises judicial review is over Presidential actions. The Court tends to defer to the President, especially in the area of national security. The classic example of the Court overturning executive action happened in U.S. vs. Nixon where the Justices denied the President's claim of executive privilege and forced him to turn over his recordings relating to the Watergate scandal. More recently, the Court placed limits on the President's authority to deny habeas corpus to suspected terrorists in Rasul vs. Bush. So, the Supremacy Clause gives the Court the authority to rule on state laws, but where exactly in the Constitution does the power of judicial review come from? Trick question! It's not there, go look ahead, look. I'll wait. See, not there. Wow, you went through that whole thing really quickly. Fast reader. The crazy thing is that the power of judicial review comes from the Court itself. How? Let's go to the Thought Bubble. The Supreme Court granted itself the power of judicial review in the case of Marbury vs. Madison. You really should read the decision because it's a brilliant piece of politics. The upshot of the case was that Chief Justice John Marshall ruled that the Court had the power to review, uphold, and strike down executive actions pursuant to the Judiciary Act of 1789, and in doing this, to strike down part of that federal law. How he got there was pretty cool. So, Marbury was an official that President John Adams, at the very end of his term, appointed to the position of Justice of the Peace. When Marbury went to get his official commission certifying that he could start his job, James Madison, who was Secretary of State, refused to give it to him. So, Marbury did what any self-respecting petitioner would do, he went to the Supreme Court for a writ of mandamus that would force Madison to give Marbury his job. This is what he was supposed to do according to the Judiciary Act of 1789. What Marshall did was brilliant! He ruled that yes, Marbury had a right to the commission but that the Supreme Court could not grant his writ because the law directing them to do so was unconstitutional. This is brilliant for two reasons. First, by the time the time the case came before the Court, Thomas Jefferson was President. Those of you who remember Crash Course U.S. History will recall that that less handsome man told you that Jefferson was a Democratic Republican while Adams, Marbury, and even Marshall were all Federalists. By ruling against his own party, Marshall made a decision that was favorable to Jefferson and thus, likely to be supported. The second move was even cooler. Marshall's ruling took the power of writs of mandamus away from the Court, making it look weaker, while at the same time giving the Court the power to declare the law that had granted it the mandamus power in the first place unconstitutional. So by weakening the Court in this instance, like Daredevil going blind as a kid, Marshall made it much stronger for the future, like Daredevil getting stronger in the future. Thanks, Thought Bubble! So that's where judicial review comes from, but that still leaves many questions. A big question is, why has this ruling stuck around and hasn't been overturned by other laws or later court decisions? Another question is, is judicial review a violation of separation of powers? 
Some say that it's judges making laws and thus an anti-democratic usurpation of the legislature's power. Let's talk about this rulings longevity first. Remember when I said last time that the Supreme Court rulings are binding in lower courts? You don't remember do ya? You were sleepin'. Wake up! Well, in general, Supreme Court precedents are binding on future Supreme Courts too because of the principle of stare decisis, which is Latin for "let the decision stand." This doesn't mean that future Supreme Court's can never overturn the decisions of prior Courts, it's just that they try very hard to not do it. This idea of precedent is one way that judges can be said to make laws. Appellate decisions are like common law in that they are binding on future courts and constrain their decisions and because they don't have to be grounded in a specific statute. Other courts have to follow the higher court's interpretation of the law, and this interpretation has the effect of redefining the law without actually rewriting the statute. On the other hand, appellate decisions are technically not common law in that they are only binding on courts, not executive agencies or legislatures. They are, however, signals to courts and legislatures about how courts will rule in the future. Maybe an example will help. If you watch cop shows, or you get arrested a lot, you probably know something about Miranda vs. Arizona which gave us the Miranda Warning. You have the right to remain silent and all that stuff. Hopefully, you've never heard that in person, though. But hey, we're not here to judge. That's what the courts are for! Bahahahaha. Okay. In that case, the Supreme Court threw out Miranda's conviction because he hadn't been told he had the right to remain silent. Without knowing that he didn't have to talk, he made a confession that got him convicted. The court didn't rewrite Arizona's law but it sent a signal to Arizona's law enforcement agencies, and those in all the other states, that in the future courts would throw out the convictions of defendants who hadn't been informed of their rights. As a result, police procedures changed in every state, and now the police are supposed to read the Miranda Rights to anyone they arrest. So those are the very basics of judicial review. We've probably raised as many questions as we've answered, but that's why we're making a bunch of these videos! So we can teach it all! All of it! Anyway, the big concern for many is that cases like Marbury vs. Madison, which give courts the power to strike down pieces of legislation, overturn the judgment of the elected representatives that made the laws and violate the idea of separation of powers. Well, that is a thorny issue, but it's one that we don't have time to de-thorn today. For now, understand that judicial review is how the courts work in practice and not necessarily a defined power granted by the Constitution. Just remember, the executive and legislative branches also operate with a lot of implied powers that aren't explicitly granted to them in the Constitution. That's because the governance of the United States has evolved and changed over time to hopefully, suit the needs of the country as they change over time. Thanks for watching. Crash Course Government and Politics is produced in association with PBS Digital Studios. Support for Crash Course U.S. Government comes from Voqal. Voqal supports non-profits that use technology and media to advance social equity. Learn more about their mission and initiatives at voqal.org. 
Crash Course is made with the help of these nice people who have the right to remain silent. Thanks for watching. You have the right to stop watching. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_11_Fast_Reinforcement_Learning.txt | Um, today what we're gonna be doing is we are gonna be starting to talk about fast reinforcement learning. Um, so in terms of where we are in the class, we've just finished up policy search. You guys are working on policy gradient, uh, right now for your homework. And then, that'll be the last homework and then the rest of the time will be on projects. [NOISE] Excuse me. Um, and then, uh, right now we're gonna start to talk about fast reinforcement learning, which is something that we haven't talked about so much. So, so the things we've discussed a lot so far in the term are things like optimization, generalization, and delayed consequences. So, how we do planning and Markov decision processes? How do we scale up to really large state spaces using like deep neural networks? Um, and how do we do this optimization? And I think that that works really well for, um, uh, a lot of cases where we have good simulators or where data is pretty cheap. Um, but a lot of the work that I do in my own lab thinks about how do we teach computers to help us, uh, which naturally involves reinforcement learning because we're teaching computers about how to make decisions that would help us. But I think there are a lot of other applications where we'd really like computers to help us. So, things like, uh, education or healthcare or consumer marketing. And in each of these cases we can think of them as being reinforcement learning problems because we'd have some sort of agent like our computer, uh, that is making decisions as it interacts with a person and it's trying to optimize some reward, like, it's trying to help someone learn something or it's trying to treat a patient or it's trying to, uh, increase revenue for a company by having consumers click on ads. And in all of those cases, the place where data comes from is people. Um, and so, there's at least two big challenges with that. The first is that, uh, you know, people are finite. There's not [NOISE] an infinite number of people, um, and also, that it's expensive and costly to, um, try to gather data when you interact with people. And so, it raises the concern about sample efficiency. So, in general, of course, we would love to have reinforcement learning algorithms that are both computationally efficient and sample efficient. But, uh, most of the techniques we've been looking at so far particularly these sort of Q learning type techniques were really sort of inspired by this need for computational efficiency. Um, so if we think back to when we were just doing planning at the beginning when we talked about doing dynamic programming versus doing Q learning, um, in dynamic programming, we had to do a sum over all next states and in TD learning we sampled that. So, in TD learning we sort of had this constant cost per update versus for dynamic programming where we had this S squared times A and cost. Um, so it was much more expensive to do things like dynamic programming than it was to do TD learning, um, on a, on a per step basis. And so, a lot of the techniques that have been developed in reinforcement learning have really been thinking about this computational efficiency issue. Um, and there are a lot of times where computational efficiency is important. Like, if you wanted to plan from scratch and you were sort of driving a car at 60 miles per hour, then if it takes you-. 
Uh, so if you're driving a car at 60 miles per hour and it takes your computer one second to make a decision about like, you know, how to turn the wheel or something like that, um, then during that one second you've traveled, you know, many feet. So, in a lot of cases, you really do have real-time constraints, uh, on the computation you can do. Uh, and in many situations like for, you know, in the cases of, uh, some robotics and particularly when we have simulators [NOISE] we really want computational efficiency because we need to be able to do these things very quickly. Um, we can sort of use our simulators but we need our simulators to be fast so our agent can learn. Um, in contrast to those sort of examples are things where sample efficiency is super expens- important and maybe computation is less important. So, whenever experience is costly or hard to gather. And so, this is particularly things that involve people. Um, uh, so we think about students or patients or customers, like the way that our agent will learn about the world is making decisions, um, and that data affects real people. So, it might be very reasonable for us to take, you know, several days of computation if we could figure out a better way to treat cancer, um, because we don't wanna randomly experiment on people and we wanna use the data as well as we can to be really really sample efficient, um, versus in the case of like Atari it's, uh, we wanna be really computationally efficient because we can do many, many simulations. It's fine. No one's getting hurt. But, um, but we need to eventually learn to make, you know, to derive a good game. So, one natural question might be, okay, so maybe now we care about sample efficiency. Um, and before we cared perhaps more about computational efficiency but maybe the algorithms we've already discussed are already sample efficient. So, does anybody remember like on the order of magnitude or like, you know, somewhere in the rough ballpark how many steps it took DQN to learn a good policy for Pong? Maybe, there's multiple answers maybe someone could say. Yeah. [NOISE] I think it varies somewhere partly between 2 to 10, is my guess. 2 to 10 million. So, um, that's a lot, that's a lot of data [LAUGHTER] to learn to play Pong. So, I would argue that the techniques we've seen so far, um, are not gonna address this issue and it's not gonna be reasonable for us to need, you know, somewhere between 2 to 10 million customers before we figure out a good way to target ads or 2 to 10 million patients before we figure out the right decisions to do. So, the techniques we've seen so far are not gonna be, uh, they're formally not sample efficient and they're also empirically not sample efficient. Um, so we're gonna need new types of techniques than what we've seen so far. So, of course, when we start to think about this, we think about the general issue of. you know, what does it mean for an algorithm to be good? We've talked about computational efficiency and I've mentioned this thing called sample efficiency. But in general, I think one thing we care a lot about is, you know, how good is our reinforcement learning algorithm and we're going to start to try to quantify that in terms of sample efficiency. Um, but, of course, you could have an algorithm that's really sample efficient in the sense that maybe it only uses the first 10 data points and then it never updates its policy. So, it doesn't need very much data to find a policy but the policy is bad. 
So, when we talk about sample efficiency, we're gonna want both: we don't wanna need very much data, and we wanna make good decisions with the data we do have. So, we still wanna get good performance, we just don't wanna need very much experience to get there. Um, so when we talk about what it means for an algorithm to be good, you know, one possibility is we can talk about whether or not it converges at all. Um, that just means whether or not the value function or the policy is stable at some point, like asymptotically, as the number of time steps goes to infinity. And we talked about how sometimes with, uh, value function approximation we don't even have that, like, ah, things can oscillate. Um, then another thing that's stronger than that, that you might want us to say, is: well, asymptotically as t goes to infinity, um, will we converge to the optimal policy? And we talked about some algorithms that would do that under some different assumptions. Um, but what we haven't talked about very much is sort of, you know, well, how quickly do we get there? Asymptotically is a very long time. Um, and so we might wanna be able to say, if we have two algorithms and one of them gets the optimal policy here, like if this is performance and this is time, and another algorithm goes like this, intuitively, algorithm two is better than algorithm one even though they both get to the optimal policy, eventually. So, we'd like to be able to sort of account for either; we can think about things like how many mistakes our algorithm makes, or its relative performance over time compared to the optimal. And so, we'll start to talk today about some other measures for how good an algorithm is. So, in this lecture and the next couple of lectures, we're gonna, um, do several different things trying to talk about sort of how good are these reinforcement learning algorithms, and think about algorithms that could be much better in terms of their guarantees for performance. Um, we're gonna start off and talk about tabular settings. But today we're only gonna talk about simple bandits. But generally, for the next, um, today and next lecture, we'll talk about tabular settings and then, um, hopefully also get to something about function approximation plus sample efficiency. But we'll start to talk about sort of settings, frameworks, and approaches. So, the settings that we're going to be covering like today and next time, it's gonna be bandits, which, uh, a number of you- who, who, who here is doing the default project? Okay. So, a number of you are, are already starting to think about this in terms of the project. So, we'll introduce bandits today, um, and then we'll also talk about this for MDPs. And then, we'll also introduce frameworks, and these are evaluation criteria for formally assessing the quality of a reinforcement learning algorithm. So, they're sort of a tool you could use to evaluate many different algorithms, and algorithms will either satisfy this framework or not, or have different properties, um, under these different frameworks. And then, we'll also start to talk about approaches, which are classes of algorithms for achieving these different evaluation criteria for these different frameworks in different settings, either for the MDP setting or, or for the bandit setting. 
And what we'll shortly see is that there's a couple of main ideas of styles or approaches of algorithms which turned out to both have, um, be applicable both to bandit settings and MDP settings, um, and function approximation actually and also that have some really nice formal properties. There's sort of a couple of big conceptual ideas about how we might do fast reinforcement learning. Okay. So, the, the plan for today will be that we'll first start with an introduction to multi-armed bandits. Then we'll talk about the definition of regret on a mathematical formal sense. Um, and then we'll talk about optimism under uncertainty. And then, as we can, we'll talk about Bayesian regret, um, and probability matching and Thompson sampling. I'm curious, who here has ever seen this sort of material before? Okay. A couple of people, most people not. I wouldn't- is it covered in AI? I don't think they would cover it. Oh, good. Okay. All right. So, for some of you, uh, this will be a review, for most of you, it will be new. So, for multi-armed bandits, we can think of them as a subset of Reinforcement Learning. So, it's generally considered a set of arms, the- there's a set of m arms, which were, uh, the equivalent to what we used to call actions. So, in Reinforcement Learning, in our set of actions, um, we're thinking about like, there being m different actions. Now we're often gonna call those arms. [NOISE] And then, for each of those arms, they have a distribution of rewards you could get. So, we haven't talked a lot about sort of having uncertainty over our rewards. We mostly just talked about the expected reward. Um, and for multi-armed bandits, today we're gonna explicitly think about the fact that, um, rewards might be sampled from a stochastic distribution. [NOISE] So, there's some distribution that we don't know which is conditioned on the arm. So, conditioned on the arm, you're gonna get different rewards. So, for example, it could be that for arm 1, your distribution looks something like this. This is the probability of that reward and this is rewards. Um, and then for arm 2, it looks something like this. [NOISE] So, in this particular example, um, the average reward for arm 1 would be higher than the average reward for arm 2. And then they would have different variances. But it doesn't have to be Gaussian. You could have lots of different distributions. Um, essentially, what we're trying to capture here is that whenever you, uh, um, take a particular action, which we also refer to as pulling an arm, um, for the multi-armed bandit, um, then the reward you get might vary even if you pull the same arm twice. So, in this case, you can imagine that if you pull the arm once, um, arm 1 once, you might get a reward here and maybe the second time you get a reward there. So, the idea in this case is it's similar to MDPs except for now there's no transition function. So, there's no state or equivalently, you can think of it as there's only a single state. Um, and so when you take an arm, um, you stay in the same state. There's always these m actions available to you, and on each step, you get to pick what action to take and then you observe some reward that is sampled from the unknown probability distribution associated with that arm. And just like in Reinforcement Learning, um, we don't know what those reward distributions are in advance and our goal is to maximize the cumulative reward. 
So, if someone told you what these distributions were in advance, you would know exactly which arm to pull, whichever one has the highest expected, uh, as, expected mean. [NOISE] So, we're gonna try to use pretty similar notation to what we had for, um, the reinforcement learning case. Um, but if you notice things that are confusing, just let me know. Um, so we're gonna define the action value as the mean reward for a particular action. So, that's Qa, this is unknown, agent doesn't know this in advance. The optimal value V star is gonna be equal to Q of the best action. And then, the regret is gonna be the opportunity loss for one step. So, what that means is that if instead of taking, if, if you could have taken Q of a star and instead you took Q of at. So, this is the actual arm you selected. How much in expectation did you lose by taking the sub-optimal arm? And this is how we're gonna mathematically define regret in the case of Reinforcement Learning. So, if you selected the optimal arm, your regret, your expected regret will be 0 for that time step, um, but for any other arm, there will be some loss. And the total regret is just, um, the total opportunity loss if you sum over all the time steps that the agent acts and compare, um, the actions it took and the, um, the expected reward of each of those actions to the expected reward of the optimal action. So, just to be clear here, this is not known, this is not known to the agent, and this is unknown to the agent. [NOISE] Just to check for understanding for a second, why is this, so why is this second thing unknown to the agent? Yeah. Because you don't know the probability distribution of Q. Right. So, [NOISE], correct. So, you don't know, er, what the distribution is, so you don't know what Q is. You get to observe, um, a sample from that. So, you get to observe R. You get to get an R which was sampled from the probability distribution of rewards given the action that was taken. [NOISE] But you don't get to observe either, uh, the true expected value of the optimal arm nor the true expected value of the arm that you selected. So, this isn't something we can normally evaluate unless we're in a simulated domain, okay? But we're gonna talk about ways that we can bound this and, and think about algorithms or try to minimize the regret. So, if we think about ways to sort of quantify it, another way to think about it alternatively is that, um, think about the number of times that you take a particular action. We can call that Nt of a. So, that's like the number of times we select action 1, action 2, et cetera. And then, we can define a gap which is the difference between [NOISE] the optimal arm's value and the value of the arm that we selected, and that's the gap. So, that's how much we lost by picking a different arm than the optimal arm. So, this gap is equal to, gap for a star is equal to 0, if you don't lose anything by taking the optimal arm, and for all other arms, it's gonna be positive. So, another way to think about the regret which is equivalent is to say, this is equivalent to thinking about what are the expected number of times you select each of the arms times their gap. So, how much you would lose by selecting that arm compared to the optimal arm. And what we would like is that sort of an algorithm, um, it should be able to adjust how many times you pull arms which have large gaps. 
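To pin down those quantities in symbols, here is a short recap in LaTeX. It only restates the verbal definitions above in standard bandit notation; the symbol choices (a_t for the arm pulled at step t, N_T(a) for the number of times arm a has been pulled after T steps, l_t and L_T for per-step and total regret) are conventional names, not notation taken from the course slides.

Q(a) = \mathbb{E}[r \mid a], \qquad V^* = Q(a^*) = \max_a Q(a)

l_t = \mathbb{E}\big[V^* - Q(a_t)\big] \quad \text{(per-step regret: the one-step opportunity loss)}

L_T = \mathbb{E}\Big[\sum_{t=1}^{T} \big(V^* - Q(a_t)\big)\Big] = \sum_{a} \mathbb{E}\big[N_T(a)\big]\,\Delta_a, \qquad \Delta_a = V^* - Q(a) \quad \text{(the gap of arm } a\text{)}

The last equality is the gap decomposition described above: total regret is just how often each arm gets pulled, weighted by how much worse that arm is than the best one.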
So, if there's a really, really bad arm, uh, like if there's really bad action which has a very low reward, you would like not to pull that as much, to take that action as much as the arms that are close to optimal. [NOISE] So, one approach that we've seen before is greedy. Um, in the case of, ah, bandits, the greedy algorithm is very simple. Um, we just average the, the rewards we've seen for an arm. So, we just look at every single time that we took that arm, we look at the reward we got for each those timestamps, and then we average it. And that just gives us, um, an estimate of the, of Q hat. And then what greedy does is it selects the action with the highest value, um, and takes that arm forever. So, it's probably clear in this case that because the rewards are sampled from a stochastic distribution that if you are unlucky and get samples that are, uh, misrepresentative, then you could lock into the wrong action and stay there forever. So, if we think of that little example I gave before, and I'll work out, uh, a bigger example shortly, so in this case, imagine this is reward, this is probability, and this was a2. Okay. So, let's imagine that the first, um, and I'll make this 1 and this 0. So, if you sample from a1, in this case, you could imagine there's some non-zero probability that the first sample you get is say, 0.2 for a1, and the first sample you get for a2 with non-zero probability might be 0.5. So, the true mean of a2 is lower than the mean of a1. But if you sampled each of these once, um, then if you're greedy with respect to that, then you will take the wrong action forever. [NOISE] Yeah. Wrong action forever, is the idea that our policy is gonna be influencing what times we'll get in the future or is the idea that there are some set of samples independent of this greedy policy to begin? Because it seems otherwise, if there is non-zero reward, you just take that one forever. Uh, great question. So, um, is it, yeah, so what [inaudible] said is, um, you know, is there an additional thing that we're doing kind of before this? Normally, for a lot of these algorithms, um, we're gonna assume that all of them operate by selecting each arm once at least if you have a finite set of arms. Um, and equivalently, you [NOISE] can say if you don't have any data, you treat everything equivalently or, um, but essentially most of these ones say, until you have data for all the arms, we are gonna do round robin, you're gonna sample everything once. And after you do that, either you can be greedy or we can do something else. So, there has to be a pre-initialization space. It's a good question. So, and we're also gonna assume for right now that we split ties, um, with equal probability. So, if there are two arms that have the same pro- probability, um, and they both have the max actio- max value, then you would split your time between those until the value is changed. So, this is an example where if we first sampled a1 once [NOISE], then sampled a2 once. Um, and because there's a non-zero probability that those samples would make it look that, um, action a1 has a lower mean than action a2, then you could lock into the wrong action forever. Now, an e-greedy algorithm, which we've seen before in class, um, uh, it does something very similar except for with probability 1 - epsilon, we select, um, the greedy action, and otherwise with epsilon, we split our probability across all the other actions or all the other arms. [NOISE] So, in these cases, um, we have some more robustness. 
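Here is a minimal Python sketch of the two selection rules just described, using running-average value estimates. The names (q_hat, counts, select_greedy, select_epsilon_greedy) and the use of NumPy arrays are illustrative choices of mine, not code from the course; exploration is spread uniformly over all arms, which matches the epsilon/3-per-arm bookkeeping in the worked example below.

import numpy as np

def update_estimate(q_hat, counts, arm, reward):
    # Running average of the rewards observed for this arm: Q_hat(a) = mean reward seen so far.
    counts[arm] += 1
    q_hat[arm] += (reward - q_hat[arm]) / counts[arm]

def select_greedy(q_hat, rng):
    # Greedy: take an arm with the highest estimated value, splitting ties uniformly at random.
    best = np.flatnonzero(q_hat == q_hat.max())
    return int(rng.choice(best))

def select_epsilon_greedy(q_hat, epsilon, rng):
    # Epsilon-greedy: with probability epsilon pick an arm uniformly at random
    # (so the greedy arm can also be drawn during exploration), otherwise act greedily.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_hat)))
    return select_greedy(q_hat, rng)

Both rules assume the initialization discussed in the lecture: every arm is pulled once (or unpulled arms are treated as ties) before the averages are trusted.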
So, in this case, you know, we would continue to sample, um, the other action, but we're always gonna make a sub-optimal decision at least epsilon percent of the time, well, approximately. It's a little bit less than that because if you do it totally randomly not a, um, uh, but it's order epsilon. I mean, er, it's slightly less than that because, um, er, if you uniformly split across all your arms with one over the number of arms probability, we'll be selecting the optimal action. [NOISE] Okay. So let's see these in practice for a second before we talk more about better algorithms. Um, so let's imagine we're trying to figure out how to treat broken toes and this is not a real medical example. Um, but let's imagine there's three different surger- three different options. Um, one is surgery. One is buddy taping the broken toe with another toe, which is what the Internet might tell you to do. Um, and the third is to do nothing. And the outcome measure is gonna be a binary variable of whether or not your toe is healed um, after six weeks as assessed by an x-ray. Okay. So let's imagine that we model this as a multi-armed bandit with three arms, where each arm corresponds to um, well, I'll ask you in a second what it corresponds to. And, and there's an unknown parameter here. So each arm, there's an unknown parameter which is the reward outcome. So let's just take just you know, one or two minutes just to say what does uh, um, a pull of an arm correspond to in this case and why is it reasonable to model it as a bandit, instead of an MDP? [NOISE] Yeah. I'm , and in terms of why we model it as a bandit, one reason is that MDPs usually, think of an agent walking through the world and each, the state, or the world has many different states, and we analyze those. Here we have just one state, a toe is broken and various actions are considered. Right. So great. So here we just have one there. And, and so what is, what is the, what does it mean to pull an arm in this case or take an action? Which does that correspond to in the real world? [NOISE] That would be a new patient coming in and then making a decision about the care for that patient. Great. So that's like um, uh, a patient coming in and then us deciding to either do surgery on them or giving them um, er, er, in this case a um, like a er, er, doing one of the three options for treatment. Um, and so in this case too, the, each pool is a different patient. So how we treat patient one isn't gonna generally affect how we treat patient two in terms of whether they healed or not, or whether that particularly your you know, surgery worked for them, doesn't affect the next patient coming in. So all the patients are IID. Um, and what we wanna figure out to do, is which of these treatments on average is most effective. Okay. So let's think about these in a particular set- setting. So um, uh, a par- particular set of values. So let's imagine that they're all Bernoulli reward var- variables because either the toe is gonna be be healed or it's not gonna be healed after six weeks. Um, it turns out in this particular fake example, surgery is best. So if you do surgery with 95% probability, it will be healed after six weeks. Buddy taping is 90%, and doing nothing is 0.1. So what would happen if we did something like a greedy algorithm? Oh, yeah. Sorry, is it possible to incorporate other factors into like pulling the arms? For example, surgeries like [inaudible] like cost effective versus buddy taping, are there ways to incorporate that? Yeah. 
It's a great question. So the question is, you know, surgery is a lot more invasive, and there might be other side effects, and so on. There are a couple of different ways you could imagine putting that information in. One is, you could just change the reward outcome: you could say maybe it's more effective, but it's also really costly, and I've got to have some way to combine outcomes with cost. Another thing one might be interested in doing in these sorts of cases is that you might have a distribution over outcomes. In this case, all of them have the same kind of distribution, they're all Bernoulli, but in some cases your reward outcomes will be complicated functions. Like, it might be that for most people surgery is really good, but for some people it's really bad because they have some really serious side effect, so its mean is still better but there is this really bad risk tail, maybe people react badly to anesthesia or something like that. So in those cases you might not want to focus on expected outcomes; you might want to look at risk sensitivity. And in fact, one of the things that we're doing in my group is looking at safe reinforcement learning, including safe bandits, and thinking about how you could optimize for risk-sensitive criteria. Another thing that we're not gonna talk about today, which you also might want to do in this case, is that patients are not all identical. You might want to incorporate some sort of contextual features about the patient to decide whether to do surgery versus buddy taping versus nothing. And hopefully those of you who are doing the default project will think about this; we'll probably get to it in a couple of lectures. In general, we often have a rich contextual state which will also affect the outcomes. Okay, so in this case, let's imagine that we have these three potential interventions and we're running the greedy algorithm. So as was brought up before, we're gonna sample each of these arms once, and then we're gonna get an empirical average. Let's say that we sample action A1 and we get a plus one, so now our empirical average of the expected reward for action A1 is 1. Then we do A2, and we also get 1, so that's our average for A2. And then we do A3, and we get a 0. So at this point, what is the probability of greedy selecting each arm, assuming that ties are split randomly? Yeah. [inaudible] So what [inaudible] said is exactly correct for the e-greedy case, so you're jumping ahead a little bit, but that's totally right. In this case, for greedy, it'll just be 50-50: the probability of A1 is gonna equal the probability of A2, which is one half. So let's imagine that we did this for a few time steps, and we can think about the regret that we incur along the way. At the beginning we have an initialization period where we're gonna select each action once, and we're always comparing to the reward we could have gotten under the optimal action. So the regret here is gonna be exactly this difference; and remember, the regret is gonna be Q of A star minus Q of the action you took.
So in the first case, the regret is gonna be zero, because we took the optimal action. In the second case, it's gonna be 0.95 - 0.9, which is 0.05. In the third case, it's gonna be 0.95 - 0.1. Then in the fourth case, it's gonna be 0, and then 0.05 again. Now, in this situation, will we ever select A3 again, given the values we've seen so far? No. Yeah, I hear people say no. So why not? What's our current estimate of the reward for A3? Yeah. So I guess I didn't put those here, but these were the actual rewards we got: 1, 1, 0. So our current estimate for A3 is 0. We know our rewards are bounded between 0 and 1, so none of our estimates can ever drop below 0. And we already have a positive 1 for each of the other two actions, which means that their averages can never go to 0. So we're never gonna take A3 again. Now, in this case, that's not actually a problem [LAUGHTER], because A3 is a bad arm and it's got a much lower expected reward. In other cases, it could be that we just got unlucky with A3, and then it could mean that we never take the optimal action. Yeah. I thought we used V star in terms of the reward for an action. Yes, and that's the same as this. Good question. So this is the same thing, and I'll go back and forth between notations, but definitely just ask me. So in this case, we're never gonna select A3 again. And notice that in the greedy case, if I had used slightly different values here, you might have selected A3 again later. Because if you didn't have Bernoulli rewards but you had Gaussians, it could be that the estimated rewards for the other arms drop below A3's later, and then you would start to switch; so you don't necessarily always stick with whichever arm looked best at the beginning. But in this particular case, with these outcomes, you're not gonna select A3 ever again. All right, now let's do e-greedy. In this case, we're gonna assume we got exactly the same outcomes for the first few pulls. And as was said, with probability epsilon we explore, so with probability epsilon over 3 we take A1 or A2 or A3; and, splitting ties randomly again, with probability (1 - epsilon) over 2 we take A1 or A2. Okay. So in this case, it's gonna look almost identical, except we still have some probability of taking A3. And we can do a similar computation here; we've assumed that all of the outcomes are exactly the same. So e-greedy in this case will select A3 again. Yes, if epsilon is fixed, not decaying, yeah. If epsilon is fixed, how many times is it gonna select a_3? The main question is whether it's finite or infinite. Maybe talk to your neighbor for a second and decide whether, if epsilon is fixed, a_3 will be selected a finite or infinite number of times, and what that means in terms of the regret. Okay. I'm gonna have everybody vote. If you think it's gonna be selected an infinite number of times, raise your hand. Great. So what does that mean in terms of regret, is it gonna be good or bad? It's gonna be bad. Great. Bad regret.
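(As a supplement, here is a minimal simulation sketch of the broken-toe bandit just described; the success probabilities 0.95, 0.90, and 0.10 are the lecture's made-up values, while the epsilon, horizon, and seed are illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = np.array([0.95, 0.90, 0.10])   # surgery, buddy taping, do nothing (fake values)

def run(epsilon, T=10_000):
    """Simulate (epsilon-)greedy on Bernoulli arms; epsilon=0 recovers plain greedy."""
    counts = np.zeros(3)
    sums = np.zeros(3)
    regret = 0.0
    for t in range(T):
        if np.any(counts == 0):                      # initialization: pull each arm once
            a = int(np.argmin(counts))
        elif rng.random() < epsilon:                 # explore uniformly at random
            a = int(rng.integers(3))
        else:                                        # exploit the best empirical mean
            q = sums / counts
            a = int(rng.choice(np.flatnonzero(q == q.max())))
        r = rng.random() < p_true[a]                 # Bernoulli healing outcome
        counts[a] += 1
        sums[a] += r
        regret += p_true.max() - p_true[a]           # per-step regret uses the true means
    return regret

print("greedy regret:", run(0.0), " e-greedy regret:", run(0.1))
```

With a fixed epsilon, the e-greedy regret keeps accumulating at a roughly constant rate, which is the point being made next.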
I mean, in general, regret is unfortunately gonna be unbounded in these cases. [LAUGHTER] So we're always gonna have infinite regret, but the rate at which it grows can be much smaller depending on the algorithm you use. In particular, we can also think about it this way: if you have a large gap, which we do for a_3 here, and you're gonna be selecting that arm an infinite number of times, then e-greedy is also gonna have a large regret. So I like this plot; it comes from David Silver's slides. If you explore forever, like if you just act randomly, which we didn't discuss but you could also do, then you're gonna have linear total regret, which means your regret scales linearly with the number of time steps t. Essentially, your regret is growing unboundedly and it's growing linearly; there's generally a constant in front of it, but it's a constant times the worst you could do at every time step, because if you always select the worst arm at every time step, your regret also grows linearly. So it's pretty bad. If you explore never, if you do greedy, then it also can be linear, and if you do e-greedy, it's also linear. So essentially, all of these algorithms that we've been using so far can have really, really bad performance, certainly in bad cases, and so the critical question is whether it's possible to do better than that. Can we have what's often called sublinear regret? If an algorithm is gonna be considered good in terms of its performance and its sample efficiency, we're gonna want its regret to grow sublinearly. When we think about this, we're generally gonna think about whether the performance bounds we derive are problem independent or problem dependent. For MDPs, most of the bounds that we can get are gonna be problem independent. For bandits, there are a lot of problem dependent bounds. Problem dependent bounds in the case of bandits mean that the amount of regret we get is gonna be a function of those gaps, and that should be sort of intuitive. So let's imagine that we just have two arms, a_1 and a_2. If the expected reward of one is 1 and the expected reward of the other is 0.001, intuitively it should be easier to figure out that arm one is better than arm two than if one is, say, 0.53 and the other is 0.525. Because in one case it's really hard to tell the difference between the means of the two arms, and in the other case the means are really, really far apart. So somewhat intuitively, if the gap is really large it should be easier for us to learn, and if it's really small, it should be harder. Yeah. [NOISE] So if the optimal reward is deterministic for some actions, don't we have zero regret? Good question. So the question is whether, if the optimal reward is deterministic, then we have zero regret, if you know it. If you know that all the rewards of the arms are deterministic, then you just need to pull each of them once; then you can make a good decision and you're done.
In general, these algorithms aren't going to know that information, and even if the rewards were deterministic, you're still gonna have these other forms of bounds. So what about the greedy case then, what if it's deterministic? It's a good question. In the greedy case, if your real rewards are deterministic, then once you've pulled all the arms once, you will make no mistakes; you'll have zero regret from that point onwards. So you'd have some initial constant regret, and then afterward it would be independent of t. Did you have a question? Yeah. Remind me your name. [inaudible] Is it also a function of the variance of each arm? Oh, good question. [inaudible] Yeah, great question. So the question is whether it also depends on the variance in addition to the mean. We're not gonna talk about that, but in addition to problem dependent bounds, you can certainly think about parametric assumptions: if you have some parametric knowledge of the distribution of the rewards, then you can exploit it, for example if you know it's a Gaussian, or other things like that. In general, if you have information about the moments, then you should be able to exploit it; most of the results I've seen look at the mean and the variance. Throughout a lot of this we are gonna assume that our rewards are bounded; that's gonna be important for most of the proofs we do, even without making any other parametric assumptions. Okay. But then the other version of this is problem independent, which just says: regardless of the domain you're in, regardless of the gaps, regardless of any structure of the problem, can we still ensure that regret grows sublinearly? And that's what we're gonna mostly focus on today. So I think lower theoretical bounds are helpful to try to understand how hard a problem is. I asked whether it's possible for regret to be sublinear, and there's been previous work looking at how fast the regret has to grow. In this case, the regret is written out in terms of the means, and they prove that, in terms of the gaps and the similarity of the reward distributions measured by the KL divergence, you can show a lower bound on how much regret has to grow. So this is where the unfortunate aspect of regret growing unboundedly comes up. If you don't make any other assumptions on the distributions of your rewards, in general your regret will grow unboundedly, and it will do so in terms of these gaps and the KL divergence. But it's still sublinear, so that's nice: it's growing logarithmically with t here, where t is the number of time steps. It's encouraging that our lower bound suggests there's room for traction, right? There's no formal result that says regret has to be linear; it says we should be able to grow much slower. So how would we maybe do this?
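(For reference, a standard statement of the gap/KL lower bound just described is the Lai and Robbins (1985) result below; the exact constants and notation on the slide may differ.)

```latex
\liminf_{T \to \infty} \frac{\mathbb{E}\left[\mathrm{Regret}_{T}\right]}{\log T}
\;\ge\; \sum_{a \,:\, \Delta_a > 0} \frac{\Delta_a}{\mathrm{KL}\!\left(\mathcal{R}^{a} \,\middle\|\, \mathcal{R}^{a^{*}}\right)},
\qquad \Delta_a \;=\; Q(a^{*}) - Q(a).
```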
So this is now: we've talked about a particular framework, which is regret, we've talked about a setting, which is bandits, and now we're gonna talk about an approach, which is optimism in the face of uncertainty. And the idea is simply to choose actions which might have high value. Why should we do this? I have a question on the previous slide: is that true at every t, or only in the limit? Because isn't it really just saying that it holds as t goes to infinity? That's a good question; the question is about whether that bound holds on every time step. I think this holds on every time step, but I'd have to check exactly how they wrote it; there are also constants, but I think it should hold on a per time step basis. You're really saying that as time goes along, it should be true? Yeah, in the limit as t goes large. This is where I'd have to look back at the original paper; there are probably additional constant terms which are transitory, and this is probably the dominant term as t goes large. In a lot of regret bounds, particularly in our MDP cases, we often have transitory terms that are independent of the time step but are still large early on. That's my guess. But great question. Okay. So optimism in the face of uncertainty says that we should choose actions that might have a high value. Why? Well, there are two possible outcomes if we pick something that we think might be good. One is that it is good. So, to be more precise, let's say we select a_1, and outcome one is that a_1 has high reward. That's good: if we took an action because we thought it might have high reward, and it actually does have high reward, then we're gonna have small regret. What's the other outcome? a_1 does not have high reward: when we sample it, we get a low reward for a_1. Well, if we get something with low reward, we learn something. So we're like, hey, we tried that restaurant again, we thought it was great the first time, the second time it was horrible; so now we've learned that that restaurant is not as good, and we update our estimate of how good it is, which means we no longer think that action has as high a value as it did before. So essentially, either the world really is great, in which case that's great, we're going to have low regret, or the world is not that great and then we learn something. So acting optimistically either gives us information about the reward or allows us to achieve high reward, and it turns out to be a really nice principle. It's been around since at least Leslie Kaelbling in 1993, who introduced this idea of interval estimation, and then there started to be a lot of analysis of these types of optimism under uncertainty techniques. So how can we do this more formally; how do we make precise what it means for an action to possibly have a high value? Let's imagine that we estimate an upper confidence bound for each action value, such that the real value of that action is less than or equal to the upper confidence bound with high probability. And those upper confidence bounds are in general going to depend on how many times we've selected that particular action.
Because we would like it to be such that if we've selected that action a lot, the upper confidence bound should be pretty close to Q of a, and if we haven't selected it very much, maybe we're really optimistic. And then we can derive an upper confidence bound bandit algorithm by just selecting whichever action has the highest upper confidence bound. So for every single action we maintain an upper confidence bound, we select whichever one has the max, and then we update the upper confidence bound for that action after we take it. So a UCB algorithm would work like this: first there's an initialization phase where we pull each arm once and compute U_t of a for all a. Then, for each subsequent time step t, we select a_t equal to the arg max of the upper confidence bounds, we get a reward that is sampled from the true reward distribution for that arm, and then we update U_t of a_t and of all the other arms. It turns out that we often have to update not just the bound of the arm we took, but the bounds of all the other arms too. You don't strictly have to do that, but in terms of the theory we often have to, in order to account for the high probability bounds; we'll see more about that in just a second. So every time you get a reward, you update the upper confidence bounds of all your arms, then you select the next action, and you repeat this over and over again. Okay. So how are we going to define these U_t? We're going to use Hoeffding's inequality, so a quick refresher. Hoeffding's inequality applies to a set of iid random variables; we're going to assume right now that all of them are bounded between 0 and 1, and we're going to define our sample mean to be the average over all of those variables. What Hoeffding says is the following, relating the true expected mean to our empirical mean, where you can think of u as some constant: the probability that the true mean is greater than the empirical mean plus some constant u is less than or equal to the exponential of minus 2 n u squared, where n is the number of samples we have. Okay. We can also invert this: if you want this to hold with a certain probability, you can pick a u so that the empirical mean plus u is at least as large as the real mean with that probability. So let's say that what we want is for the probability that the empirical mean plus u is less than the real mean to equal delta over t squared. We'll see shortly why we might want to choose that particular probability, but let's imagine for a second that's what we want, since then we can solve for what u has to be. So the exponential of minus 2 n u squared has to equal delta over t squared, and then we just solve for u. So u in this case is going to be equal to the square root of 1 over 2n times the log of t squared over delta. What does that tell us? Keeping the same notation as for Hoeffding, it says that the empirical mean plus u, with that particular choice of u, is going to be greater than or equal to the true expected value of X with probability greater than or equal to 1 - delta over t squared.
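(Collecting the algebra just described into one display; this follows the standard Hoeffding argument, with X bar n the empirical mean of n i.i.d. samples bounded in [0, 1].)

```latex
\Pr\!\left(\mathbb{E}[X] > \bar{X}_n + u\right) \;\le\; e^{-2 n u^{2}},
\qquad
e^{-2 n u^{2}} = \frac{\delta}{t^{2}}
\;\Longrightarrow\;
u = \sqrt{\frac{1}{2n}\,\log\frac{t^{2}}{\delta}},
```

so that the empirical mean plus u is at least the true mean with probability at least 1 - delta over t squared.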
So, Hoeffding's inequality gives us a way to define an upper bound, because instead of generic Xs, you can imagine those samples are just pulls from our arm and they're all rewards. And so this says: take your empirical average of the rewards so far, and add in this bonus term, which depends on the number of times we've pulled that arm and on t. So t here denotes the total number of time steps on which we've pulled any arm, and n is the number of times we've pulled that particular arm; they're not the same thing. So inside of this confidence bound, we have a term that is shrinking with the number of times we pull this particular arm, and then we have a log term which is growing with the total number of time steps. And that is the reason why, after each time step, we have to update the upper confidence bounds of all the arms. You have these two competing rates going on: as you pull an arm more, you get a better estimate of its reward, so the bound shrinks; but you also have this slower growing log term, which increases with the total number of time steps. All right. So this is one way we could define our upper confidence bounds on the rewards: U_t of a is equal to the empirical average reward of that arm, plus the square root of 1 over 2 times the number of times we've pulled that arm, times the log of t squared over delta. So that's one way for us to define our upper confidence bounds. All right. So now the next question is: how is that gonna help us show that the regret of something which is optimistic is actually sub-linear? So what we're gonna do now is, I'll do a quick poll: do you guys want me to write it on here or do you want me to do it on the board? Raise your hand if you want it on the board. All right. Raise your hand if you want it on here. Okay, we'll do this next part on the board [LAUGHTER]; that was easy. Was there a question in the back? Yeah. A question about t: when you first introduced it, it was basically a constant, but later you're saying it's the time step and we're updating every time step, so how are we able to do that? Yeah, it's a good question. So, you're right, and I'm being slightly imprecise about this. If you know the time horizon over which you're gonna be acting on the bandit, you could set t to be the maximum: if you know you're gonna act for T time steps, you can plug that in, and then that log term is basically fixed. In online settings where you don't know that, you can also constantly update it with the time step. It's a good question. Yeah. How is delta decided, like, what is delta? Okay, good question. So the question is, what is delta. In this case, it's specifying the probability with which this inequality holds. Later we're gonna provide a regret bound that is high probability: we're gonna say that, with probability 1 minus some function of delta, your regret will be sub-linear. So that's how delta enters.
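(For concreteness, a minimal sketch of the UCB rule just defined, assuming rewards bounded in [0, 1]; the delta, horizon, and Bernoulli arm probabilities are illustrative choices, not prescribed by the lecture.)

```python
import numpy as np

def ucb_bonus(n_pulls, t, delta):
    """Hoeffding-based bonus: sqrt(log(t^2 / delta) / (2 * n_pulls))."""
    return np.sqrt(np.log(t**2 / delta) / (2 * n_pulls))

def run_ucb(p_true, T=5_000, delta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    m = len(p_true)
    counts = np.zeros(m)
    sums = np.zeros(m)
    for t in range(1, T + 1):
        if t <= m:                                   # initialization: pull each arm once
            a = t - 1
        else:                                        # pick the arm with the largest UCB
            ucb = sums / counts + ucb_bonus(counts, t, delta)
            a = int(np.argmax(ucb))
        r = rng.random() < p_true[a]                 # Bernoulli reward
        counts[a] += 1
        sums[a] += r
    return counts                                    # how often each arm was pulled

print(run_ucb(np.array([0.95, 0.90, 0.10])))         # most pulls should go to the best arm
```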
You can get expected regret bounds too, and one of the original UCB papers provides an expected bound, but this high probability bound was a little bit easier to do in class, so that's the one we'll do. Yeah. So before, when we were talking about regret, I didn't exactly understand how you use regret to update your estimate of the action value. Oh, good question. So the question is how we would use the regret to update our estimates: we don't. Regret is just a tool to analyze our algorithm; great clarification. Regret is a way for us to analyze whether an algorithm is gonna be good or bad in terms of how fast the regret grows, but it's not used in the algorithm itself; the algorithm doesn't compute regret, and it's not used in the updating. Okay. So I'll leave this one up here so you can continue to see it. All right, so let's do our proof. What we wanna do now is prove something about the regret and how quickly it grows for the upper confidence bound algorithm. But before I prove that, I'm gonna look at the probability of failure of these confidence bounds. So what I said here is that we're gonna define these upper confidence bounds in terms of the empirical mean for that arm so far, plus this term that depends on the number of times we've pulled that arm. And what I want to do now is bound the probability that on some step the confidence bounds fail to hold. Why do we care? Because if they all hold, we can guarantee some nice properties. So note: if all the confidence bounds hold on every step, then we can ensure the following. If all confidence bounds hold, then U_t of a_t, where a_t is the actual arm we selected, is gonna be greater than or equal to Q of a star, the real value of the optimal arm. Why is this true? There are two cases: either a_t is equal to a star, or a_t is not equal to a star. So let's take a second, maybe talk to your neighbor, and think this through. If the confidence bounds hold, it really is the case that the upper confidence bound for each arm is greater than or equal to the mean of that arm; so we know that U_t of an arm is gonna be greater than or equal to the real expected value of that arm, and this is gonna hold at every time step. So maybe take a second with your neighbor, or if it's not clear what I'm asking or how to think about it, feel free to raise your hand too. There are two cases: either the arm we selected is a star or it is not a star, and in both cases this inequality is gonna hold if the confidence bounds are correct. So let's take a second to think about this, and feel free to raise your hand if it's not clear how to get started. I just want to be clear here.
[NOISE] I just wanted to note, at the top there, that if the confidence bounds hold, then the upper confidence bound of the arm we selected is going to be greater than the real expected value of the optimal arm. Yeah, about the other equation you have written over there: are you saying that the optimal Q value should be less than the confidence bounds for every action? No, good question, so let me clarify what it is. This is saying that, for whichever arm you selected, the upper confidence bound of that arm, the one you used to choose it, is higher than the true value of the optimal arm. That's what this equation is saying. It says that if the confidence bounds hold on all time steps, which they might not, because these are only high probability bounds, but if they do hold on all time steps, then whatever arm you selected, its upper confidence bound is higher than the real value of the optimal arm. And I just want to be clear about what it means for a confidence bound to hold, which is why I put this up there: the upper confidence bound of an arm holding means that the bound, defined in that way, is greater than or equal to the true value of that arm. So let's work through this a little bit. There are two cases. If a_t is equal to a star, then what this is asking is: is U_t of a star greater than or equal to Q of a star? Does that hold if the confidence bounds hold? Yes, by definition. Look up there: if the upper confidence bound holds for an action, it has to be at least the mean of that action. So this case works: if we really selected the optimal action, we've defined our upper confidence bounds so that they really are above the mean of that arm, and so this holds. The other case is that a_t is not equal to a star. What does that mean? It means that U_t of a_t is greater than or equal to U_t of a star, because otherwise we would have selected a star; some other arm had a higher upper confidence bound than the optimal action. And we know that U_t of a star is greater than or equal to Q of a star. So if the confidence bounds hold, we know that at every single time step, the upper confidence bound of the arm we selected is at least the true mean of the optimal arm. Yes. Is that true in the epsilon greedy case as well? Is it true in the epsilon greedy case? I don't follow your question yet. Like, you're selecting this arm using some strategy, right? Yeah. And it's some maximizing action, right? No; this first part only holds because we're picking the arg max over a of U_t of a. So that first inequality, well, there might be other algorithms it holds for too, but it holds in particular for the upper confidence bound algorithm. Great question. Okay. So this says that if the confidence bounds hold, and we will see shortly why that matters, then the upper confidence bound of the arm we select is going to be at least the value of the optimal arm.
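(Writing out the two-case argument on the board as a display, under the assumption that every bound U_t(a) >= Q(a) holds:)

```latex
\text{Case 1 } (a_t = a^{*}):\quad U_t(a_t) = U_t(a^{*}) \ge Q(a^{*}).
\qquad
\text{Case 2 } (a_t \ne a^{*}):\quad U_t(a_t) \ge U_t(a^{*}) \ge Q(a^{*}),
```

where the first inequality in Case 2 holds because a_t is the arg max of U_t over the arms, and the second because the confidence bound for a star holds.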
And the reason we're going to want that is that later, when we're doing the regret bounds, we do not want to deal with quantities we don't observe, namely the value of the optimal arm. We don't know what Q of a star is; we don't even know which arm it is. The regret is written in terms of Q of a star, and we don't know that quantity, so we're going to need some way to get rid of it. We're going to end up using these upper bounds, and we're going to need the fact that the upper bound of the arm we select is at least Q of a star. Okay. So we've said that that's true if our upper confidence bounds hold; what is the probability that that occurs? What we want to bound is the probability that on some step the arm we took fails this, meaning its upper confidence bound is not actually above the real mean of the optimal arm. That's the failure case. We can upper bound the probability of that happening by making sure our upper confidence bounds hold at all time steps for all arms; what I said up there is that if all of our confidence bounds hold on all time steps, we can ensure that property. So we're now going to bound the probability that some upper confidence bound fails to hold on some time step, and that's a union bound: a union over all time steps t = 1 to T and over all arms of the event that Q of that arm exceeds its upper confidence bound. Well, by definition, over there, we picked the bonus term so that each of these failure events has probability at most delta over t squared; that's how we defined our upper confidence bound. We picked a big enough thing to add onto our empirical means so that the upper confidence bound really is larger than the mean, except with probability delta over t squared. So now we have a sum over all time steps and over all arms of delta divided by t squared. And note that if you sum over t = 1 to infinity of t to the minus 2, that's equal to pi squared over 6, which is less than 2. So when you do the sum, you get at most 2 m delta. So what this says is that the probability that your upper confidence bounds hold over all time steps, for all arms, is at least 1 - 2 m delta. So what we're gonna end up doing is have a high probability regret bound that says: with probability at least 1 - 2 m delta, we're gonna get a small regret. Yeah. So what about the infinite horizon case? Great question. Yes, this all works for an infinite horizon; we're gonna state our regret in terms of t, the number of time steps. Okay, all right. So why is this useful? Do you guys want me to leave this up or can we move it now? Everyone's written it down? Okay. So let's see, can this go up? Let me see, or not. Okay.
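(The failure-probability calculation just worked through on the board, collected into one display; constants as in lecture.)

```latex
\Pr\big(\exists\, t,\ \exists\, i \le m:\ Q(a_i) > U_t(a_i)\big)
\;\le\; \sum_{t=1}^{\infty}\sum_{i=1}^{m} \frac{\delta}{t^{2}}
\;=\; m\,\delta \sum_{t=1}^{\infty}\frac{1}{t^{2}}
\;=\; \frac{\pi^{2}}{6}\, m\,\delta
\;<\; 2\, m\, \delta ,
```

so with probability at least 1 - 2 m delta, every confidence bound holds on every step.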
So why is this useful? Okay, we're now gonna define our regret. This part of the board just says that we've made it so these upper confidence bounds hold with high probability; now we're gonna try to show what our regret is gonna be. All right. So what's our regret? The regret of our UCB algorithm after T time steps is just the sum over those time steps, t = 1 to T, of Q of a star minus Q of a_t. Remember, we don't know either of these things: we don't know the real mean of any arm we pick, and we don't know the real mean of the optimal arm. So we need to turn this into things that we know; these are unknown. So what we're gonna do is one of our favorite tricks in reinforcement learning, which is to add and subtract the same thing. We write it as the sum over t = 1 to T of U_t of a_t minus Q of a_t, plus Q of a star minus U_t of a_t. I just added and subtracted the upper confidence bound of the arm we selected at each time point. Okay? Then the important thing is what we showed over here: if all of our confidence bounds hold, then the upper confidence bound of the arm we selected is at least Q of a star. That means this second part, Q of a star minus U_t of a_t, has to be less than or equal to 0, because the upper confidence bound of whatever arm we selected is, as we proved over there, at least the real mean of the optimal arm. So that second term is less than or equal to 0, which means we can upper bound our regret by dropping it. So that's nice, right? Because now we don't have any a stars anymore; we're only looking at the actions we actually took, and comparing the upper confidence bound at each time step to the real mean. But remember the way we defined our upper confidence bound over here: U_t of a_t is exactly equal to the empirical mean of a_t plus the square root of 1 over 2 n_t of a_t, times the log of t squared over delta. And we said that this bonus was chosen, via Hoeffding, so that the probability that the gap between Q of a_t and Q hat of a_t exceeds u is small. So now, assuming all of our confidence bounds hold, we know that the difference between the empirical mean and the true mean of this arm is bounded by u. Yeah. Going back, for the bottom panel, sorry, it's a little hard to see, two questions. First, you have a union over i = 1 to the number of arms; I don't see where that index actually factors in. And then also, could you just go over the third line, with the delta over t squared and the summation, and how we derived that? Sure. Yeah, so are you asking about going from the second line to the third line? Yes. So what we did in that case is we said we want to make sure that on each of the time steps, all of the upper confidence bounds hold, for every arm, and that's where we get the additional sum over all the arms. So this is conservative; you could imagine just requiring this for the arm that's selected, but we don't know in advance which arm will be selected, so this is going to be a looser upper bound saying this condition is sufficient.
So we're saying that if you want to make sure that Q of a star is at most the upper bound of the arm that is selected, it is sufficient, from the reasoning up here, to ensure that your upper confidence bounds are valid at all time points. And so this is the probability that your upper confidence bounds are correct on all time points: for every single time point and every single arm, the upper confidence bound has to hold. And what we get in that case is this: we said that the probability that a particular bound fails on a particular time step is at most delta over t squared; that's how we defined the bonus term, so that according to Hoeffding's inequality it would fail with probability at most delta over t squared. And then I just made a side note, which some of you might have seen but I certainly wouldn't expect everyone to, that the sum over t = 1 to infinity of t to the minus 2 is pi squared over 6, which is less than 2. Fun fact. [LAUGHTER] So you can plug that in, you get a 2 here, and then you get a sum over all arms, which is m, and you have a delta. So that lets us take the infinite sum, and notice, this goes to the question before, that this holds for the infinite horizon, because when we did this summing we made sure our confidence bounds hold forever. Okay, great. So we said that we're doing all of this part under the assumption that our confidence bounds hold. Our confidence bounds holding means that the difference between our empirical mean and the true mean of the same arm is bounded by u, with high probability, where u is defined as before; that's what Hoeffding's inequality allowed us to state. So take that quantity u and plug it in here. This is exactly the difference between our upper confidence bound and Q, so this sum is at most the sum over t = 1 to T of u, and I'll just plug in the exact expression: the square root of 1 over 2 n_t of a_t, times the log of t squared over delta. We've just plugged in that the difference between the empirical mean and the true mean is bounded by this quantity u. All right. So then what we can do is split this up according to the different arms that we pulled. This is a sum over all time steps; if we upper bound the log term using big T, this is less than or equal to the square root of the log of big T squared over delta, which we can pull out, and then we're left with a sum over all time steps, which we split up according to which arm was selected. So for each of the arms, how many times did we pull it? We get a sum over arms i, and then a sum over n = 1 to N_T of i of the square root of 1 over n. We just divided it up: i is indexing our arms, and N_T of i is the total number of times we selected arm i. And then note the fact that if you sum from n = 1 to T the quantity 1 over the square root of n, that is less than or equal to 2 times the square root of T; you use an integral argument for that, and I'm happy to talk about it offline. Yes, in the back. What happened to the 1 over 2 [inaudible]? Thank you, we have a 2, we can put a 2 here. Thanks.
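(For completeness, the integral argument referenced above: since 1 over the square root of x is decreasing, each term 1/sqrt(n) is at most the integral of 1/sqrt(x) from n-1 to n, so)

```latex
\sum_{n=1}^{T} \frac{1}{\sqrt{n}}
\;\le\; \int_{0}^{T} \frac{dx}{\sqrt{x}}
\;=\; 2\sqrt{T}.
```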
I'll be a little loose with constants, but definitely catch me on them; most of these bounds end up being about whether the growth is sublinear or linear, but it's good to be precise. Okay. So we have this quantity here: when is it maximized? It's going to be maximized if we pulled all arms an equal number of times. Why? Because the square root of 1 over n is decreasing with n, so the largest this sum can be is if you split your pulls equally across all the arms. So if we go back up to here and call this quantity A, then A is gonna be less than or equal to the square root of 1 over 2 log big T squared over delta, times the sum over i = 1 to m of the sum over n = 1 to T over m of 1 over the square root of n; this is as if we split all of our pulls equally across the arms. And then we can use the expression from before: this is less than or equal to the square root of 1 over 2 log T squared over delta, times the sum over i = 1 to m of 2 times the square root of T over m; we're almost there. And when you sum that over the m arms, you get something less than or equal to a constant times the square root of m times T times the log of T squared over delta. And now we're done. So what has this shown? It has shown that if we use upper confidence bounds, the rate at which our regret grows is sublinear: a square root of T times a log factor, where T is the number of time steps. So if we use upper confidence bounds to make our decisions, the regret grows much more slowly. This is a problem independent bound; it doesn't depend on the gaps. There are much nicer, tighter bounds that do depend on the gaps. But this indicates why optimism is fundamentally a sound thing to do in the case of bandits: it allows us to have much better performance in terms of regret than the e-greedy case. Yeah. Can you just show one more time, on the top board, how you went from the summation over t = 1 to big T and pulled out the log of big T squared over delta; what happens to t = 1 to big T - 1? Great question. So this log term here is ranging over t = 1 to big T, and it's maximized when t is big T. So we're upper bounding that log term, and then it becomes a constant and we can pull it out of the sum. Okay, so the cool thing here is that this is sublinear; that's really the main point. We'll go through an example and more of this next time: we'll go through an example in the toy domain of the broken toes, of what these upper confidence bounds look like in that case and what the algorithm will do in those scenarios. So that's what we'll look at next. And after that, so this is one class of techniques, this optimism under uncertainty approach, which says that we're going to base decisions on a combination of the empirical rewards we've seen so far plus an upper confidence bound over them. The next thing we'll see is an approach where we are Bayesian about the world: we instead maintain a prior, update that prior, and use it to figure out how to act. So we'll go through this next week, and I'll see you then.
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_1_Introduction_Emma_Brunskill.txt | Hi everybody, I'm Emma Brunskill. I'm an assistant professor in Computer Science, and welcome to CS234, which is a reinforcement learning class designed to be sort of an entry-level introduction to reinforcement learning for masters or PhD students. So, what we're gonna do today is I'm gonna start with a really brief overview of what reinforcement learning is. Then we'll go through course logistics, and when I go through course logistics I'll also pause and ask for any questions about logistics. The website is now live, and that's also the best source of information about the class; that and Piazza will be the best sources of information. So I'll stop there when we get to that part to ask if there's anything I don't go over that you have questions about, and if you have questions about the wait-list or anything relating to your own circumstances, feel free to come up to me at the end. And then the third part of the class is gonna be where we start to get into the technical content, an introduction to sequential decision making under uncertainty. Just so I have a sense before we get started, who here has taken a machine learning class? All right. Who here has taken AI? Okay, so a little bit less, but most people. All right, great. So probably everybody here has seen a little bit about reinforcement learning; it varies a little depending on where you've been. We will be covering stuff starting from the beginning, as if you don't know any reinforcement learning, but then we'll rapidly be getting to other content that's beyond anything covered in at least the other Stanford-related classes. So, reinforcement learning is concerned with this really foundational issue of how an intelligent agent can learn to make a good sequence of decisions. That's a single sentence that summarizes what reinforcement learning is, and what we'll be covering during this class, but it actually encodes a lot of really important ideas. The first is that we're really concerned with sequences of decisions. In contrast to a lot of what is covered in machine learning, we're gonna be thinking about an intelligent agent, in general, that might or might not be human or biological, and how it can make not just one decision but a whole sequence of decisions. The second thing is that we're gonna be concerned with goodness: how do we learn to make good decisions, and what we mean by good here is some notion of optimality; we have some utility measure over the decisions that are being made. And the final critical aspect of reinforcement learning is the learning: the agent doesn't know in advance how its decisions are gonna affect the world, or which decisions might be associated with good outcomes, and instead it has to acquire that information through experience. So, when we think about this, this is really something that we do all the time; we've done it since we were babies.
We try to figure out how to achieve high reward in the world, and there's a lot of really exciting work going on in neuroscience and psychology that's trying to think about this same fundamental issue from the perspective of human intelligent agents. And so I think that if we wanna be able to solve AI, or make significant progress, we have to be able to make significant progress in allowing us to create agents that do reinforcement learning. So, where does this come up? There's this nice example from Yael Niv, who's an amazing psychologist and neuroscience researcher over at Princeton, where she gives the example of a primitive creature which evolves as follows during its lifetime. When it's a baby, it has a primitive brain and one eye, and it swims around and attaches to a rock. And then when it's an adult, it digests its brain and it sits there. So maybe this is some indication that the point of intelligence, or the point of having a brain, at least in part, is to help guide decisions, so that once all the decisions in the agent's life have been completed, maybe it no longer needs a brain. This is one example of a biological creature, but I think it's a useful reminder to think about why an agent would need to be intelligent, and whether that is somehow fundamentally related to the fact that it has to make decisions. Now of course, there's been a real paradigm shift in reinforcement learning. Around 2015, at the NeurIPS conference, which is one of the main machine learning conferences, David Silver came to a workshop and presented these incredible results of using reinforcement learning to directly control Atari games. Now, these are important whether you like video games or not: video games are a really interesting example of complex tasks that often take human players a while to learn; we don't know how to do them in advance, it takes us at least a little bit of experience. And the really incredible thing about this example, this is Breakout, is that the agent learns to play directly from pixel input. From the agent's perspective, it's just seeing these colored pixels coming in, and it's having to learn what the right decisions are in order to play the game well, and in fact even better than people. So it was really incredible that this was possible. When I first started doing reinforcement learning, a lot of the work was really focused on very artificial toy problems; a lot of the foundations were there, but these sorts of larger scale applications were really lacking. And so I think in the last five years we've seen a huge improvement in the types of techniques that are being used in reinforcement learning and in the scale of the problems that can be tackled. Now, it's not just in video game playing. It's also in things like robotics, and in particular some of my colleagues up at the University of California, Berkeley have been doing some really incredible work on robotics and using reinforcement learning in these types of scenarios, to try to have the agents do grasping, fold clothes, things like that.
But one of the things that I think is really exciting is that reinforcement learning is actually applicable to a huge number of domains, which is both an opportunity and a responsibility. So in particular, I direct the AI for Human Impact Lab here at Stanford, and one of the things that we're really interested in is how we can use artificial intelligence to help amplify human potential. One way you could imagine doing that is through something like educational games, where the goal is to figure out how to quickly and effectively teach people material such as fractions. Another really important application area is health care. This is a cutout looking at seizures, from some work that's been done by Joel Pineau up at McGill University, and I think there's also a lot of excitement right now about how we can use AI, and in particular reinforcement learning, to do things like interact with electronic medical records systems and use them to inform patient treatment. There's also a lot of recent excitement about using reinforcement learning in lots of other applications, kind of as an optimization technique for when it's really hard to solve optimization problems directly; this is arising in things like natural language processing and vision and a number of other areas. So if we have to think about what the key aspects of reinforcement learning are, they probably boil down to the following four, and these are the things that are gonna distinguish it from other aspects of AI and machine learning. Reinforcement learning, from my sentence about learning to make good decisions under uncertainty, fundamentally involves optimization, delayed consequences, exploration, and generalization. Optimization naturally comes up because we're interested in good decisions: there's some notion of relative quality among the different decisions we can make, and we want decisions that are good. The second aspect is delayed consequences. This is the challenge that, for decisions made now, you might not realize whether or not they were good decisions until much later. So you eat the chocolate sundae now, and you don't realize until an hour later that it was a bad idea to eat all two quarts of ice cream; or, in the case of video games like Montezuma's Revenge, you have to pick up a key and only much later do you realize that it was helpful; or you study really hard on a Friday night and three weeks later you do well on the midterm. One of the challenges this creates is that, because you don't necessarily receive immediate outcome feedback, it can be hard to solve what is known as the credit assignment problem, which is how you figure out the causal relationship between the decisions you made in the past and the outcomes in the future. And that's a really different problem than we tend to see in most of machine learning. So one of the things that comes up when we start to think about this is how we do exploration. The agent is fundamentally trying to figure out how the world works through experience in much of reinforcement learning, and so we can think of the agent as really being a scientist, trying things out in the world, like an agent that tries to ride a bicycle and learns about physics and balance by falling.
And one of the really big challenges here is that the data is censored, and what we mean by censored in this case is that you only get to learn about what you actually try. So all of you are here at Stanford; clearly that was the optimal choice. But you don't actually get to find out what it would have been like if you'd gone to MIT; it's possible that would have been a good choice as well, but you can't experience it, because you only get to live one life, and so you only get to see the outcome of the particular choice you made at that particular time. So, one question you might wonder about is the following: we're gonna talk a lot about policies, and a decision policy is gonna be some mapping from experiences to a decision, and you might ask why this needs to be learned. If we think about something like DeepMind's Atari-playing agent, it was learning from pixels; it was essentially learning from the space of images what to do next. If you wanted to write that down as a program, a series of if-then statements, it would be absolutely enormous; it's not tractable. So this is why we need some form of generalization, and why it may be much better for us to learn from data directly, as well as to have some higher-level representation of the task, so that even if we run into a particular configuration of pixels we've never seen before, our agent still knows what to do. So these are the four things that really make up reinforcement learning, at least online reinforcement learning; so why are they different from some other types of AI and machine learning? Another thing that comes up a lot in artificial intelligence is planning. For example, the game of Go can be thought of as a planning problem. So what does planning involve? It involves optimization, often generalization, and delayed consequences: you might take a move early in Go, and it might not be immediately obvious whether that was a good move until many steps later. But it doesn't involve exploration. The idea in planning is that you're given a model of how the world works; you're given the rules of the game, for example, and you know what the reward is, and the hard part is computing what you should do given that model of the world. So it doesn't require exploration. What about supervised machine learning versus reinforcement learning? It often involves optimization and generalization, but frequently it doesn't involve either exploration or delayed consequences. It doesn't tend to involve exploration because typically in supervised learning you're given a data set; your agent isn't collecting its own experience or data about the world, instead it's given experience and it has to use that to, say, infer whether an image contains a face or not. Similarly, it's typically making essentially one decision, like whether this image is a face or not, instead of having to make decisions now and only learning whether or not those were the right decisions later. Unsupervised machine learning also involves optimization and generalization, but generally does not involve exploration or delayed consequences, and typically you have no labels about the world. So, in supervised learning you often get the exact label, like whether this image really does contain a face or not.
Um, in unsupervised learning you normally get no labels about the world, and in RL you typically get something kind of halfway in between: you get a utility for the label you chose. So, for example, you might decide that there's a face in here and it might say, "Okay, yeah, we'll give you partial credit for that," because maybe there's something that looks sort of like a face. But you don't get the true label of the world. Or maybe you decide to go to Stanford, and then you don't know; you're like, okay, that was a really great experience, but I don't know if it was "the right experience." Imitation learning, which is something that we'll probably touch on briefly in this class and is becoming very important, is similar, but a little bit different. It involves optimization, generalization, and often delayed consequences, but the idea is that we're going to be learning from the experience of others. So, instead of our intelligent agent getting to take actions in the world and make its own decisions, it might watch another intelligent agent, which might be a person, make decisions, observe the outcomes, and then use that experience to figure out how it wants to act. There are a lot of benefits to doing this, but it's a little bit different because it doesn't have to directly think about the exploration problem. I just want to spend a little bit more time on imitation learning, because it's become increasingly important. To my knowledge, it was first really popularized by Andrew Ng, who's a former professor here, through some of his helicopter work, where he was looking at expert flights together with Pieter Abbeel, who's a professor over at Berkeley, to see how you could imitate very quickly experts flying toy helicopters. And that was one of the first major application successes of imitation learning. It can be very effective. There can be some challenges to it, because essentially, if you get to observe one trajectory, let's imagine it's a helicopter flying in a circle, and your agent learns something that isn't exactly the same as what the expert was doing, you can essentially start to drift off that path and venture into territory where you really don't know what the right thing to do is. So, there's been a lot of work that combines imitation learning and reinforcement learning, which ends up being very promising. So, in terms of how we think about trying to do reinforcement learning, we can build on a lot of these different types of techniques, and then also think about some of the challenges that are unique to reinforcement learning, which involves all four of these challenges. These RL agents really need to explore the world and then use that exploration to guide their future decisions. We'll talk more about this throughout the course. A really important question that comes up is where these rewards come from: where is this information that the agents are using to try to judge whether or not their decisions are good, who is providing it, and what happens if it's wrong? We'll talk a lot more about that. We won't talk very much about multi-agent reinforcement learning systems, but that's also a really important case, as is thinking about game-theoretic aspects.
So, that's just a really short overview of some of the aspects of reinforcement learning and why it's different than some of the other classes that you might have taken. And now we're gonna go briefly through course logistics and then start more of the content, and I'll pause after course logistics to answer any questions. In terms of prerequisites, we expect that everybody here has either taken an AI class or a machine-learning class, either here at Stanford or the equivalent at another institution. And if you're not sure whether or not you have the right background for the class, feel free to reach out to us on Piazza and we will respond. If you've done extensive work on related topics, that will probably be sufficient. In general, we expect that you have basic Python proficiency and that you're familiar with probability, statistics, and multivariable calculus. Things like gradient descent and loss derivatives should all be very familiar to you. And I expect that most people have probably heard of MDPs before, but it's not totally critical. So, this is a long list [LAUGHTER] but I'll go through it slowly because I think it's pretty important: what are the goals for the class, what are the learning objectives? These are the things that we expect you should be able to do by the time you finish this class, and it's our role to help you understand how to do these things. The first thing is that it's important to be able to define the key features of reinforcement learning that distinguish it from other types of AI and machine learning framings of problems. That's what I was doing a little bit of so far in this class: how does RL distinguish itself from other types of problems? Related to that, most of you will probably not end up being academics, and most of you will go into industry. One of the big challenges when you do that is that when you're faced with a particular problem from your boss, or when you're giving a problem to one of your supervisees, you have to think about whether or not it should be framed as a reinforcement learning problem and what techniques are applicable to it. So, I think it's very important that by the end of this class, if you're given a real-world problem like web advertising or patient treatment or a robotics problem, you have a sense of whether or not it is useful to formulate it as a reinforcement learning problem, how to write it down in that framework, and what algorithms are relevant. During the class, we'll also be introducing you to a number of reinforcement learning algorithms, and you will have the chance to implement those in code, including deep reinforcement learning problems. Another really important aspect, if you're trying to decide what tools to use for a particular, say, robotics problem or health care problem, is to understand which of the algorithms is likely to be the beneficial one and why. And so, in addition to things like empirical performance, I think it's really important to understand, generally, how we evaluate algorithms, and whether we can use theoretical tools like regret and sample complexity, as well as things like computational complexity, to decide which algorithms are suitable for particular tasks.
And then the final thing is that one really important aspect of reinforcement learning is exploration versus exploitation: the issue that arises when agents have to figure out what decisions they wanna make and what they're gonna learn about the environment by making those decisions. And so, by the end of the class, you should also be able to compare different techniques for handling exploration versus exploitation and know what the strengths and limitations of these are. Does anyone have any questions about what these learning objectives are? Okay. So, we'll have three main assignments for the class, we'll also have a midterm, we'll have a quiz at the end of the class, as well as a final project. The quiz is a little bit unusual, so I just want to spend a little bit of time talking about it right now. The quiz is done both individually and in groups. The reason that we do this is because we want a low-stakes way to have people practice with the material that they learn in the second half of the course, in a way that's fun, engaging, and really tries to get you to think about it and also learn from your peers. We did it last year, and I think a number of people who were a little bit nervous beforehand about how it would go ended up really enjoying it. So, the way the quiz works is it's a multiple-choice quiz. At the beginning everybody does it by themselves, and then after everybody has submitted their answers, we do it again in groups that are pre-assigned by us. The goal is that you have to get everyone to decide on what the right answer is before you scratch off and see what the correct answer is, and then we grade it according to whether or not you scratched off the right answer first. You can't do worse than your individual grade, so doing it in a group can only help you. SCPD students don't do it in groups; they just write down justifications for their answers. Again, it's a pretty lightweight way to do assessment. The goal is that you have to be able to articulate why you believe the answers are what they are, discuss them in small groups, and use that to figure out what the correct answer is. The final project is paired, pretty similar to other projects that you guys have done in other classes. It's an open-ended project. It's a chance to reason about and think about reinforcement learning in more depth. We will also be offering a default project that will be announced over the next couple of weeks, before the first milestone is due. If you choose to do the default project, your grade breakdown, because you will not need to do a proposal or milestone, will be based on the project presentation and your write-up. Since we believe that you guys are all each other's best resource, we use Piazza, which should be used for pretty much all class communication unless it's something of a private or sensitive nature, in which case of course please feel free to reach out to the course staff directly. For things like lecture, homework, and project questions, pretty much all of that should go through Piazza. For the late-day policy, we have six late days; for details you can see the webpage, and for collaboration please see the webpage as well.
So before we go on to the next part, are there any questions about logistics for the class? Okay, let's get started. So, we're now going to do an introduction to sequential decision-making under uncertainty. A number of you guys have seen some of this content before; we will be going into it in more depth than you've seen for some of this material, including some theory, not today but in other lectures, and then we'll also be moving on to content that will be new to all of you later in the class. So, sequential decision-making under uncertainty. The fundamental thing that we think about in these settings is an interactive, closed-loop process, where we have some agent, an intelligent agent hopefully, that is taking actions that affect the state of the world, and the world then gives back an observation and a reward. The key goal is that the agent is trying to maximize the total expected future reward. Now, this expected aspect is going to be important, because sometimes the world itself will be stochastic, and so the agent is going to be maximizing things in expectation. This may not always be the right criterion; it has been what the majority of reinforcement learning has focused on, but there's now some interest in thinking about distributional RL and some other aspects. One of the key challenges here is that it can require balancing between immediate and long-term rewards, and it might require strategic behavior in order to achieve those high rewards, meaning that you might have to sacrifice higher initial rewards in order to achieve better rewards over the long term. As an example, in something like web advertising you might have an agent that is running the website, and it has to choose which web ad to show to a customer. The customer gives you back an observation, such as how long they spent on the web page, and you also get some information about whether or not they clicked on an ad, and the goal is to have people click on ads as much as possible. So you have to pick which ad to show people so that they're going to click on ads. Another example is a robot that's unloading a dishwasher. In this case the action space of the agent might be joint movements, the information the agent gets back is a camera image of the kitchen, and it might get a plus-one reward if there are no dishes on the counter. So in this case it would generally be a delayed reward: for a long time there are going to be dishes on the counter, unless it can just sweep all of them off and have them crash onto the floor, which may or may not be the intended goal of the person who's writing the system. And so, it may have to make a sequence of decisions where it can't get any reward for a long time. Another example is something like blood pressure control, where the actions might be things like prescribing exercise or prescribing medication, and we get an observation back of what the blood pressure of the individual is. Then the reward might be plus one if the blood pressure is in a healthy range, maybe a small negative reward if medication is prescribed, due to side effects, and maybe zero reward otherwise.
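To make this closed-loop interaction concrete, here is a minimal Python sketch, not from the lecture, of an agent-environment loop; the toy environment, the two actions, and the random policy are all made-up illustrations of the structure described above.

import random

class SimpleEnv:
    """Toy environment: two actions; action 1 pays off more often on average."""
    def step(self, action):
        reward = 1.0 if (action == 1 and random.random() < 0.7) else 0.0
        observation = reward  # in this toy case the observation is just the reward
        return observation, reward

def random_policy(history):
    # a placeholder policy that ignores the history entirely
    return random.choice([0, 1])

env = SimpleEnv()
history, total_reward = [], 0.0
for t in range(100):
    action = random_policy(history)
    obs, reward = env.step(action)
    history.append((action, obs, reward))   # the history h_t = (a_1, o_1, r_1, ...)
    total_reward += reward
print("total reward over 100 steps:", total_reward)

A learning agent would replace random_policy with something that uses the accumulated history to favor better actions; that is the part the rest of the course is about.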
So, let's think about another case, like some of the cases that I think about in my lab, like having an artificial tutor. So now you could have a teaching agent, and what it gets to do is pick a teaching activity. Let's say it only has two different types of teaching activities to give: it's going to either give an addition activity or a subtraction activity, and it gives this to a student. Then the student either gets the problem right or wrong. And let's say the student initially does not know addition or subtraction. So, it's a kindergartner; that student doesn't know anything about math, and we're trying to figure out how to teach the student math, and the reward structure for the teaching agent is that it gets a plus one every time the student gets something right and a minus one if the student gets it wrong. So, I'd like you to just take a minute, turn to somebody nearby, and describe what you think an agent that's trying to learn to maximize its expected rewards would do in this type of case: what type of problems it would give to the student, and whether or not that is doing the right thing. Let me just clarify here: let's assume that for most students addition is easier than subtraction, so, like it says here, even though the student doesn't know either of these things, the skill of addition is simpler for a new student to learn than subtraction. So what might happen under those conditions? Would someone like to raise their hand and tell me what they and somebody nearby them were thinking might happen for an agent in this scenario? The agent would give them really easy addition problems; that's correct. That's exactly what actually happened. There's a nice paper from approximately 2000 with Bev Woolf, which is one of the earliest ones I know of where they're using reinforcement learning to create an intelligent tutoring system, and the reward was for the agent to give problems to the student and have them get them correct. Because, you know, if the student is getting things correct, then they've learned them. But the problem here is that with that reward specification, what the agent learns to do is to give really easy problems: maybe the student doesn't know how to do those initially, but then they quickly learn how, and then there's no incentive to give hard problems. So this is just a small example of what is known as reward hacking, [LAUGHTER] which is that your agent is gonna learn to do exactly what it is that you tell it to do in terms of the reward function that you specify, and yet in reinforcement learning we often spend very little of our time thinking very carefully about what that reward function is. So, whenever you go out and test in the real world, this is the really, really critical part. Normally it is the designer that gets to pick what the reward function is; the agent does not have an intrinsic internal reward, and so depending on how you specify it, the agent will learn to do different things.
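Here is a tiny simulation, not from the lecture, of the reward-hacking effect just described: with a plus-one/minus-one "did the student get it right" reward, a greedy agent tends to drift toward always giving the easier activity. All of the probabilities and learning rates below are made up purely for illustration.

import random

p_correct = {"addition": 0.2, "subtraction": 0.1}    # student's initial skill (made up)
learn_rate = {"addition": 0.15, "subtraction": 0.03}  # addition is assumed easier to learn

q = {"addition": 0.0, "subtraction": 0.0}             # agent's running reward estimates
counts = {"addition": 0, "subtraction": 0}

for t in range(500):
    # greedy choice of activity, ties broken randomly
    a = max(q, key=lambda k: (q[k], random.random()))
    correct = random.random() < p_correct[a]
    r = 1.0 if correct else -1.0
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]                     # running average of observed reward
    # the student slowly improves at whatever they practice
    p_correct[a] = min(1.0, p_correct[a] + learn_rate[a])

print(counts)  # typically the vast majority of trials end up on "addition"

The agent is doing exactly what the reward asks for, maximizing the number of correct answers, which is not the same as maximizing how much the student actually learns.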
Yeah, was there a question in the back? In this case, it seems like the student would also be an RL agent, and in real life the student might, say, ask for harder questions; are there techniques to approach that, or is it okay that we ignore that part? So, the question was to say, well, we also think that people are probably reinforcement learning agents as well, and that's exactly correct, and maybe they would start to say, "Hey, I need to get harder questions," or be interactive in this process. For most of this class we're going to ignore the fact that the world we interact with might itself also be an RL agent. In reality it's really critical; sometimes this is considered in an adversarial way, as in game theory, but I think one of the most exciting things to me is when we think about it in a cooperative way. So, who here has heard about the sub-discipline of machine teaching? Nobody yet; so, it's a really interesting new area that's been around for maybe 5-10 years, some of it a little bit beyond that. One of the ideas there is: what happens if you have two intelligent agents that are interacting with each other, where each knows that the other is trying to help it? There's a really nice classic example (sorry for those of you who aren't so familiar with machine learning): imagine that you're trying to learn a classifier to decide where along a line things switch from positive to negative. In general you're going to need some number of samples, where a sample is a point on the line for which you get a positive or negative label. If you're in an active learning setting, you can generally reduce that to roughly log n by being strategic about asking people to label particular points on the line. One of the really cool things about machine teaching is that if I know you are trying to teach me where to divide this line, you'll only need one point, or at most two points, essentially a constant. Because if I'm trying to teach you, there's no way I'm just going to randomly label things; I'm just gonna give you a single plus and a single minus, and that's gonna tell you exactly where the line goes. So that's one of the reasons why, if an agent knows that the other agent is trying to teach it something, it can actually be enormously more efficient than what we normally think of for learning. And so, I think there's a lot of potential for machine teaching to be really effective. But all that said, we're going to ignore most of that for the course; if it's something you want to explore in your project, you're very welcome to. There are a lot of connections with reinforcement learning. Okay. So, if we think about this process in general, if we think of a sequential decision-making process, we have this agent, and we're going to think, almost always, about there being discrete time. The agent is gonna make a decision, it's gonna affect the world in some way, and the world is gonna give back some new observation and a reward. The agent receives those and uses them to make another decision. So, when we think about a history, what we mean by a history is simply the sequence of previous actions that the agent took, and the observations and rewards it received. Then the second thing that's really important is to define a state space. Again, when this was first discussed, it was often thought about as some immutable thing, but whenever you're in a real application, this is exactly what you have to define: how to write down the representation of the world. What we're going to assume in this class is that the state is a function of the history. So, there might be other sensory information that the agent would like to have access to in order to make its decision, but it's going to be constrained to the observations it's received so far, the actions it's taken, and the rewards it's observed.
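As a minimal sketch, not from the lecture, of "the state is a function of the history": the history is the sequence of (action, observation, reward) tuples, and the state is some function of it, for example the whole history or just the most recent observation. The names and values below are illustrative.

history = []

def update_history(action, observation, reward):
    history.append((action, observation, reward))

def state_full_history():
    return tuple(history)                 # keeps everything the agent has ever seen

def state_last_observation():
    return history[-1][1] if history else None   # forgets all but the present

update_history("try_right", "wall_on_left", 0.0)
update_history("try_right", "wall_on_left", 0.0)
print(state_full_history())               # two distinct steps are distinguishable
print(state_last_observation())           # identical-looking places get aliased together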
Now, there's also gonna be some real-world state. That's the real world. The agent doesn't necessarily have access to the real world; it may have access only to a small subset of it. For example, as a human, right now I have eyes that allow me to look forward, roughly 180 degrees, but I can't see behind my head. Behind my head is still part of the world state, though. So, the world state is the real world, and then the agent has its own state space it uses to try to make decisions, and in general we're gonna assume that state is some function of the history. Now, one assumption that we're gonna use a lot in this class, which you guys have probably seen before, is the Markov assumption. The Markov assumption simply says that we're going to assume that the state used by the agent is a sufficient statistic of the history, and that in order to predict the future, you only need to know the current state of the environment. So, it basically says that the future is independent of the past given the present, if in the present you have the right aggregate statistic. As a couple of examples of this, yeah, question? Would you just explain, maybe with an example, the difference again between the state and the history? I'm having trouble differentiating them. Yeah. So, let's think about something like a robot. Let's say you have a robot that is walking down a long corridor; in fact, let's say there are two long corridors. Your robot starts here, it tries to go right, right, and then it goes down, down, down. Let's say its sensors only allow it to observe whether there is a wall on any of its sides. So, the observation space of the robot is simply: is there a wall on each of its four sides? I'm sorry, this is probably a little bit small for the back. The agent basically has some sort of local sensing, via a laser range finder or something like that, so it knows whether or not there's a wall immediately around the square it's in, and nothing else. In this case, what the agent would see is that initially the walls look like this, and then like this, and then like this, and then like this. The history would include all of this, but its local state is just this. So, the local state could just be the current observation. That starts to matter when you're going down here, because there are many places that look like that. If you keep track of the whole history, the agent can figure out where it is, but if it only keeps track of where it is locally, then a lot of aliasing can occur. So, I put up a couple of examples here. In something like hypertension control, you can imagine the state is just the current blood pressure, and your action is whether to take medication or not. Current blood pressure meaning, for example, what your blood pressure is every second. So, do you think this sort of system is Markov? I see some people shaking their heads. Almost definitely not. Almost definitely there are other features that matter: maybe whether or not you're exercising, whether or not you just ate a meal, whether it's hot outside, whether you just got off an airplane. All these other features probably affect whether your next blood pressure is going to be high or low, particularly in response to some medication.
Um, similarly, in something like website shopping, you can imagine the state is just what product you're looking at right now. So, I open up Amazon, I'm looking at some computer, and that's up on my webpage right now, and the action is what other products to recommend. Do you think that system is Markov? Do you mean the system generally, or whether the Markov assumption fits? The question is whether I mean that the system itself is Markov or that the assumption just doesn't fit; let me give some more detail. What I mean here is: for this particular choice of representing the system, is that representation Markov? So, there's the real world going on, and then there's the model of the world that the agent can use. What I'm arguing here is that these particular models of the world are not Markov. There might be other models of the world that are. But if we choose this particular observation, say just the current blood pressure, as our state, that is probably not really a Markov state. Now, it doesn't mean that we can't use algorithms that treat it as if it is; it's just that we should be aware that we might be violating some of those assumptions. Yeah? I'm wondering, if you include enough history in the state, can you make the problem Markov? Okay, it's a great question: can you always make something Markov? Generally, yes. If you include all of the history, then you can always make the system Markov. In practice, often you can get away with just using the most recent observation, or maybe the last four observations, as a reasonably sufficient statistic. It depends a lot on the domain. There are certainly domains, maybe like the navigation world I put up there, where it's really important to either use the whole history as the state or think about the partial observability, and there are other cases where maybe the most recent observation is completely sufficient. Now, one of the challenges here is that you might not want to use the whole history, because that's a lot of information [LAUGHTER] and you have to keep track of it over time. So, it's much nicer to have a sufficient statistic. Of course, some of these things are changing a little bit with LSTMs and other things like that, so some of our prior assumptions about how things scale with the size of the state space are changing a little bit right now with deep learning. But historically, certainly, there have been advantages to having a smaller state space, and historically there have been a lot of implications for things like computational complexity, the data required, and the resulting performance, depending on the size of the state space. Just to give some intuition for why that might be: if you made your state everything that's ever happened to you in your life, that would give you a really, really rich representation, but you'd only have one data point for every state. There would be no repeating, so it's really hard to learn, because all states are different. And in general, if we wanna learn how to do something, we're gonna need either some form of generalization or some form of clustering or aggregation, so that we can compare experiences and learn from prior similar experience in order to know what to do.
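Here is a small sketch, not from the lecture, that makes that last point quantitative: with the full history as the state, every state is unique and is seen exactly once, so there is nothing to average over, while a coarser state (here just the last observation, or a window of the last few) aggregates experience. The observation stream is random and purely illustrative.

import random
from collections import Counter, deque

random.seed(0)
observations = [random.choice(["wall_left", "wall_right", "open"]) for _ in range(1000)]

full_history_states = Counter()
last_obs_states = Counter()
last_k_states = Counter()
history, window = [], deque(maxlen=4)     # 4 is an illustrative window size

for obs in observations:
    history.append(obs)
    window.append(obs)
    full_history_states[tuple(history)] += 1   # state = everything seen so far
    last_obs_states[obs] += 1                  # state = most recent observation
    last_k_states[tuple(window)] += 1          # state = last 4 observations

print(len(full_history_states))   # 1000 distinct states, each visited exactly once
print(last_obs_states)            # 3 states, each with hundreds of visits to learn from
print(len(last_k_states))         # a middle ground: more states, but still repeated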
So, if we assume that your observation is your state, that is, the most recent observation the agent gets is treated as the state, then the agent is modelling the world as a Markov decision process. It is taking an action, getting an observation and reward, and it's setting the state it uses, the environment state, to be the observation. If it is treating the world as partially observable, then the agent state is not the same, and it uses things like the history, or beliefs about the world state, to aggregate the sequence of previous actions taken and observations received, and uses that to make its decisions. For example, in something like poker, you get to see your own cards. Other people have cards that are clearly affecting the course of the game, but you don't actually know what those are. You can see which cards are discarded. So that's somewhere where it's naturally partially observable, and you can maintain a belief state over what the other players' cards are, and use that information to make your decisions. Similarly, often in health care there's a whole bunch of really complicated physiological processes going on, but you can monitor parts of them, things like blood pressure or temperature et cetera, and then use that to make decisions. So, in terms of types of sequential decision-making processes, one of them is bandits. We'll talk more about this later in the term. Bandits are sort of a really simple version of a Markov decision process, in the sense that the idea is that the actions that are taken have no influence over the next observation. So, when might this be reasonable? Let's imagine that you have a series of customers coming to your website and you show each of them an ad; then they either click on it or not, and then another customer logs into your website. In this case the ad that you show to customer one generally doesn't affect which customer two comes along. Now, it could, maybe in really complicated ways: maybe customer one goes to Facebook and says, "I really, really loved this ad, you should go watch it." But most of the time, whatever ad you showed to customer one does not at all affect who next logs into your website. So the decisions you make only affect the first customer, and then customer two is totally independent. Bandits have been really, really important for at least 50 years. People thought about them for things like clinical trials, how to allocate people to clinical trials; people have thought of them for websites and a whole bunch of other applications. MDPs and POMDPs say, no, wait, the actions that you take can affect the state of the world: they often affect the next observation you get, as well as the reward, and you have to think about this closed-loop system in which the actions that you're taking change the state of the world. So, the product that I recommend to my customer might affect what the customer's opinion is on the next time step. In fact, you hope it will. And so in these cases we think about the actions actually affecting the state of the world. So, another important question is how the world changes. One idea is that it changes deterministically: when you take an action in a particular state, you go to a different state, but the state you go to is deterministic.
There's only one. And this is often a pretty common assumption in a lot of robotics and controls. I remember Tomás Lozano-Pérez, who's a professor over at MIT, once suggesting to me that if you flip a coin, it's actually a deterministic process; we're just modeling it as stochastic because we don't have good enough models. So, there are many processes that, if you could write down a sufficiently perfect model of the world, would actually look deterministic. But in many cases it may be hard to write down those models, and so we're going to approximate them as stochastic. The idea is that then, when we take an action, there are many possible outcomes. So, you could show an ad to someone and they may or may not click on it, and we may just want to represent that with a stochastic model. So, let's think about a particular example: a Mars rover. When we deploy rovers or robots on really far-off planets, it's hard to do communication back and forth, so it'd be nice to be able to make these sorts of robots more autonomous. Let's imagine that we have a very simple Mars rover in a seven-state system. It's just landed, it's got a particular location, and it can either try to go left or try to go right. I write down "try left" or "try right" meaning that that's what it's going to try to do, but maybe it'll succeed or fail. Let's imagine that there are different sorts of scientific information to be discovered: over in S1 there's a little bit of useful scientific information, but over at S7 there's an incredibly rich place where there might be water, and there's zero reward in all other states. So, we'll go through that as a little example as I start to talk about different common components of an RL agent. One common component is a model. A model is simply a representation the agent has of what happens in the world as it takes its actions and what rewards it might get. In the case of a Markov decision process, it's a model that says: if I start in this state and I take this action a, what is the distribution over next states I might reach? And it is also going to have a reward model that predicts the expected reward of taking an action in a certain state. So, in this case, let's imagine that the agent thinks there's zero reward everywhere, and let's imagine that it thinks its motor control is very bad, so it estimates that whenever it tries to move, with 50% probability it stays in the same place and with 50% probability it actually moves. Now, the model can be wrong. If you remember what I put up here, the actual reward is that in state S1 you get plus one, in state S7 you get 10, and everywhere else you get zero, and the reward model I just wrote down says it's zero everywhere. So, this is a totally reasonable reward model the agent could have; it just happens to be wrong. And in many cases the model will be wrong, but it often can still be used by the agent in useful ways.
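To make that concrete, here is a minimal sketch, under the assumptions just stated, of the agent's (possibly wrong) Mars rover model: seven states, two actions, a transition model that believes each move succeeds with probability 0.5, and a reward model that believes all rewards are zero. Function and variable names are illustrative.

states = list(range(1, 8))                       # s1 ... s7
actions = ["try_left", "try_right"]

def agent_transition_model(s, a):
    """P(s' | s, a) under the agent's belief: 50% stay put, 50% move."""
    target = max(1, s - 1) if a == "try_left" else min(7, s + 1)
    if target == s:                              # already at an end of the corridor
        return {s: 1.0}
    return {s: 0.5, target: 0.5}

def agent_reward_model(s, a):
    return 0.0                                   # the agent believes rewards are zero everywhere

def true_reward(s):
    return {1: 1.0, 7: 10.0}.get(s, 0.0)         # the actual rewards from the example

print(agent_transition_model(4, "try_right"))    # {4: 0.5, 5: 0.5}
print(agent_reward_model(4, "try_right"), true_reward(7))   # 0.0 versus the true 10.0

The mismatch between agent_reward_model and true_reward is exactly the sense in which the agent's model "just happens to be wrong" while still being usable.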
So, the next important component that is always needed by an RL agent is a policy. A policy, or decision policy, is simply how we make decisions. Because we're thinking about Markov decision processes here, we're going to think about policies as mappings from states to actions. A deterministic policy simply means there's one action per state, and a stochastic policy means you can have a distribution over actions you might take. So, maybe every time you drive to the airport, you flip a coin to decide whether you're going to take the back roads or whether you're going to take the highway. As a quick check: imagine that in every single state we take the action "try right". Is this a deterministic policy or a stochastic policy? Deterministic, great. We'll talk more shortly about when deterministic policies are useful and when stochastic policies are useful. Now, the value function is the expected discounted sum of future rewards under a particular policy. So, it's a weighting: it's saying how much reward I think I'm going to get, both now and in the future, weighted by how much I care about immediate versus long-term rewards. The discount factor gamma is going to be between zero and one. And so the value function allows us to say how good or bad different states are. Again, in the case of the Mars rover, let's imagine that our discount factor is zero and our policy is to try to go right, and say this is our value function: it says that the value of being in state one is plus one, the value of being in S7 is 10, and everything else is zero. Again, this might or might not be the correct value function; it depends also on the true dynamics model, but this is a value function that the agent could have for this policy. It simply tells us the expected discounted sum of rewards you'd get if you follow this policy starting in this state, where you weight each reward by gamma to the number of time steps at which you reach it. So, when we think about, yeah? If we wanted to extend the discount factor in this example, would there be an increasing or decreasing value to a reward depending on how far away it is? Yes. The question was what happens if gamma is not 0 here. Gamma being 0 here indicates that essentially we just care about immediate rewards; if I understood correctly, you're asking whether you start to see rewards flow into other states, and the answer is yes. We'll see more of that next time, but if the discount factor is non-zero, then it basically says you care about not just the immediate reward you get, you're not just myopic, you care about the reward you're gonna get in the future.
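As a quick worked illustration, not from the lecture, here is the discounted-return computation behind the value function definition: the reward at step t is weighted by gamma to the power t, so with gamma equal to zero only the immediate reward counts, matching the example above. The trajectory of rewards is made up.

def discounted_return(rewards, gamma):
    # sum_t gamma**t * r_t along one trajectory
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

rewards_along_trajectory = [0.0, 0.0, 0.0, 10.0]   # e.g. reaching S7 three steps from now
print(discounted_return(rewards_along_trajectory, gamma=0.0))   # 0.0: only the present matters
print(discounted_return(rewards_along_trajectory, gamma=0.9))   # 7.29: future reward still counts

The value function is the expectation of this quantity over trajectories generated by following the policy from a given starting state.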
So, in terms of common types of reinforcement learning agents, some of them are model-based, which means they maintain in their representation a direct model of how the world works, like a transition model and a reward model, and they may or may not have an explicit policy or a value function. They always have to compute a policy, they have to figure out what to do, but they may or may not have an explicit representation of what they would do in any state. Model-free approaches have an explicit value function and a policy function and no model. Yeah? Going back to the earlier slide, I'm confused about where the value function is evaluated: why is it not S6 that has a value of 10, since if you try right at S6 you get to S7? You're asking when we think of the reward as happening. We'll talk more about that next time. There are many different ways people think of when the reward happens. Some people think of it as the reward for the current state you're in; some people think of it as the reward for the state you're in and the action you take; and another common definition is r(s, a, s'), meaning that you don't see what reward you get until you transition. In the particular definition I'm using here, we're assuming the reward happens when you are in that state. All of them are basically isomorphic, but we'll try to be careful about which one we're using. The most common one we'll use in the class is r(s, a), which says that when you're in a state and you choose a particular action, then you get a reward, and then you transition to your next state. Great question. Okay. So, when we think about reinforcement learning agents, and whether or not they're maintaining these models and these values and these policies, we get a lot of intersection. I really like this figure from David Silver, where he thinks about RL algorithms or agents as mostly falling into three different classes: they either have a model, an explicit policy, or an explicit value function, and then there's a whole bunch of algorithms that are in the intersection of these. And what do I mean by explicit? I mean that if you give the agent a state, it could tell you what the value is, or it could tell you immediately what the policy is, without additional computation. So, things like actor-critic combine value functions and policies. There are a lot of algorithms that are in the intersection of all of these, and often in practice it's just very helpful to maintain many of these representations; they have different strengths and weaknesses. For those of you that are interested in the theoretical aspects of learning theory, there's some really cool recent work that explicitly looks at the formal, foundational differences between model-based and model-free RL that just came out of MSR, Microsoft Research in New York, which indicates that there may be a fundamental gap between model-based and model-free methods, which on the deep learning side has been very unclear. So, feel free to come ask me about that. So, what are the challenges in learning to make good decisions in this sort of framework? One is this issue of planning that we talked about a little bit before, which is that even once I've got a model of how the world works, I have to use it to figure out what decisions I should make, in a way that I think is going to allow me to achieve high reward. And in this case, if you're given a model, you could do this planning without any interaction with the real world. So, if someone says, here's your transition model and here's your reward model, you can go off and do a bunch of computations on your computer or on paper, decide what the optimal action is, and then go back to the real world and take that action. It doesn't require any additional experience to compute that. But in reinforcement learning, we have this other additional issue, which is that we might want to think about not just what I think is the best thing for me to do given the information I have so far, but how I should act so that I can get the information I need in order to make good decisions in the future. So, it's like, you know, you go to a brand new restaurant. Let's say you move to a new town and there's only one restaurant; you go there the first day, and they have five different dishes.
You're gonna be there for a long time, and you wanna eventually be ordering the best dish. So maybe the first day you try dish one, and the second day you try dish two, and then the third day dish three, and so on, so that you can try everything, and then use that to figure out which one is best, so that over the long term you pick something that is really delicious. So, in this case the agent has to think explicitly about what decisions it should take so it can get the information it needs so that in the future it can make good decisions. On the planning side, and the fact that this is already a hard problem: think about something like solitaire. You could already know the rules of the game (this is also true for things like Go or chess or many other scenarios), and you could know, if you take an action, what the probability distribution over the next state would be, and you can use this to compute a potential score. And so using things like tree search or dynamic programming, and we'll talk a lot more about these, particularly the dynamic programming aspect, you can use that to decide, given a model of the world, what the right decision is to make. But reinforcement learning itself is a little bit more like solitaire without a rule book. You're just playing, observing what happens, and trying to get larger reward. And you might use your experience to explicitly compute a model and then plan in that model, or you might not, and instead directly compute a policy or a value function. Now, I just wanna re-emphasize here this issue of exploration and exploitation. In the case of the Mars rover, it's only going to learn about how the world works for the actions it tries. So, in state S2, if it tries to go left, it can see what happens there, and then from there it can decide the right next action. Now, this is obvious, but it can lead to a dilemma, because the agent has to balance between things that seem like they might be good based on its prior experience and things that might be good in the future, where perhaps it just got unlucky before. So, in exploration we're interested in trying things that we've never tried before, or things that so far might have looked bad but that we think might be good in the future, whereas in exploitation we're trying things that are expected to be good given the past experience. Here are three examples of this. In the case of movies, exploitation is watching your favorite movie, and exploration is watching a new movie that might be good or might be awful. In advertising, exploitation is showing the ad that has yielded the highest click-through rate so far, and exploration is showing a different ad. And in driving, exploitation is taking the fastest route given your prior experience, and exploration is driving a different route.
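Here is a minimal sketch, not from the lecture, of the restaurant strategy described above (sometimes called "explore then commit"): try each dish once, then keep ordering whichever one looked best. The tastiness values and the noise are made up, and of course a single noisy sample per dish can point you at the wrong one.

import random

true_tastiness = [0.3, 0.9, 0.5, 0.2, 0.6]      # unknown to the diner
def eat(dish):
    return true_tastiness[dish] + random.gauss(0, 0.1)   # noisy enjoyment of one meal

estimates = [eat(d) for d in range(5)]           # explore: one night per dish
best = max(range(5), key=lambda d: estimates[d])
total = sum(estimates)
for night in range(5, 30):                       # exploit for the rest of the stay
    total += eat(best)
print("committed to dish", best, "total enjoyment", round(total, 2))

The finite-horizon question that comes next is exactly about how much of this up-front exploration is worth doing when the number of remaining meals is small.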
[inaudible]. Great question, which is: for the restaurant example that I gave, imagine that you're only going to be in town for five days. Would the policy that you would compute in that case, in a finite-horizon setting, be the same as or different from one where you know you're going to live there for all of infinite time? We'll talk a little bit more about this next time, but they're very different. In particular, normally the policy if you only have a finite horizon is non-stationary, which means that the decision you make depends on the time step as well as the state. In the infinite-horizon case, the assumption is that the optimal policy in the Markov setting is stationary, which means that if you're in the same state, whether you're there on time step three or time step 3,000, you will always do the same thing. But in the finite-horizon case that's not true, and here's a critical example of that. Why do we explore? We explore in order to learn information that we can use in the future. So, if you're in a finite-horizon setting and it's your last day in the town and you're trying to decide what to do, you're not going to explore, because there's no benefit from exploration for the future, because you're not making any more decisions; in that case you will always exploit, it's always optimal to exploit. So, in the finite-horizon case, the decisions you make have to depend on the value of the information you gain to change your decisions, and on the remaining horizon. And this often comes up in real cases. Yeah? How much more complicated is it if there's a finite horizon but you don't know what it is? It's just something I remember from game theory, that this tends to be very complicated. The question is about what I would call indefinite-horizon problems, where there is a finite horizon but you don't know what it is; that can get very tricky. One way to model it is as an infinite-horizon problem with termination states. So, there are some states which are essentially sink states: once you get there, the process ends. This often happens in games; you don't know when the game will end, but it's going to be finite. That's one way to put it into the formalism, but it is tricky. In those cases we tend to model it as infinite horizon and look at the probability of reaching different termination states. Can you mix exploitation and exploration, essentially as subproblems? Particularly for driving, it seems like it would be better to exploit the routes you know are really good and maybe explore on some parts you don't know as well, rather than trying a completely brand new route. The question is about how this mix of exploration and exploitation happens, and whether, maybe in the case of cars, you would not try things totally randomly; you might need some evidence that they might be good. It's a great question. Generally, it is better to intermix exploration and exploitation. In some cases it is optimal, or at least equivalent, to do all of your exploration early and then exploit all of that information later, but it depends on the decision process. We'll spend a significant chunk of the course after the midterm thinking about exploration and exploitation; it's definitely a really critical part of reinforcement learning, particularly in high-stakes domains. What do I mean by high-stakes domains? I mean domains that affect people, whether it's customers or patients or students. That's where the decisions we make actually affect real people, and so we want to try to learn as quickly as possible and make good decisions as quickly as we can. Any other questions about this? If you're in a state that you haven't seen before, do you have any better option than just taking a random action to get out of there? Or can you use your previous experience even though you've never been there before? That's a great question.
The question is: if you're in a new state you've never been in before, what do you do? Can you do anything better than random, or can you somehow use your prior experience? One of the really great things about doing generalization is that we're going to use state features, either learned by deep learning or from some other representation, to try to share information, so that even though the state might not be one you've ever exactly visited before, you can share prior information to try to inform what might be a good action to take. Of course, if you share in the wrong direction, you can make the wrong decision. So, if you overgeneralize, you could overfit to your prior experience when in fact there's a better action to take in the new scenario. Any questions on this? Okay. So, one of the things we're going to be talking about over the next few lectures is these two really fundamental problems, which are evaluation and control. Evaluation is the problem of saying: if someone gives you a policy, if they're like, hey, this is what you should do, or this is what your agent should do, this is how your robot should act in the world, evaluate how good it is. So, we want to be able to figure out, your manager says, "Oh, I think this is the right way we should show ads to customers," can you tell me how good it is? So one really important question is evaluation, and you might not have a model of the world, so you might have to go out and gather data to try to evaluate this policy. It would be useful to know how good it is; you're not trying to make a new policy yet, you're just trying to see how good this current one is. And then the control problem is optimization. It's saying: let's try to find a really good policy. This typically involves evaluation as a sub-component, because often we're going to need to know what "best" means. Best means a really good policy; how do we know how good a policy is? We need to do evaluation. Now, one of the really cool aspects of reinforcement learning is that often we can do this evaluation off-policy, which means we can use data gathered from other policies to evaluate the counterfactual of what different policies might do. This is really helpful because it means we don't have to try out all policies exhaustively. So, in terms of what these questions look like, if we go back to our Mars rover example: for policy evaluation, it would be that someone says, your policy is that in all of your states the action you should take is "try right", and this is the discount factor I care about; please compute for me, or evaluate for me, the value of this policy. In the control case, they would say: I don't know what the policy should be, I just want you to give me whatever policy has the highest expected discounted sum of rewards. And there's actually sort of a key question here, which is: okay, the expected discounted sum of rewards from what? They might care about a particular starting state; they might say, I want you to figure out the best policy assuming I'm starting from S4. They might say, I want you to compute the best policy from all starting states, or some average over them.
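As a concrete sketch, not from the lecture, here is a simple Monte Carlo way of answering the policy-evaluation question just posed: given the fixed "always try right" policy, estimate its value by averaging discounted returns over simulated rollouts. The dynamics (moves succeed with probability 0.5) and the rewards mirror the Mars rover example; the horizon, episode count, and gamma are illustrative choices.

import random

def true_reward(s):
    return {1: 1.0, 7: 10.0}.get(s, 0.0)

def step(s, a):
    target = max(1, s - 1) if a == "try_left" else min(7, s + 1)
    return target if random.random() < 0.5 else s      # move succeeds half the time

def evaluate(policy, start, gamma=0.9, horizon=20, episodes=2000):
    total = 0.0
    for _ in range(episodes):
        s, ret, discount = start, 0.0, 1.0
        for _ in range(horizon):
            ret += discount * true_reward(s)            # reward received in the current state
            s = step(s, policy(s))
            discount *= gamma
        total += ret
    return total / episodes                             # Monte Carlo estimate of V^pi(start)

always_right = lambda s: "try_right"
print(round(evaluate(always_right, start=4), 2))

Control would then be the harder problem of searching over policies for the one whose evaluated value is highest.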
So, in terms of the rest of the course, yeah? I was just wondering if it's possible to learn the optimal policy and the reward function simultaneously? For example, if I have some belief about what the reward will be for some action in a state and that turns out to be wrong, do we have to start over and train to find the optimal policy, or could I use what I've learned so far? Great question, which is: let's say I have a policy to start with and I'm evaluating it, I don't know what the reward function is, I don't know what the optimal policy is, and it turns out this policy isn't very good. Do I need to restart, or can I use that prior experience to inform the next policy I try, or perhaps a whole suite of different policies? In general, you can use the prior experience to inform the next policy, or the next suite of policies, that you try. There's a little bit of a caveat there, which is that you need to have some stochasticity in the actions you take. If you only ever take the same one action in a state, you can't really learn about any other actions you might take. So, you need some form of generalization or some stochasticity in your policy in order for that information to be useful for evaluating other policies. This is a really important issue: it's the issue of counterfactual reasoning, and how we use our old data to figure out how we should act in the future if the old policies may not be the optimal ones. In general we can, and we'll talk a lot about that; it's a really important issue. So, we're first going to start off talking about Markov decision processes and planning, and about how we do this evaluation both when we know how the world works, meaning that we are given a transition model and reward model, and when we're not; then we're also going to talk about model-free policy evaluation and then model-free control. We're going to then spend some time on deep reinforcement learning, and reinforcement learning with function approximation in general, which is a hugely growing area right now. I thought about making a plot of how many papers are coming out in this area right now; it's pretty incredible. And then we're going to talk a lot about policy search, which I think in practice, particularly in robotics, is one of the most influential methods right now, and we're going to spend quite a lot of time on exploration, as well as have a few advanced topics. So, just to summarize, what we've done today is talk a little bit about reinforcement learning and how it differs from other aspects of AI and machine learning; we went through course logistics and started to talk about sequential decision-making under uncertainty. Just as a quick note for next time, we will try to post the lecture slides two days in advance, or by the end of the evening two days in advance, so that you can print them out if you want to bring them to class. And I'll see you guys on Wednesday. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_12_Fast_Reinforcement_Learning_II.txt | All right. We're gonna go ahead and get started, and consistent with the theme of this section, we're gonna be optimistic under uncertainty and hope that this will work, but we will see. Before we get into the content, I wanted to ask if anybody has any questions about logistics or any other aspects of the course in general. Yeah. I just wanna double check that people who are doing the default project are okay either not submitting anything, or just submitting something else, for the milestone instead of doing the default. Yes. Yeah, so, to repeat, the question was asking whether or not people who are doing the default project need to do anything in particular for the milestone. No, you don't. Any other questions? Okay. So, just as a reminder, where we are in the class right now: last time we started talking about bandits and regret, and we'll do a brief recap of that today, and we're gonna continue to talk about fast learning. We're gonna go from Bayesian bandits towards Markov decision processes today, and then on Wednesday we're gonna talk some more about fast learning, and then about fast learning and exploration together. And just to remind us all about why we're doing this: the idea was that if we wanna move reinforcement learning into real-world applications, we need to think carefully about the data that we have, how we gather it, and how we best use it, so that we don't need to collect a lot of data in order for our agents to learn to make good decisions. One of my original interests in this whole topic was to think formally about what it means for an agent to learn to make good decisions, and what the information-theoretic limits are on how much information an agent would need in order to be able to make a provably optimal decision. So, we've been thinking about a couple of different main things here. We're talking about a couple of settings: last time we talked about bandits, and today we'll also talk about bandits and Markov decision processes. We're talking about frameworks, which are ways for us to formally assess how good an algorithm is; these could be the framework of empirical success, or the mathematical framework of regret we talked about last time, and we'll talk today about some other frameworks for evaluating, in general, how good a reinforcement learning algorithm is. And then we're also talking about styles of approaches that tend to allow us to do well under these different frameworks. Last time, what we started to do is talk about optimism under uncertainty. So, just a quick recap on bandits. A bandit is basically a simplified version of a Markov decision process where, in the most simple setting, there's no state and there's a set of actions, and now we're going to think specifically about the case where the reward comes from some stochastic distribution: there's some unknown probability distribution over rewards. At each timestep you get to select an action and see a reward, and then your goal is to select actions in a way that is gonna optimize your rewards over time.
And the reason this was different than a supervised learning problem is that you only get to observe the reward for the action that you sample. So, it's what's known as censored data. Um, you don't get to see what would have happened if you'd went to Harvard. So, um, we get to see censored data and we have to use that censored data to make good decisions. Um, and what we discussed here talking about regret, what we mean that in a, in a formal mathematical sense is that we were comparing, um, the expected reward from the action we took, um, to the expected reward of the optimal action. Now, notice that all of these things are stochastic. Um, so, it doesn't have to have a particular parametric distribution but imagine that we're thinking about, um, Gaussians. Let's say, we had two Gaussians. So, this is action 2 and this is action 1. Okay. So, here's the mean of action 1. So, Q of a1 is greater than Q of a2. So, action 1 has the better expected reward. But notice that on any particular trial, you might sometimes get a, um, uh, you could, ima- imagine getting a result where, um, the actual reward you get from a sub-optimal arm is better than the expected reward of the optimal arm. I just wanna highlight that. So, imagine that you've sampled from action A2 and you got here. It's a particular sample. So, you could have a particular sample be better than the expected reward of the optimal arm. But because we're imagining doing this many, many times we're again just looking at expectations. So, we're saying, "On average, which is the best arm, and on average how much do we lose from selecting a sub-optimal arm?" And the goal was to minimize our total regret which is equivalent to maximizing our cumulative reward over time. So, we then introduced this idea of optimism under, of, under uncertainty. Um, and the idea was to, um, uh, uh, estimate an upper confidence bound on the potential expected reward of each of the arms. So, this was to say we wanna be able to say for each of the arms, what do we think is an upper bound of their expected value, and then when we act, we're gonna pick whichever arm has the highest upper confidence bound. And this was gonna lead to one of two outcomes. So, this could, um, two things could happen. So, either, either at is equal to A star, and in that case, what's our regret? 0. So, if we select the optimal action and we have our regret at 0. So, that's good. And we have our regret at 0 like at per timestep or it's not. And if it's not on average, what happens to that upper confidence bound? Yeah. Yes. Yeah, [NOISE] that answer is correct. So, we lower it. So, if we get, if we select an arm which is, um, not optimal, then it means its real mean is lower, um, uh, than the upper confidence bound we're averaging, um, at least with high probability. And so, then in general our UT of AT will decrease. So, we're gonna gain information about what the real mean is of that arm and we'll reduce it. And if we reduce it enough, over time we should find that the optimal arm's upper confidence bound is higher. So, I'll ask you to play about what might happen if we do lower bounds. Um, but that's one of the reasons why upper bounds is really good, and, and I mentioned that these ideas have really been around for a long time, um, at least around like 20, almos- 25-30 years. So, I think the first one was 1993, Kaelbling, by Leslie Kaelbling. She's at MIT. Um, don't remember if it was her PhD thesis or if it was just after that. 
Um, uh, but she talked about this idea of interval estimation, of estimating, um, the potential rewards. She didn't do formal proofs of this being a good idea, but she did it for Markov decision processes, and they found that it was, uh, a very good idea in terms of empirical performance, and then a lot of people went around and did the theoretical analysis and showed that provably this is a good thing. And I think it's interesting often which area ends up being more advanced, whether it's the empirical side or the theoretical side. Okay. So, optimism under uncertainty in the bandit case just involves keeping track of the rewards we've seen, and we saw that we could use things like the Hoeffding inequality to compute these upper confidence bounds. Because remember, each of our samples from an arm is iid; they're all coming from the same underlying distribution, which is unknown, and we're given these samples. So, what we found last time is that if we use the upper confidence bound algorithm, and I did a proof on the board to show this, then with high probability we had logarithmic regret. Why was this important? Because we looked at greedy algorithms and showed that they could have linear regret. And what is it linear in? So, um, this is the number of timesteps, the number of timesteps we act. So why is linear regret bad? Well, if it's linear, it means that essentially you could be making the worst decision on every single time point, and that's pretty bad. So we'd like to have things that grow slower, which means that our algorithm is essentially learning to make good decisions. Now, notice that in this case, this is a little bit different, um, statement of our result than what we saw before. Um, it's related, but this involves the gaps. This is the gap: delta of a is equal to Q of a star minus Q of a. It's how much worse it is to take a particular action than to take the optimal action. Um, and this is a bit different from the bound from last time. Last time, we proved a bound that was independent of the gaps, so it didn't matter what the gaps are in the problem. Um, this is the bound that depends on the gaps. Now, of course, you don't know what the gaps are in practice. If you did know the gaps, then you would already know which arm is optimal, um, but the nice thing about this is that it's always important to know whether or not this sort of knowledge appears in the analysis or in the algorithm. This is saying, you don't need to know what the gaps are, but if you use this algorithm, [NOISE] how your regret grows depends on a property of the domain. You don't have to know that property of the domain, but that's what your regret will depend on. And so this is saying that for upper confidence bounds, if you have different sizes of gaps, you're gonna get different regrets. Um, your algorithm is just going to proceed by following upper confidence bounds, it doesn't need to know about these gaps, but you're gonna get better or worse performance depending on what the gaps are. And in general, we'd really like these. We'd like our algorithms to be able to adapt to the problem. This is known as a problem-dependent bound, right here, and you would like those. You'd like to have algorithms that are agnostic to the gaps but that provide problem-dependent bounds, so that they work better if the problem is easier. All right.
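To pin down the quantities in this recap, one standard way to write the gap, the total regret, and a problem-dependent bound of the kind being described is the UCB1-style statement below; the constants follow the usual statement of Auer et al. (2002), so treat them as illustrative rather than the exact ones from lecture.

```latex
\Delta_a = Q(a^{\ast}) - Q(a),
\qquad
L_t = \mathbb{E}\!\left[\sum_{\tau=1}^{t} \big(Q(a^{\ast}) - Q(a_\tau)\big)\right]
    = \sum_{a} \mathbb{E}\big[N_t(a)\big]\,\Delta_a

L_t \;\le\; \sum_{a\,:\,\Delta_a > 0} \frac{8 \ln t}{\Delta_a}
       \;+\; \Big(1 + \frac{\pi^2}{3}\Big)\sum_{a} \Delta_a
```

The gaps show up on the right-hand side, but the algorithm itself never needs to know them, which is exactly the point being made above.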
So let's go to our toy example, which we were talking about as a way to sort of see how these different algorithms work, and we're looking at fake, um, ah, ways to treat broken toes, um, where we are looking at surgery, buddy taping, or doing nothing, and we're imagining that we had a, a Bernoulli variable that determined whether or not these treatments worked. So surgery on in expectation was the most expect- uh, most effective thing, with 0.95 success, ah, buddy taping was 0.9, and doing nothing was 0.1. So how would something like upper confidence bound algorithms work? Well, in the beginning we don't have any data, so let's sample all the arms once. That's going to involve sampling from a Bernoulli distribution. Um, so in this case, let's imagine that we've got, um, a1, a2, and a3 here. And note that this is, [NOISE], yeah, all right, this is our, their empirical estimate where we just average, over all the, all the rewards we've seen from a particular arm. Okay? So we pulled each arm once, which is the same as taking each action once, we got 1-1-0. And now what we have to do is compute the upper confidence bounds for each of those arms before we know what to do next. Okay. So in this case, we're gonna define our upper confidence bounds by being the empirical average, plus the square root of 2 log t, t is the number of times we pulled arms, and N t of a is the number of times we've sampled a particular arm. So this is total arm pulls. This is a particular arm. Okay. So what does that gonna be in this case? Let's just define it for each of them. So UCB of a1 is gonna be equal to 1, because that's what we've got so far, plus square root 2 log of 3, because we pulled three arms so far, divided by 1. UCB of a2 is gonna be the same, because we've also pulled that arm once, and it got the same outcome. And then UCB of a3 is gonna be different, because its current reward is- or current expected value is 0. So it's just gonna be equal to 2 log 3 divided by 1. That's how we could instantiate each of the bounds, and now we've defined the upper confidence bounds for each of the, ah, um, each of the arms. So in this case, after we've done that, we're going to pull one of these arms. Um, let's say that we break ties randomly, so the upper confidence bound of a1 and a2 is identical. So with 50% probability, we select one. With 50% probability we can select the other. Okay. Um, and let's just compare that for a second. So, um, if we're using UCB, I said that, so I'll just redefine this here so people can remember, is equal to UCB of a2. This is- UCB stands for upper confidence bound is equal to 1, plus square root 2 log 3 divided by 1. And UCB of a3 is equal to square root 2 log 3 divided by 1. Okay. So why don't we just take a second, um, and define what would be the probability of selecting each arm if you're using e-greedy with epsilon = 0.1. And what about if you're using UCB? As always, feel free to talk to anybody nearby. [NOISE] All right, so let's vote [NOISE]. I'm gonna ask you to vote if, um, if two arms have, um, non-zero probability or three arms have non-zero probability. So if you're using UCB [NOISE], do two arms have a non-zero probability? Do three arms have a non-zero probability? Somebody who know- who thought with UCB, you only have two arms with non-zero probability want to explain why? Yeah. Because since you're picking the maximum action you're only going to pick a1 or a2. That's right, yeah. 
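As a quick sanity check of the numbers just worked out on the board, here is a minimal Python sketch of this upper-confidence-bound computation for the three arms, assuming the sqrt(2 log t / N_t(a)) bonus; the variable names are made up for illustration.

```python
import math

# Empirical data after pulling each arm once: rewards 1, 1, 0
counts = {"a1": 1, "a2": 1, "a3": 1}            # N_t(a): pulls per arm
reward_sums = {"a1": 1.0, "a2": 1.0, "a3": 0.0}

def ucb(arm, t):
    """Empirical mean plus the sqrt(2 log t / N_t(a)) exploration bonus."""
    mean = reward_sums[arm] / counts[arm]
    bonus = math.sqrt(2 * math.log(t) / counts[arm])
    return mean + bonus

t = sum(counts.values())                         # total arm pulls so far (3)
ucb_values = {arm: ucb(arm, t) for arm in counts}
print(ucb_values)
# a1 and a2 tie at 1 + sqrt(2 ln 3); a3 is only sqrt(2 ln 3),
# so we break the a1/a2 tie randomly, just as in the lecture.
```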
So, um, if we're picking the maximum action here, um, we're only gonna pick, um, action a1 or a2; we have zero probability on the third arm. Okay. Let's do a quick vote for, um, e-greedy. For e-greedy, do we have non-zero probability on two arms? On three arms? That's right. What's the probability of selecting a3? [NOISE]. Yeah, [inaudible] [NOISE] Um, 0.1 [NOISE]. Anyone else wanna add? Yeah? It's 0.033. Yes. Or 0.0333, yes, exactly. So for the 0.1 in this case, just, um, remember we normally define that by uniformly splitting. Yeah. So we're gonna just have 0.1 divided by the number of arms. Okay. So here, um, why do I bring this up? I bring this up to indicate that while UCB is still, um, splitting its attention among all the arms that look good, it's not putting any weight right now on the arms that it doesn't think could be good, which in this case is arm 3. Whereas an e-greedy, um, approach is gonna put uniform probability across all the arms, even the ones it doesn't think are best. So this is one of the insights for why these algorithms might be better: they're being more strategic in how they're weighing, um, which arms to pull, um, compared to epsilon greedy, which is just doing uniform. Okay. So let's look at, um, sort of, uh, what the regret would be in this case. So, um, the actions we pulled are a1, a2, a3, a1, a2. Um, so why would we pull a2 again? Let's just go through that briefly. So let's say, um, first, that we're gonna pull a1. Let's imagine that's which one we picked. Okay, so we pulled a1. So let me just go through one more step of what the upper confidence bounds would be in this case. So let's say we pull a1, okay. So now, we need to redefine the upper confidence bounds, and we actually need to redefine them for all the arms, because the numerator of that upper confidence bound depends on t, which is the total number of pulls so far, if we're using that form. So now, we're gonna have that UCB. Let's say we pulled action a1, and let's say I got a 1. Okay? So UCB of a1 is gonna have the same mean, which is 1, plus square root 2 log 4, because we've now pulled things four times, but now we've pulled this arm twice, so we're gonna divide this by 2. UCB of a2 is gonna have the same mean as before because we didn't pull it. And then it's gonna also have the 2 log 4, but we've only pulled it once, and UCB of a3 still has a mean of 0, and it's also gonna have square root 2 log 4 divided by 1 [NOISE]. Okay. So in this case, what we're gonna find is that we sort of get this trading off: we still have the same empirical mean for a1 and a2. But now, we haven't pulled a2 as much as a1, so we're gonna flip, and we're gonna pick a2 now. Well actually, as a quick check of your understanding, um, ah, this result would happen whether we got a 1 or a 0 for a1 when we last pulled it. Um, so it's a good thing to check, is all I'm saying. Even if we got a 1 for a1, we'd still select a2 on the next round, because the upper confidence bound of a1 would drop even if its mean stayed the same. So if we look at this then, we're gonna compare it to what would be the optimal action if we took it the whole time, which is a1. So what is our regret? Our regret is gonna be 0, and then here it's gonna be 0.05, which is just equal to 0.95 - 0.9. Here it's gonna be 0.85, because it's 0.95 - 0.1. Here it's gonna be 0, and here it is going to be 0.05 again.
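And for the epsilon-greedy vote, here is a small sketch that computes the selection probabilities under the uniform-splitting convention just described, assuming the greedy mass is split evenly among arms tied for the best mean; again, the names are illustrative.

```python
def egreedy_probs(means, epsilon=0.1):
    """Probability of selecting each arm under epsilon-greedy,
    splitting the (1 - epsilon) greedy mass evenly over arms tied for the best mean."""
    n = len(means)
    best = max(means.values())
    greedy_arms = [a for a, m in means.items() if m == best]
    probs = {}
    for a in means:
        probs[a] = epsilon / n                     # uniform exploration mass
        if a in greedy_arms:
            probs[a] += (1 - epsilon) / len(greedy_arms)
    return probs

print(egreedy_probs({"a1": 1.0, "a2": 1.0, "a3": 0.0}))
# a3 gets 0.1 / 3, roughly 0.033; a1 and a2 each get about 0.483.
```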
So that's how we'd sum up the regret. Of course, we don't actually know this. We can know this if we're doing this in a simulated world, but we can't know this in reality because, again, if we actually knew it in reality, then we wouldn't have to be doing this. You would know the optimal arm. Are there any other questions about how UCB is working? Okay. Now, just one other quick note here, which is, you know, these upper confidence bounds are high probability bounds, so they can fail. It is possible that sometimes the upper confidence bound is lower than, um, the true mean. And that's why when we did the proof last time we had to talk about these different cases, like what happens if the upper confidence bounds hold, um, uh, and there we did sort of a high probability bound. Okay, so an alternative would be to always select the arm with the highest lower bound. So what we're doing right now is just selecting the arm with the highest upper bound, but you could select the arm with the best lower bound. Um, so what might that look like? Let's imagine that we have two arms, a1 and a2, and this is our estimate of the Q of a. Okay. So let's imagine that these are uncertainty bounds. So in this case a1 has a higher upper bound, but a2 has a higher lower bound. So why don't we take a minute or two and think about why this could lead to linear regret? I think the two-arm case is the easiest one to think about for this. Feel free to talk to anybody around you, if you wanna brainstorm. And this is actually an important reason why it's good to be optimistic, at least in reinforcement learning. [NOISE] What do you guys think? Does it lead to linear regret? [NOISE] Right, so we're going to get, sort of like, confirmation bias, so like, a2 is giving out smaller and smaller [inaudible]. Okay, so I talked to at least one person in the audience that gave the right answer. Which is, um, in this case if you select a2, its estimate is gonna continue to converge towards its real mean, and that real mean is above the lower bound of a1. And so you're gonna, kind of, get confirmation bias, like, a2 is gonna continue to look good, and you're never gonna select a1. So we're never gonna get information that allows us to disprove our, our hypothesis, and we're never gonna learn what the true optimal mean is. So that's why we can get linear regret. So one thing I just wanna highlight is that, um, upper confidence bounds are one nice way to do optimism, er, and they can change over time in terms of the upper bound, but the simpler thing you might imagine doing is just to initialize things to a really high value. Um, so pretend, for example, that you already observed one pull of each of the arms and that it was really, really good. So just initialize all your values at like a million or something like that. Um, and then you just kind of average in that weird fake pull when you're doing your empirical average. So you can imagine you pretend you pulled each of your arms once and got a million, or a trillion, and then after that you just average in all your actual empirical rewards. So this actually can work fairly well in a lot of cases; the challenge is to figure out how optimistic you need to be for that fake pull. So just in terms of comparing that approach to other approaches, recall that greedy gives you linear total regret. Constant e-greedy can also give you linear total regret.
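Here is a rough sketch of the fake optimistic pull described a few sentences above: pretend each arm was pulled once with an enormous reward, then act greedily while averaging in the real data. The specific value of a million is just the made-up number from the lecture, and the class structure is an illustrative assumption.

```python
class OptimisticInitBandit:
    """Greedy bandit with optimistic initialization via one fake pull per arm."""

    def __init__(self, n_arms, optimistic_value=1e6):
        # Pretend we already pulled each arm once and saw `optimistic_value`.
        self.counts = [1] * n_arms
        self.reward_sums = [float(optimistic_value)] * n_arms

    def select_arm(self):
        # Act greedily on the empirical means, which include the fake pull.
        means = [s / c for s, c in zip(self.reward_sums, self.counts)]
        return max(range(len(means)), key=lambda a: means[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.reward_sums[arm] += reward
```

Because the fake pull is averaged in, an arm's estimate stays inflated until it has been pulled enough real times, which is what drives the systematic exploration; the catch, as noted above, is choosing how optimistic that fake pull needs to be.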
If you decay e-greedy, you can get actually sub-linear regret, um, if you use the right schedule for decay in epsilon, but that generally requires knowledge of the gaps which are unknown. So this is sort of uh, it's generally impossible to actually achieve. But if, in principle, you know, if you sort of, you know, had an oracle, you could, uh, you could figure out how to decay things. If you're too optimistic in your initialization, um, if you initialize the value sufficiently optimistically, then you can achieve sub-linear regret, but again, it can be pretty subtle to figure out exactly how optimistic those need to be. I, and I'll talk later about an example where in that Markov decision process case they have to be much more optimistic than you might think they would need to be in order for this to work well. All right, so we're gonna now start to talk about Bayesian bandits and Bayesian regret. And so far we've sort of made very little assumptions about the reward distribution. We've assumed that the rewards might be bounded, so we typically assume that the rewards are gonna lie in sort of 0, 0 to 1, or 0 to some, you know, 0 to R max or something like that, that we have bounded rewards that you can't have infinite rewards as nice as it would be. Um, [NOISE] but we haven't been making strong parametric assumptions over the distribution of rewards. [NOISE] Um, Hoeffding doesn't, uh, Hoeffding requires the rewards to be bounded but it doesn't assume, for example, that they're Gaussian or Bernoulli or things like that. So an alternative approach is to assume that we actually do have information on the parametric distribution of the rewards, and it'll exploit that. Um, so we're gonna talk about Bayesian bandits now, where we sort of explicitly compute a posterior over the rewards given the history. So given the previous actions, we, the arms we've pulled and the rewards we've observed. Uh, uh, and given that posterior we can use that posterior to guide exploration. And of course, if your prior knowledge is accurate, that might help you. Um, there's sort of somewhat of a debate between Frequentist and Bayesian, um, views of the world. We're not kind of get really too much into that in this class, but the idea is that it's also gonna be a nice way to put in prior knowledge. If you have prior knowledge about the particular reward structure of your environment you can put those in and it can help in terms of exploration. Okay, so in the Bayesian view now we're just gonna do sort of a quick review of Bayesian inference. Um, a number of you guys it's probably gonna be a- a refresher for, for some people it might be new. We're gonna assume that we have a prior over the unknown parameters. So in our case the unknown parameters are gonna be the parameters that determine the reward probability distribution for each of the arms. And the idea is that given sort of observations or data about that parameter like observing rewards when you pull an arm, we're gonna update our uncertainty over the unknown parameters using Bayes' rule. So let's look at that as a specific example. So for example imagine that the reward of an arm i is a probability distribution that depends on some unknown parameter phi i. So note that this is unknown. And we're gonna have some initial prior over phi i, which is the probability of phi i. So this was before we pulled that arm at all. This is sort of our uncertainty over that parameter. [NOISE] And then we pull arm i and we observe a particular reward, ri1. 
And then we can use this to update our estimate of the distribution over the parameters that determine our reward probability distribution for this arm. So, we do that using Bayes' rule. And we say that the posterior probability of our parameters phi i given that we've observed that reward is equal to our prior probability over those times provide data evidence or likelihood, divided by the probability of observing that reward regardless of what your parameters were. And so this is Bayes' rule, um, and the challenge or the important thing here is how do we compute all of those things. So, that tells us how to update our posterior over the parameters, and the question is how do we do this? [NOISE] Okay. So, in general, doing this sort of updating can be very tricky to do. Because if you don't have any structure on the sort of parametric form of the prior and the data likelihood. So, this again is the prior, and this is the data likelihood. If you have no structure on this, um, one of them is a deep neural network and another one of them is some random other, um, parametric distribution, then it may be impossible to have a closed-form representation for what the posterior is. So in general, this can be really hard. Um, [NOISE] but it turns out that there's particular forms of the prior and the data likelihood that mean that we can do this analytically. [NOISE] So, who here is familiar with conjugates? Okay, some people but not everybody. So, these are really cool conjugates. Um, exponential families, for example, are conjugate distributions. Um, [NOISE] the idea is that if the parametric representation of the prior and the posterior is the same, we call the prior and the model conjugate. So, what would that mean so, for example, what if this is like a Gaussian? If this is a Gaussian and this is a Gaussian, then we would say that this and this are conjugate. Whatever thing we're using for the data likelihood. It essentially means that we can do this posterior update- updating analytically or in closed form, which is really nice. So, we call it, this means that we sort of keep things in the same parametric family as we're getting more evidence about these hidden parameters. [NOISE] I'll give an example of this in a second. Um, but there are a number of different parametric families which have conjugate priors. Which means that, um, if you have an initial uncertainty over the parameter in that distribution, then if you observe some data you can update it and you are still in the same parametric family. So, they're super elegant, and come up in statistics a lot. Um, [NOISE] all right. So, here's a particular example that's relevant to us which is Bernoulli's. Um, so let's think about a bandit problem where the reward of an arm is just a binary outcome. Um, and that this is sampled from a Bernoulli with parameter theta. So this comes up a lot. This is things like advertisement click-through rates, patient treatment succeeds or fails et cetera. So, many, many cases we- when we pull an arm, or when we take an action we're gonna get a binary reward, either 0 or 1. [NOISE] So it turns out that, um, the beta distribution, beta alpha beta is conjugate for the Bernoulli distribution. So that means we can write down our prior over the Bernoulli parameter, um, given alpha and beta as follows. It's theta to the alpha - 1, 1 - theta to the beta - 1, times a ratio of the gammas. Gammas are related to the factorial, factorial distribution. So, all of this can be computed analytically. 
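Written out, the Beta prior just described and its conjugate update for a Bernoulli reward r in {0, 1} look like this; this is the standard Beta-Bernoulli pair, stated here for reference with generic alpha and beta.

```latex
p(\theta \mid \alpha, \beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\,
  \theta^{\alpha-1}(1-\theta)^{\beta-1},
\qquad
p(r \mid \theta) = \theta^{r}(1-\theta)^{1-r},\quad r \in \{0,1\}

p(\theta \mid r, \alpha, \beta) \propto \theta^{\alpha-1+r}(1-\theta)^{\beta-r}
\;\Rightarrow\;
\theta \mid r \sim \mathrm{Beta}(\alpha + r,\; \beta + 1 - r)
```

Observing r = 1 bumps alpha by one and observing r = 0 bumps beta by one, which is exactly the counting interpretation described next.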
And one nice way I like to think about this is that we can think of alpha and beta as essentially being the result of prior pulls of the arm. So, we can use them also to encode sort of prior information about this. And I'll show you shortly an example of how these get updated. [NOISE] But what happens is that if you assume that the prior over theta is a Beta, um, so if it looks like this, this is the prior, then if you observe a reward that's in 0, 1, because that's what our rewards are whenever we sample one, then the updated posterior over theta has a really nice form. It's just the same beta distribution with either 1 added to the alpha or 1 added to the beta. So, essentially if you observe r = 1 then you get beta of alpha + 1 and beta. If you observe r = 0 you instead add 1 to the beta term. So, you can think of alpha as being the number of times that you saw a reward of 1, and beta as the number of times you saw a reward of 0. Can you explain how the fact that theta was Bernoulli factored into this description, and why this isn't just a description of a beta distribution, the main equation? [NOISE] Like why is it important to first say that theta is Bernoulli? I'm glad you're bringing that up, uh, great question. So, the reason I bring this up is that the beta is a conjugate prior for the Bernoulli. So, the idea is that in this case, if we're thinking about an arm which has binary outcomes, then we can think of the average of that arm as being represented by a Bernoulli parameter. So, let's say like 0.7: you know, on average 0.7 of the time we get a reward of 1, and 0.3 of the time we get a reward of 0. So, [NOISE], um, the mean of that arm is 0.7. Okay. So, we're thinking about an arm which has a Bernoulli parameter that describes its mean. So, we're thinking of, like, you know, an arm with mean equal to theta. So, that's what the mean is for a Bernoulli distribution. Um, and what I'm saying is that we want to now be Bayesian about that, and we want to think about what is the probability of that parameter given the data we've seen so far. And if we wanna be able to update our estimate over theta, not over rewards, over theta, then we're gonna write down our distribution over what thetas might be possible, um, [NOISE] as a beta distribution. And we're going to update that as we see evidence. And I'll show you shortly what these betas look like, so we can think of what the probability distribution over thetas is as we get more evidence. So, for example, you might imagine if you see a reward which is 1, 1, 1, 1, 1, 1, 1, um, then your beta distribution is going to indicate that a theta which is really high is more likely. If you get 0, 0, 0, 0, 0, your beta is gonna have a different, shifted posterior, which is gonna say probably your theta's really low, close to 0. Cool. So, we'll see an example of this in just a second. Um, so, the nice thing in this case is that for Bernoullis, which is a really common distribution that we often want to think about, we can write down, um, a prior over that parameter and we can update it analytically just using the counts. So, we just keep track of how many times we've seen a 1 or how many times we have a 0, and we use that to update our posterior. Okay, so how do we evaluate performance now that we're in this Bayesian setting? So in the frequentist regret, we didn't think about having distributions over parameters. We just thought of there being some parameter, like, you know, what's the mean of that arm.
Um, and then we defined our regret with respect to the best arm. Bayesian regret assumes there's this prior over parameters. And so Bayesian regret says, what is my expected regret by thinking about what are the possible parameters given my prior. Um, and then looking at the expected performance if I got a particular theta. So it's a little bit different way of looking at the world. Um, again, we're not gonna really get into the philosophical aspects of this. But Ba- Bayesian regret is saying like well, we're not sure, you know what the distributions are of these arms. Um, and there'll be different worlds in which they'll take on different values and how well do you do in those different worlds on average. All right. So how do we try to make good decisions for Bayesian bandits? So one thing you might imagine is let's say we have a parametric distribution over the rewards for each of the arms. Um, we, we could have, we could certainly have that in the Bayesian case. The idea of probability matching which I think has been around since around 1929. Its been around a long time, like almost 100 years. Um, ah, is that we wanna select an action a according to the probability that it's optimal. So it seems quite intuitively appealing like we want to select arms that might be optimal more. Um, we want to select arms that probably aren't likely to be optimal less. And it is optimistic in the face of uncertainty because [NOISE] in general uncertain actions have a higher probability of being the best. So uncertain actions mean we don't know very much about what their rewards are. Um, the problem is this sounds really nice, we'd like to sort of select arms according to the probability that they're optimal but it's completely unclear how to compute that. So this expression here is saying we wanna sample an arm given a history. So the history here, here is prior pulls and reward outcomes. So this is the history of the arms we pulled and whether we've got what sort of rewards we've gotten. And then we wanna pull an arm according to the probability that that arm is better than all the other arms given that history. So that's quite intellectually, um, appealing but it's not at all clear how we would compute that quantity. And so it's sort of somewhat magical that, um, a very simple approach turns out to implement probability matching. And the idea is called Thompson sampling, and again this came out, you know, roughly in the 1920s. And one of the really interesting aspects of that is it sort of disappeared for a long time in terms of bandits. Certainly in the AI community and CS community. And then around eight years ago, eight to nine years ago, people sort of got re-interested in understanding these, um, in part due to a paper that [NOISE] a colleague of mine published which you'll see results have shortly which indicated that empirically it can be really good. Okay. So how does Thompson sampling work? We're gonna initialize a prior over each of the arms. Often we'd like this to be conjugate, doesn't have to be. It's nice if it's conjugate. But we gonna have a probability over each of the arms. Um, now remember that this is sort of a probability over the parameters determining the distribution. So, um, it could be if we have Bernoulli arms, it could be the probability of theta. I for i equals 1 to the number of arms. So, for example, this could be a beta distribution of 1, 1. We could say the probability that my ith arm has a Bernoulli parameter of theta is equal to, um, uh, sampling from a beta 1, 1. 
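In symbols, the two ideas just introduced can be written roughly as follows, where h_t is the history of arms pulled and rewards observed, and the outer expectation in the Bayesian regret is over the unknown reward parameters phi drawn from the prior; the notation here is illustrative rather than the slide's exact notation.

```latex
\text{BayesRegret}(T) = \mathbb{E}_{\phi \sim p(\phi)}\!\left[\,\mathbb{E}\!\left[\sum_{t=1}^{T}\Big(Q_{\phi}(a^{\ast}_{\phi}) - Q_{\phi}(a_t)\Big)\right]\right],
\qquad
\pi(a \mid h_t) = \mathbb{P}\big(Q(a) \ge Q(a')\ \ \forall a' \,\big|\, h_t\big)
```

The second expression is the probability-matching rule: select each arm with the probability that it is the best arm given the history, which is the quantity Thompson sampling turns out to implement.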
Okay. So we're gonna pick a particular parametric family to represent our prior distribution over the, um, reward distributions for each of the arms. And then what we do is for each round, we first sample a rewards distribution from that posterior. Again we'll go through a concrete example of this in a second. But this is like picking a particular theta. So it's like saying I'm assuming that my mean for this arm e- or my Bernoulli parameter for this arm is 0.7, picking a particular value for that parameter. And then once you have that you can compute the action value function by just taking the mean of whatever that is. So notice that if theta, let's say we sample for arm 1, let's say we sample 0.9. We just happen to sample that arm 1 has, um, uh, a theta parameter 0.9. Well, the Q for arm 1 is then gonna be the expected value of a Bernoulli parameter theta one which is just 0.9. Because the expected value for a Bernoulli variable is just p, just the, just the theta. So we're gonna compute the action value function for each of these. Um, in the case of a Gaussian, it would just be its mean, for example. So you just compute what the mean expected reward is for each of the arms under the particular reward distribution we sampled and then you take whichever action looks best for those sampled parameters. So that's this. And then we take that action and we observe a reward and then we update our posterior using Bayes' Law. So we're just gonna take our priors over the parameters. We're gonna sample a particular set of parameters. These are probably totally wrong. This is just us making up what the actual, you know, parameters are for each of the arms. Then we act as if that world is optimal, we get some data, and we repeat. So it's a, it's a fairly simple thing to do. Um, we of course have to see how we can do this sampling. Um, and I'll show you an example with Bernoulli's in a second. Um, but nowhere here are we trying to explicitly compute like what is the posterior probability that this arm is optimal. We're just sampling some and then we're going to be sort of greedy with respect to those samples. Okay. So Thompson sampling turns out to implement probability matching which is super cool. Um, and for some of the intuition of this, so this is what probability matching is. Probability matching is that we wanna select an action given the, um, history according to the probability that it's optimal. According to the probability that arm really is the best arm. And we can think of that as being equivalent to the expected value of picking a reward given the H and that, ah, the arm is equal to the arg max of QA given that history. And that's what Thompson sampling is computing. Okay, so let's see how this actually looks for the broken toe example, because I think that'll make it a lot more concrete. So again, in this case, remember that we have, um, three different arms and they each are looking at the success or failure of, um, doing this treatment to try to make people's broken toes better. Um, and surgery is a Bernoulli parameter with 0.95. Taping is, uh, is, uh, 0.9 and nothing is 0.1. So if we wanted to then run Thompson sampling in this environment, remember it doesn't know what those actual parameters are. We're gonna choose a beta 1, 1 prior over the parameters. So that means that we're gonna say the probability of theta 1 is equal to a beta. Okay. So what does a beta 1, 1 look like? It looks like a uniform distribution. So this is 0 to 1. This is theta. This is probability of theta. 
So what this says is that if someone gives you a beta 1, 1 distribution, it says you're going to select a Bernoulli parameter from that. I have no idea what its value is. Could be 0, it could be 1, it could be 0.5. It- it's sort of an uninformative prior. Okay. So it says that initially I have no idea what, um, these theta parameters might be for each of the arms. Um, so I'm just gonna pretend it's- I'm gonna start off and assume it's flat. I have no information. Okay. So this is what, um, a beta 1, 1 looks like. But what's gonna happen in Thompson sampling? So this is our distribution over thetas. And what Thompson sampling is gonna do is it's gonna sample a parameter from that distribution. So in this case it's just a uniform distribution between 0 and 1. So we're just gonna select some value between 0 and 1. So in this case imagine what we got is, I'll just leave this up for a second so people can see 0.3, 0.5. and 0.6. There's no reason that arm three would be higher or lower than arm one or arm two or arm three. In reality arm one is best, but we have no information about that so far. We have no rewards so far. All we've done is we've just said, I have a uniform distribution over what my theta parameter might be, I'm gonna sample from it. And so in this case, it's like sampling and you've got this value once, you got this value once, and you got this value once. And we just sampled from that distribute- that uniform distribution and these are the parameters we pegged. And now we're going to pretend that's real. So we're gonna say I'm gonna pretend that my theta for surgery is 0.3. My theta for taping is 0.5 and my theta for nothing is 0.6. So if that was the real world we lived in, what arm would we select? Third arm. Third arm. Exactly. So in this world, the third arm really is best because that's a theta of 0.6. So we're going to select theta as 0.6. Yeah. Using uninformative priors because I could have many values of beta 2, 2 or 5, 5. So is that a significant advantage of using this for Thompson or something? Yeah. It makes a really good point. She said we currently use an uninformative prior, is it better or worse to do like that compared to using an uninformative prior. So you could have had a beta of like 3, 4 et cetera. Um, a beta of 3, 4 or anything that's not 1, 1 is gonna give you, um, is gonna bias your distribution, it's gonna change the shape of how you sample things. If you have actually good information that can be really useful, um, because it's essentially like having fake pools. Um, and it can- it can guide sort of your initial samples. The downside is that if that isn't correct, you can be misled for a while. So we often talk about like how robust are we to misspecified priors or to wrong priors. Um, and so using the uninformative prior means that, ah, you're not getting a lot of benefit from prior knowledge but you're also not gonna get a disadvantage. Okay. So in this case we're gonna select the arm- arm three because that's just the arm that has the best expected mean, um, under the samples that we did. Okay, but arm three is actually not very good. And we know that because arm three actually only has a reward of 0.1. And so when we sample it and we get the patient's outcome, we're gonna get a 0 in this particular case. Because the real arm three is 0.1. So if we sample from a Bernoulli with 0.1, most of the time we're going to get a 0. So now we have to do is we have to update our posterior over arm three. 
Okay, we have to update what our probability over theta for arm three is, given that the reward was equal to 0. Okay. [NOISE] So what we talked about is that the beta is a conjugate prior for the Bernoulli. If we observe a 1, we're going to update the first parameter, and if we observe a 0, we're gonna update the second parameter. We just saw a 0, so our new beta is 1, 2. Because we just saw a 0 and so we update. So this is our new parameter. That's our new posterior over arm three and it looks like this. Okay. So this is still theta, which always has to be between 0 and 1 because this is a Bernoulli parameter. And this is what the probability looks like now. So notice it shifted: it used to be flat, and now it says well no, I just observed that we've got a reward of 0, so now I have a higher probability that theta is small. So if I was going to sample from this, it is more likely I would get a lower value compared to a higher value, unlike before. Okay. So this is our new posterior. And what does the posterior look like for the other arms? So this is for, um, this is for theta three. And for the other two, they still look uniform because they're still a beta 1, 1. So for beta 1, 1, this is for the other arms, theta 1 and theta 2, and this is the probability of theta. The other two are still uniform because we haven't pulled them yet. We don't have any outcomes. So they still look like uniform distributions. But the probability over theta 3 looks skewed towards 0. So now in Thompson sampling, we again are just going to sample a value from each of these different ones, each of those three distributions. And now imagine we get 0.7, 0.5 and 0.3, yeah? Turn back a slide, should that say p of Q a3, not Q of a1, because didn't you say a3- Thank you. Yeah, hold on, there's a couple of errors there, yeah. Thanks for catching that. Any other questions? Okay. So we updated our posterior over arm three. Our posterior over arm one and arm two is the same as the prior because we didn't pull them. Okay. So now we're gonna sample from those three distributions. What Thompson sampling would say is, now given our posterior over all of the arms, let's select an actual parameter for each of the arms. And this time we're going to get 0.7, 0.5 and 0.3. So which arm are we going to select this time? Arm one. Arm one. Right? So now the max is gonna be arm one. Okay. So now we're gonna have a posterior that looks like beta 2, 1 because we update our alpha parameter. And remember we can just think of the first parameter as being the number of r = 1s plus 1, because we started with a beta 1, 1, and the second as the number of r = 0s plus 1. So now our new posterior for this one looks like this. So this is theta 1, from 0 to 1, and this is the probability of theta 1. Okay. So now as we would expect, we saw that arm one had a good outcome, and so now our probability that that Bernoulli parameter is higher than 0.5 is going up, because we saw some positive results. Okay. So what do our new distributions look like? We have a beta 2, 1, we have a beta 1, 1 because we haven't selected arm two yet, and then we have a beta 1, 2. All right. So what's going to happen next? Um, we again are gonna sample a Bernoulli parameter. So let's imagine that we got 0.71, 0.65 and 0.1. And so that means we're again gonna select arm one. And we again observe a one, surgery is pretty effective. And now our posterior is 3, 1. So now this again is 0 to 1. This is our probability of theta 1. Okay. So now it's looking even more peaked.
So what's your guess of what's the next arm we're likely to sample? So remember the three distributions that we have right now is for arm two, looks like this, for arm three, looks like this. So this is the probability of that arm. Since theta a2, theta a3, 0, 1, 0, 1. So who thinks that, um, theta 1 is again gonna be sampled and look better than everything else? That's right because it's going to have- has a posterior over its Bernoulli parameter that is getting closer and more and more steep towards 1. Theta 2 we still never- we've still never taken action a2, but it just has a uniform probability. So it's very unlikely that we're gonna sample a value for it that is better than the value we sampled for arm one. So again in this case, we can imagine sampling again, we get 0.75, 0.45, 0.4. We select action a1 again and now we have a beta 4,1 and it's looking even more sharp up. So notice this is quite different than what UCB was doing. UCB was splitting its time between a1 and 2 at the beginning because, um, they were both reliant on their empirical means, um, but then a2 had been taken less times. In this case, we still haven't taken action a2 yet. And it may be hard for us to pull it for a while. Now, that's not actually bad in this case because theta 1 is actually the best arm. But there can be sor- some trade-offs. Yes. Is this the only way we can update the Beta distributions? Uh, is, is there a rule that we should increment it it by one or [NOISE] of course we have different kinds of rewards. Here rewards are 0 and 1, right? So do you have some kinds of rewards probably of beta, beta distribution. Great question. Category is like, okay, so here we've got, um, binary rewards. How would we do this if the things were not binary? In that case we wouldn't use a beta in a Bernoulli. So if you didn't have, uh, for binary rewards, Bernoulli's a really nice choice and betas conjugate. If you have real-valued rewards you might use a Gaussian. Um, and then you'd have a, a, a sort of Gaussian prior depending on whether you know your theta or not. And in general, uh, like for multinomials you can use Dirichlet distributions. Depends on what your reward distribution looks like and then you wanna find a conjugate prior for that distribution. So there's a lot of different families of parametric distributions for which you can do this sort of updating. Yeah. Then the other things we're talking about. Remind me your name. About being optimistic. Yes. And here is like a uniform distribution for initialization. Is it better to use something more optimistic? That is a great question. His question was, uh, so we talked before about the benefits of optimism. Here we just used a uniform prior. Um, and wouldn't be better to use one that is optimistic. It depends, um, I, the empirically what, so what is this doing? I- I'm just gonna, let me hold on that question for a second. So we can look at sort of what these look like. So if we did optimism, we sampled all the actions first and then we sort of got this interleaving of a1 or, and a2. In Thompson sampling, we took a3 and we took a1, a1, a1, a1, and a1, a1, a1, a1. a1 is optimal in this case. So by using a uniform prior here, essentially, um, as soon as you see something that looks pretty good like better than 0.5, um, you're gonna tend to often sample it a lot more. Um, so you're sort of exploiting faster to some extent. Um, you can put priors in there. The question is often like, how much to put that in there and if it actually helps. 
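Pulling the last few paragraphs together, here is a short Python sketch of Thompson sampling for Bernoulli arms with Beta(1, 1) priors, using the broken-toe success probabilities from the running example; it's a sketch of the procedure described above, not the course's reference implementation.

```python
import random

true_thetas = [0.95, 0.90, 0.10]   # surgery, buddy taping, do nothing (unknown to the agent)
alpha = [1, 1, 1]                  # Beta prior parameters per arm
beta = [1, 1, 1]

def thompson_step():
    # 1. Sample a Bernoulli parameter for each arm from its Beta posterior.
    samples = [random.betavariate(alpha[a], beta[a]) for a in range(3)]
    # 2. Act greedily with respect to the sampled parameters.
    arm = max(range(3), key=lambda a: samples[a])
    # 3. Observe a Bernoulli reward from the real arm.
    reward = 1 if random.random() < true_thetas[arm] else 0
    # 4. Conjugate posterior update: just add the outcome to the counts.
    alpha[arm] += reward
    beta[arm] += 1 - reward
    return arm, reward

for t in range(1000):
    thompson_step()
print(alpha, beta)   # the posterior counts concentrate on the best arm over time
```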
So one of the cool things with Thompson sampling is it turns out in terms of sort of the Bayesian regret bounds, [NOISE] they're as good as the, as the upper confidence bounds, but empirically often exploring faster is helpful. So you could put optimism in there, but it might actually hurt performance because it's gonna force you to take, like in this case, r1 actually is optimal. Um, now, you could have imagined maybe we were just lucky there and instead we got an a2. In that case, you'd want something to help you eventually take a1. Now note, we, we will still take a2 likely at some point because there still will be a probability under that uniform prior that you'll sample like 0.999 and then you'll take a2. So you can still be sure to start taking other actions even using these uniform priors. But it's a really good question about sort of, you know, weird, what information to put in there. Yeah. Um. Remind me of your name. [inaudible].it's stuck, it becomes very hard for a different action to catch up. So the, uh, so in that sense that chance that is suck is more that important? And then [inaudible]. That is a really good one which is, okay, so maybe if one arm is really good, then, um, it becomes really hard for the other arms to, to catch up. So in this case, that's true because theta 1, um, really does have 0.95. Let's imagine a slightly [NOISE] different case where this was like 0.7 or something like that. Then in that case over time it would be likely that your Beta distribution is going to converge to around the real distribution of the parameter. So if you keep sampling theta 1 forever, uh, eventually, [NOISE] you know, it's gonna sort of collapse towards what the true value is. Um, and so if the true value isn't very close to one, there will be some probability you'll sample. Like so imagine this versus a uniform. That's not very close to 1. At some point, there is a non-zero probability that you'd sample something that's higher. [NOISE] So the beginning matters, um, but you can outweigh it over time. Just like what we can with the empirical distributions. Okay. So if we look at this and we sort of look at the incurred frequentist regret [NOISE] which is not the same as Bayesian regret because in that case we'd have to average over the parameters. Um, in this case Thompson sampling is doing, uh, a lot better. So [NOISE] in this case, here, this would be 0.85 and that'll be 0, 0, 0, 0. [NOISE] Okay. So in this case, Thompson sampling would be doing a much better job. [NOISE] Now, um, Thompson sampling, uh, actually does achieve the Lai and Robbins lower bound for the performance of a- an algorithm. So, um, in terms of its lower bound is similar. So that's one indication this might be a good algorithm. But we have, there's a lot of bounds for optimism. In general, um, the, the bounds for optimism are better than the bounds for Thompson sampling. A lot of the Thompson sampling, uh, bounds end up converting to sort of upper confidence bounds. A little bit like what was asking. So if we, um, if you want to make Thompson sampling have frequentist-like bounds, um, often we end up sort of making our comfort, our sort of being more optimistic in terms of Thompson sampling. Okay, to put those. Um, but empirically Thompson sampling is often great. Yeah. Can you mix them? Can you mix Thompson sampling and upper confidence bounds? Uh, maybe start with some form of upper confidence bound and use that information to update like your priors on the Thompson sampling? 
[NOISE] Um, I, I shouldn't just say, can you, could you mix them? Y- You probably could. I, you could probably do it. I don't know. So I guess to me one of the, so maybe you could start with upper confidence bounds and then use Thompson sampling. Um, to me one of the big benefits of Thompson sampling empirically is that it is less optimistic than upper confidence bounds. Upper confidence bounds tend to be too optimistic for too long. And it's like, "Oh, you know, I ran into the door 30,000 times, but maybe that 30,001th time I won't." You know, like for your robot or something like that. So, um, it often tends to sort of think about the extreme events. Whereas, if you know that say the world is really Gaussian like the probability that your robot is going to run into a wall again is still really high even if it's only run into the wall 10 times. So maybe you should pick a different path. So I think often you probably would want to start with Thompson sampling, but you would like to be robust to your prior. And so some work that tries to combine these ideas is known as PAC-Bayesian, where you try to get, I'll define what PAC is in a second. But, um, you'll try to get bounds that are kinda frequentist-like, but also get the best of both worlds. So you'd like to be like Bayesian if your prior is really good and, um, and PAC if you like sort of frequentist, if it turns out that your prior's wrong. Okay. So, um, one of the papers that I think sort of changed a lot of people's minds about Bayesian and, uh, bandits and also Thompson sampling being a good idea was this paper by my colleague Lihong Li and also, uh, Chapelle where they looked at contextual bandits and we will hopefully get to this for a little bit on Wednesday. But the idea in contextual bandits is that you have a state and an action. So it's a little bit different than the bandits we've seen so far. Unlike in MDPs your action does not affect the next state. So for those of you doing the default project, you're seeing examples of this where how you treat the current patient doesn't impact the next patient that comes along, but the patient characteristics can affect which arm is best. So, so contextual bandits is a very popular and powerful framework. Um, and so in this case they were looking at news article recommendations and they were [NOISE] finding Thompson sampling did much better than upper confidence bounds in a number of other algorithms. It also can be more robust, uh, when your outcomes are delayed. So this happens a lot often in real cases. You can imagine here you treat a patient, you're not gonna find out whether or not that toe procedure helps for another six weeks. But in the meantime other people come in whose toes needs to be treated. Um, and if you use upper confidence bound algorithms [NOISE] they tend to be deterministic. Bless you. And, um, and so you just keep treating everybody with the same thing until you get the outcome from the first, whereas Thompson sampling is stochastic. So you'll be sort of trying out a lot of things. That's another good reason in practice why Thompson sampling can be helpful. Okay. So I'm not gonna go through the proof today, but I'll put some pointers so that, um, the, the nice thing is that if you look at the Bayesian regret of Thompson sampling, uh, it's going to have a similar result to what upper confidence bounds has. So it essentially has, has the same regar- regrets bounds as UCB, essentially. 
I'm being slightly hand-wavey for that, there's some important subtle details, but roughly you can show that these also have good Bayesian regret bounds if your prior's correct. Um, and so that's sort of again a nice sanity check, that you kind of get this logarithmic regret growth. All right. Another framework that I just mentioned sort of in passing just now is probably approximately correct. So, these theoretical regret bounds specify how your regret grows over time. Um, and one thing that's hard to know is whether you're making a lot of small mistakes or a few big mistakes. Your regret bounds are cumulative, so it doesn't allow you to distinguish between those two. So, you can imagine in the case of patient treatments, this could be pretty important. Like are you giving everybody a headache, um, or are a few patients really, you know, having really really bad side effects. So, so regret- cumulative regret, um, doesn't distinguish between those two, because if a couple of people have really bad side effects, that's the same as a lot of people having headaches when you average over those. Um, and so one idea is to say well, maybe we just wanna kinda bound the number of non-small errors. So, we wanna bound the number of people that experience really bad side effects, for example. [NOISE] So, Probably Approximately Correct comes up in supervised learning. In the context of decision-making, we often define it as follows, a Probably Approximately Correct algorithm or a PAC state that the algorithm will choose an action who is- which is epsilon close to optimal, with probabilities 1 - delta, on all but a polynomial number of steps. So, the probability part comes from here, so it's not guaranteeing that you will do this, but with high confidence or probably it will do this. It's approximately correct because we're only guaranteeing epsilon-optimality. And the important aspect is it's- only it does- makes these sort of, um, makes mistakes that might be bigger than epsilon, so the number of, you know, patients we might treat that have really, really bad side effects is gonna be no more than a polynomial function. Where the polynomial function is a function of the parameters of your domain. So, things like the number of actions you have, epsilon and delta. And you should be able to compute this in advance too. So, you should be able to compute how many mistakes you might make. Um, and one of the cool things is that you g- a lot of the PAC algorithms, um, algorithms that are PAC are based on optimism or Thompson sampling. Now, PAC for bandits is a much less common, uh, approach than when we go to MDPs. In bandits, most of the time we look at regret. But for when we look at Markov Decision Processes, PAC is more popular, and, and we'll see one of the reasons for that probably later, or feel free to ask me about it if we don't get to it today. Okay. So, what would PAC look like in our little example we had here before? So, let's use O to denote optimism, TS to denote Thompson sampling, and within epsilon, um, means that the action that we select is within epsilon of the optimal action. So, its value is epsilon close to the optimal action. So, I've written down the regret in this case. Um, here what we'd have is that the- for, um, optimism, the first action that we pull is a1, so, um, it's within epsilon, yes, because we're close to the optimal action, a2, um, has a mean of 0.9, so that's within 0.05 of 0.95, so this is yes. 
Action a3 is 0.1, so it's not within epsilon of the optimal action, so this is no, and so forth. So, this essentially allows the algorithm to be taking either a1 or action a2 under this definition of epsilon. Because I don't care whether or not you're taking action a1 or a2, both of them are really pretty good; both of them are within 0.05 of each other, I mean, that's fine. Um, but you- we don't want you to take action 3 very much because it's much worse. Um, and then in this case, uh, this one would say, this is not within epsilon because the first action we take is bad but then all the rest are good. And what a PAC approach would do would they'd be counting all these- counting the, the mistakes. Okay. So, we just talked about for bandits, um, different sorts of frameworks and criterias. We talked about regret, Bayesian regret and PAC, um, [NOISE] and we talked about two styles of approaches, either optimism or Thompson sampling. And what we can see now is that Markov decision processes have many of the same sorts of ideas being applicable, but it also does get a lot more challenging. So, in particular what we're gonna talk about right now is we're gonna talk about tabular MDPs. And it turns out that even from with tabular MDPs that things are a lot more subtle. Um, so, how does this work? The, the regret- the Bayesian regret in PAC is all gonna be applicable, so is optimism, and so is probability matching. So, let's start with thinking about optimism under uncertainty. First, let's think about just doing optimistic initialization. So, in this case, imagine that we just initialize all of our queue state actions, um, to some value. So, let's imagine that we initialize them to rmax divided by 1 - gamma, where rmax is the highest reward you could see in any state-action pair. Let's just take one minute, why is that value guaranteed to be optimistic? Anybody wanna answer why that's guaranteed to be optimistic? Right. Yeah. It's higher than like, the possible, um, total value, because like we've shown a couple of times that rmax one line of scandal would be the highest value, but it goes on a bit [inaudible]. That's right. Yeah, so what said is correct. Um, we've shown that for a discounted Markov decision process that the highest value you could get is rmax divided by 1 - gamma. At best all of your states have that, or else some of them might not, so this is guaranteed to be an optimistic value. So, you could start off and if you've, uh, initialized all of your state action values to be rmax divided by 1 - gamma. And then you can do Monte-Carlo, you can do Q-learning, you can do Sarsa. Um, and you could incrementally update using that. And this can be very helpful, it can sort of encourage systematic exploration of states and actions, because essentially you're pretending that everything in the world is really awesome, um, until proven otherwise. So, on the downside, unfortunately if you do this in general there's no guarantees on performance, um, even though it's often empirically better. So, even though this really is, um, you- you know, an upper bound, this is optimistic. Um, a key issue is how quickly you're updating from those optimistic values. So, as an early result in this case, Even-Dar and Mansour in 2002 proved that, if you run Q-learning with learning rates- this should say alpha-i. 
So if you, uh, run Q-learning with particular alpha rates, um, alpha-i on each time step i, and you initialize the value of a state, so this is the very beginning, um, to be rmax divide by 1 - gamma times the product of those learning rates, and t is the number of samples you need to learn optimal Q, then greedy-only Q-learning is PAC with that initialization. So, I just wanna highlight something here which is this part. So, notice this is way, way, way larger than just rmax over 1 - gamma, because this is a product of all your learning rates. Okay, so, this could be really enormous, like you'd imagine that, um, imagine that alpha = 0.1 for all time steps, then what you have here is you have 1 over 0.1 to the t, which is approximately- it would just equal to 10 to the t [NOISE]. So, this is like exponential in the number of time steps you're gonna make decisions. It's incredibly optimistic. Um, it turns out this is sufficient to be PAC, but it's also not very good. Um, uh, it's, it's very, very extremely large. Okay? Um, now, there's been some really cool work by Chi Jin and some others over at Berkeley that showed that, um, if you use a less optimistic initialization, um, that's strongly related to upper confidence bounds, um, and you were careful about your learning rates, so you have to change your learning rates, but if you're careful about your learning rates, they proved that, um, model-free Q-learning could also be PAC. And this was a pretty big deal recently because almost all of the work that's been going on has been in the model-based setting. So this just came out in NeurIPS, uh, about two months ago, um, and so they- oh sorry, not PAC. They, they showed the regret bounds. Um, they're not optimal regret bounds, but they're good. So, um, they're, they're not tight yet but, uh, it shows that model-free algorithms can do pretty well. [NOISE] Okay. So what about model-based approaches? And the model-based approaches for MDPs are the ones where we really have the best bounds right now. So there's a couple of main ideas or a couple different procedures we could go with. One is that, you can be really, really optimistic in all your estimates, until you're confident that your empirical estimates of your dynamics and reward model are close to the true dynamics in reward model parameters. So these sort of algorithms proceed as if they say, the reward for all state action pairs is amazing, it's rmax divided by 1 - gamma. And I'm gonna continue to pretend that's true, until I think I have enough data for that state action pair that I think that if I did a MLE, maximum likelihood estimate of those parameters, they will be close to the true parameters. So you could say, I'm just going to be incredibly optimistic until I've got enough data. And then when I've got enough data then I, um, think I can get a good empirical estimate that is close to the true estimate, and then I'll use those instead. So it's almost kinda like a switching point. You sort of keep, um, you pretend everything's really, really great until you get enough data, and then you switch over to the empirical estimate. So these were some of the earliest ones, um, that showed that MDPs could be pa- oh, algorithms for MDPs could be PAC. This is from 2002. Uh, but they're also empirically not normally so good because, um, you're pretending things are really, really awesome, even though you might have quite a lot of evidence for that state-action pair that it's not awesome. 
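To make the optimistic-initialization idea above concrete, here is a minimal sketch of tabular Q-learning where every Q(s, a) starts at rmax / (1 - gamma). It is not code from the lecture; the environment interface (reset returning an integer state, step returning next state, reward, done) and the fixed learning rate are illustrative assumptions.

```python
import numpy as np

def optimistic_q_learning(env, num_states, num_actions, gamma=0.95,
                          r_max=1.0, alpha=0.1, num_episodes=500):
    """Tabular Q-learning with optimistic initialization.

    Every Q(s, a) starts at r_max / (1 - gamma), an upper bound on any
    discounted return, so untried actions look attractive until the
    updates pull their values back down toward reality.
    """
    q = np.full((num_states, num_actions), r_max / (1.0 - gamma))
    for _ in range(num_episodes):
        s = env.reset()                        # assumed: integer state index
        done = False
        while not done:
            a = int(np.argmax(q[s]))           # greedy w.r.t. optimistic values
            s_next, r, done = env.step(a)      # assumed: (next_state, reward, done)
            target = r + (0.0 if done else gamma * np.max(q[s_next]))
            q[s, a] += alpha * (target - q[s, a])
            s = s_next
    return q
```

Acting greedily here still explores early on, because every untried state-action pair looks maximally good; as the lecture notes, though, this alone carries no general performance guarantee.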
So another approach is to be optimistic given the information you have. So what do I mean by that? I mean that as your agent walks around and gathers observations of the actions and rewards it gets, it uses that to try to estimate, um, how good the world could be given that data. And so one approach to this is to compute confidence sets on dynamics and rewards models. So we already saw this for bandits, where we computed upper and lower confidence, or we could compute upper and lower confidence bounds for the rewards. Turns out we can also compute confidence sets over the dynamics model. Or we could just add reward bonuses that depend on the experience or data. And I'm gonna talk, um, at least a little bit today before we finish, about the second thing. And the reason I'm gonna talk about this particular approach is because when we start to think about doing this in the function approximation setting, if the way that your dynamics model is represented is by a deep neural network, um, then writing down, ah, uncertainties over that can be really tricky. And also a lot of the progress in deep neural networks for RL focus on model-free approaches. And if we have reward bonuses, then we can easily extend that to the model-free case. Um, and empirically these ones generally do pretty much as well as if we use explicit confidence sets. So I'm just gonna explain how the model-based confidence, model-based interval estimation with exploration bonus works. Um, so it's gonna assume that we're given an epsilon delta and some constant m. Okay. And then what we're gonna do is we're gonna initialize some counts. [NOISE] So this is just gonna keep track of the number of times we've seen a state-action pair. So we're gonna do this for all s and for all a. We're also gonna keep track of the number of times we've seen an actio- state-action, next state pair. 0 for all s, for all a, for all s prime. And we're also going to keep track of the total sum of rewards we've gotten from any state and action pair. So we're gonna say rc of s, a = 0 for all s. So essentially we are gonna keep track of the times that we've been in any state, taking any action and went to any next state, and what the sum of rewards are for when we've done that. And then we're going to define a beta parameter. Okay, I'm going to double-check, I get the- Yeah. Okay. All right. So beta is gonna be a parameter that we're gonna use to define our reward bonuses. Okay, it's 1 over 1 - gamma, 2 log the number of states, number of actions, 2 times m, m is an input parameter divided by delta. Okay. And then- yeah, I think that's all I need here. Now I'm gonna say t = 0. We're going to initialize our state. And to start, we can just say Qt of s, a = 1 divided by 1 - gamma. And this assumes that all of our rewards are bounded between 0 and 1. So they're bounded rewards. Okay, so we start off when we initialize all our accounts to 0, we said we haven't observe- observed any rewards yet, and we pretend that the world is awesome, and that our Q value is the highest it could possibly be in every state-action pair. Um, here r-max = 1. So r-max is going to be equal to 1 because our rewards are bounded between 0 and 1. So what we do then is we take an action in the current state, given our- let's do tildes given our Q function. So getting, which is going to break ties randomly. And then we're going to observe the reward and observe the next state. And then we just update our counts. So we update our counts for that particular state action pair. 
We update our counts for s, a, s prime, s, a, s prime, for the number of times we've been in that state taking that action and went to that particular next state. And then we update our rewards for that state-action pair. It is equal to the previous rewards for that state action pair plus r_t. And then what we're gonna do is we're gonna use, um, those empirical counts to define an empirical transition model and empirical reward model. So our reward model is going to just be the MLE reward model, which is just gonna be rc for s, a divide- times- divided by the number of times we've been in that state-action pair. That's just the average reward for that state-action pair. And then our transition model is also just going to be the number of times a, s prime divided by the number of times you've been in that state-action pair. We're just gonna define our empirical transition model and our empirical reward model. And it doesn't matter how we initialize things that we haven't seen at all. But you can treat them as uniform. Okay. So we're gonna do this for all s, a. And then we're gonna compute some new Q functions. Okay. And we're gonna compute some new Q functions where we do this. Where we take our empirical models and we also add in a reward bonus term that depends on beta and the number of times we've tried that state action pair. And we can do value iteration. That's what I'm doing here. But you could solve it however you'd like. But the main idea here is that we're gonna use our empirical estimates of the reward model and the transition model by just averaging our counts, or averaging the rewards we've gotten for that state-action pair. And then we're gonna add in this as a reward bonus. And note at the beginning of this reward bonus can be, like, it can be infinity, so you can- because if we have no counts for that, so then we can just initialize for, for any Q s,a. So for all s, a such that nsa of s,a = 0. You can just set this to be Q-max. So to deal with if you haven't sampled that state-action pair yet. So that means anything for which you haven't sampled it yet is gonna look maximally awesome. And anything else is going to be a combination of its empirical average parameters, plus a reward bonus. And that reward bonus is gonna get smaller as we have more data. So I'll put this on here where it will be neater. Um, so this is the reward bonus. And what you can see here is that over time that's going to shrink. Over time, um, you're going to get closer and closer to using the empirical estimates for a particular state action pair. But for state action pairs you haven't tried very much, there's going to be a large reward bonus. So the- the cool thing about this is that, um, we can think about whether it's PAC. So I'll just take one more minute, which is in an RL case, ah, an algorithm is PAC if on all but N time steps, the action selected is epsilon-close to the optimal action, where N is a polynomial function of these things. The number of states, number of actions, gamma, epsilon, and delta, this is not true for all algorithms. Greedy is not PAC. Greedy can be exponential. Um, we might talk about that on Wednesday. So no. And the nice thing is that the MBIE-EB algorithm I just showed you is PAC. So what does it PAC in? It means that on all, but this number of time-steps, well I'll just circle it. So this is sort of a large ugly expression, but it is polynomial in the number of states and actions. It's also a function of the discount factor and the epsilon. 
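Pulling the MBIE-EB procedure just walked through into one place, here is a rough sketch of a single step. The helper name, the fixed number of value-iteration sweeps, and clipping counts at one to avoid division by zero are my own illustrative choices; the bonus of beta over the square root of n(s, a), the MLE reward and transition models, and keeping unvisited pairs at Q-max (1 / (1 - gamma) for rewards in [0, 1]) are as described above, and the exact constant inside beta may differ from the paper.

```python
import numpy as np

def mbie_eb_step(counts_sa, counts_sas, reward_sums, q, s, a, r, s_next,
                 gamma, beta, q_max, n_vi_iters=50):
    """One update of model-based interval estimation with exploration bonus.

    counts_sa[s, a]       : visits to (s, a)
    counts_sas[s, a, s']  : observed transitions (s, a) -> s'
    reward_sums[s, a]     : total reward received from (s, a)
    q                     : current Q table, initially q_max everywhere
    """
    # Update counts and reward totals with the new transition.
    counts_sa[s, a] += 1
    counts_sas[s, a, s_next] += 1
    reward_sums[s, a] += r

    n = np.maximum(counts_sa, 1)                  # avoid divide-by-zero
    r_hat = reward_sums / n                       # MLE reward model
    p_hat = counts_sas / n[:, :, None]            # MLE transition model (unvisited rows stay zero;
                                                  # their Q is overridden below, so it does not matter)
    bonus = beta / np.sqrt(n)                     # exploration bonus, shrinks with data

    # Recompute Q by value iteration on the empirical model plus bonus.
    for _ in range(n_vi_iters):
        v = q.max(axis=1)
        q = r_hat + bonus + gamma * (p_hat @ v)
        q[counts_sa == 0] = q_max                 # unvisited pairs stay maximally optimistic
    return q
```

Actions are then chosen greedily with respect to this Q, breaking ties randomly; the bonus shrinks for well-visited pairs while unvisited ones keep looking maximally awesome.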
In general, if you want to be closer to optimal, it's gonna take you more data to ensure that you're close to optimal. So it's inversely dependent on epsilon, ah, and it's polynomially dependent on the number of states and actions. And this says on all but, ah, this many time steps, your algorithm is gonna be taking actions that are close to optimal. So this is pretty cool. It says that just by using these average estimates plus tacking on a bonus term, and then computing the Q functions, um, you can actually act really well on all time steps except for a polynomial number. Okay. And then, um, I put up the result for that here, and I'll just say briefly that on some sort of hard, constructed, simple toy domains, these types of algorithms do much better even than some other ones that are provably efficient. And they can do much, much better than things like greedy. So algorithms like MBIE-EB, and MBIE, which is a related one that uses confidence sets, um, can do much, much better than the sort of "be optimistic until confident" approaches. And these ones are generally much better than greedy. So these types of optimistic algorithms can empirically be much better, as well as being provably better. And on Wednesday we'll start to talk about how to combine them with generalization. Thanks. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_7_Imitation_Learning.txt | All right. So, homework two, you guys are probably starting to work on, and we're having sessions this week that are good for if you don't have background in deep learning, and feel free to reach out on Piazza. Oh, yeah, I just have a question about the project. I just want to make sure, it seemed currently with the note on Piazza that like, I-50 was the default suggested one. Can we also do something outside of that? Oh, yeah, no, question is a great one. Yeah, there's a, the post on Piazza, you're always welcome to design your own project. That's always completely fine, and a number of you have come talk to me about those, or talked to other TAs. These are an additional option. So, if people are interested in looking at either the default project which we released yesterday, which has to do with bandits and warfarin, or if you want to look at some of the suggestions from senior PhD students or postdocs, those are great opportunities. Particularly, I think if you haven't ever done reinforcement learning before, it's often I wouldn't expect at three weeks in that you'd be able to define a state of the art project. So, if you're interested in learning more about RL research, then it can be a really great opportunity to look at some of those suggested projects then reach out to people. All right. The other thing that I just wanted to do a friendly reminder about is we explicitly post FAQs for each of the homeworks. Um, and as some of the TAs are mentioning that some of the students coming into office hours right now might not have had a chance to look at those. So, if you ever have a question when you're going over the homework, the first thing to do is to go to Piazza and particularly to look at those pinned notes at the very top which have very common FAQs about the assignment. So, make sure to read those before you go to office hours, and then, of course, feel free to come to office hours as well. But those are a really good resource to look at. Any other questions? All right, so just in terms of where we are in the course right now, we went through DQN on Monday. We're gonna talk today some a bit more about, we can wrap up some of the stuff that I had to rush through at the end of Monday in terms of deep Q-learning and some of the recent extensions. Then we're gonna talk some about imitation learning and large state spaces before next week starting to talk about policy gradient methods. So just to, we'll start off with sort of a refresher from what DQN was doing, DQN was this idea of combining between Q-learning and using deep neural networks as function approximators. And the two key sort of algorithmic changes compared to prior work was that, they used experience replay and fixed Q targets. And by fixed Q targets there that was meaning that when we used our r, rt plus gamma, max over a, Q of sta, st plus one, right. That the weights that were used for that Q representation were fixed for a while. So maybe we'd update those every 100 steps or every 50 episodes or some interval. And, uh, so this provided a more stable target for supervised learning because the supervised learning part again is that we were had this combination of, of we want to have weights and we want to minimize this error versus our current estimate, sort of minimizing the TD error. So the way that this preceded is that we'd restore transition in a replay memory buffer. 
We do mini batches, where we would sample a bunch of state extra word and next state tuples and then do these backups where we're sort of updating our Q function and refitting our Q function. Um, and like a lot of the linear value function methods we saw before, it uses stochastic gradient descent. And the really cool thing about this is that they did it on 50 games. They used the same architecture for those 50 games and the same hyper parameters and they got human level performance. So we've talked quite a lot about that before. And then we sort of briefly talked about three sort of major extensions to that in the immediate following years. And again, there's been a lot of extensions and a lot of work in deep reinforcement learning right now. The three of them were as follows. The first was Double DQN. And we talked before we got it, the function approximation talking about the issue with maximization bias, that when you're using the same representation to pick an action and estimate the value of that action, you can get into a maximization bias problem. And the way that that's avoided in Double DQN and I wanted to go over this again because I had a couple questions after class. We didn't have much time to discuss it. Is what happens is we have a current queue network which is parameterized by a set of weights and that is what is used to select actions. Just to be clear here, often we're doing some sort of E-greedy method. So we'd used the current Q-network weights to decide on the best action. And we would pick that with one minus epsilon probability. And then there's an older Q-network that is used to evaluate those actions. So if we look at how we're gonna be changing our weights, we're gonna be having an action evaluation using these other weights, w minus and then action selection using W. So when you look at this, that might start to look pretty similar to what DQN was doing because DQN was saying we're gonna use a fixed set of weights for, for these target updates. So what DQN was doing was this, r plus Gamma Q. I'll write the max in, max a, Q of s prime, a, w minus, minus the current s. So in the normal DQN, they were also using a w minus. But here in a Double DQN, it can be a little bit different. And the reason it's a little bit different than what we just saw is that you can maintain two sets of weights at all times and you can flip between them on every step or every batch. So when DQN was introduced, it was more of an idea of you fix your weights. Let's say, from time step t to time step t plus 100, use the same weights that whole time period for your target. In Double DQN, you don't necessarily have to do that. You can flip back and forth between these which is what we'd seen with a double Q-learning that on, you know, on step one you can use weights one to act and weights two to evaluate. On step two, you could do weights two to evaluate and weights one to act. So it means that you can propagate information faster. So instead of waiting 50 episodes or 100 episodes to update, um, the weights that you're using for your target, so again this is your target, you can flip back and forth between them which allows you to update both networks a lot have, update both set of net- network weights. The networks are identical. Yeah. Um, in general, when you're evaluating these kinds of different approaches to improve these techniques, is there, is there a trade off between how fast information propagates and then how unstable it is? 
So we might find that if the system we're trying to learn on is itself, relatively well-behaved and stable, we want to pick something that has faster information propagation but if it's highly noisy or unstable that we need to do something that's more conservative. uh, makes a good question which is, you know, is there generally a trade-off in terms of these methods between sort of characterizing the stability of the system and then how fast you can propagate information back? Unfortunately, I feel like it's not very well characterized. So I feel like most of the time, these are heuristics and people evaluate them, they evaluate them with a lot of different benchmarks and that's sort of the way we get generalization. But I don't think that there's a good characterization systematically of how to characterize the stability of the system, with these deep neural networks, particularly in the context of RL. So there's a lot of great opportunities for theoretical analysis here too or just sort of more formal understanding. Right now, I think we're at the level of saying this either just seems to consistently work a bunch across Atari games and maybe MuJoCo or it doesn't try to characterize the, the successes. Yeah. Is it Yes. I was wondering if we kind of get a bit more about the switching then. You're representing like why, how. Yeah. So, question is about, you know, how can we switch between these w and w minus, and how would, you know, um, why and how would you do this? So, in the DQN setting, um, you could set w minus. So at the beginning, w minus is equal to w on time step zero. And then, in DQN you would keep w minus to be the same maybe for the next 50 episodes, but you'd be updating w. And then 50 episodes in, you would update w minus. The downside about that which we talked a little bit about before is that you're not using the information you're getting to update this estimate. Okay. Because you're using that old stale set of w's. So essentially, you're just not using the information you've got over those 50 episodes to update what would happen if you were caught in S prime and then took action a. So, an alternative would be to flip between, let's say, instead of thinking of this as w and w minus, then you can think of it that way. You can just think of maintaining two different sets of weights. And imagine, um, I'll say, this is t time equals one, time equals two, time equals three. So, imagine that we're just picking between what are the weights that we used to select an action and the weights that we use to evaluate the action. So, on the first time step, you could use this to evaluate and this to the- to select the action, and then you could flip it back and forth. So that essentially means that, both sets of weights are getting updated very frequently. So, instead of updating only one of the- one of them every 50 episodes, you're- you're continuing to propagate that information back quickly. And there's of course tons of chart choices here about how frequently do you update, you know, when do you switch back and forth between these. Um, you can think of all of those as hyper-parameters you can imagine tuning. But this is instead of keeping that- keeping this target fixed for 50 steps, um, or, you know, n steps these are all parameters, you could flip back and forth between them which is what double Q-learning did before. Yeah, . 
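A minimal sketch of the two target computations being contrasted here, written as tiny NumPy helpers for clarity (these are illustrative, not the assignment's implementation): vanilla DQN both selects and evaluates the maximizing next action with the target weights w-minus, while Double DQN selects with the online weights w and evaluates with the other set.

```python
import numpy as np

def dqn_target(r, q_target_next, gamma, done):
    """Vanilla DQN: w-minus both selects and evaluates the next action."""
    return r + (1.0 - done) * gamma * np.max(q_target_next)

def double_dqn_target(r, q_online_next, q_target_next, gamma, done):
    """Double DQN: online weights w pick the action, the other set evaluates it."""
    a_star = int(np.argmax(q_online_next))                    # selection with w
    return r + (1.0 - done) * gamma * q_target_next[a_star]   # evaluation with w-minus
```

Here q_online_next and q_target_next are the vectors Q(s', ·; w) and Q(s', ·; w-minus) for the sampled next state; flipping which weight set plays the online role on alternate updates, as discussed above, lets both sets keep learning while still decoupling selection from evaluation.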
Like in the normal DQN settings when we're using a target weight, uh, wouldn't that target weight like for action selection or for like evaluation still be another queuing network. So, how is that was different from double DQN except for the fact that you're searching double-Q more often? It is. Or more of the- what question is, how different is this from the previous year? It's almost identical. So, I think the- the main difference here is that you could switch, uh, as long as you're maintaining some set of weights for your target. This is saying you could sort of switch. Now you really just have- you've the same network, two sets of weights that you have to maintain in memory. And what this is saying is, you can switch back and forth with those very frequently, um, and help avoid the maximization bias during that time. It doesn't always work, it frequently helps. There is still the issue with stability, um, but it can be better and it avoids the maximization bias. We also talked about prioritized experience replay. Um, we went through a small sort of tabular example where we looked at the impact of doing backups. So, if we have this experience replay buffer of SAR S-prime tuples, which one should we use to do our backups and how do we propagate that information back? Um, and the- in this algorithm, they say the- or in this paper, they talked about the fact, um, if you can do this optimally. In some cases you might get an exponential speedup and convergence, uh, but it's hard to do that, it's computationally intensive. So, what they proposed here is to prioritize something based on the size of sort of the DQN error. The difference between the current estimate of it and your sort of target estimate that you're looking at. And so, we talked about how you could use that as a priority, um, and it could be a stochastic priority, uh, to try to select items. And we also talked about the fact that if you set Alpha equal to zero, this becomes uniform and so then, there's no particular prioritization over your tuples. Another thing that we had almost no time to talk about was dueling. So, dueling was a Best Paper, um, from, uh, two- 2016, um, in ICML. Um, let me just give a little bit of a refresher on this because we went through it very, very fast. So, the- the intuition here is that the features that you might need to write down the value of a state might be different than those need to specify the relative benefit of different actions in that state. And you want to understand the relative benefit of actions in order to decide what, what your policy should be. So, um, looking at things like game score, it's obviously very relevant to the value. Um, that it might be- you might want other features try to decide what actions to do right now in a game. And so, the advantage function that came up, uh, that was designed by Baird a long time ago. And this is the same Baird that had that counter example to show why value function approximation can be bad. Um, so, uh, Baird's work before it said, well, look you can decompose, um, if you think of your Q function which is representing the value of a policy starting in state and taking a particular action versus the value of just that state. So, this is sort of implicitly Q pi, S pi of S. So, like what is the difference between- difference between taking this particular action versus just following your policy from the current state? And he called this the advantage. What's the advantage of that action for that state? 
So, in dueling DQN, instead of having one network that just predicts Q functions, they use an architecture that separates into predicting values and predicting these advantage functions and then adds them back together with the idea being that you might get different sort of features here and here. So, you have to decouple for a little bit to make sure that you're capturing the features that are relevant to capturing the salient things you want to look at for Q's. Now, one thing that I mentioned very briefly last time is that, um, is the- is the advantage function identifiable? And what do I mean by that in this case? I mean that if you have a Q function which is what ultimately we're going to use, um, can we decompose it into a unique a pi and v pi. So, here ultimately we want a cube. And the question is, if we then in our architecture decomposing this into a value and an advantage, is there a unique way to do that? Is there? Um, but there isn't. So, if you- if you add a constant to both Q and V, um, then you can get the same advantage function. So, there's not a unique, you can always shift your um, shift your awards by a constant and that's not going to change your policy, it will change your value function. Um, I- so, there's lots of different ways to decompose your advantage function and your values, it's not a unique decomposition. So, the way that they defined it there is to say, well, let's force the advantage for state and action to be zero if A is the action taken. So, here they compare it to the action that's taken if you're using sort of say, a greedy approach. Um, and this is really just a way to- all of this we can think of it in some ways as an analogy to supervised learning. And so, we want to have a stable target and we want to be able to learn these advantage functions and these value functions if we have lots and lots of data about them. And so, this is sort of choosing a particular fixed point for how to define the advantage function. And then, they also said, well, empirically you could just use the mean too. So, you could just average over your advantage functions, it's more of just a heuristic approach. And what they find, again, so, we sort of we're layering up these additional techniques. We started with DQN, then we thought about adding, um, double-Q learning to DQN and then we thought about adding prioritized replay. And then this is dueling. And what they find is dueling versus double DQN with prioritized replay is a lot better most of the time. Now, let me see if I can find Montezuma's. Yep. So, for Montezuma's this new method is basically no better. Like none of these methods are really tackling hard exploration problems. But they are doing better ways of sort of propagating information in the network and trying to change the way we're training the network. Yeah, questions about that, and name first please. Can you speak a little bit louder, I'm unable to hear you well. Okay. I'll try to speak a little bit louder. Can- can people in the back over there hear me or is it just him? Okay. Good. All right. So, these were three of the methods that ended up making a big difference. We talked very briefly about practical tips. Um, I won't go in these too much. The main thing is just that we try to actively encourage you to build up your acuity representation first before you try on Atari. Um, you can try different forms of losses. 
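As a brief aside before the remaining practical tips, here is one common way to write the combination step of the dueling decomposition just described. This is a sketch, not the paper's exact architecture: the network is assumed to output a scalar value V(s) and a vector of advantages A(s, ·), and subtracting the max or the mean of the advantages is the identifiability fix mentioned above.

```python
import numpy as np

def dueling_q(value, advantages, use_mean=True):
    """Combine a state value and per-action advantages into Q estimates.

    Subtracting the mean (or the max) of the advantages pins down the
    decomposition: otherwise adding a constant to V and subtracting it
    from A would leave Q unchanged.
    """
    advantages = np.asarray(advantages, dtype=float)
    baseline = advantages.mean() if use_mean else advantages.max()
    return value + (advantages - baseline)

# Example: dueling_q(value=2.0, advantages=[0.3, -0.1, 0.0])
```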
Uh, learning rate is important, but in this case, in our assignment we're going to be using the Adam optimizer which means you don't have to worry too much about it. There's a issue of sort of trying different exploration schemes, is something that we're going to talk about later in this class. So, for right now we're still thinking about just simple E-greedy approaches. Um, a nice paper that came out, I think it was start of 2018, um, was Rainbow which was a paper that basically just tried to combine a whole bunch of these recent methods, um, to see really how big of an improvement do you get. Uh, now again, note in this case and we'll come back to this in just a couple of slides. This is a lot of data for a lot of experience in the world, 200 million frames of experience. But they developed an algorithm called Rainbow that combines a lot of the things we've just been talking about, double DQN, prioritized, and dueling, as well as some other recent advances. Um, noisy is one that also tries to do some different forms of exploration. And so, they found that kind of by adding these improvements together, then you could get a significant improvement. I think this is a useful insight because often it's not clear whether or not these different gains are additive or if they're just, um, sort of, um, you're, you're- they're kind of doing the same thing but maybe in a slightly different way. And so, it's nice to see that in some of these cases these different sort of ideas are additive in terms of the resulting performance gain. Um, these aren't sort of- this is still a very large amount of data. [NOISE] Okay. So just to summarize which we're wrapping up where we are with model-free, ah, deep neural networks for RL right now. Uh, they're very expressive function approximators. Uh, you should be able to understand how you represent the Q function, and you could do some Monte Carlo-based methods or TD style methods. Um, and- and at this point, it's sort of good to make sure you understand how you would do that with tabular methods, with linear value function methods, and with deep neural networks. So it's sort of, algorithmically, it looks very similar across all of those but then you- in some cases, you have to do this step of doing function approximation and in other cases you don't. Um, and then it'd be good to just make sure you can sort of list a few extensions that help beyond DQN um, and why they do. All right. So now let's go back to our, um, sort of, high level, uh, view of what we want from the reinforcement learning algorithms, these are algorithms that are sort of doing optimization, handling generalization, ah, doing exploration and doing it at all statistically and computationally efficiently. And we've just been spending quite a lot of time on looking at generalization as well as optimization. Um, but we haven't talked very much about efficiency. So one of the challenges is- is, that, um, if you want to define efficiency formally like in terms of how much data an agent lead, needs to learn to make a good decision. Um, there are hardness results, ah, that- that are known so our lab has developed some, uh, lower bounds, other people have too. Um, uh, I think we now have basically tight upper and lower bounds for the tabular MDP case, um, which indicate that there's some really pathological MDPs out there for which we just need a lot of data, lot of data, though, you know, would not scale very well as you start to go up to really huge domains. 
So some of these problems are really hard to do. Formerly, you would just need a lot of exploration. You can do something much better than E greedy but we'll talk about that soon. But even when we do those much better things than E Greedy we can prove that it's still really hard to learn in those, we still might need a lot of data. So an alternative is to say well there's lots of other supervision that we could have in the world to try to learn how to do things. Um, and so how can we use that additional information in order to sort of speed reinforcement learning. And so what we're going to do, is talk about some- about imitation learning today. And then we're going to start talking about policy search and policy gradient methods next week. And those, you can also think of as another different way to impose structure, um, because in policy gradient methods, you always have to define your policy class. Sometimes that can be a really rich policy class, and so maybe that's not too much of a limitation, um, but other times, you're encoding domain knowledge by the class that you- that you represent. Okay. And in particular, we're going to be thinking about imitation learning and large state spaces which is exactly the place where you might hope to benefit from, um, additional help or supervision. So if we think about something like Montezuma's Revenge, um, there's some nice work on looking at sort of how far did DQN get in this case. So Montezuma's Revenge, for those of you who haven't played, it is sort of a, um, a long- very long horizon, uh, game in which you're sort of trying to navigate through this world, and like pick up keys and make decisions. Um, and it involves a lot of different rooms. So you can see here what the outline of the, all the white squares are basically rooms. And on the left-hand side, um, sort of a DQN that was trained for 50 million frames, um, only gets through the first two rooms. Like, it's just doing very badly. It's not making very much progress. Um, whereas on the right-hand side, we see something which is explicitly trying to explore. Um, and it uses some of the techniques that we'll be talking more about later. But notice that it still doesn't get all the way through the game. Um, and so I think this sort of illustrates the fact that some of these games are really hard. Um, there has been some really nice additional progress since, um, ah, both, ah, with me and Percy Liang's lab we now can basically solve Montezuma's. Um, and there's also been some really nice work from Uber AI lab on solving Montezuma's. But a lot of the places that people originally got traction on this was by starting to use imitation learning and demonstrations. Um, so in particular, if we think about cases where RL might work well, RL works, you know, pretty well when it's easy or certainly we've seen a lot of success. So far when data is cheap and parallelization is easy, and it can be much harder to use the methods that we've talked about so far when data is expensive, um, and when maybe failure is not tolerable. So , if you tried to use the methods that we just described to learn to fly like a remote control helicopter. Typically require a lot of helicopters [LAUGHTER] because it would be very expensive. And so there's many cases where this type of, um, performance is just not gonna be practical. So one of the benefits is that if you can give the agent a lot of rewards, you can shape behavior pretty quickly. 
Um, so one of the challenges in Montezuma's Revenge is that reward is very sparse that, you know, the agent has to try lots of different things before it gets any signal of whether it's doing the right thing. Um, and where do these rewards come from? I think it's generally it's actually a really deep question. Um, but for right now, let's think about sort of just even the challenge of specifying rewards. Um, so if you manually design them, that might be pretty brittle depending on the task. Um, and an alternative is just to demonstrate. So if you had to write down the reward function for driving a car, it's quite com- complicated, like you don't want to hit roads, here or hit the ro- hit, um, people. You don't wanna, um, drive off the road. You want to get to your destination. And so it's a very complicated reward function to write down. But it's pretty easy for most of us to just drive to a destination and show an example of maybe an optimal behavior. So that's sort of the idea behind learning from demonstrations. Um, there's been lots and lots of work on this but since people started thinking about learning from demonstrations or imitation learning. I would argue probably this was started roughly 20 years ago, around 1999, 2000 was a paper which started to think about learning rewards from demonstration. Um, but then there's been lots of applications to it since. So thinking about it for things like highway driving, um, or navigation, or parking lot navigation. There's a lot of these cases particularly in driving right now, but, ah, where people have been thinking and- and robotics too. To think about how do you do, um, demonstrations of like how to pick up a cup or things like that to try to teach robots how to do those tasks. Um, there's also some really interesting questions too about like, you know, how do you, uh, do things like path planning or goal inference, and again these sorts of cases where it's quite complicated to write down a reward function directly or it might be brittle. And the problem with brittle reward functions is that your agent will optimize to that and it may not be the behavior that you wanted. So- so the setting from learning from demonstrations, and- and today I'm going to be somewhat informal about whether I call things learning from demonstrations, um, there's also inverse RL. And there's also imitation learning. And there are sort of differences but a lot of these things are somewhat interchangeable. Most of this is about the idea of saying that you're- you have some demonstration data, and then you're going to use it, uh, in order to help boot- either bootstrap or completely learn a new policy. So the idea is that you might get an expert and maybe they're a perfect expert or maybe they're a pretty good expert to provide some demonstration trajectories of, um, taking actions, um, in states. And in many cases, it'll be easier for people to do this but it's useful to think about when it's easier to specify one or the other, and what situations are- are common for each. So what's the problem setup? The problem setup is that we have this state space and action space. Um, some transition model that is typically unknown, no reward function and instead sort of a set of teachers demonstrations from some particular we assume for now optimal policy. Um, and the behavior cloning we're gonna say, how do we directly learn the teacher's policy? So how do we match, how do we get sort of an approximation of pi star directly from those demonstrations? 
Inverse RL is typically about saying, how can we recover the reward function? Once we have the reward function, then we can use it to compute a policy. Uh, and that often- that last step often is combined with the apprenticeship learning. So that we're both trying to get that reward function and then actually generate a good policy with that. In some cases, you might just want the reward function, um, can anybody think of a case where you might be interested in just the reward function, maybe you don't want to recover the policy, but you're just curious about what the reward function is of another agent. Okay. If you're trying to understand, say, [inaudible] Yeah, how the environment [inaudible]. Yeah, I think is a great example. So, in a lot of, um, uh, science, you know, biology et cetera you often want to understand the behavior of organisms or animals or things like that. And so, if you can just look at their behavior, you could say track monkeys or things like that and then use that to back solve, like, what is their reward function. What are the- the goals or preferences. Um, I think there's a number of cases where that's useful, and maybe down the line, you know, maybe there's some optimization that'll happen but- but generally often there it's just about understanding, like, what is the goal structure or what is the preference structure of- of the organism or individual. That could happen with people too that you might want to understand, like, the choices people are making in terms of nav- you know, um, uh, commuting or in terms of buying preferences or things like that. Maybe later you want to optimize for that but also you're just curious about how- how do people's behavior reveal the, sort of, um, an underlying reward structure, underlying preference model. Yeah, Imitation learning just using the teacher's demonstration set, like, an upper bound, I guess, or, uh, have there been cases such that, yes, that, like, agent learns to perform better than the actor did. Like we find a new path side. Yeah, asked a nice question of, like, is the expert's behavior an upper bound or are there cases also where the agent can go beyond this? We're not gonna talk too much about this today, but there's a lot of work right now on combining imitation learning with RL. So, um, uh, there's a lot of work on say like inverse RL plus RL. Where for example, you might use this to s- bootstrap the system, um, and then your agent would continue to learn on top of this. There's also some nice work from Pieter Abbeel's Group, um, where they looked at assuming that the expert was providing like a noisy demonstration of an optimal path and then the goal is to learn the optimal path not the noisy demonstration of it. So, often you do want to go beyond the expert. There's limitations to that, and we'll talk about that in a second actually. What- what are some of the limitations that you might have if you don't get to continue to gather data in the new environment. Okay, so let's start with behavioral cloning which is probably the simplest one, um, because essentially in behavioral cloning we're just gonna treat this as a standard supervised learning problem. So, we're going to fix a policy class which means, sort of, some way to represent, um, our mapping from states to actions. And this could be a deep neural network. It could be a decision tree, could be lots of different things. Um, and then we're just going to estimate a policy from the training examples. So, we're just gonna say, we saw all these times. 
We saw a state and an action from our expert and that's just our input output for our supervised learning model. And we're just gonna learn a mapping from states to actions. And early on, so this has been around, uh, really for quite a long time and I, uh, should have said more like 30 years. Um, there were some nice examples of doing this. So, ALVINN, um, was a very early, uh, paper, uh, and system about thinking about, uh, driving on the road. See, it was a neural network and it was trained, uh, at least in part using behavioral cloning or supervised learning to imitate trajectories. Um, okay, so let's think about why this might go wrong. Um, and to first, let's think about what happens in supervised learning. So, in supervised learning, um, we're gonna assume iid pairs s, a and we're gonna ig- ignore the temporal structure. So, we're gonna- if we're just doing supervised learning in general, we just imagine that we have these state-action pairs and then maybe we learn some classifier or, um, yeah, let's just say a classifier, to classify what action, you know, we should do. And it might have some sort of errors. It might have errors of, uh, well, you know, with probability epsilon. And so if we were thinking about doing this over the course of T time steps than we might have, you know, sort of, an expected total number of errors of epsilon times T. So, let's just take a second and think about what goes wrong when we're doing this in the supervised learning or in the- in, uh, in the RL context. So, by the RL context, I mean, the fact that the decisions that we make influence the next state. So, let's just take, like, one minute maybe talk to your neighbor and say, like, what do you think could be the problem with behavioral cloning in these sorts of scenarios. And if, uh, that's a simple thing to think about then maybe think about how you would address it. So, what you might do in that case if there is problems that happen when we try to apply standard supervised learning to this case where it's really underlying, uh, an MDP. [OVERLAPPING]. All right. So, first of all let's just make a guess. I'm going to ask you guys whether you think the, um, the total expected errors if we're doing this in- in where the underlying world is an MDP, is gonna be greater than or less than the number of errors that we'd expect according to a supervised learning approach. Um, so who thinks that we're going to have greater expected total errors? Okay. Who think we're gonna have less? Who- many people must be confused. [LAUGHTER] Okay. So, how about somebody who thinks the answer is that we're gonna have greater. Maybe somebody who thinks that's the case could say why they think we might have more errors if the real world is MDP and we've tried to do the supervised learning technique. Yeah, and name first, please. My idea is that kind of, like, uh, as a human you're planning a more long-term horizon or, like, you're doing one step and then you know how that action is gonna then give you, like, another sequence, but since we've just had taken a state and action and then, like, predicting. Right from there we can't plan that long-term sequence, so it's gonna, like, compound our errors as we go. That's right. So, when says that, correct, we will compound those errors, and one of the- the challenging aspects of this is that the errors can compound a lot. Um, and this is because the distribution of states that you get can- depends on the actions that you take. 
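A minimal sketch of behavioral cloning as plain supervised learning, as just described. The logistic-regression policy class and the scikit-learn interface are illustrative choices of mine, not something specified in the lecture; any classifier over (state, action) pairs would fit the same template.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def behavior_cloning(demo_states, demo_actions):
    """Fit a policy directly from expert (state, action) pairs.

    demo_states : array of shape (n, state_dim), states the expert visited
    demo_actions: array of shape (n,), the discrete actions the expert took
    """
    policy = LogisticRegression(max_iter=1000)
    policy.fit(np.asarray(demo_states), np.asarray(demo_actions))
    return policy   # policy.predict(state[None]) gives the cloned action
```

The catch, as discussed next, is that the iid assumption behind this fit breaks once the cloned policy's own mistakes steer it into states the expert never visited.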
So, if you think about this like a navigation case, like, if I was supposed to go out the right-hand door, um, and I watched go out this door and I saw that he went right. And I- and I, you know, tried to learn a supervised learning, uh, classifier for what I should do here. But my supervised learner was a little bit broken, and so instead of going right here, I actually went left. Well, now I'm in part of the room which never went to because he was going over there to go to the door. And so, like, now I have no idea what to do here. All right, like, now I'm in the state distribution, it's something that I haven't seen before, it's very likely that I'm going to make an error. In fact, my probability of error now may not be- my probability of error here is under-assuming the fact that the data that you get in the future is the same distribution as the data you got in the past. Our supervised learning guarantee is generally safe when we have them. Um, uh, that I- if- if your data is- comes from an iid distribution, then in the future this is what your test error will be. The problem is, is that, in reinforcement learning or markup decision processes, your actions depe- determine what is the data you're gonna see. So, the fact that instead of following the right action here, um, I went over here. And now I have no data and my data distribution is different, so now there's no guarantees from my supervised learning algorithm because my data is different. It's never been trained on anything like that so we can't generalize. So that's exactly the problem. Um, and this was noted by, like, Drew Bagnell's group from CMU in 2011 of arguing that, you know, this is a really big problem for what's called behavioral cloning. So, even though there had been some nice empirical demonstrations of it, uh, he and his former students, Stephane Ross indicated why this might fundamentally be a very big problem an- and sort of ill- uh, demonstrated some things that people had sometimes seen empirically. The idea is that as soon as you deviate, so this is the time where you make your mistake, then essentially the whole rest of the trajectory might all be errors. You might make T more- T more errors. Um, and so that means that the total number of errors that you make is not expected, uh, uh epsilon times T but it's epsilon times T squared, it's much worse. And it's really due to these compounding errors leading to- you to a place where your distribution of states is very different than what you have data about. And this issue will come up again, again, this sort of equiv- this thinking about what is the distribution of states that you get, um, under, you know, the policy you're following versus the policy that you want to follow. That issue comes up again, and again in- in reinforcement learning. So, um, it really is just a foundational issue that, you know, what is the data distribution you're gonna get under the policy that you've learned versus the true policy and looking at this mismatch. And so once you go off the racecourse you're not gonna have any data about that. So, one of the ideas, uh, that Drew Bagnell and his students came up with to think about this was to say, well, what if we could get more data? So, what if when I, you know, my- I- I only have a little amount of data to start with. I've learned my per- my supervised learning policy to say, you know, what shall I do in each state, and sometimes I make a mistake. So, sometimes, you know, I- I go out that way, my race car drives off the racecourse. 
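In symbols, the contrast just stated is roughly the following (a paraphrase of the bound described above, with epsilon the per-state error rate on the expert's state distribution and T the horizon):

```latex
\underbrace{\mathbb{E}[\text{errors}] \le \epsilon T}_{\text{i.i.d. supervised learning}}
\qquad \text{vs.} \qquad
\underbrace{\mathbb{E}[\text{errors}] = O(\epsilon T^{2})}_{\text{behavioral cloning over horizon } T}
```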
What if I could know what to do in that, in that state? So, I reached a state that I don't have any data about, what if I could ask my expert, hey, what should I do now. So, like, I go over here and I'm like, oh my god I don't know, you know, what to do now. And then you ask your expert, and they're like, oh just turn right. It's fine, you can still get out of the right door, it's okay. Um, so if you could ask your expert for labels, well, now you're getting labels about states that you are encountering. And as long as all the- as long as you have data that covers all the states that you're gonna potentially experience, then your- then your supervised learning should do pretty well. But the bigger issue tends to come up with the fact that you are encountering states that you don't have any coverage of in your training data. So, the idea of DAGGER which is a data set aggregation is essentially you just keep growing your data set. So, what happens in this case is you start off, you don't have any data. You initialize your policy. You follow your policy. So, in this case we're gonna assume that you have an expert, um, some expert policy. So, that might be, you know, an expert is taking some steps and then also your other policy you can- you project a trajectory. So, sometimes you're following one policy sentence you're at. And, ah, and then what you get is you get this state. You get to go ask your expert for every single state, what would you have done here? So for every state that you encountered in that trajectory. And then, you add all of those tuples to your dataset and you train your supervised learning policy on that. So, everything inside of your dataset that you're using to train your policy on, is with an expert label and you're slowly growing the size of that dataset. So, the idea is as you're getting more and more experts, ah, more and more labels of expert actions across the the trajectories you've actually seen. And there's nice formal properties for this. So you can be guaranteed that you will converge to a good policy, um, ah, by following this, um, under the induced state distribution. Yeah, . Just to confirm, this is assuming we have pi star over all space, like when we're doing the case where we don't have the expert to go to. Yeah, no this is a great question, so when I was looking at this just now, I would have to double-check. I think this is assuming your expert can give you the action there. It doesn't assume that you have explicit access to pi star. Because if you had explicit access to pi star, you wouldn't need to learn anything. So, I think in this case it's like tossing a coin about whether the expert just directly gives you the action in that case or whether you follow the other policy. Which because you have to have the expert around all the time anyway, because they're always going to have to eventually tell you what they would have done there. Get- that's how you, excuse me, that dataset. So, that's- there was one. I'm and, um, I'm curious about how is this done efficiently? I can imagine that for some situations having a person set the command line while your, your GPU trains and inputs actions won't be efficient. How do people generally do this in such a way that it doesn't require manual intervention? Yeah, 's question is a great one which is, um, you know, well, this requires you have an expert either around for every single step or like at the end of the trajectory that can go back and label everything. And that's incredibly expensive. 
And if you're doing this for, you know, millions and millions of time steps that's completely intractable. I think that's why, um, this, this line of research has been less influential in certain ways than the, some of the other techniques that we're going to see next in terms of how you do sort of inverse RL. So, what this is really assuming is that you have this human in the loop that's really in the loop, um, ah, as opposed to just asking them to provide demonstrations once and then your algorithm goes off. And I think that practically in most cases it's much more realistic to say, you know, drive the car around the block 10 times, but then you can leave and then we'll do all of our RL versus saying I need you to be in the car or, you know, like, label all of the trajectories that the car is doing and keep telling you whether it's right or wrong. I think this is just very label-intensive. It's very expensive. So I think that, um, in some limited cases, like, if your action rate is very slow, like, if your action rate is, you know, making decisions in the military or you know, others that are at a very high level, with very sparse decisions, this can be very reasonable. Because you could basically throw, you know, infinite compute at it before, between each decision-making. If you're doing this for sort of real-time hertz-level decisions, I think that's very hard. Okay, yeah, . Will this be compatible with like an expert taking over the system. Right, like, somebody sitting behind the wheel letting an agent drive and then, uh, like, recognizing that there's an emergency situation coming up and taking the wheel. Yeah so, like, um, so what said, is this compatible with sort of a- an expert taking over? Yes. I mean I think. And that might be an easier way to get labels. So you might say if you have an expert there, every action that's taken that's the same as the action the expert would take. Maybe they don't intervene. Otherwise, they only provide labels or interventions when it would differ. But it still requires like an expert to be monitoring, which is often still mentally challenging essentially. You know, it's still high-cost, okay. All right. So that- that is a nice. I mean there- it's very nice to see sort of the formal characterization of why behavioral cloning can be bad and what is the reason for this. Um, and I think that DAGGER is can be very useful in certain circumstances, but there's a lot of cases where just practically it's much easier to get a- example demonstrations, um, and then assume that there's no longer a human in the loop. All right. So inverse RL is more of one of the- the second categories. So, what does- what happens in inverse RL? Well, first let's just think about you know, feature-based reward function. So, well okay, we'll get to there in a second. So again, we're thinking about this case where we have some transition model that we might not observe. Um, a- or maybe we're doing. A lot of the techniques here, to start with, they didn't assume that you knew the transition model. [NOISE] That's pretty strong for a lot of real-world domains, but in some cases that's reasonable. So, for right now, we're going to assume that the only thing that we don't know is the reward function. There's some extensions to when you don't know the transition model too. Okay. So, then we have again our set of demonstrations and the goal now is not to directly learn a policy but just to infer the reward function. 
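Stepping back for a moment, here is a minimal sketch of the DAGGER loop described above. The environment and expert interfaces, the mixing probability beta between expert and learned policy, the logistic-regression policy class, and the assumption that states are feature vectors are all illustrative choices, not details from the lecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dagger(env, expert_action, n_iterations=10, episode_len=100, beta=0.5):
    """Dataset aggregation: roll out, relabel every visited state with the
    expert's action, grow the dataset, and refit the supervised policy."""
    states, actions = [], []
    policy = None
    for _ in range(n_iterations):
        s = env.reset()                                   # assumed: feature vector
        for _ in range(episode_len):
            # Follow a mixture of the expert and the current learned policy.
            if policy is None or np.random.rand() < beta:
                a = expert_action(s)
            else:
                a = int(policy.predict(np.atleast_2d(s))[0])
            # Label the visited state with what the expert would have done.
            states.append(s)
            actions.append(expert_action(s))
            s, _, done = env.step(a)                      # assumed: (state, reward, done)
            if done:
                break
        # Retrain the supervised policy on the aggregated dataset.
        policy = LogisticRegression(max_iter=1000)
        policy.fit(np.asarray(states), np.asarray(actions))
    return policy
```

With that sketch aside, back to inverse RL.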
So, if I don't tell you anything about the optimality of the teacher's policy, what can we infer about the reward function? Like, let's just say it's not an expert, it's just demonstrations. If you get a demonstration of a state, action, state, et cetera, can you infer anything about R, if you don't know anything about the optimality? [NOISE] Like, I mean, would it be the same as samples - like, as we get more demonstrations, R will approach, like, R star, I guess. This is assuming no assumptions about optimality. So, if I don't tell you anything about the optimality of the policy you're seeing, is there any information you can gather in that case? I'd be able to say that the choice that the teacher made under their policy was ranked higher under their reward function than the alternatives. So  is saying, for that particular person you could say something about their reward function - assuming that they're a rational agent, that choice was, um, higher under their own reward function. That's true. But if you wanted it to be about the general reward function, um, this would tell you - maybe I'm understating it so that it seems a little bit more subtle than I mean it to be - it doesn't tell you anything, right. Like, if you see me wandering around, or if you see an agent flailing around, right, and you know nothing about whether it's making good decisions or not with respect to the true reward function, demonstrations don't tell you anything. Like, they don't give you any information about the reward function unless you know something about, um - unless you're willing to make an assumption that the agent is acting rationally with respect to the true reward function. Or maybe you get some information about their internal one, like what  was saying. But in general, if we don't make any assumptions about agent behavior, and we don't assume that they're doing anything optimal with respect to the global reward function, there's no information you can get. Now, the more challenging one is the next. So, let's assume that the teacher's policy is optimal with respect to the true global reward function, which the agent is also maybe going to want to optimize in the future. So, you get an expert driver, um, driving around. Um, think for a second about what you can infer about the reward function in this case, and whether in particular there's more than one reward function that could explain their behavior. So think about whether it's unique. So let's say, imagine data is not an issue. I give you 10 trillion examples of the agent following the optimal policy. 10 trillion examples. Um, and you want to see if you could learn what the reward function is, and the question is, um, is there a single reward function that is consistent with that data or are there many? Maybe take a second to talk to somebody around you, um, just to see - the question really is, is there one reward function? Is there a unique reward function, if you have infinite data, so it's not a data sparsity issue - or are there many? [OVERLAPPING] All right, I'm going to do a quick poll. Okay, I'm going to poll you guys: I'm going to ask whether you think there is one reward function or if there's more than one reward function. So, who thinks there's a single reward function? Infinite data, single reward function that's consistent. Who thinks there's more than one reward function? Okay.
Could someone give me a reward function that is always consistent with any optimal policy? I guess what we could do is, in the first step, just give all of the reward that the future policy is going to get, and then just be random after that.  is saying maybe on the first step you could give most of the reward that, like, the agent would experience, and then be random after that. So, and that might depend on the state. I guess I was thinking of - can anybody tell me, like, um, a number which would allow, you know, specification of a reward function, like a constant - like a choice of a constant which would make any policy optimal? Yeah, . You just give a reward of zero for every action. Sorry, yes. So, what  says is exactly correct. If you give a reward of zero, um, any policy is optimal, in the respect that you'd never get any reward anywhere - it's a sad life, unfortunately. And, um, uh, in this case, all policies are optimal. Right. So, uh, so if you just observe trajectories, then one reward function for which that policy is optimal is zero, but it's not unique. So, um, this issue was observed - I think it was by Andrew Ng and Stuart Russell back in 2000. There is a paper talking about inverse RL where they noted this issue. The problem is that this is, uh, not unique: without further assumptions, there are many reward functions that are consistent. Um, so we're gonna have to think about how do we break ties and how do we impose additional structure. Yeah, in the back. If you have a constant reward function - for instance, if you add a constant - are those rewards also consistent? There're lots. Oh, remind me of your name. . So, yeah, what  said is, there's many, many reward functions. So, if you have, um, a constant - everything has the same reward - uh, any constant would also be identical. So, um, there are generally many different, um, reward functions that would all give you that. There are many different reward functions for which any policy is optimal. And said differently, that means that if you're trying to infer the reward function given some data, there are many reward functions that you can write down so that that data would be optimal with respect to the reward function. [NOISE] And that second part is really what we're trying to get at: we're trying to sort of, uh, infer what reward function would make this data look like it's, um, coming from an optimal policy, if we assume that the expert is optimal. So, let's think about also how we do this in, um, large state spaces. So, we're gonna think about linear value function approximators, um, because, again, often the places where we particularly need to be sample efficient is when our state space is huge and we're not gonna be able to explore it efficiently. So, let's think about a linear value function approximator. And we're gonna think of this reward also as being linear over the features. So, our reward function might be some weights times some featurized representation of our state space. And the goal is to compute a good set of weights given our demonstrations. I already said that in general this is not unique but, um, we're gonna try to figure out ways to do this with different, um, methods. So, the value function for a policy pi can be expressed as the following: you just write it down as, uh, the expected value of the discounted sum of rewards, over the states that you would reach under that policy.
Under the distribution of states that you get to under this policy, these are the rewards. So, now what we're gonna do is re-express this. So, what we're doing now is - we're assuming a linear representation of our reward function. So, we can re-express it like this. So, we can write it down in terms of the features of the state we reach at each time step times the weight. And then, because the weight vector is constant for everything, you can just move it out. And then you get this interesting expression, which is you basically just have this discounted sum of the state features that you encounter. And we're gonna call that Mu. So we talked about this very briefly earlier, but, um, now we're talking about Mu as being sort of the discounted weighted frequency of state features under our policy. How much time do you spend, um, uh, in different features - um, or, you know, basically how much time you spend in different states, sort of a featurized version of that - um, discounted by kind of when you reach those, because some states you might reach really far in the future versus now. So, it's related to the sort of stationary distributions we were talking about before, but now we're using discounting. So, why is this good? Okay. So what, er, we're gonna say now is that instead of thinking directly about reward functions, um, we can start to think about distributions of states. Um, and think about sort of the probability of reaching different distributions of states - different state distributions, um, as representing different policies, essentially. Different policies, um, for a particular reward function, um, would reach different distributions of states. So, we can think about using this formulation for apprenticeship learning. So, in this case, we have this nice setting for apprenticeship learning. Right now, um, we're using the linear value function approximation; we call it apprenticeship learning because the agent is, like, learning as an apprentice from, uh, from the demonstrator. So, now we have this discounted weighted frequency of the state features. So, we're always sort of moving into the feature space of states now. Um, and then we wanna note the following. So, if we define the value function for pi star, that's just equal to the expected discounted sum of rewards we reach. And by definition, that is better than the value for any other policy. At least as good, because either pi is the same as the optimal policy or it's different, and this is just equal to the same thing, um, same reward function which we don't know, but under a different distribution of states. It's under the distribution of states you'd get to if you follow this alternative policy. And so, if we think that the expert's demonstrations are from the optimal policy, in order to identify W, it's sufficient to find the W star such that, if you dot-product that with the distribution of states you got to under the optimal policy - remember, this is what we know, we can get this from our demonstrations - this has to look better than the same weight vector times the distribution of states you get to under any other policy. Are there questions about that? So, it's by making this observation that the value of the optimal policy is directly related to the distribution of states you get under it times this weight vector.
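Collecting those steps in one place (writing the state features as $\phi(s)$; the slides may use a different symbol):
\[
R(s) = w^{\top}\phi(s), \qquad
V^{\pi} \;=\; \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, w^{\top}\phi(s_t) \,\Big|\, \pi\Big]
       \;=\; w^{\top}\, \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, \phi(s_t) \,\Big|\, \pi\Big]
       \;=\; w^{\top} \mu(\pi),
\]
and, if the expert policy $\pi^{*}$ is optimal with respect to the true weights $w^{*}$,
\[
w^{*\top} \mu(\pi^{*}) \;\ge\; w^{*\top} \mu(\pi) \qquad \text{for all policies } \pi .
\]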
And that value has to be higher than the value of, uh, any other policy which is using that same weight vector, where this gives you a different distribution of states. Yes, . [inaudible] Mu in this case is sort of like the stationary distribution of the proportion of visits to each state given the policy? [NOISE] Yeah - especially in terms of conceptualizing Mu, what's a good way to think about it; should we think of it as, like, the stationary distribution of states? Yeah. I think it's reasonable to think of it as essentially the stationary distribution of states weighted with this discount factor on top of it. So, it's very similar to the stationary distributions we saw before. All right. So, essentially it's the same: we want to find a reward function so that the expert policy, and the distribution of states you reach under it, looks better when you compute the value function compared to, um, that same weight vector under any other distribution of states. Um, and so if we can find a policy so that its distribution of states matches the distribution of states of our expert, then we're gonna do pretty well. So, what this says here is that if we have a policy such that the discounted distribution of states that we reach under it is close to the distribution of states that you got from your demonstrations - since that's the expert - if that difference is small, then your value function error will also be small. So, for any w, if you can basically match expected features, or match distributions of states, then you're good. Then you've found a policy whose value is going to be very similar to the true value, okay. And this actually holds for any w here. So, it means no matter what the true reward function is, if you can find a policy so that your, uh, state features match, then no matter what the true reward function is, you know that you're going to be close to the true value. Yeah, . So, this w that we will be finding would be used to calculate, I guess, an expectation - what your, I guess, value of a state is. Right? Yes. You could- once you have a w, you combine that with your mu's to compute, like, a value of a state, or you can sum over it. How does that, uh, I guess, translate directly to being able to use it to make decisions when you're in a given state? So, I think 's question is about saying, like, okay, if we're getting these w's, sort of, what are we solving for? Are we solving for the policy, are we solving for w, et cetera. In this case, I think, uh, a reasonable way to think about it is, um, that solving for w goes along with solving for pi. So, what this is saying is, let's say you're optimizing over pi. If you found a pi - so right now we know the transition model, which is not always true, but if you know the transition model, for a given pi you can compute mu, because you could just do Monte Carlo roll-outs, for example. So if someone's given you a pi and they tell you the transition model, you can roll that out and you can estimate mu of pi. Then it's saying that if you do that - so let's say I have some policy, I roll this out a bunch of times, I estimate my mu, and I check whether that seems to be close to the mu of my demonstration policy. If that's small, this is saying no matter what the real reward function is, you're gonna have the same value as - like, you've matched, um, uh, the value that you would get under the expert policy.
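A one-line sketch of why matching feature expectations is enough, assuming the weights are bounded, say $\|w\|_{\infty} \le 1$ (this is just Hölder's inequality):
\[
\big|V^{\pi} - V^{\pi^{*}}\big|
 \;=\; \big|w^{\top}\mu(\pi) - w^{\top}\mu(\pi^{*})\big|
 \;\le\; \|w\|_{\infty}\,\big\|\mu(\pi) - \mu(\pi^{*})\big\|_{1}
 \;\le\; \epsilon
\qquad \text{whenever } \big\|\mu(\pi) - \mu(\pi^{*})\big\|_{1} \le \epsilon .
\]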
So, you're good. You can just use this policy to act. [NOISE] Yeah. . And I'm looking at the constraint on the [inaudible] for w, and I, I don't quite see where that comes from. I'm curious since I'm missing it here or we haven't gone over it. 's question is about why we have this constraint over w. Um, I- my- I would have to double-check the details to be careful about this but I'm pretty sure it's there, so that, um, as we do these backups when we do this approximation, that, um, your errors are all bounded [NOISE] so that things don't explode. Um, and that when you do this proof that- I think I'd have to double-check it, but uh, but I think you basically use Holder's inequality and then you use the fact that the w is bounded to ensure that your ultimate value is bounded. So, you can check that. [NOISE] In general, you want your, your reward function to be bounded, um, particularly the- in the- even with just counting, like, it's useful. You always need to make sure that your Bellman operator's like a contraction to have a hope of- I mean, we've already talked about the fact that with linear value function approximated, you don't always converge, um, uh, but if your rewards are unbounded it gets worse. Yeah, . Um, I'm trying to fit this into uh, other things I'm familiar with, is this basically, like, a maximum likelihood way of looking at the policy, right? Like, if we flipped a coin 100 times and we got 99 heads and 1 tails, it's possible that came from, uh, a fair coin. You know that uh, we can't discount that but it's unlikely, right? So, if we observe some expert agent doing the same thing 100 times that could come from a reward function that's zero everywhere but not as likely as some other reward function? Does that- Oh, great question. So is asking, like, so is this, sort of, giving us some way to deal with the fact that the reward could be zero. That we have this, sort of, unidentifiability problem. This does not handle that unfortunately. So, um, er, this is still not guaranteeing that we couldn't, uh, learn a weight if it's zero everywhere, but what this is saying here is that, um, instead of thinking about trying to learn the reward function directly, if you match expected state features, um, then that's another way to guarantee that your policy is basically doing the same thing as the expert. So, if you have a policy that basically it looks like- that visits the same states in exactly the same frequency as what the expert does, then you've matched their policy. And you still don't necessarily know that the w you've got is accurate or is a good estimate of the reward function but maybe you don't need it because if you really just care about being able to match the expert's policy then you've matched it. Because if you're- if you visit all the states with exactly the same frequency as what the expert does, you have identical policies. So, it's, sort of, giving up on it. It's saying, well, we still don't know what the real reward function is but it doesn't matter because we've uncovered the expert policy. . Um, is there, like, a, um, nonlinear analog to this that might be more effective? Great question, yes. So, there's been a lot of work also on doing this with deep neural networks. I'll give a couple of pointers later to sort of, uh, other approaches. Okay. So, this sort of observation led to an algorithm for learning the policy which is, um, uh, you try to find some sort of reward function. 
Like, that means, you know, a choice of w, um, such that the teacher looks better than everything else - looks better than all the other controllers you've got before. So, it makes it look like, sort of, um, this w for the expert's state distribution looks better than that w for all other distributions. [NOISE] And then you find the optimal control policy for the current w, which can allow you to then get new, uh, mu's, because you have your transition model here. And you repeat this until, sort of, the gap is sufficiently small. Now, this is not perfect. Um, if your expert policy is sub-optimal, it's a little tricky how to combine these. Um, I don't want to dwell too much on this particular algorithm; it is not something most people use anymore. Um, people would use, uh, more deep learning approaches, but I think that the key thing to understand from this is, sort of, this aspect of, kind of, if you match state features, that that's sufficient to say that the policies are identical. It's actually bigger than- yeah, and [OVERLAPPING] remind me of your name first, please. . [NOISE] Is there any significance in using norm one versus the other norm, like, norm two? Uh, great question. 's question is why do we use norm one in equation seven. That is actually important. Um, it's not necessarily the only choice, but here this is saying you have to match on all states. That's what the norm one is saying here. So, you can think of mu of pi as really being a function of s. I'm not showing that explicit dependency here, but it is a function of s. And so what norm one is saying is that, um, when you sum up all of those errors, that whole sum is what has to be small. So, you're really evaluating the error over all of this. You could choose other things, which would change the analysis. Um, uh, norm one and the infinity norm tend to be particularly easy to reason about when you're trying to bound the error in the value function. Okay. So, um, there's still this ambiguity that we've talked about - there's the sort of infinite number of different reward functions - and the algorithm that we just talked about doesn't solve that issue. Um, and so there's been a lot of work on, on imitation, uh, learning and inverse reinforcement learning. And two of the key papers are as follows. The first one is called Maximum Entropy Inverse RL. And the idea here was to say, we want to pick something, uh, which has the maximum uncertainty, uh, but that still respects the constraints of the data that we have from our expert. So saying, we're really not sure what the reward function is, we may not really be sure what the policy is, but let's try to pick distributions that have maximum entropy - sort of make the least commitment, um, sort of the opposite of overfitting; you kinda want to, like, underfit as much as possible. Um, and only make sure that you match these expected, uh, state frequencies. So, both of these methods, and a lot of the methods, think very carefully about the expected state frequencies you get, um, er, comparing the data you get versus, um, the data you have from the demonstrator. Um, these types of methods can be extended to where the transition model is not known. [NOISE] Often that requires access to a simulator.
Often it means - so you can imagine, for the thing we had before, if you didn't have access to the transition model but you did have access to acting in the real world, you could just try out new policies, see what your distribution of states looks like and how that matches your, um, expert demonstration, and in fact that's often what's done. So, maximum entropy inverse RL has been hugely influential. And then the second one - and this is also, of note, from Drew Bagnell's group, who was the same person that came up with DAGGER, so that group's been thinking a lot about this and made a lot of nice contributions to inverse RL. And then, in terms of extending this to sort of much, um, broader function approximators, um, er, Stefano Ermon, who's here at Stanford, extended this to using deep neural networks, um, and again, is doing sort of this feature matching. So, the idea in this case - both of these methods, compared to, sort of, the DAGGER work, are assuming that you have a fixed set of trajectories at the beginning, and then you're going to do more things in the future. And in particular, um, the, uh, generative adversarial imitation learning approach, um, has these initial trajectories, and then it's gonna allow the agent to go out and gather more data. So, it can gather more data, it can compute the state frequencies, um, and it can also use sort of a discriminator to compare - one of the challenges is, you know, writing down, um, the form of Mu can be hard when you have a really, really high-dimensional state space. So, writing down, you know, a distribution over images is hard. Um, so, what they do in this case - they're mostly focusing on MuJoCo-style tasks, like robotic-style tasks, where you'd have a lot of different joints, but it's still hard to write down, you know, nice distributions over that. So, what they focus on in this case is thinking about things like a discriminator that could tell the difference between your expert demonstrations and the trajectories that are being generated by an agent. And so, if you can tell the difference between those, then you're not matching. That's a nice insight - to say that we could use these sort of, uh, discriminators, uh, again, you know, a discriminator function, to try to figure out how we quantify what it means to have the same state distribution in really high-dimensional state spaces. So, um, uh, this is known as GAIL. And there have been a lot of extensions to GAIL as well. Yeah, . Uh, earlier, we said that there could be real practical benefits to learning the reward function in certain situations, um, but it seems like the takeaway is that we can't actually do that - is that the correct takeaway here? Um, er, yeah. So,  was saying, well, earlier you were arguing that maybe there are times where we really want the reward function, um, but maybe you're telling us that we can't really do that. Um, I think in this- so we've mostly been talking about, like, frequentist-style statistics, er, when we're talking about statistical methods here, and from that perspective, it's often very hard to uncover the reward function.
One thing that people often do when they want to, say, understand animal behavior or things like that - another way to do this - is to have a Bayesian prior over reward functions, and then do Bayesian updating, so that given the data that you see, you try to refine your posterior over the possible reward functions. So, it avoids that issue - like, then you can just not have your prior cover the reward that is zero everywhere, for example. Um, er, so, if you have a structured prior, that can be one way to still use information to try to reduce your uncertainty over people's or agents' or, uh, animals' reward functions. Yeah, [inaudible]. Yeah. Um, it's a great question -  says, you know, what are realistic priors for reward functions? Um, uh, it's a great question. I think mostly it depends on the domain. Um, people do use, uh - I think we'll talk a little bit about this for the exploration aspects - people do use priors over reward functions for exploration as well, um; things like Thompson sampling require you to do that. If you want it to be as close to frequentist as possible, um, often people do, uh, Dirichlet distributions over multinomials or things like that, um, or Gaussians, and, um, uh, and so you'd use conjugate, um, uh, exponential families, so everything's conjugate - but those aren't necessarily realistic. I think in real domains, um, the benefit of using these sorts of priors would probably be to really encode domain knowledge about, you know, uh, whether people are very sensitive, what sort of rewards you expect, uh, to be reasonable in these cases. Yeah, I mean, I think, to go back to 's point too, I think a lot of it does depend on what you want out of it. So if you really want to just understand the reward function and the preference function, then we need to maybe do something Bayesian, or we'd need to try to have a method that's gonna help us, like, cover it. I think what a lot of other methods ended up saying is, well, maybe we care about the reward function, but mostly we just care about getting high performance. So, if we can uncover a policy that's matching an expert policy, we're fine. Behavior cloning wasn't a good way to do that because errors compound, but now there are these other ways that can do that better, and so we're fine with that part. And again, I just wanna emphasize - like, Sergey Levine and others have done work which really combines these; like, you can take GAIL and then go beyond that in terms of, er, exploration, so you can end up with a policy that's better than your demonstrator, which I think is good, because often - like, if your demonstrator comes from YouTube, um, uh, which is nice, since that's a freely available place to get demonstrations - um, you don't actually know the quality, so often you might want to use that to sort of bootstrap learning but not necessarily be limited by it. All right, so just to summarize: um, you know, in practice, there's been an enormous amount of work on imitation learning, particularly in robotics, uh, but in lots of domains. Um, and I think that, you know, if you're gonna leave class today and go out into industry, um, imitation learning, uh, can be very useful practically, uh, because often it's easier to get demonstrations and it can really bootstrap learning for, um, uh, complicated Atari games, et cetera. Um, but there are still a lot of challenges that remain, uh, particularly: in a lot of the domains that I think about, we don't know the optimal policy.
Um, so, I think about, er, healthcare or like customers or, um, education like intelligent tutoring systems, and all of those one of the big challenges is that you don't know the optimal policy and you're maybe doing all this because you think you could do something better than what's in the existing data. Um, so, that's, that's a big challenge. Um, and how do you combine sort of inverse RL? Ah, and maybe online RL in a, in a safe way. So, one of the motivations I said for imitation learning was, oh well if you want to be safe, um, but then if your, if your- the only safe things right now don't do very well, then you have to figure out how to do safe exploration in the future. All right. I think that's everything for today, and I'll see you guys next week where we're gonna start to talk about policy search [NOISE]. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_2_Given_a_Model_of_the_World.txt | All right. So, last time we were starting to talk about the sort of general overview of what reinforcement learning involves, um, and we introduced the notion [NOISE] of a model, a value, and a policy. [NOISE] Um, so it's good to just refresh your brain right now about what those three things are. Can anybody remember off the top of their head what a value, a model, or a policy was in the context of reinforcement learning? [NOISE] Um, so a policy is a set of actions that, uh, the agent should take [NOISE] in the world. [NOISE] Exactly right. So, the definition of a policy is a mapping from the state you're in to what is the action, um, to take. And it might be a good policy or a bad policy. And the way we evaluate that is in terms of its [NOISE] expected discounted sum of rewards. Does anybody remember what a model was? Yeah? A model is like, uh, a representation of the world and how that changes in response to the agent's actions. [NOISE] Yeah. So right, so normally we think of a model as incorporating either a reward model, or a transition, uh, or dynamics model, [NOISE] which specifies, in response to the current state and, uh, an action, how the world might change; it could be a stochastic model or a deterministic model. [NOISE] Um, and the reward model specifies what is the expected reward, um, that the agent receives from taking a particular action in a state. [NOISE] So what we're gonna talk about today is, um, thinking about, if you know a model of the world - so, you know, um, what happens if you take an action in a particular state, or what the distribution of next states might be if you [NOISE] take an action - [NOISE] um, how we should make decisions. So, how do we do the planning problem? So, we're not gonna talk about learning today. We're just gonna talk about the problem of figuring out what is the right thing to do [NOISE] when your actions may have delayed consequences, which means that you may have to sacrifice immediate reward in order to maximize long-term reward. [NOISE] So as we just stated, um, the models generally we're gonna think about are statistical or mathematical models of the dynamics and the reward function. Um, a policy is a function that maps each, uh, uh, of these agent states to actions, and the value function is the expected discounted sum of rewards, um, from being in a state, um, and/or an action, [NOISE] and then following a particular policy. [NOISE] So what we're gonna do today is, sort of, um, build up from Markov Processes, um, up to Markov Decision Processes. And this build, I think, is sort of a nice one because it sort of allows one to think about what happens in the cases where you might not have control over the world but the world might still be evolving in some way. [NOISE] Um, and think about what the reward might be in those sorts of processes, for an agent that is sort of passively experiencing the world. Um, and then we can start to think about the control problem of how the agent should be choosing to act in the world in order to maximize its expected discounted sum of rewards. [NOISE] So, what we're gonna focus on today, and, er, in most of the rest of the classes, is this Markov Decision Process, um, where we think about an agent interacting with the world.
So the agent gets to take actions, typically denoted by a, [NOISE] those affect the state of the world in some way, um, and then the agent receives back a state and a reward. So last time we talked about the fact that this could in fact be an observation, instead of a state. But then, when we think about the world being Markov, we're going to [NOISE] think of an agent, just focusing on the current state, um, so the most recent observation, like, you know, whether or not the robots laser range finders saying, that there are walls, to the left or right of it, as opposed to thinking of the full sequence of prior history of the sequences of actions taken and the observations received. [NOISE] Um, as we talked about last time but you can always incorporate [NOISE] the full history to make something Markov, um, [NOISE] but most of the time today, we'll be thinking about, sort of, immediate sensors. If it's not clear, feel free to reach out. [NOISE] So, what did the Markov Process mean? The Markov process is to say that the state that the agent is using to make their decisions, is the sufficient [NOISE] statistic of the history. [NOISE] Which means that in order to predict the future distribution of states, on the next time step. Here we're using t to denote time step. [NOISE] That given our current state s_t, and the action that is taken a_t, [NOISE] this is again the action, [NOISE] um, that this is equivalent to, if we'd actually remember the entire history, where the history recall was gonna be the sequence of all the previous actions and rewards. And next states that we have seen up until the current time point. [NOISE] And so essentially, it allows us to say that, the future is independent of the past given some current aggregate statistic about the present. [NOISE] So when we think about a Markov Process or a Markov Chain, we don't think of there being any control yet. There's no actions. Um, but the idea is that, you might have a stochastic process that's evolving over time. [NOISE] Um, so whether or not I invest in the stock market, the stock market is changing over time. And you could think of that as a Markov Process, [NOISE] um, so I could just, sort of be, passively observing how the stock market for a particular, th- the stock value for a particular stock, is changing over time. [NOISE] Um, and a Markov Chain is, is sort of just the sequence of random states, where the transition dynamics satisfies this Markov property. So formally, the definition of a Markov Process is that, you have, um, a finite or potentially infinite set of states. And you have a dynamics model which specifies the probability for the next state given the previous state. [NOISE] There's no rewards, there's no actions yet. Um, and if you have a finite set of states, you can just write this down as a matrix. Just a transition matrix that says, you're starting at some state. What's the probability distribution over next states that you could reach? [NOISE] So if we go back to the Mars Rover example that we talked about last time. [NOISE] Um, In this little Mars Rover example, we thought of a Mars Rover landing on Mars and there might be different sorts of landing sites, um, so maybe our Mars Rovers starts off here. And then, it can go to the left or right, um, er, under different actions or we could just think of those actions as being a_1 or a_2, where it's trying to act in the world. 
[NOISE] Um, and in this case, uh, the transition dynamics, it doesn't, we don't actually have actions yet, and we just think of it as, sort of, maybe it already has some way, it's moving in the world, the motors are just working. [NOISE] And so in this case, the transition dynamics looks like this, which says that, for example, the way you could read this, is you could say, well, the probability that I start in a particular state s_1, um, and then, I can transition to the next state on the next time step is 0.4. [NOISE] There is a 0.6 chance that I stay in the same state on the next time step. Yeah? [NOISE] Um, which dimension represents the start state? Um, so, this is a great question. Which dimension, which, which state is the start state? [NOISE] I'm not specifying that here. Um, uh, In general when we think about Markov chains, we think about looking at their steady-state distribution. So they're stationary distribution will [NOISE] converge to some distribution over states, [NOISE] that is independent of the start state, if you run it for long enough. Oh, sorry, I meant to ask, like, on that matrix, which dimension represents the initial state of- Oh, you mean, like, where you are now right now? Yeah. So in this particular case, you could have it as, um, the transition of saying, if you start in state, [NOISE] uh, let me make sure that I get it right. In this case, [NOISE] answer there, there, so if you start in state here, um, so this is yours initial start at a state s_1 and then you take the dot product of that with, I may have mo- let me see if I get it right in terms of mixing it up. It's either on one side or the other side, and then, I may have transitioned it. Um, I think you'll have to do it for the [NOISE] other side here. Yep, it'll be flipped. So, you would have your initial state. So 1, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, and then times P, and that would give you your next state distribution s'. Yeah? [NOISE] Um, um, so what are the probabilities computed of, like the rewards, I guess, the probability, based on the reward of going from state 1 to 2 [NOISE] or? Great question, so was, you know, one of this transition probabilities looking at [NOISE] this relate to their word, in this case, we're just thinking of Markov Chains, so there's no reward yet, and there's no actions. [NOISE] Um, and this is just specifying that there's some state of the, uh, of the process. So it's as if you're, let's say your agent, um, had some configuration of its motors. [NOISE] You don't know what that is, that was set down on Mars, and then it just starts moving about. And what this would say is, this is the transition probabilities of if that agent starts in state, I can write it this way. So, if it starts in state, [NOISE] s_1, then the probability that it stays in state s_1 is 0.6. So, the probability that you're starting in this particular state here, [NOISE] on the next time step that you're still there, is 0.6 because of whatever configuration of the motors were for that robot. [inaudible] world works. This is specifying that, this is how, yeah, this is how the world works. So that's a great question. So we're assuming right now, this is, um, the, this Markov process is a state of the world that you were, there is some the, the environment you're in is just described as a Markov Process, and this describes the dynamics of that process. We're not talking about how you would estimate those. This is really as if, this is how that world works. 
This is, like, this is the, this is the world of the fake little Mars Rover. [NOISE] Do we have any questions about that? Yeah? Uh, [NOISE] the s_1 one-hot vector needs to be transposed [NOISE] when you multiply it by P, and all [NOISE] we can see is [inaudible] Yes. Yeah. [inaudible] [NOISE] Let me just write it down in correct vector notation. It would be like this: one, and then zero, zero, zero, zero, zero, zero. That would be, that would be a sample starting state you could be in, for example. So, this could be your initial state - initial state, and that would mean that your agent is initially in state s_1. Okay. And then if you want to know where it might be at the next state, you would multiply that by the transition model P; depending on the notation and whether you take the transpose of this transition model, it will be on the left or the right. It should always be obvious from context, but if it's not clear, feel free to ask us. And so what would that say? That would say, if you took the, uh, the matrix multiplication of this vector - which just says you're starting in state s_1 - what would that look like? Afterwards, it would say that you are in state s_1 still with probability 0.6, and you're in state s_2 with probability 0.4. And this would be your new state distribution. And I think that should be transposed, but it's just a vector which specifies the distribution over next states that you would be in. Do you have any questions about that? Okay. All right. So, this is just specifying the transition model over how the world works over time, and I've just written it in matrix notation there to be compact. But if it's easier to think about, it's fine to just think about it in terms of these probabilities of next states given the previous state. And so you can just enumerate those; you can write it in a matrix form if the number of states happens to be finite. So, what would this look like if you wanted to think of what might happen to the agent over time in this case, or what the process might look like? You could just sample episodes. So, let's say that your initial starting state is s_4, and then you could say, well, I can write that as a one-hot vector. I multiply it by my probability matrix. And that gives me some probability distribution over the next states that I might be in, and the world will sample one of those. So, your agent can't be in multiple states at the same time. So, for example, if we were looking at state s_1, it has a 0.6 chance of staying in s_1 or a 0.4 chance of transitioning. So, the world will sample one of those two outcomes for you, and it might be state s_1. So in this case, we have similar dynamics from s_4. From s_4, it has a probability of 0.4 of going to state s_3, a probability of 0.4 of going to state s_5, or a probability of 0.2 of staying in the same place. So, if we were going to sample an episode of what might happen to the agent over time, you can start with s_4, then maybe it will transition to s_5. Maybe it'll go to s_6, s_7, s_7, s_7. So, you're just sampling from this transition matrix to generate a particular trajectory. So it's like - you know what the dynamics of the world is, and then nature is gonna pick one of those outcomes. It's like sampling from sort of a probability distribution. Anyone have questions about that? Okay. So, that just gives you a particular episode.
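A minimal sketch of this in code. Only the row for s_1 (0.6 stay, 0.4 move right) comes from the lecture; the rest of the transition matrix below is a made-up placeholder so that every row sums to one.

```python
import numpy as np

# 7 states s_1..s_7, indexed 0..6.
n = 7
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, s], P[s, s + 1] = 0.6, 0.4   # stay with prob 0.6, move right with 0.4
P[n - 1, n - 1] = 1.0                  # last state just stays put (placeholder)

# Next-state distribution after one step from s_1: one-hot row vector times P.
p0 = np.zeros(n)
p0[0] = 1.0
print(p0 @ P)                          # -> [0.6, 0.4, 0., 0., 0., 0., 0.]

# Sample one episode of six transitions starting from s_4 (index 3).
rng = np.random.default_rng(0)
state, episode = 3, [3]
for _ in range(6):
    state = rng.choice(n, p=P[state])  # nature samples the next state
    episode.append(state)
print(episode)
```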
And we're going to be interested in episodes because later we're gonna be thinking about rewards over those episodes and how do we compare the rewards we might achieve over those episodes but for right now, this is just a process. This is just giving you a sequence of states. So, next we're gonna add in rewards. So, that was just a Markov chain. And so now what is a Markov reward process? Again, we don't have actions yet just like before. But now we also have a reward function. So, we still have a dynamics model like before. And now we have a reward function that says, if you're in a particular state, what is the expected reward you get from being in that state? We can also now have a discount factor which allows us to trade off between or allows us to think about how much we weight immediate rewards versus feature rewards. So, again just like before, if we have a finite number of states in this case R can be represented in matrix notation which is just a vector because it's just the expected reward we get for being in each state. So, if we look at the Mars Rover MRP, then we could say that the reward for being an s_1 is equal to 1. The reward for being an s_7 is equal to 10 and everything else that reward is zero. Yeah. Are the words always just tied to the state you're in? I think last time you talked about it also having an option. So, why are we not consider that here? Great question. I'm saying that I mentioned last time that rewards for the Markov Decision Process can either be a function of the state, the state in action, or state action next state. Right now we're still in Markov Reward Processes so there's no action. So, in this case, the ways you could define rewards would either be over the immediate state or state and next state. So, once we start to think about there being rewards, we can start to think about there being returns and expected returns. So, first of all let's define what a horizon is. A horizon is just the number of time steps in an episode. So, it's sort of like how long the agent is acting for or how long it, how long this process is going on for it and it could be infinite. So, if it's not infinite, then we call it a finite Markov Decision Process. We talked about those briefly last time. Um, but it often we think about the case where, um, an agent might be acting forever or this process might be going on forever. There's no termination of it. The stock market is up today. It'll be up tomorrow. We expect it to be up for a long time. We're not necessarily tried to think about evaluating it over a short time period. One might wanna think about evaluating it over a very long time period. So, we've done this. The definition of a return is just the discounted sum of rewards you get from the current time step to a horizon and that horizon could be infinite. So, a return just says, if I start off in time step T, what is the immediate reward I get and then I transition maybe to a new state and then I weigh that return reward by Gamma. And then I transitioned again and I weigh that one by Gamma squared, et cetera. And then the definition of a value function is just the expected return. If the process is deterministic, these two things will be identical. But in general if the process is stochastic, they will be different. So, what I mean by deterministic is that if you always go to the same next state, no matter which if you start at a state if there's only a single next state you can go to, uh, then the expectation is equivalent to a single return. 
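Written out, the two definitions just given are
\[
G_t \;=\; r_t + \gamma\, r_{t+1} + \gamma^{2} r_{t+2} + \cdots \;=\; \sum_{k=0}^{H-1} \gamma^{k} r_{t+k},
\qquad
V(s) \;=\; \mathbb{E}\big[\, G_t \mid s_t = s \,\big],
\]
where $H$ is the horizon (possibly infinite).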
But in the general case, we are gonna be interested in these stochastic decision processes which means averages will be different than particularly runs. So, for an example of that well, let me first just talk about discount factor and then I'll give an example. Discount factors are a little bit tricky. They're both sort of somewhat motivated and somewhat used for mathematical convenience. So, we'll see later one of the benefits of mathematic, uh, benefits of discount factors mathematically is that we can be sure that the value function sort of expected discounted sum of returns is bounded as long as here reward function is bounded. Uh, people empirically often act as if there is a discount factor. We weigh future rewards lower than, than immediate rewards typically. Businesses often do the same. If Gamma is equal to 0, you only care about immediate reward. So, you're the agent is acting myopically. It's not thinking about the future of what could happen later on. And if Gamma is equal to one, then that means that your future rewards are exactly as beneficial to you as the immediate rewards. Now, one thing just to note, if you're only using discount factors for mathematical convenience, um, if your horizon is always guaranteed to be finite, it's fine to use gamma equal to one in terms of from a perspective mathematical convenience. Someone having any questions about discount factors? Yeah. My question is, does the discount factor of Gamma always have to progress in a geometric fashion or like is there a reason why we do that? It's a great question. You know, the- what we're defining here is that using a Gamma that progresses through this exponential geometric fashion is that necessary. It's one nice choice that ends up having very nice mathematical properties. There, one could try using other participant is certainly the most common one and we'll see later why it has some really nice mathematical properties. Any other questions? Okay. So, what would be some examples of this? Um, if we go back to our Mars Rover here and we now have this definition of reward, um, what would be a sample return? So, let's imagine that we start off in state s_4 and then we transitioned to s_5, s_6, s_7 and we only have four-step returns. So, what that means here is that our, um, our process only continues for four time steps and then it maybe it resets. So, why might something like that be reasonable? Well, particularly when we start to get into decision-making, um you know, maybe customers interact with the website for on average two or three times steps. Um, there's often a bounded number of time you know bounded length of course in many many cases that the horizon is naturally bounded. So, in this case you know what might happen in this scenario we start off in s_4. s_4, s_5, s_6 all have zero rewards by definition. Um, and then on time-step s_7 we get a reward of 10. But that has to be weighed down by the discount factor which here is 1/2. So, it's 1/2 to the power of 3. And so the sample return for this particular episode is just 1.25. [NOISE] And of course we could define this for any particular, um, episode and these episodes generally might go through different states even if they're starting in the same initial state because we have a stochastic transition model. So, in this case maybe the agent just stays in s_4, s_4, s_5, s_4 and it doesn't get any reward. And in other cases, um, it might go all the way to the left. 
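As a quick check of that arithmetic, for the episode $s_4, s_5, s_6, s_7$ with $\gamma = \tfrac{1}{2}$:
\[
G \;=\; 0 + \tfrac{1}{2}\cdot 0 + \big(\tfrac{1}{2}\big)^{2}\cdot 0 + \big(\tfrac{1}{2}\big)^{3}\cdot 10 \;=\; \tfrac{10}{8} \;=\; 1.25 .
\]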
So, if we then think about what the expected value function would be, it would involve averaging over a lot of these. And as we average over all of these, um, then we can start to get different rewards for different time steps. So, how would we compute this? Um, now one thing you could do, which is sort of motivated by what I was just showing before, is that you could estimate it by simulation. So, you could, um, just take, say, an initial starting state distribution, um, which could be just a single starting state or many starting states, and you could just roll out your process. So, right now we're assuming that we have a transition model - a transition matrix - and a reward model. Um, and you could just roll this out, just like what we were showing a moment ago. And you could just do this many, many, many times. And then average. And that would asymptotically converge to what the value function is, because the value function is just, um, the expected return. So, that's one thing you could do - simulation - um, and there are mathematical bounds you can use to say how many simulations you would need to do in order for your empirical average to be close to the true expected value. The error roughly goes down on the order of one over square root of N, where N is the number of roll-outs you've done. So, it just tells you that, you know, if you want to figure out what the value is of your Markov reward process, um, you could just do simulations and that would give you an estimate of the value. The nice thing about doing this is it requires no assumption of Markov structure. It's not actually using the fact that it's a Markov reward process at all. It's just a way to estimate sums of rewards. So, that's nice in the sense that, um, if you're using this in a process that you had estimated from some data - you're making the assumption that, you know, this is the dynamics model, but that's also estimated from data and it might be wrong - um, then if you can really roll out in the world, you can get these sort of nice estimates of really how the process is working. But it doesn't leverage anything about the fact that, if the world really is Markov, um, there's additional structure we could use in order to get better estimates. So, what do I mean by better estimates here? I mean, um, better meaning sort of computationally cheaper, um, ways of estimating what the value of the process is. So, what the Markov structure allows us to do, with the fact that the future is independent of the past given the present, is it allows us to decompose the value function. So, the value function of a Markov reward process is simply the immediate reward the agent gets from the current state it's in, plus the discounted sum of future rewards weighted by the discount factor - and we can express that discounted sum of future rewards with V(s'). So, we sort of say, well, whatever state you're in right now, you're going to get your immediate reward, and then you're going to transition to some state s'. Um, and then you're going to get the value of whatever state s' you ended up in, discounted by our discount factor. So, if we're in a finite-state MRP, we can express this using matrix notation. So, we can say that the value function, which is a vector, is equal to the reward plus gamma times the transition model times V.
Again note that in this case because of the way we're defining the transition model, um, then the value functions here the transition model is defined as the next [NOISE] state given the previous state and multiplying that by the value function there. So, in this case we can express it just using a matrix notation. Um, and the nice thing is that once we've done that we can just analytically solve for the value function. So, remember all of this is known. So, this is known. And this is known. And what we're trying to do is to compute what V(S) is. So, what we can do in this case is we just move this over to the other side. So, you can do V minus gamma PV is equal to R or we can say the identity matrix minus the discount factor times P. These are all matrices. So, this is the identity matrix times V is equal to R which means V is just equal to the inverse of this matrix times R. Um. So, if one of the transitions can be back to itself, um wouldn't it be become a circular to try to express V(s) in terms of V(s)? Um, the question was was if it's possible to have self-loops? Um, could it be that this is sort of circulator defined [NOISE] in this case. Um, I in this case because we're thinking about processes that are infinite horizon, the value function is stationary, um, and it's fine if you have include self loops. So, it's fine if some of the states that you might transition back to the same state there's no problem. You do need that this matrix is well-defined. That you can take that you can take the inverse of it. Um, but for most processes that is. Um, so, if we wanna solve this directly, um, this is nice it's analytic, um, but it requires taking a matrix inverse. And if you have N states so let's say you have N states there's generally on the order of somewhere between N squared and N cubed depending on which matrix inversion you're using. Yeah. Is it ever actually possible for, uh, that matrix not to have an inverse or does like the property that like column sum to one or something make it not possible? Question was is it ever possible for this not to have an inverse? Um, it's a it's a good question. Um, I think it's basically never possible for this not to have an inverse. I'm trying to think whether or not that can be violated in some cases. Um, if yeah sorry go ahead. Okay. [NOISE] Yeah. So, I think there's a couple, um, if there's a- if this ends up being the zero matrix, um depending on how things are defined. Um, but I'll double-check then send a note on a Piazza. Yeah. Well, actually I think the biggest side about the transition matrix [inaudible] Let me just double check so I don't say anything that's incorrect and then I'll just send a note on- on Piazza. It's a good question. So, that's the analytic way for computing this. The other way is to use dynamic programming. So, in this case, it's an iterative algorithm instead of a one shot. So, the idea in this scenario is that you initialize the value function to be zero everywhere and in fact you can initialize it to anything and it doesn't matter. If you're doing this until convergence. And so then what we're gonna do is we're going to do what's going to be close to something we're going to see later which is a bellman backup. So, the idea in this case is because of the Markov property, we've said that the value of a state is exactly equal to the immediate reward we get plus the discounted sum of future rewards. 
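A minimal sketch of the analytic and iterative computations just described, in code. The reward vector mirrors the Mars rover example (+1 in s_1, +10 in s_7, zero elsewhere), but the transition matrix is the same made-up placeholder as before and the choice gamma = 0.5 is arbitrary.

```python
import numpy as np

n, gamma = 7, 0.5
R = np.zeros(n)
R[0], R[6] = 1.0, 10.0                 # +1 in s_1, +10 in s_7 (lecture's rewards)

# Placeholder dynamics: stay with prob 0.6, move right with prob 0.4.
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, s], P[s, s + 1] = 0.6, 0.4
P[-1, -1] = 1.0

# Analytic solution: V = (I - gamma * P)^{-1} R.
# For gamma < 1 and a row-stochastic P, (I - gamma * P) is always invertible.
V_exact = np.linalg.solve(np.eye(n) - gamma * P, R)

# Dynamic programming: repeat V <- R + gamma * P V until the change is tiny.
V, delta = np.zeros(n), np.inf
while delta > 1e-10:
    V_new = R + gamma * P @ V
    delta = np.max(np.abs(V_new - V))  # infinity-norm stopping criterion
    V = V_new

print(np.allclose(V, V_exact))          # -> True
```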
And in this case, we can simply use that to derive an iterative equation, where we use the previous value of the state in order to bootstrap and compute the next value of the state, and we do that for all states. The computational complexity of this is a little bit lower because it's only |S| squared, because you're doing this for each of the states and then you're summing over all the possible next states. When I say we do this until convergence, generally what we do in this case is we define a norm. So, generally we would do something like this: the norm of V_k minus V_k-1, and we do this until it's lower than some epsilon. So, the advantage of this is that each of the iteration updates is cheaper, and there will also be some benefits later when we start to think about actions. The other approach does not apply as easily when we start to have actions, but we'll also see where it can be relevant. So, here are two different ways - or three, really - to try to compute the value of a Markov reward process: one is simulation, the second is analytic - the analytic one requires a finite set of states - and the third one is dynamic programming. We're also right now defining all of these only for when the state space is finite, but we'll talk about when the state space is infinite later on. So, now we can finally get onto Markov Decision Processes. Markov Decision Processes are the same as the Markov Reward Process except for now we have actions. So, we still have the dynamics model, but now we have a dynamics model that is specified for each action separately, and we also have a reward function. And as was asked before, by Camilla I think, the reward can either be a function of the immediate state, the state and action, or the state, action, and next state; for most of the rest of today we'll be using the case where it's a function of both the state and action. So, the agent is in a state, they take an action, they get immediate reward, and then they transition to the next state. So, if you think about, sort of, an observation, you'd see something like this: s, a, r, and then a transition to state s'. And so a Markov Decision Process is typically described as a tuple, which is just the set of states, actions, reward model, dynamics model, and discount factor. Because of the way you've defined that dynamics model, is it the case that if you take a specific action that is intended to move you to a state s', you won't fully successfully move to that state? Like, I guess I'm curious about why there is a probability at all? Like, if you're in a state and take an action, why isn't it deterministic what the next state is? The question is, like, well, why are there stochastic processes, I think. Um, there are a lot of cases where we don't have perfect models of the environment. Maybe if we had better models, then things would be deterministic. And so, we're going to approximate our uncertainty over those models with stochasticity. So, maybe you have a robot that's a little bit faulty, and so sometimes it gets stuck on carpet and then sometimes it goes forward. And we can write that down as a stochastic transition matrix, where sometimes it stays in the same place and sometimes it advances to the next state. Or maybe you're on sand, or things like that. Maybe when you're trying to drive to SFO, sometimes you hit traffic, sometimes you don't.
You can imagine putting a lot more variables into your state space to try to make that a deterministic outcome, or you could just say, "Hey, sometimes when I try to go to work, you know, I hit this number of red lights and so I'm late, and other times, you know, I don't hit those red lights and so I'm fine." So, if we think about our Mars Rover MDP, now let's just define there being two actions, a_1 and a_2. You can think about these as the agent trying to move left or right, but for this particular example it's also perhaps easiest just to think of them in general as deterministic actions. So, we can write down what the transition matrix would be in each of these two cases, which shows us exactly where the next state would be given the previous state and action. So, what's happening in this case is that if the agent tries to do a_1 in state s_1 then it stays in that state; otherwise, under action a_1, it will generally move to the next state over. And for action a_2 it'll move to the right, unless it's in s_7, in which case it'll stay there. So, like we said at the beginning of class, a Markov Decision Process policy specifies what action to take in each state. And the policies themselves can be deterministic or stochastic, meaning that you could either have a distribution over the next action you might take given the state you're in, or you could have a deterministic mapping that says whenever I'm in this state I always, you know, do action a_1. In a lot of this class we'll be thinking about deterministic policies, but later on when we get into policy search we'll talk a lot more about stochastic policies. So, if you have an MDP plus a policy, then that immediately specifies a Markov Reward Process. Because once you have specified the policy, you can think of that as inducing a Markov Reward Process: you've specified your distribution over actions for each state, so you can compute the expected reward you get under that policy for any state, and similarly you can define your transition model for the Markov Reward Process by averaging across your per-action transition models according to the probability with which you would take those different actions. So, the reason why it's useful to think about these connections between Markov Decision Processes and Markov Reward Processes is that it implies that if you have a fixed policy, you can just use all the techniques that we just described for Markov Reward Processes — namely simulation, the analytic solution, or dynamic programming — in order to compute what the value of a policy is. So, if we go back to the iterative algorithm, then it's exactly the same as before, exactly the same as the Markov Reward Process, except for now we're indexing our reward by the policy. So, in order to learn what the value of a particular policy is, we instantiate the reward function by always picking the action that the policy would take. So, in this case, I'm doing it for simplicity for a deterministic policy, and then similarly just indexing which transition model to look up based on the action that we would take in that state. And this is also known as a Bellman backup for a particular policy. So, it allows us to state what the value of a state is under this policy: it's just the immediate reward I would get by following the policy in the current state, plus the expected discounted sum of rewards I get by continuing to follow this policy from whatever state I end up in next. That's what the V^pi_k-1 term specifies: the expected discounted sum of rewards we get by continuing to follow the policy from whatever state we just transitioned to.
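As a minimal sketch (not from the lecture) of the procedure just described — evaluating a fixed deterministic policy by reducing the MDP to its induced Markov Reward Process — something like the following, where the model shapes and names are illustrative assumptions:

import numpy as np

# P[a] is the |S| x |S| transition matrix for action a, R[s, a] is the reward model,
# pi[s] is the action the deterministic policy picks in state s.
def policy_evaluation(P, R, pi, gamma, eps=1e-8):
    num_states = R.shape[0]
    # An MDP plus a fixed policy induces a Markov Reward Process:
    P_pi = np.array([P[pi[s]][s] for s in range(num_states)])
    R_pi = np.array([R[s, pi[s]] for s in range(num_states)])
    V = np.zeros(num_states)
    while True:
        V_new = R_pi + gamma * P_pi @ V   # Bellman backup for this particular policy
        if np.max(np.abs(V_new - V)) < eps:
            return V_new
        V = V_new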
So, if we go to the Markov Decision Process for the Mars Rover, then let's look at the case now where we have these two actions. The reward function is still that for any action, if you're in state s_1 you get plus one, and for any action in state s_7 you get plus 10; everything else is zero. So, imagine your policy is always to do action a_1 and your discount factor is zero. So, in this case, what is the value of the policy? And this is just to remind you of what the iterative way of computing it would be. Yeah, in the back. Um, and I think that will be zero for everything except s_1 and s_7, where it's +1 and +10. That's exactly right. This is a little bit of a trick question because I didn't show you again what the transition model is, but that answer is exactly correct: it doesn't matter what the transition model is here, um, because gamma is equal to zero. So that means that all of this goes away, um, and so you just have the immediate reward. So if your discount factor is zero then you just care about immediate reward. And the immediate reward for this policy: the reward for all actions in state s_1 is always +1, the reward in state s_7 is always +10 no matter which action you take, and the reward for all actions in all the other states is zero. So for s_1 this is just equal to one. That's the value function there. Okay. So let's, um, look at another one. So now we've got exactly the same process. Um, I've written down a particular choice of the dynamics model for, ah, state s_6. So let's imagine that when you're in state s_6, which is almost all the way to the right, um, you have a 50% probability of staying there under action a_1 or a 50% probability of going to state s_7. That's what this top line says. And then there's a whole bunch of other dynamics models that we're not going to need to worry about to do this computation. And then the reward is still +1 for state s_1, +10 in state s_7, zero for all the states in the middle. And then let's imagine that, um, we're still trying to evaluate the policy where you're always taking action a_1. Um, and we've just said that V_k is equal to [1, 0, 0, 0, 0, 0, 10], um, and now what we wanna do is do one more backup essentially. So we want to move from V at k=1 and now compute V at k=2. So how [NOISE] about everybody take a second and figure [NOISE] out what would be the value under this particular policy, okay, for s_6. So you can use this equation, um, to figure out, given that I know what my previous value function is — because I've specified it there, it's [1, 0, 0, 0, 0, 0, 10] — um, and now I'm going to be doing one backup, and I'm only asking you to do it for one state, you could do it for others if you want. Um, what would be the new value of s_6 if you use this equation to compute it? It just requires plugging in the value of the reward, the particular numbers for the dynamics, and the old value function. And the reason that I bring this up as an example is to show essentially how information flows as you do this computation. Let me just go over here first. So when you start off, you're going to initialize the value function to be zero everywhere.
The first backup you do basically initializes the value function to be the immediate reward everywhere. And then after that you're going to continue to do these backups and essentially you're trying to compute its expected discounted sum of future rewards for each of the states under this policy. So if you think about looking at this, that's with information of the fact that state s_7 is good, is going to kinda flow backwards to the other states because they're saying "Okay well, I've been in state s_4 I don't have any reward right now but at a couple of timesteps under this process I might because I might reach that really great +10 state." So as we do these iterations of policy evaluation, we start to propagate the information about future rewards back to earlier states. And so what I'm asking you to do here is to just do that for one, one more step. Just say for state s_6, what would its new value be? Its previous value was zero. Now we're going to do one backup and what's this new value. So what if you just uh, let's ask a question then we can all take a second to uh. I'm just wondering, er, if repeating the same process to find the value function. I guess if you don't necessarily know the value function of s, you could just like reversibly follow it down. Question was can you- if you don't know what the value function is. I guess I'm not totally sure. This is a way to compute the value, wait your question is asking because this is a way to compute the value function. So what we've done here is we've said, we've initialized the value function to be zero everywhere. That is not the real value function, that just sort of an initialization. And what this process is allowing us to do is we keep updating the values of every single state until they stop changing. And then that gives us the expected discounted sum of rewards. Now you might ask, okay well they- are they ever guaranteed to stop changing? And we'll get to that part later. We'll get to the fact that this whole process is guaranteed to be a contraction so it's not going to go on forever. So the distance between the value functions is going to be shrinking. And that's one of the benefits of the discount factor. So if people don't have any more immediate questions, I suggest we all take a minute and then just compare with your neighbor of what number you get when you do this computation. Just to quickly check that the Bellman equation make sense. [NOISE] All right. So, um, wherever you got to, um, hope we got a chance to sort of compare check any understanding with anybody else that was next to you. Um, before we go on I just want to, um, answer a question that was asked before about whether or not the analytics solution is always possible, um, to invert. Let's go back to that. So in this case, um, because p is a stochastic matrix, its eigenvalues are always going to be less than or equal to one. If your discount factor is less than one, then I which is the identity matrix minus gamma times P is always going to be invertible. That's the answer to that question. So this matrix is always invertible as long as gamma is less than one. All right. So let's go back to this one, um, which we're going to require any way for some of the other important properties we want. So in this case what is that? So the immediate reward of this is zero plus gamma times [NOISE] 0.5 probability that we stay in that state times the previous V of s_6 plus 0.5 probability that we go to V of s_7. 
And this is going to be equal to zero plus 0.5 times zero plus 0.5 times 10. So that's just an example of, um, how you would compute one Bellman backup. And that's back to my original question which is you seem to be using V_k without the superscript pi to evaluate it. Oh, sorry this should, yes. This should have been pi. That's just a typo. And that's that was correct in there. Question was just whether or not that was supposed to be pi up there. Yes it was, thanks for catching. All right, so now we can start to talk about Markov Decision Process control. Now just to note there. So I led us through or we just went through policy evaluation in an iterative way you could have also done it analytically or you could have done it with simulation. But as a particularly nice analogy now that we're going to start to think about control. So again what do I mean by control? Control here is going to be the fact that ultimately we don't care about just evaluating policies, typically we want our agent actually be learning policies. And so in this case we're not going to talk about learning policies, we're just going to be talking about computing optimal policies. So the important thing is that there exists a unique optimal value function. So- um, and the optimal policy for an MDP and an infinite horizon finite state MDP is deterministic. So that's one really good reason why it's sufficient for us to just focus on deterministic policies, with a finite state MDPs, um, in infinite horizons. Okay. So how do we compute it? Well first before we do this let's think about how many policies there might be. So there are seven discrete states. In this case it's the locations that the robot. There are two actions. I won't call them left and right, I'm just going to call them a_1 and a_2. Because left and right kind of implies that you will definitely achieve that. We can also just think of these as generally being stochastic scenarios. So let's just call them a_1 and a_2. Then the question is how many deterministic policies are there and is the optimal policy for MDP always unique? So kind of right we just take like one minute or say one or two minutes feel free to talk to a neighbor about how [NOISE] many deterministic policies there are for this particular case and then if that's- um, once you've answered that it's fine to think about in general if you have |S| states and |A| actions, and this is the cardinality of those sets. How many possible deterministic policies are there? Um, and then the second question which is whether or not these are always unique. [NOISE] Can anyone I'd take a guess at how many deterministic policies that are in this case? [NOISE]. It's a mapping from states to actions so it's gonna be 2 to the 7th. That's exactly right. That is it's a mapping. Er, if we remember back to our definition of what a policy is, a mapping is going to be a map from states to actions. So what that means in this case is that there are two choices for every state and there are seven states. And more generally that the [NOISE] number of policies is |A| to the |S|. So we can be large, its exponential and the state-space but it's finite. So it's bounded. Um, any one want to take a guess of whether or not the optimal policy is always unique? I told you the value function is unique. Is the policy unique? Yeah. I think there might be cases where it's not. Exactly right, um. It's not always unique. The value function is unique but if there may be cases where you get ties. 
And so it might be that there are two actions, um, or two policies, that have the same value. So no — it depends on the process. You mean like a unique optimal value function? Ah, yes. So the question is can I explain what I mean by there being a unique optimal value function. I mean the optimal value of the state, so the expected discounted sum of returns: um, there may be more than one optimal policy, but there exists at least one optimal policy which leads to the maximum value for that state, um, and there's a single value of that. It'll probably be a little bit clearer when we talk about contraction properties later. Um, so for each state it's just a scalar value. It says exactly what the expected discounted sum of returns is, and this is the maximum expected discounted sum of returns under the optimal policy. Yeah. And on the [inaudible] policies — when we first defined policies, I thought we were describing the entire table with sort of one action per state, rather than all possible combinations. I'm a little surprised that it's 2 to the 7th rather than just being the number of states, with each one mapped to an action. So let me try to better clarify how many policies there are, and why it might look like it was going to be linear when it's actually exponential. Um, the way that we're defining a decision policy here, um, a deterministic decision policy, is a mapping from a state to an action. And so that means for each state we get to choose an action, and so, just as an illustration of why this ends up being exponential, um, in this case let's imagine instead of having seven states we just have two states, s_1 and s_2. [NOISE] So, you could either have action a_1-a_1, you could have a_1-a_2, you could have a_2-a_1, or a_2-a_2, and all of those are distinct policies (there's a small enumeration sketch of this counting just below). So, that's why the space ends up being exponential. Sure. When you have like A to the power S, I'm assuming that A refers to legal actions per state, assuming you could have different actions depending on the state? The question is whether or not you might have different constraints on the action space per state — absolutely. So, in this case, today, for simplicity, we're going to assume that all actions are applicable in all states. Um, in reality that's often not true. Um, in many real-world cases, um, some of the actions might be specific to the state. For example, in healthcare there's a huge space of medical interventions, and for many of them it might not be at all reasonable to even consider them in certain states. Um, so, in general, you can have different action subspaces per state, and then you would take the product over states of the cardinality of the action set that is relevant for each of the states. But for right now, I think it's simplest just to think of it as one uniform action space that can be applied in any state. Okay. So, um, the optimal policy for an MDP in an infinite horizon problem, where the agent acts forever, um, is deterministic. It's stationary, which means it doesn't depend on the time-step. We started talking about that a little bit last time. Um, so, it means that if I'm in state s_7, there is an optimal policy for being in state s_7 whether I encounter it at time-step one, time-step 37, or time-step 242 — it's stationary.
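Before the intuition for stationarity, here is a quick illustrative sketch (not from the lecture) of the counting argument above: with two actions and seven states there are 2 to the 7th distinct deterministic policies, i.e. |A| to the |S|.

from itertools import product

# A deterministic policy is a mapping from states to actions, so enumerating all of them
# is just taking the Cartesian product of the action set, once per state.
states = ['s1', 's2', 's3', 's4', 's5', 's6', 's7']
actions = ['a1', 'a2']
policies = list(product(actions, repeat=len(states)))
print(len(policies))  # 128, i.e. |A| ** |S| = 2 ** 7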
Um, er one of the intuitions for this is that if you get to act forever there's always like an infinite number of future time steps no matter when you're at. So, if you would always do action a_1 from state s_7 now, um then if you encounter it again in 50 time-steps you still have an infinite amount of time to go from there and so you'd still take the same action if that was the optimal thing to do. As we were just discussing, it's not the optimal policy is not necessarily unique, um because you might have ah more than one policy with the same value function. So, how would we compute this? One option is policy search uh and we'll talk a lot more about this in a few weeks when we're talking about function approximation and having really really large state spaces. Um, but even in tabular cases, er we can just think of searching. So, the number of deterministic policies we just discussed is A to the S, um and policy iteration is a technique that is generally better than enumeration. So, what do I mean by enumeration in this context? I mean there's a finite number of policies. You could just evaluate each of them separately and then pick the max. So, if you have a lot of compute, you might just want to and this might be better if you really care about wall clock and you have many many many processors. You could just do this exhaustively. You could just try all of your policies, evaluate all of them either analytically or iteratively or whatever scheme you want to use and then take the max over all of them. But if you don't have kind of infinite compute, it's generally more computationally efficient if you have to do this serially to do policy iteration and so we'll talk about what that is. So, in policy iteration what we do is we basically keep track of a guess of what the optimal policy might be. We evaluate its value and then we try to improve it. If we can't improve it any more, um then we can- then we can halt. So, the idea is that we start by initializing randomly. Here now you can think of the subscript is indexing which policy we're at. So, initially we start off with some random policy and then π_i is always going to index sort of our current guess of what the optimal policy might be. So, what we do is we initialize our policy randomly and while it's not changing and we'll talk about whether or not it can change or go back to the same one in a second, we do value function policy. We evaluate the policy using the same sorts of techniques we just discussed because it's a fixed policy which means we are now basically in a Markov Reward Process. And then we do policy improvement. So, the really the new thing compared to what we were doing before now is policy improvement. So, in order to define how we could improve a policy, we're going to define something new which is the state action value. So, before we were just talking about state values, state values are denoted by V. We're talking about like V^pi(s) which says if you start in state s and you follow policy pi what is the expected discounted sum of rewards. A state action value says well, I'm going to follow this policy pi but not right away. I'm going to first take an action a, which might be different than what my policy is telling me to do and then later on the next time-step I'm going to follow policy pi. So, it just says I'm going to get my immediate reward from taking this action a that I'm choosing and then I'm going to transition to a new state. 
Again, that depends on my current state and the action I just took, and from then on I'm going to take policy pi. So, that defines the Q function. And what policy improvement does is it says, okay, you've got a policy, you just did policy evaluation and you got a value for it. So, policy evaluation just allowed you to compute what the value of that policy was, [NOISE] and now I want to see if I can improve it. Now, remember, right now we're in the case where we know the dynamics model and we know the reward model. So, what we can do then is this Q computation, where we say, okay, well, I've got that previous value function of my policy, and now I compute Q^pi, which says what happens if I take a different action — it could also be the same one — and we do this for all a and for all s. So, for all a and all s we compute this, and then we're going to compute a new policy, and this is the improvement step, which maximizes this Q. So, we just do this computation and then we take the argmax. Now, by definition, the max over a of Q^pi_i(s, a) has to be greater than or equal to Q^pi_i(s, pi_i(s)), right, because either the argmax action a is equal to pi_i(s), or it's different, and the only time you're going to pick it differently is if the Q function of that alternative action is better. Question at the back. Is this going to be susceptible to, like, finding a local maximum and then kind of getting stuck there [inaudible]? Okay. So, the question is: this is going to allow us to do some local monotonic improvement, maybe, um, but are we going to be susceptible to getting stuck? Um, in fact, ah, for any of you that have played around with reinforcement learning and policy gradient and stuff, that is exactly one of the problems that can happen when we start doing gradient-based approaches. Nicely, in this case this does not occur. So, we're guaranteed to converge to the global optimum, and we'll see why in a second. Okay. All right. So this is how it works. You do this policy evaluation, and then you compute the Q function, and then you compute the new policy that takes an argmax of the Q function. So, that's how policy improvement works.
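Putting the evaluation and improvement steps together gives the full loop just described. The following is a minimal sketch, not the course's reference implementation, reusing the hypothetical policy_evaluation routine from the earlier sketch; model shapes are illustrative (P[a] is |S| x |S|, R is |S| x |A|).

import numpy as np

def policy_improvement(P, R, V_pi, gamma):
    num_states, num_actions = R.shape
    Q = np.zeros((num_states, num_actions))
    for a in range(num_actions):
        # Q^pi(s, a): take action a once, then follow the old policy pi from then on
        Q[:, a] = R[:, a] + gamma * P[a] @ V_pi
    return np.argmax(Q, axis=1)             # pi_{i+1}(s) = argmax_a Q^pi(s, a)

def policy_iteration(P, R, gamma):
    pi = np.zeros(R.shape[0], dtype=int)    # arbitrary initial policy
    while True:
        V_pi = policy_evaluation(P, R, pi, gamma)   # evaluation step (sketched earlier)
        pi_new = policy_improvement(P, R, V_pi, gamma)
        if np.array_equal(pi_new, pi):      # policy stopped changing: we can halt
            return pi, V_pi
        pi = pi_new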
The next critical question, as Iris was bringing up, is okay, why do we do this, and is this a good idea? So, when we look at this, um, let's look through it a little bit more. What we're going to get is, um, this sort of interesting type of policy improvement step, and it involves a few different things, so I just want to highlight the subtlety of it. So, what is happening here is that we compute this Q function, and then we've got this: the max over a of Q^pi_i(s, a) has to be greater than or equal to Q^pi_i(s, pi_i(s)) — that is, R(s, pi_i(s)) plus the discounted expected value of continuing with the previous policy we were using before. [NOISE] So, what I've done there is I've said, okay, the max over actions of the Q function has to be at least as good as following your old policy, by definition, because you could always pick the same action as your old policy, or else you're gonna pick a better action. And that second quantity is exactly the definition of the value of your old policy. So, that means that the max over your Q function has to be at least as good as the old value you had. So, that's encouraging. But here's the weird part. So, when we do this, if we instead take the argmax, we're gonna get our new policy. So, what is this doing? It's saying, I'm computing this new Q function. What does this Q function represent? It represents: if I take an action and then I follow my old policy from then onwards. And then I'm picking whatever action is maximizing that quantity for each state. Okay. So, I'm gonna do this process for each state. But then — so that's going to just define a new policy, right? It might be the same, or it could be a different policy than the one you had before. Here's the weird thing. So, this is saying that if you were to follow that argmax a and then follow your old policy from then onwards, you would be guaranteed to be doing at least as well as you were before. But the strange thing is that we're not gonna follow the old policy from then onwards. We are going to follow this new policy for all time. So, remember, what we're doing is we're completely changing our policy, and then we're going to evaluate that new policy for all time steps, not just for the first time step and then follow the old policy from then on. So, it should be at least a little unclear that this is a good thing to do [LAUGHTER]. You should be like, okay, so you're saying that if I were to take this one different action and then follow my old policy, then I know that my value would be better than before. But what you really want is that this new policy is just better overall. And so the cool thing is that you can show that by doing this policy improvement, the new policy is monotonically better than the old policy. So, this just says the same thing in words: we're saying, you know, if we took the new policy for one action and then followed pi_i forever, then we're guaranteed to be at least as good as we were before in terms of our value function, but our new proposal is just to always follow this new policy. Okay. So, why do we get a monotonic improvement in the policy value by doing this? So, first of all, what do I mean by a monotonic improvement? Um, what I mean is that something is a monotonic improvement if, um, the value of the new policy is greater than or equal to the value of the old policy for all states. So, it has to either have the same value or be better. And my proposition is that the new policy is greater than or equal to the old policy in all states, with strict inequality if the old policy was suboptimal. So, why does this work? Let's go ahead and just walk through the proof briefly. Okay. So, what we've said here is that V^pi_i(s), our old policy's value, has to be less than or equal to max over a of Q^pi_i(s, a) — and this is just by definition. Uh, let me write it like this: that is equal to R(s, pi_i+1(s)) plus gamma times the sum over s' of P(s' | s, pi_i+1(s)) times V^pi_i(s'), because remember, the way that we defined pi_i+1(s) is exactly as the policy that maximizes Q^pi_i. Okay. So, this holds by definition, and I've gotten rid of the max there. Okay. So, this in turn is going to be less than or equal to the same first term, R(s, pi_i+1(s)), plus gamma times the sum over s' of P(s' | s, pi_i+1(s)) times the max over a' of Q^pi_i(s', a'). Again by definition, because just like at the first step, we know that V^pi_i(s') is also less than or equal to max over a' of Q^pi_i(s', a'). Okay. So, we just made that substitution. And then we can re-expand that Q^pi_i using the reward. So, this is gonna be R(s', pi_i+1(s')) plus dot-dot-dot, basically making that same substitution from that line into there. So, I'm nesting it. I'm re-expanding what the definition of Q^pi is.
And if you keep doing this forever, essentially we just keep pushing in as if we get to continue to take pi_i+1 on all future time steps. And what- the key thing to notice here is that this is a greater than or equal to. So, if you nest this in completely what you get is that this is the value pi_i+1. So, there's kind of two key tricks in here. The, the first thing is to say, notice that the V^pi_i is always lower- is the lower bound to max a over Q^pi. And then to re-express this using the definition of pi_i+1. And then to re-upper bound that V by Q^pi and just keep re-expanding it. And so you can do this out and then that allows you to redefine to- when you substituted it in for all actions using pi_i+1, then you've now defined what the value is of pi_i+1. So, this is what it allows us to know that the new pi_i+1 value is by definition at least as good as the previous value function. So, I'll just put that in there [inaudible]. All right. So, the next questions that might come up is so we know we're gonna get this monotonic improvement, um, so the questions would be if the policy doesn't change, can it ever change again? And is there a maximum number of iterations of policy iteration? So, what do I mean by iterations? Here iterations is i. It's a kind of how many policies could we step through? So, why don't we take like a minute and just think about this maybe talk to somebody around you that you haven't met before and just see what they think of these two questions. So policy is monotonically improving and is there a maximum number of iterations as we've read before? [NOISE] Just in the interest of time for today- just in the interest of time for today because I want us to try to get through value iteration as well, um, why doesn't- does somebody wanna give me, um, a guess of whether or not the policy can ever- if the policy stops changing, whether it can ever change again? So, what I mean by that is that if the policy at pi, so the question here was to say, if pi of i+1 is equal to pi i for all states, could it ever change again? Somebody wanna share a guess of whether or not that is true. Once it has stopped changing it can never change again. So, no. And the second question is, um, is there a maximum number of policy iterations? Yeah. There's no- you can't have more iterations than there are policies. That's right. There- We know that there is at most a to the s policies. You cannot repeat a policy ever, um, because of this monotonic improvement. And so, there- there's a maximum number of iterations. Okay? Great. And this just- um, I'll skip through this now just so we can go through a bit of value iteration, but this just steps through to show a little bit more of how once your policy stopped changing, essentially your Q^pi will be identical. And so you can't- uh, there's no policy improvements to be, yeah, to change. After it's sort of converged, you're gonna stay there forever. Okay, so policy iteration computes, um, the optimal value in a policy in one way. The idea in policy iteration is you always have a policy, um, that is- that you know the value of it for the infinite horizon. And then you incrementally try to improve it. Value iteration is an alternative approach. Value iteration in itself says we're gonna think of computing the optimal value if you get to act for a finite number of steps. The beginning just one step and then two steps and then three steps et cetera. Um, and you just keep iterating to longer and longer. So that's different, right? 
Because policy iteration says you always have a policy and you know what its value is — it just might not be very good. Value iteration says you always know what the optimal value and policy are, but only if you're gonna get to act for, say, k time steps. So they're just computing different things, um, and they both will converge to the same thing eventually. So when we start to talk about value iteration, it's useful to think about Bellman. Um, so the Bellman equation and Bellman backup operators are things that are often talked about in, um, Markov Decision Processes and reinforcement learning. So this constraint here that we've seen before, which says that the value of a policy is its immediate reward plus the discounted sum of future rewards, um, is known as the Bellman equation. The value function of a Markov Decision Process has to satisfy that constraint. And we can alternatively, like what we were just seeing before, think of this as, um, a backup operator, which means that we can apply it to an old value function and transform it into a new value function. So just like what we were doing in some of the, um, ah, evaluation of a policy, we can also just sort of apply these operators. In this case, the difference compared to what we've seen with evaluation before is that we're taking a max there. We're taking this max over a of the best immediate reward plus the discounted sum of future rewards. So sometimes we'll use the notation BV to mean a Bellman operator, which means you take your old V, you plug it into here, and you do this operation. So how does value iteration work? The algorithm can be summarized as follows. You start off, you can initialize your value function to zero for all states. And then you loop until you converge, um, or if you're doing a finite horizon, which we might not have time to get to today, then you'd loop out to that horizon. And basically, for each state, you do this Bellman backup operator. So you'd say, my value at k plus one time steps for that state is: if I get to pick the best immediate action, plus the discounted sum of future rewards using that old value function I had from the previous time step. And that V_k says what my optimal value is for that state s' given that I got to act for k more time steps. So that's why initializing it to zero is a good thing to do in this case — or at least a certainly reasonable thing to do — if you want the result to be the optimal value as if you had that many time steps to go: if you have no more time steps to act, your value is zero. The first backup you do will basically say what the optimal immediate action is that you should take if you only get to take one action. And then after that you start backing up, um, and continuing to say, well, what if I got to act for two time steps? What if I got to act for three time steps? What's the best sequence of decisions you could make in each of those cases? Um, again, just in terms of Bellman operations, if we think back to sort of what policy iteration is doing, you can instantiate this Bellman operator by fixing what the policy is. And so, if you see sort of a B with, um, ah, pi on top, it's saying, well, instead of taking that max over actions, you're specifying the action the policy tells you to take. So policy evaluation you can think of as basically just computing a fixed point by repeatedly applying this Bellman backup for the policy until V stops changing.
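A minimal sketch (illustrative, not from the lecture) of the value iteration loop just summarized, with the Bellman backup written out for a tabular model whose shapes are made up (P[a] is |S| x |S|, R is |S| x |A|):

import numpy as np

def value_iteration(P, R, gamma, eps=1e-8):
    num_states, num_actions = R.shape
    V = np.zeros(num_states)   # V_0 = 0: the optimal value with zero steps left to act
    while True:
        # (B V)(s) = max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) V(s') ]
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(num_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, Q.argmax(axis=1)   # converged value and a greedy policy
        V = V_new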
So, um, in terms of policy iteration, this is very similar to what we saw before: you can think of it in terms of these Bellman operators and doing this argmax. I wanna see if we can get to a little bit on sort of the contraction operator. So this is what, um, value iteration does. It's very similar to policy iteration and evaluation. Um, let me talk a little bit about the contraction aspect. So, for any operator, um, let's let O be an operator and |x| denote a norm of x. So x could be a vector, like a value function, and then we could look at, like, an L2 norm or an L1 norm or an L-infinity norm. So, an operator is a contraction if, when you apply it to two different things — you can think of these as value functions — um, then the distance between them shrinks, or at least is no bigger, after you apply the operator compared to their distance before. So just to, um — actually, I'll save examples for later. Feel free to come up to me after class if you wanna see an example of this, um, or I can do it on Piazza. But this is the formal definition of what it means to be a contraction: that the distance between — in this case we're gonna think about it as two vectors — um, doesn't get bigger and can shrink after you apply this operator. So, the answer to the key question of whether or not value iteration will converge is yes, because the Bellman backup is a contraction operator. And it's a contraction operator as long as gamma is less than one. Which means that if, let's say, you have two different value functions, and then you did the Bellman backup on both of them, then the distance between them would shrink. So how do we prove this? Um, in the interest of time I'll just show you the proof. Again, I'm happy to go through it, um, or we can go through it in office hours, et cetera. Let me just show it kind of briefly. So the idea, to prove that the Bellman backup is a contraction operator, is that we consider there being two different value functions, V_k and V_j. This doesn't have to have anything to do with value iteration; these are just two different value functions. One could be, you know, [1, 3, 7, 2] and the other one could be [5, 6, 9, 8]. Okay. So we just have two different vectors of values, and then we re-express what they are after we apply the Bellman backup operator. So there's that max over a of the immediate reward plus the discounted sum of future rewards, where we've plugged in our two different value functions. And then what we say there is, well, if you get to pick that max over a separately for those two, the difference between them is upper bounded by pulling a single max over a outside of the difference. And then you can cancel the rewards. So that's what happens in the third line. And then the next thing we can do is we can bound it and say the difference between these two value functions at s' is, um, bounded by the maximum over states of the distance between those two — so you pick the place at which those value functions most differ. And then you can move it out of the sum. And now you're summing over a probability distribution that has to sum to one. And that gives you this. And so that means that the Bellman backup, as long as gamma is less than one, has to be a contraction operator. The distance between the two value functions can't be larger after you apply the Bellman operator than it was before.
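For a quick numerical illustration of this property (not a proof, and using an entirely made-up random model), one can check that the infinity-norm gap between two arbitrary value vectors shrinks by at least a factor of gamma after one Bellman backup:

import numpy as np

rng = np.random.default_rng(0)
num_states, num_actions, gamma = 5, 2, 0.9
P = rng.random((num_actions, num_states, num_states))
P /= P.sum(axis=2, keepdims=True)     # normalize rows so each is a probability distribution
R = rng.random((num_states, num_actions))

def bellman_backup(V):
    # (B V)(s) = max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) V(s') ]
    return np.max([R[:, a] + gamma * P[a] @ V for a in range(num_actions)], axis=0)

V1, V2 = rng.normal(size=num_states), rng.normal(size=num_states)
gap_before = np.max(np.abs(V1 - V2))
gap_after = np.max(np.abs(bellman_backup(V1) - bellman_backup(V2)))
assert gap_after <= gamma * gap_before + 1e-12   # contraction in the infinity norm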
So, I think a good exercise to do, um, is to then say given that it's a contraction operator, um, that means it has to converge to a fixed point. There has to be a unique solution. So if you apply the Bellman operator repeatedly you- there is a single fixed point that you will go to which is a single, um, vector value fun- uh, values. It's also good to think about whether the initialization and values impacts anything if you only care about the result after it's converged. All right. So, um, I think we can halt there. Class is basically over. There's a little bit more in the slides to talk about, um, the finite horizon case, um, and feel free to reach out to us on Piazza with any questions. Thanks. [NOISE] |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_5_Value_Function_Approximation.txt | All right. Good morning, we're gonna go ahead and get started. Um, homework [NOISE] one is due today, unless you're using late days, um, and homework two will be released today. Homework two is gonna be over, um, [NOISE] function approximation and reinforcement learning. Um, we're gonna start to cover that material today, and then we'll continue next week with deep learning. [NOISE] Um, deep learning is not a prerequisite for this class and so we're gonna be releasing a tutorial on TensorFlow, um, later this week. [NOISE] Uh, and then next week, we'll also in sessions [NOISE] have the opportunity to go into some more of the background to deep learning. [NOISE] You're not expected to be an expert at it but you need to know enough of it in order to do the homeworks and, and do the function approximation. [NOISE] We will be assuming that you're very familiar with things like, um, gradient descent, and taking derivatives, and things like that. Um, TensorFlow and other packages can do that automatically for you, but you should be familiar with the general [NOISE] process that happens. Um, before we continue the sim, may I have any logistic questions. [NOISE] All right. Let's go ahead and get started. [NOISE] Um, as you can see I have lost my voice a little bit, it's coming back but we'll see how we go and if it gets too tricky, then will take over. [NOISE] All right, so what we've been talking about so far is thinking about, um, [NOISE] learning, uh, to be able to evaluate policies in sequential decision-making cases, and being able to make decisions. [NOISE] All of this is when the world is unknown. And what I mean by that is that, we're not given in advance, a dynamics model, or a reward model. [NOISE] Um, and what we're gonna start to talk about today is value function approximation. [NOISE] Um, just so I know actually, who of you, who of you have seen this before? Who've seen some form of like value function approximation? [NOISE] Okay, so, a couple of people, that most people know. Um, uh, so when I say value function approximation, what I mean is that so far we've been thinking about domains, where we tend to have a finite set of states and actions, and where it is, um, computationally and memory feasible [NOISE] to just write down a table, to keep track of what the value is, of states or the value of state action pairs, [NOISE] um, or that we could imagine writing data table to write down the models explicitly of the Reward Model and the dynamics model. [NOISE] But many real world problems have enormous state and action spaces. So, if you think about things like the Atari games, which we can debate about whether or not that's a real-world problem but it's certainly a challenging problem. [NOISE] Um, state-space we discussed at the beginning is really sort of a set of pixels. And so that's gonna be an enormous space and we're not going to be able to write down that as a table. [NOISE] And so, in these cases, we're gonna have to go beyond sort of this tabular representation, [NOISE] and really think about this issue of generalization. [NOISE] So, we're going to need to be able to say we want to be able to make decisions and learn to make good decisions. 
We're gonna need to be able to generalize from our prior experience, so that even if we end up in a state action pair that we've never seen exactly before, it's like a slightly different set of pixels than we've ever seen before, that we're still gonna be able to make good decisions and that's gonna require generalization. [NOISE] So, um, what we're gonna talk about today is we're starting with value function approximation, [NOISE], um, for prediction, and then talk about control . [NOISE] Um, and the kind of the key idea that we're gonna start to talk about in this case is that, we're gonna be representing the state action value, uh, value function with a parameterized function. [NOISE] So, we can think of now as having a function where we input a state, and instead of looking up in a table to see what its value is, instead we're gonna have some parameters here. So, this is, this could be a deep neural network. [NOISE] This could be, you know, um, [NOISE] a polynomial. [NOISE] It can be all sorts of different function approximations but the key here is that we have some parameters that allow us to say for any input state, what is the value. And just like we saw before, we're gonna both sort of go back and forth between thinking of there being a state value function, and a, a state action value function. [NOISE] Um, and the key thing now is that we have these parameters. [NOISE] We're mostly gonna be talking about those parameters in terms of w. [NOISE] So, you can generally think of w as just a vector. Um, [NOISE] uh, with that vector could [NOISE] be the parameters of a deep neural network or it could be something much simpler. [NOISE] So, again, you know, why do we wanna do this and sort of what are the forms of approximations we might start to think about? So, we just don't wanna have explicitly store learn for every individual state action pair. [NOISE] So, we don't have to do that in terms of learning the dynamics model, you don't have to do that in terms of a value function, or state action value function or even in terms of a policy. [NOISE] We're gonna need to be able to generalize, so that we can figure out that, our agents, our algorithms can figure out good policies for, um, sort of these enormous state spaces and action spaces. [NOISE] And so we need these compact representations. [NOISE] So, once we do this we're gonna get a multiple different benefits. There would also gonna incur potential problems as well. So, we're gonna reduce the memory that we need to store all of these things. We're gonna reduce the computation needed and we might be able to reduce the experience. [NOISE] And so what I mean by that there is, um, how much data does our agent need to collect in order to learn to make good decisions. So, this is really a notion of sort of how much data is needed. [NOISE] Now, I just wanna highlight here that, um, you know, there can be really bad, it would be really bad approximations. UM, [NOISE] and those can be great in terms of not needing a lot of data, and not needing a lot of computation, and not need a lot of memory, [NOISE] but they may just not allow you to represent very good policies. [NOISE] Um, so these are, these choices of representation or defining sort of hypothesis classes. 
They're defining spaces over which you can represent policies and value functions, and so there's gonna be sort of a bias-variance trade-off here, um, a function approximation trade-off, in the sense that if you have a very small representation, you're not gonna need very much data to learn to fit it, but then it's also not gonna have very good capacity in terms of representing complicated value functions or policies. [NOISE] Um, so, as a simple example, we could assume that our agent is always in the same state all the time — you know, all video game frames are always identical — [NOISE] and that's a really compressed representation, um, you know, uh, we only have one state, [NOISE] but it's not gonna allow us to learn to make different decisions in different parts of the game. So, it's not gonna allow us to achieve high reward. So, there's generally going to be a trade-off between the capacity of the representation we choose — sort of the representational capacity [NOISE] — versus all these other things we would like: memory, computation, and data. [NOISE] That's not always the case; sometimes one gets lucky and you can choose something that's very, very compact, [NOISE] and it's still sufficient to represent the properties you need in order to make good decisions, [NOISE] but it's just worth noting that often there's this explicit trade-off, and we often don't know in advance what a sufficient representational capacity is in order to achieve high reward. Yeah? [NOISE] Is this, um- What's your name? Oh, sorry, . Is this more or less an orthogonal consideration from the bias-variance trade-off in [NOISE] function approximation? Yeah, so the question is whether this is an orthogonal trade-off to the sort of bias-variance trade-off. [NOISE] Um, you can think of it as related: if you choose a really restricted representational capacity, you're gonna have, um, a bias forever, because you're just not gonna be able to represent the true function, [NOISE] um, and you'll also tend to have a smaller variance because it's a smaller representation. [NOISE] So, it's really related to that. If you've taken a machine learning class, then you've, uh, talked about things like structural risk minimization, [NOISE] and thinking about, um, how you choose your model class capacity versus how much data you have in terms of minimizing your test error — it's similar to that too. [NOISE] So, you know, how do you trade off the capacity to generalize, um, [NOISE] versus the expressive power. All right. So, a natural immediate question that I've started alluding to already is: what function approximator are we going to use? Um, there's a huge number of choices. Um, today we're only gonna start to talk about one particular set. Um, but there's an enormous number — probably most of the ones you can think of have been tried with reinforcement learning. So, pretty much anything that you could do in supervised learning you could also try as a function approximator for your value function, um: could be deep neural networks or decision trees or nearest neighbors, um, wavelet bases, lots of different things. Um, what we're gonna do in this class is mostly focus on things that are differentiable. Um, these are nice for a number of reasons. Um, but mainly they tend to have really nice smooth optimization properties. So, they're easier to optimize.
That's one of the reasons we're gonna focus on them in this class. Those are not always the right choice. Um, uh, can anybody give me an example of where — for those of you that are familiar with decision trees — you might want a decision tree to represent either your value function or your policy? Yeah. Yes. Uh, they tend to be highly interpretable. If you keep them simple [inaudible] with trees. All right. [inaudible] actually understand that could be helpful. Exactly. So, what he just said is that, um, you know, depending on how you're using this sort of reinforcement learning policy, it may be interacting directly with people. So, let's say this is gonna be used as decision support for doctors. In those cases, having a deep neural network may not be very effective in terms of justifying why you want a particular treatment for a patient, but if you use a decision tree, um, those tend to be highly interpretable. Um, uh, well, depending on what features you use, but often it's pretty interpretable, and so that can be really helpful. So, thinking about what function approximator you use often depends on how you're gonna use it later on. Um, there's also been some really exciting work recently on sort of explainable deep neural networks, where you can fit a deep neural network and then you can fit a sort of simpler function approximator on top. So, you could first fit your deep neural network and then try to fit a decision tree to it. So, you try to get kind of the best of both worlds: a super expressive, um, uh, function approximator, and then still get the interpretability later. Um, but it's worth thinking about sort of the application that you're looking at, because different ones will be more appropriate in different cases. Um, so, you know, probably the two most popular classes, um, these days and in RL in general are, um, linear value function approximation and deep neural networks. Um, and we're gonna start with linear value function approximation for two reasons. One is that it's probably been the most well-studied function approximator in reinforcement learning, at least up until the last few years, and second is because you can think of deep neural networks as computing some really complicated set of features that you're then doing linear function approximation over, at least in a number of cases. So it really provides a nice foundation for the next part anyway. All right. So, we're gonna do a really quick review of gradient descent, because we're gonna be using it a ton over the next few days. So, let's just think about any sort of general function J, um, which is a differentiable function of a parameter vector w. So, you have some vector w — it's gonna be a set of linear weights soon — and our goal is to find the parameter vector w that minimizes our objective function. I haven't told you what the objective function is, but we'll define it shortly. Um, so, the gradient of J of w — we're gonna denote that nabla J of w — is just us taking the derivative of it with respect to each of the parameters inside of the vector, and so that would be the gradient. And so a gradient descent way of trying to optimize a function, uh, J of w, would be to compute the derivative or the gradient of it and then to move your parameter vector in the direction of the negative gradient.
So, your weights — and generally we're going to always assume the weights are a vector — um, uh, are gonna be set equal to your previous value of the weights minus some learning rate times the derivative of your objective function. So, we're figuring out the derivative of our function, and then we're gonna take a step of that size and move our parameter weights over a little bit. Um, and then we're gonna keep going. So, if we do this enough times, um, are we guaranteed to find a local optimum? Right. So, [OVERLAPPING] yeah, there may be some conditions on the learning rate, um, ah, but yes, if we do this enough we're guaranteed to get to a local optimum. Um, notice this is local. As we start thinking about this in terms of doing RL, it's important to think about where we're gonna converge to and whether we're gonna converge, and I'll talk more about that throughout the class. So, this is gonna be sort of a local way for us to smoothly start changing our parameterized representation of the value function in order to try to get to a better, um, better approximation of it. Right. So, let's think about how this would apply if we're trying to do policy evaluation. So again, policy evaluation is: someone's giving you a policy. They've given you a mapping, um, from states to what your action is, and this could be stochastic — so it could be a mapping from states to a probability distribution over actions. But someone's giving you a policy, and what you wanna do is figure out what the value of that policy is: what's the expected discounted sum of rewards you get by following that policy. So, let's assume for a second that, um, we could query a particular state and then an oracle would just give us the value, the true value of the policy. So, you know, I ask, what's the expected discounted sum of returns for starting in this part of the room and trying to navigate towards the door under some policy, and it says, okay, the expected discounted number of steps it would take you is on average like 30, for example. So, um, that would be a way that the oracle could return these pairs, and so you get sort of this pair of s and V^pi(s), and then let's say, given that we have all this data, what we wanna do is fit a function. We wanna fit our parameterized function to represent all that data accurately. So, we wanna find the best representation in our space, um, of the state-value pairs. So, if you frame this in the context of stochastic gradient descent, what we're gonna wanna do is just directly try to minimize our loss between the value that we're predicting and the true value. So, right now imagine someone's giving us these true state-value pairs and then we just want to fit a function approximator to that data. So, it's really very similar to just doing sort of supervised learning. Um, and in general we're going to use the mean squared loss, and we'll return to that later. So, the mean squared loss in this case is that we're just going to compare the true value to our approximate value, and our approximate value here is parameterized by a vector of parameters. Um, and we're just gonna do gradient descent. So, we're gonna compute the derivative of our objective function, and when we compute the derivative of that, then we're gonna take a step, and we're gonna do stochastic gradient descent here, which means we're just gonna sample the gradient.
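Jumping slightly ahead to what that sampled update looks like in code: the following is a minimal sketch, under the assumptions of a linear approximator V_hat(s; w) = x(s) . w and a hypothetical oracle_samples generator standing in for the oracle's (feature vector, true value) pairs — none of these names come from the course code.

import numpy as np

def sgd_step(w, x, v_true, alpha):
    v_hat = x @ w                               # current prediction for this state
    # d/dw (v_true - v_hat)^2 = -2 (v_true - v_hat) x; the factor of 2 is folded into alpha
    return w + alpha * (v_true - v_hat) * x

def oracle_samples(n=1000, rng=np.random.default_rng(0)):
    # stand-in for the oracle: random features whose values come from a fixed "true" weight vector
    w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    for _ in range(n):
        x = rng.normal(size=5)
        yield x, x @ w_true

w = np.zeros(5)                                 # five made-up features
for x, v_true in oracle_samples():
    w = sgd_step(w, x, v_true, alpha=0.01)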
So, what I mean by that is that if we take the derivative of our objective function, what we would get is we'd get something that looks like this. [NOISE] And what we're gonna do is we're going to take, I'm going to use this as shorthand for updating the weights, I'm gonna take a small step size in the direction of this as evaluated for one single point. So now, there's no expectation and this is just for a single point. [NOISE] So this is stochastic gradient descent where we're not trying to compute the average of this gradient we're going to- we're trying to just sample this gradient, evaluated at particular states. And what I've told you right now is that someone's given us these pairs of states and the true value function. So you just take one of those pairs, compute the gradient at that point and then update your wave function and do that many many times. And the nice thing is that the expected stochastic gradient descent is the same as the full gradient update. Um, so this has nice properties in terms of converging. Yes a name first please. Um, so just to confirm, uh, why is the expectation over policy and not over a set of states if you're saying, if SGD is a single state? So this is over the distribution of states that you'd encounter onto this policy. Yeah, the question was you know wh- why do it over- what does the expectation mean in this case? In this case it's the expected distribution of- of states and values you'd get under this policy. [NOISE] And that's, uh, it's an important point, will come up later. It'll come up again later in terms of sort of what is the distribution of data that you're going to encounter under a policy. Of course, you know, in reality we don't actually have access to an oracle to tell us the true value function for any state. Um, if we did we'd already know the true value function and we wouldn't need to do anything else. Um, so what we're gonna do now is talk about how do we do model-free function approximation in order to do prediction evaluation um ah without a model. Okay. So, if we go back to what we talked about before we thought about EBV sort of Monte-Carlo style methods or these TD learning style methods um where we could adaptively learn online a value function to represent the value of following a particular policy. Um, and we did this using data. And we're going to do exactly the same thing now except for we're gonna have to whenever we're doing this sort of update step of um do- updating our estimator with new data, we're also going to have to do function approximation. So instead of just like um incrementally updating our table entry about the value of a state, now we also have to re approximate our function whenever we get new data. All right. So, when we start doing this we're going to have to choose a feature vector to represent the state. Um, let me just ground out what this might mean. So let's imagine that we're thinking about a robot, uh, and a robot that, well robots can have tons of really amazing sensors but let's imagine that it's old school and it just has a laser range finder. Um, a lot of laser range finders used to basically be a 180 degrees um, and so you would get distance to the first obstacle that you hit along all of this 180 degrees. So maybe here it's like two feet and this is 1.5 feet, this is 7 feet. And this sort of gives you an approximation of what the wall looks like for example. So here's our robot. 
It's got a sensor on it, the laser range finder, and it's telling us the distance to the walls. So what would the feature representation be in this case? It would simply be, for each of the 180 degrees, what's the distance: one degree, two degrees, and so on. That's an example of a feature representation. Now, that sounds like a pretty good one, maybe slightly primitive but generally reasonable, so what's the problem with it? Well, it probably isn't Markov. A lot of buildings have hallways where you would sense, on my left and my right there's a wall about two feet away, and there's nothing in front of me, at least out to the limit of my laser range finder, so it reads out of range. And that would be true for many different parts of the same hallway, and true for many different hallways. So there would be a lot of partial aliasing. This is a feature representation that is probably not Markov, but it might still be a reasonable one on which to condition decisions: maybe if you're in the middle of the hallway and that's what it looks like, you just want to go forward. And that's an example of a type of feature representation. It again emphasizes the point that the choice of the feature representation ends up being really important. For those of you who have taken deep learning classes, you've probably already heard this, but before deep learning there was, and still is, a huge amount of work on feature engineering: figuring out the right way to write down your state space so that you can make predictions or make decisions. One of the nice things about deep neural networks is that they push back that feature selection problem, so you can use really high-dimensional sensor input and do less hand tuning. What do I mean by hand tuning? Well, in this case you could use the raw features, how far away things are along each of the 180 degrees, or you could imagine having higher-level abstract features, like trying to detect whether there are corners. You could do some pre-processing on the raw data to extract the features you think are relevant for making decisions. The problem with doing that is that if you pick the wrong set, you might not be able to make the decisions you want. Yes, name first please. "Could you please elaborate on why this is not Markov, this getting of the 180 degrees?" Yeah, so the question is, can I elaborate on why this is not Markov? If you just have 180 degrees of range readings for a robot, think about something like a long hallway. Say this is floor one and this is floor two of a building like Gates, for example. If your little robot is driving along, sweeping its laser range finder to measure the distance to everything, you're not going to be able to distinguish with that representation whether you're on floor one or floor two, because your immediate sensor readings are going to look identical. In fact, you can't even tell where you are within that hallway. Yeah? "So can we generalize that if we have partial aliasing then we say it's not Markov?" Great question. The question is, can we generalize to say that if we have partial aliasing it's not Markov?
Yes. I mean, you could change the state representation to be Markov by including the history, and then each individual observation would be aliased but the whole state representation would not be. But in general, yes: if you have a state representation for which there is aliasing, it's not Markov. It might still be that you can do pretty well with that representation, or you might not, but it's good to be aware of in terms of the techniques you're applying. Good questions. All right, so let's think about doing this with linear value function approximation. What do I mean by linear value function approximation? It means we're simply going to have a set of weights and take the dot product with a set of features. So maybe it's my 180 sensor readings, and I have a weight for each of those 180 features. We can use that to represent a value function, or you can do the same thing for a state-action value function. Those of you already thinking about state-action value functions might notice there are at least two ways to do that once you get into Q, just to mention it briefly: you could have a separate weight vector for each action, or you could encode the action as additional features. Multiple different choices; you get different forms of sharing. But right now we're just thinking about estimating the value of a particular policy, so we'll stick with state values. Remember W is a vector and x is a vector, and x(S) gives us the features of that state. So the real state of the world might be where the robot is, and the features you get out are those 180 readings. We're again going to focus on mean squared error: our objective function is the mean squared error between the values we're predicting and the true values, and our weight update is the learning rate times the derivative of that objective. So what does this look like in the case of linear value function approximation? We take the derivative of J, using the fact that our estimate is x(S) transpose W. What we get in this case is delta W equals alpha times (V^pi(S) minus x(S) transpose W) times x(S), because the derivative of x(S) transpose W with respect to W is just x(S). Yes? "Is this an expected value over all states, or for a particular state?" Great question, remind me your name one more time. So the question is, is this an expected value over all states or for a particular state? When we do the update of W, we evaluate this at one state, so we do this per state. We'll see different algorithms for it, but generally we're doing stochastic gradient descent, so we do this at each state. The expected value here you can think of as being over the state distribution sampled from this policy: if you were to execute this policy in your real MDP you would encounter some states, and we'll talk shortly about what that distribution looks like, but we want to minimize our error over the state distribution we would encounter under that policy. Good questions. Okay, so if we look at this form, what does it look like?
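(Before unpacking that form in words, here is a minimal code sketch of the linear value estimate v_hat(s) = x(s) dot w and the per-state update delta_w = alpha * (V^pi(s) - x(s) dot w) * x(s) just derived. The feature vector, the oracle value v_pi_s, and the learning rate are invented for illustration; nothing here is from the lecture's slides.)

```python
import numpy as np

def v_hat(x_s, w):
    """Linear value estimate: dot product of features and weights."""
    return x_s @ w

def sgd_update(w, x_s, v_pi_s, alpha):
    """One stochastic gradient step toward an oracle value V^pi(s)."""
    return w + alpha * (v_pi_s - v_hat(x_s, w)) * x_s

# Illustrative numbers (not from the lecture):
w = np.zeros(3)
x_s = np.array([1.0, 0.5, 0.0])   # features of some state s
v_pi_s = 2.0                      # oracle value for s (assumed known here)
w = sgd_update(w, x_s, v_pi_s, alpha=0.1)
print(w)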
It looks like we have a step size, which we've seen before with TD learning; then we have a prediction error, which is the difference between the true value function and the value we're predicting under our estimator; and then we have a feature value. That's one of the nice aspects of linear value function approximation: the updates take this very natural form of how far off you were from the true value, weighted by the features. Yeah? "A question about the math here: you have the negative inside, on V^pi minus the estimate. Shouldn't there be a negative x there, with the negative pulled outside as well?" So the question is about being careful with where the negatives come out. Yes, you could push that negative outside; in general alpha is a constant, so you can flip its sign. Generally, if you're minimizing, you're going to be subtracting this gradient from the weights, but you do want to be careful, depending on how you define your alpha, to make sure you're taking gradient steps in the right direction. Okay, it's a good question. Okay, so how would we do this, remembering again that we don't actually have access to the true value function? In this equation, this assumes an oracle has given you the value of a state under that policy, and of course we don't have access to that. So what we're going to do is use the same types of ideas as what we saw in tabular learning, now with value function approximation. The return G_t, which is the sum of rewards from timestep t until the end of the episode, is an unbiased, noisy sample of the true expected return for the state we're in at timestep t. So we can think of Monte Carlo value function approximation as doing supervised learning on a set of state-return pairs. So now, what we're doing here is substituting in G_t as an estimate of the true value: we don't know what the true value is, but we know the Monte Carlo return is an unbiased estimator, so we substitute that in. Okay, so what does that mean if we're doing linear value function approximation? It means that inside of our weight update we have a G here. We take the state, we take the sum of rewards over that episode, so again this can only be applied in episodic settings, just like Monte Carlo in general. Then we take the derivative, which in this case is just x, our features, because we're using a linear value function approximator, and on the last line I'm just plugging in exactly what our V-hat estimator is. So we're comparing our return to our current estimate and then multiplying by our features. And as usual, we have the problem that G might be a very noisy estimate of the return. Yes, the name first, please. "Do we distinguish first-visit and every-visit, like before?" Great question: do we distinguish between first-visit and every-visit? Yes, the same exact distinctions apply to Monte Carlo that applied before.
[NOISE] So, [NOISE] I'm here, I'm showing a first-visit variant of it, but you could also, could also do every visit. [NOISE] And it would have the same [NOISE] strengths and limitations as before. Every visit is biased, asymptotically it's [NOISE] consistent. Okay, so what does the weights look like? In this case, we would say weight is equal to the old weights plus [NOISE] Alpha times G_t of s minus v, uh, of sw, remembering that this is just x times w for that state, [NOISE] times x of s. [NOISE] So, it's very similar to what we saw before for Monte Carlo, um, uh, approximate Monte Carlo policy evaluation. [NOISE] Um, what we do is we start off, in this case now instead o- of having a value function, we just have a set of weights, um, which is gonna now be the zero vector to start. And we sample an episode, you have to sample all the way to the end of the episode using the policy, [NOISE] um, and then we step through that episode and if it's the first visit to that state, then we compute the return from that state till the end of the episode, and then we update our weights. Yeah? Um, just to check on that, [NOISE] are you adding, uh, the learning rate, uh, because of the mechanism, uh, reward? [NOISE] Considering that, uh, question is about, um, the Alpha where, oh, in terms of negative versus positive? Right. Each one [inaudible] gradient. Yeah. So, in general, this is gonna look like [NOISE] this. I'm gonna be a little bit loose on those. Um, Alpha is gonna be a learning rate, that's, um, a choice. Generally, we're gonna be, um, trying to minimize our objective function that we're gonna be reducing our weights, um, uh, and will need to be able, again, be a little bit careful about how we pick Alpha over time, um, and, and this has been evaluated at each of the states that we encounter along the way. [inaudible] and just to be, uh, [NOISE] careful on step six, read again factor or just adding up of notice now? Good question. Um, uh, on step six, um, uh, was it ? sorry. said, um, "Do we need to have a Gamma function?" Um, it's a good question. Um, in episodic RL, you can always get away with Gamma being one. Um, so if it's an episodic place, Gamma can always equal one. It is also fine to include Gamma here. [NOISE] So here, generally in episodic cases, um, you will set a Gamma being equal to one because one of the reasons why you set our, our Gamma to be less than one is to make sure things are bounded in terms of their value function, but then the episodic case, it is always guaranteed to be bounded, um, but it is also completely fine to include a Gamma here, yeah. [NOISE] So, I got a couple of questions about same point, um, about this, this G, so when we do that, it seems like we'll and, uh, sam- sampling G's that have reward- rewards over episodes of different lengths, [NOISE] but, so doesn't that close their distribution without stationary and more variance? This question [inaudible] there's a problem with the fact that, um, the returns you're taking are gonna be sums over different lengths. [NOISE] It isn't. Um, so, uh, you're always trying to estimate the value of being in this state, um, which itself under this policy. Um, and in episodic case, you might encounter that state early on in the trajectory or late in the trajectory, and your, your value is exactly gonna be averaged over whether you encountered early or late and one of the returns. So there's no problem with, um, we're assuming all of your episodes are bounded, they have to be finite. 
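(Pulling the algorithm just sketched into one place, here is a minimal first-visit Monte Carlo sketch with a linear value function; the discussion of episode lengths continues right after it. The names sample_episode, features, and n_features, and the episode format of (state, reward) pairs, are assumptions made for illustration, not part of the lecture.)

```python
import numpy as np

def mc_linear_policy_eval(sample_episode, features, n_features,
                          num_episodes=1000, alpha=0.01, gamma=1.0):
    """First-visit Monte Carlo policy evaluation with a linear value function.

    Assumptions (not from the lecture): sample_episode() runs the fixed
    policy to termination and returns a list of (state, reward) pairs;
    features(s) returns a length-n_features numpy vector x(s).
    """
    w = np.zeros(n_features)
    for _ in range(num_episodes):
        episode = sample_episode()
        # Returns G_t computed backwards over the episode.
        G = 0.0
        returns = [0.0] * len(episode)
        for t in reversed(range(len(episode))):
            _, r = episode[t]
            G = r + gamma * G
            returns[t] = G
        seen = set()
        for t, (s, _) in enumerate(episode):
            if s in seen:          # first-visit: skip later visits to a state
                continue
            seen.add(s)
            x = features(s)
            w += alpha * (returns[t] - x @ w) * x   # move toward the return
    return w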
So the episode has to end with probability one. If that is true, then your returns are always bounded, and you can just average over them, and that's fine. Sometimes you might encounter a state really early in the trajectory and accumulate a lot of reward after it; other times you might encounter it near the end and accumulate very little. The value of interest is the expectation over all of those. Yeah? "Just a point of clarification: so essentially you're updating this value approximation every episode, and not just once. You're not just updating the weights once per episode, but many times, right?" Right: you look at all of the states you encountered in that episode and, for each of those, you update your weight vector. "Which is equivalent to generating all the episodes and then trying to fit them in a single batch?" Well, what if we did this in a batch setting, generated all the data, and then fit it afterwards? This is an incremental approach to doing that, and it ends up converging to the same thing. Question, yeah? "I'm just wondering, should there be a gamma to the j minus t, so we slowly start discounting going forward?" Good catch. Again, you shouldn't need a gamma in this case; in general, in the episodic case there would be no gamma. But it's good to be precise about these things. Okay. All right. So, let's think about this for a particular example. It turns out that when we start to combine function approximation with making decisions, and doing this sort of incremental update online, things can start to go badly. What I mean by that is that we may not converge, and we may not converge to places we want in terms of representing the optimal value function. So there's a nice example from when people were really starting to think hard about function approximation in the early 1990s: Baird came up with an example that illustrates some of the challenges of doing function approximation when combining it with control and decision making. We're going to introduce this example now, for Monte Carlo policy evaluation, and we'll see it a few times throughout the class. So, what is this example showing? There are going to be two actions. Action a_1 is the solid lines, and those all deterministically go to what I'm going to call state S7; and these are states S1, S2, S3, S4, S5, S6. What you can see inside of the bubbles is each state's feature representation. So, remember I said that we would have a state and then we could write it down as a set of features. So, what does S1 look like? Feature one is 2, features two through seven are 0, and feature eight is 1, so [2, 0, 0, 0, 0, 0, 0, 1]. S2 looks the same but with the 2 in the second position, S3 with the 2 in the third position, and so on, until we get to S7, which looks a little bit different from the rest (in the standard version of this example, S7's features are 1 in position seven and 2 in position eight). That is the feature representation of those states. Now notice that it looks pretty similar to a tabular representation. In fact, there are more features than there are states: there are only seven states here and there are eight features.
That's completely possible, right? Your feature representation can be larger than the number of true states in the world. So, then we have action a_1, which always takes us from any state deterministically to state S7, and action a_2, which is denoted by the dotted lines. What action a_2 does is, with probability one-sixth, take you to state S_i, where i is in one to six. So it basically spreads you uniformly across one of the first six states. There are only two actions: either you deterministically go to state S7, or, if you take the second action, you go to one of the first six states with equal probability. And it's a pretty simple control problem, because the reward is zero everywhere, for all actions. So the value function for this is zero, because there are no rewards anywhere. And yet we can start to run into trouble in some cases. So, before we get to that, let's first just think about what a Monte Carlo update would do. Let's also imagine that there's some additional small probability that from S7 we go to a terminal state: say with probability 0.99 we stay in S7 and with probability 0.01 we terminate. This is a slight modification, but I'm doing it just so we can handle the Monte Carlo case and think of episodes ending. So, if you're in states one through six, you can either go to S7 or stay in states one through six. If you're in S7, you can go to states one through six, stay in S7, or terminate. All right. So what might an episode look like in this case? Let's imagine that we are in state S1 and we took action a_1. Actually, before I do that, I'll specify the reward: the reward was zero, and we went to S7. We took action a_1, got zero reward, and stayed in S7. We took action a_1 again, got zero reward, and then we terminated. That's our episode. Okay. So, now we can think about what our Monte Carlo update would be. Let's start with state S1 and do the Monte Carlo update for it. For state S1, the return is what? Zero. Right, the return is zero. What is x? I should tell you: let's start by initializing all of our weights to one. So, what is our initial estimate of the value of state S1? The representation of S1 is [2, 0, 0, 0, 0, 0, 0, 1], all the weights are one, so the value is three. That's just x transpose W. Okay. So, then what does our update look like? Of course I have to tell you what alpha is; let's say alpha is 0.5. So the change in the weights is 0.5 times (0 minus 3) times the feature vector x. The feature vector is [2, 0, 0, 0, 0, 0, 0, 1], so we get minus 1.5 times [2, 0, 0, 0, 0, 0, 0, 1], which is [-3, 0, 0, 0, 0, 0, 0, -1.5]. Notice this gives us an update for every single weight, but it's only nonzero for the weights whose features are nonzero in this particular state: the first weight and weight eight. And so if we then compute the new weights, w plus delta w, our new weight vector is [-2, 1, 1, 1, 1, 1, 1, -0.5]. So, that would be one Monte Carlo update for the first state.
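(Here is a tiny numeric check of that single Monte Carlo update, just redoing the arithmetic above in code, with the feature vector, initial weights, return, and alpha exactly as in the worked example.)

```python
import numpy as np

x_s1 = np.array([2., 0., 0., 0., 0., 0., 0., 1.])  # features of S1
w = np.ones(8)                                      # weights initialized to 1
alpha = 0.5
G = 0.0                                             # Monte Carlo return for S1

v_hat = x_s1 @ w                                    # 3.0
delta_w = alpha * (G - v_hat) * x_s1                # [-3, 0, ..., 0, -1.5]
w = w + delta_w
print(v_hat)   # 3.0
print(w)       # [-2.  1.  1.  1.  1.  1.  1. -0.5]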
Now you would do this for every single state in that episode. Say, you would then do it for the first time you see it and the algorithm I've defined before. So, we'd next to this for state S7 as well, where the return would also be zero but the value would be something different, so we would get a different, um, well actually in this particular case the value is also three. Um, it depends on if you've already updated your w then your, your value will already be different. Yeah. So, we're doing SGD per state not per episode. questions is are we doing SGD per episode or state? We do it per state. Yeah. In the previous slide where we had before every state- ev- every encounter, does that mean that- For every- for every first visit in that episode. So, yeah. And it's within that specific- if so then you go to a new episode that would be S7. question is about through this first visit, we basically step along that episode similar to what we did with Monte Carlo before and the first time we are encountering state in that episode we update the weights using its return. And when we do that for every single unique state and that episode the first time we see it. And then after all of that we'd get a new episode. Okay. All right. So, this is what would happen. Um, and you can see that the changes can be fairly large because we're comparing like the full return to our value function. Um, it depends of course on what our alpha, alpha is an alpha can change over time. And generally we'll want alpha to change over time in order to get convergence. Um, this gives an example of sort of what Monte Carlo update would look like in this case with linear value function approximator. Okay. So, a natural question might be, um, does this do anything reasonable? Are we guaranteed that this is gonna converge to the right thing? Um, and what does the right thing mean here? Um, we're constrained by our linear value function approximator. So, we're gonna say are we gonna converge to sort of like the best thing in our linear value function approximator. Okay. Before we do this let's just talk for a second about, um, the distribution of states and how that influences the result. So, if you think back for maybe the first or second lecture we talked about the relationship between, um, Markov processes, Markov reward processes, and Markov decision processes. And we said that once you define a particular policy, then your Markov decision process is actually a Markov reward process. Where you can think of it as, um, a chain where the next state is determined by your dynamics model, where you only use the action according to your policy. So, if you run that, if you run your sort of Markov chain defined by an MDP with a particular policy, you will eventually converge to a probability distribution over states. And that distribution overstates is called the stationary distribution. It's a probability distribution its sayings are like what percentage of the time you're going to be in state one, on average versus state two et cetera. Has to sum to one because it's a probability distribution. You always have to be in some state and it satisfies a balanced equation. So, it says that the probability distribution over states before, um, I summed- yeah, I guess. Let me just flip this. I think it's a little bit easier to, to think about it the other way around. You've got, um, d of S prime is equal to sum over S sum over a. 
We're doing the sum over a here so that we allow ourselves to have stochastic policies. We look at all the actions we could take under the current state, and then we look at where we could transition to on the next step. So we're in some distribution over states, we think of all the actions we could take from each of those states and where we might transition to, and that gives us a new distribution over states S prime. And those two distributions have to be identical. This is often also thought about in terms of a mixing property: when your Markov chain has run for long enough, this balance equation will eventually hold, and it just says that your distribution over states on the previous time step has to be exactly the same as your distribution over states on the next time step, once this process is fully mixed. It's telling you, on average, what the probability is that on any particular time step you're going to be in a particular state. This is not telling us how long it takes for this process to occur; that depends a lot on the underlying dynamics of the system. It might take millions of steps until you reach the stationary distribution, or it might mix pretty quickly. It depends on the properties of your transition matrix under the policy. I'm not going to get into any of that in this class; it's just important to know that you can't simply wait 100 steps and assume you're in the stationary distribution. That depends on the problem. Yeah? "Have there been any proven bounds on the mixing time for these kinds of Monte Carlo methods?" Not that I know of; there might be some. It's a really tricky issue, because you often don't know how long it will take to get to this stationary distribution. There is a really cool paper that came out about a month ago that talks about how, when we're thinking about off-policy evaluation, which we'll talk more about later today, instead of thinking about per-step importance ratios, about whether you would take a certain action under a certain policy or not, you can think about these stationary distributions and the difference between them under different policies. The problem is, you often don't know how long it takes, and whether your data has reached that stationary distribution. It would also be really nice if there were an easy test to tell whether this was true; that's also really hard to know. Yeah? "You gave a long prelude about saying things might not converge, but everything looked fine there. Why?" Yes, and we're going to get to that: in the on-policy setting, where we're just doing policy evaluation, everything is going to be fine. It's only when we get into the control case, where we're using data from one policy to estimate the value of another, that in this example and many others things start to go wrong. So we'll use this as a running example, but right now there's no reason for you to believe it's pathological. Okay. So this is the stationary distribution. And then the convergence guarantees are related to that. Okay.
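(As a concrete illustration of the balance equation d(s') = sum over s of d(s) times sum over a of pi(a|s) P(s'|s,a), here is a minimal sketch that finds the stationary distribution of the Markov chain induced by a policy by repeatedly applying the induced transition matrix. The function name and the small 3-state matrix are made up for illustration; nothing here says anything about how fast mixing happens.)

```python
import numpy as np

def stationary_distribution(P_pi, iters=10_000, tol=1e-10):
    """Power-iterate d <- d @ P_pi until the balance equation holds.

    P_pi[s, s'] is the state-to-state transition probability of the
    Markov chain induced by the policy: sum_a pi(a|s) * P(s'|s, a).
    """
    n = P_pi.shape[0]
    d = np.full(n, 1.0 / n)          # start from a uniform distribution
    for _ in range(iters):
        d_next = d @ P_pi
        if np.max(np.abs(d_next - d)) < tol:
            break
        d = d_next
    return d

# Illustrative 3-state chain (not from the lecture):
P_pi = np.array([[0.9, 0.1, 0.0],
                 [0.0, 0.8, 0.2],
                 [0.3, 0.0, 0.7]])
d = stationary_distribution(P_pi)
print(d, d.sum())   # d satisfies d = d @ P_pi and sums to 1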
So what we're going to do is define the mean squared error of our linear value function approximator with respect to the stationary distribution. Why is this reasonable? Well, because you probably care more about your function approximation error in states that you visit a lot; if a state is really, really rare, it's probably okay to have bigger error there. So you want your overall mean squared error to be defined under that stationary distribution. This is the mean squared value error: it compares what we predict versus the true value, weighted by this distribution over states. And what we're assuming for right now is that the approximation we're using is a linear value function approximator. Let me just note, for historical reasons, John Tsitsiklis and Ben Van Roy. John is at MIT; I had the pleasure of him teaching me probability, which was great. And Ben Van Roy is here, and was one of John's PhD students or postdocs. Anyway, around 1997 people were getting really interested in what happens when you combine function approximation with reinforcement learning, and whether things were good or bad, and they're responsible for this nice analysis. So, let's assume we have a linear value function approximator. What you can prove is that if you do Monte Carlo policy evaluation with linear value function approximation, you converge to the weights which have the minimum mean squared error possible. Basically the best you could hope for. This is saying, in the limit, as you have lots and lots of data and run this many, many, many times, you converge to the best weights possible. Now, this error might not be zero, because it might be that your value function is not representable with your linear set of weights, but it's going to do the best job it can: it's basically doing the best linear regression you can do on your data. So that's a nice sanity check. It converges to the best thing you could hope to do. Some people have been asking: okay, you've shown me this incremental method, and maybe in some cases that's reasonable. Maybe you're running a customer recommendation system, getting data over time, and updating the estimator as you go. But in some cases you might have access to a whole bunch of data from this policy all at once. Couldn't you just use that more directly? The answer is yes. This is often called batch Monte Carlo value function approximation. The idea is that you have a whole bunch of episodes from a policy, and the nice thing is that now you can just analytically solve for the best approximator. So, again, our G_i's are going to be our unbiased samples of the true expected return, and N is our data set. This is really a linear regression problem: we use our unbiased samples as estimates of the true value function, and we find the weights that minimize this mean squared error. You take the derivative, you set it to zero. It's linear regression; you can solve for this analytically. So, just like how we talked about doing policy evaluation analytically in some cases.
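(Here is a minimal sketch of that batch fit: stack the feature vectors of the visited states into a matrix X, stack the Monte Carlo returns into a vector G, and solve the least-squares problem for w. A least-squares solver is used instead of an explicit matrix inverse; the tiny data arrays are invented placeholders, not from the lecture.)

```python
import numpy as np

def batch_mc_fit(X, G):
    """Least-squares fit of linear weights to Monte Carlo returns.

    X: (N, d) matrix whose rows are feature vectors x(s_i)
    G: (N,) vector of Monte Carlo returns G_i for those states
    Returns w minimizing sum_i (G_i - x(s_i) @ w)^2.
    """
    w, *_ = np.linalg.lstsq(X, G, rcond=None)
    return w

# Tiny illustrative data set (invented numbers):
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
G = np.array([2.0, 1.0, 3.1])
print(batch_mc_fit(X, G))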
You can also do it analytically in this case for the linear value function approximator. Um, and note again, this is Monte Carlo. We're not making any Markov assumptions. We're just using the full return. So, this is also fine in non-Markov environments. Yeah. Can you speak to the [inaudible] of this approach versus our, our [inaudible] that we use [inaudible] policy evaluation. Yeah. [inaudible] Okay. So whe- wha- when we'd wanna do this versus the other derivative one. This generally has higher computational cost. X's can be a very large matrix. It may not be possible to even write down X, X. Um, is all of your data in the, in the future representation form, and it requires taking a matrix inverse. [NOISE] Um, so that may not be feasible, if you've got, you know, huge feature vectors, um, and, you know, millions or billions of customers. [NOISE] Um, Facebook can't do this, um, and do this, er, er, directly. Um, and also, you know, if you're doing this, you could do this, sort of, incrementally, but you're always refitting with all of your data. Um, that also could be pretty expensive. So most of it it's about memory and computation. Um, if you have a really small case, it's probably a good thing to do. And it also depends, whether you already have all your data or not. Yeah. [NOISE] You could also do some batch as well, right? And that could help with convergence and not having your, um, radiant estimations fluctuating crazily. [inaudible] So, this, of course there's an in-between. So, you could do, you don't have to. If you have access to quite a bit of data, you could either do it completely incrementally or all batch, or you could do some batches. Um, [NOISE] and there's some nice, uh, work by my colleagues. And also us showing that in, in terms of, um, [NOISE] reading into deep learning, there can be a lot of benefits to doing, sort of, some amount of this analytical aspect over like, you know, a sub batch of data [NOISE] because, um, you're, sort of, particularly when you get into TD learning. Or, kind of, proper getting information a lot more quickly than you are, um, if you're just doing this, sort of, incremental slow update. Because, remember, in TD learning we're also, kind of, only doing like, one step of backup compared to kinda propagating all of our information back, like we do with Monte Carlo. All right. So now we're gonna get into temporal difference learning. Um, so remember in temporal difference learning, we're gonna use both bootstrapping and sampling. Monte Carlo only uses sampling to approximate the expectation. [NOISE] TD learning also use bootstrapping, because we don't have to wait till the end of an episode. Um, we just bootstrap and, like, combine in our estimated ah, expected discounted sum of returns by using our current value function. So in this case, what we used to do is, we would bootstrap. This is the bootstrapping part. And our- what we often call our target is the reward plus gamma times the value of the next state. And I remember the reason this is sampling is, um, we're sampling this to approximate our expectation. We're not taking the full probability of S prime, given as a, and summing over all of S prime. So before we did this and we represented everything as a table. [NOISE] Now, we wanna not do that anymore. Um, so let me just- before we get into this, let me just remind us the three forms of like- of the, the forms of approximation we're gonna have now. Now, we're gonna have a function approximation, bootstrapping and sampling. 
But we're still on policy. What do I mean by that? Right now we're still just doing policy evaluation, which means we're getting data from the policy whose value we're trying to estimate. It turns out things are just way easier in that case, and perhaps that should be somewhat intuitive: it's quite similar to the supervised learning case. In supervised learning, you're generally assuming your data is IID; our data is a bit more complicated than that, but it's closer to it in this case because we have a single policy. There isn't the non-stationary aspect that comes up when we start changing the policy. So, right now we have these three forms of approximation, function approximation, bootstrapping, and sampling, but we're still on policy, and mostly things are still going to be okay in terms of convergence. So, what does that look like? We're again going to think about doing the equivalent of supervised learning. We'd like to just have our states and an oracle telling us the values and fit our function; instead of having the oracle, we're going to use our TD estimates. So, we use our reward plus gamma times our approximate value of the next state, and that forms our estimate of what the true value is. Okay. And then we find the weights that minimize the mean squared error in that setting. So, if we do that in the linear case and write it out, this is the TD target. Just as a quick side note, I'm going to use the words TD(0) a lot. We haven't talked about it in this class, but there's actually a whole family of slight variants of TD, often called TD(lambda), and if you're reading the book that might be a little bit confusing, so I want to be clear that we're doing the TD(0) variant, which is probably the most popular; there are a lot of other extensions, but for simplicity we'll just focus on TD(0) for now. So, this is the TD target, this is our current estimate, and then we take the derivative. In this case that means we end up plugging in our linear value function approximator for both the current state and the next state, and looking at that difference, weighted by the feature vector. So, it should look almost identical to the Monte Carlo update, except for the fact that now we're bootstrapping. Instead of G, the return we saw for a particular episode, we're using the immediate reward plus the estimate of the discounted sum of future rewards, which we get from our value function approximator. So, this is what the TD learning algorithm with linear value function approximation for policy evaluation looks like. Again we initialize our weight vector, we sample a tuple, and then we update our weights. We get to update our weights after every single tuple, just like what we saw for TD learning, and what we can see here is that we're just plugging in the new estimator minus the old estimator, times x. So, let's see what this looks like on the Baird example. Again we have the same state feature representation as before: state S1 is [2, 0, 0, 0, 0, 0, 0, 1]. We still have zero reward everywhere. Let's set our alpha equal to 0.5. Now we can say that there is no terminal state, because TD learning can handle continuing, online learning.
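(A minimal sketch of the TD(0) update with linear features just described: after each (s, r, s') transition under the policy, move the weights toward the TD target r + gamma * x(s') dot w. The names step, features, and n_features, and the environment interface, are assumptions made for illustration.)

```python
import numpy as np

def td0_linear_policy_eval(step, features, n_features,
                           num_steps=10_000, alpha=0.01, gamma=0.99):
    """TD(0) policy evaluation with a linear value function.

    Assumption (not from the lecture): step() executes the fixed policy
    for one step and returns a transition (s, r, s_next).
    """
    w = np.zeros(n_features)
    for _ in range(num_steps):
        s, r, s_next = step()
        x, x_next = features(s), features(s_next)
        td_target = r + gamma * (x_next @ w)      # bootstrapped target
        w += alpha * (td_target - x @ w) * x      # sampled gradient step
    return w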
So, we're just going to assume that S7 always stays S7 under action a_1. So, a_1 is the solid line and a_2 is the dashed line. And we initialize our weights to all ones. Then let's look at this tuple: just like the first tuple we saw before, imagine we're in state S1, we took action a_1, we got a reward of zero, and we went to state S7. So, why don't you take a minute and calculate what the new weights would be after we do that update, and maybe compare back to the Monte Carlo case in terms of how much they have changed. Feel free to talk to a neighbor. Let me make this a little bigger so it's easy to remember what S7 is. All right. Have the weights moved a lot, or a little? How much did they change compared to what we saw with Monte Carlo? I'm seeing some people indicate smaller. Yes, that's right. Okay. So the initial values of the states: x(S1) times W is still going to be three, and for x(S prime), where S prime is S7, we look that up and it's also going to be three. But now what we have is delta W equals alpha times (zero plus 0.9 times three, minus three), times the feature vector, so that's alpha times minus 0.3 times the feature vector. Remember, before it was effectively minus three, so that was a much bigger update. And so when we add this into our new weights, we move the weights, but we move them much less than before. And this shouldn't be too surprising; it's consistent with what we saw when comparing Monte Carlo updating and TD learning. TD learning only makes these smaller, local changes based on one (state, action, reward, next state) tuple. Monte Carlo says: this is the full episodic return, it's not bootstrapping, the return from starting in state S1 really is zero, so we move a lot more there. TD is saying, okay, I'm going to pretend that the return from state S1 is 2.7, which is close to three, not zero. So when we move our weights over here, the difference is going to be much smaller than what we saw for Monte Carlo, which is similar to what we saw without a function approximator. All right. What about theoretical properties in this case? Pretty good. If you look at TD(0), you're going to converge to weights which aren't necessarily quite as good as Monte Carlo, but they're within a constant factor: they're going to be within one over one minus gamma of the minimum possible. So they're not quite as good as Monte Carlo, but they're pretty good, and depending on your discount factor and the function approximator, this varies in terms of how much it matters.
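(Here is a quick numeric check of the TD(0) update worked through above, with gamma = 0.9 and alpha = 0.5 as in the example. The feature vector for S7 follows the standard form of Baird's example, [0, ..., 0, 1, 2], which is an assumption; it is consistent with the value of three used in the lecture when all weights are one.)

```python
import numpy as np

x_s1 = np.array([2., 0., 0., 0., 0., 0., 0., 1.])  # features of S1
x_s7 = np.array([0., 0., 0., 0., 0., 0., 1., 2.])  # features of S7 (assumed standard Baird form)
w = np.ones(8)
alpha, gamma, r = 0.5, 0.9, 0.0

td_target = r + gamma * (x_s7 @ w)                # 0 + 0.9 * 3 = 2.7
delta_w = alpha * (td_target - x_s1 @ w) * x_s1   # 0.5 * (-0.3) * x(S1)
print(delta_w)        # [-0.3  0.  0.  0.  0.  0.  0.  -0.15]
print(w + delta_w)    # a much smaller change than the Monte Carlo update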
So, just to check our understanding for a second, I've put up both of these results. One says the Monte Carlo policy evaluator converges to the minimum mean squared error possible under your linear value function approximator, and TD(0) converges to within one over one minus gamma of this minimum error. So, again, what is this minimum error? It says: if you could pick any linear value function approximator, how good could it be at representing the true value of your policy? So, let's take just another minute, and this is a good one to talk to a neighbor about: if the value function approximator is a tabular representation, what is the MSVE for both Monte Carlo and TD? Are we guaranteed to converge to the true value V^pi, or not? And if it's not clear what the question is, feel free to ask. "So when you say it's a tabular representation, do you mean that you are reducing the representational capacity of the system?" The question is, if I say it's a tabular representation, what do I mean by that? I mean that there is one feature for each state; it's like a one-hot encoding. So it's the same representation we saw for the first few lectures, where for each state you have a table lookup for the value of that state. Yeah? "Can you please explain what the zero in TD(0) refers to?" Ah, good question. Everything we're talking about in class right now is TD(0). I'm using that name because there are multiple versions of TD, and if you look at the book you'll see TD(lambda) too; I'm just making sure it's clear which version we mean, so if you read other resources you'll know which version of TD this is. All right, the first question. If we're using a tabular representation, can we exactly represent the value of a policy? Well, if for every single state in the world you have a different table entry, yes. It's not going to be practical, we can't actually do this at scale, but you can exactly represent the value of that policy. How could you do this? You could simply run the policy from every single state, do Monte Carlo returns, and average, and that would give you the true value of the state. So you could represent the expected discounted sum of returns by storing it for every state in the table. That means this minimum error is equal to zero, because your functional capacity is sufficient to represent the value. "So what we're seeing is that in expectation the difference between the function approximator and the true value gets to zero, but any individual sample is going to be a bit different. It's zero in expectation but different on any particular update?" In this case, if you have a tabular representation, and this is in the limit, with infinite amounts of data, et cetera, then this will be zero for every single state. You will converge to the right value for every single state if you're using a tabular representation. And that's because, if you think of having literally infinite amounts of data and you run your policy an infinite number of times, then for every state you have an infinite number of trajectories starting from that state, and you can write down that value separately in the table. So it'll be zero. What that means is that the mean squared value error for the Monte Carlo estimator is zero if you're using a tabular representation. And because it's zero, the bound for TD says the mean squared value error for TD is at most the Monte Carlo one times one over one minus gamma, so that is also zero. So if it's a tabular representation, just to sort of connect back to that, none of these methods have any error. Yeah, question at the back?
Your name first? "I'm wondering where the one over one minus gamma constant came from." Yes, the question is where that one over one minus gamma constant comes from. In the interest of time, I'm not going to go through it in detail; I encourage you to read the Tsitsiklis and Van Roy paper. Intuitively, there is error propagating here because of the fact that we're bootstrapping. What this result is trying to highlight is that if your function approximator actually has no error, then there's going to be no difference between Monte Carlo and TD, because for both of them the minimum over w of the mean squared value error is zero, so it doesn't matter whether you're using TD or Monte Carlo. But if that's not true, if you can't exactly represent the value function, then you're going to get error, and you can think of one over one minus gamma as approximately a horizon length: the errors get multiplied by that because you're adding them up. And the reason they get added up is because you're bootstrapping; you're propagating that error back, whereas Monte Carlo doesn't suffer from that. Yeah? "In general, the mean squared error is taken over a distribution of states, under the policy, but the only specific one we've seen is the stationary distribution. Do you ever use another one?" Great question. Right now we're seeing this under the stationary distribution of the states you're going to reach under the policy you care about evaluating. For policy evaluation, I think that's the right choice, because those really are the states you're going to get to under this policy. When we start to think about control, you might want others, for example because you're going to change your policy. Okay. All right, so just briefly more on this: is one of them faster, is one of them better? To my knowledge that's not really understood. If you come across any literature on that, I'd love to hear about it. Practically, TD is often better; empirically the bootstrapping often helps. All right, let's move on briefly to control. It's going to be pretty similar. Instead of representing the value function, we're going to represent the state-action value function, which is what we saw before when we wanted to move from policy evaluation to control. And now what we're going to do is interleave policy evaluation with a value function approximator with performing something like epsilon-greedy policy improvement. This is where things can start to get unstable. What are we doing in this case? We're generally involving function approximation and bootstrapping, we're often also doing sampling, but the really big issue seems to be the off-policy learning. Before, we had this nice stationary distribution, or convergence to a stationary distribution over states, and we're not going to have that anymore, because we are going to be changing our control policy over time, and that changes the distribution of states that we encounter. Sutton and Barto often call this the deadly triad: when you start combining function approximation, bootstrapping, and off-policy learning, things can fail to converge, or converge to something that isn't good.
Alright. But before we get into that, let's think about it procedurally. So now we're going to have Q functions that are parameterized by a W, and we can again do stochastic gradient descent, so it's going to look almost identical to what we had before. And again, stochastic gradient descent samples the gradient, which means for a particular state-action pair we do these updates. So, here what we're going to do is represent our Q function by a set of linear state-action weights. That means we're going to have features that encode both the state and the action, like what I sense when I'm turning left, if I'm the robot, for example. It's going to be a combination of these two, and then, once we have that, we have a weight vector on top of that for Q. So we're not having separate weight vectors for each action; instead, we're having features that try to encompass both the state and the action themselves. And then we can do our stochastic gradient descent on top of that. So, how does this work for Monte Carlo? It's going to look almost identical to before. We're just going to again use our return; now we're defining returns from a particular state-action pair. For first-visit, the first time we reach that state-action pair in the episode, we look at the return, the sum of rewards until the end of the episode, and we use that as our target, our estimate of the true Q function, and we update towards it. In SARSA, we're going to use a TD target: we look at the immediate reward of our tuple plus gamma times Q of the next state we encountered and the action we took there, and then we just plug that in. And then for Q-learning, it's going to look almost identical except that again we plug in function approximators everywhere: the target term is a function of our x, which is a function of S prime and a prime, times our W, with a max over the next action, whereas here this is a function of the state and action we actually took. Everything's linear, and we're just doing different forms of bootstrapping, and the difference is whether or not we take a max. All right. So, I went through that a little bit fast, but it's basically exactly analogous to the first part, which we stepped through more carefully; so far everything is the same but with Q functions now. Why might this get weird or tricky? TD with value function approximation does not really follow a true gradient. I don't have time to go into total detail on that today, but there are some nice explanations of this in Sutton and Barto, Chapter 11; it's a great resource, and we also have lecture notes available online. Informally, we're sort of interleaving this approximate, sampled Bellman backup with what's often known as a projection step, because we're trying to project our value function back into the space of representable functions. And the intuition for why this might start to be a problem is that the Bellman operator, as we showed, is a contraction: when we were doing dynamic programming, we showed that if you do Bellman backups you're guaranteed to converge to a fixed point. When you do the value function approximation, though, it can be an expansion.
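(Stepping back to the procedural part for a moment, here is a minimal sketch of the linear SARSA and Q-learning targets and the shared update described above. The feature arguments x_sa, x_next_sa, and x_next_all, and the function names, are assumptions made for illustration; x_next_all is assumed to stack x(s', a') for every action a'.)

```python
import numpy as np

def sarsa_target(r, x_next_sa, w, gamma):
    """SARSA: bootstrap with the action actually taken at the next state."""
    return r + gamma * (x_next_sa @ w)

def q_learning_target(r, x_next_all, w, gamma):
    """Q-learning: bootstrap with the max over next actions.

    x_next_all is a matrix whose rows are x(s', a') for each action a'."""
    return r + gamma * np.max(x_next_all @ w)

def q_update(w, x_sa, target, alpha):
    """Shared linear update toward a bootstrapped target."""
    return w + alpha * (target - x_sa @ w) * x_sa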
What does an expansion mean? Well, as a reminder, a contraction says the following. If you apply the operator, an operator like the Bellman backup, to two different value functions, the distance between them, under something like a max norm, is less than or equal to their previous distance. Which means that as you apply this operator, the distance between your old value function and your new value function keeps getting smaller and smaller, and eventually you get to a fixed point. The problem is that now we're not doing that anymore. It's more like we apply the operator O to V and then apply some sort of projection operator; I'm just going to call it P. This is the projection operator, which means that when you compute your new value function, it may no longer lie in your value function approximator's space and you have to refit it back into that space. And when you do that, the projection itself can be an expansion. For those of you who are interested in some of the early discussions of this, Geoff Gordon has a really nice paper on averagers from 1995, where they talk about how linear value function approximation can be an expansion. So the Bellman backup is fine, it's a contraction, but when you do this approximation you might expand the distance, and that's one of the problems. Okay. So, if we go back to our Baird example and think about this a little bit more in terms of the control case: let's imagine that we have a setting with two different policies. The first policy, and this is the policy that you want to evaluate, always takes the solid line, so you always take a_1. And your behavior policy, the policy you're using to gather data, takes a_2 six-sevenths of the time and a_1 one-seventh of the time. Gamma is 0.99. And what you do is generate a whole bunch of data under your behavior policy. Now, there's some really cool work on how you deal with correcting for the mismatch between the data you're getting and the policy you want to evaluate. Let's not go into any of that, which I think is super cool, and instead do something super simple: we're just going to throw out all the data that doesn't match. So imagine you just throw away a tuple if the action A is not equal to pi(S). You've generated all these data points; what do I mean by data points here? We have (S, A, R, S prime) tuples. If it turns out that the action taken in a tuple is not the same as the policy you want to evaluate, where you're only ever taking a_1, you just throw out that tuple; you don't update. So now all of your remaining data is consistent with your policy. Then let's imagine you try to do TD learning with that data. The problem is, you can diverge. What do I mean by that? I mean that your weights could blow up. It's super interesting why this happens. The main intuition for it is that your data distribution is not the same as the distribution you'd get under your desired target policy. In particular, if you were actually to run the target policy pi, what would happen? Let's say you start off in state S1. You take a_1, which deterministically takes you to state S7, and you stay in S7 for a really long time because it's deterministic. So you'd get something like S1, S7, S7, and so on.
Even if it wasn't the episodic case and you had multiple episodes, you'd still have very little data about these other states and lots of data about S7. But in the data you get from your behavior policy, because it takes a_2 a bunch of the time, it keeps teleporting you back to one of S1 through S6. Which means the distribution of your data, the distribution of states you visit, looks very different from the states you'd visit under pi. And that is the problem. If you don't account for this mismatch, then the values can diverge, even though all the data is compatible, in the sense that you only ever use a state-action pair if the action is the one your desired policy would take. And this sort of issue can also come up when you're using Q-learning and you're generally updating the policy over time. So, to briefly summarize before we finish: in the tabular case everything converges; it's beautiful. In the linear case, mostly things converge, but you can chatter: you basically converge, but there might be some oscillation. But Q-learning, where we're doing this off-policy aspect, can diverge. And once we get into nonlinear value function approximation, mostly all bets are off. Now, this is a little bit of an oversimplification. There has been a huge amount of work and a huge amount of interest in this, because everyone wants to do function approximation or else we can't tackle real problems. And so, over the last one or two decades, there's been a huge amount of work on this, and there are some algorithms now that do have convergence guarantees. And there's some super cool, really recent work on batch RL that can converge with nonlinear approximators. So there's definitely a lot of work on this that we're not going to get to. I just want to highlight that it's a really important issue, and not just whether it converges, but what it converges to. You might converge to a point which is a really bad approximation: it's stable, your weights aren't blowing up, but it's just a really bad approximator. And some of the critical choices here are your objective function and your feature representation. So, just before we close, I think this is a really nice figure from Sutton and Barto. What they're showing here is that you can think of a plane on which you can represent all of your linear value function approximators. And what happens when you do a Bellman update, or a TD backup, is that you now have a value function that might not be representable in your plane, and you have to project it back. And this lets you quantify different forms of error; basically, it allows you to define different objective functions that you could try to minimize in order to find the best approximator. We've seen one today, essentially this minimum mean squared error approximation over these Bellman-style errors, but that's not the only choice, and it's not necessarily even the best choice, because it might be that the one that has the smallest error there is not the same one that has the best performance in your real problem.
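(For the curious, here is a small self-contained sketch of off-policy TD(0) on a Baird-style problem like the one above. The feature matrix follows the standard form of the counterexample, the state distribution is simplified to roughly uniform sampling as a stand-in for the behavior policy, only transitions consistent with the always-solid target policy are used, and the constants are illustrative. Run long enough, the weight norm typically keeps growing rather than settling, which is the divergence being described.)

```python
import numpy as np

# Feature matrix for a 7-state Baird-style example (standard form, assumed):
# states 1..6 have a 2 in their own slot and a 1 in slot 8; state 7 has a 1
# in slot 7 and a 2 in slot 8.
X = np.zeros((7, 8))
for i in range(6):
    X[i, i] = 2.0
    X[i, 7] = 1.0
X[6, 6] = 1.0
X[6, 7] = 2.0

gamma, alpha = 0.99, 0.01
w = np.ones(8)
rng = np.random.default_rng(0)

for t in range(20_000):
    s = rng.integers(7)   # state visited under the behavior policy
                          # (roughly uniform; illustrative simplification)
    # Keep only transitions consistent with the target policy (always a_1),
    # which all go to state 7 with reward 0.
    x, x_next, r = X[s], X[6], 0.0
    w += alpha * (r + gamma * (x_next @ w) - (x @ w)) * x
    if t % 5000 == 0:
        print(t, np.linalg.norm(w))   # weight norm keeps growing: divergence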
So that's a little bit fast, but it's covered in Sutton and Barto, chapter 11, if you want to go into more detail. Just really quickly, what are the things you should understand? You should be able to implement these methods with a linear value function approximator, both for policy evaluation and control. You should understand whether or not things converge in the policy evaluation case, and when the solution has zero error versus non-zero error. And you should understand qualitatively what issues can come up, so that some of these solutions may not always converge, and that it's this combination of function approximation, bootstrapping, and off-policy learning that causes trouble. All right. That's enough to get started on homework two, which we're going to be releasing this week, today. And then next week we're going to start to talk about deep learning. Thanks. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_9_Policy_Gradient_II.txt | All right. Welcome back everybody. Um, before we get started today, I- does anybody have any questions about logistics, or midterm, or anything like that? We'll be doing a midterm review on Monday, and the midterm will be on Wednesday. Because there's a number of people in the class, we're gonna be spreading everybody across a couple of rooms, and will be, er, sending instructions out about that. Does anybody have any other logistics questions? Yeah. Midterm is during class time right? Midterm is during class time. Um, instructions will also be on the web, but you're allowed to bring a one-page, um, of written notes, um, aside from that everything is closed book. Yeah. Is it okay to type said notes [NOISE] or it has to be handwritten? I think we've already issued a policy on that, Let me just double-check and see what it is. Okay. [BACKGROUND] All right. Okay. Lets go ahead and get started. Um, before we do that, I just want to say thank you to all of you that, ah, participated in the class feedback survey. It's really helpful to me and to everybody else to understand what's helping you learn and what's not helping you learn. Okay. So in terms of, um, the responses, so note, you know, all of these things there's about 230 people registered in the class. Um, so for some of you if you didn't give me feedback, it's hard for me to know what- what's helping you or not helping you learn. So we're just gonna go with what people gave us feedback on. Um, about 65% of you thought it's the right pace, about 27% of you thought it's going too fast, and there's only about 8% of people that think it's going too slowly. Um, so we're gonna keep roughly in the same pace as what we've had before. Um, a number of people noted that they wished it was like a semester-long course. Um, I will mention there's a number of other classes that do reinforcement learning and I highly encourage you to take them. I- I offer an advanced class, and also Ben Van Roy offers a class normally in the spring that's more theoretical. This was super controversial. I didn't think it would be this split. Um, so we offer sessions on Thursday and Friday. Attendance has been really low. Um, I think we've had around like, three to seven people showing up for these sessions. Um, so we thought this was gonna be something that everybody wanted to take out. Because it's about evenly split, I asked all the TAs to compare that people that are coming to their office hours versus the people that are coming to the sessions. Um, we probably have about 4 to 5x people trying to go to office hours than trying to go to sessions. So we're going to switch this to office hours. Just so that we can kind of, serve as many people as we can. Um, so let me just write that there. So we're gonna switch to office hours. Now, the sessions will still be offered on the other days, and they'll still be recorded. So for anybody who wanted to go to them on Thursday and Friday, you can still go, you can still participate on Zoom or you can watch the recorded lecture. Um, but we're gonna switch this to office hours because my TAs have been saying they've had a number of office hours where they've either had to stay really late or they feel like they're not getting to some people. And so again, just in terms of serving the most people. 
I will say when I was going through these responses, it really made me think about the fact that in reinforcement learning and sort of sequential decision making in general, um, we're always optimizing for expected rewards [LAUGHTER] And so that's kind of exactly the same thing we're doing here is that, I know that everybody needs slightly different things, and we're just trying to do our best job and expectation. Um, but it's exactly why things like intelligent tutoring systems and other stuff might be better. Um, okay. In terms of things that people thought were working well for them, we've got a number of positive remarks about doing worked examples in class, doing derivations. Um, a lot of people really like the fact that my iPad had problems on Monday, um, and so we did things on the board. So we'll try to keep doing the same amount or more of derivations. Um, people also were generally really positive about the homeworks. In terms of things, er, we saw repeatedly, so- so what I did is I got, I just co- collated, sort of all the free responses and tried to look for common themes. And anything that came up, you know, three or five times or more, I considered was a common issue that people would love addressed. Um, people would love even more focused on the big picture explaining, um, as well as connecting from the toy examples to real-world examples. So we'll try to do that where we can. Um, I'll also try to make sure that I'm speaking loudly throughout. Several people said that sometimes it was occasionally hard to hear, so I'll try to do a better job on that. If you can't hear me in the back please feel free to raise your hand. Um, and people would like even more examples, um, worked examples. And so in particular, we're going to try to make sure that in sessions, we emphasize worked examples even more. And let me just say again, if this was not one of the things that you were most concerned about, I'm sorry, we can't address all of them this term. Um, er, we definitely and it was kind of amusing to go through this, would have people saying exactly opposite things. Sometimes right in a row, um, in terms of how it was collated about things like, some people don't like the fact that the slides have gaps and I do derivations in class. And a number of other people really like that I do derivations in class. Um, some people felt like it was moving too slowly, other people said it was moving way too fast. Um, so again, we're just gonna try to do the best job we can to address everybody's needs. All right. So today we're going to continue talking about policy search, which as I said before, is probably the most important reinforcement learning thing you'll learn this term. [LAUGHTER] Um, I think this is used really very widely right now, um, in order to optimize functions. And we can again think of policy search here as a lot of things are gonna sound similar to when we were doing value function approximation. And what we're thinking about here is having a parameterized policy. Often we're going to use theta to parameterize It, but we could use W or anything else. But we have a policy that's parameterized. Um, and then we have some value of that policy. And what we're going to want to be doing, is trying to find, you know, a good optima, trying to maximize the value, um, of that policy. And one of the reasons why we did this right after imitation learning was to connect it with the idea of, um, you have to choose a policy class, a way to specify that parameterization. 
And so because of that inherently, it's a place to put in structure. Okay. So just as a recap, I mentioned that we've done a lot of work so far, on model-free value-based methods. We're starting to do work on direct policy search methods now. Um, and today we'll also start to talk more about Actor-Critic methods, where we maintain both. We maintain both an explicit parameterized policy and an explicit parameterized value function. And also throughout all of, you know, the last couple of weeks including yesterday or Monday and today, we're mostly going to be talking about cases where we want to be able to work in really really large state spaces. So I'm just gonna do a brief refresher of last time. Why do we want to do this? Well, we're going to generally be able to guarantee that we converge to a local optima. We don't always have those guarantees for value function based methods. Um, er, and that might be important. Er, it's a nice property to have. Um, the downside is that, if you use policy gradient methods, typically we only converge to a local optima. The last time I showed you the exoskeleton example, where they are using a global optima approach. So there are other ones which can policy based RL isn't inherently always going to get you to a local solution, but the gradient based methods typically will. And the other issue that we were talking about, uh, um, as ways to try to address this is, um, or- or as tools to address the fact that evaluating a policy itself might be rather inefficient, and high-variance. So what we are defining before is a policy gradient where, um, now, er, before we'd sort of thought about these things being parameterized by a theta, so we can either think of the value being, uh, the policy being parameterized by a theta or pi of theta means parameterized. But we're often going to talk about value functions because ultimately, the value function depends on the policy and the policy depends on the parameters. And when we think about what we want out of these algorithms, typically what we'd really like is to try to converge to a really good local optima. Often we don't have very much control over that. Um, but the things we often do have control over is things like how quickly we converge to that local optima. Um, and so we want to use sort of go as quickly as we can down that gradient, if we're doing a gradient-based method, um, and use our data as well as we can. So one of the things we're gonna talk more about today, is when we're doing this sort of policy gradient technique. So we're going to be sort of moving down. Now, we're gonna have our gradient. We're gonna have our functions. This is B pi, and this is our parameterized pi. And as we're moving down towards some local gradient, um, it would be nice if when we update our policy, that it is monotonically improving. So can anybody give me a reason why we might want monotonic improvement? Yeah. To help guarantee convergence. Answer is right. Can help guarantee convergence absolutely. And while I love math as much as many of you, um, that- that is a great reason. But perhaps, I was also thinking of like an empirical reason why we might want that as well. Yeah, in the back. [inaudible] like in a high-stakes situation. So what we've seen before in fact, um, one of my students, ah, [inaudible] was giving a practice job talk yesterday, and he was showing this graph for DQN, which looks something like this. Where this is, like, the performance, this is the reward, and this is time. 
Of course, it doesn't always look like this. And typically when you go to- um, when you read papers, people smooth over many, many runs, but often it looks something like that. That as you're going across multiple episodes or across multiple time steps, like, you're really getting a very jagged up and down performance of your- of your val- of the policy that you're running as you do DQN. So why might that not be good in, like, a high-stakes situation. Yeah, over there. And name first, please. If, like, it's something high-stakes and you have something good and then it goes down, people are going to be upset with you that now it's done something worse, even if it will later go back up. Yeah, what was said is that if the system is a high-stakes scenario, um, if you do- you know, if your policy works pretty well and then the next one, next episode, it works really badly, even if it might go up later, um, you know, your boss still might fire you [LAUGHTER]. I mean, I'm joking, but I think that people are often loss averse and also it's often not tolerable, um, in- it might not be okay, you know, in a company to say, well, this quarter we did really well and next quarter we're going to do, you know, worse, but then eventually, you know, after many, ah, quarters we're going to do well. Like we often may wanna ensure that we're sort of monotonically going up. Um, and in the case of something like patient treatment or other sorts of high-stakes scenarios or airplanes or stuff like that, um, it just probably will not be tolerable to people, if you say we're- we're going to do much worse for this period of time. Ah, no- now there are exceptions to all of this, but I think there are many cases, where you'd really like monotonic improvement, i- if you can. Ah, so I think it's a really- in addition to the theoretical, ah, benefits, ah, it can help us prove things. Ah, it also can just be, um, something that's sort of appealing for people to be actually be able to deploy. And we know that in general that people are very risk- like, very loss averse. So having policies that are monotonically improving, um, can be very nice and DQN and a lot of the value based methods do not have those guarantees. Um, we can talk more also about whether that's always possible, um, in terms of if you wanted to get to a global optima. Yes. Just to be clear, the monotonic improvement in these cases are data that you have access to or have seen, right? So technically like if there's a distribution in terms of your life environment where it may differ somewhat from your actual simulation or, ah, environment, you may not necessarily quantify or improve it, given all that secret data. Is that right? Yeah, which is to say, when we're gonna be- what's- you know, what is this monotonic improvement? What are the conditions under which this will be guaranteed or possible? And are we sort of doing this based on our previous data and making some assumptions about the future da- future data that's collected. Absolutely. We're gonna assume that we're still on the same decision process and that it's stationary. And what I mean by that is that the transition model and the reward model is the same across- you know, you might not have observed all the states yet, but it's the same across episodes. So we're not dealing with the fact that, you know, um, customer preferences have totally changed or, um, you know, climate change is changing your environment. 
We are dealing with the fact that if the world is stationary, that then we're going to be guaranteed to have monotonic improvement. Now the other thing that I'm going to show you that in some cases we can guarantee that. Um, the other really important thing to know is, this is going to- we're going to hope to show monotonic improvement in expectation. So- so the value function has expected reward. So what we're going to be able to hope to say is the series of policies that we're deploying in the environment that their value function is going up. So what does that mean? That means that V_Pi1 we would like that to be less than or equal to V_Pi2 less than or equal to V_Pi3 dot-dot-dot, where this is, sort of, um, the policy we deploy on each iteration or each round. Ah, but it doesn't guarantee that for a single run this policy is better. So you could easily have it that it on average you're deploying a policy that is better, you know, for your airplanes or for patient treatment, et cetera, but for individual patients that might be worse. A- and I think a really interesting active area of research right now is, um, safe reinforcement learning, um, and safe exploration. And a lot of different people are thinking about this, um, including a number of people here at Stanford. And one of the things that we're looking at in our group is, how do you really efficiently get to a safe solution? What do you mean by safe in this case? I mean you might not want to max- maximize expected reward. You might want to be able to maximize, um, some sort of risk averse criteria. And we'd like to figure out ways to really efficiently get to that solution. But there's lots of really interesting stuff that says, you know, how do we try to do policy search? Or how do we do this improvement in cases where we don't just care about expected outcomes? All right. So what we're gonna be trying to do today is move towards, sort of, ideally, not just monotonic improvements, but large monotonic improvements. Um, as you might guess, it is easier to try to achieve small monotonic improvements than it would be to guarantee really large monotonic improvements. Um, does anybody have any intuition for why that would be true? That might be harder. Um, this kind of goes back to the state distributions. So if you change your policy a lot, um, does the state- can the state distribution change a lot in terms of the states you visit? So- so intuitively that should- the answer should be yes. So we've talked some about how any policy induces, um, a state distribution, like if you run it for a long time you're going to have sort of a stationary distribution over states. Um, and if your policy is really different than your old policy, then that state distribution might look really different, which means you might not have very much data. Um, whereas if you have almost exactly the same policy as before, um, you're probably going to be able to have a really good notion of what that value is in the estimate. But we'll get more into that later. So what we're going do today is to try to think about moving beyond what we were talking about last time, where we're trying to do policy gradient methods. And we're trying to do it in a way that was sort of efficient. We're going to talk about other ways to make it more efficient, ah, and less noisy, and then try to go towards monotonic improvement. All right. So that things that we talked about last time, is we started off when we said what can we do in terms of policy gradient? 
One thing we could do is use Monte Carlo returns. Sometimes people use big R of tau, where tau is a trajectory. So you could just run out your policy until the world terminates, or for however many steps you're defining your episodes to be, and then look at the reward per time step. We can also use G_i_t to denote the return we get from time step t onwards in episode i. And what we've said before is that this is an unbiased estimate of the gradient, but it's really noisy. And so we started talking about additional structure we could use in the reinforcement learning problem, where we're assuming the world is Markov, to try to reduce the variance of this estimate. What we talked about last time was using temporal structure, which we did some of on the board. The intuition there was that the reward you get at some time point is not influenced by the later decisions you make, so you don't have to take the complete product of the probability of action given state, because future actions don't retroactively change earlier rewards. That's the intuition. What we're going to start talking about now is other things, which are baselines and alternatives to Monte Carlo. Okay. So what's a baseline? Well, with a baseline we still look at the sum of rewards we get from this time step onwards, the same thing we've often called G_t, the return from this time step until the end of the episode, and we subtract a baseline that depends on the state. What I'm going to show shortly is that by subtracting this baseline, which depends only on the state, your resulting gradient estimator is still unbiased, but it can now have much lower variance. And in particular, often a really good choice is the expected return, which is basically the value function. So why would we do this? Well, then we're increasing the log probability of an action proportional to how much better it is than a baseline, which in general is going to end up being a little bit like an advantage function. So why is this true? Okay. So what are we going to try to do here? We have this high-variance estimate right now. Imagine we didn't subtract anything, so we just have the standard estimate we were talking about last time. What I want to convince you of is that if we subtract off this thing which is a function of the state, then in expectation the additional term we're subtracting off is zero, meaning that our estimator is still unbiased. So our original estimator is unbiased, we're subtracting off this extra term, and we want to show that the resulting estimator is still unbiased. And the way we do that is by showing, the goal is to show, that this expectation is equal to zero. So that's what we're going to try to do, and if we can show that, then that justifies why we can subtract this term. Then we can start to talk about what that term should be. But first, we're just going to show that no matter what it is, as long as it's a function only of the state, this expectation is zero. So how do we do this? Well, first of all, note that on the outside there's an expectation over tau, that is, all the trajectories we might encounter by running our current policy. Okay.
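Written out, as a reconstruction from the spoken description using the G_t and b notation from the slides, the claim is that the baseline-subtracted estimator

\nabla_\theta V(\theta) \;\approx\; \frac{1}{m}\sum_{i=1}^{m}\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta\!\left(a_t^{(i)} \mid s_t^{(i)}\right)\left(G_t^{(i)} - b\!\left(s_t^{(i)}\right)\right)

is still unbiased for any b that depends only on the state, and that a good choice of b, such as an estimate of V^{\pi}, can substantially reduce its variance.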
So what I'm gonna do first is I'm just going to split it into two parts. So this is still tau. And all I've done here is I've split it into the first part which is all the way up to time step t, and the second part which is on time step, um, t, all the way to the end. So I've just sort of decomposed, that, I-I I've just written out what, um, ah, a trajectory is, and then decomposed into two parts. So I'm just decomposing the trajectory. [NOISE] And once we do that, then we can see that the baseline term is only a function of S_t. [NOISE] So we can pull it out of this inner term. Right. So we're gonna pull this out because it doesn't depend on any of these future time steps. It's independent of those. [BACKGROUND] Okay. Then the next thing we're gonna do is we're going to, uh, write out or note the fact that in this case. That all we have in this inner term here is S_t and a_t. So we can drop all the future terms. Again that's sort of the prob- the the only thing in here is the probability of the action a_time-step t, given the state t and theta. So we don't need to worry about the future states or the future actions that are taken, um, we- we're independent of those. So now, so we just pull it up, first we pulled out baseline. And now, we're going to drop those things that we don't need to depend on. [NOISE] So all we have here is an expectation over the action that's taken. Okay. And now what I'm gonna do is I'm going to read it, so what is this expectation? It's an expectation over a_t. What is the problem, you know, what is that expectation? We're just going to write that out explicitly, that depends on the policy that we have. So we're going to sum over a_t, the probability of that a_t, which of course just depends on the policy that we're following, times the derivative of log. So that is me writing out the expectation, and I'm gonna take the derivative of the log. [NOISE] So that's just gonna be the derivative with respect to the policy itself. Divided by pi of a_t, s t theta. Okay. But now we note that there's a term on the numerator and a term on the, denominator that we can cancel. So this starts to simplify [NOISE] b of S_t times the sum over a_t, just the derivative of the policy. Just canceled numerator and denominator there. And now we note that we can reverse the sum and the derivative. This is the others, that kind of critical step of this proof. So now what we're gonna do here is we're going to move the derivative out. [NOISE] Well, this is just 1, because the probability that we select some action, under our policy always has to be 1. And so now we see that we're just taking the derivative of 1. So we are trying to take the derivative of 1, and of course that's a constant so this is equal to 0. [NOISE] So that's pretty cool. So that means that we have added in this baseline. That is some function that depends on the state, and we haven't said we told me, you know, we haven't talked about all the different ways we could compute that gap or we say it doesn't matter what it is. No matter what you added there it's always unbiased. So just to check our understanding for a second if we go back to this equation, if I set b of S_t, to be a constant everywhere, is the gradient estimator still unbiased. [NOISE] Just take like one minute and talk to your neighbor, and say, so based on what I just said if b of S_t is equal to a constant, this is like a constant, for all S_t is the gradient estimator unbiased, just take like one second or one minute and talk to your neighbor. 
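Since that derivation is spread across several spoken steps, here it is compactly; this is a reconstruction of the argument rather than a verbatim copy of the board:

\mathbb{E}_{\tau}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b(s_t)\big]
= \mathbb{E}_{s_{0:t},\,a_{0:t-1}}\Big[ b(s_t)\; \mathbb{E}_{a_t}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\big] \Big]
= \mathbb{E}_{s_{0:t},\,a_{0:t-1}}\Big[ b(s_t) \sum_{a_t} \pi_\theta(a_t \mid s_t)\, \frac{\nabla_\theta \pi_\theta(a_t \mid s_t)}{\pi_\theta(a_t \mid s_t)} \Big]
= \mathbb{E}_{s_{0:t},\,a_{0:t-1}}\Big[ b(s_t)\, \nabla_\theta \sum_{a_t} \pi_\theta(a_t \mid s_t) \Big]
= \mathbb{E}\big[ b(s_t)\, \nabla_\theta 1 \big] = 0.

The first equality splits the trajectory at time t, pulls b(s_t) out, and drops the future states and actions; the rest is the cancellation of the policy probability and the fact that the action probabilities sum to one.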
[BACKGROUND] All right. So let's start with, um, everybody here thinks it's still unbiased, vote? Yes. Great. Okay, yes. It has to still be unbiased, now it's a function not even of s, just a constant. And so it's definitely a function of an s, it's a tri- trivial function where it doesn't matter what the value of s is, and so it's still unbiased. Just to note here, um, if s was a fu- if, um, the baseline was a function of state and action, do you think this proof would go through? No. No. No. Right. Because one of the things that we did at the very beginning is we moved b of S_t all the way out. And if it depended on the action too, we couldn't have done that. So this is specific to this being only a function of the state. Yes. [inaudible] functions b that do not give you an un- an unbiased estimate state. So I don't know, is there any functions of b? b is only a function of the state, all of them are unbiased. Yeah, so it is always unbiased. There could be really- I mean, just like what I put here, um, you could just put in a constant and it might not reduce your variance at all. So there's certainly unuseful definitions of a baseline, um, but all of them are unbiased. So they're not gonna affect, ah, whether or not your estimator is unbiased. They could make your estimator potentially worse if they're really bad, um, or [NOISE] they're really themselves, um, very bad estimators potentially, um, and they certainly couldn't make it better [NOISE] by choosing good choices. Okay. All right. So this ends up allowing us to define what's sort of known as the vanilla policy gradient algorithm. Um, so vanilla policy gradient operates by, we collect a bunch of trajectories using our current policy. And then for each time step inside of the trajectory we compute the return from that time step to the end. Um, and then we compute the advantage estimate. So, all right, we'll write out vanilla policy gradient. [NOISE] Okay. So vanilla policy gradient works as follows. You start up, you initialize the policy with some parameter theta and you need to start with some estimate of the baseline. Okay. So what happens with vanilla policy gradient, is for iterations i = 1, 2 dot, dot, dot. We're gonna collect a set of trajectories using your current policy. And then for each time step, for t = 1 dot, dot, dot, the length of your trajectory i, then you do two things. You compute the return, which is just equal to the sum of all the returns till the end of the episode. And then you compute the advantage A-hat_i_t, which is just equal to this return. I'll parameterize it with i just to show that that's the i'th trajectory. Um, [NOISE] - b of S_t. So just to note here for a second, um, this is a return of the sum of rewards till the end of the episode. This is a baseline which is like a fixed function. Um, so this could be, you know, a deep neural network, this could be a table lookup. Ah, but this is a function and you input the state at time step t and trajectory i [NOISE] and you output a scalar. So that's what the baseline is doing there. And then wha- um, in vanilla policy gradient we do is, then we refit the baseline. So in this case, the baseline is gonna be an estimate of the average of the Gs [NOISE]. So in vanilla policy base, bu- um, vanilla policy gradient. What we do is the next step is, we sum over all the trajectories we've got so far. We just sum over all the time steps. Um, we do basically, just a least squares fit. [NOISE] So note this can be done with like su- this is supervised learning. 
We just have some a baseline function that can be parameterized [NOISE]. I'll make sure to put an i there. So the baseline function that can be parameterized with some totally other weights or parameters, um, and then we have our returns g that we've seen so far and we just try to minimize that distance. And so then the baseline is really, er, representing the expected sum of rewards. Um, note that this is in some ways a little bit funny, right? Because we're using all of our data that we've ever seen. So this can either be done over, um, all the data you you've ever, ever seen or it can be done over just the most recent round. There's lots of choices for how to do the baseline. Um, I- if you use all the data you've ever, ever seen, um, then which is what this would do. Um, then you could be averaging over lots of different policies, because you've got data from different policies. If you just do this over the most recent round, then you're just gonna be getting an estimate of essentially V_Pi i- V_Pi i, like the- the iteration. This is gonna end up approximating. All right. If you ju- are only doing it over, um, the trajectories, if you don't do this, but you sum it over the trajectories for this round. I guess the way I've written this is a little bit unclear. So let me see if I can make that a bit clearer. So let's say that we have a- a- a- we have d trajectories. So if we do it this way, then that's exactly equal to V_Pi i. So i now is the iteration, d is the trajectories we've gathered just on iteration i. So this is only averaging over, um, the policies for this particular- the- the trajectories for this particular policy. I said a lot a bit out of order. Does anybody have any questions about exactly what we're doing in this case? So normally, in this situation there's a number of series of rounds. This is for each- So we're gonna have a series of pi's, basically. And then for each policy, we're gonna have a set of trajectories. And for each trajectory we have a set of time steps. And what this is saying here is average over all the trajectories you have for the current policy, and fit the baseline to that. All right. And then once you have that, so this gives you the baseline. Um, and then we do update the policy using your gradient. And it's gonna be a sum of terms that include these derivatives with respect to the policy and your advantage function. Okay. So you're gonna take in this advantage function that you computed over here. And then you're gonna be multiplying it by what was the probability of the actions given the state and theta. The log derivative of that. And then we plug that, this gr- this estimate of the gradient, into something like stochastic gradient descent or ADAM or something else. So this has been vanilla policy gradient. [NOISE] And what we're going to see during the rest of today is just a number of different slight variants on this basic template. So I'll get to you in just a second, but I just want to emphasize that if you- if you- when you walk away from unders- like from what I'd like you to understand, from the- the main idea for policy gradient, is essentially what's on the board right now. Is that, what we're doing is we are running, we take one policy, we get a bunch of data from it, and then we have to fit something like an advantage, and there's going to be different ways to compute that. We could end up doing bootstrapping, to do some sort of TD estimate, or we can just directly use the returns. 
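Here is the template that was just written on the board, as a rough sketch rather than the course's reference code. The policy, baseline, and collect_trajectories objects are assumed interfaces standing in for whatever function approximators and environment you actually use.

import numpy as np

def returns_to_go(rewards, gamma):
    """G_t = r_t + gamma * r_{t+1} + ... for every time step t of one episode."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return list(reversed(out))

def vanilla_policy_gradient_iteration(policy, baseline, collect_trajectories,
                                      num_trajectories=20, gamma=0.99, lr=1e-2):
    """One iteration of the vanilla policy gradient template.

    policy: object with params, sample(s), and grad_log_prob(s, a) (assumed interface)
    baseline: object with value(s) and fit(states, targets) (assumed interface)
    collect_trajectories: helper that runs the current policy and returns a list of
        trajectories, each a list of (s, a, r) tuples (hypothetical)
    """
    trajectories = collect_trajectories(policy, num_trajectories)
    grad = np.zeros_like(policy.params)
    all_states, all_returns = [], []
    for traj in trajectories:
        states, actions, rewards = zip(*traj)
        G = returns_to_go(rewards, gamma)
        for s, a, g in zip(states, actions, G):
            advantage = g - baseline.value(s)        # A_hat_it = G_it - b(s_it)
            grad += policy.grad_log_prob(s, a) * advantage
        all_states.extend(states)
        all_returns.extend(G)
    baseline.fit(all_states, all_returns)            # least-squares refit of the baseline
    policy.params += lr * grad / len(trajectories)   # or hand the gradient to SGD / Adam
    return policy, baseline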
We often use a baseline, um, that we're fitting over time. And then we're going to update the policy, and have to choose some step size with respect to the gradient. So this is sort of the most important thing. Is to say, hey, there's this basic template for almost all policy gradient algorithms, I can choose different things to kind of plug in here, and I can choose different ways to take my step sizes. Um, and that's going to define a whole bunch of the different policy gradient algorithms that you see. So what function are we using to represent i so that we can take its gradient? Great question. So, um, is asking, you know, how- how do we represent, um, the policy so we can take its gradient. We have to be able to take this here. Um, we talked briefly about this last time, um, but it was also on the board near the end, Gaussians work, Softmaxes both are- are- are- are- both of those you can analytically take the derivative and often we use deep neural networks or shallow deep neural networks. Yes. I saw a question back there? And name first, please. Ah, I was wondering if there's any, ah, issue with like non-states- if we're getting like b of- the baseline with the neural networks, there's like non-stationary issues with that? Yeah, it's a great question. So, ah, the question I believe is to say, um, you're asking me about the baseline, right? So like how- are there non-stationary issues with that? Empirically what a lot of people, including myself, have wondered is, um, we have all this other data. So when we're estimating the gradient right now, typically we're running the policy, just you know, for D trajectories and then we're estimating a gradient with that. Um, and could we maybe use other data to do that, but then ends up being off policy, because then you're mixing together data you've gathered from different policies. Empirically, I think people often end up using only the data from the current run, and then you're essentially just estimating V_Pi, with this and you're not necessarily mixing data for many other policies. Empirically, it seems like often, it's really helpful to be on policy. A- and you could reweight the old data, ah, but that introduces variance. And so empirically often, it's best. I think the jury is still out on it. There's ongoing research on it. We've looked at it, Sergey Levine's group has looked at it, but most of the time using the on-policy data makes sense. Yeah, is there another question? And name first, please. I just want to confirm, so when you saying refit baseline, we're setting baseline equal to the value that minimizes the function error. [NOISE] Perfect. Yeah, for error, if we do this if we're only averaging over the data points that we have for this current policy, when we do this, it's essentially- it- it's essentially the same as when we were doing Monte Carlo policy evaluation. So this is almost exactly like Monte Carlo policy eval. Where we have a fixed policy and then we have a parameterized function to represent it. Um, and then we just want to fit those parameters so we can best estimate the policy value using Monte Carlo. Okay. So I'm going to- there's a little bit of information about auto diff you can check it in the slides. Um, uh, the things we're going to go through next, um, is thinking about this aspect as I was saying, and then we'll talk some about this. So this part is going to be where we think about monotonic improvement. Because once we have a gradient, we have to figure out how far to go. 
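Going back for a moment to the question about how the policy is represented: as one concrete instance of the "softmaxes work" answer, here is a sketch of a softmax policy over a small discrete action set whose log-probability gradient is available in closed form. The feature function phi is a made-up placeholder; Gaussian and neural-network policies play the same role, with the gradient coming from automatic differentiation instead.

import numpy as np

class SoftmaxPolicy:
    """pi_theta(a|s) proportional to exp(theta[a] . phi(s)), for a discrete action set."""

    def __init__(self, num_actions, num_features, phi):
        self.params = np.zeros((num_actions, num_features))
        self.phi = phi                          # phi(s) -> feature vector (assumed)

    def probs(self, s):
        logits = self.params @ self.phi(s)
        logits -= logits.max()                  # for numerical stability
        p = np.exp(logits)
        return p / p.sum()

    def sample(self, s):
        return np.random.choice(len(self.params), p=self.probs(s))

    def grad_log_prob(self, s, a):
        """d/dtheta log pi(a|s): features on the chosen action's row, minus the expectation."""
        p, x = self.probs(s), self.phi(s)
        grad = -np.outer(p, x)
        grad[a] += x
        return grad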
An- and can we guarantee, um, depending on how far we go, whether or not we're going to get a monotonic improvement. Um, and this part is about sort of giving better estimates of our gradient, ideally with less data, um, and reducing the variance of it. So they're both important, they're doing slightly different things. All right. So let's talk about, ah, could we move this up, please? Thank you. Okay. So like we sort of started talking about before, um, well, let's- let's talk about first the baseline. So how should we choose the baseline? Um, one thing that we can do for the baseline, is just to- like what that what we're seeing there, which is an empirical estimate of V_Pi i. So we could say, in general, we wanna just have- use V_Pi i as a baseline. That means we have to compute it somehow. And the way we estimate that could be from Monte Carlo or it could be from TD methods. All right. So what we've seen so far, is using these as a- so- so I guess just to be clear here, there's a couple different places we're going to be able to maybe switch between doing Monte Carlo returns and doing something TD-like. One is here, and another is our baseline. Okay. So we have a baseline function here that we're sort of subtracting off. And we also have a G_t prime here, okay? And so if we think about our general equation again, so what we have in this case is we have delta theta, v of theta. This is our parameter, this is specifying our policy parameters. And we've said this is approximately equal to 1 over m sum over i = 1 to m of some reward- Well, I'll put this inside. Sum over t = 0 to t - 1, of,- I change my mind. Okay. I'm going to put this out here because it's going to end up being sort of a function we can use in lots of different ways. Okay. So this is our basic equation we've been working with. We've said the derivative of the value with respect to our policy parameter is approximately as sum over m trajectories, where we've sampled those trajectories from that policy, times the total reward we've gotten on that trajectory, times the sum over all time steps of the derivative of the policy with respect to, um- given the action we took in the state we were in. All right. And we said it was very noisy, um, but unbiased. And now we can think of changing this as a target. So this here was an unbiased estimator of the value of the policy. And now we can think about substituting other things in. All right. So we can imagine doing all sorts of things here. We cou- we could do, um, you know, TD or MC methods. If we do it with a value function or if we try to explicitly compute a value function or a- or a state action value function, then we typically call this a critic. So a critic computes V or Q. So when we talk about actor-critic methods, that's when we have an explicit parameterized representation of the policy, and we have an explicit generally parameterized representation of the value or the state-action value. And if we have that, then we can imagine using that to change what our target is. I want to emphasize here that so, actor-critic methods combine these two, combine policy plus critics. And probably the most popular one of this is A3C, which is by Mnich et al, this was introduced in 2016 ICML. And it's been hugely popular. This is a version for deep neural networks. Um, but actor-critic ideas themselves have been around for a lot longer than that. But A3C is one of the most popular versions of this for deep neural nets. All right. 
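To restate the board equation from this stretch in one place, reconstructed in the notation used so far:

\nabla_\theta V(\theta) \;\approx\; \frac{1}{m}\sum_{i=1}^{m} R\!\left(\tau^{(i)}\right) \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta\!\left(a_t^{(i)} \mid s_t^{(i)}\right),

and the point of the critic is that the return term is a slot: instead of the raw Monte Carlo return R(tau) or G_t, you can plug in Q_w(s_t, a_t) - b(s_t) from a learned critic, trading some bias for lower variance.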
So how do we do sort of policy gradient formulas with value functions? What you could do instead here, I shall put on this side. What you could do is, you could have almost the same equation as we had before. So derivative with respect to the value function is equal to an expectation with respect to the trajectories that you might encounter, the sum over all the time steps in that trajectory, times the derivative with respect to your policy parameters, times Q of S_t, w - b of S_t. So instead of having your Monte Carlo estimates in here, you could plug in your estimate of the Q function. And another way to represent that here is if we think this is an estimate of the value, and this is basically our advantage function again. But it could be our- so we had an advantage function over here. You define that advantage function here, but this one was a function of the Monte Carlo returns for that episode. This is a different advantage function which is the Q function which- where this could be maintained by a critic, minus your baseline, which is an estimate the value function. So they look pretty similar, but you can plug in different choices here. And these are going to have different trade offs. So the Monte Carlo estimate of the return is an unbiased estimate of the value of the current policy. This is going to be biased generally, but lower variance. All right. So I also want to emphasize here that when we think about, um, kind of getting this estimator, which we often say the critic is going to compute this estimator, um, It doesn't have to be only either a TD estimate or a Monte Carlo estimate, but you can interpolate between these. It's often known as n-step returns. So what does that mean? So let's call- let's write this in a slightly, well, I'll call this here. So let's put this is a hat. Okay? Just to note that you can think of this as kind of just a function. It's going to be an estimate of your state action value function. And so what we could have is we could have an estimator. I could have estimator of the value from time-step t onwards, which is equal to the actual, this is the actual one we got on time-step i, so I'm going to call this. I'm going to call this sort of i, 1. And this is going to be then the actual reward you got on time-step t in episode i, + gamma V of S_t + 1 i. So this should look almost exactly like TD(0) style estimates. We talked about this before. So this says, I got- I look at the actual, immediate one-step reward I got and then I bootstrap. I add in the value. So this would be- I get this value function for my critic. And I would plug that in and then that would be my target. That then I would use in this equation. Okay. So that would be one thing you do- you can do. So we've seen this, and we've seen a lot of this, which I'm going to call the infinite or Monte Carlo version. And this one is you sum over all t prime, all the way to the end of the episode of gamma 2. The t prime minus t times r t prime. So this is the Monte Carlo return, where we just sum up, we don't do any bootstrapping, and we sum up all the rewards at the end of the episodes. But as you can see here, there should be, you know, there's probably some way to interpolate between these two. And these are often known as n step returns. And so for example, you could do this, you could say, I'm going to add in the reward at time step i and the reward I got at time step t + 1. And then I'm going to bootstrap. So this is just sort of one of the estimators that are in between these two extremes. 
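The estimators just described are easy to state in code. A minimal sketch, assuming the rewards of one episode are stored per time step and values[k] is the critic's estimate of V(s_k) for that episode; n = 1 gives the TD(0)-style target and a large n recovers the Monte Carlo return:

def n_step_return(rewards, values, t, n, gamma):
    """n-step return from time t: n discounted rewards, then bootstrap with the critic.

    rewards: list of rewards r_t for one episode
    values:  list of critic estimates V(s_t) for the same episode (assumed)
    """
    T = len(rewards)
    horizon = min(t + n, T)
    g = sum(gamma ** (k - t) * rewards[k] for k in range(t, horizon))
    if horizon < T:                              # bootstrap only if the episode continues
        g += gamma ** (horizon - t) * values[horizon]
    return g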
One is you only take in one step of reward, another one is do you sum up all the rewards, and then there's a whole bunch of interpolation you could do between those. Why would you want to do that? Well, this one is generally going to be somewhat biased, but low variance. This is going to be unbiased, but really high variance. And there's no reason to assume that the best solution is on either of those two extremes. And so you could interpolate between a TD estimate and a Monte Carlo estimate. And all of these just form returns that then you could subtract off a baseline for too. So traditional and all this would probably be sort of hyper-parameter that we can choose through validation or cross-validation. Right. Is that what people do here? Is that kinda too computationally intensive? So you just have to pick something. Question is if we were doing standard machine-learning, this would be just considered as some sort of hyper parameter. You could turn this into n and you would decide like how many steps do you do- do people do. And the question was do- do people do that in reinforcement learning or is it that considered too expensive? You certainly could. I- I think it's an interesting question. I feel like the tricks that people do in this case, [NOISE] I think I probably see more of this but it varies [NOISE] more using the TD(0), doing a lot of bootstrapping, but it probably depends on your application domain. Another thing that it would likely depend on is if your domain is really Markov or not. So this case this is still working, and this is giving you a real estimate of the return of your policy even if your domain isn't Markov. This case you're making a much stronger Markovian assumption. So you also might want to make- do different things depending on your domain. And also how expensive it is to collect data. All right. So this gives us sort of a different way to plug things in to that vanilla policy gradient algorithm I put over there. So you could plug in these sort of targets instead, over there to trade off between bias and variance, when you're doing this estimate of the gradient. So what this is doing here is it's changing what our targets are, and it's changing how we're computing our gradient [NOISE]. But then the next thing I wanna talk about is this part. Which is once we've actually got our gradient, however we've chosen to get it. We have to actually figure out how far to go along that gradient. [NOISE] So why might this be important? Well, it might be important because, this is just a local approximation. You're giving your local estimate of the gradient. Yeah. How often do you update the parameters of the critic? The question was how often do you update the parameters of the critic? It's a great question, again, it depends. So you can either- you can do this often asynchronously. So you can have different threads and different networks for your critic and for your policy and principle, you could just be updating your critic all the time like, you know, you can be using DQN for this and doing lots and lots of backups. In general, it depends, I think you'd have a schedule. Yeah. So often you might do something like 10 or 100, it really varies by application, um. Uh, but there's no reason that the critic needs to be updated only on the same schedule at which you're updating the policy. And doing it asynchronously often makes a lot of sense. All right. So if we think about what's happening here, here's our parameterized policy. Here's our value. 
We have some crazy function. Okay. And then we are computing our gradient. And this gradient, locally is pretty good. So kinda round here things like linear and things look pretty good. But of course, as we get further out like here, it's gonna be bad. Like if we- if we try to follow the gradient too far, we're going to get an estimate that's very different than the real va- real value function. So when we're taking step sizes in this case, it ends up being important to consider this fact of sort of how far out do you want your step sizes to be. Let me just get this back to one [inaudible] Okay. So we want to figure out how far we should go in the gradient and that's important. Now, you might say, okay this is always true. Right? Like you always need to be careful when you're doing gradient descent or ascent in any supervised learning problem. Whenever you're using stochastic gradient descent. Of course, you don't wanna go too far along your step size because you could overshoot and you're using this linear approximation and it's bad. Why does anybody- does anybody have any sense about why this might be even worse in the reinforcement learning case? Why might it be even more important to think about this step size. And it has to do with where the data comes from. Yeah. So when you have a bad policy that affects the data you collect, and you might just go down a bad road. Sure. So she's exactly correct. In a supervised learning case, your data is being generated by an IID distribution, it doesn't matter what choices you just made for your stochastic gradient descent. In RL that is determining the next policy we're using inside of our iteration to gather data. So we're not going to, you know, if we take a really bad, if we get really, really bad policies, we just may be getting no data towards the actual optima of this function. So it's even more important to sort of carefully think about where we're going along here and ideally hopefully get monotonic improvement. So this is- this is the really, it's very important in the reinforcement learning case, to think about how we're doing this step size because this determines the data we collect. pi and therefore data. And one version of, um, one of my colleagues talks about a similar problem. He sort of has the picture of the Roadrunner running off the cliff, right? And like them you if you're in a part where your I- your policy is just really really bad. You may get no more useful data. Then you can't get a good estimate there of the gradient and then you're just stuck. You might get in a really really bad optima. Okay. So we'd like to think carefully about this part. So one thing that we could do, is do something like line search. So we're talking about right now sort of how do we do, so how to do step sizes. So one thing we could do is try to do some sort of line search along the gradient. [BACKGROUND] And this is, um, okay but it's a little bit expensive. So it's simple but it's expensive. And it tends to sort of ignore where the linear approximation is good. So we'd like to do better than this. Okay. So now we're gonna go back to that point that I mentioned at the start which is what we'd really like to be able to do is when we're doing this updating we would like to ensure monotonic improvement. And so can we kind of choose our step sizes in a way or choose how far along the gradient to go in order to achieve monotonic improvement. So what our goal is gonna be is we'd like to have it. 
So that V pi of i + 1 is greater than or equal to V pi i. And we're hoping to achieve this by changing how big of a step size we take. All right. So let's think about what our- our objective function is again. Um so we're getting- we have our value, our parameterized value. So V of theta is equal to the expected value under our policy that's defined by theta of just the sum over t = 0 to infinity of gamma t_r S_t a_t under our policy. Okay. And this is where we just sort of look at the series of states we get under our policy. So that's our basic equation here in terms of expressing the value of a parameterized policy. And what we would like to do here is we would like to get a new policy that has a better value. But the problem is that we have samples from an old policy. So when we're doing this we're gathering policies with pi i and then we're trying to figure out what our pi i + 1 should be. So this is gonna fundamentally involve. Um so we have access to- we have access to trajectories that are sampled from pi of theta. And we now wanna sort of predict the value of v of pi of theta. I'll put pi i, i + 1. So we'd like to sort of now figure out what a new value would be if we update these, update these parameters in some way and take like a max. You know we'd like to figure out what the new parameters are. But this is fundamentally an off policy problem because we have data from our last policy and we wanna figure out what our next policy should be. Okay. So what we're gonna do is we're gonna first re-express um the value of our policy in terms of the advantage oh- the value of our new policy in terms of the advantage over our old policy. So I'm gonna move down to vanilla policy gradient for now. [NOISE] Okay. So what we have is we have V of Theta tilde. So that could be like our new parameterized policy is gonna be equal to the value of our old parameterized policy. So whatever we had before plus the following. The distribution over the states and actions we'd get if we were to run our new policy. Now we don't know that. But let's ima- let's ignore that for a second of a sum over t = 0 to infinity gamma to the t, the advantage pi. Okay. So this- this just generally holds. Um, this doesn't have to do anything necessarily with being um, parameterized. This is just saying the value of any policy which is here parameterized by pi tilde is equal to the value of another policy plus the sum over the states and actions you'd reach under your target policy of the advantage you get of taking this new policy over the old policy. Okay. So that um, that just expresses how we can say what the va- how uh, the value of a new policy relates to the value of the old policy. It's exactly the same as the old values policy plus the advantage you'd get if you were to run the new policy and look at the state action distribution you'd encounter. Yes? Should the subscript be Pi tilde on the advantage? Should the subscript be Pi tilde on the advantage? [OVERLAPPING] policy? Yeah. So we're doing in it- let me write this up thing. So [inaudible] question is a good one, let me just write this out to be for- for long. Okay, so we've got V of theta plus sum over all the states and we're gonna use mu pi tilde of s. So remember this was the stationary distribution. Um we use this to denote the stationary distribution over states that we'd reach um if we were to run our new policy which is parameterized tilde. This is theta tilde. Okay? Um times the advantage function. Okay. 
So what this is saying here is this S_t and a_t are under our desired policy. And the advantage here is using the old one. Okay? So this is allowing us to compare. So what does this do? It's allowing us to compare s_a S_t a_t minus Q of S_t our old policy. And this is under- so it's allowing us to compare how much better is that if we take like our new action. Okay? All right. But one of the problems of this is we don't know this. Yeah? Sum over from t = 0 to infinity somewhere? Oh, thank you um. And answering- yeah. Yeah. Yeah. And also again [OVERLAPPING] So the question is is [inaudible]? No. And thank you for make me uh allowing me to clarify that. So in this case what we're doing is we're taking um an expectation over all time-steps and this is saying over the trajectories that we'd get to under our new policy. I've now reformulated and said well we have a stationary distribution over states. If we look at what is the probability of reaching those states and then we weight that by the advantage. So we've went from a time averaging to a state averaging. Does that makes sense? So we can either think of our value function or averaging our value function across time-steps where we can think here is averaging across all the states and what is for each state. What is the relative value you get by following your new policy versus your old policy? [inaudible]. Oh sorry. Thank you. Then that- those are typos. Okay. All right. So this would be under- so we look at the states that- for each state what is the probability we'd reached that state under our new policy and what is the relative advantage? The thing that you're pointing to should not have [inaudible]. Oh sorry. We have a tilde over here, this is our value of our original one plus this we get the advantage term over all the states. Yeah. Is there a difference between pi theta tilde and tilde pi? I was just doing this here to make it clear. Pi L- say Pi tilde is also- Pi tilde is parameterized by, this is a policy parameterized by theta tilde. Yeah. I'm just saying it's like in the expectation they wrote in the first slide [inaudible] Okay. We can vote on that side. But either- I- I wanted to just be clear um, often this notation goes back and forth between using do you wanna make the policy explicit as opposed to just the parameters. Um I think it's more clear to have a policy um and parameters here but often we also use the- you can just directly parameterize the value function in terms of theta as opposed to V pi tilde of theta. But I'm gonna just use any of these is fine. Is anybody confused about what this is? I mean if it's easier I can just go like this. Okay. So I can just remove all of this. Okay. So this is just whenever I say pi tilde that's the policy that's parameterized by the new- that's your new policy. Okay. And that's this. I know it's a lot of notation. Does anybody have any questions about that notation last? Yeah? Yeah. Name first please. So I guess I'm a little bit confused mostly just because it's a little bit different from the slides. And I'm just wondering- Shouldn't be different by the slides but I'm trying to go- [LAUGHTER] I was wondering sort in this case do we sum over the possible actions for a given state or as we've noted here do we assume that we take a single action per uh by using the policy which is what we have here on the board? You are right. I forgot that right. I'm gonna go. Repeat that again. Okay. Okay. 
Let's say V of theta tilde I'll go with the same exact same notation as the slides is equal to V of Theta plus sum over states or stationary distribution over Pi tilde of s. This is the distribution we get. Um this is the discounted weight of distribution under our target policy. Under pi tilde. Okay. Sum over A. Okay. So this is the weighted affair, this is the weighted distribution under the states. We went from the time domain to thinking about the distribution over states times looking at all the actions we might take under our target policy and the relative advantage of each of those over our previous policy, [NOISE] okay? And this should look very, very similar to imitation learning in certain ways, right? Like, so we're again sort of thinking about, um, instead of thinking about subbing rewards over time steps, we're thinking about what is the stationary distribution we might get to under a new policy and how that compares to the stationary distribution we would have had under our old policy. Um, and what we're looking at so far is, we're looking at the [NOISE] stationary distribution under our target policy. The problem is, we don't know this, so we can't calculate this. This is just an expression. Um, uh, but this is unknown [NOISE] because we don't have any samples from pi tilde, we only have samples from pi, okay? So we can't compute this. Why would we, and just to go back, why are we trying to do any of this? We're trying to do this because when we do vanilla policy gradient, [NOISE] we're gonna be trying to figure out a new policy at a, that has a value that's better than our old policy. What we did here is, we tried to estimate the derivative out of the current pol- of the current policy, um, but we don't know anything yet about the value once we take that step. And so what we're trying to do here is to say, well, can we somehow understand what the value will be of a new policy before we execute it? And we're gonna do that by trying to relate it to what is our value of the previous policy plus some sort of distance between the old policy and the new policy, ideally computed in terms of things we can actually evaluate using our current samples. That's where we're trying to go to, okay? [NOISE] Right. So we have this nice expression, but we can't compute it. So we're gonna make up a new objective function, okay? We're gonna do this one sort of backwards because we're [NOISE] gonna make it up, and then, we're gonna show why it's a good thing to do. So what are we trying to do? We'd like to use something like this. If we had this, then we could compare the value of the new policy to the value of the old policy. The problem is, we don't have this because we don't have the, uh, the stationary distribution under the new policy. So what we're gonna do instead is, we're going to define an objective function L_Pi, [NOISE] which is as follows. It's the value of your old policy plus a sum over all your states, the stationary distribu- discounted distribution of your old policy. This is where it's different. So this is [NOISE] the old, your current policy, okay? And then, the rest of the expression looks the same. [NOISE] Now, notice, we can compute this, okay? 
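Written out, the exact identity in state-averaged form, and the surrogate that replaces the unknown distribution, look roughly like this (μ denotes the discounted state visitation distribution, matching the board):

```latex
V(\tilde{\theta}) \;=\; V(\theta) \;+\; \sum_{s} \mu_{\tilde{\pi}}(s) \sum_{a} \tilde{\pi}(a \mid s)\, A_{\pi}(s,a),
\qquad
\mu_{\tilde{\pi}}(s) \;=\; \sum_{t=0}^{\infty} \gamma^{t}\, P(s_t = s \mid \tilde{\pi}).
```

The surrogate objective defined next keeps everything the same except that the unknown distribution under the new policy, μ_π̃, is swapped for the visitation distribution of the current policy, μ_π, which we can estimate from the data we already have:

```latex
L_{\pi}(\tilde{\pi}) \;=\; V(\theta) \;+\; \sum_{s} \mu_{\pi}(s) \sum_{a} \tilde{\pi}(a \mid s)\, A_{\pi}(s,a).
```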
This is, um, we could just average over all the trajectories we have for the current episode, we could estimate our stationary distribution from our current data, we could know for a new policy what its action or, so if someone gives me a new policy pi, I could evaluate this, and I could also evaluate the advantage, okay, because this advantage is defined only in terms of my previous Pi. So as long as if I have a, uh, a representation of the state action value function for my old policy, I could evaluate this. So now, all of this is evaluatable. [NOISE] This might not be a good thing to do, but we can compute this. [inaudible] and then, like, giving that to pi itself? Yes. [inaudible] in terms of sort of notation where, like, yeah, I'm using pi tilde interchangeably with, you can, they- they're just some new parameters for computing. So our policy is always parameterized by some set of parameters. You can either think of that as just being a new policy, or you can think of that as the new parameters, either is fine. Uh, so could quickly explain why [NOISE] if you're given, um, the pi tilde, why you won't be able to calculate the, uh, new [OVERLAPPING]? You don't have to. So, um, so the question is, if you're given pi to, uh, the new policy, why can you not compute mu? It's a good question because you don't want any data from that. So someone has given you a new polic- [NOISE] the- there could be ways to approximate [NOISE] this, but the only data that you have right now is from [NOISE] the current old policy, [NOISE] from Pi. So you've run this out M times, you've got M trajectories, which are M trajectories gathered under your old policy pi, okay? You don't have [NOISE] any data from pi tilde. And in general, if pi tilde is not the same as pi, you're going to get different trajectories. So you don't have any direct estimate of this. Does that makes sense to everybody? So if we go back to [NOISE] the vanilla policy gradient, what do we do? We had a policy pi_i, we ran it out, and we got D trajectories from that pi_i. We could use that to estimate that mu. That just gives us on policy data of what are the states and actions we experience when we're [NOISE] following Pi_i. We don't have any data of Pi_i + 1 yet, we haven't run it. Okay. Yeah. Uh, just to be clear, to get an estimate of the stationary distribution for the old policy, [NOISE] uh, you basically [NOISE] look at all the data that you have, uh, like, all the trajectories, and see basically what fraction of the time you're spending at one state. Okay. Exactly. So [inaudible] is exactly right. [NOISE] How would you go from this just raw data, these te- D trajectories to mu? You could just count, you know. I mean, in general, if you're in really high dimensions, you want to do something smoother than that, you want to approximate the density function. Um, but essentially, you can just directly, i- in a type of so, you could just count and [NOISE] just like, how many times did I get to this state and take this action, and then, that would give you a direct estimate of, um, the mu's. [NOISE] In general, you're gonna want some sort of parametric function in high dimensions, but you couldn't fit that using, you could, you could imagine this is parameterized itself, and you can fit that using your existing on policy data. Uh, intuitively, does this work because we assume that distribution of the states [NOISE] won't change too much between policies? Oh, yeah. The question is, intuitively, why does this work? 
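A minimal sketch of the counting idea mentioned here, assuming a small discrete state space (in high dimensions you would fit a parametric density instead, as noted above); the trajectory format and the discounted weighting are assumptions for illustration:

```python
from collections import defaultdict

def estimate_visitation(trajectories, gamma=0.99):
    """Estimate the discounted state visitation distribution mu_pi by counting.

    trajectories: list of trajectories collected under the current policy pi,
                  each a list of (state, action, reward) tuples.
    Returns a dict mapping state -> normalized discounted visit weight.
    """
    weights = defaultdict(float)
    total = 0.0
    for traj in trajectories:
        for t, (s, a, r) in enumerate(traj):
            w = gamma ** t        # weight of visiting state s at time t
            weights[s] += w
            total += w
    return {s: w / total for s, w in weights.items()}
```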
I've not told you why this works, I've just said this is something we could do and that it's computable. [NOISE] Um, and I haven't told you yet why this is a good thing to do. But we're gonna show that, um, that this is going to allow us to get to something which is a lower bound, um, and then, we can improve on those lower bounds. Okay. [NOISE] So just a quick thing to notice here, which is, if you do this, if you do L_pi of pi, [NOISE] so that's just what is this objective function if you plug in the old policy there, this is just equal to [NOISE] V of the theta, [NOISE] okay? So if you evaluate [NOISE] this function under the same policy, it just gives you the value, okay? All right? [NOISE] All right. So conservative, [NOISE] I'll just briefly, we'll have to continue this further later, but, um, [NOISE] so we can use this to do what's known as conservative policy iteration. So [NOISE] conservative policy iteration, um, and the intuition here is, let's just first just [NOISE] start with mixed, um, a mixed policy. [NOISE] So imagine that you have a new policy, which is a mix of an, of, um, an old policy and something different. So you have 1 minus alpha times your old policy plus alpha times some new policy pi prime, okay? So that just means, you take some ol- your current existing policy and you [NOISE] mix in something else, okay? Then, in this case, you can guarantee that the value [NOISE] of your new policy is greater than or equal to, if you'd to take this objective function here and you evaluate it with your new policy, so you take your new policy, you evaluate under your old policy, you plug that in, that's computable because you have data from your old policy minus [NOISE] 2 epsilon gamma, 1 minus gamma squared alpha squared, okay? So you can lower bound [NOISE] the value of your new policy in terms of whatever this objective function is when you compute it minus this expression, okay? Um, I just wanna close with two other thoughts, which is, note again that if you plug in alpha = 0, [NOISE] that means that pi new is the same as pi old, and this goes to 0, [NOISE] which means that, and since we know that this is equal to that, that just says that your new policy has to be greater than or equal to your old policy. And since the same, their policies are all the same, this is tight. Okay. So we, um, this is a, a little bit different than we expected because of, um, [NOISE] the technical challenges with PDF. Uh, so what I'll just close with here is that the next steps we'll go from this is to show we can use this to essentially derive a lower bound on the new value function. And we can show basically that if we improve across the lower bounds, that we're guaranteed that the actual value function is monotonically improving. [NOISE] So we will go through that. Um, I haven't decided yet whether we'll go through that on Monday because that's [NOISE] the midterm review or if we'll wait on that until the following week after the midterm. Um, the policy gradient, uh, homework won't be released until after the midterm. So we have a bit more time for that. Um, and I'll go through also, like, the main takeaways with policy gradient stuff when we conclude this part. Thanks. |
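In symbols, the conservative policy iteration statement sketched at the end is roughly the following (this follows Kakade and Langford's result; ε bounds the per-state advantage of the mixed-in policy π′):

```latex
\pi_{\text{new}}(a \mid s) \;=\; (1-\alpha)\,\pi_{\text{old}}(a \mid s) \;+\; \alpha\,\pi'(a \mid s),
\qquad
V^{\pi_{\text{new}}} \;\ge\; L_{\pi_{\text{old}}}(\pi_{\text{new}}) \;-\; \frac{2\,\epsilon\,\gamma}{(1-\gamma)^{2}}\,\alpha^{2}.
```

As noted above, at α = 0 the penalty term vanishes and L_{π_old}(π_old) = V(θ), so the bound is tight when the new policy equals the old one.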
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_8_Policy_Gradient_I.txt | All right. We're gonna go ahead and get started. Um, homework two, it should be well underway. If you have any questions feel, feel free to reach out to us. Um, [NOISE] project proposals, if you have questions about that, feel free to come to our office hours or to reach out, um, via Piazza. Somebody have any other questions I can answer right now? All right. So today, we're gonna start- this is a little bit loud. Um, today we're gonna start talking about policy gradient methods. Um, policy gradient methods are probably the most well used in reinforcement learning right now. Um, so, I think they're an incredibly useful thing to be familiar with. Um, [NOISE] whenever we talk about reinforcement learning, we keep coming back to these main properties that we'd like about agents that learn to make decisions about them, you know, to do this sort of optimization, handling delayed consequences, doing exploration, um, [NOISE] and do it all through statistically, and efficiently, in really high dimensional spaces. Um, and what we were sort of talking about last time in terms of imitation learning was sort of a different way to kind of provide additional structure or additional support for our agents, um, so that they could try to learn how to do things faster. Um, and imitation learning was one way to provide structural support by leveraging demonstrations from people. And we've seen other ways to sort of, um, encode structure or human prior knowledge, um, when we started talking about function approximation. So, when we think about how we define q, like when we define q as s, a, and w, where this was a set of parameters. We were implicitly making a choice about sort of imposing some structure in terms of how we are going to represent our value function, and that choice might be fairly strong like assuming it was linear. So, this is sort of a quite a strong assumption, or it might be a very weak assumption like using a deep neural net. And so, when we specify sort of these function approximations, and representations, we're sort of implicitly making, uh, uh, choices about how much structure and how much domain knowledge we want to put in, um, in order for our agents to learn. So, what we're gonna start to talk about today and we're gonna talk about this week is policy search which is another place where it can be very natural to put in domain knowledge. I mean, we'll see that in in some robotics examples today, and it can be also a very efficient way to try to learn. So, as I was saying, before we sort of we're approximating where we're doing model-free reinforcement learning, and when we started to try to scale up to really large state spaces. Um, I've been having several different people ask me about really large action spaces which is a really important topic. We're not gonna talk too much about that in this quarter, but we will talk a little bit about when your action space is continuous but low-dimensional. But we have started to talk a lot about when the state space is really high-dimensional and, and, and really large. And so, we talked about approximating things, um, uh, with some sort of parameterization, like whether it will be parameters Theta or we often, or we often use w, but some sort of parameterization of the function. 
So, we used our value function, um, to define expected discounted sum of rewards from a particular state or state action, and then we could extract a policy from that value function or at least from a state action value function. And instead, what we're gonna do today is just directly parameterize the policy. So, when we talked about tabular policies, our policy was just a mapping from states to actions. And in the tabular setting, we could just look- do that as a lookup table. For every single state, we could write down what action we would take. And what we're going to do now is to say, "Well, it's not gonna be feasible to write down our table of our policy, so instead what we're going to do is parameterize it, and we're gonna use a set of weights or Thetas." Today, we're mostly gonna use Thetas, but this could equally well think of this as weights. Um, just some way to parameterize our policy. We'll talk more about particular forms of parameterization. Um, but just like what we saw for state action value functions, um, this is gonna have a big implication because this is effectively defining the space that you can learn over. So, it's sort of, um, it's determining the, the class of policies you could possibly learn. Um, [NOISE] and we're again gonna sort of focus on model-free reinforcement learning, meaning that we're not gonna assume that we have access to an a priori model of the dynamics or reward of the world. So, we had thrown some of these diagrams up at the start of the quarter, I just want to go back to it. Um, we've been talking about sort of value, we- we haven't talked so much about models, the models are also super important. Um, but we've been talking a lot about sort of value function, based approaches which is this, and now we're gonna talk about policy, um, direct policy search methods. And as you might expect, there's a lot of work which tries to combine between the two of them, and these are often called actor-critic methods. Um, where you try to explicitly maintain a parameterized policy, and explicitly maintain a parameterized critic or value function. So, this is the policy, and this is a Q. Okay, so, we're gonna start today and we're gonna be talking about policy-based methods. So, why would you wanna do this? Um, [NOISE] well, uh, it actually goes back a little bit to also what we were talking about last week with imitation learning. For imitation learning, we talked about the fact that sometimes it's hard for humans to write down a reward function, and so it might be easier for them just to demonstrate what the policy looks like. Similarly, in some cases, maybe it's easier to write down sort of a parametrization of, um, the space of policies than it is to write down a parameterization of the space of state action value functions. Um, in addition, they're often much more effective in high-dimensional or continuous action spaces, and they allow us to learn stochastic policies which we haven't talked very much about so far, but I'm gonna give you some illustrations about where we definitely want stochastic policies. Um, [NOISE] and they sometimes have better convergent policy- convergence, uh, properties um, that can be a little bit debated, it depends exactly what- whether we're comparing that to model-free or model-based approaches and how much computation we're doing. Um so, this can be a little bit of a function of computation to computation can matter. One of the really big disadvantages is that they are typically only gonna converge to a local optimum. 
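As a concrete example of such a parameterization, here is a minimal softmax policy over linear features, π_θ(a|s) ∝ exp(θᵀφ(s,a)); the feature function φ and the discrete action set are assumptions for illustration, not something fixed by the lecture:

```python
import numpy as np

class SoftmaxPolicy:
    """Softmax policy pi_theta(a|s) proportional to exp(theta^T phi(s, a))."""

    def __init__(self, phi, actions, d, seed=0):
        self.phi = phi            # feature function: (state, action) -> np.ndarray of shape (d,)
        self.actions = actions    # list of discrete actions
        self.theta = np.zeros(d)  # policy parameters
        self.rng = np.random.default_rng(seed)

    def probs(self, s):
        prefs = np.array([self.theta @ self.phi(s, a) for a in self.actions])
        prefs -= prefs.max()      # subtract max for numerical stability
        p = np.exp(prefs)
        return p / p.sum()

    def sample(self, s):
        idx = self.rng.choice(len(self.actions), p=self.probs(s))
        return self.actions[idx]

    def grad_log_prob(self, s, a):
        # score function for a softmax policy: phi(s, a) - E_{a'~pi}[phi(s, a')]
        idx = self.actions.index(a)
        p = self.probs(s)
        feats = np.array([self.phi(s, a2) for a2 in self.actions])
        return feats[idx] - p @ feats
```

The grad_log_prob method is exactly the "score function" term that shows up in the gradient estimates later in the lecture.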
So, where you're going to converge to something that is hopefully a pretty good policy, but we're not generally guaranteed to converge to the global optima. [NOISE]. Now, there are some techniques that are guaranteed to converge to a local- to the global optima, and I'll try to highlight some of those today, but generally, almost all of the methods that you see in like deep reinforcement learning that are policy gradient based, um, only converge to a local optima. Um, and then the other challenge is that typically we're gonna do this by trying to evaluate a policy and then estimate its gradient, and often that can be somewhat sample inefficient. So, there might be quite a lot of data to estimate, um, what that gradient is when we're taking a gradient-based approach. So, why might we want sort of a stochastic policy? So, in what I mentioned before, um, in the tabular setting, so let me just go back to here. So, in a- now, why do we want this? Do we want this? If you think back to the very first lectures, um, what I said is that if we have a tabular MDP, there exist a Pi which is deterministic and optimal. So, in the tabular MDP setting, we do not need, um, er, deter- we do not need stochastic policies because there always exists a policy that is deterministic that has the same value as the optimal policy does. So, this is not needed in the tabular Markov Decision Process case, but we don't always- we're not always acting in the tabular Markov Decision Process case. So, as an example, um, [NOISE] who here is familiar with rock-paper-scissors ? Okay. Most people. Um, er, possibly if you're not, you might have played it by another name. So, in rock-paper-scissors, uh, it's a two player game, um, [NOISE] everyone can either pick, uh, paper or scissors or rock. And you have to pick one of those, and scissors beats paper, paper- rock beats scissors, and paper beats rock. Um, and in this case, if you had a deterministic policy, you could lose a lot, you could easily be exploited by the other agent. Um, but a uniform random policy is basically optimal. What do I mean by optimality? In this case, I mean, that you, you could say a plus one if you win, and let say zero or minus one if you lose. We're not gonna talk too much about multi-agent cases, um, er, in this class, but it's a super interesting area of research. Um, and in this case, um, you know, the environment is not agnostic. Um, the environment can react to, uh, the policies that we're doing and could be adversarial, and so we want a policy that, um, is robust to an adversary. So, a second case, um, is Aliased Gridword. So, um, so in this case, so why, you know, why is being stochastic important here? Well, because we're not really in a stochastic setting, we are in an adversarial setting, and we have another agent that is playing with us and they can be non-stationary and changing their policy in response to ours. Um, so it's not, uh, the environment doesn't pick the next- doesn't pick rock-paper-scissors regardless of our actions, um, in the past it can respond to those. [NOISE] Um, so it's sort of got this non-stationarity or adversarial nature. Um, another case is where it's not Markov, so it's really partially observable, and you have aliasing, which means that we can't distinguish between multiple states in terms of our sensors. So, we saw this before a little bit when we talked about robots that, you know, could have laser range finders and sort of tell where they were in a hallway. 
By how far away each of the, um, the, the first point of, um, uh, uh, obstacle was for all of their 180 degrees, and so that will look the same in lots of different hallways. Um, so this is a simple example of that. So, in an Aliased Gridword, um, let's say that the agent because of their sensors cannot distinguish the gray states and they have features of a particular form. Um, they have a feature for whether or not there's a wall to the North, um, er, or East, or South, or West. So, it can basically, like, if it's here it can tell, like, "Oh I have walls to either side of me and not in front or behind me." Um, but that could be the same over in that in the, in the other [NOISE], um, grey state. So, if we did a value-based reinforcement learning approach using some sort of approximate value function, um, it would take these features which are a combination of what action am I going to take? And whether there are walls around me or not. Um, or we could have a policy-based approach which also, um, takes some of these features but then just directly tries to make decisions about what action to take, and those actions can be stochastic. So, in this case, the agent is trying to figure out how to navigate in this world. It really wants to get to here. This is where there's a large reward. So, this is good. It wants to avoid the skull and crossbones, and those will be negative reward. So, because of the aliasing, the agent can't distinguish whether or not it's here or here. Um, and so it has to do the same thing in both states. And so either it has to go left or it has to go right, Call it West or East [NOISE], um, and either way that's not optimal because if it's actually here it should be going that way, not over here and down. Um, and so it can distinguish whether it's in here or here but it could just end up moving back and forth, er, or making very bad decisions. And so it can get stuck and never be able to know when it's safe to go down and reach the money. So, it learns a near-determini- deterministic policy because that's what we've normally been learning with these, um, and whether it's greedy or e-greedy and generally it will do very poorly. But if you have a stochastic policy when you're in a state where you're aliased, you could just randomize. You'd say, "I'm not sure whether I'm actually in this state- in this state or this state, um, so I'll just go, er, either East or West with 50 percent probability." And then it'll generally reach the goal state quickly. Because note, it can tell what it should do when it reaches here because that looks different than these two states. So, once it's in the middle it knows exactly what to do. So, that's just, again, an example where a stochastic policy has a way better value than a deterministic policy and that's because the domain here is not Markov, it's partially observable. [NOISE] Okay. So, that's sort of one of the reasons why we might want to- some of the reasons why you want- might wanna be directly policy-based, and there's a lot of other reasons. Um, so, so what does this mean? Well, er, we're gonna have this parameterized policy and the goal is that we wanna find. Yeah, Like you said, can we conclude that when the world is not Markov, it is partially observed, stochastic policy is always better? Your name is ? I'm sorry yeah. So what said is can we conclude that, um, if the world is partially observable stochastic policies are always better. Um, I think it depends on the modeling you wanna do. 
I think, in this case, better than being stochastic because it's still doing something, kind of, not very intelligent in the gray states, it's just randomizing, would be to have a partially observable Markov decision process policy, um, and then you could track, uh, an estimate over where you are in the world. So, you can keep track of a belief state over what state you're in, and then you could hopefully uniquely identify that, "Oh, if I was just in this state I have to be in the state now." And then you can deterministically go to the right or left. [NOISE] So, it depends on, on the modeling one's willing to do. Good question. Okay. So, when we, um, start to do the- go to parameterize policy search, what we're gonna wanna do is find the parameters that yield the best value. The policy in the class with the best value, and so similar to what we've seen before we can, we can think about sort of episodic settings and infinite sort of continuing settings. So, in an episodic setting, that means that the agent will act for a number of time-steps often, let's say, H steps. But it could be variable, like, it might be until you reach, you know, a terminal state. And then we can just consider what is the expected value, wha- wher- what is the value? What is the expected discounted sum of rewards we get from the start state or distribution of start states? And then what we wanna do is find the parameterized policy that has the highest value. Um, another option is that if we're in a continuing environment which means we're in the online setting, we don't act for H steps we just act forever. There's no terminal states and we can either use the, um, average value where we average over the distribution of states. So, this is, um, like what we saw before thinking about the distribution, the stationary distribution over the Markov chain that is induced by a particular policy. Because we talked about before about the fact that if you fix the policy, um, then basically, uh, you get into Markov reward process. You can also just think of the distribution of states you get is a Markov chain. So, um, if we're acting forever, we're gonna say sort of on average what is the value of the states that we reach under that stationary distribution? Um, and another way to do it is also to say we just look at sort of the average reward per time step. Now, for simplicity today we're gonna focus almost exclusively on the episodic setting, but we can think about similar techniques for these other forms of settings. So, as before and this is an optimization problem similar to what we saw in the value function approximation case, uh, for linear value functions and using deep neural networks, um, we're gonna wanna be doing optimization, er, which means that we need to do some sort of optimization tool to try to search for the best data. So, one option is to do gradient free optimization. We don't tend to do this very much in policy search methods, um, but there are lots of different methods that are gradient free optimization. Just for us to find whatever parameters maximize this V Pi Theta. Um, and just to connect this- just like what we saw for Q functions, now we have Theta which is specifying a policy. And it maybe has some interesting landscape, and then we wanna be able to find where's the max. So, we're really trying to find the max of a function as efficiently as we can. And there are lots of methods for doing that that don't rely on the function being differentiable. 
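The three objectives mentioned here, written out (this mirrors the standard formulation, e.g. Sutton and Barto; d^{π_θ} is the stationary distribution of the Markov chain induced by π_θ):

```latex
J_{1}(\theta) \;=\; V^{\pi_\theta}(s_0) \;=\; \mathbb{E}_{\pi_\theta}\!\Big[\textstyle\sum_{t=0}^{H}\gamma^{t} r_t \,\Big|\, s_0\Big] \quad \text{(episodic / start-state value)}
\\
J_{\mathrm{avV}}(\theta) \;=\; \sum_{s} d^{\pi_\theta}(s)\, V^{\pi_\theta}(s) \quad \text{(average value)}
\\
J_{\mathrm{avR}}(\theta) \;=\; \sum_{s} d^{\pi_\theta}(s)\sum_{a} \pi_\theta(a \mid s)\, R(s,a) \quad \text{(average reward per time-step)}
```

In every case the optimization problem is the same: find the θ that maximizes the chosen J(θ).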
Um, and these actually can be very good in some cases. Um, so this is some nice work done by a colleague of mine and- We have developed a method for automatically identifying the exoskeleton assistance patterns that minimize metabolic energy costs for individual humans during walking [NOISE]. During optimization the user first experiences one control law while respiratory measurements are taken. Steady-state energy cost is estimated by fitting a first-order dynamical model to two minutes of transient data. The control law is then changed and metabolic rate is estimated again. This process is repeated for a prescribed number of control laws forming one generation. [NOISE] A covariance matrix adaptation evolution strategy is then used to create the next generation. The mean of each generation represents the best estimate of the optimal control parameter values. After about an hour of optimization, energy cost was reduced by an averageg of 24 percent, compared to no assistance. So this is work that's done by my colleague uh, Steve Collins, um, who's over in mechanical engineering and we've been collaborating some on whether you can train people to- do this better, um. So, the idea in this case is that, uh, there's lots of instances for which you'd like to use exoskeletons. Um, a lot of people have strokes, a lot of people have mobility problems um, and of course there's a lot of veterans that lose a limb. Um, and in these cases one of the challenges has been is how do you, sort of figure out what are the parameters of these exoskeletons in order to provide um, support for people walking and generally it varies on physiology and for many different people. They're going to need different types of parameters, um, but you want do this really quickly. So, you want to be able to figure out very fast for each individual. What is the right control parameters in order to help them get the most assistance as they walk. Um, and so Steve's lab treated this as, uh, sort of a policy, a policy search problem, where what you're doing is you're having somebody wear their device, you're trying some, uh, control laws, um, that are providing a particular form of support in terms of their exoskeleton. You're measuring their sort of, um, metabolic efficiency, which is, how do you- you know. How hard are they breathing? How hard are they having to work, compared to if they weren't wearing this or under different control laws. And then you can use this information to figure out what's the next set of control laws you use and do this all, in a, closed loop fashion as quickly as possible. Now one of the reasons I bring this up is, both because it was incredibly effective, it's a really nice science paper that, um, illustrates how this could be much more effective than previous techniques. Um, and second because it was using CMA-ES, which is a gradient free approach. So even though most of what we're gonna discuss today in class, is all with gradient based methods. There's some really nice examples of not using gradient based methods also to do policy search for lots of other types of applications. So, I think it's useful to sort of, know in your toolbox that, one doesn't have to be constrained to gradient based methods, and one of the really nice things about, things like CMA-ES is that, they're guaranteed to get towards, uh, to a global optima. So in some cases, uh, you might really want to be guaranteed that you're doing that because it's high stakes situation. 
Um, and in general, it sort of is, has been noticed repeatedly recently that sometimes these sort of approaches do work kind of embarrassingly well, um, uh, that they tend to be in some ways sort of a brute forced, a smart brute force way, um, that often can be very effective. So they're good to consider, in terms of the applications you look at. But, you know, despite this, um, uh, even though, they can be really good and sometimes, um, they're very, very helpful for parallelization. Um, uh, they're generally not very sample efficient. And so depending on, the domain that you're looking at and what sort of structure you have, often it's useful to go to a gradient-based method, particularly if you might be satisfied with the local solution at the end. Sort of locally optimal. So what we're going to talk about mostly today- just like what we did for, um, value, like, uh, value-based methods is, gradient descent, um, and gradient based methods, um and other methods that try to exploit the sequential structure of decision making problems. So CMA-ES doesn't know anything about the fact that this- the world might be an MDP or any form of sort of, sequential stochastic process. And we're gonna focus on ones that sort of, leverage the structure of the Markov decision process, in the decision process itself. So let's talk about policy gradient methods. Um, where again just sort of, um, define things in terms of theta, so that we're explicit about the parameter and we're gonna focus on episodic MDPs, which means that, we're gonna run our policy for a certain number of time steps, until we reach a terminal state or for certain you know, maybe h steps. Uh, we're going to get some reward during that time period, and then we're going to reset. So, we're going to be looking for a local maximum, and we're going to be taking the gradient, with respect to the parameters that, um, define the policy, and then use some small learning rate [NOISE]. So just this is- this should look very similar, very similar to the, similar to, uh, Q and V based search. And the main difference here is, that instead of taking, uh, the derivative with respect to parameters that define our q function, we're taking them with respect to, the parameters that define our policy. So, the simplest thing to do here, is, um, to do finite differences. Um, so for each of your policy parameters, you just perturb it a little bit, um, and if you do that for every single one of the dimensions, um, that define your policy parameters, then you're going to get an estimate of the gradient. Here, just doing sort of a finite differences estimate of the gradient. And you can use a certain number of evaluations for doing this, in each of the cases. So you can- let's say you have this, um, k dimensional, uh, set of parameters that defined your policy, you try changing one of them a little bit, you repeat it, you get a bunch of samples for that, new policy. Um, you do that for all of the different dimensions, and now you have an approximation of the gradient. It's very simple, it's quite noisy, um, it's not particularly efficient, but, it can sometimes be effective. I mean it was one of the earlier demonstrations of how policy gradient methods could be very useful, in an RL context. Um, and the nice thing is that the policy itself doesn't have to be differentiable because, we're just doing sort of a finite difference approximation of the gradient [NOISE]. 
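A minimal sketch of the finite-difference estimator just described; evaluate_return is an assumed, caller-supplied routine that rolls out the policy with the given parameters and averages the observed returns (the policy itself does not need to be differentiable):

```python
import numpy as np

def finite_difference_gradient(evaluate_return, theta, eps=1e-2, n_evals=3):
    """Estimate the gradient of the expected return w.r.t. policy parameters
    by perturbing each parameter dimension a little and re-evaluating.

    evaluate_return(theta): rolls out the policy with parameters theta and
        returns a (noisy) estimate of its expected return.
    theta: np.ndarray of k policy parameters, giving k perturbations.
    """
    k = theta.shape[0]
    grad = np.zeros(k)
    base = np.mean([evaluate_return(theta) for _ in range(n_evals)])
    for i in range(k):
        perturbed = theta.copy()
        perturbed[i] += eps                        # perturb one dimension at a time
        val = np.mean([evaluate_return(perturbed) for _ in range(n_evals)])
        grad[i] = (val - base) / eps               # one-sided finite difference
    return grad
```

The update is then the plain gradient ascent step from the slide, θ ← θ + α · grad.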
So, one of the first examples that- I see- well, um, I think of when- I think of, sort of how policy, uh, gradient methods or how policy search methods can be really effective, is Peter Stone's work on doing, uh Robocup and who here's ever seen- like Robocup? Okay. A few people but not everybody. So let's see if we can, get up like a short demonstration of like what these robots look like. So, let's- ah, okay. So you probably can't see it do you? We won't do that right now, um, but essentially what you have is, there's a bunch of different leagues of Robocup. One of the goals has been that, um, I think by 2050, the goal is that, we're going to have, uh, a robotic soccer, uh, team that is going to be able to defeat- like able to, you know, win the, the World Cup. Um, so that's been one of the driving goals, of this Ro- the Robocup initiative. Uh, and there's lot of different leagues within this, and one of them is, these sort of quadruped robots, um, which try to score goals against each other. And one of the key challenges for this is, they look kind of like that. Um, and, you have to figure out the gait for walking, um, and you want them to be able to walk, quickly but you don't want them to fall over. Um, and so just simply that question of like, how do you optimize the gait, is an important question in order, to win, because you need your robots to move fast on the field. So, Peter Stone has been a leading person in, in Robocup for a long time. Um, and their goal was simply to, learn a fast way for these AIBOs to walk. Um, and to do it by, uh, real experience um, and data's really important here because it's expensive um, you have these robots walking back up forth and you want them to very quickly, optimize their gait. Um, and you don't want to have to keep changing batteries and things like that, so you really wanna do this with very little amounts of data. So, what they thought of doing in this case is sort of to, to do a parameterized policy and try to optimize those proper policies. So this is where significant domain knowledge came in and this is a way to inject domain knowledge. So they, um, specified it by this sort of continuous ellipse, of how gait works for, um, the small robot. And so they parameterized it by these 12 continuous parameters. And this completely defines the space of possible policies you could learn. This might not be optimal. Peter Stone and his group have a huge amount of experience on doing Robocup, um, at the time they were doing this paper and so they really had a lot of knowledge they could inject in here. And in some ways it's a way to provide sort of this hierarchical structure about, what sort of policies might be good. And then what they did is they did just this method of finite differencing, in order to try to optimize for all of these parameters. [NOISE] So, one of the important things here, um, is that all of their policy evaluations were going to be done on actual real robots, um, and they just wanted to, have people inter- intervene every once in a while, in order to, replace batteries which took- happened about once an hour. Um, and so they did it on three AIBOs, very small amount of hardware. Um, they did about 15 policies per iteration, and, they evaluated each policy three times. So, it's not very many but it can be a very noisy signal, um, and each iteration took about 7.5 minutes. So- and then they had to pick some learning rate. And so what do we see in this case? 
Well, we see that, in terms of the number of iterations that they have versus how quickly they're- of course you have to define your optimization criteria in this case, they're looking at speed of stable walking. Um, and a lot of people have been trying to figure out how to do these using hand tuning before, um, uh, including, so they're the UT Austin Villa team. Um, including, them- in the past people have found different ways to sort of hand tune, um, I don't know if we'll be using unsupervised learning et cetera. And you can see, as they do multiple iterations of trying to, search for a better policy using this finite difference method, that they get to faster than everything else. And this is not that many iterations, um, so this is something that was happening over, you know, a few hours. So, I think this was a really compelling example of how, policy gradient methods could really do much better than what we- had happened before and they didn't have to require an enormous amount of data. That's very different than probably what you're experiencing in assignment two, so this is, no total number of iterations. Um, uh, I think this was on the order of, let's see, like, this is on the order of, you know, tens to hundreds of policies, not millions and millions of steps. So these things can be very data efficient. But there was also a lot of information that was given. So, um, if you think about sort of like, uh, I have a little bit on here. So, in their paper they discussed sort of what was actually impacting performance in this case, and there are a lot of things that impact performance. So, um, you know, how do we start? Um, so I may have a sense of why, you know, why does the initial policy parameters used matter for this type of method? Yeah, Well, because we're not guaranteed to have a global optima, only a local optima, so your starting point is gonna affect which local optima you are able to find. Exactly. So what just said is that, um, because these methods are only guaranteed, particularly this method is only guaranteed, to, find a local optima and all of the sort of policy gradient style methods are. Um, then wherever you start you're gonna get to the closest local optima and you have no guarantee that that's the best global optima. Um, so it's important to either try lots of random restarts here in this case or to have domain knowledge. Um, another important question here is, how much you're perturbing sort of the size of your finite differences. And then I think, really most critical is this policy parameterization. Like just how are you writing down the space of possible policies that you can learn within because like, if that's not a good policy space, then you're just not going to learn anything. Um, yeah. on slide 26 What is an open loop policy can you explain a bit more on that. Yeah. Um, uh, so, question was about the open loop policy part. So, these policies that we're learning don't have to be adaptive. And open loop policy is essentially a plan. It's a sequence of actions to take, um, regardless of any additional input that you might have. So, um, we typically have been thinking about policies as mappings from states to actions, but they can also just be a series of actions. And so, when we talk about an open loop policy, that's a non-reactive policy because it's just a sequence of actions that regardless of the state of the robot you just keep going. 
So, maybe there's a really large wind in the middle, and the robot's next action is the same whether there's a lot of wind or not. It doesn't have to be reactive. Okay. So, but in general, um, you know, finite differences is a reasonable thing to try. Um, often we're gonna want to use gradient information and leverage the fact that our policy for function is actually differentiable. So, what we're gonna do now is, um, compute the policy gradient analytically [NOISE] excuse me. This is most common, um, in most of the techniques that are used right now. Um, we're gonna assume it's differentiable wherever it is non-zero, um, and that we can explicitly compute this. So, when we say, what we- when we say know that means that this is computable. And we can compute this explicitly. And so, now we're gonna be, um, thinking only about gradient-based methods. And so, we're only, [NOISE] we're gonna only converge to a local optima. Hopefully, hopefully, we'll get to a local optima, that's the best we can hope for in this case. Okay. So, we're going to talk- people often talk about likelihood ratio policies, um, and they're gonna proceed as follows. So, let- we're thinking about the episodic case. So, we're gonna think about it as having, um, trajectories. So, state action reward, next state, et cetera, all the way out to some terminal state. So, this is where we terminate. And we're gonna use R of Tau to denote the sum of rewards for a trajectory. Okay. So, the policy value in this case is just gonna be the expected discounted sum of rewards we get by following this policy. And we can represent that as the probability that we observe a particular trajectory times the reward of that trajectory. So, it just says given under this policy what are the, you know, what's the probability of seeing any trajectory, and then what would be the reward of that trajectory? Because the reward is the deterministic function of the trajectory. Once you know the state action rewards, et cetera, then your reward is, um, just the sum of all of those. And so now, in this particular notation, what our goal will be is to find policy parameters Theta, which, um are the arg max of this. Uh, and the reason we sort of- what have we changed here, um, the change now then has been the fact that we've gonna focus on here. So, notice now that the policy parameters only appear in terms of the distribution of trajectories that we might encounter under this policy. And this is, again, a little bit similar to what we talked about for imitation learning before or where in imitation learning, we talked a lot about distributions of states and distributions of states and actions, and trying to find a policy that would match the same state action distribution, as what was demonstrated by an expert. Um, today, we're not gonna talk as much about sort of state action distributions but we are talking about sort of distributions of trajectories that we could encounter under our particular policy. So, what's the gradient of this? Um, so, we wanna take the gradient of this function with respect to Theta. So, we're gonna go for this as follows. We are gonna rewrite what is the probability of a trajectory under Theta. So, sum over Tau. I wanna do probability of Tau [NOISE] times. All right, first actually I'll whip it in here. And then what we're gonna do is, make sure I get the notation the same. Okay. So, then what we're gonna do is, we're gonna do something simple where we just multiply and divide by the same thing. 
So, we're gonna put in probability of Tau given Theta, divided by probability of Tau given Theta, times the derivative of the probability of Tau given Theta. And the probability- if we instead had a log. So, if you're taking the derivative of log of probability of Tau given Theta, that is exactly equal to the one over the probability of Tau given Theta times the derivative of p of Tau given Theta. So, we can re-express this as follows; Sum over Tau r of Tau, p of Tau given Theta times derivative with respect to log of p of Tau given Theta. Now, so far that doesn't necessarily seem like that's gonna be very useful. Um, [LAUGHTER] So, we've done that, that's a reasonable transformation, but we'll see shortly why that transformation is helpful. And in particular, the reason this transformation is helpful is it's gonna be very useful when we think about wanting to do all of this without knowing the dynamics or reward models. Um, so, we're gonna need to be able to, you know, get reward in terms of, uh, a trajectory, but we wanna be able to evaluate, um, the gradient of a policy without knowing the dynamics model, and this trick is gonna help us get there. So, when, when we do this, this is often, this is often referred to as the likelihood ratio. And we can convert it and just say, "Well, we noticed that by doing this, this is actually exactly the same as the log." Now, why else does this start to look like something that might be useful? Well, what do we have here? We have, if we- this is the sum over all trajectories. Of course, we don't necessarily have access to all possible trajectories, but we can sample them. So, you could imagine starting to be able to approximate this by running your policy a number of times, sampling a number of trajectories, looking at the reward of those, um, and then taking the derivative with respect to this probability of trajectory given Theta. So, typically we're gonna do this by just running the policy m times. Um, and then, that p of the- Tau given Theta, we're gonna just approximate that by the following. So, that part drops out, we're just gonna weigh all of the trajectories that we got during our sampling uniformly, and then we look at the reward of that trajectory, and the log of p of, um, [NOISE] uh, Tau given Theta. So, what is happening in this case? Okay. So, this is saying that the gradient is this sort of, um, [NOISE] uh, the reward that we get Um, times the log of the probability of that trajectory for the reward with associated word times Theta. So, what's happening in that case? So, in this case, we have a function which for our case is the reward, which is sort of measuring how good, um, that particular, um, you know, trajectory is or how good that sample is. And so, what this is doing is we're just moving up and the trajectory of the log probability of that sample based on how good it is. So, we wanna sort of push up our parameters, that, um, are responsible for us getting samples which are good. So, um, we want to have parameters in our policy that are gonna cause us to execute trajectories that give us high reward. So, if we think of just sort of here f of x again is the reward. And we are- this is gonna be our policy or parameterized policy. We want to increase the weight of things in our space that lead to high reward. So, if this is our f of x, which is our reward function and this is the probability of our trajectories, then we wanna reweight our policy to try to increase the probability of trajectories that yield high reward. 
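Putting the algebra just done into one display, the likelihood-ratio trick and its Monte Carlo approximation are:

```latex
\nabla_\theta V(\theta)
 \;=\; \nabla_\theta \sum_{\tau} P(\tau;\theta)\, R(\tau)
 \;=\; \sum_{\tau} P(\tau;\theta)\, R(\tau)\, \nabla_\theta \log P(\tau;\theta)
 \;\approx\; \frac{1}{m}\sum_{i=1}^{m} R\big(\tau^{(i)}\big)\, \nabla_\theta \log P\big(\tau^{(i)};\theta\big),
```

where τ^(1), …, τ^(m) are trajectories sampled by running the current policy m times.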
So, you would end up having larger gradients towards things that have high value, high reward. Okay. So, then the next question is, if I'm gonna do this then I have to be able to approximate the second term, which is this log. You know, the derivative with respect to the probability of a trajectory, um, under some parameters. So, I have to be able to figure out what is the probability of a trajectory under a set of parameters, and we can do that as follows. And so, this is gonna be Delta Theta of log of the pro- of Mu of S_0. So, this is our initial starting state, the probability of our initial starting state, times the product over j equals 0 to t minus 1 of the probability of observing the next state given the action that was taken, times the probability of taking that action under our current policy. So, there's like another bracket at the end. Um, and so, since this is log, we can just decompose this. So, this is gonna be equal to Delta Theta of log of Mu of S naught plus Delta Theta sum over Delta Theta because it's a log term, of J equals 0 to t minus 1 of log of [NOISE] the transition model. Remember, we don't know this in general. This is unknown. You're just gonna hopefully end up with an expression which means we don't need to have it. Um, and what i is indexing here is which trajectory we're on. Add sum over J equals 0 to t minus 1. And this is gonna be our actual policy parameters. All right, can anybody me why this is a useful decomposition? And whether or not it looks like we're gonna need to, so, let me just parameterize all these things, um, does this look hopeful in terms of us not needing to know what the dynamics model is? [inaudible] How about everybody just take a second, talk to your neighbor and, um, then tell me which of these terms are gonna be zero. So we're taking the derivative with respect to Theta. And which of these terms depend on Theta [OVERLAPPING]. Remember Theta is what determines your policy parameters. Theta is what determines what action you take in a given state. All right I'm gonna do a quick poll, um, so I'm gonna call these items one, two and three. Does the first term depend on Theta? Raise your hand if yes, raise your hand if no. Great, okay, yeah. So this is independent Theta. So this is gonna be zero. Raise your hand if the second term is independent of Theta. Great, so this goes to zero. So the only thing we have less is this, which is great. So, um, the nice thing and so now it sort of becomes more clear why we did this weird log, um, transformation, because when we did this weird log transformation, it allowed us to take this product of the probability of the action that we took in the state transitions, then instead we can decompose it into sums. And now once we see that we decompose it into sums, we can apply the derivative separately and that means some of these terms just directly disappear which is really cool. So, it means that we don't actually need to know what the transition model is. Um, we don't need to have a explicit representative. Yeah question and name first please. And the question is, I was wondering, doesn't the dynamics of the system depend on the policy though, um, in general? Great question. So question is, does the dynamics of the system depend on the policy? Absolutely, but only through this part. So, it's like um, the agent gets to pick what action they take, but once they pick that, the dynamics is independent of the agent and so it's this de-coupling. 
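The decomposition on the board, with the terms that vanish marked explicitly:

```latex
\nabla_\theta \log P\big(\tau^{(i)};\theta\big)
 \;=\; \underbrace{\nabla_\theta \log \mu\big(s_0^{(i)}\big)}_{=\,0}
 \;+\; \sum_{t=0}^{T-1} \underbrace{\nabla_\theta \log P\big(s_{t+1}^{(i)} \mid s_t^{(i)}, a_t^{(i)}\big)}_{=\,0}
 \;+\; \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta\big(a_t^{(i)} \mid s_t^{(i)}\big).
```

Only the last sum depends on θ, which is why neither the initial state distribution nor the transition model is needed to evaluate the gradient.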
So, if you have a different policy, you will absolutely get different trajectories but the way in which you get different trajectories, um, is only affected by the policy in terms of the actions that are selected and then the environment will determine sort of the next states that you get. And so we don't need to know that in terms of, um, estimating the impact of the actions on the environment. It will also come through in terms of the rewards you get. Because the rewards you get are also a function of the state. So you'll absolutely visit different parts of the state depending on the actions you take. Any other questions? I'm and I just want to make sure I understand how I get the estimate for the probability of how given Theta, I mean, most likely we are just saying if we took m episodes and this one showed up, you know, i times it's gonna be i over m. It is that what we are doing here is correct? Great question. So, is asking, um, you know, this is, what I put here is just one of those internal terms that would cover one i, yes, So, what we're doing here is, we're saying, we're gonna take this policy. We're gonna run it m times. We're probably not gonna get any trajectories that are identical. And what we're gonna do is compute this log of the probability of trajectory for each of those separately. And then sum them up. You might end up with that, I mean, you know in deterministic cases you might, um, if your domain doesn't have a lot of stochasticity and neither does your policy, you might end up with multiple trajectories that are identical. In general your trajectories will be completely different. And so will your estimate of their local gradients. So, this is really nice. We're gonna end up with this situation where we only have to be able to have an analytic form for the derivative of our policy with respect to our parameters. So, we still need and we'll talk about this a little bit more later. We still need this. We have to evaluate this. This is about how we parametrized our policy. And if we want this to be analytic, we need to have parametrized our policy in a way that we can compute this exactly for any state and action. So, we'll talk more about some ways, you know, some policy parameterizations which make this computation analytic and nice. In other cases you might have to estimate this thing itself by, you know, brute force or computation or finite differences or something. But if we choose a particular form of parameterized policy, then this part is gonna be analytic. So, another thing is I don't find this, um, er, I don't find this additional terminology particularly helpful but it's used all over the place. So I wanna introduce it which is people often call, um, this part a score function. Just the score function which is not particularly helpful, I think but nevertheless is often used is called this. So that's the quantity that we were just talking about needing to be able to evaluate. So this really gets into, um, er, well, we'll write it out again. So, um, when we take the derivative of the value function we approximate that by getting m samples and we sum over i equals one to m. And we look at the reward for that treject- um, trajectory and then we sum over these per step score functions. Can everybody read that in the back? Yeah. Okay great. Yeah so these are sort of our score functions. And these are our score functions, um, that can be evaluated over every single state action pair that we saw and we do not need to know the dynamics model. 
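A minimal sketch of the resulting Monte Carlo gradient estimate, assuming a policy object that exposes grad_log_prob(s, a) (for instance the softmax policy sketched earlier) and trajectories stored as lists of (state, action, reward) tuples:

```python
def policy_gradient_estimate(policy, trajectories):
    """Likelihood-ratio / score-function estimate of the policy gradient:
    grad V(theta) ~= (1/m) sum_i R(tau_i) * sum_t grad log pi_theta(a_t | s_t)."""
    m = len(trajectories)
    grad = None
    for traj in trajectories:
        total_return = sum(r for (_, _, r) in traj)                       # R(tau)
        score_sum = sum(policy.grad_log_prob(s, a) for (s, a, _) in traj)
        contrib = total_return * score_sum
        grad = contrib if grad is None else grad + contrib
    return grad / m
```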
So, the policy gradient theorem slightly generalizes this. How is it gonna generalize this? Note in this case, what we're doing here is we're- this is for the episodic setting. And this is for when we just take our raw, our raw reward functions. So we look at the sum of rewards for that trajectory and then, um, we weigh it by this sort of derivative with respect to our policy parameters. Um, it turns out that we can also slightly generalize this. And let's say I'm gonna call, so this is a value function, um, let's say that we had slightly different objective functions. We talked before about how we could have episodic reward or average reward per time step or average value. So, we could either have our objective function be equal to our normal value for episodic or we could have it equal to what I'm gonna call J AVR which is average reward per time step or we could have it as average value. Let's say we're always continuing and we want to average over the distribution of states that we encounter. So we can think about that. It's a good scenario too. It turns out that on all of those cases you can do a similar derivation to what we did here for the episodic case. And what we find is that we have the derivative of our objective function, which now can be kind of any one of these different objective functions, is equal to the expected value under that, the current policy of the derivative with respect to just those policy parameters times Q. And Sutton and Barto in Chapter 13 which we also reference on the schedule, um, have a nice discussion about a number of these different issues, um, and so again, we're not gonna talk too much about these slightly other different objective functions but just know that this all can be extended to the continuing case. Okay, so what we've said here so far is that we have this approximation where what we do is we just take our policy, we run it out phi m times, for each of those m times we get a whole sequence of states and actions and rewards. And then we average. And this is an unbiased estimate of the policy gradient but it's very noisy. So, this is gonna be unbiased and noisy. If you think about what we saw before for things like Monte Carlo methods, it should look vaguely familiar, same sort of spirit, right? We have, um, we're just running out our policy. We're gonna get some sum of rewards just like what we got in Monte Carlo, um, estimates. But, [NOISE] so, it'll be unbiased estimate of the gradient. So, it's unbiased estimate of the gradient, estimate of gradient. But noisy. So, what can make this actually practical? Um, there's a number of different techniques for doing that. Um, but some of the things we'll start to talk about today are temporal structure and baselines. [NOISE] Okay. So, how do we fix this? I'm gonna start to look at, you know, fixes, [NOISE] uh, temporal structure and baselines. And before we keep going on this, um, based on what I just said in terms of Monte Carlo estimates, um, what are some of you guys' ideas for how we could maybe reduce the variance of this estimate? Based on stuff we've seen so far in class. Like, what are the alternative to cut up Monte Carlo methods? Yeah? We could use bootstraps. Um, can I get your name first. Oh, I'm . ? Yeah. What said is exactly, right. So, said we could use bootstrapping. Yeah. 
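In display form, the policy gradient theorem being stated here is:

```latex
\nabla_\theta J(\theta) \;=\; \mathbb{E}_{\pi_\theta}\!\left[\, \nabla_\theta \log \pi_\theta(a \mid s)\; Q^{\pi_\theta}(s,a) \,\right],
```

and it holds, with the appropriate state distribution in the expectation, for the episodic start-state objective, the average-reward objective, and the average-value objective; see Sutton and Barto, Chapter 13.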
So, we've repeatedly seen that, um, we have this trade off between bias and variance, and that bootstrapping, um, like temporal difference methods that we saw in Q-learning that you're doing in DQN, can be helpful in, um, reducing variance and, and speeding the spread of information. So, yeah. So, we could absolutely do things like bootstrapping, um, to kind of replace R with something else or use, um, a covariate in addition to R. To try to reduce the variance of R. Okay. All right. So, what we're gonna do now is, we're first gonna do something that doesn't go all the way to there but tries to [NOISE] at least leverage the fact that we're in a temporal, temporal, um, process. [NOISE] Okay. Um, and for any of you who have played around with importance sampling before, this is closely related to, um, per-decision importance sampling. And basically, the, um, thing that we're going to exploit, is the fact that, um, the rewards, uh, can only depend on what came before them, um, the temporal structure of the domain. Oh, I'll write it out first. Okay. So, what we had before, is we said that, um, the derivative with respect to Theta of the expected value over Tau of the return is equal to the expected value under the trajectories you could get of the sum over t equals 0 to t minus 1 of rt, that is, the sum of rewards you get [NOISE] times the sum over t equals 0 to t minus 1 of your derivative with respect to your policy parameters. [NOISE] Um, that's what we had before. So, we just sum up all of our rewards, and then, we'd multiply that by the sum over all of the gradients of our policy at every single action state pair we got in that trajectory. [NOISE] Okay. So, let's think about doing this for a single reward, instead of looking at the whole sum of rewards. So, let's just look at. We take the derivative with respect to Theta of expected value of rt prime. [NOISE] So, this is just a single time step reward that we might encounter you know, along our trajectory, and that's gonna be equal to the expected value of rt prime times sum over t equals zero to t prime of this derivative. So, this is gonna look almost exactly the same as before. [NOISE] Except, the only key difference here is that I'm only summing up to t prime. Okay. So, we're only summing up, um, you could think of this as just like a shortened trajectory. I'm looking at the product of, um, the states, and the actions and the rewards that I, I reached all the way up to when I got to, um, rt prime. Okay. So, I don't have to sum all the way over the future ones. So, we can take this expression, and now, we can sum over all time steps. So, this says, what's the expected reward, uh, or the derivative with respect to the reward for time step t prime? Now, I'm gonna just sum that, and that's gonna be the same as my first expression. So, what I'm gonna do is I'm gonna say, [NOISE] V of Theta is equal to the derivative with respect to Theta of er, and I'm gonna sum up that internal expression. So, I'm gonna sum over t prime is equal to zero to t minus 1 rt prime, and then insert that second expression. Okay. So, all I did is I put that in there and I summed over t prime is equal to zero, all the way up to t minus 1, and then, what I'm gonna do is I'm going to reorder this and this by making the following observation. So, if we think about, how many terms one of these particular, um, log Pi Theta at st appears in? So, if we look at, um, [NOISE] log of Pi Theta of a_1 s_1.
So if you look at how many times that appears, that appears for the early rewards and it appears for all the later rewards too. Okay. This is going to appear for r_1, it's going to appear for r_2, it's gonna appear all the way up to r_t minus 1. Because we're always summing over everything before that t prime. Okay. So, what we're gonna do now is we're gonna take those terms and we're gonna just reorganize those. So, some of these terms appear a whole bunch of times, some of them, the last one, the [NOISE] log of Pi Theta of at minus 1, st minus 1, it's only gonna appear once. It's only re-responsible for helping dictate the very [NOISE] final reward. So, we can use that insight to just slightly, um, reorganize this equation as follows. [NOISE] So, now, we're gonna say this is equal to the expected value of sum over t equals zero to t minus 1. So, notice before, I put the t prime on the outside and the t was on the inside, and now, what I'm gonna do is put the t on the outside, and I'm gonna say, [NOISE] the derivative with respect to Theta of log Pi Theta, at st times sum over t prime is equal to t all the way to t minus 1 of rt prime. Okay. So, all I've done is I've reorganized that sum. Yes? Is that ? Yeah. Yeah. Um, on the second line from the bottom, is it's supposed to be the derivative that- is that a value function [NOISE] with respect to Theta? Um, at the very [NOISE] left. Yeah. Okay. Yes. Oh sorry. You mean[OVERLAPPING] it's supposed to be the derivative of this? Yes. [NOISE] Yeah. Thank you. Okay. So, what we've done in this case was we've reorganized the sum. We-we've just recollected terms in a slightly different way. But it's gonna be the- in a useful way. So, [NOISE] let's move this up, and I'll move this one down. [NOISE]. Okay. So, right now, we're still working on the temporal structure. [NOISE] And what is this going to allow us to do? Well that second term there, should look somewhat familiar. What that's saying here is that's saying, what is the reward we get starting at time step, uh, t all the way to the end? And that's just the return. So, we had previously defined that, um, when we are talking about, like, Monte Carlo methods, et cetera, [NOISE] that we could always just look at, um, rt prime at I. This is just equal to the return. The return for the rest of the episode starting in time step t on episode i. So, that, that should look very familiar to what we had seen in Monte Carlo methods, where we could always say from this state and action, what was the sum of rewards we get starting at that state and action until the end of the episode? Okay. So, that means we can re-express the derivative [NOISE] with respect to Theta as approximately one over m sum over all of the trajectories and we're summing over, sum over all the time steps. The derivative with respect to Theta of the log of our actual policy times just the return. And this is gonna be a slightly lower variance estimate than before. Okay. So, instead of us having to sort of separately sum up all of our rewards, and then, we multiply that by the full sum of all of these derivatives of the logs, we are only kind of needing to take the sum of the logs, um, for some of the reward terms essentially, and so, we can reduce the variance in that case. Because in some ways, what this is doing, this is saying, like, for every single reward, because you could re-express this as a sum of rewards.
For every single one of those rewards, you have to, um, sum it by sort of the full trajectory in terms of the derivative of the gradient, uh, the derivative of the policy parameters. And now, we're saying, you don't have to, uh, multiply that by all of those. You only have to multiply it by the ones that are relevant for that particular reward. That means that you're gonna have a slightly lower variance estimator. [NOISE] Okay. So, when we do this we can end up with what's known as REINFORCE which, um, who here has heard of REINFORCE? Yeah. Number of people, not everybody. REINFORCE is one of the most common reinforcement learning policy gradient algorithms. So, you get the REINFORCE algorithm. [NOISE] So how it works is you, um, then the algorithm is you initialize in it, theta randomly. [NOISE] You just always will have to first decide on how you're parameterizing your policy, so somewhere you already defied- decided how you're parameterizing your policy. Now, you're gonna set the values for that policy randomly. And then for each episode, so you're going to run an episode with that policy. [NOISE] Episode. [NOISE] And you're gonna gather a whole bunch of actions and rewards, [NOISE] and this is sampled from your current policy. So, your sample, your current policy according to, um, sample from your current policy, you get a trajectory, and then for every step in that trajectory, you're gonna update your policy parameters. [NOISE] So, [NOISE] for every time step inside of that episode, we're gonna update our policy parameters. [NOISE] So, it's going to be the same as before times some learning rate. [NOISE] I will not use W there, I use alpha, um, times the derivative with respect to Theta, log Pi Theta, st at Gt, where Gt is just in this episode, what is the sum of rewards from st at onwards? [NOISE]. So, that's the just the normal return like, what we had with, um, Monte Carlo methods. So, just like, what we did when we were estimating like, the linear value function, and we were using rewards from the state and action onwards. We're going to do the same thing here except for, um, we're going to be updating the policy parameters, and we do this many, many, many times and then at the end, we return the Theta parameters. Yeah? [NOISE] I have a question. So, for each episode, do you sample from the updated, um, policy? We're gonna talk with you. Hope you're ready. Yes. Uh-um. [NOISE] Yeah. So, what just asked is, right. Um, so, in this case you, um, I- after you do all the updates for one episode, so you could do these incremental updates. Um, I- and then, a- at the end of doing all of your incremental updates, then you get another episode with your new updating parameters. Yeah, ? Um, since we're doing every time updates would this be a biased method? Um, good question. So, since we're doing every time estimates, um, this should be an unbiased estimate of the, um, I- It should still be an unbiased estimate of the gradient. It's stochastic, um, but, um, we- there's not a notion of state and actions in the same way. Um, this will be asymptotically consistent. It's a good question. So, the notion of, um, a state and action in this case is different because we have just these policy parameters. So, we're not estimating the value of a state and action here. Um, so, this is certainly asymptotically consistent. I think it's still just unbiased. Um, if I, if I reconsider that later, I'll send a Piazza post, um, but I think it's still just an unbiased estimate of the gradient. 
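To make the algorithm concrete, here is a minimal sketch of the REINFORCE loop just described, using a linear softmax policy. It is an illustration only: the featurization phi(s, a) and the simplified environment interface (reset() returning a state, step(a) returning next state, reward, done) are assumptions, not the course starter code.

```python
import numpy as np

def reinforce(env, phi, n_features, n_actions, alpha=0.01, gamma=1.0, n_episodes=1000):
    """Monte Carlo policy gradient (REINFORCE) with a linear softmax policy.

    phi(s, a) -> length-n_features feature vector (assumed featurization).
    env is assumed to expose reset() -> s and step(a) -> (s_next, r, done).
    """
    theta = 0.01 * np.random.randn(n_features)       # initialize policy parameters randomly

    def action_probs(s):
        logits = np.array([phi(s, a) @ theta for a in range(n_actions)])
        logits -= logits.max()                        # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    for _ in range(n_episodes):
        # Run one episode under the current policy, storing (state, action, reward).
        s, done, traj = env.reset(), False, []
        while not done:
            a = np.random.choice(n_actions, p=action_probs(s))
            s_next, r, done = env.step(a)
            traj.append((s, a, r))
            s = s_next

        # Compute returns G_t = sum of (discounted) rewards from time t onwards.
        G, returns = 0.0, []
        for (_, _, r) in reversed(traj):
            G = r + gamma * G
            returns.append(G)
        returns.reverse()

        # Per-timestep update: theta <- theta + alpha * grad log pi(a_t|s_t) * G_t.
        # For the softmax policy, grad log pi = phi(s,a) - sum_b pi(b|s) phi(s,b).
        for (s, a, _), G_t in zip(traj, returns):
            p = action_probs(s)
            expected_phi = sum(p[b] * phi(s, b) for b in range(n_actions))
            theta = theta + alpha * (phi(s, a) - expected_phi) * G_t

    return theta
```

The per-timestep update uses exactly the temporal-structure form derived above: each gradient term is weighted only by the return G_t from that time step onward, not by the full trajectory reward.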
It's a good question. Okay. So, I go back to my slide notes. Um, I think the last thing I just wanted- well, I'll- I mention the, uh, probably the things will, um, [NOISE] one critical question here is, whether or not, or how to compute this differential with respect to the policy parameters? So, I think it's useful to talk about, you know, what are the classes of policies that often people consider, [NOISE], um, that have nice differentiable forms. So, um, some of the classes people consider are things like, Softmax, Gaussians, and Neural networks. Those are probably the most common. So, I- what do I mean by that? I mean, that's how we're [NOISE] going to actually, just parameterize our policy. So, let's just look at an example. So, Softmax is where we simply [NOISE] have a linear combination of features, and we take sort of, um, an exponential weight of them. So, what we're gonna do is we're gonna have some features of our state and action space, and we're gonna [NOISE] multiply them by some weights or parameters. These are our parameters. [NOISE] And then, to actually get a probability of taking an action, so if we want to have our policy where we say, what is the probability of action given the state? We're gonna take the exponential of these weighted features. So, we have e [NOISE] to the phi transpose Theta, divided by the sum over all actions [NOISE]. So notice, this is a reasonable thing to do when our action space is discrete, action space discrete [NOISE]. So, [NOISE] a lot of Atari games, a lot of different, um, scenarios you do have a discrete action space, um, so you could just take this exponential, divide by the normalizer, um, the sum over the exponentials, and that immediately yields our parameterized policy class. And so, then, um, if we want to be able to take the derivative of the log of this with respect to Theta, that's quite nice because we have exponentials here, and we have a log term. We're taking the log of this. So, um, we want to be able to compute this term from this sort of parameterized policy class. What we get is the derivative with respect to Theta of log of this type of parameterized policy [NOISE] is just equal to our features [NOISE]. So, this is whatever feature representation we're using, like in the case of the, um, uh, locomotion robotic case, this would be like all the different. Um, uh, this could be something like, you know, angles, or joints, or things like that. Um, so, this is whatever featurization we're using minus the expected value under Pi Theta of [NOISE] the features, um, your expected value [NOISE] over all the actions you might take under that policy. So, it's sort of saying the features that you observed versus the sort of average feature, average, average over the action. Okay? So, that's differentiable. Um, and you can solve it, then it gives you an analytic form. Um, another thing that's really popular is a Gaussian policy. [NOISE] And why might this be good? Well, this might be good because often, we have continuous action spaces. So, this is good if we have discrete action spaces. Often, we have continuous action spaces. This is very common in, um, controls and robotics. [NOISE] So, you have a number of different parameters and i- in their continuous scalar values that you wanna be able to set. Um, and what we could say here, let's say, we use mu of s. It might be a linear combination of state features, times some parameter. Okay.
And let's imagine for simplicity right now that, um, we have a variance but that, that's static. So, we could also consider the case where it's not, but we're gonna assume that we have some variance term that is fixed, [NOISE] so this is not a parameter. This is not something we're gonna try to learn. We're just gonna be trying to learn the Theta that's defining our mu function, and then our policy is gonna be a Gaussian. So, a is going to be drawn from a Gaussian using this mean per features. Okay. So, we compare our current state to the mean, and then we select an action with respect to that, and the score function in this case is the derivative of this Gaussian. [NOISE] So, score's a derivative of, uh, of this Gaussian function [NOISE], which ends up being a minus mu of s times our parameterization of the state features divided by sigma squared [NOISE]. So, again, it can be done analytically. And the other really common one is deep neural networks. So, those are, um, [NOISE] those are- those are sort of very common forms, um, I- that people use. We're gonna talk next time about another common way. Before we finish, um, we're gonna do, um, spend about five minutes, um, to do some early class feedback. It's really helpful for us to figure out what's helping you learn, what things do you think could be improved? Um, so, what I'm going to do right now is open, if you guys could go to Piazza. Um, It would be great if you could fill this out. All everything is anonymous. Um, and my goal is to get feedback for you guys on Wednesday about this. So, just- let me see if I could redo that. So, it's posted. Okay. Let me see if I can- I'll pin it, so it's easier to find. [NOISE] So, it should be pinned now at the very top. Yeah. So, a class feedback survey, if you go to Piazza, um, it's a very short survey. You can give us information back. That would be great. What we'll do next time is we'll continue to talk about policy search. We're gonna talk about baselines, which is another way to reduce variance. Um, and this again is a super active area of research, so there's, um, a ton of work on deep reinforcement learning and policy gradients, and so we'll talk about some of that work, and then you'll have a chance to, uh, play around with this a- and, uh, do get sort of hands-on experience with policy gradients after the mid-term. So we're releasing the assignment about this at post the midterm, and that will be the third assignment. |
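Since the score functions for these two policy classes were only written out verbally, here is a small sketch of both, assuming a linear featurization phi and, for the Gaussian case, a fixed standard deviation sigma; both are illustrative assumptions rather than the slide notation.

```python
import numpy as np

def softmax_score(phi_s_all, a, theta):
    """grad_theta log pi(a|s) for a softmax policy over discrete actions:
    phi(s,a) minus the expected features under pi(.|s).
    phi_s_all: array of shape (n_actions, n_features) stacking phi(s, b) for each action b."""
    logits = phi_s_all @ theta
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return phi_s_all[a] - p @ phi_s_all

def gaussian_score(phi_s, a, theta, sigma):
    """grad_theta log pi(a|s) for a Gaussian policy with mean mu(s) = phi(s)^T theta
    and fixed variance sigma^2: (a - mu(s)) * phi(s) / sigma^2."""
    mu = phi_s @ theta
    return (a - mu) * phi_s / sigma ** 2
```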
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_13_Fast_Reinforcement_Learning_III.txt | All right, we're gonna go ahead and get started. Um, I want to start the class with some stuff about some logistics, um, as well as sort of to address some questions, um, that have come up on Piazza about the grades on the midterm and some people have concerns about what that might mean for their grades on the final class. Um, it was interesting to go back and compare the means and the distributions on the midterm last year, it's near identical. Um, so last year the mean was about 69%, this year it was about 71%. Um, and you see pretty similar distributions. Uh, the one on the top is our- Oh, no, the one on the bottom is ours. So you can see this is 2019, this is 2018. [NOISE] Okay. So they look pretty similar distributions. Um, we don't do an official curve for the class. If anybody's getting over 90%, I always consider, even if everybody gets over 90%, that that means that those people all understand the material well enough to deserve an A. Um, and then if we have really, um, abnormal distributions that sometimes we curve below that. But just to give you a sense, um, last year, about 42% of people got an A in the class. So for those of you that are concerned about your midterm performance and concerned about your final grade and whether it's still possible to do well in the class, it definitely is. So, so does anybody have any questions about the midterm? I know we've had some regrade- regrade requests and we're going through those as quickly as we can. Yeah. Is there any [inaudible] I'm supposed [NOISE] I'm very curious [inaudible] but is there any distribution available per, [NOISE] per question? Cause like- I feel like, for example, for me, I just ran out of time on the last question and I wonder if that's. Yes, we have that information. Um, I'll double-check with the TAs that there's no reason we shouldn't release that, I don't think there is. So, um, we, Gradescope gives us full distributions for all the questions. So we can release that. Um, a lot of people ran out of time, uh, the last problem was definitely the hardest. So that was where we saw the biggest variation and so that's where we tried to be particularly careful on that rubric. Um, and we very much tried to make sure that if you're doing algebraic mistakes throughout the exam, that that was worth very little and we were focusing on the conceptual understanding. Any other questions about the midterm? So I'll just write down per- per problem breakdown. Basically, when we're going through it, we try to look at any problem that had really high variance, um, and then step through the rubric again to make sure that we're being fair. Oh, one other thing I- which is we're gonna continue to accept regrade requests for the midterm through Friday. And after that, it will be closed. [NOISE] Okay. So that's the midterm. Um, hopefully, that helps, sort of, quell some concerns from at least some of the people in my class. The other thing that I wanted to bring up right now is, um, the quiz. So the quiz is gonna be in about two weeks, a little less than two weeks. It's a weird format. We do this for a reason. Um, I think one of the big tensions in classes that have big final projects is whether to do a big final project and a big final exam, uh, which I think is a lot of [LAUGHTER] a lot for students to do both on, um, and otherwise why go to class after the midterm? 
[LAUGHTER] I mean, now why, why, you know, how do we, uh, make sure that there's a reason to learn about the material in the second half of the course which we do think is valuable and particularly is often covering [NOISE] more important recent topics, um, but without doing a really high-stakes large exam. So I, I talked to a number of people in the sort of teaching, uh, the teaching center here called VPTL. Um, and the idea that we came up with is to do a low-stakes quiz, and the idea is that it's fun. Um, and I have heard from multiple people that this is actually true, that's the design of it. The design is it's gonna be a two-part quiz. It's multiple choice. It's all supposed to be about, sort of, high-level conceptual questions. We'll release last year's, so you guys can see an exa, uh, example. And so the idea is you do it in two parts. You first do it individually, that takes around 45 minutes, and then you'll be paired up with random groups and you will, um, have to do one joint quiz. And your grade will be composed both of your individual part and your group part, but you can only do better on your group part. So if your group does worse, then you're gonna just get the same as your individual grade. So the reason that we do this is that, um, for the group, then it's a scratch off exam. So you scratch off answers as you de- decide on them as a group, and the point is to- that you should be able to articulate why you believe some of these answers are true or false and convince your classmates, and in doing so, that can be a really useful way to think about really knowing the material well, um, and also hearing the perspectives of other people. So that's how the last quiz goes. Um, [NOISE] again last year everybody- there was some concern about it before on Piazza, people were concerned. There are certain game theoretic aspects that can come up. Um, we carefully design this so that it's a very small part of your grade. Um, again you could only do better with the group than you can on your individual, so it's carefully constructed. And empirically when we did this, there was lots of laughter and lots of people seem to really enjoy this aspect, and we've thought about whether we'll do it in multiple parts of the class. But it's different, almost nobody has ever done an exam like this. So does anybody have any questions about that? It's about 5% of your grade. Yeah. Uh, I remember you saying something earlier on in the course [NOISE] that you guys have already decided the teams or will have decided the teams, is that true? Yeah. So we haven't already decided that the question, um, was how are the teams assigned and have we already decided them, we'll do that by random assignment. It'll depend also on which SCPD students are taking the exam on campus or not, uh, but we'll release this a few days before, um, and you'll be randomly allocated to a team and then you'll just sit with that team for that part of the exam. [NOISE] Anybody have any other questions about that? And we'll- we'll release the, the sample one. It will cover all of the course. So it'll be more heavily weighted to stuff that's happened since the midterm, but anything from the whole course will be game, and the idea is that someone who has been attending lectures, um, or, or watching lectures online, um, should have to study for, you know, on the order of a few hours and then be pretty well prepared for the exam. You'll also be able to bring, uh, a cheat sheet, just like what you did for the midterm. 
[NOISE] Any other questions about that? Yeah. And I feel like you probably mentioned this, but I'm just missing which part of the quiz is individual versus with other people? Do we sit down individually and then after like 20 minutes go be with other people? Yes. So, um, that was a good one. So how does- what is this individual group thing? So the idea is that you come in, everybody gets an exam, you work on that exam, probably from around 45 minutes. Um, you hand it in when you're done, then when everybody- when that part of the, the class is done, most people finish early, it depends, um, then you as a group get a new exam. And you do exactly the same exam as before, but you just have to jointly agree on the answers. And you scratch it off so we can see how many- it's, uh- how many you have to scratch off until you got the right answer. But essentially, we can just see whether or not you got the answer on the first time or, or it took more than one. Any other questions about the quiz? If you have any concerns about that, just write, um, email us on Piazza. I'll just say briefly there- about what was the one of the concerns that came up last year. Um, the concern was game theory. So you always get the max of, uh, your score versus the, the group score. So what people said is well what you should do is you should answer the best you can on your individual part because that's worth the most credits, and then on the group part, you should- if you were torn between two, you should get people to agree on the second answer to hedge your bets. Um, [NOISE] you can do that if you want to [LAUGHTER] or try to, your or your group members can outweigh you. Again, the group part last year was about 0.5%, so it's very small. Um, so there, there is that possibility- you would only want to do that if you were genuinely really torn between two options, um, and again, there's only one right answer, so I, I think that we've observed in practice, there was very little need to do game theoretic analysis of this. [NOISE] But, no. Again, it's- we're always welcome to hear how people interpret these things. All right. Any other questions about the quiz or logistics right now? We also- most of you who have already turned in your, uh, m- milestone for the project, we'll be giving feedback on those over the next few days for those of you that are not doing the default. Okay. All right. So I put this up before. This is for everybody that didn't see it before, the grade distribution is basically identical. So today, we're gonna do the last part on fast learning. Um, this is, uh, a really big topic, there's tons of work on it. Um, well- we'll spend some more time on it today, and then on Monday, Chelsea Finn, who's, uh, just finished her P- PhD at Berkeley and she'll be joining the faculty here in the summer, um, will come and talk about meta-learning, which is also a really exciting area, and she'll be talking about meta-learning for reinforcement learning, where meta-learning is relevant to, sort of, multitask or transfer learning tasks. [NOISE] So just to go refresh our minds about what we're talking about in terms of this fast learning, we're thinking about cases where data matters. So things like healthcare, and education, and customers. Um, I was getting an invite to talk at Pinterest on Monday, they definitely care about these types of ideas as well. 
Um, and we've been talking about two different settings: Bandits and Markov decision processes, as well as frameworks for formally understanding whether an algorithm is good or whether it is fast in terms of the amount of data it needs. And I'll note there that we haven't talked much about computational complexity for this part of the course, but there are similar, um, some of these frameworks can even easily be extended to talk about polynomial sample complexity. So often, you can extend these frameworks to also account for computational complexity requirements. Okay. So let's continue with Markov decision processes. What we started seeing last time is that we built up sort of this expertise on bandits so far, of thinking of a couple of the main ways we evaluate whether or not a bandit algorithm is good and approaches to try to achieve that. So we talked about mathematical regret, which was the difference between how well we could've acted and how well we did act in bandits, and, um, a lot of the work in bandits focuses on regret. We also talked about two different types of techniques for trying to achieve low regret, which was optimism under uncertainty, and then also Thompson sampling. So trying to be Bayesian and explicitly represent posterior over what you think might happen when you pull an arm, or take an action, and using that sort of information. [NOISE] Then last time we started talking about Markov decision processes where I argued that very similar ideas are important here, but- but the problem is a lot more challenging in many ways. I- and we were talking a little bit about probably approximately correct. So, in particular, we were talking, uh, a bit about model-based interval estimation, which I mentioned was a probably approximately correct algorithm. And so just to remind ourselves, what did PAC mean? [NOISE]. And some of you have seen this probably in machine learning. So probably [NOISE] approximately correct [NOISE]. Probably approximately correct RL algorithm is one that given an input, epsilon and delta. So epsilon is gonna specify sort of how close to optimal we want to be, and delta's gonna specify with what probability we're gonna want this to occur. Um, with input epsilon and delta on all but N steps [NOISE]. Our algorithm will select an action where there, the Q value of that action, the true optimal Q value is greater than or equal to, I'll write it down as V, the best possible you could have for that state, minus epsilon [NOISE] with probability at least 1 - delta, and throughout today I'm going to be a little bit loose about constants, sometimes this will be 1 - 2 delta, sometimes there might be a little constant in front of here, sometimes there might be a little constant in front of there. I'll put one here just so you can keep that in mind. There might be small constants there. Those are just, there might be two or four. Um, but the important thing is that, that you're close- very close to optimal, except for maybe a constant factor away, um, where N is a polynomial function of S, the size of your state space, the size of your action space, gamma, um, epsilon, delta. Yeah. 1 over epsilon [inaudible] 1 over epsilon, yes, great. The question was, um, are these going to depend on epsilon or delta or 1 over epsilon or 1 over delta? Yes, in- inside of all the expressions they'll end up being 1 over epsilon and 1 over delta. So you could equally write this as this. [NOISE] Because essentially N is going to be larger, if you want to be more accurate.
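Written out compactly, the PAC property being described is roughly the following (constants suppressed, as noted above; the 1/(1-gamma) dependence is the form such polynomial bounds usually take):

```latex
% On all but N time steps, the action a_t selected by the algorithm satisfies
Q^{*}(s_t, a_t) \;\ge\; V^{*}(s_t) - \epsilon
\quad \text{with probability at least } 1 - \delta,
\qquad
N = \mathrm{poly}\!\left(|S|,\, |A|,\, \tfrac{1}{1-\gamma},\, \tfrac{1}{\epsilon},\, \tfrac{1}{\delta}\right).
```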
So that's going to, um, as epsilon gets smaller, you're going to need more data to be more accurate. And if you want to be more sure that you're going to be accurate, you're going to also scale up with that delta. Okay, and I just want to, before we kind of continue further, I'd like to briefly contrast this with regret because in the bandit setting we mostly think about regret. But it's nice to think about what the difference is between PAC and regret, particularly in online learning. Meaning like our algorithm's learning online in a MDP and it's learning forever. Um, which is what regret is telling you. So regret is saying [NOISE], Is that large enough in the back, can you see regret? Okay. So what regret is saying is let's say we start off at a state S_0. Regret is saying, what if you did the optimal thing from then onwards. Like, how great would your life have been. So if you had won that, you know, first coloring contest, and that set you up for Harvard, and set you up to the Supreme Court, like it's fabulous. But if instead you didn't enter the, that, um, coloring contest, and you got down here instead. So you could have had like a +10 there, but instead you had a 0. And you went to a different state, which, all right, you went to a different state in terms of MDP, where now you're not the person that won the coloring contest. And so then, you know, [NOISE] your life trajectory was irreversibly ruined. Um, in this case, [LAUGHTER] you are judged with respect to not just the, the actions but the state distribution you could've got to under the optimal policy. So you're always being judged with, what could I have reached if from the very beginning I always made optimal decisions? I always went into the coloring contest. I always went to Harvard. You know, I never went to that Stanford place and, and you got up to the Supreme Court versus, um, as soon as you make a different decision you might end up in a different state distribution. But you're going to look at these gaps. Okay. So you're going to be judged by the state distribution you ended up in and the rewards you got there versus the state distribution you'd get in under the optimal policy and rewards you get there. So regret in some ways is a pretty harsh criterion. Because it's saying like you always have to be judged for, like, if you'd made optimal decisions forever. PAC is much more reasonable in certain ways. PAC says, "I'm judging you under the state distribution you get to under your algorithm." So because it says, it will take an action that's close to optimal for the state that you're in. So what does PAC say? PAC says, okay you started off here. You didn't enter the coloring contest. You went to there. Okay, that's too bad. Um, given that you could've then, you know, I don't know, entered the next coloring contest, or you didn't. And I'm going to be judged by that local gap. I'm going to always only be judged by how optimal am I given the distribution of states I'm getting to under my algorithm. So PAC can give, have much smaller regret. I'm sorry, much smaller, sort of, um, negative, ah, differences compared to regret. Because imagine you have a really harsh MDP, and you have to make the first right move and then you go to some wonderful land. Or else you always toil about in this horrible gridworld.
Um, so in that case regret would compare you to, if you'd actually made the right first choice, whereas PAC would say, okay, maybe you made a bad first choice, but like you're making the best of things for where you're at, and you're kind of near optimal given this bad state-space you've ended up in. So in some ways you can think of PAC as kind of making the most of the circumstances you've got yourself into [LAUGHTER], whereas regret is always judging you from if you'd made good decisions from the beginning. I saw a question back there. Yeah. And everyone please remind me of your names. I know, I'm trying hard, but I sometimes forget. Yeah. Episodic MDP's? Great question. The question is does this extend to episodic MDPs? Um, so in episodic MDPs just to recall, um, those are MDPs where we act for h steps or, or a finite number of steps and then we reset. In episodic MDPs, regret and PAC are closer because normally, the PAC guarantees we get in that case, I'm not going to talk too much about those today but, um, are going to be with respect to the starting state. So you're going to look at, like V star of S0 versus Q star of S0, of like the actions you're taking, or the policy you're following. So in those cases they start to be closer, because you're always being judged from the starting state and you can reset. But in online, like continual learning, um, for reinforcement learning they can be quite different. Because the state distributions could be so different. Yeah. Could you just explain more about where C1 and C2 come from because I don't see them as any like given parameters. Oh, yes. I just put up, question about C1 and C2. I just, I'm going to be very loose with constants today. Most of these types of regret guarantees are all about orders of magnitude. So it's stuff like is N a function of S to the 6 or is it a function of S to the 4. And we generally don't worry about constants too much. So I just put these in there to say, some of the different theoretical bounds will have different constants there. But for today we're just going to kind of ignore those but just so that you know that there might be constants there. This might be 1 - 2 delta, for example, instead of 1 - delta. [NOISE] Okay, right so that's one of the differences between regret and PAC. So let's go back to this algorithm now. Um, and I'll highlight, so we'll talk a little bit about generalization later today. But I, I wanted to go through sort of one of, how do we start to think about whether an algorithm is PAC or not. I told you that this algorithm is PAC. But I wanted to talk some about why it's PAC, and what it means for an algorithm to be PAC, and are there general sorts of templates that we can use to show an algorithm is PAC. All the stuff I'm going to talk about right now, involves tabular settings where we can write down, um, the value function as a table. And later we'll talk some about how these ideas extend. We're particularly picking this algorithm which is what's known as a reward bonus algorithm. So we have this nice little reward bonus here. Because it's going to be easier to extend to the model-free case. Now one thing I just want to highlight when we look at the MBIE-EB is that, if we go back and refresh our memories about this, what we are doing is we are computing the maximum likelihood estimate, otherwise known as just adding up the counts and dividing, of the empirical estimate of the transition model and the reward model for every state-action pair.
So we look at how many times we've been in a state-action pair, which next states we transition to, and we use that to construct an empirical model. And we do the same for the reward structure. And then we want to figure out how to act, we take those empirical models, and you can think of this, this operator here as if we're slightly changing our reward model. So I put it here as the empirical reward plus this bonus. But you can alternatively think of this as like an R hat prime which is equal to R hat of SA plus this bonus term. [NOISE] So you can think of this as like kind of defining a new MDP. There's a new MDP where the transition model is T hat and the reward model is R hat prime. Which is the empirical reward plus this bonus term. And it's- it's not a real MDP, but, but that's an MDP we could solve and try to compute the optimal value for and that's what we're doing here is we construct this sort of optimistic MDP, where we're using the empirical transition model. And then we use a reward model that has really large bonuses in places we haven't visited very much. [NOISE] All right. And- and a key thing we're gonna see shortly is the critical thing is how optimistic to be. Um, and there's been tons of work on- on trying to make things more or less optimistic. And if we have time, I'll show you some other slides about some recent progress in this field. Okay. So we talked before about this MBIE-EB PAC. And then now, let's talk a little bit about sort of what are the sufficient conditions to make something PAC, and then, how does MBIE-EB satisfy those form of conditions. So the conditions that I'm gonna talk about are derived basically from this paper, um, with slight modifications that paper does not- I'll just write down here, does not analyze [NOISE] MBIE-EB. So things would have to be a little bit different. But from a 30,000 foot perspective, this is basically, a reasonable way to think about why MBIE is a- MBIE-EB is a PAC algorithm. [NOISE] Okay. So let's unplug this or yeah, oops, I'll put these up on the board, because I think it's helpful to kind of see all of it at once. Okay. So what is a sufficient set of conditions to make something PAC? [NOISE] I know that I found this paper super helpful when I was starting to do PAC proofs in my PhD. So- so what's a sufficient [NOISE] conditions for PAC? And the theory is beautiful. But even for those of you that aren't interested in theory, I think looking at this is helpful because it gives one an intuition about what types of properties do your algorithms need to have in order to be efficient or wha- what types of properties are sufficient for your algorithm to be efficient? Okay. So the first one is optimism. Again, this is not the only set of conditions that are sufficient- that are sufficient for something to be PAC but here's a set. So here's optimism. Optimism. Okay and optimism simply says that, the computed value you use. Okay. So this is for this is s_t, this is a_t. So this is the actual value we compute like from MBIE-EB that optimistic value we compute. So this is the computed value of your algorithm [NOISE]. Has to be greater than or equal to the true optimal value for that state-action pair [NOISE] minus epsilon on all time steps [NOISE]. Okay. 
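As a concrete illustration of this construction, here is a minimal sketch of the MBIE-EB computation: build the empirical model from counts, add the beta / sqrt(n(s,a)) bonus to the empirical reward, and run value iteration on that optimistic MDP. The array layout and the handling of never-visited pairs are assumptions made for the sketch, not the slide notation.

```python
import numpy as np

def mbie_eb_q(counts, reward_sum, trans_counts, beta, gamma, n_iters=500):
    """Value iteration on the optimistic MDP used by MBIE-EB.

    counts[s, a]            -- visit count n(s, a)
    reward_sum[s, a]        -- sum of rewards observed at (s, a)
    trans_counts[s, a, s2]  -- number of observed transitions (s, a) -> s2
    """
    n_s, n_a = counts.shape
    n = np.maximum(counts, 1)                      # avoid division by zero for unvisited pairs
    r_hat = reward_sum / n                         # empirical reward model R_hat(s, a)
    t_hat = trans_counts / n[:, :, None]           # empirical transition model T_hat(s'|s, a)
    r_tilde = r_hat + beta / np.sqrt(n)            # optimistic reward: R_hat + beta / sqrt(n(s, a))

    q = np.full((n_s, n_a), 1.0 / (1.0 - gamma))   # optimistic initialization, Q_0 = 1 / (1 - gamma)
    for _ in range(n_iters):
        v = q.max(axis=1)                          # V_tilde(s) = max_a Q_tilde(s, a)
        q = r_tilde + gamma * t_hat @ v            # Bellman backup on the optimistic MDP
        q[counts == 0] = 1.0 / (1.0 - gamma)       # keep never-visited pairs fully optimistic
    return q                                       # act greedily with respect to this optimistic Q
```

How large to make beta is exactly the question the PAC analysis that follows is meant to answer.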
So it says that, whenever we are doing the MBIE-EB calculation, when we've taken our empirical models and we add in that reward bonus, we have to pick a reward bonus so that whatever we compute for the resulting state-action pair is optimistic minus some epsilon on all time steps. All of this is only gonna need to hold with high probability but I'll just write it out like this. So this is the first condition. The second condition is a little bit more [NOISE] subtle but I'll say more in a second about the specific cases for this. So the second thing is what's known as accuracy. And I'll write down the accuracy first. So the first thing says, you need to be optimistic on all time steps. The second thing is that, you need to be accurate. Which means the V_t. This is again, what the algorithm computes [NOISE]. Your algorithm is gonna compute this. So it's what MBIE computes using your optimistic model, minus V pi t of a weird MDP. And I'll tell you what that, about that weird MDP in a second. It is not the MDP I just said that is the nice optimistic MDP. It is not the real MDP. It is an MDP that's sort of in the middle of those. And this type of trick in RL comes up a lot where we sort of construct, you've seen probably in several proofs now, where we add and subtract the same term which is sort of halfway in between say, two different Markov decision processes. We're gonna play a similar trick here and we're gonna construct an MDP that is sort of half optimistic and is half like the real MDP. Again, this doesn't exist in the real world we're just gonna use it as a tool for our analysis. So I'll say what this is shortly. [NOISE] Well, there's different ways to define this but, um, it says that, something that's gonna be closely related to both your optimistic [NOISE] MDP and the true MDP [NOISE] that your value. So this is pi t is the policy you're actually executing at time step t. The- the value that you compute has to be close to this sort of weird hybrid MDP [NOISE] within epsilon. So it has to be pretty close to this other MDP. And the reason for this is that, this is we're gonna be able to use this to try to bound how far away we can be from the real MDP. So why are we gonna need this? We're gonna need this because optimism would be easy to hold by just setting our values super high and never updating. So that's fine but you need to be able to use the information you have so that eventually you're gonna be acting near-optimally. So if something really is bad, you don't want to be really optimistic forever. And so the accuracy condition is gonna say, if we've got sort of enough information about some state-action pairs, our value for some of those needs to be fairly close to the, to the real value [NOISE]. Okay. And then, the third thing [NOISE] is bounded learning complexity. Okay. [NOISE]. And this has two parts. This says, the total number of updates, total number of Q updates. So in MBIE-EB, we would update our state-action values. And we would rerun that sort of optimistic Q value iteration. The total number of times we do that has to be, is gonna be bounded as is, the number of times [NOISE] we visit an unknown pair, state-action pair and I'll say more what that is in a second [NOISE]. All right. So we're gonna classify all state-action pairs and we're in the tabular settings and this is reasonable for us to do, um, we're going to end up classifying every single state-action pair into being either known or unknown.
And we're gonna say the total number of times we visit an unknown state-action pair both of these have to be bounded [NOISE] by some function. It is a function of epsilon and delta. So it means you can't do an infinite number of queue updates and you can't visit unknown state-action pairs an infinite number of times, like your algorithm can't. These are conditions on your algorithm. Okay. So if you can satisfy all of these, then your algorithm is PAC. So if 1 through 3 are satisfied [NOISE] then, it'll be epsilon order epsilon optimal [NOISE] on all but and I'll write this out here just so you can kind of get a sense of what these type of bounds can look like and all but N which is equal to order. This- this bound is sample complexity divided by epsilon times 1 - gamma squared times some log terms. So essentially, this is saying that if you can be optimistic accurate with respect to some weird MDP I haven't told you about and that if your total number of queue updates and the number of times you visit unknown state-action pairs which I often haven't told you about exactly how we define that, if that is bounded then, you're gonna be PAC. You're gonna be near optimal on all time steps except for a number that scales as a function of this which is also generally, a function of the size of the state space and action space. This is also function of say [NOISE] at epsilon of 1 - delta. So this is kind of a template. So if you can show that your algorithm satisfies these properties, then you can show that it's PAC. All right. So how does MBIE-EB satisfy these properties? Well, the first thing we need to show is that MBIE-EB is optimistic. Yeah question. Ah, so the first question was for three, part one and part two, do they both have the same bounds, or are there just two separate bounds with different magnitudes? Good question, he's asking about, do you mean the epsilon there? The total number of Q-updates and the total number of times you visit other states. Uh, good question, um, so for part three, as the total number of Q-updates the number of times you visit state-action pairs, um, they are going to be very closely-related essentially, whenever you visited another state-action pair, then you can do a Q-update. For one and two epsilons- are they the same? Yes, yeah. Good question. So that's what I thought you're asking. In one and two, um, the question was, are epsilon the same? Yes, they're the same. Epsilon is same through 1, 2 and 3. So if you're designing your algorithm, 1 and 2 and 3 all have to be the same. Constants probably don't matter. It can, you know, be 1 minus. In some of these cases you can be- have a constant in front of the epsilon. One just has to be a bit careful. So for here, we'll just put them like that, okay? Same epsilon everywhere. Okay. So let's talk first about why MBIE-EB is optimistic. Um, let's- actually, can we put this up please? I think that'll be better. So I'm gonna just reput up MBIE's, um, bonus term so that you can, uh, see what that looks like. Okay. So I think this is gonna go up in just a second, and then just to remind us what the update was for [inaudible]. So when- what we were doing in MBIE-EB is, we had a state-action pair. We had our empirical reward for that state-action pair, plus our bonus, beta divided by square root of n(s,a). Is that too small in the back, is that okay? It's okay? Okay, great. Um, I see at least one person nodding. So this is our sort of optimistic reward. You can call this, like R tilde. This is our optimistic reward. 
Plus sum over s'. This is our, again, empirical transition model. S' given as a max over a' of Q tilde of s',a'. Okay. So this would be a backup we could do. So it's like a Bellman backup, with our optimistic reward bonus. And just to remind ourselves, beta was still gonna be defined as 1 over 1 - gamma, square root of one half log 2 S, A, M divided by delta. All right. So in this case, what we wanna be able to show, we don't have to think about known or unknown state-action pairs yet. We want to show that this value, when we compute it, is an upper bound to the true Q star, up to epsilon. So we wanna be able to show this first optimism condition. We want to- what we're trying to argue right now is that, that beta is sufficiently large as a bonus, that when we do this procedure we're gonna be optimistic. Okay. So let's step through it here. Okay. So how do we show that? Let's think about a particular state-action pair. So let's think about one state-action pair. So s,a and let's think about that we visited some n(s,a) times which is less than m, okay? So in our algorithm, we are only gonna update our empirical estimates until we have m samples. So [NOISE] we only use the first m samples of s,a to compute R hat and T, okay? After that we're gonna throw away our data. So this is like saying the first, um, m times you visit this particular state-action pair, um, you can use that data to try to compute an empirical model, and use it to compute an empirical model before you have m counts, but after that you're never gonna update. I'll- I'll just put a side note in there which is, um, [NOISE] you might think why [LAUGHTER] why should we do this? And in particular, uh, there's a really lovely description of the whole field of machine learning by Tom Mitchell who's one of the- really, the founders of machine learning where he argues the whole discipline in machine learning. The point is to look at, um, the foundations of how an agent can learn and also that we design algorithms that continue to improve with more data. And this is violating that to some extent, because this is saying that even if you get 10 trillion examples of that state-action pair, which surely would make your empirical model better, we're gonna throw all that data out. [NOISE] Just to give you a sense of why we do that or why this earlier analysis did that, we do that for the high probability bound. The idea is that, um, the high probability bound is gonna work, like what we saw for bandits of making sort of upper confidence bounds and kind of guaranteeing that our estimates, say if the transition model are close to the true values. And those bounds all hold with high probability. And so, who here has seen union bounds in different things? Okay. A few people, but most people have not. So union bounds are a way to make sure that if you have a number of different events, all of which hold with high probability, that the total of those events all hold with high probability. And that is essentially why here we only use a finite amount of data. Intellectually, this is completely unsatisfying [LAUGHTER] um, because you should clearly be able to use more data and your algorithm should do better with using more data, and empirically, we use all the data. Um, one thing that I find really satisfying for last few years is that, uh, with my student and Tor Lattimore who's over- who's one of the authors of the bandit book that we recommend, um, we showed that you can remove this restriction. 
You could just continue to use data forever, by using smarter things than union bounds. But regardless, for today, we're gonna- we'll do this. So we're gonna say- we're only gonna, um, use up to m samples. Now, let's think about cases where n(s,a) is less than or equal to m. Okay. So we have up to m samples, but in general, you know, it may be one, it might be two. Some number that is smaller than m, which is some constant that we have not specified yet, okay? So what we're gonna look at is for this state-action pair. We're gonna look at all of the experiences for that state-action pair. So let's call X_i, to be defined to be r_i + gamma, V star of s_i. This should look quite like what- these were the targets that we had in TD learning. Um, this is saying that the reward we got on the ith time, we sampled s and a, and the next state we got to on the ith time we sampled s and a. So this is from ith visit to s,a. This is the next state. Next state [NOISE] Okay. So we can define this. So we can think of each of these are gonna have an expectation of the true Q star of s,a, okay? Because I've just defined this. We don't have to know what V star is right now. We're just analyzing what would happen with these samples. So if we define our samples to be r, the real reward we saw, the real next state we saw. On average, this is really just Q star of s,a. All right. So if this is Q star of s,a, we can think about how many samples do we need until we have a good approximation of Q star s,a, or how far away can our average be, the real empirical average of the X_i's, versus Q star [NOISE] And we can do this using Hoeffding or other sorts of deviation bounds. A little bit like what we saw for our bandits. So for bandits, we looked at if you have a distribution over rewards, if you have a finite number of samples of that, how far can it be away from the true mean reward? And similarly, here we're gonna say if you have a finite number of samples of the next state and the reward received, how far away can we be from the true Q star in this case? Right. So there's some technical details here because one has to be a little bit careful about, um, the fact that the data that we gather depends on the history. So this is, again, one of the ch- more challenging things in, um, Markov decision processes in this sense, or particularly Markov decision processes in that the data you gathered depends on your algorithm. So you're gonna get more samples for state-action pairs that you think are gonna be good, and less samples for state-action pairs that you think are gonna be bad. So there's coupling. The data isn't really IID, um, across the whole distribution, but once we condition on the fact that we're sampling for that state-action pair, the next state and reward are IID because it's Markov. [NOISE] So just to give some of- that's just to say we have to be a little bit careful here, but we can basically do the pro- use things like Hoeffding to say the probability that, Q star of s,a, - 1 over n(s,a), sum over i = 1 to n(s,a) of X_i, where X_i is just what we defined up there. The probability that's greater than or equal to beta over square root of n(s,a), is gonna be less than or equal to the exponential of minus 2 beta squared times 1 minus gamma squared, okay? This is using like Hoeffding or like a similar type of deviation inequality. You can also use ones that depend on martingales, for those of you that have seen some of those before.
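Collecting the spoken pieces, the concentration step is roughly the following, where s_i' is the next state reached on the i-th visit to (s,a) and rewards are assumed to lie in [0, 1], so each X_i lies in [0, 1/(1-gamma)]:

```latex
X_i = r_i + \gamma V^{*}(s_i'),
\qquad
\Pr\!\left[\, Q^{*}(s,a) \;-\; \frac{1}{n(s,a)} \sum_{i=1}^{n(s,a)} X_i \;\ge\; \frac{\beta}{\sqrt{n(s,a)}} \,\right]
\;\le\; \exp\!\big( -2\,\beta^{2}\,(1-\gamma)^{2} \big),
\qquad
\beta = \frac{1}{1-\gamma}\sqrt{\tfrac{1}{2}\,\log\tfrac{2|S||A|\,m}{\delta}} .
```

Plugging this beta into the right-hand side gives exactly delta / (2|S||A|m), which is the quantity the union bound is applied to next.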
Regardless, this basically just allows us to say, as you get more and more samples for this particular state-action pair, how far away could you be from optimal if you did know V star? [NOISE] Now we know what beta is. I put it up there. So if we plug beta into here. So if you plug beta in [NOISE] the real value for beta in, you get that this is gonna be equal to delta divided by 2 size of the state space, size of the action space, m, okay? So it just says that this holds with high probability. That the number- as you have more samples, um, the probability that you're gonna be far away from the true Q star is small, okay? All right. So we can put that, like, substitute this result back in, and we can say what that therefore means is that, if we look at the union bound across all of this, um, well here, let me just write down one more thing which is what does this X_i actually look like. So X_i, if you say 1 over n(s,a) of sum over i = 1 to n(s,a) of X_i. What is that? That's actually just the equation that we had up there. Okay. So it's very similar to- it's your empirical reward, plus gamma times T Hat s' s,a. Almost that equation except for you've got V star here, okay? So this should look really similar to that Q tilde up there. We're using the empirical rewards here in the emp- empirical transition. The difference is that here we're using Q star and up there we're using Q tilde. All right. So what this means is that if you have a number of samples, then you can bound the difference between the thing that we're doing up the- the- this thing and the Q_star. So let's do R_hat of s, a plus gamma sum over s_prime transition, the empirical transition model, s, a, V_star of s prime, minus Q_star s, a is greater than or equal to minus beta divided by square root n(s, a). Okay? And this is gonna hold for all T, s, and a. All right. So we've used Hoeffding and now we can relate, um, the empirical reward, the empirical transition model, and if someone gave us the optimal Q to what Q_star is. And then, now, what we wanna do is compare this to what we're- that equation up here. So I'll just- all right, this is one. So that equation one up there is what we're actually doing in MBIE-EB. We keep doing that over and over again until it converges. So we take our empirical transition model, we take our empirical reward model, we add in this bonus, we do value iteration until we converge. And what we would like to do now is to compare what happens to that quantity, versus what is Q_star. And we want to show that that quantity up there is going to be greater than or equal to Q_star, okay? So we're gonna do that by induction. I think I'm gonna- [NOISE]. Okay. So the proof is by induction. So what we're gonna do is we're gonna let Q_tilde_i of s, a be the i-th iteration of value iteration. So this is using that equation 1. So equation 1 up there. And we're gonna let V_tilde_i, for a value s, just be, if we were to take the max action of that Q_tilde. Okay? For every state and action pair. And what we're going to assume is that we initialize optimistically, and we're gonna initialize Q_tilde of 0 for s, a equal to 1 over 1 minus gamma, which by definition is greater than Q_star, is at least as good as Q_star. So that's our base case. And again, what is the- what are we trying to do? We're trying to show optimism here for MBIE-EB. We're trying to say that if we do this procedure with MBIE-EB, we'll be optimistic. We're going to do a proof by induction and this is the base case.
So we start off, we initialize our Q_tilde optimistically, and now we're gonna, um, assume that this holds. So we're gonna assume Q_tilde_i of s, a is gonna- we're gonna assume that Q_tilde_i of s, a is greater than equal to Q_star, of s, a for the previous time-step. Okay? So we're going to- All right. So let's write out what Q_i + 1 is going to be. Q_tilde_i + 1 is going to be equal to for s, a, is going to be equal to our empirical reward, plus gamma sum over s_prime, or empirical transition model, times our V_tilde of i, of s_prime, + beta over square root n(s, a). Okay. That's just the same as equation 1. So now we're going to say that this, by definition, is going to be greater than or equal to R_hat of s, a, + gamma sum over s_prime, our empirical transition model, times the true V_star. Because that's our induction- inductive hypothesis. We assume that this held on the previous time-point. Okay. + beta, divided by square root n(s, a) this is by my inductive hypothesis. We assume that we knew, we have a base case where this holds. We're going to assume that, that held on the previous iteration, and then the last part that we need in here is that if we, the- this part, we look at this part versus this part. Okay? So if you rearrange this equation, then you can see that R_hat of s, a, plus all this stuff, is greater than or equal to Q_star(s, a) minus beta, square root n(s, a). Okay? So that means that if we substitute that back in, back in over here, we can say this is equal to Q_star(s, a), minus beta over square n(s, a), plus beta, square root n(s, a). Should go to Q_star. Okay. So now we've shown optimism. So the key idea in that case was to say, we know that we're getting to- that, we're going to relate this to what would happen if we had the true Q_star, we showed that if we know the true Q_star on the next timestep, then doing this one step backup, um, we can bound how far away we'd be from Q_star in terms of our function beta, and then we can do an inductive, an inductive proof to show that if we were optimistic on the previous timestep, we can always ensure that hel- held at the beginning, because we used optimistic initialization. Then we'll continue to be optimistic for all the, for the resulting Q- Q_hat, Q_star. So this proves optimism. Anybody who may have questions about that proof? Okay. So that's the proof of optimism. The other key part, and I won't go through the other part in quite as much detail, but I'll, I'll talk about it briefly at a high level. The other really important part is accuracy. I mean, bounded, I'll keep this up in case anyone's still writing. Accuracy is really important, um, and the fact that you will eventually become accurate is important. So the- the intuition for this part is that, um, you can think of defining a couple different, well, you can think of defining things as being known or unknown. Okay. Somebody want me to keep this up or is everyone finished writing? Raise your hand if you'd like it up. Okay. Okay. So in many, er, PAC proofs for finite state-action pairs, there's this notion of knownness. So what does it mean? So known state-action pairs. Intuitively known state-action pairs are gonna be pairs for which we have sufficient data that their estimated models are close to the true model. So the intuition here, intuition (s, a) where R-hat with (s, a) and T-hat of (s, a), s prime given (s, a) are close to true R (s, a) and T (s, a). 
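Before going further into accuracy, here is the optimism induction from above written out compactly; this is just a restatement of the argument on the board.

```latex
\textbf{Base case: } \tilde{Q}_0(s,a) = \tfrac{1}{1-\gamma} \ge Q^*(s,a) \text{ for all } (s,a).

\textbf{Inductive step: } \text{assume } \tilde{Q}_i(s,a) \ge Q^*(s,a) \text{ for all } (s,a),
\text{ so } \tilde{V}_i(s') \ge V^*(s'). \text{ Then}
\begin{align*}
\tilde{Q}_{i+1}(s,a)
  &= \hat{R}(s,a) + \gamma \sum_{s'} \hat{T}(s' \mid s,a)\,\tilde{V}_i(s') + \frac{\beta}{\sqrt{n(s,a)}} \\
  &\ge \hat{R}(s,a) + \gamma \sum_{s'} \hat{T}(s' \mid s,a)\,V^*(s') + \frac{\beta}{\sqrt{n(s,a)}} \\
  &\ge \Big( Q^*(s,a) - \frac{\beta}{\sqrt{n(s,a)}} \Big) + \frac{\beta}{\sqrt{n(s,a)}} = Q^*(s,a),
\end{align*}
\text{where the last inequality is the Hoeffding-based bound derived earlier.}
```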
[NOISE] So intuitively, if you get more and more data for a particular state-action pair, we know from Hoeffding etc., that your estimated mean is gonna converge to the true mean, and your transition model is also gonna converge to the true tradition model. And what we're doing here is we're sort of drawing a line in the sand and we're saying, "When are things- when do we have enough data for a state-action pair that we are satisfied with our empirical estimates?" Where we can bound how close our empirical estimates are to the true models. And if things are close enough, then we know that using those allow us to compute a near-optimal policy. [NOISE] So if everything is known, if all (s, a) pairs are known, then we can show that the V_Pi star under your sort of empirical model. I'll denote that as hat. So this is like your empirical model - V_Pi star, and I'll put this under your empirical model for your- of your true model, um, that this is bounded. [NOISE] I kinda go through all the details so that you saw a little bit of this on the midterm. This is often known as the simulation lemma. If things are close, like if your models are close, your transition model is close to the true transition model and your reward models are close to the true reward model, then if you use those to compute a value function, your value functions are also close, which is a really cool idea. It's basically saying, "You can propagate the errors in your empirical model into the errors that you get in your value functions, and the errors you get in your policies." So you can sort of propagate error. You can go propagate, [NOISE] propagate empirical predictive error [NOISE] to control error. [NOISE] Bless you. [NOISE] Bless you. So this is- this is really nice because you can say like, if you have good predictive error, then you can end up with small control error. And the known state-action pairs are just providing a way to sort of quantify whether- you know, what level do you need in order to be good enough, and the good enough you need is gonna depend on what that epsilon is you want. So one really important idea is to think about these known state-action pairs, um, and what they allow us to do in terms of defining some alternative MDPs. So, in this case, I will erase this. [NOISE] Just how you- so you know how to alter some of the parts of the proof, um, we'll go forward. I won't go through all of it in detail, but I'm very happy to talk about it offline. Um, [NOISE] so we can define two other sorts of MDPs. We can ide- call an MDP M prime, which is equal- which is a MDP, where for this (s,a) pair, it's R and T. So its transition and its reward dynamic- its dynamics and its reward model are equal to the true MDP on (s, a) in known set, else, it's equal to the M tilde model. So these are where we sort of start to define these slightly weird MDPs. So this is a model that is not quite our M tilde model that we're using to do in that equation one, which we have these kind of like rewards plus bonuses plus our empirical transition model. And it's not quite the real model. It's saying, on things that are known, we're gonna use the real-world model. Again, we don't know what any of these are, these are just tools for analysis. And then on state-action pairs which are unknown, we're gonna use this optimistic model. It defines an MDP. Um, uh, and the reason that this is useful as we can end up using it to help figure out how does the MDP that we have relate to the real MDP on state-action pairs that are known? Okay? 
This is what MDP we end up using for this, and then the other one is a similar one, which is M-hat prime, which is an MDP equal to M-hat. So this is just the, uh, uh, empirical- empirical estimates on K and equal to M tilde on all others. [NOISE] So on the known set. Okay? So this is a MDP, where for the known set, we use our empirical estimates, not the true- true est- not the true models, and then we use M tilde on all of these other ones. So the idea is that we can use these different forms of MDPs and we qua- can quantify how far off the value is that we get by computing, uh, using our- our Q tilde versus the value on these other ones, and we can use it to basically do a series of inequalities to relate the value we get by executing our policy versus the value we get, um, in the true world, and to the optimal policy. So these are the types of tools that allow us to help prove accuracy on state-action pairs that we know about. Yeah. So is that M-hat and an M tilde? Yes. Whether. It's M-hat. So M-hat here is the empirical estimates like the- um, uh, if you use just, um, [NOISE] the counts of the rewards and the count of transition model. M tilde is the empirical estimates plus the bonus. So the transition model in these two cases would be identical, but thi- this one would have the reward bonus there. Okay. So intuitively, this, uh, allows us to help quantify the accuracy. Um, [NOISE] the final thing that I guess I just want to mention briefly, uh, of how these types of proofs tends to work is that we need this bounded learning complexity. We need to make sure that we're not gonna continuously update our q function and we're not gonna continuously run into unknown state-action pairs. So the last part is to sort of, you know, how would we prove 3, which is bounded learning complexity. [NOISE] And the intuition for this is a little bit like the general intuition for optimism. So the intuition for optimism was to say if you assume the world is awesome, either the world is awesome, in which case you have low regret. In the case of PAC, that means, if you assume the world is awesome and it really is awesome, that's like not making a mistake. That's like picking the really- the- the action that you pick is good. So then you won't suffer, um, uh, a worse than epsilon decision. And then we want to be able to say that the times where you don't make mistakes or where you do make mistakes, where you pick an action that's bad, which is less than epsilon close is bounded, and that's what this is about. And the key idea here is the pigeonhole principle. So the idea is that if you think about- you don't have to have episodic MDPs, but if you ha- um, you can think of dividing your stream of experience into episodes. And during each, um, sort of episode, you can think about whether it's, uh, likely that you're gonna run into an unknown state-action pair. So you consider what's the probability that we reach an unknown state-action pair? And remember, an unknown state-action pair here is one that we don't have good models of like we've only visited it once or something. So what's the probability we're gonna reach an unknown state-action pair in T steps. [NOISE] Okay. So what we can show in this case is that if this is low, this is small, if small, that probability is small, we're being near accurate, near- we're being near optimal. So if- if it's a really small probability that you're gonna reach anything that's unknown, we can show that on the known state-action pairs you're being near optimal. 
So if you're unlikely to reach anything that where you have bad models of it, you're gonna be near optimal. If it's large, [NOISE] this can't happen too many times. So if it's large you're gonna visit it, it can't happen too many times. [NOISE]. Because if this is large, that means that you're really likely to reach an unknown state-action pair. Remember, I said that for every state-action pair, you only update it at most m times. So by the Pigeonhole principle, this probability cannot stay high for too long. Essentially, you're gonna have a function of like the number of states, the number of actions, and m. It's larger than that. But this is saying that you need to be able to visit each state-action pair m times. So that, that goes into our end bound here of the times we're gonna make mistakes. And we, we're- we might sort of reach things that are unknown. Um, it might take us more steps and we might make some bad decisions along the way. But essentially, um, things can only be unknown for m steps for each state-action pair, which means that our probability here has to be bounded. So eventually, everything has to be known or you have to be acting near optimally. And that allows us to show that things have, uh, that things are PAC. It has bounded. We're gonna make a bounded number of bad decisions. So either, we're not going to be reaching any part of the state-action space which is unknown. So everything we reach, we have good models of them and we're using them to make good decisions, or we are reaching things, and then in that case, we're getting information because we're getting a new observation of what it's like to be in that state-action pair and we get only, something can only continue to be unknown until we get m counts. Do you mind putting down the thing again? Okay. So that gives us an overview of why MBIE-EB is PAC as well as sort of why a lot of, um, the types of proofs that you do to show things are PAC. And I think the, the key idea in this is really the sort of notion of optimism, and accuracy, and the ability to make progress, ability not to be stuck always wandering, by decreasing these confidence intervals sufficiently fast. Okay. So now let's go back to Bayesian-ness. So that kind of concludes the PAC MDP part for a while. There's been a lot of exciting recent work in this area. Um, I guess let me see if I can just really quickly briefly bring this up. Um, this is a form of presentation. One of the TAs is giving up at Berkeley today on some of our joint work together. Just to highlight here, um, in terms of sort of PAC and regret analysis, there's been a lot of progress, uh, on getting better approaches. Um, so over the last few years, I mean, some of my grad students, some other really nice groups who've been doing a lot of work on this, and we're also now trying to have an analysis that is problem dependent, which means that if the algorithm has more structure, then we should need less data in order to learn to make good decisions. Okay, so now let's be Bayesian and see how being Bayesian helps us. So we saw Bayesian Bandits and in Bayesian Bandits we said, we're gonna assume that we have some parametric knowledge about how the rewards are distributed. So we're going to think about there being a posterior distribution of rewards, and we're going to use that to guide exploration. 
And we particularly talked about this notion of probability matching, which is we want to select actions with the probability that they are optimal and it turns out that Thompson Sampling allowed us to do that. So in these, sort of, approaches, it was very helpful if we have conjugate priors, which allowed us to analytically compute the posterior over the rewards of an arm given the data we've observed and our previous prior over the probability of different rewards for that arm. So we saw as one example of that the Bernoulli, which means that the reward is just 0, 1 and the beta distribution, and the beta distribution is one where we can think of the alphas as being counts of the number of times we've seen a +1, and beta is the number of times we've seen the arm being 0 and as we get observations of one or either of those outcomes, then we just update our beta distribution. So that allowed us to define Thompson Sampling for bandits, which was this algorithm where we say at each time step, we first sample a particular reward for each of the different arms, and then we sample a reward distribution we compute the expectation for those reward distributions and we act accordingly. So we saw this for the toe example, where we saw that we would sample different rewards, um, and then use these to act and at least in that example, we were seeing that we happen to exploit much faster than what we were seeing with an upper confidence bound approach. So a very similar thing can be done in the case of MDPs. So now in being Bayesian for Models-Based RL, we're going to have a distribution over MDP models. So what's the difference here? That means we're going to have both transitions and rewards. So we're going to have the rewards should look very similar to bandits. So the rewards, very similar to bandits. You can also use betas. If your reward distribution is 0, 1, you could also use betas and Bernoulli's. So this is very similar to bandits. T is a bit different. We're not gonna talk a lot about the different distributions you can use. But for example if we're in tabular domains, T can be a multinomial, and the conjugate prior for multinomial is a Dirichlet. Its conjugate. So if you want your, your transition model to be a multinomial, which is the probability over all the other states and actions, then a conjugate prior for that is a Dirichlet, which has a very nice intuitive, um, description similar to what beta is. In beta, you could think of this as just being the number of times you've observed 1 or 0. If you look at a Dirichlet, you can think of this as being the number of times you've reached each of the next states, so S1 S2 S3 S4. So the Dirichlet distribution would be parameterized by a vector for one of each of the states. And again, we can use this sort of posterior to update it to allow us to do exploration. So in this case, we're going to sample an MDP from the posterior and then solve it. So if we look at what the algorithm looks like, it's going to look very similar to Thompson Sampling for Bandits, except for now we're going to start off and we have to define, um, a prior over the dynamics and reward model for every single state-action pair. So notice this is tabular. We're assuming that we have a finite set of S and A. So we can write this down. We can write down, we have one distribution for every single state-action pair for both the dynamics and the reward model, and then what happens is that you sample an MDP from these distributions. 
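To make that sampling step concrete, here is a minimal tabular sketch in Python of posterior sampling for an MDP: Dirichlet posteriors over the transition model, Beta posteriors over rewards (assumed Bernoulli), a sampled MDP, and value iteration on that sample. The sizes, priors, and counts are made-up placeholders, and a real implementation would repeat this every episode and fold the newly observed data back into the counts.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.95

# Assumed posteriors kept as counts: Dirichlet over T(.|s,a), Beta over Bernoulli R(s,a).
dirichlet_counts = np.ones((S, A, S))     # prior Dirichlet(1,...,1) plus transition counts
beta_counts = np.ones((S, A, 2))          # [alpha, beta] per (s, a): 1 + successes, 1 + failures

def sample_mdp():
    T = np.array([[rng.dirichlet(dirichlet_counts[s, a]) for a in range(A)]
                  for s in range(S)])
    R = rng.beta(beta_counts[..., 0], beta_counts[..., 1])
    return T, R

def solve(T, R, n_iters=500):
    Q = np.zeros((S, A))
    for _ in range(n_iters):              # plain value iteration on the sampled MDP
        Q = R + gamma * np.einsum('sap,p->sa', T, Q.max(axis=1))
    return Q

# One round of posterior sampling: sample an MDP, solve it, then act greedily with it.
T_sample, R_sample = sample_mdp()
greedy_policy = solve(T_sample, R_sample).argmax(axis=1)
print("greedy policy for the sampled MDP:", greedy_policy)
```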
So for every single state-action pair, you sample the dynamics model and a reward model. And now you have an MDP, and so once you have that MDP, you compute the optimal value for it. So this is obviously more computationally intensive than what we had for bandits, but it's certainly a reasonable thing to do. And then you act optimally with respect to the Q-star you have for that sampled MDP. So this is known as Thompson Sampling for MDPs. It also implements probability matching and empirically, it can often do really well just like Thompson Sampling did really well often for bandits. So I think that's probably all I'll say about Thompson Sampling for MDPs. There's been, uh, a number of different works on this. Just to highlight some people that do some really nice work on this from Stanford. Ben Van Roy, Roy's group has a lot of work on this and sometimes they call it posterior sampling for MDPs. So people like- some of his former students like Dan Russo and Ian Osband. Ian's now at Deepmind. Dan Russo is now at NYU. Uh, they've done some really nice work on these types of spaces. Yes? [inaudible] generalized these like non-tabular MDPs? Yes. Great question. So the question was whether or not we can generalize this to non-tabular MDPs? Yes, and I'll talk about that in a second. But kinda poorly. [LAUGHTER] But yes, that's the goal. All right, anybody have any other questions about, sort of, finite state and action scenarios? Now we are going to talk a little bit about generalization. Okay. All right. So of course, everything I've just been saying right now is for finite state and action spaces, which is not very satisfying because if we think about these types of bounds, um, I said this was polynomial in the size of the state and action space. So what if S is infinite? I mean, it says we can make an infinite number of mistakes and that seems sort of unfortunate. Um, so it's not clear that this initial, uh, framework is is it all helpful when your state space is either infinite or insanely large like the space of pixels. Um, though- even though, the framing of this is really nice, we'd like to be able to take these types of ideas up to generalization, but we'd like to figure out how to, how to use them in a way that can be practical. Now, when we start to- so this is a very active area of current research, there has been a lot of different ideas about this for the last few years. I do want to highlight that on the theory side, we still have a long way to go. Uh, as we talked about some with function approximation, even just function approximation and doing control like off policy control like Q learning, we said that we didn't have good asymptotic guarantees for some of the basic algorithms. Um, so if we don't even have good asymptotic guarantees, it's unlikely that we would have really nice finite. These are often known as finite sample guarantees because they guarantee that the number of mistakes you're gonna make is finite. So we have relatively little theory in this case it's something that my group is working on. There's several other groups that are also working actively on this, but there's a lot still to be done. But there has been some really nice empirical results recently. Okay. So let's think first about generalization and optimism, like optimism under uncertainty where we're now in a really really large state space. So let's think about what might need to be modified if everything was extremely large. If we go back to this algorithm. 
Well first of all, we had this, these sort of counts that we're keeping track of sort of for every single state-action pair, what the counts were. So this isn't going to scale because if you're in a scenario where your state spaces is pixels, you may never see this same set of pixels again. Does anybody have any ideas of how you might extend this to the deep learning setting? Like what we would like in this case is some way to quantify our uncertainty, our uncertainty over sort of states and actions. Um, but we don't wanna do it based on like raw counts because then everything will just be one forever. Yeah? Could you use some form of like bounded VFA? So the suggestion was whether we could do some form of like a, a VFA for example. Um, yes, we could imagine trying to use some form of, uh, some sort of density model or some sort of way to try to either get an estimate of how much related information have we see, or how many related states that we'd seen in this area. [inaudible] Somewhere like being able to [NOISE] [inaudible] some sort of [inaudible]. Yeah. So [inaudible] to use some sort of embedding, absolutely. So another thing you could do here too is to do some sort of- form of compression of the state space. One of the challenges [NOISE] is the right compression in the state space depends on the decisions you want to make. And so generally, it will be non-stationary. But that's not necessarily bad. You might just want to go back and forth between those. So those sort of ideas are great. In general, we want some way to quantify our uncertainty that is going to have to generalize, and say similar nearby states. Um, if we visited that area of the world a lot, then we should have less of a bonus on that in terms of optimism. [NOISE] Okay. So as I said, the sort of counts of s, a and s, a, s' are not going to be useful if we're only going to encounter things once. Um, another thing that I want to highlight is, the methods that I was talking about before were really model-based. Lot of the work there is model-based. In contrast, a lot less of the things that- it's starting to change recently over the last year, but in general there's been much less work on the model-based approaches for deep learning in terms of RL. And I think that's because the model-free and the policy search techniques have generally been much better. In part because the error that you make in your models accumulates a lot as you do planning. And so I always remember David Silvers' first talk about this, or one of his earliest talks about this from, I think, 2014 where he showed this beautiful sort of model-based, um, uh, simulation, which was horrible for planning. So the errors really have to be very good, uh, the- the errors have to be very small, and it's not clear that the representation you get by maximizing for predictive loss is always going to be your best for planning. So we'd really like to be able to take the ideas we saw before for things like MBIE-EB, um, and translate them over to the model-free approach, and think about some way to, uh, to encode uncertainty in the deep neural network setting. So let's think about doing something like Q-learning, uh, like deep Q-learning, and in this case- so I've been a little loose here, you could- this could be a target. So target, could fix this. But think about something like sort of the general Q-learning and Q-target, um, where we use the max of our current function approximation or we use the max of our current target function approximation. 
So one idea for- inspired from MBIE-EB would be simply to include a bonus term. So again, this could be, you know, a fixed target. But the idea here is that when you are updating your parameters, just put in a bonus term for that particular state-action pair when you're doing your Q-learning update or when you're refitting your weights. Of course, we have to know what that Q bonus would be, but this would help with the planning aspect. So now we're in the model-free environment, or model-free setting, and so we could- [NOISE] when we're doing our sort of Q-learning updates, let's insert a bonus term. So that's why I chose MBIE-EB is the algorithm to show you before because I think that those sort of reward bonuses are much easier to extend to the model-free case. Now, of course, the question is what should that reward bonus be? It's got to reflect some sort of uncertainty about the future reward from that state-action pair. And there's a lot of different approaches that have been trying to make progress on this. So Mark Bellemare, um, [NOISE] and some of the follow-up papers thought about sort of having a density model, trying to, er, explicitly estimate a density model over the states or state-action pairs that you've visited. Um, other people have done sort of hash based approaches, which is sort of more similar to the embedding in a way, try to hash your state space, and then use those- use counts over that hash state-space, and then update your hash function over time. So that's some of the work that's come out of Berkeley. So there- there's different, um, different ways to quantify this. Another thing I want to highlight here is that these bonus terms are generally computed at the time of visit. When we looked at MBIE-EB, you could recompute these later. So if you- if you're storing things in your episodic replay buffer, you might want to update those over time, because those state-actions, if you now- later sample that reward, a next state pair from your, uh, replay buffer, you may want to change that bonus term. So there's a number of different subtleties when we try to bring us up to deep learning, but there's been some really encouraging progress. Um, so let me just- just- so in this case, what we can see is the initial work from Mark Bellemare's group, um, where we compare on Montezuma's Revenge, which is considered one of the hardest Atari games, so the progress of a standard DQN agent that was using e-greedy, um, there's a number of different rooms in Montezuma's Revenge. In this case, you can see after 50 million frames, um, it was doing incredibly badly. Don't- it had only been through two of the rooms. Whereas if you use this sort of exploration bonus, this one did much, much better, so just enormously better by being strategic. Okay? So I think that highlights the empirical significance of doing this sort of strategic exploration. Um, let's think briefly about Thompson sampling. So in this case, one of the ideas that we did a few years ago is to say, you could do Thompson sampling both over your representation and the parameters of your model. What do I mean by that? I mean that if you have a really large state space, you can imagine collapsing your states and doing a dynamic state aggregation, and sampling over possible state aggregations as a way to sort of do collapsing your state space. Uh, and- and Thompson sampling can be extended to sampling over representations. So that works well, but it doesn't scale up fully. 
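Going back to the reward-bonus idea for a moment, here is a minimal tabular sketch in Python of inserting a bonus into the Q-learning target, in the spirit of MBIE-EB. The environment interface env_step and all the constants are assumptions, and in the deep-RL setting the raw counts would be replaced by a pseudo-count or density model rather than a table.

```python
import numpy as np

def q_learning_with_bonus(env_step, S, A, beta=0.5, gamma=0.99, alpha=0.1, steps=10_000):
    """Tabular Q-learning where the target gets a bonus of beta / sqrt(n(s, a)).

    env_step(s, a) -> (reward, next_state, done) is an assumed environment interface;
    the start state is assumed to be state 0.
    """
    Q = np.zeros((S, A))
    n = np.zeros((S, A))
    s = 0
    for _ in range(steps):
        a = int(Q[s].argmax())                        # greedy w.r.t. the bonus-inflated Q
        r, s_next, done = env_step(s, a)
        n[s, a] += 1
        bonus = beta / np.sqrt(n[s, a])               # shrinks as (s, a) becomes well visited
        target = r + bonus + gamma * (0.0 if done else Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = 0 if done else s_next
    return Q
```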
When you want to really scale up, if you want to be a model-free, it's a little bit different than what we saw before. Because before we saw model-based approaches, and now we're sort of wanting to sample from a posterior over possible Q stars. And it's not clear how to write that down. I don't think we've made good progress on that even at the tabular setting. Uh, but that's what we're trying to do- people who are trying to do for the deep learning setting. And there's been a couple of different main approaches. Uh, one is again sort of from Ian Osband, who was here for his PhD. They did bootstrap DQN, the idea here is that you can use bootstrapping on your samples, [NOISE] and you can build a number of different agents. So now you kind of have C, Q values instead of just one Q value, and then you can act optimistically with respect to that set. It's not incredibly effective. It gives you some performance gain. Um, another thing that we did, uh, we sort of- we- someone I was working with before, I was involved with one of the earlier versions of this, um, and since then they've been continuing to push it forward. The idea here is that you kind of fix your embedding, you do linear regression on top. And that is super simple, but if you do Bayesian linear regression, you get a notion of uncertainty, and that actually gives a lot of performance gains in a lot of cases. So you can sort of have like a really little bit of uncertainty representation on top of a deep neural network. But this is an active area. There's a lot of people thinking about this different type of work, uh, and I think it's going to continue to be a really big area because we still haven't made sufficient progress, in how to do exploration plus generalization. So just to summarize, I know we've done, um, quite a lot of theory in this section. The things that you should have- should make sure you are familiar with is to understand what is the tension between exploration and exploitation in RL? Why this doesn't arise in on- other types of settings? You should be able to define these different sorts of criteria for what it means to be a good algorithm in terms of PAC or regret, um, and be able to map the sort of algorithms that we've discussed in detail in class to these different forms of criteria. So if I say, you know, is this optimism under uncertainty approach, is that good for PAC or regret or both? You should know that it's good for both. So that's the kind of high level that you should be able to understand from the things we've been doing, and just know that there's a lot of really exciting work that's continuing to go on there, including defining new metrics of performance. And next time, we'll hear about meta-learning. Thanks. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_4_Model_Free_Control.txt | All right. We're going to go ahead and get started. Um, uh, before I get into the technical stuff, we'll do a little bit of logistics. Um, so, we are starting these things called sessions. Um, we announced them on Piazza. If you didn't, uh, if you're not getting our Piazza stuff, definitely make sure you've signed up for Piazza, or send us a note. Um, the sessions are designed to go into the material a little bit deeper, also to discuss something about the homework. Um, these are structured session as opposed to an office hour, where you can ask one-on-one questions about the homework . the sessions are designed to go a little bit deeper into the material, and they were prompted both based on some colleagues of mine, feedback of how much students have liked them in their other classes, as well as some request from last year for opportunities to go deeper into the material. So, we've announced these on Piazza. The idea is that you will sign up for a session. They are optional, you don't have to do them. We will be giving one percent extra credit for attending them if you attend a sufficient number of them. Um, the details for that has also been announced, um, and I, I think that's true. If I've got that right, just email me. I think that's been announced also on Piazza. Um, so, if you go to Piazza, there's a number of different sessions you can sign up for. The point of signing up for them is to make sure that we have room capacity, but I'm pretty sure we'll be able to accommodate almost any session you want to go to. The last session will be done via Zoom, and it's particularly targeted at SCPD students, but anyone is welcome to do it. Um, the way that we'll be keeping track of whether or not people are going to sessions or not is, we will have a code that's, uh, mentioned inside of the, the material, and so, then, you will just write in that code to indicate your attendance. Um, we'll record the last session so that if, for some reason, your schedule is such that you can't attend any of these but you want to participate in session, you can go through the material later and then record that you attended it by using that code. And we'll be relying on the Stanford Honor Code, that only people that are doing this will upload the codes. Somebody had any question about sessions and what those involve? Again, they're optional, they're way to go deeper into the material. Um, some other people have really liked these sort of things. You can see what you think, it's an experiment. All right. Any questions about anything else outside of sessions? So, homework's been released, office hour is happening as usual this week. Feel free to come talk to us or use Piazza for any questions that you have. All right. We're gonna go ahead and get started now then. Um, as usual, I really appreciate it if you use your name whenever you're asking a [NOISE] question or making a comment. So, today, we're gonna finally start to get into making decisions where we don't have a model of the world, and in particular, we are going to be focusing on model-free control. [NOISE] So, the things that we're going be covering today is really focusing on how can an agent start to be making good decisions when it doesn't know how the world works and it's not going to be explicitly constructing a model? 
Um, and remember, a model, in this case, is going to be a reward and/or a dynamics model of the environment. So, today, we're gonna be looking at methods that do not involve constructing a verbal, um, a dynamics or a reward model, but it's just going to be directly learning from experience. [NOISE] So, um, [NOISE] before- we were mostly talking last time about, well, maybe we don't know how the world works, we don't have these explicit dynamics and reward models, but we're going to be trying to evaluate a policy that was provided to us. And now, we're going to be thinking about the real problems that often comes up in reinforcement learning, which is, how should an agent make decisions when they don't know how the world works, and they still want to maximize [NOISE] their expected discounted sum of rewards. So, when we think about sort of how good is the policy, as soon as we have information about how good a policy is, then we can start to think about how do we learn a good policy instead. And, in fact, when we started off at the very start of the class, we'd talked about how you would learn to make good decisions or how you would compute good decisions if you were given a model of the world. And so, that's what we're gonna be going back to now. So, in particular, now, we can think of starting to get at this issue of this optimization and exploration. We're still not going to get into generalization yet. Um, this will be happening soon. Um, we've already seen this a little bit it came up with planning, but now, we're going to start to think about how do we explore and how do we do optimization. So, when we think about- well, I, I think I'm just gonna go through more of these as we start to, to go into this area. So, um, again, we're going to be thinking about how do we identify a policy that has a high expected discounted sum of rewards. There's going to be delayed consequences, which means is, our agent takes actions that may not see the result of whether or not those actions were good or bad for a while, and we're going to start to think about this exploration aspect. Okay. So, let's start with, um, you know, where these types of problems come up and where people model things when we're thinking about Markov decision processes and maybe not building a model. So, I think probably one of the first really big examples of success for doing reinforcement learning and doing it in this sort of model free way was for Backgammon, which was roughly 1994. They trained an agent to play Backgammon, the board game, um, I- actually using the- a neural network. Um, neural networks went sort of out of fashion for probably around 10, 15 years, and then came back, but in the early '90s, people were using neural networks. Um, and, uh, Gerald Tesauro used it for Backgammon and got some very nice results, and that was sort of one of the first demonstrations of reinforcement learning in kind of a larger setting that you could solve these sort of complicated games. Um, many other g- problems can also be modeled in MDP, whether games or robots or customer ad selection or invasive species management. Um, and in many of these cases, we don't know the models in, i- in advance. So, what we're going to be thinking about today is, um, situations in particular, mostly here, where we think about the model being unknown, but if we can sample it. But there are occasionally cases too [NOISE] where you do know the model, but it's [NOISE] really, really expensive. 
So, for something like computational sustainability or climate modeling, you might be able to write down a good model of the world, but it's really, really expensive to run because actually simulating the climate is really, really compete- really, really hard, and even then, your model will probably be about. But I just- I raise this second point in the sense that when we mostly think about sort of learning from the world, we think of a robot, like, running around in the world, and, and that being an expensive thing to do because robots are taking real time to, to do this. But you can also think about agents that are learning to sort of interact with a simulator, where that's also really costly. All right. So, what we're going to be thinking about mostly today is what is known as on-policy learning, where we get direct experience about the world, [NOISE] and then, we try to use it to estimate and evaluate a policy from that experience. But we're also going to stock- start to talk more about off-policy learning, where we get data about the world and we use it to [NOISE] estimate alternative ways of doing things. So, we can kind of combine experience from trying out different things, to try to learn about something we didn't do by itself. And, um, this three thing's really important. So, I'm- all right. The second thing is really important, so I'm just gonna talk about it briefly up here. So, imagine you have a case where, say, there's only a single state for now, but it's like, you have a state S1, you do A1, you stay at S1, then you do A1. Or you're, you're in S1, you do A2, so you're in S1, and then, you do A2. [NOISE] So, you'd like to be able to kind of combine between these experiences so you could learn about doing this. [NOISE] even though you've never done that in the world. You've never experienced that full trajectory, but you'd like to be able to sort of extrapolate from that, that prior experience. So, [NOISE] this sort of policy would be an off pol- [NOISE] uh, an off-policy learning because it's different than the previous policies we've tried. We'll go into that more when we think about Q-learning. All right. So, let's start with generalized policy iteration. Okay. So, if we go back to policy iteration, we talked about that a couple lectures ago, and policy iteration, we originally saw it when we knew the model of the world. So, it's a way for us to compute what was the right thing to do given- right thing meaning the policy that maximizes our expected discounted sum of rewards. So, how do we do this when we know how the world works? We're given our dynamics and reward model. In that case, we initialize some policy, probably randomly. So, initializing, again, would mean that we'd said, pi of S equal to some A for all S, and this is generally probably going to be chosen at random. [NOISE] Um, and then, we did this policy evaluation procedure, where we first computed the current value of the policy, and then, we updated the policy. So, we took whatever we had, and then, we did this sort of one more thing, which you can think of as kinda doing one more Bellman backup, where we said, okay, we're taking that V pi, we're plugging it in over to here, we're using the fact that we know the dynamics model and we know the reward model, and we're computing this one-step updated pi prime. And we talked about the fact that when we do this, we actually get monotonic policy improvement, [NOISE] which is sometimes referred to as a policy improvement theorem. 
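As a reference point for the model-free version coming up, here is a minimal sketch in Python of that one-step, model-based policy improvement; the reward model, dynamics model, and value function below are made-up placeholders.

```python
import numpy as np

def policy_improvement(R, T, V_pi, gamma):
    """One step of policy improvement when the model is known.

    R    : (S, A) reward model
    T    : (S, A, S) dynamics model
    V_pi : (S,) value of the current policy
    Returns pi'(s) = argmax_a [ R(s,a) + gamma * sum_s' T(s'|s,a) V_pi(s') ].
    """
    Q_pi = R + gamma * (T @ V_pi)          # one Bellman backup of V_pi
    return Q_pi.argmax(axis=1)

# Tiny made-up example: 2 states, 2 actions.
R = np.array([[0.0, 1.0], [0.5, 0.0]])
T = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.5, 0.5], [1.0, 0.0]]])
print(policy_improvement(R, T, V_pi=np.array([0.2, 1.0]), gamma=0.9))
```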
So, this procedure, when we're doing it with sort of, um, in this dy, uh, in this case where we knew the dynamics model and we knew the reward model, um, would guarantee to always give us a policy that was at least as good as the previous policy or better. Um, and, eventually, it was guaranteed to converge at least in the case where we have finite states and actions because there's only a finite number of policies. So, in this case, there were only A, to the S possible policies. So, we only need to do this whole procedure, at most A to the S times. Each time, we're either picking a better policy, or we stayed in the same. And once you find the same policy, you're, you're done. So, now, we want to do all of this, um, but we don't have access to the dynamics or reward model. So, does anybody have any ideas of how we might be able to do the same thing now that we don't know the dynamics or reward? We can maintain another- [NOISE] a matrix transition probabilities, uh, that you calculate [NOISE] it as you experience the world. Yeah. The better suggestion is, well, what if we try to, uh, if I interpret correctly, wha- what if we try to basically estimate the dynamics model and a reward model from the world, and then, we could use this to- you could still compute your, your value function maybe using some of the methods we saw last time, um, and then, you could do this update as policy improvement using your estimated dynamics and reward model of the world. That's a completely reasonable thing to do, um, and may have any other idea of what we could do. Yeah, name and- Uh, I think instead of having to, uh, uh, a compute a model, can we do away with model and directly try to estimate what is the value of a particular state or state-action pair? Doing away with models So estimate the value of a particular state. state and action? Yes. Yes and with actually? Yes [OVERLAPPING]. What she actually said is exactly the path that we're going to look about today which is we're going to focus on model-free control. So we're not going to directly estimate a model today. I'm actually personally very partial to models that can be a very simple efficient but for today we're not gonna look at that and we're gonna do exactly what was just proposed which is we're gonna compute a Q and if we compute a Q function which just to remember Q is always a function of state and action. We're gonna estimate the Q function directly and after we have that then we can do policy improvement directly using that Q function. So how would we do that? So this is Monte Carlo for on policy Q evaluation and it's gonna look very similar to Monte Carlo for on policy value evaluation. Um but we have to make a couple modifications. So before, if we were doing this for V, I'm just gonna write it to kind of contrast. So for V we just had a count of the number of states. Now we have a count of the number of state action pairs. Um before we could just keep track of G here um which can be the sum of previous rewards we've seen across all episodes for G of S. Now we're gonna do that for S, A and then now we're gonna end up having a value function. We're gonna have a Q pi. So essentially almost everywhere where we'd just had S before now we have S, A and then it's gonna look very similar. 
So we're gonna assume that we're still provided a policy, we can sample an episode and then we compute GIT for every single time step and that remember now is gonna, I mean it was before but we're gonna to think about the fact that it was also associated with a particular state and a particular action for that time step and then for every state action pair instead of just state visited in the episode either for every first time we saw that state action pair or every time we saw that state action pair, we can always, just like before we can either have first visit or every visit. There we're just gonna update our counts update our Sum of Total Rewards and then estimate our Q function. It's basically exactly the same as before except that now we're doing everything over state action pairs. Now once we have that now policy improvement is even simpler than before. So we're given this estimate of the Q function and now we can just directly take an arg max over it. So we define our new policy to simply be arg max of the previous one. Alright so did anybody see any problems with doing this so far for the type of policies we've been thinking about in the class? So far we've been thinking about mostly policies but are all deterministic which means that per, there are mapping from states to actions and we've been thinking about cases where this is um a deterministic mapping. So we always pick a particular action for a particular state. Yeah in the back and name first please. Oh what's your name? Oh yeah but what's the problem the problem is we're never exploring which is correct but what's the problem with not exploring? We only sample one path over and over again and we never actually learned anything about the rest of the world that we don't see. So we don't know whether there's a better policy. So what he is saying that maybe we're only going to sample one path. I think what he means more than that is so you can still sample different paths because your state space can be stochastic but you are only gonna ever try one action from one state. So you're never gonna learn about what it would be like if you took A2 instead of A1 in that state which means that when you're doing this for any particular state you'll only see one corresponding action. So the time whenever you see state S1 the only time the only action you'll see will be A1 or whatever your policy says to do there. Which means that you're not gonna have any information about doing anything else there which means your policy improvements is gonna be pretty boring because you're not gonna get any other information about things you should be doing instead. So we're going to have to do some form of exploration essentially now, we are gonna have to start to have some sort of stochasticity in our policy or there needs to be changing over time but we can actually try different things even from the same state and know what to do.Yes name first. My name is . Do we know the whole action space beforehand? Great question. question is do we know the whole action space beforehand? Yeah we're gonna assume that we do for at least all this lecture and in general yeah. Since you've made the action initially have high values so then after it's computed is probably computed took it low so the next time you see it you would try to other actions? has made a very nice suggestion so that relates to how we're initializing the Qs. 
So one thing you can do which is what he just suggested is you can initialize your Q function really high everywhere to basically do what's known as optimistic initialization um and that actually can be a really useful strategy for exploration and if you initialize it in a particular way then you can have a provable guarantee on how much data you're going to need in order to converge to the optimal policy. So optimistic initialization is often a really good thing to do to be a little bit careful of how you initialize things like what those value should be but generally empirically it's really good. And formally it can be very good to. We're not gonna talk about Optimistic initialization today but we will later in the class or talk about optimization. So doesn't really on the Markov assumption to be able to estimate Q right. Yes. But my question is whenever we're defining the policy, we only define it in terms of the state, and if the reward that you get from the state depends on officially you have, then that brings the Markov assumption in like even sorry, even though the reward is not Markov then your policy will act you we're defining a policy as if it were. Yeah so your real world may or may not be Markov all the policies we're talking about right now is assuming a world as Markov. The policies are only mappings from current state to action. They are not a function of history. So those may or may not work well because your real-world may or may not be an MDP and if it's not then you're considering essentially restricted policy class. Considering only mappings from the immediate state to the action and if you, what you should do really depends on the whole history then you might not make good decisions. Good point. Okay so this is sort of how the basic way you would extend Monte Carlo to be able to start to estimate Q and once you have that you could do policy improvement but now it's clear that we need to do something in terms of how we should get- gather experience so we can actually improve when we tried to do this policy improvement um. Because now we don't know how the real dynamics of the world work. So we need to do some with some sort of interleaving of policy evaluation and improvement and we also need to think about how we're doing this exploration aspect. So in general it might seem a little bit subtle. So we've already got one nice suggestion from like maybe you could initialize everything optimistically and maybe that would help you explore. It does, but in general it might seem like it's a little bit hard of how are we going to get this good estimate of Q pi because what Q pi does is it says um if you wanted a really good estimate of Q pi of S,A for all S and all A it would say you kinda need to get to every different state take every possible action and then follow pi from then onwards and so how do I make sure that I visit all of those things and what we're gonna talk about today is a very simple strategy to make sure that you visit things which works generally under some mild conditions about the underlying process. So the really simple idea is to balance exploration and exploitation by being random some of the time. So let's imagine that there's a finite number of actions we're gonna call that cardinality A, um, here then e-greedy policy with respect to a state action value is as follows. 
With probability one minus Epsilon, you're going to take the best action according to your current state action value function and else then you're gonna take an action A with probability Epsilon divided by A. So with probability one minus Epsilon you take what you currently think is best according to your group or your estimate of the Q function and with probability Epsilon you select one of the other actions. So it's a pretty simple strategy and the nice thing is that, it's still sufficient. But before we do that why don't we just do a brief example to make sure that we're on the same page. So let's think about how we would do Monte Carlo for on policy Q evaluation for our little Mars rover. So now our Mars Rover is gonna have two things that can do instead of we're gonna be reasoning more about that. So I've written down the reward function here. I'm saying that if you take action A1 you get the same rewards we've been talking about before which is 1, 0, 0, 0, 0, 0 plus 10. And now I'm changing it I'm saying well you're action does -- your rewards do depend on your state and the action you take and so the action for A2 is now going to be 0 everywhere and then you get a plus five at the end and gamma is one and let's assume that our current greedy policy is you take action A1 everywhere and that we're using an Epsilon of 0.5. And we sample a trajectory from an e-greedy policy. And again what an e-greedy policy means here is I set Epsilon equal to 0.5 which means that half the time we're gonna take our current greedy policy of action A1 and the other half the time we're going to either take A1 or A2. So what that would yield as an example would be a trajectory such as state three action A1 0, state two. And now this is a case where we're sampling randomly. So we flipped a coin. We said oh this time I'm gonna be random. Then I have to flip a coin again to see whether I'm taking an action A1 or A2 and I took A2 there. I got a reward of 0 and then the rest of trajectory as follows and my question to you and feel free to talk to a neighbor of course is what is now the Q estimate for all states for both action A1 and action A2 at the end of this trajectory using Monte Carlo estimates? So we're doing first visit in this case. Yeah. Uh, [NOISE] I have a question about the action we choose on the Epsilon table. Yeah. Uh, is it important when- what would the actions on policies , or should we pass in that action question is whether or not when you hit, uh, now do something random, whether you should include the action that you'll be taking normally if you're being greedy. Um, you could. In some ways, that's like just picking a different Epsilon. Yeah. I hear less talking than normal, so that they may have any clarification questions about this or, or [NOISE] or are there questions? Good. [LAUGHTER] Sorry. I have an idea. Okay. Yeah. So, um, uh, if everybody's ready to ask yourselves.So , what, what did you guys think? Uh, so, you will have in this case S3, well , so everything that you did not, every state of action pair you did not see will remain zero. Yeah. And at a particular, uh Q of S3, A1 will be zero, cause you saw that once and reduce that to zero, Q of S2h will also be zero, Q of S3A1 will be zero, and then, the only one that will be non-zero will be Q of S1, A1, which in this case will be 1. Because you saw it once, and the reward that you got when you saw it was 1. That's one answer. Anybody with a different answer? 
so, uh, all of the state action pairs that we've seen will be one, and all other state action pairs will be zero. That's another answer. So what, uh, was say would be right for the TD case. Okay. What you were saying would have been right for last week, uh, or if yesterday, or, or Monday. Any else who may have a third answer? Could you repeat what the second- the second choice was? The first choice is that we only update, um, S1, A1. The second choice is that everything that we saw will now be 1, and maybe I misunderstood over there. So we're gonna have two different um, we have two vectors now. So we have Q of A1, and we have Q of A2, and they're not gonna look identical. So, sometimes we take action A1, and sometimes we take action A2, and we can only update what we saw the returns for the action we took. So what actions do we take for S3? A1. Just A1, right? So that means for those ones, for S3 it's gonna be 1, and that for Q of S3, A1, so I'll fill in all the ones that are zero, one, two, three, four. Um, do we ever take A2 in S3? No. So that also has to be zero, cause we didn't ever start there, take action A2, and get a return. Uh, what about for, what action do we take from S2? Right. So for that one, we get a 1. So we basically, uh, distributing your experience. So now if you were going to take a max over those, then you would get the same thing that we saw last time for Monte Carlo, which would be 11100000 to the end, um, but here we- we're subdividing our samples. So, you only get to get an experience for the action that you actually took in the state. And because we're in the Monte Carlo case, we'll see the TD case or, Q-learning we'll call it later, um, then we get to add up all the rewards to the end of the episode. So G here is gonna be the sum of all these steps, and I didn't speci- oh, I did. Good. And we're keeping Gamma equal to 1 here just to make all the math. Just adding. Yeah? Should we just [OVERLAPPING]. Sorry. Can just be one half for Q S1A1 or in Q S3A1? Uh, is talking about whether or not if we did every visit, if anything would change here. [NOISE] Excuse me. It would not change in this case, because, um, both times when you visited S3, the sum of rewards to the end of the episode was 1. So you'd have two counts of 1, and then we divide by 2. It da- it can actually be different, but it's mostly different if you got like a different sum of rewards from then to the end of the episode. Yes? So is [OVERLAPPING]. Remind me [OVERLAPPING]. Yeah. Isn't that? Maybe I misunderstood. Yeah. So, I thought we were supposed to say that everything was, and I missed that. Did, did you say that that was different for the two actions? That was one for in the projectory, um, zero. I understand. Sorry about that. Okay. Okay. So now we're gonna show formally that this does the right thing. So, um, we're gonna show provably that like what we did before when we were doing policy improvement, we're showing that if you pick a policy, um, pi i, that was, uh, generated by being greedy with respect to your Q function, then that was guaranteed to yield monotonic improvement, and the same thing is gonna be true here too, when you do e-greedy. Um, so if you use sort of er, an e-greedy policy, then you can gather data such that, uh, the new policy- the new value you get, if you're optimistic with respect to that- oh, sorry, if you're greedy with respect to that, that means you're gonna get any better policy. Okay. 
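Before the formal argument, here is a minimal Python sketch of the two pieces used in that example: e-greedy action selection with respect to a Q estimate, and first-visit Monte Carlo on-policy Q evaluation over sampled episodes. The episode format (a list of (state, action, reward) tuples) and the random seed are assumptions for illustration.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def epsilon_greedy(Q, state, n_actions, epsilon):
    """With prob 1 - epsilon take the greedy action, else pick uniformly over all actions."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax([Q[(state, a)] for a in range(n_actions)]))

def first_visit_mc_q(episodes, gamma):
    """episodes: list of episodes, each a list of (state, action, reward) tuples."""
    N, G_sum, Q = defaultdict(int), defaultdict(float), defaultdict(float)
    for episode in episodes:
        # Return G_t from every time step to the end of the episode
        G, returns = 0.0, [0.0] * len(episode)
        for t in reversed(range(len(episode))):
            G = episode[t][2] + gamma * G
            returns[t] = G
        seen = set()
        for t, (s, a, _) in enumerate(episode):
            if (s, a) in seen:            # first-visit: only the first occurrence counts
                continue
            seen.add((s, a))
            N[(s, a)] += 1
            G_sum[(s, a)] += returns[t]
            Q[(s, a)] = G_sum[(s, a)] / N[(s, a)]
    return Q

# Policy improvement is then a greedy argmax over Q(s, .) for each state, and the next
# batch of episodes is collected with epsilon_greedy around that improved policy.
```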
So let's say that, um, we have an e-greedy policy, Pi i, and then we're gonna call an e-greedy policy with respect to Q Pi i, which is gonna be Pi i plus 1, so we had a greedy- e-greedy policy Pi i that was doing some amount of exploration and some amount of greediness in the past, we use that to gather data, we then evaluated that policy and we got this Q Pi i, and now we're gonna extract a new policy. We're going to do policy improvements. I'm gonna show that that's a monotonic improvement. Okay? Does anyone have any questions about the, what we are showing? Okay. So, what does this mean? So right now what we're gonna be trying to show is that this, this Q function, the Pi of s Pi i plus 1, so, is gonna be better than our previous value. At least as good or better than our previous value of our old policy Pi i. So the way we define this is now, um, the Q function here is going to be a sum over, our policy is stochastic. So it's Pi i plus 1, of the probability we take an action in a certain state, times Q Pi i of SA, and then we're gonna expand that out, and we're going to redefine it in terms of what it, what it means to be an e-greedy policy. So with, remember in a e-greedy policy we either take something randomly, and that's with probability S1, and we split our probability mass across all the actions. So that's how we get this equation. So this says, this is the- this is the random part. So with probability, with probability epsilon, we take one action, one of the actions, and then we would follow that from then always. So that's just Q Pi i of SA, and then with probability 1 minus Epsilon, we're greedy. And we follow the best action according to our current Q Pi i. So, now what we're gonna do is we're going to rewrite that. The first term isn't gonna change and I'm gonna expand the second. [NOISE] I haven't done anything here. I just multiplied the last term by 1, but I expressed the 1 as 1 over Epsilon divided by 1 over Epsilon, and now I'm gonna re-express that part. So, and I'm gonna rewrite the first term, plus 1 minus Epsilon, max over a, and what I'm gonna rewrite this as- It's gonna use the fact that whenever we define our e-greedy policy, if you sum over all actions in a certain state, those are all probabilities of us taking those actions in that state, so it has to sum to one. So I just first divide it, I just multiply by one, and we're expressing as 1 minus Epsilon divide by 1 minus Epsilon, then I re-expressed the 1, because it has to equal to 1, cause we have to take some action in a particular state. A policy always has to, the probability of us taking any action state has to be equal to 1, and then I'm gonna do the that expression because we're, here is where we'll take the best action. So by definition, the best action has to be at least as good as taking any of the other actions. So we're gonna do the following; we're gonna push that Q inside. [NOISE] So that has to be smaller than what we saw before, because basically we just push the Q inside, and we're no longer taking a max. And the Q values- all the Q values at best have to be equal to the max, and in other cases they'll generally be worse. Okay? But then once we have that, we can cancel that 1 over Epsilon minus 1 over Epsilon, and what do we have? We have two different terms here that look very similar. We have one. Let's see. We need one was taking that apart. And we'll keep this up. Yeah. There is an Epsilon over a right there. Okay. So now I'm going to pull that out. 
If I split those terms up, the first term and the third term are identical, and one is subtracted and one is added, so make sure that's clear. So this just ends up becoming the middle term, and that was just the previous value. Yeah? On the first line, where we changed the one minus Epsilon into the sum over a of Pi i of a given s minus Epsilon over the cardinality of A -- is that still equal to one minus Epsilon? Yes. So, what we did from the one minus Epsilon to the next line is this: we had a one minus Epsilon divided by one minus Epsilon, and I re-expressed the numerator as the sum over a of Pi i of a given s, minus Epsilon. If you sum Pi i of a given s over all actions, that has to equal one, because a policy has to put total probability one on the actions in each state, so that sum minus Epsilon is exactly one minus Epsilon. Does that answer your question? I think so. Yeah. Yes? [After some back and forth about which line the next question refers to, it turns out to be about the second line and where the greater-than-or-equal comes from.] Yeah, so good question. The greater than or equal happens because we had a max over a of Q Pi i of s, a, and we pushed that Q inside the sum. So that sum now no longer includes a max, and the max is always greater than or equal to any of the other elements, and therefore greater than or equal to any weighted average of them. So that's where you get the greater than or equal to. Yeah? So, I was just wondering if you could explain intuitively how taking random or greedy actions like this ends up giving monotonic improvement. Yeah, can we get some intuition -- this was the algebraic derivation. I think intuitively the idea is that by doing some e-greedy exploration you're going to get evidence about some other state-action pairs. You can use this to estimate your Q function, and that can improve your policy: you can get evidence that there is something better you could do than the thing you're currently doing. If you don't do any exploration, your Q function won't change from before. But because you're doing exploration you can learn about other stuff, and if something is better you'll see that in your Q function. And if the exploration doesn't turn up anything better, do you just keep the old one? Yeah. So, this is saying that you'll get this monotonic improvement if you're computing the Q function exactly -- that's an important part. What this shows is that if you get a Q function and it looks like there's some improvement from some other actions that you're not taking right now, you're going to shift your policy over towards focusing on those actions. This is assuming, in terms of the monotonic improvement, that Q Pi i has been computed exactly. That's what we had when we were doing planning, where we knew the dynamics model and the reward model and we were using them to compute a value function.
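For reference, here is the chain of (in)equalities just walked through, written out, where Pi i plus 1 is the e-greedy policy with respect to Q Pi i and |A| is the number of actions:

```latex
\begin{aligned}
Q^{\pi_i}(s,\pi_{i+1}(s))
  &= \sum_{a} \pi_{i+1}(a\mid s)\, Q^{\pi_i}(s,a)\\
  &= \frac{\epsilon}{|A|}\sum_{a} Q^{\pi_i}(s,a) + (1-\epsilon)\max_{a} Q^{\pi_i}(s,a)\\
  &\ge \frac{\epsilon}{|A|}\sum_{a} Q^{\pi_i}(s,a)
     + (1-\epsilon)\sum_{a}\frac{\pi_i(a\mid s)-\frac{\epsilon}{|A|}}{1-\epsilon}\, Q^{\pi_i}(s,a)\\
  &= \sum_{a}\pi_i(a\mid s)\, Q^{\pi_i}(s,a) \;=\; V^{\pi_i}(s).
\end{aligned}
```

The inequality holds because the weights in the second sum are nonnegative (Pi i is itself e-greedy) and sum to one, so the max is at least that weighted average; combined with the usual policy improvement argument this gives V of Pi i plus 1 of s greater than or equal to V of Pi i of s.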
Um, so if we were doing it in that case, we had the guaranteed monotonic improvement because we had the exact V Pi, and similarly here, if we have the exact value of Q Pi i, then when you do this improvement you're guaranteed to be monotonically improving. If you have just an approximation of Q Pi i, then it may not be monotonic -- say you tried another action only once in some state; you may have a bad estimate of how good things are from that point. So this is an important aspect, and it's going to be really important when we start to think about function approximation, because we will almost never have computed Q Pi i exactly. But if, say, you're in a tabular environment and you can just iterate through this a ton of times as you're learning, so that you've converged and you know your Q Pi i is exact, then when you do policy improvement you can get a benefit -- though there's going to be this interesting question of how often you improve your policy versus how much time you spend evaluating your current policy. Yeah? So does this mean that it definitely converges to the optimal Q function overall? Yeah, that question is great too. This is just saying one-step monotonic improvement; what's going to happen in terms of total convergence we'll talk about in just a second. Yes? Remind me your name, please. When I think of V Pi, I think of it as being a function of a state -- but here the policy is an action given a state? So, to refresh what a Pi is and how we define the function: we're thinking of it as a mapping from states to actions, but it can be a stochastic function, so it can be a probability distribution over actions. I can select action A1 with 50 percent probability, or action A2 with 50 percent probability, for example. And how do you actually use that? It depends how you want to implement it, but essentially I think of it as: you're in a state, you have some probability distribution over actions, and you sample from that to decide what action you take. So what we did here when we expanded this is we asked what the probability of an action given a state is under this policy: we said with one minus Epsilon probability we take the max action, and with Epsilon probability we take one of the actions uniformly, and then we summed over each of the actions we could take -- the probability of taking each action times the Q value of that action. So it's like our expected value. Yeah, at the back. So when we talked about the Bellman operator, we said that if you got the same value function, you can stop iterating. Here, would you have to have tried every action to know that you are done? That's a great question. Before, in policy improvement, if you got to the same policy, you were done -- you don't have to do any more improvement. The question is whether, in this case, that's still true or whether there are some additional conditions. This is very related to the previous question too, so why don't we go on to the next part, which is about under what conditions these are going to converge, and converge to optimal. Do you have a question before that?
Yeah -- does this also say that the only time we get strict equality is when Epsilon is 1, so you just act purely randomly? Uh, the question is whether, if the policy is random, you would get equality here. You can get equality whenever you've converged -- if your Q function has converged and your policy is optimal. Are you guaranteed equality just because you're acting randomly? No, I don't think so, because acting totally randomly is in fact often how you start off, and then you want to improve from there. Even if you're acting uniformly at random, some actions are going to have higher rewards than others, and that can be reflected in your Q function. Any other questions before we get on to convergence? In the back. Yeah, another question: do you exclude the argmax action when you explore, in the e-greedy part? And what is your name? No, you don't exclude it -- you don't exclude the argmax action when you explore; you pick among all of them. If you wanted to exclude it, you could, and that would be roughly equivalent to just defining a different Epsilon. But in the simplest version, including in this proof here, we assume that when you're acting randomly you just sample uniformly from all the actions. It's often easier for implementation, too. Okay, great questions. Let's write that up here as well. Okay, so this other really great question that's coming up from several people is: what does this mean over time? I've called it monotonic improvement -- what guarantees do we have? The guarantee that we have is: if you sample all state-action pairs an infinite number of times, and your behavior policy converges to the greedy policy. So what do I mean by that? The behavior policy here is the policy you're actually using to act, versus the policy that is greedy with respect to your current Q. So the condition is that in the limit as i goes to infinity, Pi of a given s goes to the argmax over a of Q of s, a with probability one, which means that in the limit you converge to always taking the greedy action with respect to your Q function. Then you are greedy in the limit of infinite exploration -- that's often called GLIE. So that means you visit all the state-action pairs an infinite number of times, but you are also converging in the limit to being greedy with respect to your Q function. And there are different ways to do this. The simple way is to decay the Epsilon in your e-greedy policy over time -- for example, you can reduce Epsilon towards zero at a rate of one over i, and that's sufficient. It's not necessary, and it's separate from what you want to do empirically; this is just to show that under these conditions we are generally going to be able to show that we converge to the optimal policy and optimal value for Monte Carlo and TD methods. So generally -- and we'll talk about this again when we talk about some of the other algorithms -- when you're GLIE, and you have some conditions over how you're learning the Q functions, then you will be guaranteed to converge to the optimal policy.
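Putting the pieces discussed so far together -- e-greedy action selection, first-visit Monte Carlo estimates of Q, and a GLIE schedule that decays Epsilon like one over the episode index -- here is a minimal sketch of the on-policy Monte Carlo control loop the lecture assembles next. The sample_episode function, the dictionary-based Q-table, and the exact Epsilon schedule are illustrative assumptions rather than details from the slides.

```python
import random
from collections import defaultdict

def mc_online_control(sample_episode, actions, num_episodes, gamma=1.0):
    """GLIE Monte Carlo control (first-visit, incremental averaging).

    sample_episode(policy) is assumed to roll out one episode with the given
    (stochastic) policy and return a list of (state, action, reward) tuples.
    """
    Q = defaultdict(float)
    N = defaultdict(int)
    k = 1
    epsilon = 1.0

    def policy(state):
        # e-greedy: explore uniformly with probability epsilon, else be greedy
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    for _ in range(num_episodes):
        episode = sample_episode(policy)
        # compute return-to-go for every step (gamma = 1 means plain sums)
        G = 0.0
        returns = [0.0] * len(episode)
        for t in reversed(range(len(episode))):
            G = episode[t][2] + gamma * G
            returns[t] = G
        seen = set()
        for t, (s, a, _) in enumerate(episode):
            if (s, a) in seen:          # first-visit update only
                continue
            seen.add((s, a))
            N[(s, a)] += 1
            # incremental average toward the observed return
            Q[(s, a)] += (returns[t] - Q[(s, a)]) / N[(s, a)]
        k += 1
        epsilon = 1.0 / k               # GLIE: decay epsilon over episodes
    return Q
```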
Um, do you realize like, like we've seen Epsilon and. Yeah. So question is is this the only way to guarantee it, um, there's sort of interesting different things that are happening here. Um, you could be guaranteed that you're converging to the optimal Q function without converging to the optimal policy. So, you could keep Epsilon really high, um, and you could get a lot of information you will be learning about what the optimal Q function is, but you might not be following that policy. And we'll talk more about that in a, in a minute. All right, so let's talk a little bit more about Monte Carlo Control. In that given this precursor. So, if we wanted to do Monte Carlo Online Control, instead of just this evaluation we talked about before, we can kind of combine these ideas of learning the Q function and doing this er, improvement at the same time. So we can initialize our Q functions and our counts in the same way we were talking about before. Um, and then what we could do is we can construct an E greedy policy. So E greedy policy in this case is always going to be that with probability one minus Epsilon. We pick the argmax with respect to Q with probability Epsilon we select an action and, let me write it this way: probability Epsilon over a we select action a. So we're just mixing up between this random um, or being greedy. Yeah. If I heard that so, actually like the optimal actions in this case you are selecting with probability one minus Epsilon plus Epsilon over the cardinality of A right? Yeah. Okay. Several people would ask about this. So essentially, you're being greedy with probability one minus Epsilon plus Epsilon over a. And then the remaining part of your probability is going to be an exploration. Because when you're being random you could also select what's currently the best action. So, um, it looks pretty similar to what we saw before. We're going to sample an episode, after we finished the episode then in this case I'm defining as first visit, no, you could make this every visit. I could do every visit. The same, um, benefits and restrictions apply here. So what we had before in the sense that you could either be getting a slightly more biased estimator if you're doing every visit but generally going to be able to use more data. It's going to be sort of less noisy. Um, so in this case what we're doing is we're just maintaining counts over state action pairs and we're updating our Q function. And then after we finished that episode then we can update, um, our k and our Epsilon, in this case we're just using Epsilon equal to one over k, and then we redefine our new E greedy policy with respect to Q, and then we get another episode. So that's just sort of Monte Carlo Online Control. So why don't we go back to that Mars Rover example? So, in the Mars Rover example what we had is for this is what our 2 Q functions look like. So, at this point what would be just spend a minute and say what would be our new policy, um, if we're at the end of this episode and- and its fine just write down tie if there, if there are two Q functions that have exactly the same, um, value for the, for the same state for two different actions and it's just a tie. Then you can choose how to break, the break the tie. Um, and then also write down what the new E greedy policy is. I'll just take a minute to do that. Okay, what's our greedy policy? What is the greedy policy for S1? A1. What is the greedy policy for S2? Two. And then what's our greedy policy for S3? One. 
And then what is it for everything else? Tie. Okay. And depending on your implementation you could either always be you could either sort of define your greedy policy or you would just like break ties randomly and keep track of that. Could constantly be breaking ties randomly. That would probably be better empirically like, instead of predefining one greedy policy, you can probably just always be Q, er, querying what argmax is of Q. And if you're getting ties just break them randomly to get more exploration. Um, so then if we then define an E greedy policy where K is three and our Epsilon is one over k, with what probability do we follow? Random. So k is three, Epsilon is equal to one over three. So that would mean that with one-third probability, we select something random and with two-thirds probability, we select the pi greedy policy. And then that would be the update for that particular episode. So, if you do this, if you do- if you have greedy in the limit of infinite exploration Monte Carlo, then you're gonna converge to the optimal state-action value. [NOISE] So, now, we're gonna start to talk about TD methods. So, similar to what we were seeing, um, for Q, uh, Monte Carlo, there is gonna be sort of this simple analogy that moves us over to TD. So, remember, for TD what we had before is, we have our V pi of S. It was equal to our previous V pi of S plus one minus Alpha times- oops, let me rewrite that- plus Alpha times R plus Gamma V pi of S prime minus V pi of S. [NOISE] And this was where we were sampling an expectation [NOISE] because we're only getting one sample of S prime, and we were bootstrapping because we're using our previous estimate of V pi. So, that was kinda the kwo- two key aspects of TD learning that we're both bootstrapping and sampling. In Monte Carlo, we were sampling, but not bootstrapping. Um, and one of the nice aspects of TD learning is that then we could update it after every tuple instead of waiting till the end of the episode. So, just as, like, what we do with Monte Carlo, we're kinda replacing all of our Vs with Qs, we're gonna do exactly the same thing here. [NOISE] So, now, we're gonna think about this sort of what's often known as temporal difference methods for control. [NOISE] So, what we're gonna do now is, we can do- we can estimate the Q pi function using temporal difference updating with, like, a e-greedy policy, um, and then, we could do Monte Carlo improvement by setting pi to an e-greedy version of Q pi. That would be one thing we can do. [NOISE] There's an algorithm called SARSA, which stands for state-action-reward- next state-next action, so SARSA. Um, how does SARSA work? So, what we do is, we initialize our e-greedy policy randomly. For example, uh, we take an action, we observe reward and next state, and then, we take another action, and we observe another reward and next state, and then, we update our Q as follows: We say our previous va- um, our value of Q for [NOISE] ST, AT is gonna be whatever our previous value was. Actually, I'm gonna be careful with this. We're not going to index them with pi anymore because we sort of have this running estimate, and our policy is gonna be changing, too. So, the, the Q function that we get here is now not just for one policy, but we're going to be averaging it over different samples, and we can be changing how we're acting over time. So, it's ST, AT, it's gonna be equal to Q of ST, AT plus one minus Alpha RT plus Gamma Q of ST plus one AT plus one minus Q of ST, AT. 
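A minimal sketch of one SARSA update matching the equation just written, in its standard form Q(s_t, a_t) <- Q(s_t, a_t) + alpha * [r_t + gamma * Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)] (the next paragraph notes that this and the one-minus-alpha way of writing it are the same thing). The dict-style Q-table and function name are illustrative assumptions, and terminal-state handling is omitted.

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    """One SARSA update: move Q(s, a) toward r + gamma * Q(s', a').

    The target uses the action a_next that was actually taken in s_next
    (on-policy), not the max over actions.
    """
    td_target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q
```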
The important thing about this equation is that I am plugging in the actual action that was taken next. So, you see- you're in a state, you do an action, you get a reward, you go to a next state, and then, you do another action. And so, once you know what the next action is that you've done, then you can do this update in SARSA as you're actually plugging in the action that was taken. And then, once you have that, you can do policy improvement in the normal way. So, you can have ST is equal to arg, max Q, [NOISE] so, like, the E-greedy wrapper for that. Now- so, this is a little bit different than Monte Carlo for two reasons. Um, it's sort of, uh, we're doing these tuple updates, we see the state, action, reward, next state, next action tuples, and then, once we do those, we can update our Q function. Um, we can do those along the way, we don't have to [NOISE] wait till the end of the episode, and similarly, we don't have to wait till the end of the episode to change how we're acting in the world. So, like, in the, um, trajectory that we saw before, we saw some states multiple times. In this case, we could actually be changing our policy for how we act in those states during the same episode. So, if your episodes are really long, this can be really helpful. [NOISE] So, in general, um, I think it's often extremely helpful to, um, update the policy a lot. Yeah Is there a reason [NOISE] ? Oh, yeah. So, they're both the same, it's just either you could write it where you put the V in the next part or not. So, you can either have it as one minus Alpha times your old value plus reward plus Gamma of your next thing, or you could have it as V plus Alpha times, or that, that should be still an Alpha here plus reward minus V. So that's either. They- they're the same. If you know that I've made a typo, just let me know. Yeah. Uh, and is there a reason we use, like, the next state action pair that we choose, uh, uh, A plus one rather than the max state action? question's about why do we use the next state action pair you choose instead of the max. Q-learning is going to be the max, we'll see that in about a slide. Um, SARSA's basically updating on policy, um, that can have- generally, you want to do Q-learning, which is going to be doing the max. Sometimes, there's some benefits, particularly in cases where, um, [NOISE] you could have lot of negative outcomes, that the optimism of being max can end up sort of causing your agent to make a lot of bad decisions early on because it's really optimistic about what it's- what it's- could do instead of what it's actually doing. Um, there's a nice cliff walk example inside Sutton and Barto where they show that SARSA actually is doing better in sort of early, early stages, early samples compared to Q-learning, because SARSA is realistic about what happens if you take certain actions next to- as opposed to optimistic. And if you're doing a lot of randomness, um, that means that SARSA can be more realistic in the early stages. But empirically, generally, you want to do Q-learning, and both will convert to the same thing. Yeah. Um, so, [NOISE] , should be, um, Q ST one A be plus one be ST plus one. Yeah. Thank you. Yes This might be question but you're talking about how its getting the information from the future action, but you have to have already done that action. So, why is it called, um, er, state action or -or next state actions, when it's really the past one that you're updating from what I'm understanding. 
Because you're updating this one using the information you learned from the action that came after it -- so why are we talking about it like it's a future action? What's the purpose of that? Um, all right. I don't think the particular terms used to define SARSA are the important thing; it's really just that you have to wait until you have that last A -- that last action is important. Before, we thought with TD learning that if you were in a state, took an action, and saw the reward and the next state, you could update your value function; now we're saying you have to wait until you've actually decided what to do in that next state. Because that's how you're choosing to update your Q function here, and that next action is what you're plugging in for your target. So, in terms of the convergence properties, it requires a couple of different things -- we need two things. We need the fact that we're updating our Q function incrementally, and so, like what we talked about before, we're going to need some conditions over the Alphas. If Alpha is equal to 1, generally your Q function is not going to converge, because it means you're not remembering anything about the past. If Alpha is 0, then you're not updating anymore. So generally you need something in terms of the step sizes which allows you to slowly keep incrementing but still converge. These are one sufficient set of conditions -- for example, Alpha t equal to 1 over t works (I'll write the conditions out just after this discussion). Now, empirically you're often going to want to pick very different forms of learning rates. Alpha t is often referred to as the learning rate parameter, and empirically you're often not going to want to use this schedule; you'll often end up using small constants, or slowly decaying constants. It often depends on the domain, but from the theoretical side this is what is sufficient to ensure convergence. And then the other aspect is that your policy itself has to satisfy the condition of GLIE, which means that you are slowly getting more greedy over time, but you're doing so in a way that you're still sampling all state-action pairs an infinite number of times. Now, just note for a second that that's not always possible. If you have a domain where things are not reachable after a point -- it's not ergodic, you can't get back to certain states after you leave them -- say you're flying a helicopter and you break the helicopter, so you can't get back up there -- then you're not going to be able to satisfy GLIE, because at some point you broke your helicopter and you have no idea what it would have been like if you had continued to fly it. So there can be some domains for which it is very hard to satisfy GLIE, but we are generally going to ignore those, even though there is some really interesting work on how to deal with those cases as well. In those cases, you might assume that it's more of an episodic problem, so maybe you have a hundred helicopters, and when you crash one that's considered a termination condition and then you get out your next one.
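The step-size conditions referred to just above, written out -- these are the standard Robbins-Monro conditions, and alpha_t = 1/t is one schedule that satisfies them:

```latex
\sum_{t=1}^{\infty} \alpha_t = \infty,
\qquad
\sum_{t=1}^{\infty} \alpha_t^2 < \infty
\qquad \text{(e.g. } \alpha_t = \tfrac{1}{t} \text{ satisfies both).}
```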
Um, so you may or may not be able to be greedy in the limit of infinite exploration there, but you can have a bounded amount of exploration. And we're going to talk a lot more later about how to do this exploration in a much smarter manner, and in a way that can give us finite-sample guarantees on how much data we need to learn a good policy. So, this is just what I said before, which is that we generally are not going to use that step size empirically. A question about the form of the update -- where you have Q plus Alpha times the target minus Q, versus one minus Alpha times the old value? Yeah, okay -- so this is the condition for SARSA, assuming that particular update of how we're updating our Q functions. Okay? So, yeah. In the Monte Carlo case, we had a sufficient condition of the policy being GLIE, with the Epsilon going down like one over t. Do we have anything similar in general? Great question. The question is about whether, for Monte Carlo, we have a similar sufficient condition. If you're doing first-visit Monte Carlo, that alone is sufficient, because you're getting an unbiased estimator of the returns that converges, as long as you see all of the state-action pairs an infinite number of times. If you're doing it in this incremental fashion, where you're playing around with what Alpha is, then you need similar conditions to make sure the guarantees hold. What I mean is, how do you know that condition holds -- how do you know that things are GLIE? Oh, great question -- so the question is how you make sure something is GLIE. One sufficient condition is that Epsilon is one over t, or one over i. Is it GLIE if and only if Epsilon goes to zero while something else diverges, or something like that? Yeah, I think it's quite similar to that kind of sequence condition -- essentially you're ensuring that you're doing an infinite amount of random exploration, but it's still going down fast enough that you converge to being greedy. Okay. So then when we get into Q-learning, which is related to the question that was asked earlier -- why are we picking that particular action next, why don't we just pick the max? Um, yeah, we could just pick the max instead. So SARSA is picking this particular action next; Q-learning is picking the max action next. Yeah. You said SARSA can do better early on, but not necessarily later -- is there any way that we could mix SARSA and Q-learning? You certainly could, um, but then maybe I wasn't being clear enough with the earlier part. SARSA can do better in some domains early on, particularly if there are a lot of really negative rewards, because it's being realistic about the policy you're actually following. In other cases Q-learning will be better even early on, because you're being more optimistic, and as we talked about a little bit before, often optimism is really helpful for exploration.
The cliff walk example in Sutton and Barto is a case where some actions lead the agent to like fall off a cliff and so some actions are really bad and so there being optimistic early on means that you're gonna take a lot of really bad decisions and suffer a lot of negative rewards for a while. Many other domains are not like that so depends on a lot. And yes you could certainly mix them. Alright. So I guess in terms of Q-learning one thing that's interesting here is, uh, so we can again sort of think about how are we're improving this and we're gonna, sort of, be e greedy with respect to the current estimate of optimal Q, and- and really this is quite similar to what we we're doing in SARSA except for now when we update this Q we're really just gonna be doing this MAX. So Q of ST, AT is gonna be equal to the previous value, plus alpha or plus max over A. So, now also note that you can update this a little bit earlier, so, you don't have to wait until the next action is taken. So, you only need to observe this part. You don't need to actually see the next action that's taken and then you can perform policy improvement, and in general, in this case, you're only gonna- you only need to update the policy for the state that you were just in. So you can do pi, you can update pi b for ST for the action you just took. You don't need to- particularly, in large state space, that can be helpful. So we actually ended up talking about this a little bit already about whether or not how you initialize Q matters. It doesn't asymptotically, I mean, if you have a case where your Q function is gonna converge to the right thing, it will still converge to the right thing no matter how you initialize it as long as it satisfies these other conditions, but it certainly matters a lot empirically and so even though often we think of just initializing it randomly or initialize it with 0, initializing it optimistically is often really helpful. So we'll talk more about that when we talk about exploration . Yeah On the previous slide line six, either max or a argmax?. Thank you. [NOISE] So now, um, if we do Q-learning. Um, Let's see. I think wha- I'm gonna leave this as just an exercise you can do later, but you could just do the exact same exercise for Q-learning, um, and see how these updates propagate. Um, so just like Monte Carlo versus Q- Monte Carlo versus TD for policy evaluation, there's some of the same issues with Q-learning. Q-learning is only gonna update your Q function for the state you are just in. So, even if it turns out later in the same episode, you get a really high reward. You're not gonna backpropagate that information at the end of the episode in the way that you would with Monte Carlo. So Q-learning updates can be much slower often, um, than Monte Carlo. Just like enter that has implications for how quickly you can learn to make better decisions [NOISE]. So, the conditions that are sufficient to ensure that Q-learning with the ε-greedy converges, it's basically the same as SARSA. We need to make sure that things are, um, that are GLIE, and, I see, and slightly revise this. So, if you just wanna make sure that you converge, that needed to be the all SA infinitely often. I need to have these conditions on the Alpha. So if you look at the same conditions, in order for the Q functions to converge, you need to have these conditions on how you're updating your li- like what you're learning rates are. Ah, and that you visit all state action pairs infinitely often. 
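Stepping back to the update itself for a moment (the convergence discussion continues right after this): a minimal sketch of the Q-learning update described above. The dict-style Q-table is assumed to have an entry for every state-action pair (e.g. a defaultdict), and the function name is illustrative.

```python
def q_learning_update(Q, actions, s, a, r, s_next, alpha, gamma):
    """One Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').

    Unlike SARSA, the target uses the best action in the next state, not the
    action actually taken, so the update can be done as soon as (s, a, r, s')
    is observed.
    """
    best_next = max(Q[(s_next, a_prime)] for a_prime in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q
```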
But that just- that's sufficient to allow you to converge to the optimal Q values. And then if you want to actually make sure that the policy you're following is really the optimal policy, then you need to be GLIE. You also need the policy you chose to be more and more greedy. All right, let me just briefly into the maximization bias before we finish. The maximization bias is an interesting question. Ah, so why are we going to talk about this? Well okay, let's go back to this one. So in Q learning, what are we doing? In Q learning, we're computing the Q function and then we're being e-greedy with respect to it. Now, we're going to need some more data and we're re-updating our Q function and we're being greedy with respect to it. And so we're e-greedy with respect to it. And so, we're always sort of doing this dance between updating stuff, getting more evidence, but then trying to kind of exploit that knowledge up to some random exploration. And the maximization bias points out that maybe there can be some problems with this. Okay. So, let's just consider a particular example. Imagine there is a single state MDP which means there's only one state. Um, but there are two actions and both of them actually have 0 mean random rewards. So now, you can think of these as being like, Gaussians. Right now, we're mostly talking about it when the reward is actually deterministic but it doesn't have to be. It could be stochastic reward. But in this case, where you would imagine that whether you take action a1 or action a2, your expected value is zero, but the value you get on any particular episode- any particular step might not be zero. Might be one or minus one or things like that. The average is still zero but on any particular step, you could have something different, okay? But the expected value is zero um, and so the Q value for both sa1 and the Q value for sa2 is zero which is the same as the value. And these are all the optimal Q and S values. So let's imagine that there are some prior samples. You've tried action a1 a bunch of times, you've tried action a2 a bunch of times, and you compute an empirical estimate of this. And here again where um, there's just a single state. Um, and we can just average over these. Let's imagine that it's super simple that we have um, gamma is equal to zeros. So, we're really just estimating over the immediate reward. Okay, so there's no future rewards. We're just saying all the times that we've tried this action before. What were all the rewards we get when we average? And now what we wanna do is we wanna take our empirical estimates of the Q function for a1 and a2, and we want to figure out what the greedy policy is. And the problem is that it can be biased. So even though each of these unbiased estimators of k- of Q are themselves, even- even though the two estimates the ah, actions are unbiased, when you take a max over it, it can be biased. Let's just write out what that is. So our V Pi hat is equal to the expected value of max over Q a1, Q a2. So I'm going to be taking the expected value of max of these two things because that's how I defined my policy. My policy says pick whichever of these two empirically looks best. But we know that from Jensens, this is greater than equal to if you switch the max and the expectation [NOISE]. And this is just equal to max of zero, zero. So the important part is this, and this is equal to the true V Pi. So that means that whatever we compute um, can be a biased estimator of the true V Pi. So why did this happen? 
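A tiny simulation of the effect just derived, under the assumption that both actions have standard-normal rewards with mean zero: each sample mean is an unbiased estimate of its action's value, but the max of the two sample means is noticeably positive on average -- exactly the Jensen's-inequality gap above. The sample sizes and names are illustrative, and the intuition for why this happens is spelled out next.

```python
import random

def max_bias_demo(num_trials=10000, samples_per_action=5):
    """Estimate E[max(Qhat(a1), Qhat(a2))] when both true action values are 0.

    Each Qhat is the mean of a few standard-normal reward samples, so it is
    unbiased; the max of the two sample means is positively biased.
    """
    total = 0.0
    for _ in range(num_trials):
        q1 = sum(random.gauss(0, 1) for _ in range(samples_per_action)) / samples_per_action
        q2 = sum(random.gauss(0, 1) for _ in range(samples_per_action)) / samples_per_action
        total += max(q1, q2)
    return total / num_trials

# With these settings the result is typically around 0.25,
# even though max(E[q1], E[q2]) = 0.
```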
Well if you get ah, you know, if you only have a finite number of samples um, I- if I have tried action a1 a finite number of times, it might be on that finite number of times it happens to look slightly positive like it's like, a 0.1 instead of zero. And then when I take my policy, I'm going to maximize over those. So I'm going to immediately exploit whichever one happens to look better even if with statistical chance. So that's why you can get this maximization bias. And the same thing can happen in terms of MDPs. So ah, this generally can happen. You can also look at some nice examples from this paper by Johns- Johnson Tsitsiklis and Shie Mannor where they show how this can also happen in Markov decision processes. Where essentially if you ah, if your estimates for these Q functions ah, then you're going to be sort of biased to whatever has happened to look good in your data and so you can have a maximization bias. So one thing that was proposed to try to handle- deal with this case is called double Q learning. And so the idea is instead of ah, having one Q function, we are going to have two different Q functions. And we're going to create two independent unbiased estimators of Q, and you're going to use one of them for your decision-making and the other to try to estimate the value. And that's gonna allow us to have an unbiased estimator. And the reason that you might want to do this is because ah, then it can sort of help- help with this issue that you can end up being overly bias towards things that have happened to look good. Yes, now you're separating like between the samples that you're ga- that you're getting to estimate how good an action is versus ah, the way you're trying to estimate your policy. So I'm just going to be a little brief with this because of time. Q learning basically- double Q learning basically means that we're going to have these two different Q functions. Um, and then with 50% probability, we're going to update one, at 50% probability, we're going to update the other. So, this was- and in this case, I'm going to skip out all others um, the final slides I want to show you the difference. Um, the difference here can be significant sometimes. So, in this case, this is sort of looking at the percent of time that we're taking bad actions in this domain where you can have, in this case, you have a scenario where it's actually the wrong thing to do but it's stochastic. And so with a small amount of data, it can end up looking better compared to another option where the reward is deterministic and actually better but has no stochasticity, and then Q learning can suffer quite a lot from this maximization bias. Um, if you're using the same Q function to essentially immediately define your policy as you are um, for estimating the value of that policy, whereas double Q learning does a lot better in this case. So it's something to consider in terms of when you're implementing these things and it's pretty small overhead too because you can just maintain two different Q functions. Right. I know that was a little bit fast but make sure to put details on there, um, when I- we upload the additional slides today um. The main things that you should know from today is to be able to understand how you do this Monte Carlo on policy controls and same for SARSA and Q-learning. It's useful to understand how quickly they update, um, both in terms of whether you have to wait to the end of the episode and then how quickly information propagates back. 
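For reference, before the closing summary continues: a minimal sketch of the double Q-learning update described a little earlier -- keep two Q-tables, flip a coin to decide which one to update, and let the table being updated pick the argmax action while the other table supplies its value. The function signature and dict-style tables are illustrative assumptions.

```python
import random

def double_q_update(Q1, Q2, actions, s, a, r, s_next, alpha, gamma):
    """One double Q-learning update.

    With probability 0.5 update Q1 using Q2 to evaluate Q1's argmax action
    (and vice versa), which decouples action selection from value estimation
    and reduces the maximization bias of plain Q-learning.
    """
    if random.random() < 0.5:
        a_star = max(actions, key=lambda ap: Q1[(s_next, ap)])
        target = r + gamma * Q2[(s_next, a_star)]
        Q1[(s, a)] += alpha * (target - Q1[(s, a)])
    else:
        a_star = max(actions, key=lambda ap: Q2[(s_next, ap)])
        target = r + gamma * Q1[(s_next, a_star)]
        Q2[(s, a)] += alpha * (target - Q2[(s, a)])
    return Q1, Q2
```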
And also to understand how to define the conditions on the algorithms converging to the optimal Q function. Thanks. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_15_Batch_Reinforcement_Learning.txt | Portion, and then a group portion. You'll be assigned groups in advance, um, and there'll be numbers on the chairs for the room that you're in so you'll know where to go sit. And the way that it will work is you'll first do the individual part, you'll turn that in, and then you'll receive another exam for the group part, and then go in and discuss your answers, agree on them, and then scratch that off to see if you got it right, and then when you're done, you'll hand that in as your group one. [NOISE] And, yeah. Uh, just, logist- logistical question. So we're splitting into the two groups again without being asked later? Yes. Okay. Great question. [inaudible] asked is whether or not we're splitting into two rooms, yes, we're again gonna do that, um, and that will be your assignment. Um, I think it's likely to be the same as last time, but I will confirm that. We might make it slightly different because some SC- SCPD students will be joining us for this one that didn't before and vice-versa. So it's possible, particularly, if you are right on the borderline, we'll, um, you'll be in a different room this time. So we'll announce that. Um, [NOISE] just as a reminder, right now, we're basically done with all of the assignments so this is a great chance to be focusing on the projects. Um, you should have been getting feedback about all of those along with, with, um, with a little bit of information back about your project and the person who graded it, will have signed that. So that's a great TA to go ask questions of and their office hours are on Piazza. Um, but you're welcome to go to any of the office hours to ask about project questions. [NOISE] And that poster session will be [NOISE] at the time, the original time announced. It is sub-optimal but what can you do? We're gonna meet in the morning on, uh, the last day of finals. [NOISE] So that was the one that we are assigned. Anybody else have any other questions about this? Who here went to Chelsea's talk on Monday? [NOISE] Okay. She does, uh, for those of you who did- didn't get to see it, Chelsea's talk will be online, it's a really nice talk about meta-reinforcement learning. It will be covered on the quiz next week, but pretty light, um, because you haven't had much exposure to that idea. So again, just as a recap for the quiz. The quiz will cover everything, um, in the course. But things that you didn't have a chance to actually think about, um, uh, because you didn't get practice on it with an assignment, uh, we will test more lightly. It also will be multiple choice. I highly recommend that you take the quiz from last year ignoring any topics that we haven't covered, [NOISE] um, and do that without looking at the answers. Um, one of the robust finding for educational research is that forced recall is incredibly effective at helping people learn. Forced recall is a nice way for testing [LAUGHTER]. So, um, this is, of course, in the case where you're not getting assessed by it so you just can use that to, to check whether or not you understand these things and look at anythi- re-look up anything [NOISE] that might- you might have questions about. Any other questions? All right. So let's go ahead and get started. Um, we're gonna talk today about batch reinforcement learning, and in particular, safe batch reinforcement learning, I'll define that what that is. 
Um, this is a topic that I think is extremely important, we do a lot of work on it in my own lab. And most of the topics I'm gonna focus on today will be work that came out of, uh, work that was done by my postdoc, Phil Thomas, who's now a professor at UMass Amherst, but the work he did before he worked with me, um, some of our joint work, and I'll also highlight some other related work. So let's think about a simple case. Um, let's think about doing a scientific experiment, um, where you have a group of people who we're- we're gonna call the A group, and they get to first do this particular type of fractions problem. This is a fractions problem for one of our tutoring systems, um, uh, where people are having to add fractions together and then reduce the sum to the lowest terms. And then they have to do something where they do cross multiplication, um, and then after that, they get an exam. [NOISE] And, again, an average score of a 95. And then we have the B group that does the same activities but in the opposite order, and then they get an average score of 92. And the question is, for a new student, what should we do? [NOISE] And what would I- what would- what additional information might you want to know in order to be able to answer this question? So feel free to shout that out. So what would you need to do for like now, a new student comes along? [NOISE] Now, let's imagine our objective is to get a high score on this exam, for that student to get a high score on this exam. So what- which sequence of activities would you give to that new student in order to maximize their probability that they get a good score on this exam, indicating hopefully that they've learned the material? So, yeah. Do you know how big A and B are? So one great question. [inaudible] to say, um, how big is this group? Um, so how large is an A and B? And you might want to know this for a number of reasons. For example, if the number of people in group A is 1, and the number of people in group B is 2, maybe these are just sort of statistical noise between these di- distinctions. What else might you want to know? [NOISE] Um, the variance. Yes. So I suggest that you might want to know the variance. That's another thing you might want to know. This is just the mean. So what is the variance? What other pieces of information might you want to know? Yeah. Probably the difference between the median and the mean. Yeah. So other sort of forms of statistics about, sort of, you know, the distribution. I'm- I'm thinking about something also more, um, in a different direction. Yeah. Um, the people in the group. So is that bigger group A is all high school students, group B is all like lower grade or something like that. Exactly. So maybe the- maybe group A is kindergartners and maybe group B is, um, you know, high schoolers and [LAUGHTER], we're all regressing [LAUGHTER]. So yeah. So- so who's in these groups? [NOISE] And in addition to that, often, you'd want to know who was the new student. So is the new student then a kindergartner or a high schooler? Okay. So there's really a lot of additional information that you'd want to know in order to be able to answer this question precisely, uh, and it involves a lot of different challenges. And one of the challenges here is that, um, if group B is different than group A, then we have this fundamental issue of, um, censored data. 
You don't get to know what would have happened in group A if they had had the same seq- or in the group B if they'd had the same sequence of interventions as in group A. So this is sort of the fundamental challenge that you'll never know what it would be like if you were at Harvard right now, um, but, uh, but there's this, uh, you only get to observe the outcome for the action that's taken. We've seen that a lot in reinforcement learning, and that's true in this case too where we have old data, um, about sequences of decisions, and so it requires this kind of actual reasoning. Another thing that it involves is, um, generalization. So here's a simple example where you can think of it basically as being just two actions. Each of the two different prob- each of the two different problems and you can think of this as the reward. The delayed reward may be of a reward of 0, reward of 0, and then a reward of the test score. And so here there's only two actions, and who knows how big the state space is, it depends how we'd want to model students. Um, [NOISE] but we don't want to think about combinatorially all the different orders of actions. Um, and even if we're writing down a decision policy, that might start to be very large very quickly. And so we're gonna need to be able to do some form of generalization, either cross states, or actions, or both so that we don't have to run a combinatorial number of experiments to figure out what's most effective for student learning. [NOISE] So we're gonna talk about this problem today in the context of batch reinforcement learning which we can also think of as being offline. So this has to be offline batch RL, and this is frequently gonna generally be off policy. Now, we've seen a lot of off-policy learning before, Q-learning uses an off-policy algorithm. Um, but I want to distinguish here that what we're gonna be mostly focusing on today is the case where someone has already collected the data. So we already have a prior set of data, um, and then we're gonna want to use it to make better decisions going forward. Now, this problem comes up not just, um, in the case that I just mentioned, but in a huge number of different domains. You could argue that areas like econom- economics, um, and statistics, and epidemiology are constantly asking these sort of questions. Um, it comes up in things like maintenance, you know, um, what sort of order of, um, actions do you want to do to make sure that your machines, your cars run for longest. Um, it comes up in health care, like what sort of sequence of activity should we give to patients in order to maximize their outcomes, their quality-adjusted life years. And in many of these cases, it's gonna be state-dependent because what's gonna work best for patient A is gonna be different than what works best [NOISE] for patient B. [NOISE] Now, one of the big challenges here too is that when we think about a lot of the cases where we have this old data, it's gonna be high-stake scenarios, which means that whether it's because we have really expensive, you know, nuclear power equipment which we don't want to go wrong, um, or we're treating people, um, for, you know, really significant diseases, then we wanna make sure that we make good decisions in the future. So we might may or may not have a lot of data, um, but the data that we have is precious and we wanna make as good decisions as possible from that. So that means we need to have some form of confidence in how well it's gonna work going forward. 
So we would really like to have some sort of upper and lower bounds on its performance before we deploy it. Um, and in general, we just want good methods to try to estimate, um, if we do this counterfactual reasoning, if we think about how well people, or, you know, how much more healthy people might be if we were to treat them in a different way, um, how confident can we be before we convince a doctor to go actually deploy this. So what I'm gonna talk about today is thinking about sort of this general question of how can we do batch safe reinforcement learning. Safety can mean a lot of different things. Um, when I'm talking about safety today, I'm mostly gonna be thinking about this in terms of this confidence. This ability to say, um, before we deploy something, how good do we know it is. Um, now, there's different forms of safety. There's things like safe exploration, making sure you don't make mistakes online, there's risk sensitivity, thinking about the fact that, um, each of us is only gonna experience one outcome, not the expectation, um, so we may want to think about the full distribution instead of averaging. But what we're gonna talk about today is mostly still thinking about expected outcomes but thinking about being confident in the expected outcomes. And so, in general, we would like to really be able to say with high confidence this new decision policy we're gonna deploy for patients, or for nuclear power plants, or for other sorts of high-stakes scenarios, we think it's better. We think it is better than what we're currently doing. And why might you want this? You might want to, sort of, to guarantee kind of monotonic improvement particularly in these high-stakes domains, which is something that we've seen earlier this quarter. [NOISE] Okay. [NOISE] So let's talk just briefly about some of the notation, some of this will be familiar. I just wanna make sure that, we're all on board with that, and then I'll talk about sort of some of the different steps we might think about to try to create batch, um, safe reinforcement learning algorithms. So whereas usually gonna use pi to denote a policy, um, we're gonna use T or H to denote a trajectory. I'll often use big D to denote the data that we have access to. This is like [NOISE] electronic medical records systems, um, or you know data about power-plants, etc. And for most of this, we're gonna assume that we know what the behavior policy is. So we know what was the mapping of states to actions, or it could have been histories to actions that was used to gather the data. [NOISE] Can anybody give me an example where that might not be reasonable, where we might not know the behavior policy? [NOISE] [inaudible] actions, and we don't know the questions. Exactly. In many, many cases where the data is generated from humans, um, we will not know what pi_b is. So when we look at medical health data, we typically don't know what pi_b is. So, you know, if this is generated by doctors, [NOISE] we typically don't know what pi_b will be. There's obviously guidance, but we don't typically have access to, um, the exact policy people used and, and if we did, they probably wouldn't have phrased it as like, you know, a stochastic process. [NOISE] But like when someone comes into their office with probability 0.5, they're gonna treat them with this person 0.5. They probably think of it, in deterministic terms and they probably wouldn't think of it in terms of these if-then rules. So there are many cases where [NOISE] we don't have that. 
Um, we've done some work on that recently, other people have as well. I might talk about that briefly at the end. But for most of today, we're gonna assume that we have access to this. So can someone give me an example, where it is reasonable to assume that we have pi_b? [NOISE] Sure. [NOISE] [inaudible] is based on a [inaudible] set of guidelines or something? Yeah. So in some cases, you know, [NOISE] [inaudible] like sometimes you have like an algorithm to make a decision or, you know, a clear set of guidelines. Were you gonna say something similar or different? A little different. Um, so if you have a power plant with maintenance records, ah, [NOISE] like an established maintenance plan that has records [NOISE] match the plan then you basically have pi_b. That's right. Yeah. So in those real cases where you have these fixed protocols. Another example, I often think about is, you know when your decisions were made by reinforcement learning [LAUGHTER] agents, or, or they're made by supervised learning agents. Um, if you go to a lot of different, ah, like how, you know, Google serves ads. We know exactly how it serves ads, it has to all be logged. And there's a, there's an algorithm that is making that decision. So in many cases, [NOISE] we have access to the code that is being used to generate algorithms automa- ah, generate decisions [NOISE] automatically. In that case, we can just look it up, as long as we've saved it. So, and our objective is usual is to think about how do we get good, good policies out, and good policies with good values. Um, when we think about trying to do safe batch reinforcement learning in a setting, we're gonna be thinking about, how do we take that old data? So we're gonna take our data's input and push it through some black box, and get out a policy that we think is good. So we're gonna have sort of some algorithm or transformation that is instead of interacting with the real-world, and getting to make [NOISE] decisions and choose its data, it's just taking this fixed data, and it'll put a policy, and one thing that we would like is that, if we feed data into our algorithm, and our algorithm could be stochastic, then the value of the policy it outputs. So we can think A(D). So that's, you know, this is being our algorithm A, it's gonna output some policy. That might be a deterministic function, that might be a stochastic function. Whatever policy it outputs, we want it to be good. Ideally, at least as good as the behavior policy [NOISE]. So that's what this first equation says, is it says the value of whatever policy is output, by our algorithm when we feed in some data set. We would [NOISE] like it to be as good, as what [NOISE] the behavior policy that generated that dataset, um, whatever that value was. So this is sort of [NOISE] the value of the policy used to generate the data. Now, we don't normally, we're not normally given via pi_b, um, directly. But can anybody give me an example of how we might learn that? [NOISE] Given a dataset, which was generated on policy, from that policy generated, um, using that policy. Yeah. Just do like dynamic programming or whatever on that small [inaudible] approximately. Yeah. So I think, like one thing is that, you know, you could use that data. Um, I don't know if you could do dynamic programming, because you don't necessarily have access to the transition and reward models. 
But you could do something like Monte Carlo estimation: average the return. Let's assume that in the dataset you can see states, actions, and rewards. Then you could certainly just average, you know, average over all the trajectories. So we can estimate V pi_b by looking at, let's imagine it's an episodic problem, 1 over n times the sum over i equals 1 to n, the number of trajectories, of the return for trajectory i. Which is essentially just doing Monte Carlo estimation, because you know that that data was generated on policy, [NOISE] and so you can just average. [NOISE] So that'll give us a way to estimate V pi_b, but then we need some way to estimate V of our algorithm's output, A(D). Um, and that means we're gonna have to do something off-policy, because in general we're gonna be wanting to find policies that are better than pi_b, which means that they would have had to be making some different decisions. And I'll just highlight here that sometimes you might not just want to be better than the existing behavior policy, you might need to be substantially better. Often, if we're thinking about real production systems, it costs time, and money, and effort whenever we want to change protocols. If you want to get doctors to change the way they're making decisions, or we want to change things in a power plant, there's often overhead. So often you might not just need to be better than the existing state of the art, you need to be significantly better. So to the same types of ideas we're talking about relative to the current performance, you can always add a delta, because the new policy has to be at least this much better. And so again, just to summarize, what does this equation say? It says I want situations where the policy that's output, when I plug in my dataset, is better than my existing policy with high probability. So delta here is gonna be something between 0 and 1. So now let's talk about how we might do this. We're going to start with off-policy policy evaluation. [NOISE] So the idea in this case - well, okay, first I'll go through all three of these really briefly, and then we'll step through them more slowly. So the three things, in terms of stuff we might want to do for safe batch reinforcement learning - and there's tons of variants for each of these depending on the setting we're looking at - are, first, we need to do this off-policy batch policy evaluation. Which is: we need to be able to take our old data and then use it to estimate how good an alternative policy would do. We might want to get confidence bounds over how good that is. So this could just allow us to get some estimate of V of A(D), or V pi_e. Pi_e is often used to denote an evaluation policy, a policy we wanna evaluate. So the first thing is just doing off-policy policy evaluation. The second thing is saying, how would we know how good that estimate is? This estimate could be really good, could be bad, so you might want to have some uncertainty over this estimate, so that we can quantify how good or bad it is. And then finally, we might want to be able to actually take, you know, an argmax over possible policies. So you might want to be able to do something like argmax, [NOISE] over pi, of V pi_e, with some confidence bounds.
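Here is a minimal sketch of that on-policy Monte Carlo estimate of V pi_b in code. It is illustrative rather than anything from the course materials: it assumes each logged trajectory is a list of (state, action, reward) tuples, and the function name is made up.

```python
import numpy as np

def mc_value_of_behavior_policy(dataset, gamma=1.0):
    """Monte Carlo estimate of V^{pi_b}: average the (discounted) return
    over trajectories that were generated by pi_b itself."""
    returns = []
    for trajectory in dataset:                    # trajectory: list of (s, a, r)
        g = sum(gamma**t * r for t, (_, _, r) in enumerate(trajectory))
        returns.append(g)
    return np.mean(returns)
```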
So in general, you're not gonna just want to be able to evaluate how good alternative policies will be, you're gonna wanna figure out a good policy to deploy in the future, which is gonna require us to do optimization because we don't normally know what that good policy is yet. So typically, we're gonna end up evaluating a number of different policies. So we can think of it as sort of the first part is we're gonna take our historical data, take a proposed policy, plug it into some algorithm that we haven't talked about yet, and get out an estimate of the value of that policy, and we're gonna talk about how to do important sampling [NOISE] with that. And then after that, we're gonna go into the high-confidence, um, policy evaluation and safe policy improvement. To get confidence bounds, we're gonna look at Hoeffding. We've seen Hoeffding before, um, as something that we looked at when we were starting to talk about exploration. So when we look at high-confidence and then think back- think back to exploration. So in exploration, we're often trying to quantify how uncertain we were about the value of a policy or its models in order to be optimistic with respect to how good it could be and use that to drive exploration. But we also could have computed confidence bounds that are lower bounds on how good things could be, and that's gonna be useful here when we try to figure out how good policies are before we deploy them. And we'll do Hoeffding inequality for that, and then finally we're gonna be able to wanna do things like safe policy improvement, which is can you answer the question of saying, if someone gives you some data and they say, "Hey, can you give me a better policy?" Can one have an algorithm that either gives a better policy that it is actually better like when you go and deploy it with high probability? Or can the algorithm also know its limitations and say, "Nope, there's no- there's no way for me to give you a policy that's better." So I think it's also really nice to have algorithms that are aware of their own limitations. We're doing quite a bit of work on that in my lab right now, um, so that when people who are using these, particularly for human in the loop systems, um, that they can understand if the algorithm is giving out garbage or not. And so in this case, the idea is that sometimes if you have very little data, you can't do improvement in an, uh, confident way. So for that example I showed you before, we had like two different ways of teaching students, and someone, you know, made the good point of saying, how many people are in each of these. If only one person has tried either of these and someone says, "Can you definitely tell me what's going to be better for students in the future?" You should say, "No." [LAUGHTER] Because there is only one data point, like there's no way we would have enough data from, you know, one data point in each group to be able to confidently say in the future how we should teach students. So I think that the safe policy improvement needs us to be able to both say when we can be confident about deploying better policies in the future or when we can't. So we're gonna look at sort of a- a pipeline for how to answer that sort of question. All right. So let's first go back, go and think about off-policy policy evaluation. So the aim of this, um, is to get a- an off-policy estimate that is unbiased. So we want to get sort of a really, you know, a good estimate of how, um, how good an alternative decision policy would be. 
So we have data right now, which is sampled, um, under some policy, let's call it a behavior policy Pi 2. So we have like this dataset. This is D, which is giving us through these samples of these trajectories, and then we want to use them to evaluate how good an alternative policy would be. And while we could do this for something like Q-learning, we want to do it with a different method that's gonna allow us to get better confidence intervals, um, and where it's going to be an unbiased estimator. So in Q-learning, if we think back to sort of what Q-learning was doing, you know, Q-learning is off-policy. We could do this with Q-learning. Q-learning is off-policy, and it- it samples and it bootstraps. And because it samples and bootstraps, it can be biased, okay? And so we're wanna do a different thing right now, which still allows us to be off-policy, but in a way that we're not biased, that our estimator might not systematically be, um, above or below, particularly because right now we're always gonna have finite data. We're never gonna be in the asymptotic regime where we have tons and tons and tons of data, um, and we can sort of assume this went away. So again let's think about the return. G_t is the return. It's just how much reward we got under a policy, you know, either over a finite number of steps like for one episode or across all time. And our policy is just or the value of a policy again is just the expected discounted reward of that. So the nice thing is that if Pi 2 is stochastic, the data that you're using- that you're gathering from your behavior policy, then you can use it to do off-policy evaluation. This would have been essential for doing Q-learning too. And one of the nice things is that because we're kind of following this Monte Carlo type frame work, you don't need a model and you don't need to be Markov. That's really nice because we're gonna end up getting an estimator that is, um, unbiased and it does not rely on the Markov assumption holding. And in many cases, the Markov assumption is not gonna hold, particularly when we start to think about patient data or other cases where we just have a set of features that happen to have been recorded in our dataset, and who knows whether or not that system is Markov or not. We've certainly seen in some of our projects that it is not. And that in some of those cases, if you make the assumption that the world is Markov, you have really bad estimates of how good, you know, an alternative way of teaching students might do. Okay. So why is this a hard problem? Well, um, it's because we have a distribution mismatch, okay? So if we look at, um, imagine we just had a two-state process, where we thought about, you know, kind of this is S and this is- or like we can say this is the probability of your next state S prime, and I've sort of made it smooth. We can think of a Gaussian here, and this is under pi behavior. Under pi evaluation, it might look different, okay, versus this. In general, the distribution of returns you're gonna get, the sequence of the state-action, reward, next state, next action, next reward, so this sort of trajectory. The distribution of trajectory is gonna be different under different policies. So the distribution of tau here is not gonna look like the distribution of tau here. If it looks identical, what does that say about the value of the two policies? [inaudible]. Sorry what? [inaudible]. Exactly. So what you said is correct. 
If, um, I- if the distribution of states and actions that you get under both of these policies are identical, then the value is identical. And we saw this idea also in imitation learning when we are gonna be doing sort of state action, uh, or- or state feature matching. Now in this case, we're talking about not just states and actions, we're talking about full distributions or full trajectories because we're not making the Markov assumption, but the idea is exactly the same. The only way we define the- the value is basically the probability of a trajectory and the value of that or, you know, the sum of rewards in that trajectory. So if a distribution is identical, then the value is identical. We don't care if the policies are different because we already know how to estimate the value. [NOISE] So the key problem here is that they're gonna look different, which means that you would have went- done different things under different policy. So it's like, you know, right now, maybe you go and visit this part of the state space a lot, [NOISE] excuse me, [NOISE] and this part infrequently. And now you're gonna have an alternative policy, which only goes here infrequently and goes over there a lot. [NOISE] Excuse me. But thinking about it in this way gives us an idea about how we can look at our existing data over there and make it look more like that. Does anybody have any idea of how we could do that? So if someone gives you a bunch of trajectories, um, how might you maybe change them so they look like the distribution you care about? Yeah? Importance sampling. Right. So we can do importance sampling here, okay? So let's just review and refresh importance sampling. So the idea is that for any distribution, um, we can reweigh them to get an unbiased sample, okay? So let's imagine that we have data generated from, um, or we want data generated from some distribution q, we wanna estimate f(x), okay? So we'd have- wanna get f(x) under the probability distribution q(x). So we can multiply and divide by the same thing, let's incorporate another distribution. It's just a different distribution over x times q(x), f(x), dx, okay? So we can just rewrite this as being equal to integral over x probability of x times this quantity, which is q(x), divided by p(x) times f(x), and let's imagine that we actually have data from q(x). So we want data from q(x) but we have data from p(x). So we can approximate this by 1 over n, sum over i, q(xi) divided by p(xi) f(xi), where xi is sampled from p(x). I remember when I first learned about this a number of years ago and I thought it was a really lovely insight just to say we're just gonna reweight our data. And so we're gonna focus on, um, the data points that come from, you know, that- that are ones that we would sample under the distribution we care about. We're just gonna reweigh them so they look like they're having the same probability, um, that they would under- under q(x). In our case, under our desired policy. Okay. So importance sampling works for any distribution mismatch. If you have data from one distribution you wish you had it from another, um, those can come up in things like physics. Often you have really rare events like Higgs bosons. And in those cases, you might, um, there are different scenarios where you could reweigh things, um, so you can get an estimate of the true- the true expectation. Okay. This is for just generic distributions. Let's remind ourselves how this works for- for the reinforcement learning setting. 
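Before moving to the reinforcement learning setting, here is the generic importance sampling trick in code, as a rough sketch with made-up names: we want an expectation under q but only have samples from p, so we reweight each sample by q(x)/p(x).

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# We want E_q[f(x)] with q = N(1, 1), but we only have samples from p = N(0, 1).
xs = rng.normal(0.0, 1.0, size=200_000)          # x_i ~ p
weights = gaussian_pdf(xs, 1.0, 1.0) / gaussian_pdf(xs, 0.0, 1.0)   # q(x)/p(x)
estimate = np.mean(weights * xs**2)              # f(x) = x^2

# The true value of E_q[x^2] is variance + mean^2 = 1 + 1 = 2,
# so the estimate should come out close to 2.
print(estimate)
```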
So again, we're gonna have our episodes. We can call them h, we can call them tau; each is a sequence of states, actions, and rewards. And then if we wanna do importance sampling - let me just write this out. In this case, we're gonna want something like p of h_j under our desired policy. So what is that gonna be? That's gonna be the probability of our initial state - let's assume that's identical no matter what policy you're following - and then we have the probability of taking a particular action given we're in that state, times the probability that we go to the next state, and the probability of the reward we saw. And we take the product of that over the time steps, from 1 up to L_j - 1, the length of that trajectory. So we just repeatedly look at the probability we picked the action, the transition model, and the reward model. So that's what we have for the probability of a history. And then if we wanna do importance sampling, what we need is to compute this ratio of q(x) divided by p(x), which for us is gonna be the probability of history j under the evaluation policy, divided by the probability of history j under the behavior policy. And we wanna do this, and we're hoping that everything is gonna cancel, because we don't have access to the dynamics model or the reward model. And fortunately, just like what we saw in some of the policy gradient work, it will. So if we take the probability of h_j under pi_e divided by its probability under pi_b, we're gonna have again the initial state distribution, which will be the same in both cases, and then we have this ratio of probabilities: the probability of a_j given s_j under pi_e, divided by the probability of a_j given s_j under pi_b, and then the transition model and reward model. And this is nice, because this cancels and this cancels. Because the dynamics of the world is what determines the next state, and the dynamics of the world is what determines the reward. And notice here, just to make this not incredibly long, I've made a Markov assumption. So this is the Markov version. [NOISE] But you could condition on the full history. So this trick does not require the system to be Markov. Because no matter whether your dynamics depend on the full history or just the immediately preceding step, they're gonna be the same whether the data came from the behavior policy or the evaluation policy, so you can cancel them, and the same goes for the reward model. So this insight does not require a Markovian assumption. And what that means is that we just end up getting this ratio of the way we would pick actions under the evaluation policy, divided by the way we would pick actions under the behavior policy. Yeah? Assuming that, uh, the same trajectory is generated by two different policies? Great question. Yes, we're asking about the same trajectory under two different policies. So we're saying, for this trajectory, what's the probability you would have seen it under the behavior policy versus the evaluation policy? And if it was more likely under the evaluation policy, we wanna upweight whatever reward we got for that trajectory. And if it's less likely under the evaluation policy, we wanna downweight it. So the intuition is that we have a bunch of trajectories and their sums of rewards. So we kind of have these h_1, G_1 pairs - these trajectory, sum-of-rewards pairs.
And if we had the same behavior policy as the evaluation policy in order to know how good that evaluation policy is, we just average all of those G's but they are different. And so what we wanna do is we wanna say for h's that are more likely under an evaluation policy, upweight those. For h's that are less likely under a evaluation policy, downweight those so that you get, um, a true expectation when you do those G evaluation, G weightings. Does that makes sense? Does anybody have any questions about that part? So this is just so far telling us how we re-weight our data. It's allowing us to get a distribution that looks more like the distribution of trajectories we would get under our evaluation policy. Yeah? In the final, ah, the final row, we have denominator, ah, the behavior policy. We get that as empirically? Good question. You know, where does the behavior policy come from, basically like- like what is these probabilities? Is that the question? So I think it depends, So in the case, um- if you're- if it's a machine learning algorithm, you still generate your data, you just know it. Like you would just look up in your algorithm and see what the probability distribution is. And for today, we're going to assume that this is known and it's perfect. Um, that is obviously not true when we get into people data. In those cases, um, there's a couple of different things you can do. One is that you can build estimators that are robust to this being wrong. So you can use other ways to try to kind of be doubly robust if that estimator is wrong. In others you can take the empirical distribution. And actually there's some cool work recently, I think was from Peter Stone's lab at UT Austin, showing that sometimes you- it's better to use the empirical estimate then even if you know the true estimate. Like in terms of the resulting estimator, which is kinda cool. Okay, so this is just writing that out in LaTex instead of me hand-writing it, um, and so this just writes out, um, [NOISE] the- the probability of a history. And now we can see that equation that I put here. So, um, you know, the value of the policy that you wanna evaluate, um, is gonna be this ratio of history's times the return of that history. And what we said here is, and this is, you know, one over n, is that this is simply the probability of taking each of those actions or I'll write it just in terms of the Pi notation. So this is Pi e of aj given sji, i equals one to the- the length of your trajectory divide by Pi b aji. All times the return of that particular history. [NOISE] So the beautiful thing is you can- this is an unbiased estimator. This really does give you a good estimate of the value under a few assumptions, um, which I'll ask you guys about in a second. And, um, you don't need to know the dynamics, so no dynamics. No reward. No need to be Markov. Can anybody, um, [NOISE] tell me case where maybe this doesn't work. So, I was just seeing that if you use this for running, then starting from your initial policy to your final policy, they couldn't be that different, right? Because otherwise then you- the samples [BACKGROUND] from the previous strikers won't be useful. That's a good question. That's exactly what I was asking about is, you know, how different can Pi e and Pi b [BACKGROUND] and allow us to do this. So can- that was exactly what I was about to ask you guys about. So, can anyone give me, uh, what they think might be a condition for this estimator to be valid. 
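Putting the pieces above together, here is what the trajectory-wise (ordinary) importance sampling estimator of V pi_e might look like in code. It is only a sketch under the assumptions in this lecture: pi_e and pi_b are assumed callables returning action probabilities, each trajectory is a list of (state, action, reward) tuples, and pi_b(a, s) is positive wherever it is evaluated.

```python
import numpy as np

def ordinary_is_estimate(dataset, pi_e, pi_b, gamma=1.0):
    """Trajectory-wise importance sampling estimate of V^{pi_e} from data
    gathered under pi_b: weight each return by the product of per-step
    probability ratios pi_e(a|s) / pi_b(a|s)."""
    estimates = []
    for trajectory in dataset:                    # list of (s, a, r) tuples
        rho, g = 1.0, 0.0
        for t, (s, a, r) in enumerate(trajectory):
            rho *= pi_e(a, s) / pi_b(a, s)        # needs coverage: pi_b(a|s) > 0
            g += gamma**t * r
        estimates.append(rho * g)
    return np.mean(estimates)
```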
Like, where might be some cases where you would expect this might do badly in terms of differences between the evaluation policy and the behavior policy? And it has to do with the probability of taking actions in a certain state. If either of those probabilities are too small, you are gonna have things blow up in bad ways. Yeah, so in which that, either of these probabilities are too small. This might be bad. Which of these ways is worse? If they- uh, it's not [inaudible]. Right. So pi b is really small or at, you know, 0 [LAUGHTER]. Um, this could be really bad now. Um, pi b can't ever be 0 and us observe something. So that's good, because we're getting data from pi b. And so we have never observed a trajectory under which pi b is 0 but it could be really, really small. So it could be, you know, you see something and it's incredibly unlikely there, but your behavior policy would have generated that. Um, and what if it is 0? So it might be 0 for some actions. What would that do if it's 0 in places where pi e is not 0 ? So what happens if pi b of a some particular a is equal to 0, uh, but pi e of a greater than 0 is, you know, greater than 0, might be 1. [NOISE] What might be bad there? Like do you think this is a- let's raise hands for a second. If that happened, do you think we are hu- let's raise your hands, either yes if you think this is a good estimator or no if you think, oh, that's rough. So if the behavior policy is 0 probability of taking an action, but the evaluation policy has a positive probability of taking an action. Raise your hand if you think, um, this estimator could be really bad. That's right. Yes. So, you know, if there are cases where you're just never trying actions, like you never saw actions in your data that your new evaluation policy would take, you can't use this. So we often call this as coverage. So coverage or support. So we often make a few basic assumptions in order for this to be valid. So our coverage or support normally means that pi b of a given s has to be greater than 0 for all a and s, such that pi e of a given s is greater than 0. So you kind of have to have support over. It doesn't have to be non-zero everywhere. But for anything that you might want to evaluate, for anywhere if you're really going to generate data from your evaluation policy and it might take an action, you need to be able to get to that state and you need to be able to take that action with some non-zero probability. Yeah. So terminology questions. So, we're calling pi b is the same as pi 2 in the other part, right? That's right. Okay. Originally I thought you said that the evaluation policy was the one that you observed the data from. That's incorrect? Thanks for making me clarify that. Um, the- the- the behavior policy is always one you observe data from, evaluation is the way you wanna look at. And, I apologize, notation often here is a little bit [NOISE] snaggly because I think, um, people sometimes call the evaluation policy the target policy or evaluation policy or, you know, one or two, um, and most of the time, the policy used to gather the data is called the behavior policy. Yeah. [inaudible] you just like not include it in the product. Question is whether or not what if they're both 0, um, is that a problem? Would you ever get data from that? So, yes. So if they're both 0, it's okay. So you don't actually have to- it does not require you to take- to have a non-zero probability of taking every action in every state. Um, so it can be okay. 
So pi b of a given s can equal 0 as long as pi e of a given s is also equal to 0 - it's fine for that to be the case. You just can't have any case where you would never have reached a state or generated an action that you could have potentially taken under your evaluation policy. And that doesn't sound too strict, but in practice it can be a big deal. So if you think about Montezuma's Revenge, or different forms of Atari: under a random behavior policy, you're never gonna get to see a lot of states, and you're never gonna take actions in those states. It's incredibly unlikely, unless you have an enormous amount of data. So in practice, you can think of the behavior policy you have as kind of defining a ball [NOISE] under which you can evaluate other potential policies. It's not actually a sphere, but if you have a behavior policy here, you can think of having some distance metric under which you can still get good estimates of pi e. So you kind of have a radius, and inside it are essentially the policies for which you have support, and anything else you can't evaluate. Okay, all right. So just to summarize there: importance sampling is this beautiful idea that works in lots of statistics, including for reinforcement learning. I think it was first used for RL in Doina Precup's paper, Precup 2000? Around then. It's been around for a lot longer, but for reinforcement learning I think that was the first introduction of using this. And of course these ideas also come up in policy gradient type methods. And the great thing is this is an unbiased and consistent estimator. Just as a refresher, consistent means that asymptotically it really will give you the right estimate. So as n goes to infinity, our estimated V pi e goes to the true V pi e. Just kind of a nice sanity check: as you get more and more data, you will get the right estimate. And just to check here, this is under a few assumptions - you have to have support. [BACKGROUND] All right. Now, in our particular case, we can leverage a few aspects of the fact that this is a temporal process. So again, like what we saw for policy gradient methods, we can leverage the fact that the future can't affect past rewards. So when we think about generating these importance ratios, for a particular time step t we only have to multiply by the ratios up to that step, instead of the whole product. So I guess just to back up there: remember that Gt is defined to be, [NOISE] you know, the sum of rewards. So when we think about this equation for importance sampling - let me just go back to here - this could be expanded into r_1 + gamma times r_2 + gamma squared times r_3, dot, dot, dot. And right now in that equation, we're multiplying the full product of importance ratios times each of those rewards. So it's the same ratio of action probabilities multiplied by each of these different reward terms. But, you know, r_3 can't be affected by any actions that are taken after it. So in some ways - this isn't wrong, it's just that you're introducing additional variance.
So similar to what we saw, um, in policy gradient stuff, we don't actually- we only have to multiply by that product of ratios, up to the time point at which we got that reward. So this allows us to get to per-decision importance sampling. So this is only up to- only [NOISE] up to point got reward [NOISE] because the- the future can't inputs past rewards. And again, this is independent of it being Markov or not. So this is just the fact that it's a temporal process and we can't go back in time. All right. So another thing just in general is that, um, in importance sampling, um, we're just sort of these weights, these weights to these products. You know, products of like, um, picking an action under different policies. So we often call these weight terms nicely and confusingly with all the weights we talked about with function approximation. Um, and weighted importance sampling compared to importance sampling just renormalizes by the sum of weights. The reason you might wanna do this is that as we were talking about before, if your pi_b is really tiny. So let's say this, this might be super tiny, super small [NOISE] for some trajectories, then that can mean that your importance weight for those trajectories is enormous. In fact, we have, um, a proof that, you know, that generally the size of your importance weights can grow exponentially with the horizon. Um, uh, and so these importance weights can be incredibly large in some cases. And so what weighted importance sampling does is it just renormalizes. So effectively, you're making it so all your importance weights are somewhere between 0 and 1. Then you're using that when you're reweighting your distribution. So when you do this, this is something that's very common, um, to do again this pre- predates the reinforcement learning side, but has also been used in reinforcement learning. Um, this is, uh, this is biased, um, still consistent. [NOISE] So that means asymptotically it's going to get to the right thing and lower variance. [NOISE] So we're essentially going to play, um, the bias variance trade-off that we've often seen. We can make Q learning versus Monte Carlo estimates. Monte Carlo were unbiased but very high-variance. Q Learning bootstrap so it's biased but often much better because you're so much lower variance. Weighted importance sampling is much lower variance empirically much, much better, most of the time, particularly for small amounts of data. Yeah. Yeah. I was wondering if you could comment on, um, you know, before you're saying that we were intentionally designing something to be unbiased. So we're going to ignore certain techniques and now we're reintroducing bias at the end? Right. I guess, what is the intuition behind when it's okay, I guess, to introduce bias and like when and why? It's great one. Okay. So you just made a big deal before about saying let's go for unbiased estimators and now you're telling us that we're going to go back to bias in, [LAUGHTER] that- that case, you know, how do we make decisions about when this is okay? Um, I think it totally depends on the domain. Uh, and it als- I think one challenge and issue that comes up is, if things are unbiased, it's often easier to have confidence intervals around them. We know better how to do that, um, when it's biased, it's often hard to quantify. Um, I think, uh, I'll talk briefly a little bit later about times where we really just want to directly optimize for bias plus variance. 
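To make the two refinements just described concrete, here is a rough sketch of per-decision importance sampling and of trajectory-wise weighted importance sampling, under the same assumed interfaces as before. Note that the fully weighted per-decision variant normalizes each time step separately; that detail is omitted here for brevity.

```python
import numpy as np

def per_decision_is(dataset, pi_e, pi_b, gamma=1.0):
    """Per-decision IS: each reward is multiplied only by the importance
    ratios of the actions taken up to and including that time step."""
    estimates = []
    for trajectory in dataset:
        rho, value = 1.0, 0.0
        for t, (s, a, r) in enumerate(trajectory):
            rho *= pi_e(a, s) / pi_b(a, s)
            value += gamma**t * rho * r
        estimates.append(value)
    return np.mean(estimates)

def weighted_is(dataset, pi_e, pi_b, gamma=1.0):
    """Weighted IS: normalize by the sum of importance weights instead of n.
    Biased, but consistent and usually much lower variance."""
    weights, returns = [], []
    for trajectory in dataset:
        rho = np.prod([pi_e(a, s) / pi_b(a, s) for (s, a, _) in trajectory])
        g = sum(gamma**t * r for t, (_, _, r) in enumerate(trajectory))
        weights.append(rho)
        returns.append(g)
    return np.dot(weights, returns) / np.sum(weights)
```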
Like we want to look at accuracy - mean squared error, which is just the sum of the squared bias and the variance. And so then it provides you a way to directly trade off between those two, because you're like, I know I want to minimize the mean squared error, and that's the sum of these two, so that gives me a principled way to trade them off. I think another thing I often like, just as a sanity check, is that if it's biased but still consistent, that's a nice property. It's like, okay, maybe there's a small amount of bias early on, but eventually, if I get a lot of data, it's really giving me the right answer. And for some of the function approximation stuff we saw for Q-learning, that's not true - all bets are off, who knows what's happening asymptotically. But again, it depends on the domain. It's a great question. It's a big challenge in that area. Okay. So as I was just saying, in this case, weighted importance sampling is strongly consistent. You're going to converge if you have a finite horizon, one behavior policy, and bounded rewards - and if you look at Thomas and Brunskill, [NOISE] Phil and I think about this for the RL case in our ICML 2016 paper; that's one reference for this. Okay. So we're going to have these estimates. If we're using weighted importance sampling, there might be a little bit of bias, but lower variance. Otherwise, there might be high variance but low bias, or zero bias. What's something else we could do? So let's briefly talk about control variates, and be thinking in your head back to policy gradients and baselines. So just from a statistics perspective: we have a random variable X, we have the mean of that variable - if our estimator is unbiased, our estimate of the mean matches the true expectation - and then we have our variance. So let's imagine that we just kind of shift these estimates. We're going to subtract a random variable Y, and in your head you can think about Y as being like a Q function, and the expected value of Y as being like a V - an average over all the actions you could take in that state. So then, if you redefine your estimate mu - where X is the quantity we're using to try to estimate V_pi_e - [NOISE] and you subtract off something else and add on its expectation, you still get an unbiased estimator. [NOISE] So we can just write that out here. The expected value of X - Y + E[Y] is equal to the expected value of X, minus the expected value of Y, plus the expected value of Y - which is just the expected value of X. So you can do this in statistics: you can subtract a variable and add on its expectation, and on average that does not change the mean of your original X. But you might ask, "Why would I do that?" [LAUGHTER] So you can do this for any random variables X and Y; here Y is called a control variate. And this may be useful if it allows you to reduce variance. And that means that Y has to have something to do with X. If you just subtract off something random, this is probably not going to be helpful.
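Here is a tiny numerical illustration of the control-variate identity just stated, before we get to why the choice of Y matters: subtracting a correlated Y and adding back its known expectation leaves the mean alone but can shrink the variance. The numbers below are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# X is the quantity whose mean we want; Y is a correlated control variate
# whose expectation we know exactly (here E[Y] = 0 by construction).
n = 100_000
y = rng.normal(0.0, 1.0, size=n)
x = 3.0 + y + 0.5 * rng.normal(size=n)      # E[X] = 3, X correlated with Y

plain_estimate = x.mean()                   # ordinary Monte Carlo estimate of E[X]
cv_estimate = (x - y + 0.0).mean()          # control-variate estimate, using E[Y] = 0

# Both are unbiased for 3, but Var(X - Y) = 0.25 while Var(X) = 1.25,
# because 2 * Cov(X, Y) = 2 exceeds Var(Y) = 1.
print(plain_estimate, cv_estimate, x.var(), (x - y).var())
```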
But if you subtract off something that gives you some insight into X - in our case we're gonna be interested in things that help us predict the value - then we might get lower variance. So we can see this by looking at the variance of this quantity mu hat, which was X - Y + the expected value of Y. [NOISE] The variance of the expected value of Y is zero - there's no more variability in that - so this is just the variance of X - Y. [NOISE] And that means we're gonna get the variance of X, plus the variance of Y, minus twice the covariance of the two. So if it is the case that twice the covariance of X and Y - meaning there is a relationship between these two variables, one of them is giving us information about the other - is bigger than the variance of Y, then you're going to have a win. If that's true, the variance of mu hat is gonna be [NOISE] less than the variance of X. So this is nice, because it means that we didn't change the mean and we reduced the variance, which in some ways kind of seems like a free lunch. But it's not really a free lunch, because we're using information that is actually telling us something about X. And this is very similar to using a baseline term in policy gradients, where instead of just relying on the Monte Carlo returns, we could also subtract off a baseline like the value function. [NOISE] So you can do this in importance sampling as well, where X is the importance sampling estimator and Y is some control variate. Typically this can be a Q function which you build from an approximate model of the Markov decision process, or a Q function from Q-learning - some estimate [NOISE] of state-action [NOISE] value, okay. And doing this is called a doubly robust estimator. Doubly robust estimators have been around in statistics for a long time, and around 2011 they were brought over to the machine-learning multi-armed bandit community, with Dudik et al. And why do they call it doubly robust? Well, the idea is that if you use information both from your normal importance sampling, plus some control variate like a Q value estimate, you can be robust to either of those being wrong: either a poor model, or a bad control variate. [NOISE] So why would this be important? Well, if we go back to some of the earlier questions, an alternative is just to do Q-learning on your data, right? But Q-learning might be biased, or it might be a horrible estimate, and who knows if it's good? But it might be good. So if it's good, you'd like to be able to benefit from that, and if it's bad, you'd like your importance sampling to compensate for it. And this says you can tolerate either a bad model, or error in your estimates of pi_b - so this is like those cases where we've got data from physicians, where we don't really know what the behavior policy is. If you have inaccuracies there but it turns out your control variate is accurate, then you can still do well.
Now, if- in some cases both of these are bad and in that case, kind of all bets are still off. But it gives you more robustness about different parts of what you're you're trying to estimate how good your evaluation policy is. Okay, and Bill and I discussed sort of different ways to do this as well as doing it in a weighted way, so incorporating weighted importance sampling. So what does this allow you to do? Okay, I'll- I'll briefly show the equation. Then essentially, the idea here is these are like the importance sampling weights. This is the raw returns and then we can add and subtract. This was Y and this is the expected value of Y and these can be computed by computing like Q learning or doing like an empirical model and doing dynamic programming on it. You could get these sort of estimates of Q of pi E in lots of different ways, um, and they might be good or they might be bad. But you can plug them in and often, they're gonna end up helping you in terms of variance. So let's see empirically what this does. [NOISE] So this is a really small Gridworld. Think something on the order of like, you know, maybe five-by-five, four-by-four. This is a really small world and we're using it to try to illustrate and understand, um, the benefits of these different types of techniques. So what's on the x-axis? This is the amount of data. So this is the size of the dataset. What's on the y-axis? This is mean squared error. This is the difference between our estimate of the evaluation policy and the true evaluation policy, and you want this to be small. So smaller, better and this is a log scale. [NOISE] So what do we see here? So one thing you could do is you could build an approximate model with your data. That model might be wrong like maybe, you're making a Markov assumption and it's wrong. Or maybe, um, you know, there's other parts where you just can't estimate well. So this is the model-based. So this is just we use a model, and we compute V pi e for that model. So we take our data, we build an MDP model, then we, um, use that to- then that's like a simulator and then we can just compute V_pi e. So you can see here it's flat. Um, in this case, um, I'd have to remember here. I think we are using a different dataset. Either I- I would have to double check whether or not just after we have that number of episodes, the model just doesn't change with further data. Like the model just isn't great. Maybe we're not using all the features that we should be, um, or the world isn't really Markov, and so you kind of have this fixed bias. Your model can be asymptotically wrong. The estimates you get from it can be asymptotically wrong if your model is not a good fit to the environment. [NOISE] The- the second thing we can do is we can do importance sampling. So importance sampling is unbiased. It's going down as we get more data as we would hope. Eventually, it should collapse to 0. Um, but we'd like to do better than that. So now this is per decision importance sampling. You can see you get a benefit from leveraging the fact that rewards can't be influenced by future decisions. That reduces the variance, kind of gives you this nice kind of automatic shift down. Um, if you do doubly robust, you get a significant bump. So what's doubly robust doing again? It's combining our approximate model plus IS. So you can see again, here we're getting, ah, a significant bump. Now, I talk about this mostly in terms of mean squared error but I think it's really important to think about what mean squared error means. 
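For concreteness, here is one common per-decision form of a doubly robust off-policy estimator, sketched under the same assumed interfaces as before, with q_hat and v_hat standing in for whatever approximate state-action and state value estimates you have (for example from an approximate MDP model). The exact estimator used in the paper discussed here may differ in its details.

```python
import numpy as np

def doubly_robust(dataset, pi_e, pi_b, q_hat, v_hat, gamma=1.0):
    """Per-decision doubly robust OPE: use the approximate model's values as a
    control variate, and importance-weight only the residual r - q_hat(s, a)."""
    estimates = []
    for trajectory in dataset:
        rho_prev, value = 1.0, 0.0                # rho_prev = ratios up to step t-1
        for t, (s, a, r) in enumerate(trajectory):
            rho = rho_prev * pi_e(a, s) / pi_b(a, s)
            value += gamma**t * (rho_prev * v_hat(s) + rho * (r - q_hat(s, a)))
            rho_prev = rho
        estimates.append(value)
    return np.mean(estimates)
```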
Um, so mean squared error here is: how far is our estimate of V_pi_e from the true value? But we can alternatively think about this in terms of how much data we need to get a good estimate. So look at this. Say you want a mean squared error of 1 - maybe that's sufficient, maybe it's not. That means that under a per-decision importance sampling estimator, you would need about 2,000 episodes, and with doubly robust you'd need less than 200. So that's awesome, because it means that we need like an order of magnitude less data in order to get good estimates, and in a lot of real-world applications we just don't have enough data. You know, there might not be a lot of patients with a particular condition, and you'd really like to still be able to make good decisions for them, so you need estimators that need much less data to give you good answers. And that's why this is important. Okay? Right. And then in these cases we can see also that if you do weighted importance sampling [NOISE] and weighted per-decision importance sampling, that also ends up helping a lot. [NOISE] And here is what happens if you do weighted doubly robust - again, sort of answering that question of how you trade off between bias and variance. Here we can see that if we do a form of weighted doubly robust, which is one of the things we introduced in our ICML paper, you again get a really big gain. So we went from here to there, and from there to there, right? So now you're needing like five episodes to get the same accuracy. So again, some of these improvements - this is of course Gridworld, right? One needs to look at this for the particular domain one's interested in. But it indicates that in some of these cases, by being a little bit better in terms of these estimators, you can get substantial gains in how accurate your estimates are. [NOISE] All right. Again, to continue on this question of how you balance between variance and bias: one thing that Phil and I thought about is, okay, you might want to have low bias and variance, and how do you do this trade-off - let's just think about optimizing for it directly. So our MAGIC estimator just tries to directly minimize mean squared error. So again, mean squared error is a function of bias and variance. So if you knew what the bias and variance were, you could hopefully just optimize this directly. Do we know what the bias is? So bias, just to remember, is the difference between the expectation of our estimator and the true value. Do we know that? No - unfortunately, if we knew the bias, then we wouldn't have to do anything else, because we would know exactly what the real value was. So a big challenge when we're trying to do this work is: how do you get a good estimate of the bias, or at least an under- or overestimate of it? And just briefly, the idea that we had there is to say: you can get confidence intervals, which you can do using importance sampling. So let's say I have my estimate from importance sampling, and I have some uncertainty over it - some upper and lower bounds. [NOISE] Say the value is 5 plus or minus 2. Okay. And then let's say I have a model that I built from this data, and I used it to evaluate and got another estimate up here.
So I have a V_pi, and this is using a model. Okay, so let's say this is 8. So what is the minimum bias my model has to have? How could you use these confidence intervals to get like a lower bound on how- how bad my- assume these confidence intervals are correct. Now that these are real confidence intervals so we know the real value has to be between 3 and 7. What's the- what's the minimum bias that my model would have? One right, because it's this difference, okay. So this gives you a lower bound on the bias. So how far off you are from these confidence intervals that gives you a lower bound on the bias? It's optimistic, your bias- your model might be way more biased. Um, uh, but it gives you a way to quantify what that bias is, and that's what we use in this approach. So we combine our importance sampling estimators and think about how variable they are. We have to get an estimate of their variance, um, as well as the bias on the model, and that allows us to figure out how to trade off between these. And again, you get, um, you get a really substantial gain often. This is still Gridworld but, um, you're gonna get again [NOISE] roughly an order by magnitude difference in some domains. You're gonna need an order of magnitude less data. And in this case, I've just zoomed in so you don't even see some of these other methods because they're so much higher up. [NOISE] Okay, so, you know, that's one thing that we could do to try to get sort of good off-policy policy evaluation estimates. Um, I haven't talked to you too much so far about like how are we gonna get these confidence bounds over those. But I've mentioned sort of a number of different ways that we could try to get just an estimate of, you know, V_pi E. So we want to get some estimate of this new alternative policy that we might wanna unleash on the wild. Um, I'll- I'm gonna skip this part, I'll just say briefly, you know, there are some subtleties here with whether or not, um, ah, you know, what's the support of your behavior policy, um, and how we do some different weighting and can we improve over this sort of weighted importance sampling? Um, [NOISE]. The answer is, is yes. You can do some slightly different weightings, um, and I'll- I'll defer that. And then also, another really important question, really important in practice is that, um, your distributions are often non-stationary. You know, imagine that like you're looking at patient data and during that time period like some new food pyramid came out from the Food and Drug Administration and so everyone changed how they're eating. Um, so now that, you know, the dynamics of your patients are gonna be really different than before. So you'd like to be able to identify whether or not you have sort of non-stationarity in your data-set. Like if the dynamics model of the world is changing. So we have some other ideas about how to handle that. [NOISE] Okay, but now let's go to and say, let's assume we've done this off-policy policy evaluation. We've gotten out some estimate, um, of these, how good our alternative policy would be, and we want to go beyond that and we want to get some confidence over it. So again remember we're trying to move to a world where we can say, you know, the probability that the policy output by our algorithm being better than our previous policy, [NOISE] is high. 
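That argument can be captured in a few lines: if the importance sampling confidence interval really contains the true value, any model estimate outside it must carry at least that much bias. The helper below is just a restatement of the 5 plus or minus 2 versus 8 example, with made-up names.

```python
def bias_lower_bound(model_estimate, is_lower, is_upper):
    """If [is_lower, is_upper] truly contains V^{pi_e}, a model estimate
    outside that interval is biased by at least the distance to it."""
    if model_estimate > is_upper:
        return model_estimate - is_upper
    if model_estimate < is_lower:
        return is_lower - model_estimate
    return 0.0

# Lecture example: IS says 5 +/- 2 (so [3, 7]), the model says 8 -> bias >= 1.
assert bias_lower_bound(8.0, 3.0, 7.0) == 1.0
```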
So with high probability we're gonna give you a policy that's better, which means not only do we need an estimate of how good that policy is, but also of how much better it is or not than your behavior policy. Okay. How would we do that? So let's first consider using importance sampling plus Hoeffding's inequality - again, think back to what we were doing with exploration - to do high-confidence off-policy policy evaluation. So just as a refresher, mountain car is a simple control task [NOISE] where your agent is trying to reach the top of the hill, where it gets high reward. And we're gonna have data gathered from some behavior policy, and we want to try to learn a better policy from it and be able to get confidence bounds over its performance. Okay, so remember that Hoeffding's inequality is a way to bound how different your empirical mean can be from your true mean. [NOISE] And it gives you a bound on that in terms of the amount of data you have - it's a function of the amount of data - and it depends on the range of your variables. So think about this for bandit arms, which might have rewards between 0 and 1; then b would be 1. Okay, we also talked briefly about using Hoeffding's inequality for model-based reinforcement learning. Let's think about using it in the context of this off-policy evaluation. So we can use it with our old data to try to estimate what our upper or lower bounds might be on the value of the evaluation policy. [NOISE] So let's imagine that we use 100,000 trajectories, and the evaluation policy's true performance is 0.19. And if we use Hoeffding's inequality, we're very confident that the new policy has value at least -5 million. Okay. And we know that the real reward is somewhere between 0 and 1. But Hoeffding's inequality gives us this bound of -5 million, and that's true, right? Like, [LAUGHTER] 0.19 is greater than -5 million. But it's not particularly informative. We know that the real returns for this domain are somewhere between 0 and 1, and if we use Hoeffding's inequality there, we're getting something that we'd call vacuous. You're getting a bound that is true but entirely uninformative, because it is incredibly negative - we know the true value for all policies is between 0 and 1. [NOISE] Okay. So why did this happen? [NOISE] Let's look at importance sampling. Importance sampling says we're gonna take this product of weights. And as we've talked about before, the behavior policy's probabilities might be pretty small - let's imagine something like 0.1 per step, so then you have 10 to the L. Like, say every single action of that trajectory was pretty unlikely, [NOISE] which you often need in domains like mountain car, because in mountain car you have to take a pretty specific sequence of actions in order to finally see some reward at the end, and under a policy [NOISE] that is not optimal, it might be pretty unlikely to see that series of actions. So let's say in most of your data you never get up the hill, and in like one or two of your data points you actually get up to the top of the hill, and those were very rare trajectories. Which means your importance sampling weights are gonna be extremely high.
It's gonna be this, you know, 1 divided by 0.1 up to the L. [NOISE] That's just enormous you know. Um, so these can start to be incredibly large. And Hoeffding's Inequality depends on this. The range of the potential returns you have. What are the range of our potential returns? The range of our potential returns are g times or product of i equals 1 to t of our importance weights. [NOISE] So b is equal to max over this. It says, "In the maximum case, what could it look like, your return is?" [NOISE]. And so it's gonna depend most on what your real range is. Our real [NOISE] b is going to be between 0 and 1, and this product of importance sampling weights, and that's where the problem is. The product of importance sampling weights can be enormous. Okay. Because you have really unlikely sequence of actions, and then you get this blow up. All right. So if we look at that here, we can get this distribution, um, and some of the, some of these are incredibly large, and that means that our Hoeffding Inequality ended up being, because Hoeffding again is you subtract b, basically -b times square root of 1 over n. So if b, let's say- let's say, your trajectory lengths are something like 200, which is pretty, somewhat reasonable for Monte Carlo, I'm sorry, for mountain car. Then you'd have something like 1 over 0.1 to the 200 times 1, times the square root of 1 over n, roughly right? And so [NOISE] you'd have just this crazy, crazy large term and you're subtracting this. So that would mean that your bounds are vacuous, basically I have no idea how good this evaluation policy would be. So does anybody have any questions about that, about why that issue occurs? Okay. So the insight that Phil had in some of his previous work is, just get rid of those, just cut it down. Um, so if you remove this tail, what does that do to your expected value? It just decreases it. So if you ignore those like really, really crazy high returns [NOISE] you're not gonna get an estimate anymore that's as good, but it's just gonna get smaller. You're not gonna overestimate it. So again, if we're thinking about say policy improvement, we're concerned about deploying policies that are worse than we think they are in practice. What this is gonna do, is say, we're gonna underestimate how good our behavior policy is, or our evaluation policy is. And so if we underestimate it, that's okay because that's safe, like if we don't deploy things that actually would have been good, maybe there's a lost opportunity cost, but it's not gonna be bad for the world, um, so that's what the insight was, for here is that you can like remove this upper tail. And so you don't need to read the proof, ah, um, but the idea is that you can basically define a new confidence interval, that is conservative [NOISE]. And you can think about how you choose that conservatism, depending also on the amount of data you have. So here's the beautiful take-home, um, so let's say that you kind of use 20% of your data to figure out exactly how to tune this confidence interval, so this is sort of sets your confidence interval. And then your next part to compute your lower bound, so for mountain car with the same amount of data, you've got 100k trajectories. So this is the new estimator, and you get the, um, the mu, the estimator of the V pi, so this is a lower bound on V pi e. [NOISE] It says it's gonna be at least 0.145, and the true value is 0.19. 
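Here is a rough sketch of both halves of that argument in code: the plain Hoeffding lower bound, whose range term b blows up with the product of importance weights, and a clipped version that removes the upper tail. The actual estimator referenced here tunes the threshold on held-out data and uses a more careful analysis; this is only the core idea, with made-up function names.

```python
import numpy as np

def hoeffding_lower_bound(samples, b, delta=0.05):
    """With probability at least 1 - delta, the true mean of variables in
    [0, b] is at least: empirical mean - b * sqrt(log(1/delta) / (2n))."""
    n = len(samples)
    return np.mean(samples) - b * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def clipped_lower_bound(weighted_returns, c, delta=0.05):
    """Clip the IS-weighted returns at a threshold c before applying Hoeffding.
    For non-negative returns, clipping can only lower the mean, so the result
    is still a valid lower bound, but the range shrinks from the worst-case
    product of importance weights down to c."""
    clipped = np.minimum(np.asarray(weighted_returns), c)
    return hoeffding_lower_bound(clipped, c, delta)
```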
And this is compared to all these other forms of concentration inequalities, which were all, except for Anderson, really, really bad [NOISE]. So things like Chernoff-Hoeffding and these other ones you don't have to be familiar with all of these. But basically, it just says if you'd used other approaches, to try to get this lower bound [NOISE] they would have been entirely vacuous, whereas this one says, "Okay. We're not sure exactly how good it is. It's gonna be at least 0.145, and the real value is 0.19." It's not perfect, but you know, if- if your behavior policy was 0.05, that would be good enough to say you should use a new thing [NOISE]. So, um, they use this idea for digital marketing, this is some work that Phil had done in collaboration with some colleagues over at Adobe. Um, and the nice thing about this is you can say, you know, if I want it, I'll figure it out, if I [NOISE] am gonna deploy something, and get, um, more effective digital marketing, and I have access to our previous data. [NOISE] Can I say with what confidence, I can deploy something that's gonna, you know, generate higher clicks, get more revenue. [NOISE] And these confidence bounds turn out to be tight enough that you can actually know that the new policy is gonna be better, which is pretty cool. [NOISE] [inaudible] Yeah. You can also, so this is one form of trying to get those confidence bounds, turns out you can also use t-tests and empirically that's often very good. Um, and some of you guys might have seen some of these in, ah, some of your statistics class. I'll just really briefly take, because we're almost out of time, that you can combine these ideas, and then think about trying to get these lower bounds, um, here, and combine it with optimization. So you can think about doing this for a number of different policies, trying to compute lower bounds on all of them. And then using that information to try to decide which one to deploy in the future, in a way that is safe. Okay. So you can sort of say, "I'm gonna use some of my data to optimize, some of my data to, um, ah, to try to evaluate the resulting one, and make sure that it's got a good confidence bound before I deploy it. [NOISE] And again, you can do this in digital marketing, some of the other work that Phil Thomas and I have looked at is, using a diabetes simulator, and looking at whether or not we can infer higher-performing policies for things like [NOISE] insulin regulation, um, ah, using similar ideas in something that in a way that you could be, um, with high-confidence better before you deploy it. I'm gonna skip briefly through this. This- this is a really big ba- ah, like this is an increasingly, ah, big area of work in the community. Um, I think a lot of people are thinking about, it is counterfactual reasoning issue because we have more and more data from electronic medical records systems, that we'd like to use to improve patient health. We have data, um, you know, on- on online platforms etc. There's a lot of additional challenges, things like how do you deal with long horizons? Um, [NOISE] the fact that importance sampling can be unfair, ah, [NOISE] what do I mean by that? I mean that, essentially, different policies when you evaluate them, might have different amounts of variance depending on how well they match to your behavior policy. Because of that, it may be hard to decide which of those you should deploy. Um, we have various work thinking about when the behavior policy is unknown. 
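Sketching that last idea in pseudocode-style Python: search over candidate policies using lower bounds computed on one slice of the data, certify the winner on a held-out slice, and refuse to deploy anything if the certificate fails. Everything here (the function names, the exact split) is illustrative rather than the exact procedure from the cited work.

```python
def safe_policy_selection(candidates, train_data, test_data,
                          lower_bound_fn, v_behavior, delta=0.05):
    """Pick the candidate with the best high-confidence lower bound on the
    training slice, then only return it if its lower bound on the held-out
    slice still beats the behavior policy's value; otherwise say 'no'."""
    best = max(candidates, key=lambda pi: lower_bound_fn(pi, train_data, delta))
    if lower_bound_fn(best, test_data, delta) > v_behavior:
        return best
    return None   # not enough evidence to safely deploy a new policy
```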
Where we combine these ideas with deep neural networks, um, and we're also thinking about transfer learning. So I know Chelsea talked about meta-learning on Monday. Um, one of the interesting ideas here is, you're building these forms of models. Can you kinda use the same ideas of fine tuning in the reinforcement learning case? So can you think about building models for off-policy evaluation that leverage data that does not match the policy you care about, in order to get generally better models? For things like health care, I think that can be pretty helpful. [NOISE] But there's a lot of additional work on this, and there are a number of other groups that are thinking about these types of ideas. Um, and if you're interested in these ideas in general, there are also a number of great colleagues on campus that are thinking about this from other perspectives. People coming at it from the perspective of economics, or statistics, or epidemiology. And it's been really fun to get to collaborate with these people as well. So just to pop up a level briefly, um, the goal in these cases is to think about, if you have some set of data, how we're gonna go from data to a good new policy [NOISE]. Okay. And you want it to be good in a way that you know something about the quality of it before you deploy it. And so that's really what safe off-policy policy evaluation and selection, or optimization, is about. And so in terms of the things that you should understand from here, you should be able to define and apply importance sampling, [NOISE] know some limitations of importance sampling, [NOISE] list a couple of alternatives, know why you might want to be able to do this sort of safe reinforcement learning, like what sort of applications this might be important in, [NOISE] and be able to define what type of guarantees we're getting in these scenarios. So that's it, and then next week, we'll have the quiz on Monday, and then on Wednesday we'll talk about Monte Carlo tree search. Thanks. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_10_Policy_Gradient_III_Review.txt | So before we get started, I'm just gonna say a brief note about the logistics for the midterm. We're gonna be split across two rooms, and the room you're in depends on the first letter of your Stanford ID. We'll send an email out about this to confirm it. But we're gonna be in either Gates B1 or Cubberley Auditorium, and it depends on the first letter of your Stanford ID. In addition, you're allowed to have one page of notes, typed or written is fine, one sided. Anybody have any other questions about the midterm? Okay, well, reach out to us on Piazza if you have any questions about the midterm. Um, what we'll do today is we're gonna finish up the rest of policy gradient. So in terms of where we are in the class right now, we are almost done with policy search. We're gonna have the midterm on Wednesday, Monday is a holiday, and then we'll also be releasing the last homework this week, which will be over policy search. And so we're gonna have policy search, and then we're gonna have the project. That's the remaining sort of main assignments for the term. And then we're gonna be getting into fast exploration and sort of fast reinforcement learning after we come back from the midterm. So I wanted to make sure to get through policy search today because you're gonna have the assignment released later this week. So we'll spend hopefully around 20 to 25 minutes on policy search, and then we're gonna do a brief review of the midterm material. Does anybody have any questions? Oh, and just a friendly reminder to please say your name whenever you ask a question because it helps me remember. It also helps everybody else learn your names as well. All right. So where we were is that for the last couple of lectures, we've been starting to talk about policy-based reinforcement learning, where we're specifically trying to find a parameterized policy to learn how to make good decisions in the environment. And so just like what we saw with value function approximation and what you're doing with Atari, for our policy parameterization we're gonna assume there's some vector of parameters. We can represent policies by things like a softmax or by a deep neural network. And then we're gonna wanna be able to take gradients of these types of policies in order to learn a policy that has a high value. So we introduced sort of the vanilla policy gradient algorithm, where the idea is that you start off, you initialize your policy in some way, and you also have some baseline. Then across different iterations you run out your current policy, and the goal is that, by running these out, we're gonna be able to estimate the gradient. So we're gonna be doing this part to estimate the gradient of our policy at the current parameters, so we wanna get sort of dV d-theta with respect to our current policy. So what we talked about is that you would run out trajectories from your current policy. So you'd use your policy to execute in the environment. You would get state, action, reward, next state, next action, next reward. And then you would look at returns and advantage estimates, which compare your returns to a baseline, refit your baseline, and then you could update your policy. And so this was sort of the most vanilla policy gradient algorithm we talked about.
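As a reference point for the "vanilla" algorithm being recapped, here is a compact sketch of Monte Carlo policy gradient (REINFORCE) with a simple baseline. It assumes a Gym-style environment (`env.reset()` / `env.step()`), a discrete-action `policy(theta, s)` that returns action probabilities, and a user-supplied `grad_log_pi(theta, s, a)`; all of those names, the running-average baseline, and the step size are illustrative choices, not the exact implementation used in the course.

```python
import numpy as np

def run_episode(env, theta, policy, horizon=200):
    """Roll out the current policy; return a list of (state, action, reward)."""
    s, traj = env.reset(), []
    for _ in range(horizon):
        probs = policy(theta, s)                    # e.g. softmax over discrete actions
        a = np.random.choice(len(probs), p=probs)
        s_next, r, done, _ = env.step(a)            # assumes a Gym-style API
        traj.append((s, a, r))
        s = s_next
        if done:
            break
    return traj

def reinforce_with_baseline(env, policy, grad_log_pi, theta,
                            alpha=1e-2, gamma=0.99, iters=1000):
    baseline = 0.0
    for _ in range(iters):
        traj = run_episode(env, theta, policy)
        # Monte Carlo returns G_t from each timestep to the end of the episode.
        G, returns = 0.0, []
        for (_, _, r) in reversed(traj):
            G = r + gamma * G
            returns.append(G)
        returns.reverse()
        baseline = 0.9 * baseline + 0.1 * np.mean(returns)   # crude running baseline
        # Gradient ascent on E[sum_t grad log pi(a_t|s_t) * (G_t - baseline)].
        for (s, a, _), G_t in zip(traj, returns):
            theta = theta + alpha * grad_log_pi(theta, s, a) * (G_t - baseline)
    return theta
```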
And then we started saying that there's a number of different choices um, that we're making in this algorithm and almost all sort of policy gradient based algorithms are gonna form uh, follow this type of formula. So, in particular, we are making a decision that's sort of estimating some sort of return or targets. Often we're picking a baseline, and then we had to make some decision about after we compute the gradient how far along the gradient do we go? So this is sort of helping us to determine how far do we move on our gradient? Okay, so for the first part, we talked about how do we estimate sort of the value of where we are right now, that we're gonna be using to try to estimate our gradient. And we talked about the fact that the most vanilla thing that we could do is just roll out the policy and look at the returns, that this was really similar to what we'd seen in Monte Carlo estimates. So we could do that, we could get sort of an estimate of the value function by just rolling out the policy for one episode, but that was just like what we saw in Monte Carlo, an unbiased estimator but high variance. And so we talked about how we could play all the sorts of- we use all the same tools as what we've been doing in the past to try to balance between bias and variance. So in particular, we talked about how we could introduce bias using bootstrapping and function approximation. Just like what we saw in TD MC and just like what we talked about with value function approximation. So we repeatedly see these same sorts of ideas of the fact that we're trying to understand what the value is of a particular policy. And when we're estimating that value, then we can trade off between getting sort of unbiased estimators of how good decisions are versus biased estimators um, that might allow us to propagate information more quickly and allow us to learn to make better decisions faster. We also talked about the fact that they're actor-critic methods that both maintain an explicit parameterized representation of the policy and a parameterized representation of the value function. But the thing that we really started getting into last time is to say, "Well, both you know there are all these sort of existing techniques that we know of to try to estimate these targets and estimate the value function." But then there's this additional question of how far do we move along the gradient? So once we estimate the gradient of the policy, we need to figure out how far along that gradient do we go in terms of computing a new policy. And the reason we argued this was particularly important in reinforcement learning versus supervised learning, is that whatever step we take, whatever new policy we look at, is gonna determine the data we get next. And so it was a particularly important for us to think about how far along do we wanna go on our gradient to get a new policy. And one desirable property we were talking about is, how do we ensure monotonic improvement? So what we would like here is we'd really like monotonic improvement, that's our goal. And we talked about wanting monotonic improvement, which was not guaranteed in DQN or a lot of other algorithms. Because in a lot of high stakes domains, like finance, customers, patients, you might really wanna ensure that the new policy you're deploying is expected to be better at least in expectation before you deploy it. So we talked about wanting this, but there's this big problem that we don't have data from the new policies that we're considering. 
And we don't wanna try out all the possible next policies because some of them might be bad. And so we wanted to try to use our existing data to figure out how do we take a step and determine a new policy that we think is gonna be good. So, in particular, our main goal for policy gradients is to try to find a set of policy parameters that maximize our value function. And the challenge is that we currently have access to data that was gathered with our current policy, which we're gonna call pi old. That's parameterized by a set of thetas, that we can also denote by theta old. And- and throughout policy, the policy gradient lectures that I've been through, going back and forth between talking about policies and talking about thetas. But it's good just to remember that there's, sort of, this direct mapping between pi and theta. You know there's- for each- for a policy, it's exactly defined by a set of parameters. [NOISE] So whether we're talking about policies, or we're talking about parameters, those are referring to exactly the same thing. Um, so the challenge is- is that we have data from our current policy, um, which has some set of parameters, and we want to predict the value of a different policy. And so this is a challenge with off policy learning. So what we tal- were talking about last time, um, is how do we express the value of a policy in terms of stuff that we do know? Um, and we talked about how we could write it down in terms of an advantage over the current policy. So if we think about a value being parameterized by a new set- a new policy with a new set of theta tilde parameters, it's equal to the value of another policy parameterized by a set of Theta plus the expected advantage. So we can write that as the distribution of states that we would expect to get to under the new policy times the advantage we would get under the old policy if we were to follow the new policy. And the reason- what- what we're trying to do in this case is just to- to keep us thinking about what the main goal is here. Is what we're trying to do is figure out a way to do policy gradient where we're guaranteed to have monotonic improvement, where our new policy is gonna be guaranteed to be better than our old policy. But we wanna do this without actually trying out our new policy. So we are trying to re-express what the value is of a new policy in terms of quantities we have access to. So what do we have access to? We have access to existing samples from the current policy, and we wanna use those and the returns we've observed in order to estimate the value of a new policy before we deploy it. So that's kinda where we're trying to get to. And we noticed here that maybe, you know, we can have access to an explicit form of a new policy, that's like whatever new parameters we're considering putting into our neural network. Um, and we could imagine estimating the advantage function, but we don't know the state distribution under the new policy, because that would require us actually to run it. So what we talked about is- let's just define a new objective function, um, which is just a different objective function. Might be good, might be bad. I'm going to argue to it's a good- it's good but this right now is just a quantity that we can optimize. So the quantity that we can optimize here, we're gonna call that sort of this new objective function, and it is going to be a function of the previous value. So here, remember this is always equal to [NOISE] direct mapping between thetas and pis. 
So we're going to just say, it looks like the objective function we just talked about, which really was the value of the new policy, but we don't know what that stationary weighted distribution is, of states under the new policy. So we're just gonna substitute in the stationary distribution under the current policy. Now in general, this is not gonna be- so it's not going to be equal to your new policy distribution. The only time you're gonna get the same state distribution under two policies is generally if they're identical. Occasionally, you can get the same state distribution under two different policies, but then that means they have the same value. So in general, we're going to expect that this is going to be different but we're going to ignore that for now. We're just going to say this is an objective function, this is something we can optimize. And a nice thing about this is that we have samples from the current policy. So we can imagine just using those samples to estimate this expectation. The thing that I also want us to note here is that, this is just this new objective function called L. If you evaluate the objective function L, um, at the current policy, so if you plug in your old policy into your objective function, it's exactly equal to the value of your current policy. So this second term becomes 0, [NOISE] because the advantage of the existing policy over the existing policy is 0. So this objective function is exactly equal to the value of the old policy, if you evaluated the old policy, and another case is for new policies it's gonna be something different. Yes. How's this similar to importance sampling? That's a great question. You asked, "How is this similar to importance sampling?" Um, if we were gonna do importance- Well, it's different in a number of ways. Um, in importance sampling, what we tend to do is we re-weight, um, uh, the distribution that we want by the distribution that we have. Um, in this case we're looking, and we normally do that on a per state level. In this case we are looking at the stationary distribution over states. It's actually a really cool paper that just came out in NeurIPS, like a month ago, um, [NOISE] 2018, with Lihong Li and some other colleagues. Um, they looked at, how would you re-weight stationary distributions to try to get off policy estimates of the value function? Um, and so to try to directly re-weight what, like, mu pi would be versus mu pi tilde. So we're not doing that here, there's some really nice ideas in that that could help really reduce the variance in long horizon problems. Um, in this case, we're just substituting, so we're ignoring the difference. We're not doing importance sampling, we're just pretending that the distribution of states that we get to is exactly the same. It's not, but we're gonna show that this is going to end up being a useful lower bound to what we wanna- what we actually want to optimize. Okay. So you might say, if you take this objective function which might be good, might not be good. If we optimize with respect to it, do we have any guarantees on whether the new value function that we get if we optimize with respect to this wrong objective function is better than the old value function? Because remember that's where we're trying to go to. We don't really care what we're optimizing, what we care about is that the resulting value function we get out is actually better than the old value function. 
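Written out, the substitution described above looks like the following. This is a reconstruction in the standard conservative-policy-iteration/TRPO notation, with mu_pi the discounted state-visitation distribution; it is meant to match the spoken description rather than the exact slide.

```latex
% True identity (requires the state distribution of the *new* policy):
V(\tilde{\theta}) \;=\; V(\theta)
  \;+\; \mathbb{E}_{s \sim \mu_{\pi_{\tilde{\theta}}}}
        \Big[ \textstyle\sum_{a} \pi_{\tilde{\theta}}(a \mid s)\, A_{\pi_{\theta}}(s, a) \Big]

% Surrogate objective: swap in the state distribution of the *current* policy:
L_{\pi_{\theta}}(\pi_{\tilde{\theta}}) \;=\; V(\theta)
  \;+\; \sum_{s} \mu_{\pi_{\theta}}(s)
        \sum_{a} \pi_{\tilde{\theta}}(a \mid s)\, A_{\pi_{\theta}}(s, a),
\qquad
L_{\pi_{\theta}}(\pi_{\theta}) \;=\; V(\theta)
```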
[NOISE] So last time I said that if you have a mixture policy which blend between your current policy and a new policy, so let's say you have a pi old and you have some other policy, I haven't said how you get it, but just say you have some other policy, and that defines your new policy. So with probability 1 - alpha, you take the same action as you used to. With probability alpha, you take a new action. In this case you can guarantee a lower bound on the value of the new policy. So the value of the new policy is greater than or equal to this objective function we have here, minus this particular quantity. So that says that if you optimize with respect to this weird L objective function, you can actually get bounds on how good your new policy is. So that seems promising, but in general we're not going to want to just consider mixture policies. Okay. So what this theorem says is that, for any stochastic policy, not just this weird mixture, you can get a bound on the performance by using this slightly strange objective function. So in particular, um, define the distance of total variation as follows. So DTV between two policies, so I'm using a dot there to denote that there's, um, there's a number of different actions. The- the policies there are denoting a probability distribution over actions. This is equal to the max over all a, the distance between the probability that each of the two policies put on that action. So it's giving us sort of a maximum difference, in what's the probability of an action under one policy versus the other policy. And then we can do- [NOISE] Bless you. -we can do D max of total variation by taking the max of that quantity over all states. So it's essentially saying, over all states was the biggest difference that the two policies give over a particular action. So where do they most differ? And then what this theorem says is that, if you have that quantity, in general we're not gonna be able to evaluate that. But what that's saying is that, if you know what that quantity is, then you can define that. If you use this objective function L, that the new value of your policy is at least the objective function you compute minus this quantity, it's a function of the distance of total variation, the max distance and total variation. So this gives us some confidence that if we were to optimize with respect to the objective function L, then we can get a bound on the value function. Now this distance- this max or the total variation distance isn't particularly easy to work with. So we can use the fact that the square of it is upper bounded by the KL divergence, and then get a new bound which is a little bit easier to work with. That looks at the KL divergence between the two policies, and we again get the similar bound. Okay. So why is this useful? So what I've told you right now is that we have this new objective function. If we use this new objective function, we could- in principle get this lower bound on the performance of the new policy. So how do we use this to ensure that we wanna get monotonic improvement? So the goal is monotonic improvement. We want to have the V_ Pi i + 1 is greater than or equal to V_ Pi i. That is our goal. So i is iterations, we want that the new policy that we deploy is actually better than the policy we had before. So how are we gonna do this? So what we're gonna say is, first we have this objective function here, this lower bound objective function, [NOISE] and what we're going to define is that Mi of pi i is equal to L of pi i pi. 
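Collecting the spoken definitions and the bound into symbols (following the lecture's statement; constant factors depend on the exact definition of total variation used, so treat this as a reconstruction):

```latex
D_{TV}\big(\pi_1(\cdot \mid s),\, \pi_2(\cdot \mid s)\big)
  \;=\; \max_{a}\, \big|\pi_1(a \mid s) - \pi_2(a \mid s)\big|,
\qquad
D_{TV}^{\max}(\pi_1, \pi_2) \;=\; \max_{s}\, D_{TV}\big(\pi_1(\cdot \mid s),\, \pi_2(\cdot \mid s)\big)

V^{\pi_{\text{new}}} \;\ge\; L_{\pi_{\text{old}}}(\pi_{\text{new}})
  \;-\; \frac{4\,\epsilon\,\gamma}{(1-\gamma)^2}\,
        \big(D_{TV}^{\max}(\pi_{\text{old}}, \pi_{\text{new}})\big)^2,
\qquad \epsilon \;=\; \max_{s,a}\, \big|A_{\pi_{\text{old}}}(s, a)\big|

% Using (D_TV)^2 <= D_KL, the looser but easier-to-handle KL version:
V^{\pi_{\text{new}}} \;\ge\; L_{\pi_{\text{old}}}(\pi_{\text{new}})
  \;-\; \frac{4\,\epsilon\,\gamma}{(1-\gamma)^2}\,
        D_{KL}^{\max}(\pi_{\text{old}}, \pi_{\text{new}})
```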
So I'm just copying the equation from the previous, um, previous slide, - 4 epsilon gamma divided by 1 - gamma squared DKL_ max of pi i. Okay. So this is the lower bound. That's what we just defined on the previous slide. Okay? So what we said here is that, the value of our new policy- so this is equal to this M function I've defined. So we've said that the new value is gonna be at least as good as this lower bound. So we're gonna say V_i + 1 is gonna be equal to Mi of pi i + 1, which is equal to L pi i + 1. I'm just writing out what the definition is here. And again, what we're trying to do here is get to the point where we're confident that we can get something that's better than our old value function. Now, the thing that I want to now look at is, well, what is- if we were to evaluate the lower bound at the current policy what would that be? So let's look at Mi of pi_i. So that's going to be equal to L pi_i of pi_i. I'm just plugging it into this equation up there, - 4 epsilon gamma - gamma squared DKL max of pi_i, pi_i. Okay, so why is this nice? Well, this is nice because the KL diver- divergence between two identical policies is 0. So these are exactly the same. This is equal to 0. So now, this is just equal to L pi_i of pi_i. But what I told you before is that if we go back a few slides to what the definition is of pi_i, L of pi_i is that if you evaluate it at the current policy it's just equal to the value of that policy, okay? So if we evaluate this objective function at the current policy, it's just the same as the value of the current policy. So now, if we go back here this is just equal to V of pi_i. Okay. So what does this say? This says, that if I wanna look at how my I- my- the value of my i + 1 policy looks compared to the value of my old policy, we know that's greater than or equal to Mi of pi_i + 1. So because we said that the V that we knew from this theorem, that the new value of the policy is greater than or equal to this lower bound we computed. So it's greater than or equal to Mi of pi i + 1 - Mi of pi_i. So what does this say? This says that if your new value function has a better lower bound than your old value function, you have monotonic improvement. So if this is greater than 0, then monotonic improvement, which means that if you optimize with respect to this lower bound and you can evaluate that quantity and your new lower bound is higher than your old lower bound, then your value has to be better. So we can guarantee monotonic improvement. Yes. So just to clarify. So for these value comparisons, are we implicitly considering as an infinity norm, in terms of saying one is better? Yes, generally. Yeah, I think, I mean, they probably go through with L squared 2. But yeah, yeah. Um, question is whether or not we're always defining this with respect to L infinity norm almost always. Um, ah, there certainly is some analysis particularly when we get into a function approximation which looks at an L2 norm. Um, but most of this is all with respect to an L infinity norm, which means that when we're looking at this, for example, we're looking at, um, ensuring that for all states, um, the value of those states is at least as good as the previous value of the states. Yes. So the- the claim was if our lower bound improves, then it must be the case that what is the lower bound must also be improving, right? Yeah. Uh, so there's never a case where, for example, your lower bound might improve even though the actual value of the policy evaluated decreases, it seems like. 
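The whole minorization-maximization argument in the preceding paragraph fits in a few lines; this is a cleaned-up reconstruction of the blackboard derivation:

```latex
M_i(\pi) \;=\; L_{\pi_i}(\pi) \;-\; \frac{4\,\epsilon\,\gamma}{(1-\gamma)^2}\, D_{KL}^{\max}(\pi_i, \pi)

V^{\pi_{i+1}} \;\ge\; M_i(\pi_{i+1})
  \qquad \text{(the lower bound from the previous slide)}

M_i(\pi_i) \;=\; L_{\pi_i}(\pi_i) - 0 \;=\; V^{\pi_i}
  \qquad \text{(the KL divergence of a policy with itself is 0)}

\Rightarrow\quad V^{\pi_{i+1}} - V^{\pi_i} \;\ge\; M_i(\pi_{i+1}) - M_i(\pi_i)

% So any new policy that increases M_i over M_i(pi_i) gives monotonic improvement.
```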
That's right. So what this- this is- what this is asking us if you know this lower bound and what that relates to the actual value. What this is stating is that if you improve your two low- like if you have a lower bound of your existing policy and you get a new lower bound with for some new policy and that new lower bound is higher than your lower bound of the other one, that you're guaranteed to be improving. So this is what guarantee- so this is assuming you can solve this. But if you, um, if you can get this lower- if you optimize with respect to this lower-bound quantity, um, because when you plug in the lower bound for the current policy under that, that's exactly equal to th- the value of that policy then you are guaranteed to be improving. Because you're basically saying, here's my lower- here's my existing value. I have something whose lower bound is better than my existing value. And so I know my new thing has to be better. Okay. Yeah. How do you like, um, you get the epsilon term back because it seems like the epsilon changes depending on your pi and it's also like a global property. Absolutely. So, um, now this discussion is a great one. I- few might ask us about this. Um, ah, so note this, that your lower bound is in terms of epsilon. Epsilon is a max over all states and actions of your advantage. Um, in principle, you could evaluate this [LAUGHTER] particularly if you're at a discrete state and action space. I- in practice, that's something that you would not wanna do. Um, this, I view this part as sort of saying this is formally if you could evaluate this lower bound. Um, and what we're gonna do now is talk about, um, a more practical algorithm which tries to take, um, this guarantee of conservative policy improvement and actually make it practical, in terms of quantities that are a little bit easier to compute. Because that's right. Yeah, in general, it- it would be very hard to evaluate this this epsi- uh, this epsilon. Now you could take upper or lower bounds on it. Um, but you won't generally know what this epsilon is. And note that this, uh, as co- just pointing out, this epsilon is dependent on the policy. Um, so- okay. All right. But this is pretty cool. So it means that you can do this guaranteed improvement. This is a form of mineral- minimization maximization. Um, and it's this nice idea of sort of saying you can have this new lower bound that's guaranteed to be better than the value of your current policy. So you can get this sort of conservative monotonic improving policy. All right. So I just- I wanna make sure we have enough time to go through some of the midterm review. But I wanna briefly talk about how we would make this practical. Particularly because, um, trust region policy optimization is an extremely popular policy gradient algorithm. So I think it's useful for you guys just to be aware of. Um, you- some of you might use it in some of your projects. Um, we won't cover it in uh- won't be a mandatory part of the homework, um, or on the midterm. But I think it's just a useful idea to be familiar with. So again, if we look at sort of what this objective function was that we just discussed. We said we had this L function and then we turned it into a lower bound by subtracting off this constant that might be hard for us to compute. And so what we do in this case is we take this constant here and we turned it into a hyperparameter. So you could turn it into a constant C. 
Um, but the problem wi- always is that even if you could compute this, um, often we don't know what this is. But even if you could compute it or compute a bound on it, um, generally, if we use this, we would take very small step sizes. So intuitively, this is because, um, you know, it's often very hard to extrapolate far away, um, from your current policy. And so this would say if you wanna be really sure that your new value is better than your old value, then just take a very small step size. And intuitively, it's because if you change your policy very, very small amounts at least under some smoothness guarantees, um, the value of your policy can't change that much. You know, it should also be intuitive that, like, your gradient is often a pretty good estimate very close to your current value of your function. But we also need to quickly try to get to a good policy so this is not generally practical. Um, and so the idea of sort of TRPO, one of the main ideas is to think of it being kind of a trusted region. Um, and- and use this to constrain our step sizes. So again, if we go back to this sort of the gene- generic, um, ah, template for policy gradient algorithms, we have to make this choice of how far out to step in our gradient. Um, and the idea is we're going to sort of define a constraint. So we're going to have our objective function here. And instead of explicitly subtracting off our lower bound, we're just going to say you could move. You can change your gradient but not too far. We're gonna put, uh, a constraint on how far the KL divergence can be as a way to just sort of say you're kind of having this region of which in your- in your parameter space that allows you to know how far you can change your policy. Okay. Yeah. All right. So I'm just going to talk very briefly about how this is instantiated. Um, so the main idea is that if we look at, um, what these objective functions are, um, this may or may not be easy for us to evaluate. So if we look back at what, um, our theta is, um, even here, right, we have sort of our discounted visitation weights under the current policy, but we don't have direct access to that. We have only access to samples from rolling out our current policy. So the first idea is that instead of taking an explicit sum over the state space, where that state space might be, you know, continuous and in-infinite, we're just going to look at the states that were actually sampled by our current old policy and re-weight them. So that's the first depa- the first substitution we do. Yeah. What we're trying to do right now is say we have this objective function and we wanna make it so that this can be part of an algorithm where we can compute all the quantities we need to in order to take a step size where we think the new policy is gonna be better. Um, the second thing we do and this relates to, um, question about importance sampling. Um, I- is we have this second quantity in here, where this is the probability of an action under our new policy. Um, we do have access to that, in the sense that, if someone gives us a state we can tell, um, we can say exactly what our probability would be under all the actions. But again, this often can be a continuous set. And so instead of doing sort of this continuous set, we are just going to say we're gonna use importance sampling and we can take samples. This is typically goi- going to be from pi old. 
So we look at what times we have taken an action given our current policy and we re-weight them according to the probability we would have taken those actions that drive the new policy. So it allows us to approximate that expectation using data that we have. And then the third substitution is switching the advantage back to the Q function, and it's just important to note that all of these three substitutions don't change the solution to the object- to the optimization problem. These are all sort of taking at, uh, these different substitutions or different ways to evaluate these quantities, okay? So we end up with the following: um, uh, we have this objective function that we are optimizing. This is after we've done the substitutions I just mentioned, and we have this constraint on how far away we can be. Um, and empirically, they generally just sample, um, this sort of alternative sampling distribution Q is just your existing old policy. So there's a bunch of other stuff in the paper. It's a really nice paper. Um, a lot of really interesting ideas. Uh, I will just, I will skip through sort of exactly how they do some of the additional details. There's some nice complexity there. Um, but I will just say briefly the main thing they're doing here is they're sort of running a policy. They're computing this gradient. They have to, um, consider these constraints, um, and they do this sort of line search with a KL constraint. And perhaps the most important thing is just to be aware of this and just sort of understand kind of them being inspired by this conservative policy improvement, and then trying to make that more practical and fast. Um, they've applied it to a lot of different problems. Um, there's some really nice stuff on locomotion controllers, cases where you have continuous action spaces, continuous state spaces. These are cases where policy gradient is often very helpful, uh, and they have some very nice results. Um, I won't step through this here. Um, the main thing to know is that empirically this is a really good tool to know about. Often, if you're doing policy gradient-style approaches, TRPO can be a very useful thing to build on, um, and it's been incredibly influential. This was came out in ICML in 2015. There's hundreds of citations to it already. So this has sort of become one of the main benchmarks for policy gradient. Okay. So if we go back just to kinda what the, to summarize what the policy gradient algorithm template is, whether you're looking at the existing algorithms or whether you're trying to define your own, generally, they look like something like the following. For each iteration, you run your policy out and you gather trajectories of data by running that policy. You compute some target that might be just the rewards, that might be a Q function. We can trade off between bias and variance in that, um, and then we use that to estimate the policy gradient, and then we may want to smartly take a step along that gradient to try to ensure monotonic improvement. Um, the things to be aware of and some of the things you're going to have practice on soon i- is that you should be very familiar with these sort of vanilla approaches and REINFORCE, um, and this general template and sort of understand how some of the different algorithms we're talking about might instantiate these different things. 
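Putting the three substitutions together, the practical optimization problem that TRPO solves at each step can be written as follows (a reconstruction consistent with the Schulman et al. paper; rho_theta_old denotes the discounted state visitation of the old policy and delta is the trust-region size):

```latex
\max_{\theta}\;\;
  \mathbb{E}_{s \sim \rho_{\theta_{\text{old}}},\; a \sim \pi_{\theta_{\text{old}}}}
  \left[ \frac{\pi_{\theta}(a \mid s)}{\pi_{\theta_{\text{old}}}(a \mid s)}\,
         Q_{\pi_{\theta_{\text{old}}}}(s, a) \right]
\quad \text{subject to} \quad
  \mathbb{E}_{s \sim \rho_{\theta_{\text{old}}}}
  \Big[ D_{KL}\big(\pi_{\theta_{\text{old}}}(\cdot \mid s)\,\|\,\pi_{\theta}(\cdot \mid s)\big) \Big]
  \;\le\; \delta
```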
Um, you don't have to derive and remember all the formulas that I just went through quickly for TRPO, um, and you will have the opportunity to practice these more in homework 3, but we'll only cover these lightly in terms of the midterm. All right. So, does anybody have any questions about this before we go into a short overview of the stuff we have done so far, before the midterm? [inaudible] Okay. All right. Let's switch over. Okay. So what this is going to be is sort of a very short recap of what we have, uh, done so far. And in terms of why this is useful, um, there's certainly a lot of good evidence from the learning sciences that spaced repetition of ideas is really helpful, as is forced recall, which is one of the other benefits of doing exams. Um, uh, so, uh, what we are going to do today is just sort of do a quick recap of a lot of the different main ideas. So again, uh, reinforcement learning generally involves optimization, delayed consequences, generalization, and exploration. We haven't really talked about exploration yet. Um, so that's not really going to be on the midterm. Um, we are going to start talking a lot more about that post the midterm. It's an incredibly important topic, one I think is super fascinating and one of the main reasons why RL is interesting. Um, but these other things are really important too and we've spent some time on those so far. So in terms of thinking about the midterm and indeed thinking about the class, um, on the very first day, I put up this sort of blizzard of learning objectives, um, and I just want to highlight, uh, a few of these which are the things that I mentioned were going to be explicitly evaluated in the exam. Which is that, um, by the end of the class, including on the exam, um, you should be very familiar with what the key features of reinforcement learning are that make it different than other machine learning problems and other AI problems. So we spent some time on that on the first day, um, and I've sort of tried to talk about it throughout. But the fact that the agent is collecting its own data, and that the data it gathers influences the policies it can learn. So we sort of have this censored data issue. The agent can't know about other lives it didn't live, and that makes a very big difference compared to supervised learning. Um, a second really important thing is that if you are given an application problem, um, it's important to know why or why not to formulate it as a reinforcement learning problem, and if so, how you would. Generally, there's not a single answer to this. So it's good to think of what is one or more way to define the state space, the action space, the dynamics, and the reward model, um, and what algorithm you would suggest from class to try to tackle it. This is in general, uh, something that you'll probably run into much more than looking at any particular algorithm, um, particularly in industry. And then a third thing that I think is really important is to understand how we decide whether or not an RL algorithm is good. And so what are the criteria for performance and evaluation we can use to evaluate, um, the benefits, strengths, and weaknesses of different algorithms and how they compare. So this could be things like bias and variance. It also could be computational complexity, um, or sample efficiency, or other aspects.
So what we have covered so far is planning where we know how the world works, um, policy evaluation, um, model free learning how to make good decisions, value function approximation, and then imitation learning and policy search. And we've have also talked about the fact that for reinforcement learning in general, you can think of either trying to find a value function of policy or a model, and that model is sufficient to generate a value function which is sufficient to generate a policy, um, but they are not all necessary that you don't have to have a model in order to get a policy. So, um, I will go through this part pretty fast and so I think a lot of you guys have also seen some of this stuff, um, in previous classes. So we're- almost everything we have been talking about so far assumes the world is a Markov decision process. But I have mentioned that often the world is not a Markov decision process. Um, and in the MDP case, we assume that the, the state is sufficient. Um, a sufficient statistic of all the prior history. So we don't have to keep track of the full set of states and observations and actions rewards from the whole time period, but we can just look at the current observation in order to make good decisions in the world. Um, in terms of this, i- it's very useful to know what the Markov property is, why it's important, why it might be violated, what are things like models, value, functions, and queues, um, and what is planning, and what is the difference. So in planning, we assume that you are given a model of how the world works, you know the dynamics model, you know the reward model, it still can be really hard to figure out how to act. This is like knowing the game of Go, and it's still really, really computationally intensive and tricky to try to figure out what's the optimal decision to take in Go even though you know all the dynamics and all of the rewards. In learning, we don't know the dynamics and rewards and we still have to gather data in order to learn a good policy, which has a high value, a high discounted expected sum of rewards. We talked about the Bellman backup operator, which is a contraction. If your discount factor is less than 1, um, which means that with repeated applications you are guaranteed to converge to a single fixed point. We talked about value versus policy iteration. In value iteration on the iteration k, you are always computing the optimal value is if you only get to make k decisions, um, and then you use that to back up aga- and get the k + 1 policy. In policy iteration, you always have a policy and the value of that policy if you were to act using it forever. Um, but it might not be a very good policy and then you update this. And as we have seen, it's closely related to sort of policy gradient-style algorithms where you sort of try to estimate the gradient of a policy. So in policy iteration generally, and similar to what we have been seeing in policy gradient approaches, we intermix evaluation and improvement. So we compute the value of a policy and then we use that in order to take a step and improve it. Um, if we are in the case of being model-free, um, and not having extra model, we often want to compute Q-values instead so that we can directly improve the policy. So let's just take a quick second. Um, so these are check your understandings, they're good things to go back through. These are all sort of like, you know, um, sm- small conceptual questions of the type that we might ask you on the exam. 
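For the tabular planning setting being reviewed here, a minimal value iteration sketch looks like the following; the array layout (`P[a]` as an S-by-S transition matrix, `R` as an S-by-A reward matrix) and the tolerance are illustrative choices.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Tabular value iteration. P[a] is an (S, S) transition matrix for action a,
    R is an (S, A) reward matrix. Because the Bellman optimality backup is a
    contraction for gamma < 1, this converges to a unique fixed point regardless
    of how V is initialized."""
    S, A = R.shape
    V = np.zeros(S)
    while True:
        # Q(s, a) = R(s, a) + gamma * sum_s' P(s' | s, a) V(s')
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(A)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # optimal values and a greedy policy
        V = V_new
```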
So let's just take a minute, um, to check our understanding and think about for a finite state and action MDP, the lookup table representation, which means that we just have a table entry for each state and action. Um, gamma less than 1, uh, does the initial setting of the value function impact the final computed values? Why or why not? Does value iteration and policy iteration always yield the same solution? And, um, is the number of iterations needed for poli- policy iteration in a finite state and action MDP bounded? And if, so how many? Let's just take a minute and think about those. Feel free to talk to somebody next to you. [BACKGROUND]. All right. We're- We're gonna vote. So um, I'm gonna ask, who thinks that the initial setting of value- of the value function does not matter? Great. Yes, it does not matter, and it doesn't matter, so no. Doesn't matter and why not? Because there's only a single fixed point. Because it's like in- uh, the Bellman operator's a contraction operator. Just single. And how about- who thinks the value iteration and policy iteration always yield the same solution? Yes? No. Who thinks what- uh, why no. Give me an example where they might not. [NOISE] Yeah. Verbal [inaudible]. Exactly, yeah. So that it is correct. You- you're gonna get the same, uh, value function. So it depends which way you're trying to answer this. They're gonna have the same value, they could have different policies. And that's possible if there's more than one policy that has the same optimal [NOISE] value. That can come up because often there's, um, multiple policies where you're splitting ties. Um, I- who thinks that the number of iterations needed for policy iteration is bounded? [NOISE] That's right. Um, anyone wanna tell me how many it is? A S. [NOISE] That's the- that's the total number of policies, um, and policy iteration in tabular MDPs. With like pol- or policy improvement in tabular MDPs, you're guaranteed to be monotonically improving. So you can at most go through every policy once and then you're done. So it sort of relates to what we were just talking about. In that case, um, you definitely get guaranteed policy improvement, because every- there's no function approximation there, there's no errors, you exactly know what the current value is, then you can take a monotonic improvement step. All right. So now we're gonna talk briefly of a refresher on model-free policy evaluation. Um, so this is model-free policy evaluation was this sort of passive reinforcement learning, um, where we're just trying to understand how good an existing policy is. Ideally, with not too much data. Um, and so we want to either directly estimate the Q function or the value function of a policy. And so we talked mostly in this case about episodic domains. When I say episodic domains, I mean, that we are gonna act in the world for a fixed number of steps, or where we are in a setting where we know we have terminal states, so we know the episodes will end. With probability 1, they have to end. Um, and then at that point you reset to a start state with some fixed distribution. [NOISE] And in Monte Carlo approaches, we directly average the episodic rewards. It's pretty simple. We take our existing policy, we run it out for H steps or until the end of, um, the episode. We reset, we repeat that a whole bunch of times, then we just average. Um, but in TD learning or Q learning, we use a target to bootstrap. 
And I know you guys have seen this a number of times, but just as a refresher, um, and I like these diagrams to start thinking about the distinctions. So when we've talked about dynamic programming here, we've thought about the case where we know the transition model, we know the reward model. So when we think about what the value is of a policy, it's exactly equal to the expected distribution of states and actions we would encounter by following this policy of the reward we would get plus gamma times the value of the next state. So note that when we think about this expectation here there's really an s prime. So that expectation is thinking about all the next states that we might get to. And so in dynamic programming, we just explicitly think about that sum. That sum over all the next states we much reach- might reach, and the value of each of those states. So if we had started in a state, we take an action, we get to some next new states. In general, we could repeat this process all the way out till we reach, you know, the horizon H or terminal states. Um, and this- what we think of here is taking an expectation over the next states that we would reach. And what we do in dynamic programming is instead bootstrap. So what we mean by bootstrap here, is that instead of building this whole tree, we keep [NOISE] track of what the value is of all the states, and we use that to take an explicit expectation over the next states we'd reach, and average over the value of those next states. And note that in this case, we're assuming we know the model. Now, there are ways to extend this where we don't know the model, but we haven't talked very much about those, this term so, um. But when I say dynamic programming here, I mean, that unless we otherwise specified that we know the models of the world. So this is a case where we're bootstrapping because we are- our update is using V, for V uses an estimate. Okay? 'Cause those values are not going to be perfect estimates of the true expected discounted reward for those next states, because we're still computing them. So then we looked at Monte Carlo policy evaluation, and it looks pretty similar in many ways except for what we're doing is we're running a trajectory all the way out to the horizon, we're adding up all the rewards, and that is our target. And when we say that policy, um, evaluation with Monte Carlo is sampling, it means we're sampling the return. What is the expectation that we're approximating? We're expectation- uh, we're approximating an expectation over that probability of s prime. [NOISE] So we only got a single s prime, instead of getting an expectation over all the next ones. And the problem with that is that we said it was high-variance, even though it's unbiased. And then we talked about combining these ideas with temporal difference methods, um, where we're both gonna bootstrap and sample. So we're sampling because [NOISE] we are only looking at a single next state, [NOISE] and we're bootstrapping [NOISE] because we're plugging in our estimate of V. So we're sampling a single s, t+1, and we're bootstrapping because we're not rolling all the way out like we did with Monte Carlo, which is plugging in our current estimate of that value function. So let's do another quick understanding of, um, for each of these cases, um, it's good to know whether- uh, whether it applies to dynamic programming, um, which requires you to know the models, um, Monte Carlo or TD learning. So is it usable when we don't know the models of the current domain? 
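The DP / Monte Carlo / TD(0) distinction above is easiest to see side by side as update rules; here is a small sketch with dictionary-based tables (the data structures and names are illustrative).

```python
def dp_backup(V, s, P, R, pi, gamma):
    """Dynamic programming: known model, full expectation over next states,
    bootstrapping on the current estimate V. P[s][a][s'] and R[s][a] are the model."""
    return sum(pi[s][a] * (R[s][a] + gamma * sum(P[s][a][sp] * V[sp] for sp in P[s][a]))
               for a in pi[s])

def mc_update(V, s, G, alpha):
    """Monte Carlo: G is the sampled return from s to the end of the episode
    (sampling the whole return, no bootstrapping)."""
    V[s] = V[s] + alpha * (G - V[s])

def td0_update(V, s, r, s_next, alpha, gamma):
    """TD(0): one sampled next state (sampling) plus the current estimate
    V[s_next] as part of the target (bootstrapping)."""
    V[s] = V[s] + alpha * (r + gamma * V[s_next] - V[s])
```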
Does it handle continuing non-episodic domains? Does it handle non-Markovian domains? Um, let me be clear about what I mean by that. You can always apply any algorithm to anything; it just may give you garbage out. And so my question is, um, when I say handling non-Markovian domains, is it guaranteed to do something good, or does it fundamentally make a Markov assumption? Um, does it converge to the true value of the policy in the limit of updates? Right now, we're thinking about the tabular case, so the value function is exactly representable. And is it giving us an unbiased estimate of the value along the way? The estimates still might be consistent, which means that eventually, with enough updates, they converge to the right thing, but they could give you biased estimates along the way. So again, let's just spend like a minute or two; there's just a binary answer to each of these, so yes or no for each of them. And feel free to talk to somebody next to you. [BACKGROUND] Converge to different quality [OVERLAPPING]. But you could start with [BACKGROUND]. Yeah, it's a great question. Policy iteration [inaudible] good question. [BACKGROUND] All right. I'm gonna ask people to vote again. Okay, so, um, I'll just ask you to raise your hands if the answer is yes. So, um, is DP, is dynamic programming usable when there are no models of the current domain? No. Is Monte Carlo usable? Yes. Is TD usable? Great. Okay, um, does DP handle continuing non-episodic domains? Raise your hand if yes. Correct. Yep. So you can use dynamic programming, you can use Bellman operators and contractions, even for infinite-horizon domains. You generally want your discount factor to be less than one, so your values don't explode. Um, uh, but you can do it. It's fine. What about Monte Carlo estimates? No. Monte Carlo only updates when you get to the end of an episode. TD estimates? Yes. Great. Um, does DP handle non-Markovian domains? No. No. Does Monte Carlo? Yes. Yes. TD? No. Again, you can run all of these things wherever you want, but- Converges to the true value of the policy in the limit of updates for DP? Yes. Yes. Monte Carlo? Yes. Yes. TD? Yes. Yes. Unbiased estimate of the value: for DP it's kind of not applicable, because we're not really using data, it's sort of a little bit different. Um, Monte Carlo is an unbiased estimate of the value. Yes. Yes. TD? No. Great. Okay. So, um, and if we're asking you about this in the exam, we'd be sure to clarify whether we're talking about the tabular setting or the function approximation setting, where everything can be very different. Yes? Can you explain exactly why TD doesn't work for non-Markovian? Yeah. So, yeah, that's a good one. So why does TD not work for non-Markovian domains? Um, it's because it's fundamentally making a Markov assumption, um, about the domain. And the reason that comes up is here. The way it writes down the value function, it's saying that the expected discounted sum of rewards from the current state is exactly equal to the immediate reward plus the discounted future sum of rewards from the next state, where that's encapsulated only by s_t+1. So that is where you're making the Markovian assumption, because if you have an observation space which is aliased, that update would ignore the whole history. Whereas Monte Carlo is summing up all the rewards from that current state onwards. Good question. [inaudible] has that assumed Markovian process?
Great question, and remind me your name. Saying, um, we talked almost everything we've been talking about is TD 0 where we just have this reward plus gamma times the value function, but we also talked briefly about N step. Um, where you sort of would do r1 + r2 + gamma times r2 et cetera. So for the n-step you'd have something like this. You'd have rt + gamma rt + 1 + gamma squared rt + 2 + gamma cubed V of st + 3. So that would be like an n-step. Um, and that is essentially making different notions of Markovian assumptions. Because you can have continuums you can either have completely non-Markovian domains or we can have things like n-step Markov domains. Which essentially means that if you're making it- keeping track of a certain amount of history. [NOISE] So um, just to sort of give an example that similar to some of the ones that we've seen before. We can think of something like a random walk process. So imagine that we have a domain where we have three states and two terminal states. So we always start in state B and then with probability of 50% we go left or right. Um, and if you reached either the black nodes then the process terminates. Um, and when you get there either you get + 1 on this one or you get 0. And it's a random walk with equal probability, um, until you get to a terminal state and then the process ends. And so in this case, we could try to compute like what is the true value of the state. Um, so the true value of a state in this case, um, would involve us thinking about what is the, uh, distribution of states that you would visit under this random walk pro- process. So for example, um, if we think about what the value is of- I'll do this here. So if you think about what the value is of state C, that's always gonna be equal to the immediate reward plus gamma times the sum over the next states, value of S prime. Well, let's call this one like I know, SD and this one S0. So SD's value, is always gonna be equal to + 1. So V of SD is equal to + 1, um, because you get that reward and then it terminates. So this would say with, um, gamma times half probability you would go to the value of SB, SB plus half you get 1. And eventually if you look at this distribution it's gonna be, um, so you could do this process for each of the different states. And what you would find when you do this is that you get through this random walk terminating on the right side or the left side in terms of the probability distribution and you could compute the values for this. Um, in an exam, we would probably make this a little bit easier, but it's good to be able to sort of look at this example and work through it, um, and see what this part would be in terms of the value function. Um, then the next question is let's imagine that we have a particular trajectory, we want to compare what would happen under different algorithms. So let's imagine what we have is we have, um, a trajectory where you go BC, BC terminal + 1. So that's our episode. So what is the first visit Monte Carlo estimate of B? One. One. That's right. So, so V of B is equal to 1. Why is that? Because what we do in Monte Carlo is we add up, versus at Monte Carlo, we look at the first time we visited the state and we add up all the rewards we get from that state till the end of the episode. In this case, that reward is just 1. So, the estimate of this would be 1. 
The only other, you know, thing that we might want to know about there is if you're doing sort of this sliding average like an alpha estimate to update the Monte Carlo estimate, you'd want to know what the initial values were and what alpha was. But let's imagine that here you just look at exactly taking that return. So this is equal to the return starting at B going to the end of the episode. So then the next question is, um, what are the TD learning updates given the data in this order? C terminal + 1 BC0, CB0 with a learning rate of A. Maybe just take like a minute or two and, um, do one or two of these updates. And then think about what would happen if we reverse the order of the data with the same learning rate. So, this relates to a point we've talked about a couple of times about whether or not the order of updates we do given some set of data matters in terms of the values we compute. So I guess I would go at this in the following way. I would first commit yourself either way whether or not the order matters in terms of the values we're gonna compute, um, and then try to compute one or two of them. So let's just spend like a minute or two to decide whether or not the order matters here [NOISE] in terms of the resulting values and then we can also compute. [NOISE] I know I'm not giving you guys enough time to do all the computations here, but this is mostly to just sort of do that forced recall aspect to try to remember exactly what the formulas are and then remember whether or not this matters. So I'm just gonna ask you to vote. Um, who here thinks that the order matters in terms of some of the values we compute? [NOISE] It's right. No, it won't always. Sometimes do- you can do things in different orders um, the fact that we've emphasized a lot might lead you to believe that it always matters, but it doesn't always matter. But in this case it does. So um, in this case, if we look at what the value is in the first-order, what we would do is we'd say V of C = 0 + alpha 1 - 0. So the new reward we've observed. That would be alpha, then when we're computing the value of B, we could use the new B of C we just computed because when we have this update, now we've already got a non-zero estimate for V of C. Um, note to be precise, I should have told you here exactly how we're initializing all the values. So in this case, we've implicitly assumed that the initial values are 0 which matters a lot. [NOISE] We'll talk some more, um, in a week or two about smarter exploration and the fact that being optimistic often really is very helpful. One challenge can be in deep neural networks is how to set things so that they're optimistic. Um, but in this case, so we're assuming that everything is 0, so V of B will be alpha squared, V of C will be the following expression. Um, these are basically me just applying TD learning to these cases. [inaudible] They're in the second line, yeah. Good catch. [NOISE] In the, where? Should be gamma squared. Gamma squared, yeah, yes, yeah, that final expression. Thanks. Um, so which appears in the third line. [LAUGHTER] If we do it in the reverse order, V of C will be 0 for our first update because C goes to B, B, B of B is 0. Then when we update BC0, the value of C is still 0, and we only update V of C in the final one. So this just points out that order matters. This comes up also when we're doing function approximation and episodic replay. Just in general, when we think about policy evaluation algorithms. 
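To check the order-matters claim numerically, here is a small script that applies TD(0) updates to the three transitions in both orders, with all values initialized to 0; the learning rate and discount values are arbitrary illustrative choices.

```python
def td0_sweep(transitions, alpha=0.5, gamma=0.9):
    """Apply TD(0) updates once each, in the given order, with values initialized
    to 0 and the terminal state T fixed at 0."""
    V = {'A': 0.0, 'B': 0.0, 'C': 0.0, 'T': 0.0}
    for (s, s_next, r) in transitions:
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

data = [('C', 'T', 1.0), ('B', 'C', 0.0), ('C', 'B', 0.0)]
print(td0_sweep(data))                   # V(B) > 0: it bootstraps off the fresh V(C)
print(td0_sweep(list(reversed(data))))   # V(B) = 0: the order of updates matters
```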
It's good to be aware of the bias-variance tradeoff, data efficiency, and computational efficiency. TD learning tends to be pretty good on computational efficiency, um data efficiency-wise, it depends a little bit. Sometimes if you do experience replay with TD, it gets better. So um, it's useful to think about there's often a lot of variants of these algorithms. And so just being precise in whatever your, whatever your state is. And if you're just assuming the vanilla version, we're using or if you're like, well if you do this additional experience replay, this is how it can change. Okay. Now let's think about how we can do model free learning to make good decisions. Um, we've talked a lot about Q learning. Q learning is a bootstrapping technique that assumes Markovian, a Markovian world. Where we say that the value- the Q value is gonna be um approximated by the reward plus gamma times max over a prime with the next Q function. And we can use that as sort of our target, and then we do the slow slewing. We sort of have this updated learning rate, and our learning rate, [NOISE] where we're slewing between the one sample we just saw versus, um, our previous estimate. And we slowly slew this towards, um, we generally decrease alpha over time to try to converge Q to a single value. Uh, we talked about some conditions under which for Q learning to converge. Again this is all under sort of well, this is both under reachability assumptions and also we're right now we're talking about the tabular setting. [NOISE] So there's no function approximation going on. [NOISE] So if you act randomly, Q-learning will converge to Q star under mild reachability assumptions, um, which means that, you know, you can't have a helicopter which if you crash it, the world is over and you can't get any more samples. So you have to be able to sort of repeatedly visit all the states, an infinite number of times and try all of the actions an infinite number of times. Um, [NOISE] and it has this interesting property that, um, when you are doing Q-learning you can use data gathered by one policy to estimate the value of another policy. So this is where we're trying to estimate the optimal Q function, but we can use for example a random data, random samples, [NOISE] or random policy to try to estimate that. And the reason for that is because we're doing this max. We're always looking at what's the best thing we could do next. So that's a pretty cool property. Um, so then if we sort of think about in this case, um, there's some different things, we'll go through these I guess just briefly. Um, if you have a Q-learning policy, um, which has e-greedy, e-greedy here is with probability 1 - epsilon. You take the action which is expected to be best under your current Q function. Um, and with probability epsilon, you would act randomly. So if you were in a lookup table, this is guaranteed to converge to the optimal policy and the limit of infinite data. So this is yes, with mild reachability. [NOISE] Um, for this second one can we use Monte Carlo estimation and MDPs with large state spaces? Let's vote if yes. [NOISE] Whatever, I'll take a second, and just talk to your neighbor, and we'll vote again. [NOISE] I'm not saying that those people are wrong, I'm just saying that since most people didn't vote, I'm assuming that most people would benefit from just thinking about it for a sec. [NOISE] All right. Let's vote again. Um, vote if you think Monte Carlo estimation can be used in MDPs with large state spaces. Yes. Yeah. 
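As an aside on the Q-learning update recapped a little earlier, here is a minimal tabular sketch with an epsilon-greedy behavior policy; the environment interface (reset() and step(a) returning next state, reward, done) and the hyperparameter values are assumptions, not something specified in the lecture:

    import random
    from collections import defaultdict

    def q_learning(env, actions, episodes=1000, gamma=0.99, alpha=0.1, epsilon=0.1):
        Q = defaultdict(float)                               # Q[(s, a)], implicitly 0
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                if random.random() < epsilon:                # explore
                    a = random.choice(actions)
                else:                                        # exploit the current estimate
                    a = max(actions, key=lambda act: Q[(s, act)])
                s_next, r, done = env.step(a)
                target = r if done else r + gamma * max(Q[(s_next, act)] for act in actions)
                Q[(s, a)] += alpha * (target - Q[(s, a)])    # off-policy max backup
                s = s_next
        return Q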
So it's not, um- you're not restricted by whether it's a large state space or not; you can use Monte Carlo estimation there, um- that's really, I guess, the answer: it can. So yes, Monte Carlo can be used. That number can be [inaudible] depending [inaudible]? Yes. It's a great question. So, um, uh, with Monte Carlo, the number of data points per state could be very low. If you have a single start state, that's not too bad. If you have a distribution of start states, that can be trickier, or we're gonna want to move into the value function approximation setting. But there's nothing a priori which means you can't apply it. If you do apply it there, it might be really bad. [LAUGHTER] Um, we might need to start doing, ah, function approximation. The last thing I put on there- this is something that, um, we haven't discussed a lot yet, um, but I think it's an interesting place to start connecting to the dynamic programming aspects we've talked about. Um, model-based reinforcement learning is not necessarily always more data efficient than model-free, though we- we talked mostly about model-free. Um, so we haven't discussed this too much, but, well, it's a good thing to be thinking about, particularly as we start getting into exploration. Um, and I mentioned briefly before that there's a nice new paper by Wen Sun and some colleagues at MSR, Microsoft Research New York City, that is showing that in some cases, um, model-based is strictly better than model-free. And the intuition there is that you can compactly represent the model, but you can't compactly represent the value function. So you don't need a lot of parameters to learn the model, and then you can plan with it, but if you try to directly learn the value function, you need a lot more. All right. So, even just in that discussion, a lot of our recent focus has been on value function approximation, including in homework 2. So um, we talked about, if you were looking at Monte Carlo methods versus TD learning, um, what sort of convergence guarantees do we have in the on-policy case? So this is, um, important to emphasize. So we're looking at evaluating the value of a single policy, and we talked about how we can think about the on-policy stationary distribution. That when we define a single policy and, uh, [NOISE] we run it, it's like we're inducing a Markov reward process or a Markov chain, and we can think about the stationary distribution of states that we would visit under that policy. Um, and we talked about convergence properties. And in particular, we said that what Monte Carlo does, um, no matter what sort of function approximator you are using, is it tries to minimize, um, the mean squared error. So this style of techniques we've talked about with Monte Carlo, um, simply tries to minimize the mean squared error on your data. [NOISE] And so we can think about this for linear value function approximators- and this also holds for other value function approximators- it's gonna minimize that error. [NOISE] In the case of linear value function approximation with TD learning, it is gonna converge to a constant factor of the best mean squared error. And what does that mean in this case?
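One common way to write that contrast down, for on-policy evaluation weighted by the on-policy stationary distribution d^pi; the exact constant below is the standard bound for linear TD(0) from the literature, included here only as a reference point:

$$\mathrm{MSVE}(\mathbf{w}) = \sum_{s} d^{\pi}(s)\,\big(V^{\pi}(s) - \hat{V}(s;\mathbf{w})\big)^{2}, \qquad \mathrm{MSVE}(\mathbf{w}_{TD}) \;\le\; \frac{1}{1-\gamma}\,\min_{\mathbf{w}} \mathrm{MSVE}(\mathbf{w}),$$

while Monte Carlo converges to the minimizer of the MSVE itself.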
Um, here, what we have is- we might have a gap. So particularly if you have, say, a linear value function approximator, um, you just might not be able to exactly represent the value of all the states using the chosen parametric family that you have. And so there might fundamentally just be a gap between, um, the value function that's representable with the space that you have, and the true value function. I often like to think about it like this- there's a nice picture in Sutton and Barto about this too. This is sort of, ah, showing, with your set of w, what are the value functions you can represent, and it might be that your true value function lives up here. You just can't, for example with a line, represent all of the true value functions. Um, if you think about this in, um, another dimension, you can imagine for a state maybe your real value function looks something like this, but you are using a line approximator, so you just can't represent that exactly with a straight line. So Monte Carlo converges to the best mean squared error possible, given your value function approximator space, and TD learning converges to that times this additional factor. [NOISE] Oop, see. Well, I think that's not going to like that. Okay. Um, so note that there's this. [NOISE] Okay. Well, now I'm just gonna make that not do that anymore, all right. We talked about the fact that when you're doing off-policy learning, Q-learning with function approximation can diverge, which means that it doesn't even converge with infinite amounts of data. This is even separate from what it might converge to if it does converge. It just says that your parameters may never stop changing if you're doing sort of gradient updates. Yeah. Can we initialize the function approximator, [NOISE] those parameters, in such a way that [inaudible] push convergence, even if it's not guaranteed? Well, whether or not, um, the initialization of the parameters helps determine whether or not, for example, you might converge or diverge- um, it's an interesting question. I don't think there's work that I know of that formally tries to characterize this- like, you know, are there places where you could formally do this, um, so that, ah, in terms of your gradients, for example, they wouldn't start to explode? There might be; I suspect it depends a lot on the problem. And I also suspect that there might be pathological examples that you can construct where it's hard to do. But certainly worth a try. You can also observe whether or not your parameter estimates are continuing to change. Um, we talked quite a lot- you guys had a lot of practice- with deep learning and model-free Q-learning, um, where we looked at having this Q-learning target and the Q network, and we're doing stochastic gradient descent. Um, we're using a deep neural network to approximate Q. [NOISE] Um, we talked about the- some of the challenges with this divergence might be that we have these correlated local updates. The value of a state is often very closely related to the value of its next successor state. Um, and also, by changing these targets frequently, um, that might cause, ah, instability. So a lot of the recent progress over roughly the last five years has been in sort of ways to modify this equation in order to make it more stable when you're doing gradient descent.
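For concreteness, here is a minimal sketch of the update being discussed: on-policy semi-gradient TD(0) with a linear value function V(s; w) = w . x(s); the feature map phi and the data format are assumptions added for illustration:

    import numpy as np

    def semi_gradient_td0(transitions, phi, n_features, gamma=0.99, alpha=0.01):
        w = np.zeros(n_features)
        for s, r, s_next, done in transitions:
            v_next = 0.0 if done else phi(s_next) @ w
            td_error = r + gamma * v_next - phi(s) @ w
            # "semi-gradient": the bootstrapped target is treated as a constant,
            # so we only differentiate through the prediction for the current state
            w += alpha * td_error * phi(s)
        return w

    # The DQN-style stabilization recapped next modifies the analogous Q update:
    # targets come from a periodically-frozen copy of the network, and the
    # (s, a, r, s_next) tuples are drawn from a replay buffer rather than the live stream.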
Um, and DQN sort of introduced both that we should do experience replay- so don't use each data point only once- um, and also fix the target for a while. So you're sort of saying, "I'm going to use a fixed value function approximator for my next state for a while," and then we can minimize our mean squared error. We talked about the fact that experience replay is particularly hugely helpful, um, and fixing the targets also helps quite a bit. Ah, and there aren't good guarantees yet on convergence, though there's a lot of interesting work that's being done in this space. People are very interested in trying to understand the formal properties of these types of networks. Um, we also talked about double Q, dueling, um, and, uh, prioritized replay, um, as things that we could look at to try to improve how quickly our Q functions converge to something reasonable. So I think this is the last one for today. Um, ah, so quick question. Um, in finite state spaces with features that can represent the true value function, does TD learning with value function approximation always find the true value function of the policy given sufficient data? So this is for TD learning. So we're- we're essentially doing policy evaluation right now. So in this case, are we guaranteed to find the true value function given sufficient data? Maybe take one- chat with a neighbor for one minute and then I'll ask people to vote. Should we assume this is on policy? We're gonna assume this is on policy, or at least with sufficient amounts of data from the on-policy distribution. [BACKGROUND] Who wants to vote yes, that we do find the true value function approximator? That's right. Okay, so how could we have checked this? So if we go back to what I was saying over here. What I said is that we are going to converge to a constant factor of the best mean squared error. This mean squared error is always 0 if you can exactly represent the value in the current space. So that additional sort of constant factor is just a constant factor times 0. So in this case, yes, because I said here that it's with features that can represent the true value function. So we've said that it is perfectly possible to represent the value function of this policy in the features that are given to you, and so it will be possible to achieve that. Yeah. Is it true for a non-linear parameterization? For TD learning? Yeah. So for policy evaluation, um, uh, if you have a nonlinear- if you have features- like, if you have a general representation that allows you to exactly represent the value function and you're doing on-policy learning, so you're doing TD learning, um, you will be able to get zero error, with infinite- you know, sufficient- data, et cetera. Finite data, this is all of it. Yeah. One part that I- Remind me your name, please. I, I got a little confused given that we, we might have all of the features that we want, but we might not have any representative value function approximation that would actually be able to generate the Qs. Like, are those two things identical? Like, I guess the way I was thinking about this was we might have all the features, but we might not find a space of functions that actually would be able to represent the value function. Okay. So I think the question is, say, okay. Well, what if we had a lot of features- does that actually give us a parameterization of the value function that can represent the true value function?
When I say here sort of features and representation, I mean that we have picked a function class that can exactly represent the value function, if we have an algorithm to try to fit it well enough. So, um, what I'm saying in this case is that if an oracle could give you the parameter vector that would make, um, uh, that error zero, then TD learning can find it. So this is essentially like [inaudible] because we can generate the table? It doesn't have to be tabular. So to go over it- this does not have to only hold for tabular cases. So if we look at, um, something here. Let's imagine this is your state space. This is your value function. So if someone gives you, uh, a line, or a quadratic, or a deep neural network with enough parameters to exactly represent that function, what this statement is saying is that TD learning can find- can fit those parameters exactly. This is not true when we start to go into Q-learning. So in some cases, you can have a representation that could exactly represent the value function, but you can't find it- like, Q-learning will not identify it. So that's sort of the difference that we're trying to make here: in TD learning, if that exists and you're on policy, you can find it. Q-learning, you may not be able to. Yeah. There's a question in the back. Your name first, please. So can I just clarify- is this true whether the value approximation is linear or nonlinear? Yes. Yes. So what we're saying- this is true, um, for generic representations. Whether your representation is linear, or tabular- or, um, for tabular we generally always assume it's exact- so linear or otherwise, then this is true. Yes. Uh, does this have anything to do with whether our value function approximator is a contracting operator or not? So. Yeah. It's a great question. You asked whether or not this has to do with whether or not our value function approximator is a contraction. You can think of it as, when we're doing this sort of TD learning, we have two steps. We're kind of doing our approximate Bellman backup- and our Bellman operator, if we could do it exactly, we know is a contraction- and then we have to do this additional part of fitting a function. And if you can exactly fit your function, um, then you're not going to introduce additional error during that part. Um, and that's one of the benefits here. That can start to diverge otherwise. So in this case again, it's all on policy, so it's much closer to the supervised learning setting. When you start to be off policy this gets more complicated. Question, or? No. Okay. All right. So let's just go really briefly through imitation learning, um, and policy search. This will be kind of at the same lighter level that, um, you'd be expected to know it for the exam. You haven't had the chance to practice either of these, um, except from lecture. So imitation learning was the idea that the specification of reward functions can be really complicated. What if we could just have people demonstrate procedures and then learn from them? Behavior cloning is where we're doing supervised learning. So we're trying to learn a mapping from states to actions, and we're treating this as a supervised learning problem. So we just look at, for an expert, pairs of states and actions, and you can try to fit your favorite supervised machine learning- like a classification algorithm- to predict that.
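A minimal behavior-cloning sketch in the spirit described above, treating the expert's (state, action) pairs as a supervised classification dataset; the arrays and the choice of logistic regression are placeholder assumptions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    expert_states = np.random.rand(500, 4)                # placeholder demonstration features
    expert_actions = np.random.randint(0, 2, size=500)    # placeholder demonstration actions

    policy = LogisticRegression(max_iter=1000).fit(expert_states, expert_actions)
    action = policy.predict(expert_states[:1])            # what the cloned policy would do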
And the thing that can go wrong in this case is that the state distribution you induce under your approximate policy, the one that's trying to mimic the expert, can be different from, um, the distribution of states you'd reach under the expert policy. Which means that you can end up with these sort of different state distributions, and you don't know what the right thing to do is in these new states, because you don't have any data about that. So things can go pretty badly in some of those cases. We talked about inverse reinforcement learning, where the idea is that we again have trajectories of demonstrations, and now the goal is to directly learn rewards. A good thing to think about here is how many reward functions are compatible with an expert's demonstration. We talked about this before. If it's not clear, feel free to reach out to me either at the end of class or, um, on Piazza. And we talked about policy search. So just really briefly, these are the types and levels of, um, questions I would expect you to be familiar with. So why do we want to use stochastic parametrized policies? It can be a nice way to put in domain knowledge. It can help us with non-Markovian structure- we talked about aliasing- and we talked about game theory settings, where deterministic policies would do badly but stochastic ones would do well. Um, policy gradient methods are not the only form of policy search. We talked about exoskeleton optimization by my colleague Steve Collins, and the fact that, um, that worked pretty well. But generally, we're going to talk mostly about gradients. Um, the likelihood ratio policy gradient method does not need us to have the dynamics model, which is really important when we don't have it. And then two ideas to reduce the variance of a policy gradient estimator: one is to use the temporal structure- and here, it involves the fact that the reward you get at a timestep now can't be influenced by your future decisions, because of the structure of time- and then, um, secondly, baselines. So that's kind of the level- the sort of stuff we talked about in class, but not deep procedural knowledge. So just to summarize. Um, recommendations would be to go through lecture notes, and look at things like the check-your-understanding questions. If you want to look at additional examples, going through the session notes can be useful. Um, of the practice midterms, last year's in particular will be more similar than the one from two years ago. Um, if you see some topic that we haven't covered in this class, it's not going to be covered on the midterm, um, but feel free to reach out to us if you have any questions, and you can bring a one-sided, one-page sheet of notes that's handwritten or typed. Okay. Good luck. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_16_Monte_Carlo_Tree_Search.txt | Right. We're going to get started. This is the last lecture for the term. Um, uh, just a few logistics things that we're getting at the beginning. Um, so just a friendly reminder that the project write up is due on the 20th at 11:59 PM. There are no late days, and then the poster presentations are on Friday at 8:30 AM. [NOISE] Uh, you need to submit your poster as well and that should be submitted online to Gradescope by the same time. Um, we'll open up a submission for that in advance. Um, and there's also no late dates for that. Uh, you should have received an email with some details about the poster session. Any questions? Reach out to us. Does anybody have any questions right now? It's the last week of office hours. We won't have office hours next week. It's finals week for most people. Um, but you can reach out to us on Piazza or if you have extra questions, you know, we're happy to find the time. Okay. All right. So what we're going to do today is, uh, so last time, of course, was the quiz. Uh, and we're gonna be sending out- our goal is to send out grades for everybody that took it on Monday to send them out today. We're almost done grading those. Um, [NOISE] and for the- there are a few people who are still taking that late so though- uh, we have to grade the SCPDs. But everybody else should get their quiz scores who took it Monday, should get back today. And then today what we're gonna do is we're going to talk just a little bit about Monte-Carlo tree search as well as discuss some end-of-course stuff. So why Monte-Carlo tree search? Um, who here has heard of AlphaGo? All right. Yes. So I mean AlphaGo, you could argue is one of the major AI achievements of the last, you know, 10 to 20 years, um, and it really has been a spectacular achievement that was achieved much faster than was anticipated, uh, to beat humans on the board game Go, which is considered an extremely hard game. So the Monte Carlo tree search was a critical part of, you know, to achieve this success plus a lot of other additional things. But it's one of the aspects that we have not talked about very much so far in class. So I think talking about Monte-Carlo tree search, so you're familiar with some of the ideas behind that and therefore some of the ideas behind AlphaGo, um, are useful, uh, to be aware of. Uh, and then also because when we start to think about Monte-Carlo tree search, it's a way for us to think about model-based reinforcement learning, which is a very powerful tool that we haven't talked about as much in part because we haven't seen as much success in the deep learning case with models. And I'm happy to talk more about that either today or offline. But I think that going forward it's likely to be a really productive avenue of research. And we can talk about why that might be particularly useful in alpha in AlphaGo. Okay. So what we're gonna do first is we're gonna sort of talk a little bit again about sort of model-based reinforcement learning. And then we'll talk about simulates- simulation-based search, which is where Monte Carlo tree search comes up. Actually, just because everyone takes different classes and I'm curious, who here has covered Monte Carlo tree search in a, in another class? Okay. Just two. What class was it? [inaudible]. 238. Yeah. Yeah. Same? Same one. It was mentioned a little bit like [NOISE] It was mentioned briefly. Ah, yeah. 
Very brief. Yeah. And yeah? 217. Which is? General game play. Oh, yeah. General game play would be a good one to come and bring it in. Okay. Cool. Awesome. Oh, and also I just think people are interested. At the end, I'll also mention some other classes where you can learn more about reinforcement learning. All right. So model-based reinforcement learning. Um, [NOISE], most of what we've talked about this term though not all of it, but most of what we've talked about this term [BACKGROUND] particularly when we're talking about learning, which means we don't know how the world works, um, is that we're thinking about either learning a policy or a value function or both directly from data. Um, and what we're gonna talk about more today is talking about learning a specific model. So just to remind ourselves because it has been a little while. We're gonna be talking about learning the transition and/or reward model, and we talked about this a little bit maybe, I don't know, a number of- ah, a few weeks ago, It came up also in exploration. But once you have a model, you can use planning with that. And just to refresh our memories, planning is where we take a known model of the world and then we use value iteration or policy iteration or dynamic programming in general to try to compute a policy for those given models. In contrast to that, of course, we've talked a lot about Model-Free RL, where there's no model and we just directly learn a value function from experience. And now, we are going to learn a model from experience and then plan using that. Now, the planning that we do in addition to sort of the approaches that are known from classical decision, uh, like dy- dynamic programming also can be any of the other techniques that we've talked about so far in class. So, you know, once we have this, this is a- this can act as a simulator. And so once you have that, you can do model-free RL using that simulator, or you can do policy search or anything else you would like to do given that you have a model of the world. It basically just acts as a simulator, and you can use it to generate experience as well as do things like dynamic programming. Okay. You can do sort of all of those things. So once you have a simulator of the world, that's great. The downside of course can be if that simulator is not very good. What does that do in terms of the resulting estimates? Okay. So just to think of it again. We have our world. It's generating actions and rewards and states. Um, and now we're going to think about sort of explicitly trying to model those. So in a lot of cases you may know the reward function, not always. But in a lot of practical applications, you'll know the reward function. So if you're designing like a reinforcement learning based system for something like customer service, you probably have a reward function in mind, like engagement or purchases or things like that. But you may not have a very good model of the customer. So there are a lot of practical examples where you'll need to learn the dynamics model either implicitly or explicitly. But the reward function itself might be known. All right. So how do we think about this in terms of a loop? We think about having, um, [NOISE] some experience. So this could be things like, you know, state, action, reward, next state tuples that we then feed into a model, and this is going to output either a reward or a transition model. 
We do planning with that, which can be dynamic programming or Q learning or many of the other techniques we've seen here, policy search. And then that has to give us a way to pick an action. So that has to give us an action that we can then use over here, and we don't have to necessarily compute a full value function. All we need to know is what is the next action to take, and we're going to exploit that when we get to Monte Carlo tree search. That we don't necessarily have to compute a full value function for the world nor do we have to have a complete policy. All we have to know is what should we do for this particular action next. So some of the advantages about this is, um, we've a lot of supervised learning methods including from deep, uh, deep learning which we can use to learn models. Some of them are better or worse super, uh, suited. So our transition dynamics, we're generally going to think of a stochastic. So we're going to need supervised learning methods that can predict distributions. For reward models we can often treat them as scalars. So then we can use very classic regression-based approaches. And the, the other nice thing about model-based reinforcement learning is like what we talked about for exploration, um, that we can often have explicit models over our uncertainty of how good are our models. And once we have uncertainty over our models of the world, we can use that to propagate into uncertainty over the decisions we make. So in the bandit case that was pretty direct, because in the bandit case- so for bandits, we had uncertainty over the reward of an arm, and that just directly represented our uncertainty over the value because it was only a single timestep. In the case of MDPs, we could represent uncertainty over the, the reward and the dynamics model and forms of these sort of bonuses, and then propagate the- that information during planning. And that again allowed us to think about sort of how well do we know the value of different states and actions and what could it be, what sort of could it be optimistically. Now the downsides is that, you know, first we're gonna learn a model and then we're gonna construct a value function, and there could be two sources of approximation error there. Because we're going to get an approximate model and then we're gonna do approximate planning in general for large state spaces, and so we can get compounding errors in that case. Now, another place that we saw compounding errors earlier in this course was when we talked about imitation learning. And we talked about if you had a trajectory and then you tried to do behavior cloning and learn mappings from states to actions, and how if you then got that policy and followed it in the real world, you might end up in parts of the state space where you didn't have much data and you could have sort of these escalating errors because, um, again it could compound. Once you get in parts of the state space where you don't have much data, and then you're extrapolating, then things can go badly. So similarly in this case, if you build a model and you compute a policy that ends up getting you to parts of the world where you don't have very much data and where your model estimate is poor, then again your resulting value function in your policy might be bad. I guess I'll just mention one other big advantage I think of with model-based reinforcement learning is that it can also be very powerful for transfer. 
So when Chelsea was here and talking about meta-learning, one of the nice benefits of model-based RL is that if you learn a dynamics model of the world, then if someone changes the reward function, implicitly you can just do zero shot transfer, because you can just take your learned model of the dynamics and then your reward function, you can just compute a new plan. So like if I'm a robot and I learned how to navigate in this room, and so like now I know like, you know, what it's like to turn and what it's like to go forward, etc., and before I was always trying to get to that exit. But now I know what dynamic- I know my dynamics model in general for this room. And then someone says, "No. No. No. I don't want to go to that exit because that one's, you know, closed or something. So go to that other exit." And they tell me the reward function. They say, you know, there's a +1 for that exit now instead of there. Then I can just re-plan with my, my dynamics model. So I don't need any more experience. I can get zero shot transfer. So that can be really useful. So that's one of the other reasons why you might want to just build models of the world in general. And there's some interesting evidence that when people play Atari games, that they are probably systematically building models. What happens when I move the iceberg next to the polar bear? And because then you can generalize those models to other experience. Okay. So how are we gonna write, write down our model in this case. We're again just gonna have our normal state, action, transition, dynamics and reward, and we're gonna assume that our model approximately represents our, our transition model and our reward model. So we're assuming the Markov assumption here. So we can represent our next state is just the previous state and action in a distribution over that, and we'll similarly have that for the- for the reward. And we, we typically assume things are conditionally independent, like what we've done before. So we just have a particular dynamics model that's conditioned on the state and action and a reward that is conditioned on the previous state and action. And so if we wanted to do model learning, then we have the supervised learning problem that we've talked about a little bit before of, uh, you have the state and action, and you want to predict your reward in next state. And so we have this regression problem and this density estimation problem, then you can do it in all sorts of ways. You can, uh, you know, use mean squared error, you can use different forms of losses. Um, and in fact, one of the ways we've recently made progress on our off-policy reinforcement learning is by using different forms of losses than standard sort of maximum likelihood losses. Uh, but generally here, we're gonna talk about maximum likelihood losses. So we can just do this, and of course, in the- in the tabular case this is just [NOISE] counting. So if you just have a discrete set of states and actions, you can just count how many times did I start in this state and action, and go to state one, versus how many times they start in this state and action and go to state two. And so you just count those up and then normalize. And in general, there's a huge number of different ways that you can represent these. Uh, and I think one of the ones that I think is particularly interesting is Bayesian, not the- Bayesian Deep Neural Networks. They've been pretty hard to tune so far. Oh, another policy, you know, Bayesian deep neural networks. 
[NOISE] Um, I think one of the reasons those could be really powerful is they can explicitly represent uncertainty, but so far they've been pretty hard to train. But I think that there's, you know, a lot of really- there are some really simple models we can use, as well as some really rich function approximators, for these models. Okay. So if we're in the table lookup case, we're just averaging counts. So we're just counting, as I said, these state, action, next state tuples and dividing by the number of times we've taken, uh, that action in that state, and we similarly just average all the rewards- so this should be the rewards seen for the times that we were in that state, took that action, and whatever reward we got on that time step. So let's think about an example of what that looks like here. So a long time ago, we introduced this AB example, where we have, um, a state A that goes to a state B, and then after that, it either goes to a terminal state where it gets a reward of 1 with 75% probability, or it goes to a terminal state where it gets a reward of 0 with 25% probability. And imagine that we've experienced something in this world- so there are no actions here, there's a single action; it's really a Markov reward process rather than a decision process- but we can get our observations. So let's say we start in state A, and then we got a reward of 0, and went to B and got a 0. And then we had a whole set of times- we have 6 times- where we started at B and got a reward of 1, and then one more where we started in state B and got a reward of 0. And now we can construct a table lookup model from this. And just to refresh our memories, um, we talked about the fact that if you do temporal difference learning in this problem with a tabular representation, meaning one row for each state- so that's just two states, A and B- then if you do infinite replay on this set of experience, it's equivalent to taking this data, estimating a Markov decision process model with it, and then doing planning with that to evaluate the optimal policy, or the policy that you're using to gather the data in this case. So that was an interesting equivalence: TD is, um, giving you exactly the same solution as if you compute what's often called a certainty equivalence model, because you take your data and you take the empirical average of that data. So you can say, "If this was all the data in the world, what would be the model associated with that, with a maximum likelihood estimate?" And then we do planning. So TD makes that assumption. Let's just do a quick check of memory. Do Monte-Carlo methods converge to the same solution on this data? So maybe take a minute, turn to somebody next to you, and decide whether or not they do, and why or why not. Do you have a question? Uh, as an offering, yes. [LAUGHTER] Okay. I think that Monte-Carlo methods will converge to the solution with the minimum MSE, as opposed to the MLE? Correct. The Monte-Carlo methods do not make a Markovian assumption. Um, so they are suitable in cases where the domain is not Markovian. So in this case- well, in all cases for this policy evaluation- they're gonna converge to the minimum mean squared error. Yeah, question? So you're saying that if you use Monte Carlo you'd converge to the minimum MSE- what if you are using a different loss? [OVERLAPPING] Good question.
This is- this is- this is going to converge to the minimum MSE not the MLE. [inaudible] If you are using a different loss [inaudible]. Would the Monte Carlo methods converge to the- I mean, depending on the loss or if you regularize. It's a great question, if you regularize it may converge to different solutions than the minimum mean squared error depending on how you regularize or the loss you use. But in general, it will not converge to the same thing as if you got the maximum likelihood estimate model, and then did planning with that or policy evaluation. And the key difference is Monte Carlo is not making a Markovian assumption. So it does- it does not assume Markov. And so in particular in this case, um, because I may have a guess of what the value of A will be under the Monte Carlo estimate. There's only one sample of it. 0. Yeah, um, yes. So there's only- for Monte Carlo here, we'll say I- I'm only looking at full returns that started with this particular state, and there's no bootstrapping. So, um, the only time we saw A was when the return from A was 0. Uh, but, you know, if the system is really Markov, that's not a very good solution because we have all this other evidence that B is actually normally has a higher value, and we're not able to take advantage of that, um, whereas TD does. So TD can say, "Well, I know that V of A was 0 this one time." But in general, we think that V of A is equal to the immediate reward plus, in this case there's no discounting, so value of B. And I have all this other- other evidence that the value of B is, in this case, actually exactly equal to 0.75, um, because we have six examples of it being 1, and two examples of it being 0. So we would, uh, have V of B is equal to 0.75, and both Monte Carlo and, um, TD would agree on that. Because for- if you look at every time you started on B, 75% or how- you know, 75% of time you got a 1, the rest of the time you got a 0. So the- the value of that is 0.75. So the TD estimate would say also V of A is equal to 0.75, the TD estimate. So one of the reasons this comes up here, a- and notice this is a- this is not due to a sort of finite number of backup, or sorry, I'll be careful, a finite amount of use of the data. So this is saying, if you sort of run this through TD many, many times, and the Monte Carlo estimate is also getting access to all of the data. It's just saying this is all the data there is. So an alternative would be, if you take this data and you build a model. So now we have a model that says, the probability of going to B given you started in A is 1, you always go from- from, um, A to B. In fact, you've only ever seen this once, but the one time you saw it, you went to B, uh, and we can use this to try to get simulated data. So let me just- well, I'll go a couple more. So the idea in this case is that once you have your simulator, you can use it to simulate samples, and then you can plan using that simulated data. Now, initially, that might sound like why would you do that because you hide your previous data and maybe you could have directly, you know, put it through a model-free based approach, like why would you first build a model, and then generate data from it. But we'll see an example right now from that sort of AB example of why that might be beneficial. So you- what we can do is we can get this- we can get the maximum likelihood estimate of the model or other estimates you might want to use, and then you can sample from it. 
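As a concrete illustration of the counting-and-averaging model and of the two answers above, here is a minimal sketch with an assumed data format and function name (an editor's illustration, not course code). With gamma = 1, as in the lecture's no-discounting setting, planning with this model gives V(A) = 0 + 0.75 = 0.75, while the raw Monte Carlo estimate of A stays at 0:

    from collections import defaultdict

    def fit_tabular_model(transitions):
        """transitions: list of (s, a, r, s_next); returns MLE transition probs and mean rewards."""
        counts = defaultdict(lambda: defaultdict(int))
        reward_sum, visits = defaultdict(float), defaultdict(int)
        for s, a, r, s_next in transitions:
            counts[(s, a)][s_next] += 1
            reward_sum[(s, a)] += r
            visits[(s, a)] += 1
        P = {sa: {s2: c / visits[sa] for s2, c in nxt.items()} for sa, nxt in counts.items()}
        R = {sa: reward_sum[sa] / visits[sa] for sa in visits}
        return P, R

    # The AB data: one A -> B transition with reward 0, plus eight B -> terminal
    # transitions, six with reward 1 and two with reward 0.
    data = ([("A", "go", 0.0, "B")]
            + [("B", "go", 1.0, "T")] * 6
            + [("B", "go", 0.0, "T")] * 2)
    P, R = fit_tabular_model(data)
    # P[("A", "go")] == {"B": 1.0} and R[("B", "go")] == 0.75, so the
    # certainty-equivalence / TD answer is V(A) = 0 + 1.0 * 0.75 = 0.75,
    # while first-visit Monte Carlo on the raw episodes leaves V(A) = 0.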
So in that example we just had here, our estimated transition model, is that whenever we're in A we go to B. So when I'm in A, I can sample and I will go to B, and that generates me a fake data point. Okay? And I could do this a whole bunch of times, we get lots of fake data. Now, this fake data may or may not look like what the real-world does, it depends how good my model is. But it's certainly data I can train with, and- and we'll see, in a minute, in a second why that's beneficial. Okay? So if we go back to here, this is the real experience on the left. So on the left-hand side we had all this real experience, and then what we did is we built a model from that, and then we could sample from it. So we could have experience that looks very similar to the data that we've actually seen, but we could also have experience like this. Now, why could we have that, because we now have a simulator, and, uh, in our simulated- In our model, we've seen cases where we started in A and we went to B. And in our model there have been other times where we've started in B, and we've got a 1. So essentially we can kind of chain that experience together, and simulate something that we never observed in the real world. And we're leveraging the fact here that it's Markov. So if the domain isn't really Markov, we could end up getting data that looks very very different than what you could ever get in the real world. But if it's Markov, then, um, it may still be an approximate model because we only had a limited amount of data training our model. But now we can start to see conjunctions of states and actions, uh, that we maybe never saw in our data. But we could update [NOISE] our model as we sample? Uh, well, okay great question. Could you update your model as you sample? You could, but right now we're just sampling from our model. So this, this is not real-world experience. So that could lead to confirmation bias, because it's like your model is giving you data, and if you treat that as real data, and put that like into your model. It's not from the real world. So you could end up sort of being overly confident, in, um, uh, because you're generating fake data and then treating it as if it's real. How do we judge how confident we would be in our sample's experience? I guess like relative to how much training data we'd have to put in the model. Exactly. So that's like, how would be, how do we know how confident to be and, And in general this is the issue of your models are gonna be pretty bad. Sometimes if you have limited amounts of data. So some of the techniques we talked about for exploration, where we could, uh, drive these confidence intervals over how good the models are, they apply here as well. So, um, if you only have a little bit of data you can use things like Hoeffding to say, how sure am I about this reward model for example. Um, for most of today, we're not gonna talk about that that much, but you can use that information to try to quantify how uncertain should you be, and how would that error kind of propagate. Yeah. Um, so I guess I'm trying to like conceptually think about the next step is that we're, we're building this model. We're gonna use a method to learn some sort of a policy or some sort of like, way to act in the real world. If we have the model, can we just used a model when you are acting and just basically run our state through the model and get, maybe like a distribution and just take the maximum action, the m- The action that maximizes our reward? 
So, I guess, once you have the model you could use it in lots of different ways to do planning. So one is you could do, if it's a small case, like here it's a table. So you could use value iteration and solve this exactly. There's no reason to simulate data. Um, but when you start to think about doing like Atari or other really high-dimensional problems, the planning problem alone is super expensive. And so you might still want to do model-free methods for your planning with your simulated data. And one of the reasons you might want to do that is because, um, we've, we've talked about different ways in Q learning to be more sample efficient like you have a replay buffer and you can do episodic replay. But another alternative is you just generate a lot of data from your simulated model, and then you replay over that a ton of times. And so that's another way to kind of, um, make more use out of your data. Yeah, question in the back. If, um, if you don't want to make the Markov assumption, can you so do the same but, uh, condition on the past of [inaudible]? Yeah. Question, what if you don't wanna make the Markov assumption? Yes, and can you condition on the past, you absolutely can. Um, that means you would build models that are a full function of the history. The problem with that is, you don't have very much data. So you have to, if you want to condition on the entire history as essentially your state, you're always fine in terms of the Markov assumption, but you'll just have really really little data to estimate the transition models. Particularly as the horizon goes on. So it's often a trade off, like do you want to have better models? Well, it depends on your domain. Maybe it's really Markov. If it's not really Markov, do you want better models with very little data? So, um, in general this sort of gets to the function approximation error versus the sample estimation error. You can have error due to the fact that you don't have much data. Or you can have error due to the fact that your function approximation is wrong. And often you're going to want to trade off between those two. So, so this example, you know, you can end up getting this sampled experience that looks different than anything you've seen in reality. Um, and then in this case if you now run Monte Carlo on the new data, you may, you can get something that looks very similar if you've done TD on the original data. So basically, instead of taking our real experience and then doing Monte Carlo using that for policy evaluation. An alternative is to say, we really think that this is a Markov system, let's simulate a bunch of data and then you could run Monte Carlo learning on this, or TD learning on it. Um, and you would get probably the same answer. So this is, you know, in contrast to what Monte Carlo would've converged to before which was V(A) = 0 and V(B) = 0.75. Now again you might say, okay but do I really want to do this, like if, I, if I really didn't think the system was Markov, I wouldn't have run Monte Carlo on my fake data either. But I think this illustrates, um, what you can do once you have this sampling and just shows that allows you to make all sorts of choices. So maybe there you wanna have sort of a two-step Markov process or you want to do different, make different assumptions in your model. And then after that you wanna do a lot of planning with it. So that just allows you to first take your data, compute a model, and then you can decide how you're going to use that to try to do planning. 
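Here is a minimal sketch of the sample-from-the-model-then-evaluate idea above. It reuses the P, R dictionaries from the earlier tabular-model sketch; that reuse, and the simplification of using the model's mean reward, are editorial assumptions rather than anything from the lecture:

    import random

    def simulate_episode(P, R, start, action="go", terminal="T", max_steps=100):
        s, episode = start, []
        for _ in range(max_steps):
            if s == terminal:
                break
            episode.append((s, R[(s, action)]))                 # model's mean reward for (s, a)
            next_states = list(P[(s, action)].keys())
            probs = list(P[(s, action)].values())
            s = random.choices(next_states, weights=probs)[0]   # sample s' from the learned model
        return episode

    # Averaging the returns of simulated episodes from "A" is Monte Carlo on the fake data:
    # returns = [sum(r for _, r in simulate_episode(P, R, "A")) for _ in range(10_000)]
    # sum(returns) / len(returns)   -> 0.75, matching the TD / certainty-equivalence answer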
And we'll see a particular way to do that shortly. Now as you guys were just asking me about, um, if we have a bad model then we're gonna compute a sub-optimal policy in general. You know, we might be incredibly lucky. Um, because ultimately for your decisions, we only need to decide whether, you know, V(s,a1) is greater than V(s,a2). So ultimately we're gonna be making comparison, pairwise comparison decisions. So you might be, have a really bad model and still end up with a good policy. Um, but in general, if your model is bad and you do planning in general, your policy is also gonna be sub-optimal. Um, and so one approach if the model is wrong, um, is to do model-free reinforcement learning on the original data. So not to do model-based planning. It's not clear that always solves the issue. So it depends why your model is wrong. Um, if your model is wrong because you've picked the wrong parametric class or because the system is not Markov, you're doing Q learning that's not gonna solve your problem because Q learning also assumes the world is Markov. So model-free, you know, it depends on why, you know, why is it wrong? It depends on why? Whether or not this helps. Um, another is to reason explicitly about your model uncertainty. And this is goes back to the exploration, exploitation. Now again this only deals with a particular form of wrongness. Um, it deals with the fact that you might have sampling estimation error. But it's still making the basic assumption that your model class is correct. So for example, if you're modeling the world as, um, a multinomial distribution, and you don't have very much data then your prior metric estimates will be off. But if the world really isn't a multinomial, um, then all bets are off. So it's always good to know sort of what assumptions we're making in our algorithms and then what sort of forms of uncertainty we're accounting for. Now I guess I'll just say, say one other thing which is a little bit subtle which I find super interesting which is, um, if you have a really good model, it is, a generally said it or if you have a perfect model, and perfect estimate of the parameters, that is sufficient to make good decisions. If you were trying to train your model and you have a restricted amount of data or restricted sort of expressivity of your model, um, then a model that has higher predictive accuracy can in fact be worse when you use it to make decisions. And the intuition I like to think of is this, we, we discovered this a few years ago, um, others have discovered it too, we were thinking about it for an intelligent tutoring system. Um, the challenges, let's imagine that you have like a really complicated state-space. Say, um, you're trying to model what a kitchen looks like when someone's making tea, and there's all sorts of features, you know, like there is steam, maybe it's a sunset outside or when there's also the temperature of the water. And in order to make tea you need the water to be over 100 degrees. And in fact that's the only feature you need to pay attention to in order to successfully make tea. But if you were trying to just do, um, build a model of the world, you're trying to model the sunset, you're trying to model the steam etc. And there can be a huge number of features that you're trying to capture in your sort of, you know, maybe slightly improvisional network. And so if you just try to fit the best maximum-likelihood estimate model, it might not be the one that captures the features that you need to make decisions. 
And so models that look better in terms of predictive accuracy may not always be better in terms of decision making. So that was this issue we encountered a few years ago, and I think it just illustrates why the models that we need to build when we want to make decisions are not necessarily the same models we need for predictive accuracy. So it's important where possible to know which of your features do you care about in terms of utility and value. Okay. So let's talk a little bit about simulation-based search. Um, who here has seen forward search before in one of their classes? Okay, a number people, but not everybody. Um, so forward search algorithms. What we're going to talk about now is different ways instead of doing Q learning on our simulated data. Okay, we've got a model, what's another way we can use it to try to make decisions? One way is to do forward search. So how does forward search work? The idea in forward search is that we're gonna think about all the actions we could take. So let's say here we just have two actions, A1 and A2. And then we're going to think about all the possible next states we might get to. So let's say it's a fairly small world, so we just have S1 and S2. So in this current state, I could either take action one or action two and after I take those actions I could either transition to state one or state two. And then after I get to whatever I state- I get state, I get to, I again can make a decision A1 or A2. Because that's my action space here. And then after I take that action, I again my transition or maybe sometimes I terminate it. So this is a terminal state. Maybe my robot falls off or falls down or things like that, or maybe else I go to another state. [NOISE] And I can think of sort of like these branching set of futures, and I can roll them out as much as I want to. Let's say I want to think about the next h steps for example. And then I halt. And once I have that, I can use that information and the- my sort of reward- well let me just say one more thing which is, um, as I do these sort of like simulated features, I can think about what the reward would I get under these different features. Because right now I'm assuming I have a model. So this is given a model. So I can think about if I took this state as t and took action a2 what reward would I get? And then down here, I can think about if I got what reward I would get in s2, a2. So I can think of sort of generating different features and then summing up the rewards, um, it will give me a series of reward along any of those different paths. And then in order to figure out the value of these different sort of actions or the best action I should take, what I can do is I can take a max over actions and I can take an expectation over states. And I always know how to take that expectation because I'm assuming I have a model. So I can always think about what would be the probability of me getting to that particular state given my parent action and the state I'm coming from. So what happens, in this case, is as I roll out all these potential futures up until a certain depth. In the case of Go or something like that, it would be until you win the game or lose the game. And then I want to back up. So you can think of this [BACKGROUND] as sort of doing a not very efficient form of dynamic programming [NOISE]. Because, why is it not so efficient? Because there might be many states in here which are identical. And I'm not- I'm not aliasing those or I'm not treating them as identical. 
I'm saying I'm going to separately think about, for each of those states I would reach, what would be the future reward I would get from that state under different sequences of actions and resulting states. And then if I want to figure out how to act, I go from my leaves and I take a max over all the- so let's say in this case I just made up like a small symbolic one. This is a1, a2, and let's say at that point I terminate. So I got, like, r(s, a1) and r(s, a2). So anytime I have a structure that looks like that, what I do is I take a max, and the reward here is equal to whichever of those was bigger. Let's imagine that it was a2 that was bigger. So if I want to compute this, I basically roll all of these out- computing, like getting a sample of the reward and the next state- as I go all the way out. And then to get the value at the root, whenever I see a series of action nodes, I take a max over all of them, which is just like in our Bellman backup, where we're taking a max over the action that allows us to have the best future rewards. And then whenever I get to states- I'll do another one here. So let's say now I have two states, s1 and s2, one of them has value V(s1) and this one has value V(s2). And I want to figure out, for this action, what my new value is. Then this is going to be equal to the probability of s1 given- let's say I came from s0- P(s1 | s0, a1) times V(s1), plus P(s2 | s0, a1) times V(s2). This is exactly like, uh, um, when we do a Bellman backup: we think about all the next states I could get to under the current action and current state, times the value of each of those. Does that make sense? So we construct this tree out, and then in order to compute the value, we do one of two operations: either we take the max over the actions, if it's, uh, um, an action node, or we take an expectation. So these are called expectimax trees. Some of you guys might have seen these before in AI. Sometimes people talk about minimax trees- so if you're playing a game-theoretic setting where the other agent gets to try to minimize your value and you get to maximize it. This is very similar, except we're doing an expectation over next states and a max over actions, okay? And it's exactly like dynamic programming but more inefficient. But we're gonna see why we would want to do that in a second. So does anyone have questions about this? Okay. All right. So this is all- and the way we do this is that we have to have a model, because if we don't have a model right now, we can't, uh, compute this expectimax exactly, because we're using the fact that we know- like, we're only expanding two states here. Um, and in order to figure out how much weight we want to put on each of those two states, we need to know the probability of getting to each of them. And so that's where we're using the model here, and we're using the model to get the rewards. So simulation-based search is similar, um, [NOISE] except we're just going to simulate out with a model. We're not going to compute all of these sort of, um, exponentially growing numbers of futures. Instead, we're just gonna say I'm gonna start here, and I have a model and I need to have some policy here. But let's say I have a policy pi, and then I just use that. So I look at my policy for the current state and it tells me something to do. So I follow that action, and then I go into my model and I sample an s prime.
So I look up my model and I say, "What would be the next state, given that I was in this particular state and took that action and I just simulate one next state?" This is just like how we could have simulated data from our models before. So let's say that got me to here, which was state s1. And then I look up again. I look up to my policy and I say, "What is the policy for s1?" Let's say that's a2 and then I also- then I follow it down to here. So just simulate out a trajectory. Just following my policy, simulating it out and I go until it terminates. And that gives me one return of how good that policy is. So in these sort of cases, we can just simulate complete trajectories with the model. Uh, and once we have those you could do something like model-free RL over those simulated trajectories, which either can be Monte Carlo or it could be something like TD learning. So if we think of this as sort of doing- like, in order to do that simulation we need some sort of policy. So you have to have some way to pick actions in our sort of simulated world when we think about being in the state and picking an action, how do we know what action to take? We follow our current simulation policy. So let's say we wanted to do effectively one step of policy improvement. So you have a policy, you have your model, and then you start off in a state and for each of the possible actions you simulate out trajectories. So this is like doing Monte Carlo rollouts. So I started my state. So this is, let's say I'm really in a state s_t, and I will need to figure out what to do next. So I start in that state s_t, and then in my head before I take an action in the real world, I think about all the actions I could take, and then I just do roll outs from each of them under a behavior policy pi, then I pick the max over those. So I'm really in state s_t, and then in my brain, I think about doing a_1, and then I do lots of roll outs from that under my policy pi. And then I do a_2 and do lots of roll outs out of that. a_3, this is all in my head, and that basically gives me now an estimate of Q s_t, a_1 under pi. So it's as if I was to take this action then follow pi, what would be my estimated Q function, then I do that for each of the actions, and then I can take a max. So this is sort of like doing one step of policy improvement, because this is going to depend on whatever my simulation policy is, does that make sense? So we have some existing simulation policy, I haven't told you how we get it, and then we use it to simulate out experience. Okay. So the question is whether or not we can actually do better than one step of policy improvement, because like how do we get these simulation policies? Like, okay, if we had a simulation policy and it was good, we could do this one step, but how could we do this in a more general setting? Well, the idea is that, um, if you have this model, you could actually compute the optimal values by doing this Expectimax Tree. So like I was in this state St, and instead of just thinking about- so remember in simulation based search we're just going to follow out one trajectory, but in the Expectimax tree we can think of, well, what if I did a1 or a2, and after that, whether I went to S1 or S2, and then what action should I do there? And I can think of basically trying to compute the optimal Q function under my approximate model for the current state, okay? The problem with that is that this tree gets really big. 
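Before moving on, here is a minimal sketch of that rollout-based one-step policy improvement, assuming a simulator sample_next(s, a) that returns (next state, reward, done) and a simulation policy pi(s); those names, the rollout horizon, and the number of rollouts are made-up stand-ins, not part of the lecture.

    def rollout_return(s, pi, sample_next, horizon, gamma=1.0):
        # Simulate one trajectory from s by following policy pi inside the model,
        # and return the (discounted) sum of rewards along the way.
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            a = pi(s)
            s, r, done = sample_next(s, a)
            total += discount * r
            discount *= gamma
            if done:
                break
        return total

    def one_step_improved_action(s, actions, pi, sample_next, horizon, n_rollouts=100):
        # Estimate Q^pi(s, a) for each candidate first action by Monte Carlo rollouts
        # that follow pi afterwards, then act greedily with respect to those estimates.
        q_estimates = {}
        for a in actions:
            returns = []
            for _ in range(n_rollouts):
                s_next, r, done = sample_next(s, a)
                g = r if done else r + rollout_return(s_next, pi, sample_next, horizon - 1)
                returns.append(g)
            q_estimates[a] = sum(returns) / n_rollouts
        return max(q_estimates, key=q_estimates.get)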
So in general, um, the size of the tree is going to scale at least with S times A to the H. If H is the horizon, because at each step, this is why it's not as efficient at dynamic programming, at- at each step you're going to think about all the possible next states, and then all the possible next actions. And so this tree is growing exponentially with the horizon. And if you think of something like AlphaGo, um, that are playing, you know, for a number of time steps before someone wins or loses, this H might be somewhere between 50 to 200. So if you have anything larger than an extremely small state space like one, then this is not gonna be, not gonna be feasible, okay? So the idea with a Monte Carlo Tree Search is that, okay we'd like to do better. In any case we need some sort of simulation policy if we're going to do this at all, and we can't be as computationally intractable as full Expectimax. So how do we do something in between? So- so the idea with Monte-Carlo Tree Search is to try to kind of get the best of both worlds, where what we'd really like is the Expectimax Tree where we think about all possible futures and take a max over those, um, but instead, we need to do that in a little bit more computationally tractable way, and why might that be possible? Well, let's think about this. If we have our initial starting state, let's say this is our general tree, that's going to all of these nodes. Some of these potential ways of acting might be really really bad. So some of these that might be clear, very early, like with very little amounts of data that, or very little amounts of roll outs that in fact, these are ways you would never want to play Go, because you're going to immediately lose. And so then, you don't need to bother sort of continuing to spend a lot of computational effort fleshing out that tree when something else might look much better. So that's kinda the intuition here, is that what we're gonna do is we're going to get us, construct a partial search tree. So we're going to start at the current state, and we can sample actions in next state, just like in simulation based search. So maybe we first sample A1, and then we sample S1 again, and then we sample A2. So we start off and we're kind of, the very first round it looks exactly like simulation based search, but the idea is that then we can do this multiple times and slowly fill out the tree. So maybe next time we happen to sample A2, and then maybe we sample S2, and then sample A1, and so you can think of sort of slowly filling in this Expectimax Tree. And in the limit, um, you will fill in the entire Expectimax tree. It's just that in practice you almost never will because it's computationally intractable. So what we're gonna do is do this, and we'll do this for, sort of a number of simulation episodes, each simulation episode can be thought of is you start at the root. This is the root node and current state you're really at, and then you roll out until you reach a terminal state or a horizon H, and then you go back to the start state and you again make another trajectory. And when you're done with all of this, you can do the same thing you would do in Expectimax, in that you're always gonna take a max over actions and an expectation over states. You only will have filled in part of the tree, so part of the tree might be missing. So in order to do this, there's two key aspects. One is, what do you do in parts of the tree where you already have some data? 
So like if you already have tried both actions in a state, which action should you pick again, and then what should you do when you reach like a node where there's nothing else there or like there's only been one thing tried so far? So this is often called the tree policy and the roll out policy. The roll out policy is for when you get to a node where, you know, you've only tried one thing, or there's no more data, or you've never been there before. So for example, maybe you sample a state that you'd never reached before, and so that's now a new node, and from then on you do a roll out policy. We'll show an example of this in a second. And then the idea is, when we're thinking about computing the Q function, we're just gonna average over all rewards that are received from that node onwards. This should seem a little bit weird, because we're not talking about maxes anymore, and we're not talking about doing- considering explicitly like the expectation over states in a formal way, we're just gonna average this. The reason why this is okay is because, um, we're gonna, sort of sample actions in a way such that over time, we're gonna sample actions that look better much more, and so we expect that, uh, eventually, the distribution of data is gonna converge to the true Q. Yeah. Just to confirm, is it [inaudible] the simulation before, um, there's different kind of averaging and moving parts because it seemed before we were also doing a bunch of roll outs and then combining this, so that part is still the same, yes? Yes, great question, in many cases it's very similar though. We're still gonna be sort of doing a bunch of simulations where we're gonna be averaging them. The question is, what is the policy we're using to do the, the, uh, the roll outs is generally going to be changing for each roll out, instead of being identical across all roll outs, and then the way we are gonna average them is also different. And really the key part I think is that, instead of using, um, like when I said for, um, simple Monte Carlo search, that assumes that you fix a policy and get a whole bunch of roll outs from that policy, just starting with different initial actions, but then always following it. When you do Monte Carlo Tree Search and you also do say k roll outs, generally, the policy will be different for each of the k roll outs, and that's on purpose so that you can hopefully get to a better policy. So again, just to, and just to step back again, what are, you know, what's the whole loop of happening here? So what's happening in this case is like you have your agent, is our robot and it's trying to figure out an action to take, and then the real-world gives back a S prime and an R prime, and then what we're talking about right now is everything it needs to do in its head, like all these roll outs in order to decide the next action to take. So it's going to do a whole bunch of planning before it takes its next action, generally it may then throw that whole tree away, and then the world is gonna give it a new state and a new reward and then it's gonna do this whole process together again. And so the key thing is that we need to decide what action to take next. And we want to do so in a way that we're gonna get the best expected value, given the information we have so far. Now in general, um, in Monte Carlo Tree Search, you might also have another step here where you might compute a new model. 
So if you're doing this online, you might take your most recent data, retrain your model, rerun Monte Carlo Tree Search given that model, and then decide what you're going to do on the next time step. All right. So the key thing is really this tree policy and the roll out policy; both of them make a difference. The roll out is often done randomly, at least in the most basic vanilla version -- there are tons of extensions for this -- and the key part is the tree policy. So one of the really common ways to do this is called upper confidence tree search, and this relates to what we've been talking about for the last few weeks with exploration. So the idea is that when we're rolling out -- let's say we're in s_t and we're using our simulated model to think about the next actions and states we would be in -- let's say I get to state one. And at this point, let's say I've taken a1 and a2 in the past and I've done a number of roll outs from each of them. And let's say three times I won the game, so three 1s, and from the other action I got one 1 and one 0. These guys. So the key question is: what action should I take the next time I encounter s1 during a roll out? And the idea is to be optimistic with respect to the data you have so far. So essentially, we're going to treat this as a bandit. Think of each decision point as its own independent bandit and say: for each of the actions I could take from my current node, what is the average reward I got from that action over all of the roll outs I've taken from that particular node and action, and how many times did I take it? So you get your empirical average for that node in the tree and action a1, plus an exploration bonus that depends on the number of times you've visited that particular node and taken that action. Okay. This is really per node in the tree, and the counts are also for that node. So we think about: every time I've been at that node and taken that particular action, what's the average reward I've gotten, plus how many times have I done that? And it just uses optimism again. It says: when I've reached different parts of the tree before using my simulated model, which things looked good? I'm going to focus on making sure that that's the part of the tree I flesh out more, because that's the one where I think I'm likely to reach policies that have higher value. And hopefully that means I need less computation in order to compute a good policy. Does that make sense? So if you had an oracle that could tell you what the optimal policy was, then you would only need to fill in that part of the tree. And what we're doing here is saying: given the data we've seen so far, focus on the parts of the tree that seem like they're going to be the ones that, when we take our max over actions, end up propagating their values back up to the root. All right. So we maintain an upper confidence bound over the reward of each arm, using essentially the same machinery as what we've done before, and we're treating each state node as a separate bandit. And that means essentially that the next time we reach that same node in the tree, we might do something different, because the counts will change.
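Here is a minimal sketch of that per-node bandit rule in the standard UCB1-style form; the field names on the node (N, W, visits, actions) and the exploration constant c are illustrative choices, not something the lecture pins down.

    import math

    def uct_action(node):
        # Pick an action at a tree node: empirical mean return plus an exploration
        # bonus that shrinks as that (node, action) pair is tried more often.
        # node.N[a]   = number of times action a was tried at this node
        # node.W[a]   = sum of the returns observed after trying a here
        # node.visits = total visits of this node
        c = 1.4                                   # exploration constant (a tuning choice)
        def ucb(a):
            if node.N[a] == 0:
                return float("inf")               # try every action at least once
            mean = node.W[a] / node.N[a]
            bonus = c * math.sqrt(math.log(node.visits) / node.N[a])
            return mean + bonus
        return max(node.actions, key=ucb)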
TD, uh, uh, [inaudible] this is kind of like TD in, in the sense that [NOISE] if you reward for the bandit problem, the value of that, of the, of the state or if the, um, existing action pair or is it the reward for just that transition? It's a great question. So, and what is the reward for this bandit? We are gonna treat it as, um, essentially the full roll out from that node, um, because that's what we're averaging over and we're cou- doing counts. It's not like TD in the sense that we're doing this per node. So as I mentioned before, you know, you might have s1, a1 appear in this part of the graph and s1 a1 appear over here, and we're not combining their accounts. We're treating every node as if it's totally distinct. Even though often the model we'll be using to simulate will be Markov. But in ter- in sort of the tree, you can do it, it's mostly that it becomes a lot more complicated for implementation. If you wanted to basically treat it as a graph instead of a tree. Now, I think the other point that you're bringing up, we'll come back to in a second which is, is this a good idea [LAUGHTER] is, is well, what are the limitations of the bandit setting? So we'll come back to that in a second. All right, so let's talk about this in the context of Go. For those of you that haven't played Go or aren't too familiar with it. It's at least 2,500 years old. It's considered the classic hardest board game. Um, and it's been known as a grand challenge task in AI for a very long period of time. Um, [NOISE] just to remind ourselves, um, this does not involve decision-making in the case where the dynamics and reward model are unknown, but the rules of Go are known. The reward structure is known, but it's an incredibly large search space. So if we think about the combinatorics of the number of possible, um, boards that you can see, it's, it's extremely large. So uh, just briefly there's two different types of stones. Um, most people probably know this. Uh, and typically it's played on a 19 by 19 board though people have also thought about, you know, some people play on smaller boards. And you want to capture the most territory, and it's a finite game because there's a finite number of places to put stones on the board. So it's a finite horizon [NOISE]. I'm gonna go through this part sort of briefly there's different ways to write down. You can write down the reward function, um, for this game in terms of, uh, you know, different, you could do different features. The simplest one is just to look at whether or not white wins or black wins on the game and in that case it's a very sparse reward signal and you only get reward at the very end. You just have to play out all the way and see which ga- which, uh, which person won. And then your value function is essentially what's the expected probability of you winning in the current state. So how does this work if we do Monte Carlo evaluation? So let's imagine this is your current board. And you have a particular policy. Then you play out against, um, a stationary opponent is normally assumed. And then, at the end you see your outcomes and maybe you won twice and you lost twice. And so then, the value for this current start state is a half. Okay, so how would we do Monte-Carlo tree search in this case? So you start off and you have one single start state. So at this point you haven't simulated any part of the tree. And so you've never taken any action, so you just sample randomly. So maybe you take a1. 
And then you follow your default policy, and this is often random. For things like AlphaGo you often want much better policies, but you can just use a random policy: you take random actions, get to next states, and you do this until the end and you see whether you win or lose. Now, in this case it's a two-player game, so we do a minimax tree instead of expectimax, but the basic ideas are exactly the same. So that's my first roll out. I'm going to do lots of these roll outs before I figure out where I'm actually going to place my piece. All right. So the next time -- this is my second roll out, the second roll out in my head [NOISE] -- I say, okay, well, last time I took this action, so this time I'm going to take the other action. Generally, you want to try all the actions at least once from the current node. Now, the particular order in which you try actions can make a big difference, and early on there were significant savings from putting in heuristics for what order to try actions in. For right now, let's just imagine you have to try all actions equally. So in this case, you take the other action, and after that you've never tried anything from that node, so you just do a roll out, again just acting randomly. Okay, and then you repeat this. So now, when I get back to this node -- let's say I've tried both actions before -- I have to pick, maybe using my UCT. So I look at the value for this one and the value for this one -- this is Q, to make it clear -- plus something about the counts. And I pick whichever action happened to look better in the roll outs I've done from it so far, and then I'm going to focus on expanding that part of the tree. And you keep doing this, and you slowly build out the tree, and you do this until your computational budget expires. And then you go to the bottom of the tree and you go all the way back up, where for each of the action nodes you're taking a max, and at each of the state nodes you're taking an expectation -- or in this case, minimax. You just construct a [inaudible]. So what does the opponent do -- is it like a stationary opponent, like, what does that mean? Great question. Okay. So, in reality, I think one of the other really big insights for why people got Go to work is self play. So typically, you would use the current agent as the opponent. I take whatever policy I just computed and use it for the other player -- let's imagine that I kept that tree, so it already knows what it could do. So at each point, I would have the other agent tell me what action it would take in that state. I think self play was an incredibly important insight for this. And why is it so important? Because if I play against a grandmaster in Go, I get no reward for a really long period of time. And that's an incredibly hard thing for an agent to learn from, because there's no other reward signal. So basically, you're just playing tons and tons and tons of games, and there's just no signal for a really, really long time. And you need some sort of signal so you can start to bootstrap and actually get to a good policy. If I play against me from, like, five minutes ago, I'm probably going to beat them [LAUGHTER]. Or at least half the time, maybe, I'll beat them.
And so that allows because you can have two players that are both bad and one of them's gonna win and one of them is gonna lose and you start to get signal. And so the self play idea has been hugely helpful in the context of games, like, two-player games. Because it can mean that you can start to get some reward signal about what things are successful or not and then you- It's both so, like, it both gives you, helps with this sparse reward problem and it gives you a curriculum. Because you're always kind of, only playing in an environment that's a little bit more difficult than perhaps what you can tolerate, can manage. And actually I think, um, it would be really, really cool if we could figure out how to take the same ideas to a lot of other domains. Like if there are other ways to essentially make self play for things like medicine or, [LAUGHTER] um, uh, customer relations or things like that. It would be really great because it's often really hard to get the sort of reward signal you want. And that's one of the really nice things here. Self-play, what if we get stuck in some local extrema? Absolutely can happen, yes. So what in self play, how does it arrive if you get stuck in, you could but you always try to max. So it's a little bit like policy improvement. You're always trying to do a little bit better. You're still trying to win. Um, so it's possible you could get stuck in a case where you're both just, you know, winning half the time, but then there should be something that you can exploit. And if there's something you could exploit, if you do enough planning you should be able to identify it. Yeah. Do you imagine that there would be this kind of transition point where you might get added benefit from transitioning to a more expert player to play against versus yourself? You need to kind of start slowly and ease into it but then you might actually do learn faster by playing against somebody harder modes. Yeah, question is, like, you know, would you also always wanna kinda just, like, self play against, you know, yourself five minutes ago or maybe at some point you- It will be more efficient to go at someone harder. I think it's a great question. I think probably that's the case. Like, probably there will be cases where you could do bigger curriculum jumps, um, and that might accelerate learning. But I think it's a tricky sweet spot there if like, you still need to have enough reward signal to bootstrap from. Absolutely. All right. So, you know, the benefits of doing this is it becomes this highly selective best-first search, because you're sort of constructing part of the tree but you're constructing it in a very specific way. And the goal is that you should be way more sample efficient than doing Expectimax, making the whole tree but you're gonna be much better than doing just a single step of policy improvement, with some fixed, um, you know, simulation-based policy. And it's also, you know, parallelizable anytime, anytime in the sense that like, whether you have one minute or you have three hours to compute the next action. And you know, three hours can be very realistic if it's something like, you know, a customer recommendation article or a thing that's gonna make or maybe you're gonna make one decision per day and so you can run it overnight for eight hours and then compute that one decision. So, um, it allows you to, to take advantage of the computation you have but then always provide an answer no matter how quickly you need to do that, cause you just do less roll outs. 
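Pulling the pieces above together, here is a compact single-player sketch of the whole Monte Carlo Tree Search loop (selection with the UCT rule, expansion, a roll out with the default policy, and backup), reusing the uct_action and rollout_return helpers sketched earlier. The sample_next(s, a) simulator interface, the Node fields, and the undiscounted backup are all illustrative assumptions; the two-player Go version would additionally alternate max and min at the opponent's nodes.

    class Node:
        # One state node in the partial search tree; field names are illustrative.
        def __init__(self, state, actions):
            self.state, self.actions = state, actions
            self.visits = 0
            self.N = {a: 0 for a in actions}      # per-action visit counts at this node
            self.W = {a: 0.0 for a in actions}    # per-action summed returns
            self.children = {}                    # (action, next_state) -> Node

    def mcts(root_state, actions, sample_next, rollout_policy, n_simulations, horizon):
        root = Node(root_state, actions)
        for _ in range(n_simulations):
            node, path, rewards, done = root, [], [], False
            # 1. Selection: walk down the already-built tree with the UCT rule.
            while not done and all(node.N[a] > 0 for a in node.actions):
                a = uct_action(node)
                s_next, r, done = sample_next(node.state, a)
                path.append((node, a))
                rewards.append(r)
                key = (a, s_next)
                if key not in node.children:
                    node.children[key] = Node(s_next, actions)   # 2. Expansion
                node = node.children[key]
            tail = 0.0
            if not done:
                # 3. Simulation: take an untried action, then finish with the roll out policy.
                untried = [a for a in node.actions if node.N[a] == 0]
                a = untried[0] if untried else rollout_policy(node.state)
                s_next, r, done = sample_next(node.state, a)
                path.append((node, a))
                rewards.append(r)
                if not done:
                    tail = rollout_return(s_next, rollout_policy, sample_next, horizon)
            # 4. Backup: credit each (node, action) on the path with the return from there on.
            G = tail
            for (n, a), r in zip(reversed(path), reversed(rewards)):
                G = r + G
                n.visits += 1
                n.N[a] += 1
                n.W[a] += G
        # Finally, act at the root greedily with respect to the estimated Q values.
        return max(actions, key=lambda a: root.W[a] / max(root.N[a], 1))

Each simulation only grows the part of the tree that the UCT rule steers it toward, which is exactly the "partial expectimax tree" idea from the lecture.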
Okay, um, I'm going to skip this for now. I just want to mention briefly -- I think this came up as a question -- that it's a little weird that we use bandits at each of the nodes. And intuitively, the reason it's a little bit weird is because in bandits, why do we do optimism under uncertainty? We do it because we're actually incurring the cost of making bad decisions. And so the idea with optimism under uncertainty is that either you really do get high reward, or you learn something. The weird thing about doing that for planning is that we're not suffering if we make bad decisions in our head. Essentially, we're just trying to figure out, as quickly as possible and in terms of value of information, what the right action is at the root. And so it doesn't actually matter if I simulate bad actions, if that allows me to make a better decision at the root. Let me just give a really quick example of where that could be different. So say you have something like this. Let's say these are the potential values of Q: this is the value of A1, this is the value of A2, and these are our uncertainties. Well, if you're being optimistic, you're always going to select this one, because it's got a higher value. But if you want to be really confident that A1 is better, you should select A2, because likely when you do that, you're going to update your confidence intervals and now you're going to be totally sure that A1 is best. But this approach won't do that, because it acts as if it's going to suffer the cost, in its head, of taking the wrong action, so it takes A1. But if ultimately you just need to know what the right action is, then sometimes, in terms of computation, you should take A2, because then your confidence intervals will likely separate and you don't need to do any more computation. So it's not clear that bandits at each node is the optimal thing to do, but it's pretty efficient. All right. So that's basically all I'm going to say about Go. There are some really beautiful papers about this, including the recent extensions, and they have applications to chess as well as a number of other games, which I think are really amazing results. So I highly encourage you to look at some of those papers. Let me just briefly pop back up to the end of the course, because it's the last lecture. All right. So I just wanted to refresh on what the goals of the course were as we end -- some of these you had a chance to practice on Monday -- and to say what I think of as the key things that I hope you got out of it. So one is: what are the key features of RL versus everything else, both AI and supervised learning? To me, it's really this key issue of the agent being able to gather its own data and make decisions in the world. And so it's censored data, which is very different from the IID assumption in supervised learning. And it's also very different from planning, because you are reliant on the data you get about the world in order to make decisions. The second thing is probably the thing that for many of you might end up being the most useful, which is just: given a problem, how do you figure out whether you should even write it down as an RL problem, and how do you formulate it?
And we had you practice this a little bit on, uh, Monday and also it's a chance to think about this a lot in some of your projects. But I think this is often one of the really hard parts. It has huge implications for how easy or hard it is to solve the problem. Um, and it's often unclear. There's lots of ways to write down the state space of describing a patient or describing a student or describing a customer. And in some ways it goes back to this issue of function approximation versus sample efficiency. If I treat all customers as the same, I have a lot of data. That's probably a pretty bad model. So there's a lot of different trade-offs that come up in these cases and I'm sure all of you guys will, um, think about interesting, exciting new ways to address that. And then the other three things were, you know, to be very familiar with a number of common RL algorithms, which you guys have implemented a lot. Um, to understand how we should even decide if an RL algorithm is good, whether it's empirically, compu- you know, in terms of its computational complexity or things like how much data it takes or its performance guarantees, um, and to understand this exploration exploitation challenge, which really is quite unique to RL. It doesn't come up in planning, doesn't come up in ML. Um, and again, it's this critical issue of like, how do you gather data quickly in order to make good decisions. Um, if you wanna learn more about reinforcement learning, there's a bunch of other classes, uh, particularly, Mykel Kochenderfer has some really nice ones. And also Ben Van Roy does some nice ones particularly looking at some of the more theoretical aspects of it. And then I do an advanced survey of it where we do current topics and it's a project-based class. Um, and I'll just I guess I'll, um, two more things. One is that I think, you know, we see some really amazing results, uh, Go is one example. Uh, and we're seeing-starting to see some really exciting results in robotics but I think we're missing, most of us do not have RL on our phone yet in the way that we have face recognition on our phone. And so I think that the potential of using these types of ideas for a lot of other types of applications is still enormous. Um, and so if you go off and you do some of that I would love to hear back about it. Um, uh, in my lab we think a lot about these other forms of applications. And I think another really critical aspect of this is thinking about when we do these RL agents, um, how do we do it in sort of safe, fair and accountable ways because typically, these systems are going to be part of, you know, a human in the loop system. And so allowing the agents to sort of expose their reasoning and expose, um, their limitations will be critical. So the final thing is that, um, it's really helpful to get you guys this feedback. Um, it allows us to improve the class for future years, um, either to make sure we continue to do things that you found helpful or that we stopped doing things that you didn't find helpful. So I'd really appreciate it if we could take about 10 minutes now to go through the course evaluations, um, and just feed it, uh, let us know what helped you learn, what things we could do even better next year. Thanks. [APPLAUSE] |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_6_CNNs_and_Deep_Q_Learning.txt | So, homework two is out now. I recognize that there's a really broad spectrum of background in terms of whether people have seen deep learning or not before, or taken a class on it, [NOISE] or used it extensively. [NOISE] Just a quick poll: which of you have used TensorFlow or PyTorch before? Okay. A number of you, but not everybody. So, what we're going to be doing in this week's sessions is having some more background on deep learning. You're not expected to be a deep learning expert to be in this class; you only need to have some basic skills in order to do homework two -- to be able to use function approximation with a deep neural network. So, I encourage you to go to session this week if you don't have a background in that. We're going to cover a little bit on deep learning today, but a very, very small amount, and focus more on deep reinforcement learning. [NOISE] The sessions will be a good chance to catch up on that material. Um, we're also going to be releasing, by the end of tomorrow, what the default projects will be for this class, and you will get to pick whether you want to do your own project or the default project. Those proposals will be due very soon, in a little over a week. Are there any other questions that people have right now? Yeah. The assignments, [inaudible] are they limited to TensorFlow? The question was whether the assignment is limited to TensorFlow. I'm pretty sure that everything relies on you using TensorFlow. Feel free to reach out on Piazza and double-check, but I'm pretty sure all of our auto-graders are just set up for TensorFlow, so for this one, even if you usually use PyTorch, please use TensorFlow. Um, I believe you should also have access to the Azure credits. If you have any questions about getting set up with that, feel free to use the Piazza channel. We also released a tutorial last week for how to set up your machine, so if you have any questions with that, that's a great place to get started: you can look at the tutorial, you can look at the video, or you can reach out to us on Piazza. Any other questions? All right. So, we're going to go ahead and get started. What we're going to be covering today is a very brief overview of deep learning, as well as deep Q-learning. In terms of where we are in the class, we have been discussing how to learn to make decisions in the world when we don't know the dynamics model or the reward model in advance. Last week, we were discussing value function approximation, particularly linear value function approximation, and today we're going to start to talk about other forms of value function approximation, in particular using deep neural networks. So why do we want to do this at all? Well, the reason we wanted to start thinking about using function approximators is that if we want to be able to use reinforcement learning to tackle really complex problems, we need to be able to deal with the fact that we're often going to have very high dimensional input signals or observations.
So, we want to be able to deal with pixel input like images, or to deal with really complex information about customers, or patients, or students, where we might have enormous state and action spaces. I'll note that today we're mostly not going to talk about enormous action spaces, but we are going to think a lot about really large state spaces. And when we started talking about those, I argued that we need representations -- of models, meaning the dynamics or reward models, T or R, or of state-action values Q, or values V, or of our policies -- that can generalize across states and actions. The idea is that we may in fact never encounter the exact same state again. You might never see the exact same image of the world again, but we want to be able to generalize from our past experience. And so we thought about this: instead of having a table to represent our value functions, we're going to use a generic function approximator, where we now have a w, which is a set of parameters. [NOISE] And when we thought about doing this, we said what we're going to focus on are function approximators that are differentiable. The nice thing about differentiable representations is that we can use our data to estimate our parameters, and then we can use gradient descent to try to fit our function -- to represent our Q function or our value function. So, I mentioned last time that most of the time we're going to quantify the fit of our function, compared to the true value, as a mean squared error. So, we can define our loss J, and we can use gradient descent on that to try to find the parameters w that optimize it. And just as a reminder, stochastic gradient descent is useful because we can slowly update our parameters as we get more information. That information could be in the form of episodes, or it [NOISE] could be individual tuples -- when I say a tuple, I generally mean a state, action, reward, next state tuple. And the nice thing is that the expected stochastic gradient descent update is the same as the full gradient update. So, just to remind ourselves, last time we were talking about linear value function approximation. That means we have a whole bunch of features to describe our world: we input our state -- the real state of the world -- and we output our features. This could be things like a laser range finder for a robot, which tells us how far away the walls are across 180 degrees of directions. We talked about the fact that that was an aliased version of the world, because multiple hallways might look identical. So, our value function is now a dot product between those features we've extracted about the world and the weights. Our objective function is again the mean squared error, and then we can do the same weight update. And the key hard thing is that we don't know what this is -- the true value of the policy. The problem is that we don't know the true value of a policy; otherwise, we wouldn't have to be doing all of this learning. And so we need different ways to approximate it.
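Here is a minimal numpy sketch of that stochastic gradient descent update for a linear value function V_hat(s) = w . x(s), with the unknown true value replaced by either a Monte Carlo return or a bootstrapped TD target (both recapped next); the feature map features(s), the step size alpha, and the discount gamma are illustrative assumptions, not anything fixed by the lecture.

    import numpy as np

    def mc_update(w, features, s, G, alpha):
        # Monte Carlo target: move V_hat(s) = w . x(s) toward the observed return G.
        # For a linear approximator, the gradient of the squared error is (target - w . x) * x.
        x = features(s)
        return w + alpha * (G - w.dot(x)) * x

    def td_update(w, features, s, r, s_next, alpha, gamma=0.99, terminal=False):
        # TD(0) target: bootstrap with the current estimate of the next state's value.
        x, x_next = features(s), features(s_next)
        target = r if terminal else r + gamma * w.dot(x_next)
        return w + alpha * (target - w.dot(x)) * x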
And so, the two ways we talked about last time was inspired by a work on Monte Carlo, or on TD learning is we could either plug-in the return from the full episode. This is the sum of her words. Or we could put in a bootstrapped return. So, now we're doing bootstrapping. Where we look at the reward, the next state, and the value of our next state. And in this case we're using a linear value function approximators for everything, which gave us a really simple form of what the derivative is, of this function with respect to W. Basically it's just our features times essentially this prediction error. So, people sometimes call this is the prediction error. Cause it's the difference between the value, or right now we're using GT as the true value. Of course in reality it's just a sample of the value but, well, it's the difference between, um, the true value and our estimated value. I'm gonna shrink that difference. So, in this case I've written, um, these equations of all for linear value function approximation, but there are some limitations to use the linear value function approximation, even though this has been probably the most well-studied. So, if you have the right set of features, and historically there was a lot of work on figuring out what those rights set of features are. They often worked really well. And in fact when we get into, I think I mentioned briefly before. When we start to talk about deep neural networks you can think of, a deep neural network is just a really complicated way to get out features, plus the last layer being a linear combination of those features. For most of the time when we're talking about deep RL with, um, a deep neural networks represent the Q function. That's the type of representation will be looking at. So, linear value function is often really works very well if you're the right set of features, but is this challenge of what is the right set of features. Um, and there are all sorts of implications about whether or not we're even gonna be able to write down the true p- um, value function using our set of features, and how easy is it for us to converge to that. So, one alternative that we didn't talk so much about last time is to use sort of a really, really rich function approximator class. Um, where we don't have to, have to have a direct representation of the features. Er, and some of those are Kernel based approaches. Um, has anybody seen like, ah, Kernel based approaches before? Or like k-nearest neighbor type approaches? If you take a machine learning you've heard of k-nearest neighbors, those are sort of these non-parametric approaches, where your representation size tends to grow with the number of data points. Um, and then they can be really nice and they have some actually really nice convergence properties for reinforcement learning. The problem is, um, that the number of data points you need tends to scale with the dimension. So, um, if you have let's say those 180, um, features, um, the number of points you need to tile that 180 degrees space, generally scales exponentially with the dimension. So, that's not so appealing both in terms of computational complexity, memory requirements, and sample complexity. So, these actually have a lot stronger convergence results compared to linear value function approximators. Um, but they haven't far been used for in a very widespread way. Yeah. Um, and everyone just [inaudible] name first please to stop me. Yes. Yeah. [LAUGHTER] So, can you repeat again why the exponential behavior happening? 
Yeah, student's question was why does the exponential behavior happen and a lot of these sort of kernel based approximators or non-parametric. The intuition is that if you want to have sort of an accurate representation of your value function, um, and you're representing it by say, uh, local points around it. For example, like, with the k-nearest neighbor approach. then the number of points you need to have everything be close like in an epsilon ball scales with the- the dimensionality. So, basically you're just gridding the space. So, if you think of -- if you think you have sort of- if you want to have any point on this line, be close, then you could put a point here and a point here in order to have everything be sort of epsilon close for all points on that line to have, uh, a neighbor that's within epsilon distance. If you want to have it in a square, you're gonna need four points so that everything can be somewhat close to one of the points. Generally, the number of points you need this going to scale exponentially with the dimension. [NOISE] But they are really nice, um, uh, because they can be guaranteed to be averagers which we talked about really briefly last time that views a linear value function approximator. Um, when you do a bellman backup, it's not necessarily a contraction operator anymore which is why you can sometimes blow up as you do more and more backups. A really cool thing about averagers is sort of by their name. Um, when you use this type of approximation, you don't- they're guaranteed to be- to be a non-expansion, which means that when you combine them with a bellman backup it's guaranteed it'd still be a contraction which is really cool. So, that means these sort of approximators are guaranteed to converge compared to a lot of other ones. All right, but they're not gonna scale very well and in practice you don't tend to see them, though there's some really cool work by my colleague, Finale Doshi-Velez, over at Harvard who's thinking about using these for things like, um, health care applications and how do you sort of generalized from related patients. So, they can be useful but they generally don't scale so well. So, what we're gonna talk about today is thinking about deep neural networks which also have very flexible representations but we hope we're gonna scale a lot better. Um, now, in general we're going to have almost no theoretical guarantees for the rest of the day, um, and- but in practice they often work really really well. So, they become an incredibly useful tool in reinforcement learning and everywhere else really in terms of machine learning. So, what do we mean by deep neural networks? Well, a number of you guys are experts but, um, what it generally means in this case is we're just gonna think of com- making a function approximator which is a composition of a number of functions. So, we're gonna have our input x and I'm gonna feed it into some function which is gonna take in some weights. So, in general, all of these things can be vectors. So, you're gonna take in some weights and combine them with your x and then you're going to push them into some function and then you're gonna output something which is probably gonna be also a vector. Then you're gonna push that into another function, and throw in some more weights. I'm gonna do that a whole bunch of times, and then at the very end of that you can output some y which you could think of as being like our Q. Then, we can output that to some loss function j. So, what does that mean here? 
It means that y is equal to h_n of h_n minus 1 of ... of h_1 of x. I haven't written all the weights that are going in there, but there's a whole bunch of weights too. And then this is a loss function like before, and this y you can think of as being like our Q. These show up a lot in supervised learning, like predicting whether or not an image is of a cat or of some particular object, or for regression. So, why do we want to do this? Well, first of all, it should be clear that as you compose lots of functions -- adding and subtracting and taking polynomials and all sorts of other things you can do by composing functions together -- this can be a really powerful space of functions you can represent. But the nice reason to write it down like this is that you can use the chain rule to do stochastic gradient descent. So how does this work? Well, we really want dJ with respect to all these different parameters. So for the last layer, we can write dJ/dw_n as dJ/dh_n times dh_n/dw_n, and we can do this kind of everywhere: for an earlier layer, dJ/dw_2 is dJ/dh_n times dh_n/dh_{n-1}, and so on, down to dh_3/dh_2 times dh_2/dw_2. So you can use the chain rule to propagate the gradient of your loss function with respect to the weights back down through all of these different compositions. That's nice, because it means you can take your output signal and then propagate it back in order to update all of your weights. Now, I'm going to date myself. When I first learned about deep neural networks, you had to do this by hand. And, uh, as you might imagine, this was a less popular assignment; it's called backpropagation. So you can derive this by hand -- and I'll talk in a second about what these functions are; you need differentiable functions for h. But I think one of the major innovations that's happened over roughly the last 5 to 8 years is auto-differentiation. So now you don't have to derive all of these gradients by hand; instead, you can just write down your network, which includes a bunch of parameters, and then you have software like TensorFlow do all of the differentiation for you. So I think these sorts of tools have made it much, much more practical for lots and lots of people to use deep neural networks, because you can have very, very complicated networks with a very large number of layers, and there's no hand-derivation of what the gradients are. So what are these h functions? Generally, they combine both linear and nonlinear transformations. Basically, they just have to be differentiable -- these h's need to be differentiable if we're going to use gradient descent to fit them. So the common choices are either linear, where you can think of h_n as equal to w times h_{n-1}, or nonlinear, where h_n is equal to some function of h_{n-1}. If it's nonlinear, we often call this an activation function. Due to time, I'm not going to talk much in class about the connections with the neural networks inside our brain, which are what inspired these sorts of artificial neural networks.
Um, but inside of the brain, people think of there is being the sort of non-linear activation functions where if the signal passes a certain threshold then, for example, the neuron would fire. So, these sort of non-linear activation functions can be things like sigmoid functions or ReLU. ReLU's particularly popular right but- um. So- so, you can choose different combinations, uh, of linear functions or non-linear functions, um, and as usual we need a loss function at the end. Typically, we use mean squared error. You could also use log likelihood but we need something that- that we can differentiate how close we are achieving that target in order to update our weights. [NOISE] Yeah? Name first. So, this ReLU function is not differentiable, right? It is differentiable, like, you can- you- you can- you can take it to- the- the- differentiable and it's ended up being a lot more popular than sigmoid recently, though I feel like it [OVERLAPPING]. It's not differentiable at one point? Yes. But I don't see how gradient [inaudible] is gonna work on the part where it's flat. Well, if it's flat, it's zero. So, that ends up just- your gradient is just zero. [OVERLAPPING] Yeah. The question is about how for- for ReLU, there's a lot of it where it's flat. Um, and so if your gradient is zero then your gradients can vanish there. Um, in- in- in general actually, we're not gonna talk about this at all in class but, uh, um, there's certainly a problem is you start having very deep neural networks. Um, but because of some of these functions you can sometimes end up sort of having, um, almost no signal going back to the- the earlier layers. But I- I'm not gonna talk about any of that. We'll talk- we'll talk some about that in sessions. Um, they're good to be aware of, um, and we're also happy to give other pointers. But yeah, if it's flat, it's okay, you can still just have, uh, a zero derivative. Okay. All right. So, why do we want to do this? Well, it's nice if we can use this sort of like much more complicated representation. Um, another thing is that, um, if you have at least one hidden layer, um, if you have a sufficient number of nodes. Um, nodes you can think of as a- if you're not familiar with this is basically just sort of a sufficiently complicated, uh, set of, uh, combination of features, um, and functions. Um, this is a universal function approximators which means that you can represent any function with the deep neural network. So, that's really nice. We're not gonna have any capacity problems if we use a sufficiently expressive function approximators. Um, and that's important because if you think about what we're doing with linear value function approximators, it was clearly the case sometimes that you might have too limited features and you just wouldn't be able to express the true value function for some states. What the universal function approximator, um, property is stating is that that will not occur for, um, uh, deep neural network if it is, uh, sufficiently rich. All right. Now, of course, you can always think of doing a linear value function approximator with very very rich features and then that becomes equivalent. So, given that, you know, what's another benefit, um, another benefit is that potentially you can use exponentially less nodes or parameters compared to using a shallow net which means not as many of those compositions, um, to represent the same function and that's pretty elegant and, uh, I'm happy to talk about that offline or- or we can talk about on Piazza. 
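To make the earlier chain-rule discussion concrete, here is a tiny numpy sketch of one forward pass, the hand-derived backward pass, and a single gradient step for a two-layer network with a ReLU activation and a squared-error loss -- essentially what auto-differentiation in TensorFlow would compute for you. The sizes, initialization, and learning rate are made up purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4,))               # input vector
    y = 1.0                                 # regression target (think: a Q value)
    W1 = 0.1 * rng.normal(size=(3, 4))      # weights of the first (linear) layer
    W2 = 0.1 * rng.normal(size=(1, 3))      # weights of the second (linear) layer

    # Forward pass: y_hat = W2 @ relu(W1 @ x), loss J = (y_hat - y)^2
    z1 = W1 @ x
    h1 = np.maximum(z1, 0.0)                # ReLU activation
    y_hat = (W2 @ h1)[0]
    J = (y_hat - y) ** 2

    # Backward pass: the chain rule written out by hand (backpropagation)
    dJ_dyhat = 2.0 * (y_hat - y)
    dJ_dW2 = dJ_dyhat * h1[None, :]         # dJ/dW2 = dJ/dy_hat * dy_hat/dW2
    dJ_dh1 = dJ_dyhat * W2[0]               # propagate back through the second layer
    dJ_dz1 = dJ_dh1 * (z1 > 0)              # ReLU's (sub)gradient: 1 where z1 > 0, else 0
    dJ_dW1 = np.outer(dJ_dz1, x)            # dJ/dW1 = dJ/dz1 * dz1/dW1

    # One stochastic gradient descent step on both layers' weights
    lr = 0.01
    W2 -= lr * dJ_dW2
    W1 -= lr * dJ_dW1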
Then the final thing is that you can learn the parameters using stochastic gradient descent. All right. So, that's pretty much it -- deep neural networks in, like, five seconds. We're now going to talk a little bit about convolutional neural networks, and again, this is going to be a pretty light introduction, because you're not going to need to know the details in order to do the homework, beyond understanding that these are very expressive function approximators. So, why do we care about convolutional neural networks? Well, they're used very extensively in computer vision, and if we're interested in having robots and other sorts of agents that can interact in the real world, one of our primary sensory modalities is vision, and it's very likely that we're going to want to be able to use similar sorts of input for our artificial agents. So, think about there being an image, in this case of Einstein, and there's a whole bunch of different pixels in this picture of Einstein. Let's say it's 1,000 by 1,000 -- 1,000 by 1,000 in x and y -- so we have 10 to the 6 pixels. In the standard, often called feedforward, deep neural network, all of those pixels would go as input to another layer, and you might want to have a bunch of different nodes that are each taking input from all of those, so you can get a huge number of weights -- on the order of 10 to the 6 weights per unit in that first layer. Often we think of the deep neural network as having many functions in parallel. So it's not just a single line, but we might have x going into h1, h2, h3, h4, and then all of those going in some complicated way into some other functions. So you can have lots of functions being computed in parallel. You can imagine your image goes in, you've got one function that computes some aspect of the image and another function that computes some other aspect of the image, and then you're going to combine those in all sorts of complicated ways. So what this is saying is, for that very first layer, if there are maybe n different functions we're computing of the image, there would be 10 to the 6 parameters each: if we have these weights times x, that's 10 to the 6 parameters to take in all of that x. That's a lot. And if we want to do this with many different sets of weights all in parallel, then that's going to be a very, very large number of parameters. We do have a lot of data now, but that's still an enormous number of parameters, and it also misses some of the point of what we often think about with vision. So if we think about doing this many times and having lots of hidden units, we can get a really enormous number of parameters. To avoid this space-time complexity, and the fact that we're ignoring the structure of images, convolutional neural networks use a particular form of deep neural network that tries to exploit the properties of images. In particular, images often have structure -- in the same way that our brain's processing of images also has structure -- with distinctive features in space and frequency.
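To put rough numbers on that parameter blow-up (a back-of-the-envelope count for illustration, not figures quoted in the lecture): a 1,000 x 1,000 image has 10^6 pixel inputs, so a single fully connected unit that looks at the whole image already needs about 10^6 weights, and a first layer with even 1,000 such units needs 10^6 x 10^3 = 10^9 weights. A 5 x 5 convolutional filter that is shared across every location of the image, by contrast, has just 25 weights no matter how large the image is.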
So, when you have a convolutional neural network, we think of there being particular types of operators. Having so operators again here are like our functions, h1 and hn, which I said before could either be linear or nonlinear and then convolutional neural network learn a particular structures for those, um, uh, for those functions to try to sort of think about the properties that we might want to be extracting from images and kind of the key aspects here is that we're gonna do a lot of weight sharing to do parameter reduction. So, instead of saying, "I'm going to have totally different parameters each taking in all of the pixels." I'm gonna end up having sort of local parameters that are identical and then I apply them to different parts of the image to try to extract, for example, features. Because ultimately, the point of doing this is gonna be trying to extracting features that we think are gonna be useful for either predicting things like whether or not, you know, a face isn't an image or that are gonna help us in terms of understanding what the Q function should be. So, the key idea- one of the key ideas is to say that we're gonna have a filter or a receptive field which is that we're gonna have some hidden unit. Um, so it's gonna be a function that's applied to some previous input. At the beginning, that's just gonna be a subset of our image and instead of, um, taking in the whole image, we're just going to take in part. So, we're just gonna take in a patch. So, we're gonna take the upper corner and we're gonna take the middle. So, it's like we're just gonna try to compute some properties of a particular patch of the image. So, then we can imagine taking, it's often called a filter, that little, um, those set of weights that we're applying to that patch and we could do that all over the image, um, and we often called the- there's a stride which means sort of how much you move, um, that little patch at each time point. There's also this thing called zero-padding which is how many zeros to add on each input layer and this determines sort of help, um, helps determine what your output is. So in this case, if you have an input of 28 by 28 and you have a little five-by-five patch that you're going to slide over the entire image, then you're gonna end up with a 24 by 24 layer next because basically you just take this and then you move it over a little bit. You move it over, and each of those times you're gonna take those 25. So, this is five-by-five so you're gonna have 25 input x's and you're gonna dot-product them with some weights and that's gonna give you an output. So, here in this case that means we're gonna need 25 weights. Okay. So, one thing is instead of having our full x input, we're just gonna take in- we're gonna direct different parts of the x input to different neurons which you can think of just different functions. Um, but the other nice idea here is that we're going to have the same weights for everything. So, when we took those weights we're going to have sort of, um, you can think of them as trying to extract a feature from that sub patch of the image. For example, whether or not there's an edge. So, you can imagine I'm trying to detect whether or not there's something that looks like a horizontal edge in that part of the image and I try to- and that is determined by the weights I'm specifying and I just move that over my entire image to see whether or not it's present. So, now the weights are identical, and you're just moving them over the entire image. 
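Here is a minimal numpy sketch of that idea: one 5 x 5 filter (25 shared weights) slid over a 28 x 28 input with stride 1 and no zero-padding, giving the 24 x 24 output mentioned above; the random image and filter values are just placeholders.

    import numpy as np

    image = np.random.rand(28, 28)          # a toy 28 x 28 "image"
    filt = np.random.rand(5, 5)             # one filter: only 25 shared weights

    # Output size in general: (width - filter_size + 2 * padding) // stride + 1
    out = np.zeros((28 - 5 + 1, 28 - 5 + 1))    # -> 24 x 24
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + 5, j:j + 5]      # the local receptive field
            out[i, j] = np.sum(patch * filt)     # the same 25 weights reused at every location

    print(out.shape)   # (24, 24)
    print(filt.size)   # 25 parameters, versus 28 * 28 = 784 if one unit were fully connected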
So, instead of having, um, you know, 10 to the 6 weights, I might only having 25 weights and I'm applying those to the same- uh, just applying them to lots of different parts of the image. Okay. So, this is sort of what that would look like. You sort of have this input, you go to the hi- um, the hidden layer and, yeah, you're sort of do- also down-sampling the image. Um. Why would you want to do this? Well, we think that often, the brain is doing this. It's trying to pick up different sort of features. In fact, a lot of computer vision before deep learning was, um, trying to construct these special sorts of features, things like sift features or other features that they really think captures sort of important properties of the image, but they're also may be invariant to things like translation. Because we also think that, you know, whether I'm looking at, um, the world like this, or I move my head slightly, um, that the features that I see are often gonna be identical, whether I moved to the left or right a little bit. There are particular salient aspects of the world that are gonna be relevant for detecting whether or not there's a face, and relevant for deciding my value function. So, we want to sort of extract features that we think are gonna represent this sort of translation in variance. This means also that rather than just computing, uh, you- you can do this. You'll use the same weights all the way across the feature, er, all across the image and then you can do this for multiple different types of features. So there's a really nice, um, discussion of this that goes into more depth from 231-n, which some of you guys might've taken. Um, and there's a nice animation where they show, okay, imagine you have your input, you can think of this as an image, and then you could apply these different filters on top of it, which you can think of as trying to detect different features, and then you move them around your image, and see whether or not that feature is present anywhere. So you can do that with multiple different fype- types of filters. You could think of this as trying to look for whether something's like that or something's horizontal or vertical, different types of edges, um, and, uh, these give you different features essentially that are been extracted. Um, the other really important thing in CNNs, is what are known as pooling layers. They are often used as a way to sort of down-sample the image. So you can do things like max pooling to detect whether or not a particular feature is present, um, or take averages or other ways to kind of just down, ah, and compress the, the information that you got it in. So, just remember in this case and in many cases, we're gonna start with a really high dimensional input, like x might be an image and output, a scalar, like, um, you know, the Q value. So we're somehow gonna have to go for really high dimensional input and kind of average and slow down until we can get to, um, a low dimensional output. So, the final layer is typically fully connected. So we can again think about all of these previous processes as essentially computing some new feature representation, so essentially from here to here. We're kind of computing this new feature representation of the image, and at the very end, we can take some fully connected layer, where it's like doing linear regression, and use that to output predictions or scalars. Again, I know either for some of you, guys, this is sort of a quick shallow refresher. 
For others of you, this is clearly not in, ah, this is, ah, would be a whirlwind introduction, um, but we won't be requiring you to know a lot of these details. And again, just go to a session, if you have some questions and feel free to reach out to us. Okay. So these type of representations, both Deep Neural Networks and Convolutional Neural Networks, are both used extensively in deep reinforcement learning. So it was around in 2014, um, where I- the workshop where David Silver started talking about, um, how we could use these type of approximations for Atari. So why was the surprising? I just sort of wandering back. So in around 1994, personally in 1994, um, we had TD backgammon which used Deep Neural Networks. Well, they used neural networks. I think there was someone that deep, and I think out like a world-class backgammon player out of that. So, that was pretty early on. And then we had the results that were kind of happening around like 1995 to maybe like 1998, which said that, "Function approximation plus offline off policy control, plus bootstrapping can be bad, can fail to converge." So, we talked about this a little last time. That in general, as soon as we start doing this function approximation even with the linear function approximator, um, that when you're combining off policy control, bootstrapping, which means we're doing like TD learning or Q learning, um, and, uh, in functional approximator, then you can start to have this, uh, challenging triad, um, which often means that we're not guaranteed to converge. And even if we're guaranteed to converge, the solution may not be a good one. So sort of there was this early encouraging success and then there were these results in sort of the middle of the nineties that we're trying to better understand this, that indicated that things could be very bad, and the risk was some of the- In addition to the theoretical results, there were sort of these simple test cases, that, you know, these simple cases that went wrong. So, it wasn't just sort of in principle this could happen, uh, but there were cases which failed. And so I think for a long time after that, the, the community was sort of backed away from Deep Neural Networks for a while. People were quite cautious about using them because they were clearly, even simple cases where things started to go really badly with function approximation. And theoretically, people could prove that it could go badly, and so there's less attention to it for quite a while. And then, um, there was the rise of Deep Neural Networks in sort of, you know, the- the mid 2000s, like, to now. So, uh, Deep Neural Networks became huge, and there was called a huge success in them for things like vision, in other areas, there was a whole bunch of data, there's a whole bunch of compute. They we're getting really extraordinary results. And so, then, perhaps it was natural that, like, around in like 2014, DeepMind, DeepMind combined them and had some really amazing successes with Atari. And so I think it sort of really changed the story of how people are perceiving using, um, this sort of complicated function approximation, meters, and RL, and that yes, it can fail to converge. Yes, things can go really badly, and they do go really badly sometimes in practice, but it is also possible of them that despite that- you know, the fact that we don't always fully understand why they always work, um, that often in practice, we can still get pretty good policies out. Now, we often don't know if they're optimal. 
Often, we know they're not optimal because we know that people can play better, but that doesn't mean that they might not be pretty good, and so we sort of saw this resurgence of interest in- into deep reinforcement learning. Yeah. [NOISE] Um, I guess is there anything from your perspective that the, the deep learning has solved the problems that they had come up with in the mid '90s? Or, is it just that kind of through increases in computational power and the ability to gather a lot of data, that when it failed, it kinda doesn't matter, and we can try some different, like we- you know, try it again and kinda put it together and just keep trying until it works? I guess my question is, did we actually overcome any of the problems that arose in the late '90s, or is it just that we're just kinda powered through? The question is, you know, how we sort of fundamentally resolve some of the issues of the late my '90s, or, um, we kind of brute forcing it. Um, I think that some of the issues that were coming up in 1995 to 1998 in terms of convergence, there are some algorithms now that are more true stochastic gradient algorithms that are covered in chapter 11, um, so that I- a- a- are guaranteed to converge. They may not be guaranteed to converge to the optimal policy, um, so there's still lot of- there's still a ton of work, I think to be done to understand function approximator and off policy control and bootstrapping. I think there's also a couple algorithmic ideas that we're gonna see later in this lecture, that help the performance kind of avoid some of those convergence problems. So I think people knew about this when they started going into 2013, 2014. And so they tried to think about, "Well, when might this issue happen, and how could we avoid some of that stuff?" Like, what's causing that? And so at least algorithmically, can we try to make things, that people often talk about stability, so can we try to make sure that the Deep Neural Network doesn't seem to start having weights that are going off towards infinity and at least empirically have sort of more stable performance. Yes, [inaudible] for me. So with the Atari case specifically, did you- did you avoid that problem? Well, sort of, that if you tried it by having on policy control? I just don't know. [inaudible] there wasn't the case that in fact, that in your Deep Neural experiment, they updated the performance to match the Apple policies [inaudible]. The question is whether or not in this sort of, um, ah, if I understood correctly, in the Atari case like, you know, where they changed things to be more on policy or, or which we know can be much more stable. Um, ah, they are doing deep learning in, in this case, Deep-Q Learning. And so it can still be very unstable, but they're gonna do something about how they do with the frequency of updates to the networks, to try to make it more stable. Um, and it's a great question for me. We'll see how it works here. Anyone else? Okay, cool. So, um, we'll- we'll see an example for breakout shortly, um, of what they did. Um, so again right now, we're gonna be talking about using Deep Neural Networks to represent the value function. Um, we'll talk about using Deep Neural Networks to represent the policy pretty shortly, next week. So what are we going to do? We're gonna, again, have our weights. We're gonna have our same approximators. Now, we're gonna be using Deep Neural Networks. 
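As a hedged illustration of what "using a deep neural network to represent the Q function" can look like in code, here is a small convolutional Q-network sketch in PyTorch: convolution and pooling layers extract features from a stack of frames, and a final fully connected layer outputs one Q value per discrete action. Every size in it (four stacked 84x84 frames, filter counts, hidden width) is an illustrative assumption, not the exact architecture from the paper discussed here.

import torch
import torch.nn as nn

class ConvQNetwork(nn.Module):
    def __init__(self, num_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 84 -> 80 -> 40
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2))  # 40 -> 36 -> 18
        self.head = nn.Sequential(
            nn.Linear(16 * 18 * 18, 128), nn.ReLU(),
            nn.Linear(128, num_actions))       # one output per action, e.g. 4 to 18 for Atari

    def forward(self, x):                      # x: (batch, 4, 84, 84) stacked frames
        h = self.features(x).flatten(start_dim=1)
        return self.head(h)                    # vector of Q(s, a; w) over all actions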
And in this case, again, just, uh, to be clear we're gonna be using a Q function, um, because we're gonna wanna be able to be doing control. So, we're gonna be doing control, in this case. Um, and so, we're gonna need to be learning the, the values of the actions. Just to be clear here, an Atari, it generally, doesn't have a really high dimensional action space. It's discrete. It's normally somewhere between like four to 18, depends on the game. Um, so, it's fairly low dimensional, fairly, um, uh, it's discreet and fairly small. So, though the state space is enormous because it's pixels it's images, um, uh, the, the action space is pretty small. Okay. So, just as a reminder, for Q learning, what we saw is Q learning looks like this for our weights. We have to have the derivative of our function. This is not necessarily gonna be linear anymore, um, but the way we updated our weights, was we did this, um, TD backup, where we have this target. Um, but we're now gonna be taking a max over a, over our next state and our action and our weights. Now, notice in this equation, all the W's you see are identical on the right-hand side. So, we're using the same weights to represent our current value and we're using our same weights to plug in and get an estimate of our future value as well as in our derivative. And whether we're gonna see is, is uh, an alternative to that. Okay. So, their idea was, we'd really like to be able to use this type of function approximator. These deep function approximators to do Atari. They picked Atari in part. Well, I think at least Dennis and I think, Dennis and David, both had sort of a joint startup on, um, video games, a long time ago. I think it was before David went back to grad school if I remember correctly. Um, so, they're both interest in this it's clearly, uh, games are often hard for people to learn. I'm so, it's a nice sort of, uh, demonstration of intellect and they thought well, we can get access to this and, and there was a paper published. I'm forgetting when, maybe 2011, 2013, talking about Atari games and emulators as being sort of interesting challenge for RL. So, what happens in this case, well, the state is just gonna be the full image. The action is gonna be the equivalent of actions of what you could normally do in the game. This is normally somewhere between four to 18, approximately four to 18 actions. Um, and the reward can be, well, really whatever you want but you can use the score or some other aspect to, um, uh, as proxy reward. Generally we're gonna think about score. So, what's gonna happen? Well, they're gonna use a particular input state. We've talked before about whether or not, um, a representation is Markov. In these games, you typically need to have velocity. So, because you need velocity, you need more than just the current image. So what they chose to do is, you'd need to use four previous frames. So this at least allows you to catch for a velocity and position, observe the balls and things like that. It's not always sufficient. Can anybody think of an example where maybe an Atari game, I don't know how many people played Atari. Um, uh, that might not be sufficient for the last four images still might not be sufficient or the type of game where it might not be sufficient. Yeah. [inaudible]. Microbes exactly right. 
So, things like Montezuma's Revenge, where you often have to get a key, and then you have to grab that key, and then, uh, maybe it's visible on screen, maybe it's not, um, and, er, maybe it's stored in inventory somewhere. So you have to sort of remember that you have it in order to make the right decision much later, or there might be some information you've seen early on. So there are a lot of games and a lot of tasks where even the last four frames will not give you the information you need. But it's a reasonable approximation, and it's much easier than representing the entire history. So they started with that. Um, er, so, here in this case, there are 18 joystick/button positions, um, and you may or may not need to use all of them in a particular game. And the reward can be the change in score. Now notice that that can be very helpful or may not be, it depends on the game. So in some games it takes you a really, really long time to get anywhere where your score can possibly change. Um, so in that case, you might have a really sparse reward. In other cases, you're gonna get a reward a lot, and so it's gonna be much easier to learn what to do. One of the important things that they did in their paper, this is a Nature paper from 2015, is they use the same architecture and hyperparameters across all games. Now just to be clear, they're gonna then learn different Q functions and different policies for each game. But their point was that they didn't have to use totally different architectures, or do totally different hyperparameter tuning, for every single game separately in order to get it to work. It really was the case that a general architecture and setup was sufficient for them to be able to learn to make good decisions for all of the games. I think another nice contribution of this paper is to say, well, we're going to try to get a general algorithm and setup that's gonna go much beyond the sort of normal three examples that we see in reinforcement learning papers, and just try to do well in all 50 games. Again, each agent is gonna learn from scratch in each of the 50 games, um, but it's gonna do so with the same basic parameters, same hyperparameters, and same neural network, so the same function approximator class. And the nice thing is that, I think this is actually required by Nature, they released the source code as well. So you can play around with this. So how did they do it? Well, they're gonna do value function approximation. So they're representing the Q function. They're going to minimize the mean squared loss by stochastic gradient descent. Uh, but we know that this can diverge with value function approximators. And what are two of the problems here? Well, one is that there is this correlation between samples, which means that if you have s, a, r, s prime, a prime, r prime, s double prime, and you think about what the return is for s and the return for s prime, are they independent? No, right. In fact, you expect them to be highly correlated; it depends on the probability of s prime, and if this is a deterministic system, the only difference between them will be r. So these are highly correlated, these are not IID samples when we're doing updates, there's a lot of correlation. Um, and there's also this issue of non-stationary targets. What does that mean?
It means that when you're trying to do your supervised learning and train your value function predictor, um, it's not like you always have the same v pi oracle that's telling you what the true value is. That's changing over time because you are doing Q-learning to try to estimate what that is and your policies changing and so it's huge amounts of non-stationarity. So you don't have a stationary target when you're even just trying to fit your function because it could be constantly changing at each step. Um, so you change your po-, you change your weights, then you change your policy and then now you're gonna change your weights again. And so, perhaps it's not surprising that things might be very hard in terms of convergence. So the way that sort of, uh, DQN, deep Q-learning addresses these is by experienced replay and fixed Q-targets. Experienced replay, prime number if you guys have heard about this, if you learned about DQN before is we're just gonna stroll data. We've talked about a little bit before of how like TD learning in their standard approach, just uses a data point. Now what I mean by a data point here is really one of these sar, S prime tuples. In the simplest way of TD Learning or Q-learning, you use that once and you throw it away. That's great for data storage, um, it's not so good for performance. So the idea is that we're just gonna store this. Uh, we're gonna keep around some finite buffer of prior experience and we're gonna re-basically redo Q-learning updates. Just remember a Q-learning update here would be looking like this. We would update our weights, that's considered one update to take a tuple and update the weight. It's like one stochastic gradient descent update. And so you can just sample from your experience, um, ah, replay, your replay buffer and compute the target value given your current Q function and then you do stochastic gradient descent. Now notice here because your Q function will be changing over time. Each time you do your update of the same tuple, you might have a different target value because your Q function has changed for that point. So this is nice because basically it means that you reuse your data instead of just using each data point once, you can reuse it and that can be helpful. And what we'll look at that more in a minute. So, even though we're treating the target as a scalar, the weights will get updated the next round which means our target value changes and so, um, you can sort of propagate this information and essentially the main idea, is just that we're gonna use data more than once, um, and that's often very helpful. Yes, and, um, name first. Um, my question is is this equivalent to keeping more frames in our, uh, representation or is this, uh [inaudible] It's a great question which is is this equivalent to keeping more frames in our representation? It's not. Though that's a really interesting question. Um, more frames would be like keeping, uh, a more complicated state representation. But you can still just use a state action or word next state tuple once and throw that data way. This is like saying that periodically I- like let's say I went s1 a1 r1 s2 and then I keep going on and now I'm at like s3 a3 r3 s4. So, that's really where I am in the world, I'm now in state four. 
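As a minimal sketch of the experience replay buffer just described: store (s, a, r, s', done) tuples in a finite buffer and sample them uniformly for repeated updates. The capacity and batch size here are placeholder values; DQN's buffer was on the order of a million transitions.

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=1_000_000):
        self.buffer = deque(maxlen=capacity)        # oldest tuples fall off automatically

    def add(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, batch_size)  # uniform sampling over stored tuples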
It's like I suddenly pretend, oh wait, I'm gonna pretend that I'm back in s1, took a1, got r1, and went to s2 and I'm gonna update my weights again, and the reason that that update will be different than before is because I've now updated using my second update and my third update. So, my Q function, in general, will be different than before. So, it'll cause a different weight update. So, even though it's the same data point as before, it's gonna cause a different weight update. In general, one thing we talked about a long time ago is that if you, um, uh, do TD learning to converge it, which means that you go over your data mu- like, um, an infinite amount of time. Um, at least in the tabular case, that is equivalent to if you learned an MDP model. You learned the transition dynamics in the reward model and you just did MDP planning with that. That's what TD learning converges to, is that if you repeatedly go through your data in infinite amount of time, eventually it will converge to as if you'll learn to model, a dynamics model, a word model, and then the planning for that which is pretty cool. So, this is getting us closer to that. But we don't wanna do that all the time because there's a computation trade-off and particularly here because we're in games. Um, there's a direct trade-off between computation and getting more experience. It's actually a really interesting trade-off because in these cases it's sort of like should you think more and plan more and use your old data or should you just gather more experience? Um, but we can talk more about that later. Yeah, question and name first please. And so, the experienced replay buffer has like a fixed size. Just for, like, clarification of understanding, are those samples, like, replaced by new samples after fixed amount of time? Or is there, like, a specific way to choose what samples to store in the buffer? That's a great question which is, okay, this is presumably gonna be a fixed size buffer. Um, and if it's a fixed size buffer, how do you pick what's in it? Um, is it the most recent and- and how do you get things, uh, how do you remove items from it. It's a really interesting question. Different people do different things. Normally, it's often the most recent buffer, um, can be for example the last one million samples, which gives you a highlight of how many samples we are gonna be talking about. But you can make different choices and there's interesting questions of what thing that you should kick out. Um, it also depends if your problem is really non-stationary or not, and I want to mean there's, like, the real world is non-stationary, like your customer base is changing. Yeah? Uh, I'm trying to strike the right balance between continuing experience like new data points versus re-flagging it. Can we use something similar to like exploitation versus exploration. Um, essentially like with random probability just decide to re-flag [inaudible]. The question is about how would we choose between, like, what, um, you know, getting new data and how much to replay et cetera, um, and could we do that sort of as an exploration-exploitation trade-off. I think this is generally understudied but there's lots of different heuristics people use. Often people have some of- sort of a fixed ratio of how much they're updating based on the experience replay versus getting, um, putting new samples into there. So, generally right now is really heuristic trade-off. 
Could certainly imagine trying to optimally figure this out but that also requires computation. Um, this gets us into the really interesting question of metacomputation and metacognition. Um, but if, you know, your agent thinking about how to prioritize its own computation which is a super cool problem. Which is what we solve all the time. Okay. So, um, the second thing that DQN does is it first have- it first keeps route this old data. The second thing that it does is it has fixed Q targets. So, what does that mean? Um, so to improve stability, and what we mean by stability here is that we don't want our weights to explode and go to infinity which we saw could happen in linear value function. Um, we're gonna fix the target weights that are used in the target calculation for multiple updates. So, remember here what I mean by the target calculation here is that reward plus Gamma V of S prime. So, this itself is a function of w and we're gonna fix the w we use in that value of S prime for several rounds. So, instead of always update- taking whatever the most recent one is, we're just gonna fix it for awhile and that's basically like making this more stable. Because this, in general, is an approximation of the oracle of V star. So, you'd really like an oracle to just give this to you every time you reach, you know, an S prime or you take an action [inaudible] and go to S prime, you'd like an oracle to give you what the true value is. You don't have that, um, and it could change on every single step because you could be updating the weights. What this is saying is, don't do that, keep the weights fixed that used to compute VS prime for a little while, maybe for 10 steps, maybe for a 100 steps, um, and that just makes the target, the sort of the thing that you're trying to minimize your loss with respect to, more stable. So, we're gonna have, um, we still have our single network but we're just gonna maintain two different sets of weights for that network. Um, one is gonna be this weight minus. I'll call it minus because, um, well there might be other conventions but in particular it's the older set of weights, the ones we're not updating right now. Those are the ones that we're using them as target calculation. So, those are the ones we're gonna use when we want to figure out the value of S prime and then we have some other W which is what we're using to update. So, when we compute our target value we, again, can sample and experience tuple from the dataset from our experience replay buffer, compute the target value using our w minus, and then we use stochastic gradient descent to update the network weights. So, this is used with minus this is used with the current one. Yeah? So, uh, I guess two questions like intuitively, why does this is help and, like, why does it make it more stable and, like, secondly, like, are there any other benefits on the stability from doing this? These are two questions, one is, intuitively, why does this help? Um, which is a great question and second of all, beyond the stability, is there any other benefits? So, intuitively, why does this help, um, in terms of stability? In terms of stability, it helps because you're basically reducing the noise in your target. If you think back to Monte Carlo, um, there instead of using this target like this bootstrap target where we're using GT. So, in Monte Carlo, we used GT and I told you the nice thing about that was that it was an unbiased estimator of V pie. 
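Returning to the mechanics above: a rough sketch of fixed Q-targets, assuming a linear Q(s, a; w) = w . x(s, a) for simplicity. Here w_minus is the older, frozen copy of the weights used only inside the target, while w is what actually gets updated; in plain Q-learning the same w would appear in both places. All names are illustrative.

import numpy as np

def update_with_fixed_target(w, w_minus, x, r, x_next_per_action, alpha, gamma, done):
    if done:
        target = r
    else:
        # target value computed with the frozen weights w_minus
        target = r + gamma * max(np.dot(w_minus, xn) for xn in x_next_per_action)
    return w + alpha * (target - np.dot(w, x)) * x   # SGD step on the squared error, using w

# periodically (every C updates, a hyperparameter) refresh the frozen copy:
# w_minus = w.copy()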
But the downside was that it was high variance because you're summing up the rewards till the end of the episode. Um, and so if things are high-variance when you're trying to regress on them, it's gonna be more noisy, um, and you could [inaudible] gradients. Imagine that we do something- take the extreme of this, if we want to be, um, for stability, you could always make your target equal to a constant. You could always make it equal to zero for example, and if you kept your target fixed forever, you would learn the weights that- that minimize the error to a constant function and that would then be stable because you always have the same target value that you're always trying to predict and eventually you'd learn that you should just set your w equal to zero and- and that would be fine. So, this is just reducing the noise and the target that we're trying to sort of, um, if you think of this as a supervised learning problem, we have an input x and output y. The challenge in RL is that our y is changing, if you make it that you're- so your y is not changing, it's much easier to fit. Um, unless I convince whether or not there's, uh, any benefit beyond stability. I think mostly not, um, I- the- this is also sort of reducing how quickly you propagate information because you're using an- a stale set of weights to represent the value of a state. So, you might misestimate the value of a state because you haven't updated it with holding permit- with new information. Yeah? Uh, assuming we want to do [inaudible] approximator. Is there something that's specific to the deep neural networks?. That's a great questions which is, is this specific to deep neural networks or can we use this with linear value function approximate or any value par-, you can use those to any value function approximators. Yeah, this is not specific. This is really just about stability and that's- that's true for the experience replay too. Experience replay is just kinda propagate information more- more effectively and, um, this is just gonna make it more stable. Uh, so these aren't sort of unique using deep neural network. I think they were just more worried about the stability with these really complicated function approximators. Yeah, in the red. Do you every update the- Minus at all, or is that [inaudible]. Great question. So, the- Di- Dell? Dian. Dian. Dian's question is whether or not, um, we ever update w minus, yes we do. We pu- can periodically update w minus as well. So, in a fixed schedule, say every 50 or, you know, every n episodes or every n steps, you, um, update sort of like every- and you would set w minus dw. Yeah? I was thinking, like, given that we know that this is work for gradient descent and you're not using the same kind of structure as gradient descent, you're using to create, you know, different option subtracting the [inaudible] value of- of the function. How is this supposed to, like, not grade- grade- gradient descent, like, all those assumptions from [inaudible] Your question is, okay this- does this really work in terms of gradient descent? This is not- I mean, the- it's a great question, and these sort of Q learning are not true gradient descent methods. They're- they're are approximations to such, they often do shockingly well given that. Some of the more recent ones which, um, uh, Chapter 11 has a nice discussion of this, sort of the GTD's or gradient temporal difference learning are more true gradient descent algorithms. 
These are really just approximations and it's, uh, this, um, uh, as to the point, this has no guarantees of convergence still. This is hopefully gonna help but we have no guarantees. Yeah? Uh, George, uh, so in practice, do people have some cyclical pattern and how can they refresh the- the gradient that's used to compute, uh, the gradients? Yeah, his question is, um, you know, in practice are there some sort of cyclical pattern of how often you update w minus. Yes, yeah there's often particular patterns or- or hyper- it's a hyperparameter choice of how quickly and how frequently you update this. Um, and it will trade-off between propagating information fester, um, and possibly being less stable. So. If you make, um, you know, if n here is one that you're back to standard TD learning. If n is infinity, that means you've never updated it. Um, so, there's a- there's a smooth continuum there. William? Uh, we notice, like, for w, there are better initializations than just like zero, uh, something, like, if you take into account, I guess like the mean and variance. Uh, would you initialize w minus just two w or is there like an even better initialization for w minus? Yeah, his questions is about, you know, the- the impact of how we, um, uh, initialize w ca- can matter. Um, uh, and is the- how do we initialize w minus. Typically, we initialize w minus to be exactly the same as w at the beginning. Um, the choice of it will also affect, uh, certainly the early performance. Those are great questions. Let me keep going because I wanna make sure we get to some of the extensions as well. Um, so just to summarize how DQN works. Um, the main two innovations that data- it uses experienced replay and fixed Q targets. It stores the transition in this sort of replay buffer, a replay memory, um, use sample random mini-batches from D. So, normally sample in mini-batch instead of a single one. So, maybe a sample 1- or whatever other parameter. You do your gradient descent given those. Um, you compute Q learning using these old targets and you optimize the mean squared error between the Q network and Q learning targets, use stochastic gradient descent, and something I did not mention on here is that we're typically doing E-greedy exploration. So, you need some schedule here too for how to do E-greedy. So, they were not doing, um, sophisticated exploration in their original paper. So, this is what it looks like. You sort of go in and you do multiple different convolutions. They have the images, um, and they do some fully connected layers and then the output a Q value for each action. Let me just bring it up. Um, for those of you who haven't seen it before. So, the nice thing is what they- so, you're about to see breakout which is, um, an Atari game and what they do is they show you sort of the performance of what the agent is doing. So, remember the agent's just learning from pixels here how to do this. So, it was pretty extraordinary when they showed this in about 2014. Um, and the beginning of its learning sort of this policy. You can see it's not making- doing the right thing very much, um, and that over time as it gets more episodes it starting to learn to make better decisions about how to do it. Um, and one of the interesting things about it is that as you'd hope, as it gets more and more data, it learns to make better decisions. But one of the things people like about this a lot is that, uh, you can learn to exploit, um, the reward function. 
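Pulling together the pieces summarized above (epsilon-greedy exploration, a replay buffer, mini-batch sampling, a fixed target network, and SGD on the squared error), one DQN training step might look roughly like this. Every name here (env, q_net, target_net, epsilon_greedy, sgd_step) is a placeholder rather than a real library API; this is a sketch of the structure, not the paper's implementation.

def dqn_step(env, s, q_net, target_net, buffer, epsilon, gamma):
    a = epsilon_greedy(q_net, s, epsilon)            # explore with probability epsilon
    s_next, r, done = env.step(a)
    buffer.add(s, a, r, s_next, done)

    batch = buffer.sample(32)                        # a mini-batch, not a single tuple
    targets = [ri if di else ri + gamma * max(target_net(sn))   # targets use frozen weights
               for (si, ai, ri, sn, di) in batch]
    preds = [q_net(si)[ai] for (si, ai, ri, sn, di) in batch]
    q_net = sgd_step(q_net, targets, preds)          # minimize the mean squared error
    return env.reset() if done else s_next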
Uh, so in this case, um, it figures out that if you really just want me to maximize the expected reward, what the best thing for me to do is to just kind of get a hole through there and then as soon as I can start to just bounce around the top [inaudible]. Um, and so this is one of the things where, you know, if you ask the agent to maximize the reward, it'll- it'll learn the right way to maximize the reward given enough data. Um, and so this is really cool that sort of it could discover things that maybe are strategies that people take a little while to learn when they're first learning the game as well. So, when they did this, they then showed, um, some pretty amazing performance on a lot of different games. Many games they could do as well as humans. Now, to be precise here- oh yeah I'm sorry. Uh, I'm just wondering why, um, it's playing, like, why was it [inaudible] around a lot, like, it wasn't sure of its movements like it moved around places often, like [OVERLAPPING]. Yeah. Yeah, so, um, you might see, uh, I think she is talking- she is referring to the fact that the paddle was moving a lot. As the agent is trying to learn, like, when we see that, you sort of go, "Why would he jerk a lot." From the agent's perspective, particularly if there's a cost to moving, then it may just be kind of babbling, uh, and doing exploration just to see what works and- and it- from our perspective that's clearly sort of an inexperienced player to do that. That would be a strange thing but from the agent's perspective, that's completely reasonable. Um, and it does not give him positive or negative reward from that. So, it can't distinguish between, you know, stay in stationary versus going left or right. If you put it in a cost for movement that could help. Yeah? This might become a little bit of [inaudible] but is there a reason to introduce a pulling layer? Puling layer? There might be one in there. I- the- the- I don't remember the complete arc- network architecture, um. The question is whether or not there's a pulling layer in there. I think there prob- there might be inside. There has- they have to be going from images all the way up. But they have the complete architecture. It's a good question. So, the next thing that you can see here is that, um, they got sort of human level performance on a number of different Atari games. There's about 50 games up here. Um, just to be clear here. When they say human level performance that means asymptotically. So, after they have trained their agent, this-, uh, they're not talking about how long it took them or their agent to learn and as you guys will find out for homework two, it can be a lot of experience. Um, uh, a lot of time to learn how to make- do a good performance. But nevertheless, there are a lot of cases where that might be reasonable in terms of games. So, they did very well on some domains. Some domains, they did very poorly. Um, there's been a lot of interest in these sort of games on the bottom end of the tail which often known as those hard exploration games. We'll probably talk- uh, we'll talk a lot more about exploration later on in the course. So, what was critical? So, I- I like the, uh, there's a lot of really lovely things about this paper and one of the really nice things is that they did a nice ablation study, um, uh, for us to sort of understand what were the important features and if you look at these numbers. Um, I think that it's clear that the really important feature is replay. 
So, this is their performance using a linear network; the deep network alone seemed to not help so much. Using that fixed Q, um, fixed Q here means using a fixed target, okay, that gets you a little bit, from three to ten. You do replay and suddenly you're at 241. Okay, so throwing away each data point after you use it once is not a very good thing to do. You want to reuse that data. Um, and then if you combine replay and fixed Q you do get an improvement over that, but it's really replay that gets you this huge increase, at least in Breakout and in some of the other games. Now, in some other ones, you start to get a significant improvement as soon as you use a more complicated function approximator. But in general, replay is hugely important and it just gives us a much better way to use the data. Yeah? Um, you know, because here in this table it seems like you'd want to use replay and fixed Q with the linear model, and that it might be a mistake to be using, uh, a deep model here. Do you agree with that, with reference to the table, or? So, the question is like, "Well, maybe we could use, like, linear-" Also, I guess I should be clear: everything in the next four columns was all deep. So, they don't have linear plus replay here. But you could certainly imagine trying linear plus replay, and it seems like you might do very well here; it might depend on which features you're using. There's some cool work over the last few years looking also at whether you can combine these two. So, we've done some work using a Bayesian last layer, using, like, Bayesian linear regression, which is useful for uncertainty. Other people have just done linear regression where the idea is you use a deep neural network up to a certain point and then you do kind of direct linear regression to fit exactly what the weights are at the final layer. So, that can be much more efficient, um, but you still have a complicated representation. All right. So, since then, there's been a huge amount of interest in this area. Um, ah, so, again, dating myself, in reinforcement learning we used to go and give a talk about reinforcement learning and like 40 people would show up, but most of them you knew, and then it started really changing. I think it was maybe in 2016, at ICML in New York, when suddenly there were 400 people in the room for reinforcement learning talks. Um, and then, this year at NeurIPS, which is one of the major machine-learning conferences, it sold out in like eight minutes. Um, so, there were 8,000 people there, and there was a huge amount of interest in deep learning, and for the deep learning workshop, you sort of have a 2,000-person auditorium. So, there's been a huge amount of excitement based on this work, which I think is really a huge credit to DeepMind and to the work that David Silver and others have been doing to sort of show that this was possible. Uh, some of the immediate improvements that we're going to go through really quickly here are double DQN, prioritized replay, and dueling DQN. Um, there have been way, way, way more papers than that, but these are some of the early, really big improvements on top of DQN. So, double DQN is kind of like double Q learning, which we covered very briefly at the end of a couple of classes ago.
The thing that we discussed there was this sort of maximization bias, is that, um, the max of estimated state action values can be a biased estimator of the true max. So, we talked really briefly about double Q learning. Um, so, a double Q learning, the idea was that we are going to maintain two different Q networks. Uh, we can select our action using, like an E Greedy Policy where we average between those Q networks, and then we'll observe a reward in a state and we basically use one of the Qs as the target for the other. So, if, you know, with a 50 percent probability, we're going to update one network, and we're going to do that by using picking the action from the other network. This is to try to separate how we pick our action versus our estimate of the value of that action to deal with this sort of maximization bias issue. Then with 50 percent other probability, we update Q2, and we pick the next action from the other network. So, this is a pretty small change, it means you have to- you have to maintain two different networks or two different sets of weights, um, and it can be pretty helpful. So, um, if you extend this idea to DQN, you have sort of our current Q network, w select actions, and this older one to evaluate actions. So, you can put this in there to do action selection, and then you can evaluate the value of it with your other networks- other, other network weights. So, it's a fairly small change, it's very similar to what we were doing already for the target network, network weights. It turns out that it gives you a huge benefit in many, many cases for the Atari games. So, uh, this is something that's generally very useful to do, um, and gives you sort- of sort of immediate significant boost in performance, sort of, you know, for the equivalent of like a small amount of coding. That's one idea, and that's sort of a direct lift up from sort of, you know, double Q learning. The second thing is prioritized replay. So, let's go back to the Mars Rover example. Um, er, so, in Mars Rover we had this really small domain, we are talking about tabular setting through just seven states, um, and we're talking about a policy that just always took action a1 which turned out to mostly go left. So, we had this trajectory, we started off in state s3, we took action a1, we got rewarded zero, we went to s2, we stayed in s2 for one round when we did a1, and then eventually went to s1, and then we terminated. So, it was this. And the first visit Monte Carlo estimate of v for every state was 1110000, and the TD estimate with alpha equal one was this. That was when we talked about the fact that TD only uses each data point once and it didn't propagate the information back. So, the only update for TD learning was when we reached state s1, we took action one, we got a reward of one, and then we terminated. So, we only updated the value of state one. So, now let's imagine that you get to do- now let's think about what your- like your replay back up would be in this case. You'd have something like this, you'd have s3, a1, 0, s2, s2, a1, 0 s2, s2, a1, 0, s1, s1, a1, 1, terminate. That's what your replay back up would look like. So, let's say you get to choose two replay backups to do. So, you have four possible replay backups, you can pick the same one twice, if you want to, and I'm going to ask you to pick to replay backups to do to improve the value function, in some way. 
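Stepping back for a moment to the double DQN idea described just above: its target computation can be sketched as follows, with q_current and q_target standing in for the two sets of network weights (the current weights select the action, the older weights evaluate it). Names and signatures are illustrative.

import numpy as np

def double_dqn_target(r, s_next, q_current, q_target, gamma, done):
    if done:
        return r
    a_star = np.argmax(q_current(s_next))        # action selection with the current weights
    return r + gamma * q_target(s_next)[a_star]  # action evaluation with the older weights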
Um, and I'd like you to think for a second or talk to your neighbor about which of the two you should pick, and why, and which order you do them in as well, and whether it makes any difference. Maybe it doesn't matter if you can just pick any of these, you're going to get the same value function no matter what you do. So, are there two- two updates that are particularly good, and if so, why and what order would you do them in? [NOISE] Hopefully you had a chance to think about that for a second. First of all, does it matter? So, I'm going to first ask you guys, uh, the question. Vote if you think it matters which ones you pick, in terms of the value function you get out. That's right. So, it absolutely matters which two you pick in terms of the resulting value function, you will not get the same value function no matter which two you pick. Um, uh, now as for another voting. So, I will ask for which one we should do first? Should we update- should we do four first? Four is the last one on our replay buffer. Should we do three first? Should we do two first? Okay. All right. So, 3s have it, does somebody want to explain why? Yeah. I think. You've got to back-propagate from the information you're already [NOISE] have on step one to step two. Right. Yeah. So what the student said is right. So, if you pick, um, backup three, so what's backup three? It is, S2, A1, 0, S1. So if you do the backup, that's, zero, plus gamma V of S prime, S1. And this is one. So that means now you're gonna get to backup and so now your V of S2 is gonna be equal to one. So you get to backup that information. Yeah. So I, I wasn't extremely specific on what like, the right thing to do here is, that, that the- um, that my main thing is that I wanted to emphasize that it makes the big difference and that, and that, um, it's gonna matter in terms of order. What's the next one we should do? Should we do, raise your hand if we should do, three again. Raise your hand if we should do two. Raise your hand if we should do one. Yeah. The ones have it. I- someone want to explain why? Yeah, in the back. And that's the same as the last time I [inaudible] That's right. Yes. So, um, if you wanted to get all the way to the Monte Carlo estimate. What you would wanna do here, is you'd wanna do S3, a1, 0, S2 which would allow your V of S3 to be updated to one. And at this point your value function will be exactly the same as the Monte Carlo. So it definitely matters. It matters the order in which you did, do it. If you had done S3, a1, 0, S2, your S3 wouldn't have changed. Um, so ordering can make a big difference. Uh, so not only do we wanna think about like, what, um, was being brought up before but I think to say like what should we be putting in our replay buffer, not only do we wanna think about what- should be in a replay buffer but also what order do we sampled them can make a big difference in terms of convergence rates. Um, uh, and in particular, there's some really cool work from a couple of years ago looking at this formally of like how, at what the ordering, matters. Um, so, there is this paper back in 2016 that tried to look at what the optimal order would be. So imagine that you had an oracle that could, um, exactly compute. Now, this is gonna be computationally intractable, we're not gonna be able to do this in general, but imagine that the oracle could go through and pick and figure out exactly what the right order is. 
Um, then what they found out in this case is that, for this or a small chain like example, um, you'd get this exponential improvement in convergence, which is pretty awesome. So what does that mean? The number, of, um, updates you need to do until your value function converges to the right thing. It can be exponentially smaller, if you update carefully and you, you could have an oracle tells you exactly what tuple the sample. Which is super cool. Um, so you can be much much better. But you can't do that. You're not gonna spend all this. It- it's very computationally, expensive or impossible in some cases to figure out exactly what that uh, that oracle ordering should be. Um, but it does illustrate that we, we might wanna be careful about the order that we do it and- so, their, intuition, for this, was, let's try to prioritize a tuple for replay according to its DQN error. So, the DQN error, um, in this case is just our TD Learning error. So it's gonna be the difference between, our current. This is basically our prediction error. So this is our prediction error [NOISE], Almost our prediction error, I'll just call it TD, because it's not quite because we were doing the max. So this is like, sort of our predicted, TD error minus our current. Let us say, if you have a really really big error, that we're gonna prioritize, updating that more. And you update this quiet quantity at every update, you set it for new tuples to be zero and one method- they have two different methods for, for trying to do this sort of prioritization. That one method basically takes these, um, priorities, raises them to some power alpha, um, and then normalizes And then that's the probability, of selecting that tuple. So you prioritize more things that are weights. Yeah. Doesn't freezing. Name first, please. Oh, Sorry. Doesn't freezing in the old ways were a counter to propagating back the information there? It's like, you first the old ways and uh, example we're just going through, after you like propagated the one back once, you wouldn't be able to do anymore because your value's totally zero It's a great point, which is, if you are fixing, um, uh, your w minus, then, if you were looking at our case that we had before, then you wouldn't be able to continue propagating that back, because you wouldn't update yet, yet, that's exactly right. So there's gonna be this tension between, when you fix things versus her propagating information back. Um, I, and, it's a tention that one has to sort of figure out, there's not necessarily principled ways for, exactly what the right schedule is to do that, but it's a hyperparameter to do. So why does it, what does ordering matter, that if you're fixing, and so you are not changing, uh, like, then it wouldn't matter what order we sampled those previous ones, right? Uh, okay. So basically, ordering matter at all, in that case. It still matters because we're still gonna be doing replay, o- over, uh, the weights will be changing during the time period of which will be replayed over that buffer. So that buffer could be like, million and you might re-update your weights like every 50 steps or something like that. So there's still gonna be a whole bunch of data points in, uh, in your replay buffer, that it's useful to think about, now that your weights have changed, what ordering do you wanna go through those? It's a great question. Okay. So what method is this? Lemme just, just to clarify, if we set, um, alpha equal to zero, what's the rule for selecting among the existing tuples? 
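A minimal sketch of the proportional prioritization just described: raise each stored tuple's absolute DQN/TD error to the power alpha and normalize into sampling probabilities. The small constant added to the priorities is an assumption here, a common tweak so zero-error tuples can still be sampled; note that alpha = 0 recovers uniform sampling.

import numpy as np

def sample_tuple_index(td_errors, alpha, eps=1e-6):
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()        # probability of selecting each stored tuple
    return np.random.choice(len(td_errors), p=probs)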
So, p_i is, uh, basically our DQN error. If we set alpha equal to zero, you know, it's uniform, right. Yeah. So, this sort of trades off between uniform, no prioritization, and completely picking the one with the highest DQN error, which is what you would get if alpha were infinity. So it's a trade-off, and it's stochastic. All right. So, um, then, they combine this with- the reason why I'm picking these three is they sort of layer on top of each other. So this is prioritized replay plus double DQN versus just double DQN. Most of the time- if this is zero they're both the same, underneath the line means plain, uh, vanilla double DQN is better, above means that prioritized replay is better. Most of the time prioritized replay is better, and there are some hyperparameters here to play with, but most of the time it's useful. And it's certainly useful to think about where order might matter. All right. We don't have very much time left so I'm just gonna go quickly through this, just so you're aware of it. Um, one of the best papers from ICML 2016 was dueling. Um, the idea is that, if you want to make decisions in the world, there are some states that are better or worse, and they're just gonna have higher value or lower value, but what you really wanna be able to do is figure out what the right action is to do in a particular state. Um, and so what you want to understand is this advantage function. You wanna know how much better or worse taking a particular action is versus following the current policy. It's really like, I don't care about estimating the value of a state, I care about being able to understand which of the actions has the better value. So I'm looking at this advantage function. So, what they do is that, in contrast to DQN where you output all of the Q's, they're gonna separate it: they're gonna first estimate the value of a state, and they're gonna estimate this advantage function, which is Q of s, a1 minus V of s, Q of s, a2 minus V of s, and so on. They're just gonna separate it. It's an architectural choice, and learning then recombines these to get the Q. And the idea is this is gonna help us refocus on the signal that we care about, which is accurately estimating which action is better or worse. Um, there are intriguing questions about whether or not this is identifiable; I don't have enough time to go into these today. It is not identifiable. I'm happy to talk about all of that offline. Um, the reason this is important is it just forces one to make some sort of default assumptions about specifying the advantage function. Empirically, it's often super helpful. So, again, compared to double DQN with prioritized replay, which we just saw, which was already better than double DQN, which is also better than DQN, this again gives you another performance gain, a substantial one. So basically these are three different improvements that came up within the first two years after DQN that started making some really big performance gains compared to the completely vanilla DQN. For homework two, you're gonna be implementing DQN, not the other ones; you're welcome to implement some of the other ones. It's just good to be aware of those; they're some of the major initial improvements that gave substantially better performance on Atari. Um, I'll leave this up. We're almost out of time.
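As a small sketch of how the dueling network's two streams get recombined into Q values: v and adv stand in for the outputs of the value and advantage heads, and subtracting the mean advantage is one common default convention for the identifiability issue mentioned above (an assumption here, not the only choice).

import numpy as np

def dueling_q(v, adv):
    # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), for a vector of advantages over actions
    return v + (adv - np.mean(adv))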
Uh, feel free to look at the last couple slides of this for some practical [NOISE] tips that came from John Schulman. Um, John Schulman was a PhD student at Berkeley who is now one of the heads of OpenAI. Um, just one thing that I will make sure to highlight: it can be super tempting to try to start by, like, implementing Q learning directly on Atari. I highly encourage you to first go through the order of the assignment and, like, do the linear case, and make sure your Q learning is totally working, um, before you deploy on Atari. Even with the smaller games, like Pong, which we're working on, um, it is enormously time consuming. Um, and so in terms of just understanding and debugging, it's way better to make sure that you know your Q learning method is working before you wait 12 hours to see whether or not, oh, it didn't learn anything on Pong. So, there's a reason for why we sort of build up the way we do in the assignment. Um, there are a few other practical tips, feel free to look at those, um, and then we'll see you on Thursday. Thanks. |
Stanford_CS234_Reinforcement_Learning_Winter_2019 | Stanford_CS234_Reinforcement_Learning_Winter_2019_Lecture_3_ModelFree_Policy_Evaluation.txt | So, what we're gonna do today is we're gonna start to talk about Model-Free Policy Evaluation. Um, so, what we were discussing last time is we started formally defining Markov processes, Markov reward processes and Markov decision processes, and we're looking at the relationship between these different forms of processes which are ways for us to model sequential decision-making under uncertainty problems. So, what we're thinking about last week was, well what if someone gives us a model of how the world works? So, we know what the reward model is, we know what the dynamics model is. It still might be hard to figure out what's the right thing to do. So, how do we take actions or how do we find a policy that can maximize our expected discounted sum of rewards? Um, if- even if we're given a model, then we still need to do some computation to try to identify that policy. So, what we're gonna get to very shortly is how do we do all of that when we don't get a model of the world in advance. But, let's just first a recap, um, sort of this general problem of policy evaluation. So, we heard a little bit about policy evaluation last time when we talked about policy evaluation as being one step inside a policy, um, iteration which alternated between policy evaluation and policy improvement. So, the idea in policy evaluation is somebody gives you a way to act and then you want to figure out how good that policy is. So, what is the expected discounted sum of rewards for that particular policy? And what we're gonna be talking about today is dynamic programming, Monte Carlo policy evaluation, and TD learning. As well as some of the ways that we should think about trying to compare between these algorithms. So, just as a brief recall, um, remember that last time we defined what a return is for Markov reward process. And a return for a Markov reward process that we defined by G_t was the discounted sum of rewards we get from that particular time point t onwards. So, we're gonna get an immediate reward of Rt and then after that, we're gonna get Gamma, where Gamma was our discount factor. And remember we're gonna assume that's gonna be somewhere between zero and one. And so, we're sort of weighing future awards generally less than the immediate rewards. The definition of a state value function was the expected return. And in general, the expected return is gonna be different from a particular return if the domain is stochastic, because the [NOISE] reward you might get when you try to drive to the airport today is likely gonna be different than the reward you get when you drive to the airport tomorrow, because traffic will be slightly different, and it's stochastic, varies over time. And so, you can compare whether, you know, on a particular day if it took you two hours to get to the airport versus on average, it might take you only an hour. We also defined the state action value function which was the expected reward, um, if we're following a particular policy Pi but we start off by taking an action a. [NOISE] So, we're saying if you're in a state s, you take an action a, and from then onwards, you follow this policy Pi that someone's given you. What is the expected discounted sum of rewards? 
And we saw that Q functions were useful because we can use them for things like policy improvement, because they allowed us to think about, well, if we wanna follow a policy later but we do something slightly different to start, can we see how that would help us improve in terms of the amount of reward we'd obtain? So, we talked about this somewhat but as a recap, um, we talked about doing dynamic programming for policy evaluation. So, dynamic programming was something we could apply when we know how the world works. So, this is when we're given the dynamics, and I'll use the word dynamics or transition model interchangeably in this course. So, if you're given the dynamics or the transition model p and the reward model, then you can do dynamic programming to evaluate how good a policy is. And so, the way we talked about doing this is that you initialize your value function, which you could think of generally as a vector. Right now, we're thinking about there being a finite set of states and actions. So, you can initialize your value function for this particular policy to be zero, um, and then you iterate until convergence. Where we say the value of a state is exactly equal to the immediate reward we get from following that policy in that state plus the discounted sum of future rewards we get [NOISE] using our transition model and the value that we've computed from a previous iteration. And we talked about defining convergence here. Convergence generally we're gonna use some sort of norm to compare the difference between our value functions on one iteration and next. So, we do things like this, V_Pi_k minus V_Pi at k minus one [NOISE]. And wait for this to be smaller than some Epsilon. Okay. So, just as a reminder to what is this quantity that we're computing representing? Well, we can think of this quantity that we're computing, um, as being an exact value of the k horizon value of state s under that policy. So, on any particular iteration, it's as if we know exactly what value we would get if we could only act for a finite number of time steps like k time steps. Says, you know, how good would it be if you followed this particular policy for the next k time steps? Equivalently, you can think of it as an approximation of what the value would be if you acted forever. So, if k is really large, k is 20 billion, then it's probably gonna be a pretty good approximation to the value you'd get if you'd act forever. And if k is one, that's probably gonna be a pretty bad estimate. This will converge over time. So, I think it's useful to think about some of these things graphically as well. So, let's think about this as you're in a state s, which I'm denoting with that white circle at the top and then you can take an action. So, what dynamic programming is doing is it's computing an estimate of the V_Pi here at the top by saying, "What is the expectation, expectation over Pi of RT plus Gamma V_k minus one. And what's that expectation over it's gonna be the probability of s prime given s, Pi of s. Okay. So, how do we think about this graphically? Well, we started in this state, we take an action and then we think about the next states that we could reach. We're kind of again assuming that we're in a stochastic process. So, maybe, you know, sometimes the red light is on and sometimes the red light is off. So, depending on that, we are gonna be at a different next state, we're trying to drive to the airport. And then we can think about after we reach that state, then we can take some other actions. 
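[Editor's aside: a minimal Python sketch of the dynamic-programming backup just described, for a tabular MDP with a known model. The array layout (P[s, a, s'], R[s, a], policy[s]) and the tolerance are illustrative assumptions, not the course's starter code.]

```python
import numpy as np

def dp_policy_evaluation(P, R, policy, gamma=0.9, eps=1e-6):
    """Iterate V_k(s) = R(s, pi(s)) + gamma * sum_s' P(s' | s, pi(s)) * V_{k-1}(s')
    until the change between iterations is below eps in the infinity norm.
    Assumes a known model: P[s, a, s'] transition probabilities, R[s, a] rewards,
    and policy[s] the (deterministic) action taken in state s."""
    n_states = P.shape[0]
    V = np.zeros(n_states)
    while True:
        V_new = np.array([
            R[s, policy[s]] + gamma * P[s, policy[s]].dot(V)
            for s in range(n_states)
        ])
        if np.max(np.abs(V_new - V)) < eps:  # convergence check described above
            return V_new
        V = V_new
```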
And in particular, we can take one action in this case because we're assuming we're fixing what the policy is. And then from those, that, those actions would lead us to other possible states. So, we can think of sort of drawing the tree of trajectories that we might reach if we started in a state and start following our policy, where whenever we get to make a choice, there's a single action we take because we're doing policy evaluation. And whenever there's sort of nature's choice, then there's like a distribution over next states that we might reach. So, you can think of these as the S-prime and the S double-primes kind of time is going down like this. So, this is sort of you know the, the potential futures that your agent could arise in. And I think it's useful to think about this graphically because then we can think about how those potential futures, um, how we can use those to compute what is the value, a difference of this policy. So, um, in what dynamic program what we're doing and in general when we're trying to compute the value of a policy is, we're gonna take an expectation over next states. So, the value is the expected discounted sum of future rewards if we follow this policy, and the expectation is exactly over these distributions of futures. So, whenever we see an action and then we think about all the next possible nodes we could get to, we want to take an expectation over those features and expectation over all the rewards we could get. So, that's what dynamic programming is or that's what we can think of this graph is doing. And when we think about what dynamic programming is doing, is it estimates this expectation over all those possible futures by bootstrapping and computing a one timestep expectation exactly. So, what does it do? Again, it says, "My V_Pi of s is exactly equal to r of s, Pi of s, my immediate reward plus Gamma sum over probability of s prime given s a V_Pi k minus one, the best part. So, it bootstraps, and we're using the word bootstraps there because it's not actually summing up all of these lower down potential rewards. It's saying, "I don't need to do that." Previously, I computed what it would be like if I started say in this state and continued on for the future. And so, I, now I already know what the value is at that state, and I'm gonna bootstrap and use that as a substitute for actually doing all that roll-out. And also here, because I know what the expected discounted or I know what the, um, sorry, the model is, that it can also just take a direct expectation over s prime. So, my question is, is there an implicit assumption here that the reward at a given state and thus the value function of evaluated states doesn't change over time. So, like because you're using it from the prior iteration? So, I think that question is saying, um, is there an explicit assumption here that the value doesn't change over time? Yes. The idea in this case is that the value that we're computing is for the infinite horizon case and therefore that it's stationary. It doesn't depend on the time step. From that way we're not gonna talk very much about the finite horizon case today, in that case it's different. In this situation, we're saying at all time steps you always have an infinite number more time steps to go. So, the value function itself is a stationary quantity. So, why is this an okay thing to do like we're bootstrapping? Um, the reason that this is okay is because we actually have an exact representation of this V_k minus one. 
You're not getting any approximation error of putting that in instead of sort of explicitly summing over lots of different histories. Sorry, lots of different future rewards. So, when we're doing dynamic programming the things to sort of think about here is if we know the model, then know dynamic model and know the reward model, that we can compute the immediate reward exactly. We can compute our expected sum over future states exactly, and then we substitute in instead of thinking about, we, instead of thinking about expanding this out as being a sum over rewards, we can just bootstrap and use our current estimate of V_k minus one. And the reason that I'm emphasizing this a lot is that when we start to look at these other methods like Monte Carlo methods and TD methods, they're not gonna do this anymore. They're gonna do other forms of approximation of trying to compute this tree. So, ultimately to compute the value of a policy, what we're essentially doing is we're thinking about all possible futures and what is the return we'd get under each of those futures. And we're trying to make it tractable to compute that particularly when we don't know how the world works and we don't have access to the dynamics model or the reward model. Okay. So, just to summarize dynamic programming, we should talk a little- a little bit about last time, but we didn't really talk about the bootstrapping aspect. Dynamic programming says the value of a policy is approximately equal to the expected next- the expectation over pi of immediate reward plus gamma times the previous value you computed requires a model, it bootstraps the future return using an estimate, using your V_k minus 1. And it requires the Markov assumption. And what- what I mean by that there is that, um, you're not thinking about all the past you got to reach a certain state. You're saying no matter how I got to that previous state, my value of that state is identical, um, and I can sort of assume that, and I can compute that singly based on the current observation. So, may I have any questions about this. So, right now we're mostly recap of last time, um, but sort of slightly pointing out some things that I didn't point out before. Okay. So, those things are useful now that we're gonna be talking about policy evaluation without a model. So, what we're going to talk about now is Monte Carlo policy evaluation which is something that we can apply when we don't know what the models are of the world, and we're gonna talk a little bit about how we can start to think about comparing these different forms of estimators, estimators of the value of a policy. So, in Monte Carlo policy evaluation, um, we can again think about the return. So, the returning and G_t are discounted sum of future rewards under a policy, and the value of a policy we can represent now is just, let's think about all the possible trajectories we could get, um, under our policy and what's average all their returns. So, we can again think about that tree we just constructed. Each of those different sort of branches would have had a particular reward, um, and then we're just going to get the average over all of them. So, it's a pretty simple idea. The idea is that the value is just equal to your expected return. And if all your trajectories are finite, you just can take a whole bunch of these and you average. So, the nice thing about Monte Carlo policy evaluation is it doesn't require you to have a sp- a specific model of the dynamics or reward. 
It just requires you to be able to sample from the environment. So, I don't need to know a particular, like, parametric model of how traffic works. All I have to do is drive from here to the airport, you know, hundreds of times, and then average how long it takes me. And if I'm always driving with the same policy, let's say I always take the highway, um, then if I do that, you know, 100 times, then I have a pretty good estimate of what is my expected time to get to the airport if I drive on the highway, when that is my policy. So, it doesn't do bootstrapping, it doesn't try to maintain this V_k minus 1. Um, it simply sums up all the rewards from each of your trajectories and then averages across those. It doesn't assume the state is Markov. Just averaging doesn't- there's no notion of the next state and whether or not that's sufficient to, um, to summarize the future returns. An important thing is that it can only be applied to what are known as episodic MDPs. If you act forever, if there's no notion of termination- if this is sort of like averaging over your life, this doesn't work, [LAUGHTER] because you only get one. So, you need to have a process where you can repeatedly do this many times and the process will end each time. So, like, driving to the airport might be really long, but you'll get there eventually and then you can try again tomorrow. So, this doesn't work for all processes; like, if you have a robot that's just going to be acting forever, you can't do Monte Carlo policy evaluation. Okay. So, we also often do this in an incremental fashion, which means that we maintain a running estimate, and after each episode, we update our current estimate of V_pi. And our hope is that as we get more and more data, this estimate will converge to the true value. So, let's look at, um, what the algorithm for this would be. So, one algorithm, which is known as the First-Visit Monte Carlo on-policy evaluation algorithm, is we start off and we assume that we haven't- N here is essentially the number of times we visited a state. So, we start off and this is zero. Also the return- the- or average return from starting in any state is also zero. So, we initialize, say, right now that we think that, you know, we get no reward from a state and we haven't visited any state. And then what we do is we loop. And for each loop we sample an episode, which is we start in the starting state and we act until our process terminates. I start off at my house and I drive until I can get to the airport. And then I compute my return. So, I say okay, well maybe that took me two hours to get there. So, now my G_i is two hours. Um, but you just compute your return and you compute it for every time step t inside of the episode. So, G_i,t here is defined from the t time step in that episode, what is the remaining reward you got from that time step onwards, and we'll instantiate this in our Mars Rover example in a second. And then for every state that you visited in that particular episode, for the first time you encountered a state, you look- you increment the counter and you update your total return. And then you just take an average of those estimates to compute your current estimate of the value for the state. Now, why might you be in the same state for more than one time step in an episode? Well, let's say I get to the red light, and let's say I've discretized my time steps. So, I look at my state every one minute. Well, I got to a red light and there was a traffic accident.
So, on time step one I'm at the red light, time step two I'm at the red light, time step three I'm at the red light. And so you can be in the same state for multiple time steps during the episode. And what this is saying is that you only use the first time step you saw that state. And then you sum up the rewards you get till the end of that episode. Okay. If we saw the state at, I guess, like, different time steps in the same episode, would it still be incremented twice, because there's gonna be a gap between them? The question is, what happens if we, um, see the same state in the same episode? In first visit, you only use the first occurrence. So, you drop all other ones. So, the first time I got to my red light, then I would sum up the future rewards till the end of the episode. If I happen to get to the same red light during the same episode, I ignore that data. We'll see a different way of doing that in a second. Okay. So, how do we estimate whether or not this is a good thing to do? How do we evaluate whether or not this particular estimate is good? This is an estimate. It's likely wrong, at least at the beginning where we don't have much data. So, how do we understand whether or not this estimate is good, and how are we going to compare all of the estimators and these algorithms that we're going to be talking about today? So, um, actually just raise your hand because I'm curious. Um, who here has sort of formally seen definitions of bias and variance in other classes? Okay. Most people but not quite everybody. So, just as a quick recap, um, let's think about sort of having a statistical model that is parameterized by theta, um, and that we also have some distribution over some observed data, p of x given theta. So, we want to have a statistic theta hat which is a function. So, theta hat is a function of the observed data and it provides an estimate of theta. So, in our case, we're going to have this value, this estimate of the value we're computing. This is a function of our episodes and this is an estimate of the true discounted expected rewards of following this policy. So, the definition of the bias of an estimator is to compare what is the expected value of our statistic versus the true value, for any set of data. So, this would say, if I compute, you know, the expected amount of time for me to get to the airport based on trying to drive there three times, for the algorithm that I just showed you, is that unbiased? On average, is that the same as the true expected time for me to get to the airport? The definition of the variance of an estimator is the expected squared difference between my statistic and its expected value, expected over the, er, the, um, the type of data I could get under the true parameter, and the mean squared error combines these two. Mean squared error is normally what we care about. Normally, we ultimately care about sort of how far away is our estimate of the quantity we care about versus the true quantity? And that's the sum of its variance and its squared bias. And generally, different algorithms and different estimators will have different trade-offs between bias and variance. Okay. So, if we go back to our First-Visit Monte Carlo algorithm, the V_pi estimator that we use there is an unbiased estimator of the true expected discounted sum of rewards from our policy. It's just a simple average, um, and it's unbiased. And by the law of large numbers, as you get more and more data, it converges to the true value. So, it's also what is known as consistent.
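[Editor's aside: a minimal sketch of the first-visit Monte Carlo evaluator just described. It assumes each episode is given as a list of (state, reward) pairs collected under the fixed policy; the names are illustrative.]

```python
from collections import defaultdict

def first_visit_mc(episodes, gamma=1.0):
    """First-visit Monte Carlo policy evaluation.
    episodes: list of episodes, each a list of (state, reward) pairs.
    Returns a dict mapping each visited state to its averaged return."""
    N = defaultdict(int)            # number of first visits per state
    G_total = defaultdict(float)    # summed returns per state
    for episode in episodes:
        # Compute G_t = r_t + gamma * G_{t+1} for every time step, back to front.
        returns, G = [], 0.0
        for _, r in reversed(episode):
            G = r + gamma * G
            returns.append(G)
        returns.reverse()
        seen = set()
        for t, (s, _) in enumerate(episode):
            if s in seen:           # only the first occurrence of s in this episode counts
                continue
            seen.add(s)
            N[s] += 1
            G_total[s] += returns[t]
    return {s: G_total[s] / N[s] for s in N}
```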
Consistent means that it converges to the true value as the- as data goes to infinity. So, this is reasonable, um, but it might not be very efficient. So, ah, as we just talked about, you might be in the same state, you might be at the same stoplight for many, many time steps. Um, and you're only going to use the first visit to a state in an episode to update. So, every-visit Monte Carlo simply says, well, every time you visit a state during the episode, look at how much reward you got from that state till the end and average over all of those. So, essentially every time you reach a state, you always look at the sum of discounted rewards from there to the end of the episode and you average all of that. Which is generally going to be more data efficient. On the bias definition, I guess I'm just a little confused how we would compute bias, given that we don't actually know theta. How do we compute bias? [NOISE] Yeah, given that we don't know theta. It's a great- the question is how do you compute bias? Yes, if you, uh, if you can compute bias exactly, that normally means you know what theta is, in which case why are you doing an estimator? Generally, we do not know what the bias is, um, we can often bound it. So, often using things like concentration inequalities we can, um, well, concentration inequalities are more for variance. Often, um, we don't know exactly what the bias is, unless you know what the ground truth is. And there are different ways for us to get estimates of bias in practice. So, as you compare across different forms of parametric models, um, sometimes what you can do is structural risk, ah, structural risk minimization and things like that to try to get sort of a quantity of how you compare your estimator and your model class. I'm not going to go very much into that here but I'm happy to talk about it in office hours. So, in every-visit Monte Carlo, we're just gonna update it every single time. And that's gonna give us another estimator. And note that that's gonna give us generally a lot more counts. Because every time you see a state, you can update the counts. But it's biased. So, you can show that this is a biased estimator of V_pi. Anyone have intuition for why it might be biased? So, in the first case, for those of you that have seen this before, or not necessarily this particularly but seen this sort of analysis: in first-visit Monte Carlo, you're getting IID estimates of a state, of a state's return, right? Because you only take that, um, each episode is, is IID because you're starting at a certain state and you're estimating from there. Ah, and you only use the return for the first time you saw that state. If you see a state multiple times in the same episode, are their returns correlated or uncorrelated? Correlated. Okay. So, your data is no longer IID. So, that's sort of the intuition for why, when you mod- move to every-visit Monte Carlo, your estimator can be biased, 'cause you're not averaging over IID variables anymore. Is it biased for reasons obviously related to the inspection paradox? [inaudible] I don't know. That's a good question. I'm happy to look at it and come back to you. However, the nice thing about this is that it is a consistent estimator. So, as you get more and more data, it will converge to the true value. And empirically, it often has way lower variance. And intuitively, it should have way lower variance. We're averaging over a lot more data points, uh, typically from the same set of episodes.
Now, you know, if you only visit one- if you- if you're very unlikely to repeatedly visit the same state, these two estimators are generally very close to the same thing in an episode. Because you're not gonna have multiple visits to the same state. But in some cases you're gonna visit the same state a lot of times and you get a lot more data and these estimators will generally be much better if you use every visit, but it's biased. So, there's this trade-off. Empirically, this is often much better. Now, of course in practice often instead of the- often you may wanna do this incrementally. You may just want to kind of keep track of a running mean and then you keep track of your running mean and update your counts sort of incrementally. And you can do that if also as you visit you don't have to wait until the end lessons- oh, that's wrong. You do have to wait till the end because you always have to wait till you get the full return before you can update. Yeah, in the back. So, a question on that, if you could like- if you condition on the fact that you have the same number of estimates approximately in each of the states, would then the two be more or less equivalent but the other one would be less biased. For example, if you did I guess there is no way you could have for example a same number of episodes, ah, the same number of count in each state with the first visit approximation. But if you did have that, would you imagine that the episode would be lower in that case? I would- expressions about if you have the same number of counts to a state across the two algorithms. And in terms of the episodes, you couldn't have that be the case unless- so they'd need to be identical if you only visit one state, um, once in an episode and then they'd be totally identical. If it's not the case, if you visit, um, a state multiple times in, in one episode, then, uh, by the time you get to the same counts, the one for the single visit would be better 'cause it's unbiased and it would have basically the same variance. Any other questions about that? Cool. Um, so, incremental Monte Carlo, um, on policy evaluation is essentially the same as before except where you can just sort of slowly move your running average for each of the states. And the important thing about this is that, um, as you slowly move your estimator, if you set your alpha to be 1 over Ns, it's identical to every visit Monte Carlo. Essentially, you're just exactly computing the average. Um, but you don't have to do that. So, you can skew it so that you're running average is more weighted towards recent data. And the reason why you might wanna do that is because if your real domain is non-stationary. We have a guess of where, where domains might be non-stationary. It's kind of an advanced topic. We're not gonna really talk about non-stationary domains for most of this class though in reality, they're incredibly important. Um, I don't know if your mechanical parts are breaking down or something's off. Example of like if you're in a manufacturing process and your parts are changing- are breaking down over time. So, your dynamics model is actually changing over time. Then you don't want to reuse your old data because you're- actually your MDP has changed over time. So, this is one of the reasons often empirically like when people train recommender systems and things like that, you know, the, the news all these things are non-stationary. And so people often retrain them a lot to deal with this non-stationarity process. 
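[Editor's aside: a sketch of the incremental update just described. With alpha left unset it uses 1/N(s) and is exactly every-visit Monte Carlo averaging; a fixed alpha weights recent episodes more, which is the non-stationary case mentioned above. Names are illustrative.]

```python
def incremental_mc_update(V, N, state_returns, alpha=None):
    """Apply V(s) <- V(s) + step * (G - V(s)) for each (state, return) pair
    from one finished episode (one pair per visit for every-visit MC).
    V and N are dicts updated in place; alpha=None means step = 1 / N(s)."""
    for s, G in state_returns:
        N[s] = N.get(s, 0) + 1
        V.setdefault(s, 0.0)
        step = alpha if alpha is not None else 1.0 / N[s]
        V[s] += step * (G - V[s])
```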
Do I see a question on the back? Okay. Yeah. So, empirically that's often really helpful for non-stationary domains, but if it's non-stationary there's all- there's a bunch of different concerns. So, we're going to mostly ignore that for now. Okay. So, let's just check our understanding for a second. For Monte Carlo, for on-policy evaluation. Let's go back to our Mars rover domain. So, in our Mars rover, we had these seven states. Our rover dropped down and it was gonna explore. The reward in state S_1 is one, in state S_7 it's plus 10, and everywhere else it's zero. And our policy is gonna be A_1 for all states. And now imagine we don't know what the dynamics model is. So, we're just gonna observe trajectories. And if you get to either state one or state seven, the next action you take terminates the process. I don't know. Maybe it falls off a cliff or something like that. But whenever you get to S_7 or S_1, then on the next action you take you get whatever reward. You either get the one or you get the 10, and then your process terminates. So, let's imagine a trajectory under this policy would be: you start in S_3. You go to action- take action A_1, you get a reward of zero. This is the reward. Then you transition to state S_2, you take an action of A_1, you get a zero. You stay in the same state. So, you stay in S_2 again. Take action A_1, you get another reward of zero and then you reach state S_1, take an action A_1, you get a 1 and then it terminates. So, this is one experience of your Mars rover's life. So, in this case, how about we just take a minute or two, feel free to talk to a neighbor, and compute what is the first visit Monte Carlo estimate of the value of each state and what is the every visit Monte Carlo estimate of state S_2? I put the algorithm for both first visit and every visit above; it just depends on whether you update the state only once for this episode or whether you can potentially update it multiple times. [NOISE] You may ask the question too: if we have not seen a state yet, what is the value we use? So, for the value you can say that you initialize V_Pi of S equal to zero for all S if you haven't seen it yet. [NOISE] All right. Raise your hand if you'd like a little bit more time, otherwise we'll go ahead. Okay. So what- someone wanna share what they and maybe somebody nearby them thought was the first visit Monte Carlo estimate of V for every state? I think the first visit estimate is really one for every single state except for the last ones. Which states? We've only updated a few of them so far. Why don't you give me the full vector. Like, okay, we'll just start here. So, V of S_1 is what? Is one. Okay. And V of S_2? Is also one. And V of S_3? Is also one. And V of S_4? Also one. [NOISE]. Anybody disagree? Zero. Zero. Okay, and V of S_5? Zero. And V of S_6? Zero. And V of S_7? [OVERLAPPING] Yeah. So, we only get to update, in this one, the states we've actually visited. Okay. So, here it's one, one, one. Zero, zero, zero, zero. Now, what about the every visit Monte Carlo estimate of just S_2? So, I picked only S_2 'cause that's the only state we visit twice. What's its, what's its estimate? Well, we increment. Yeah. Is it still gonna be one? Yeah, yes it is, and why? Because we have N of S is two at the end of it, but the total return G of S is also two. So, we increment both twice. Exactly. So, the return from both times when you started in S_2, added up till the end of the episode, was one in both cases.
So, it was one twice and then you average over that so it's still one. Yeah. Is the reason that they're all one because gamma's one? 'Cause like shouldn't there be some gamma terms in there. Oh, good question. So, here we've assumed gamma equals one, otherwise there would be- there'd be a gamma multiplied into some of those two. Yeah, good question. I chosen gamma equal to one just to make the math a little bit easier. Otherwise, it'd be a gamma factor tpo. Okay great. So, you know, the, the second question is a little bit of a red herring because in this case it's exactly the same. But if the return had been different from S_2, um, like let's say there was a penalty for being in a state, then they could have had different returns and then we would have gotten something different there. Okay. So, Monte Carlo in this case updated- we had to wait till the end of the episode, but when we updated it till the end of the episode, we updated S_3, S_2, and S_1. So, what is Monte Carlo doing when we think about how we're averaging over possible futures. So, what Monte Carlo is doing, um, I've put this sort of incremental version here which you could use for non-stationary cases but you can think of it in the other way too. Um, so, and remember if you want this just to be equal to every visit, you're just plugging in 1 over N of S here for alpha. So, this is what Monte Carlo Evaluation is doing is it's just averaging over these returns. So, what we're doing is if we think about sort of what our tree is doing, in our case our tree is gonna be finite. We're gonna assume that each of these sort of branches eventually terminate. They have to because we can only evaluate a return once we reach it. So, at some point like here when we got to state S_1 or S_7 in our Mars example, the process terminated. And so what does Monte Carlo policy evaluation do? It approximates averaging over all possible futures by summing up one, uh, trajectory through the tree. So, it samples the return all the way down till it gets to a terminal state. It adds up all of the rewards along the way. So, like reward, reward, reward. Well, I'll be more careful than that. Reward, reward. Here you get a reward for each state action pair. So, you sum up all the rewards in this case. Um, and that is its sample, um, of the value. So, notice it's not doing any, um, er, the way it's gonna get into the expectation over states, is by averaging and across trajectories. It's not explicitly looking at the probability of next state given S and A and it's not bootstrapping. It is only able to update, when you get all the way out and see the full return. So, so, this is it samples. It doesn't use an explicit representation of a dynamics model, and it does not bootstrap because there's no notion of VK minus 1 here. It's only summing up a- all of the returns. Questions? Scotty. [inaudible] policy evaluation like this would do a very poor job in rare occurrences? Well, it's interesting. Question is, is it fair to say that this would do a really bad job in very rare occurrences? It's intriguing. They're very high variance estimators. So if you're- Monte Carlo, in general, you essentially just like rolling out futures, right? And often you need a lot of possible futures until you can get a good expectation. On the other hand, for things like AlphaGo which is one of the algorithms that was used to solve the board game Go, they use Monte Carlo. 
So, you know, I think, um, you wanna be careful in how you're doing some of this roll out when you start to get into control. And when you start to- because then you get to pick the actions, um, and you often kind of want to play between, but it- it's not horrible even if there's rare events. Um, er, but if you have other information you can use, it's often good. It depends w-what your other options are. So, generally this is a pretty high variance estimator. You can require a lot of data, and it requires an episodic setting because you can't do this if you're acting forever because there is no way to terminate. So, you have to be able to tell processes to terminate. So, in the DP Policy Evaluation we had the gamma factors, because we wanted to take care of the cases where state were seen in-between that started with a probability equals to one. But in this case, um, if we had such a case that would never terminate, right, because the episode would never end. So, technically, do we still need a gamma factor to evaluate policy equation, uh, policy evaluation on? The question was about, do we still need a gamma factor in these cases, and what about cases where you could have self-loops or small loops in your process? So, um, this G in general can, you know, can use a gamma factor. So, this can include a gamma when you compute those. You're right, that if the process is known to terminate, you don't have to have a gamma less than one because your reward can't be infinity because your process will always terminate. Um, this could not handle cases where there's some probability it will terminate. So, if there is a self-loop inside of- or a small loop inside of your process, such that you could go round it forever and never terminate, you can't do Monte Carlo, and having a good discount there won't help. There are physical reasons why you might have a gamma models like that, which is great, say you model the fuel cost or something, or something would interact, would that be reasonable? The question is whether or not there might be a physical reason for gamma like fuel costs or things like that. I mean, I think normally I would put that into the reward function. Good. So, if you have something like- you can have it. So, I keep thinking about cases where basically you want to get to a goal as quickly as possible, um, and you want to sort of do a stochastic shortest paths type problem. Um, I think generally there I would probably rather pick making it a terminal state and then having like a negative one cost if you really have a notion of how much fuel costs. Um, but you can also use it as a proxy to try to encourage quick progress towards a goal. The challenge is that how you set it is often pretty subtle because if you set it too high you can get weird behavior where your agent has sort of effectively like too scared to do anything, it will stay at really safe areas. Um, and if it's too high in some cases, if it's possible to get sort of trivial reward, your agent can be misled by that. So, it's often a little bit tricky to set in real-world cases. Okay. So, they're high variance estimators that require these episodic settings, um, and, um, there's no bootstrapping. And generally, they converge to the true value under some, uh, generally mild assumptions. We're gonna talk about important sampling at the end of class if we have time. Otherwise, we'll probably end up pushing that towards later. That's for what how we do this if you have off policy data, data that's collected from another policy. 
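[Editor's aside: before the lecture moves on to temporal difference learning, a quick check of the Mars rover exercise above, reusing the first_visit_mc sketch from earlier; the every-visit average for S2 also comes out to one, since both visits see a return of one.]

```python
# One episode of the Mars rover example with gamma = 1:
# (S3, r=0), (S2, r=0), (S2, r=0), (S1, r=+1), then the process terminates.
episode = [("S3", 0.0), ("S2", 0.0), ("S2", 0.0), ("S1", 1.0)]

V = first_visit_mc([episode], gamma=1.0)
print(V)  # {'S3': 1.0, 'S2': 1.0, 'S1': 1.0}; states never visited stay at 0
```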
Okay. Now let's talk about temporal difference learning. So, if you look at Sutton and Barto, um, and if you talk to Rich Sutton or, ah, a number of other people that are very influential in the field, they would probably argue that the central, um, contribution of reinforcement learning, the contribution that makes it different perhaps from some other ways of thinking about adaptive control, is the notion of temporal difference learning. And essentially, it's going to just combine Monte Carlo estimates and dynamic programming methods. And it's model-free. We're not going to explicitly compute a dynamics model or reward model or an estimator of that from data, and it both bootstraps and samples. So, remember, dynamic programming as we've defined it so far, um, it bootstraps, er, and the way we have thought about it so far you actually have access to the real dynamics model and the real reward model, but it bootstraps by using that V_k minus one. Monte Carlo estimators do not bootstrap. They go all the way out to the end of the trajectory and sum up the rewards, but they sample to approximate the expectation. So, bootstrapping is used to approximate the future discounted sum of rewards. Sampling is often done to approximate your expectation over states. The nice thing about temporal difference learning is you can do it in episodic processes or continual processes. And the other nice aspect about it is that you don't have to wait till the end of the, uh, the episode to update. So as soon as you get a new observation, taking, ah, starting in a state, taking an action and going to a next state and getting some reward, um, you can immediately update your value. And this can be really useful because you can kind of immediately start to use that knowledge. Okay. So, what are we gonna do in temporal difference learning? Again, our aim is to compute our estimate of V_pi. And we still have the same definition of return, um, and we're gonna remind ourselves of the Bellman operator. So, if we know our MDP models, our Bellman operator said we're gonna get our immediate reward plus our discounted sum of future rewards. And in incremental every visit Monte Carlo, what we're doing is we're updating our estimate using one sample of the return. So, this is where we said our new estimate of the value is equal to our old estimate plus alpha times the return we just saw minus V. But this is where we had to wait till the end of the episode to do that update. What the insight of temporal difference learning is, well, why don't we just use our old estimator of V_pi for that state, and then you don't have to wait till the end of the episode. So, instead of using G_i there, you use the reward you just saw plus gamma times the value of your next state. So, you bootstrap. Say I'm not going to wait till I get a whole episode; I started in my state, I got a reward, I went to some next state. What is the value of that next state? I don't know. I'll go look it up in my estimator and I'll plug that in and I'll treat that as, uh, as an estimate of the return. So, the simplest TD learning algorithm is exactly that, where you just take your immediate reward plus your discounted expected future value, where you plug that in for the state that you actually reached. Now, notice that this is sampling. There is no- normally we would have like that nice sum.
In the Bellman operator we would normally have a sum over s prime of the probability of s prime given s, a, times V_pi of s prime. We don't have that here. We're only given a single next state. And we're plugging that in as our estimator. So we're still going to be doing sampling to approximate that expectation. But just like dynamic programming we're going to bootstrap, because we're gonna be using our previous estimate of V_pi. Could we also write this with, like, a sub k and sub k minus one to show, like, the iterations? Yeah. I might have that down there if you want to see. No, I don't in this case. You could also write this with- um, the question is, if we want to just be clear about what is happening in terms of iterations. You can also think of this as V of k plus one and this is V of k, for example; you're updating this over time. The thing is that you're doing this for every single state, compared to dynamic programming, where you do this sort of for all states- so you have sort of a consistent V_k and then you're updating. Here we can think of there as just being a value function and you're just sort of updating one entry of that value function depending on which state you just reached. So there's not kind of this nice notion of the whole previous value function at any iteration. I'll keep that there just for that reason. Now, people often talk about the TD error, the temporal difference error. What that is, is it's comparing what is your estimate here. So, your new estimate, which is your immediate reward plus gamma times your value of the state you actually reached, minus your current estimate of your value. Now, notice this one should have been sort of essentially approximating the expectation over s prime. Because for that one you're going to be averaging. And so this looks at the temporal difference. So this is saying how different is your immediate reward plus gamma times your value of your next state, versus your sort of current estimate of the value of your current state. Now note that that doesn't have to go to zero, because that first thing is only ever just a sample, it's one future. The only time this would be guaranteed to go to zero is if this is deterministic, so there's only one next state. So, you know, if half the time when I try to drive to the airport I hit traffic and half the time I don't, then that's sort of two different next states that I could go to for my current start state, either hit traffic or don't hit traffic. Um, and so I'm either going to be getting that V_pi of hitting traffic or V_pi of not hitting traffic. So this TD error will not necessarily go to zero even with infinite data, because one is an expected thing from the current state and the other is which actual next state did you reach. So, the nice thing is that you can immediately update this value estimate after your state, action, reward, s prime tuple and you don't need episodic settings. Yeah, Scotty? Does that affect convergence if you keep alpha constant? Yes, good question. Does this affect convergence if you keep alpha constant? Yes, and you normally have to have some mild assumptions on decaying alpha. So, things like one over T is normally sufficient to ensure these estimates converge. Yeah, question? Um, can you say anything about the bias of this estimator? Yeah. The question was a good one: can you say anything about the bias of this estimator? Do we have a sense of whether this is going to be a biased estimator?
What about your previous- do we have a sense of whether it's going to be biased? Well, think back to dynamic programming: was V_k minus one, um, an unbiased estimator of the infinite horizon value? Like, let's say k is equal to two and we want the infinite horizon value. No matter how you've done those updates, it's not going to be equal. Generally, when you bootstrap, um, it's going to be a biased estimator, because you're relying on your previous estimator, which is generally wrong. [LAUGHTER]. So, that's going to be biasing you in one particular direction. So, it's definitely a biased estimator. Um, it also can have fairly high variance. [LAUGHTER]. So, it can both be high-variance and be biased. But on the other hand, you can update it really, really quickly. Um, you don't have to wait till the end of the episode and you can use a lot of information. So, it's generally much less high-variance than, um, Monte Carlo estimates, because you're bootstrapping and that sort of helps average over a lot of your variability. [inaudible] Now, the question is whether or not it's a function of the initialization. It's not. It's a, it's a function of the different properties of the estimators; you could initialize differently. Um, the, the bootstrapping is because you're using a- by bootstrapping and using this V_Pi as a proxy for your real expected discounted sum of returns, um, unless this is the true value, it's just going to bias you. Note that this, um, this doesn't- you don't get bias in dynamic programming when you know the models, because that V_Pi, when you bootstrap, it's actually V_Pi. This is actually the real value. So, the problem here is that it's an approximation of the real value, and that's why it's biasing you. So bootstrapping is fine if you know the real dynamics model and the real reward function and you computed the V_Pi of k minus one exactly, um, but it's not okay here because we're introducing bias. So, how does TD zero learning work? Um, I say zero here because there's sort of some interesting, um, in-between between TD learning and Monte Carlo learning, where instead of doing an immediate reward plus the discounted sum of future rewards versus summing all of the rewards, you can imagine continuums between these two where you maybe sum up the first two rewards and then bootstrap. [NOISE]. So, there's, um, there's a continuum of models, there's a continuum of algorithms between just taking your immediate reward and then bootstrapping versus never bootstrapping. Um, but we're just gonna talk right now about taking your immediate reward and then bootstrapping. So TD learning works as follows: you have to pick a particular alpha, um, which can be a function of the time-step. Um, you initialize your value function, you sample a state, action, reward, next state tuple. Now in this case, because we're doing policy evaluation, the action will be equal to Pi of s_t, and then you update your value. Okay. So let's look, um, again at that example we had before. So we said that for first visit Monte Carlo, you will get 1, 1, 1, 0, 0, 0, 0, and for every visit it would also be one. What is the TD estimate of all states at the end of this episode? So, notice what we're doing here. We loop, we sample a tuple, we update the value of the state at the start of that tuple. We get another tuple, we sample it. So, what would that look like in this case? We would start off and we'd have (S3, A1, 0, S2). Then you'd have (S2, A1, 0, S2), then (S2, A1, 0, S1), then (S1, A1, +1, terminate).
So, why don't you spend a minute and, and think about what the value would be under TD learning, and what implications this might have too. [NOISE]. Does anybody wanna say what the value is, that you get? [NOISE]. Yeah. Uh, one followed by all zeros. That's right. Okay. One followed by all zeros. So, we only updated the final state in this case. I also just wanted to- yeah, question. Um, explain why that happens. Yeah, because, um, what we are doing in this case is that we get a data point so what- we're in a state, we take an action, we get a reward, we get next state. We update the value only for that state. So what we did here is we got S3, we update it, we did action A1, we got a zero S2. So our new value for S3 was also equal to zero. Then we went to S2, we took action A1, we got a zero, we went to S2, we got- so we updated S2, it was also zero. We did that again. We finally got to state S1 and we got a one. So, the thing about this that can be beneficial or not beneficial is you throw away your data in the most naive format. You have a SAR S-prime tuple and then it goes away again. You don't keep it. So when you finally see that reward, you don't back up, you don't propagate that information backwards. So what Monte Carlo did is, it waited until he got all the way there and then it computed the return for every state along the episode which meant that that's why we got 1111. But here you don't do that. By the time you get to, um, [NOISE] S1, you've thrown away the fact that you were ever in S3 or S2 and then you, you don't update those states. I mean total reward is proportional to the number of samples you need to get a good estimate of value function? Say that again. Ah, I'm assuming that like the longer it takes for you to get a rewards, the more samples, you'd need to like properly estimate, uh, value of the function. Question out [inaudible] is sort of, you know, how long does it take you to get a reward and how many samples do you need to get a good estimate of the value function? Um, you mean for all states? It's a little nuanced. Um, it depends on the transition dynamics as well. Um, you couldn't- say for a particular, like how, how many, um, samples you need for a particular state to get a good estimate of its reward? Let's say your rewards are stochastic. But in terms of how long it takes you to propagate this information back, it depends on the dynamics. Um, so in this case, you know, if you had exactly the same trajectory and you did it again, then you'd end up updating that S2 and then if you got that same trajectory again, then you would propagate that information back again to S2 and then one more time and then you get it back to S3. I should S3 and there's the third one. So, you can slowly this- propagate this information back, um, but you don't get to kind of immediately like what Monte Carlo would do. Question. I was wondering if you could highlight the differences between this and the Q learning that we talked about last time? Because they seem like kind of similar ideas. That's great. So, exactly right. In fact, TD learning and Q learning are really close analogs. Q learning is, um, when you're gonna do control. So, we're going to look at actions. TD learning is basically Q learning where you're fixing in the policy, Yeah. Next question back there. Like you're actually like implement this so you would you would keep looping right, and updating or you just run through, uh, rewards? It depends. So, um, it depends who asked you. 
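[Editor's aside: a minimal sketch of the TD(0) update just worked through, applied once to the same Mars rover episode; it reproduces the "one followed by all zeros" answer from the exercise. The alpha of 1 is chosen only to match that exercise, and the names are illustrative.]

```python
def td0_update(V, s, r, s_next, alpha=1.0, gamma=1.0, terminal=False):
    """V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)); the term alpha multiplies is the TD error."""
    target = r + (0.0 if terminal else gamma * V.get(s_next, 0.0))
    V[s] = V.get(s, 0.0) + alpha * (target - V.get(s, 0.0))

V = {}
td0_update(V, "S3", 0.0, "S2")                  # V(S3) stays 0
td0_update(V, "S2", 0.0, "S2")                  # V(S2) stays 0
td0_update(V, "S2", 0.0, "S1")                  # V(S2) stays 0
td0_update(V, "S1", 1.0, None, terminal=True)   # V(S1) becomes 1
print(V)  # {'S3': 0.0, 'S2': 0.0, 'S1': 1.0} -- one followed by zeros
```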
So if you're really, really compare- concerned about memory, um, you just drop data, so then you're on [inaudible]. If, um, in a lot of the existing sort of deep learning methods, you maintain a sort of a, a episodic replay buffer and then you would re-sample samples from that and then you would do this update for the samples from there. So you could revisit sort of past stuff and use it to update your value function. Um, you could also- it can, it can matter the order in which you do that. So in this case, you could do a pass through your data and then do it- another pass or maybe go backwards from the end. [inaudible] it will end up propagating. Some alpha back to S_2 there. Yeah. So, you just go into like convergence or- We'll talk about that very shortly. Yes. That's a great question. Like so what happens is you do this for convergence and we'll talk about that in a second. Yeah. So, just so I make sure I understand. So, when we talk about sampling of tuple, what's really happening is you're going to a trajectory and you're iterating through the SAR, the SAR tuples in that trajectory in order. Right. But we're thinking of this really as acting as- to repeat the question. The question is like we're going through this trajectory we're updating in terms of tuples. Yes, but we're really thinking about this as like your agent being in some state taking an action getting reward again and getting to a next state. So, there doesn't exist a full trajectory. It just like I'm driving my car, what's gonna happen to me in the next like two minutes? So, I don't have the full trajectory and that I'm iterating through it. It's like this is after every single time step inside of that trajectory, I update. So, I don't have to wait till I have the full trajectory. Right and, and I guess I'll just the order in which those tuples are chosen. I- I'm guessing it matters or with the values that you're getting and estimates. Yes. So, the question is like, you know, the order in which you receive tuples, that absolutely affects your value. Um, so in, uh, if you're getting this in terms of how you experience this in the world, it's just the order you experience these in the world. So, this S_t plus prime- T plus one prime becomes your ST on the next time-step. So, these aren't being sampled from a trajectory. It's like that's just wherever you are now. Um, if you have access to batch data, then you can choose which ones to pick and it absolutely affects your convergence. The problem is you don't have to know which ones to pick in advance. Questions. The other thing I just want to mention there is it's a little bit subtle, um, that if you set alpha equal to, like, you know, 1 over T or things like that, you can be guaranteed to, um, for these things to converge. Uh, sometimes if alpha is really small, um, also these are going to be guaranteed to converge under minor conditions. Um, but if you said something like alpha equals one, it can definitely oscillate. Alpha equals one means that essentially, you're ignoring your previous estimate, right? So, if you set alpha equal to one then you're just using your TD target. All right. Okay. So, what is temporal policy difference policy evaluation doing if we think about it in terms of this diagram and thinking about us as taking an expectation over futures. So, it's, um, this is the equation for it up there. And what it does is it updates its value estimate by using a sample of S_t plus 1 to approximate that expected next state distribution or next future distribution. 
And then it bootstraps because it plugs in your previous estimate of V_pi for this plus 1. So, that's why it's a hybrid between dynamic programming because it bootstraps and Monte Carlo because it doesn't do an explicit expectation over all the next states, just samples one. Okay. So, now why don't we think about some of these things that, like, allow us to compare between these different algorithms and their strengths and weaknesses and it sometimes depends on the application. Um, uh, you've had to pick which one is most popular, probably TD learning is the most popular but it depends on the domain. It depends on, um, whether you're constrained by data or, um, you know, computation or memory et cetera. All right. So, um, why don't we spend a few minutes on this briefly. So, let us spend a minute and think about which of these properties from what you remember so far apply to these three algorithms. So, whether they're usable when you have no models of the current domain, um, whether they handle continuing non-episodic domains, they can handle non-Markovian domains. They converge to the true value in the limit. We're assuming everything's tabular right now, we're not in function approximation land. And whether or not you think they give you an unbiased estimate of the value. So, if at any time point if you were to take your estimator if it's unbiased. So, why don't you would just spend a minute see if you can fill in this table. Feel free to talk to someone next to you and then we'll step through it. [NOISE] All right which of these are usable when you have no models of the current domain? [NOISE] Does dynamic programming need a model of the current domain? Yes. Yes. Okay. What about Monte Carlo? Usable. Usable. What about TD? Usable. Usable. Yeah. Do either of those, TD is known as what? As a model free algorithm, doesn't need an explicit notion. It relies on sampling of the next state from the real world. [NOISE] Which of these can be used for continuing non-episodic domains? So, like, your process might not terminate, ever. Okay. Well, can TD learning be used? Yes. Yes. Can Monte Carlo be used? No. No. Can DP be used? Yes. Yes. Okay. Which of these, um, does DP require Markovian? Yes. It does. Which- does Monte Carlo require Markovian? No. Does TD require Markovian? Yeah, it does. So, um, uh, temporal difference and dynamic programming rely on the fact that your value of the current state does not depend on the history. So, however you got to the current state, it ignores that, um, and then it uses that when it bootstraps too, it assumes that doesn't- so, Monte Carlo just adds up your return from wherever you are at now till the end of the episode. And note that depending on when you got to that particular state, your return may be different and it might depend on the history. So, Monte Carlo doesn't rely on the world being Markov. Um, you can use it in partially observable environments. TD assumes that the world is Markovian, so does dynamic programming in the ways we've defined it so far. So, you bootstrap you say, um, for this current state my prediction of the future value only depends on this current state. So, I can say I get my immediate reward plus whatever state I transition to. But that's sort of a sufficient statistic of the history and I can plug-in my bootstrap estimator. So, it relies on the Markovian assumption. What about non-Markovian domain where do we apply it? Um, the question is well, what do you mean by non-Markovian? 
Like, these are algorithms you could apply them. Um, so yeah. You can apply these algorithms to anything. The question is whether or not they're guaranteed to converge in the limit to the right value. And they're not, if the world is not Markovian and they don't. Like [LAUGHTER] we've seen in some of our work on intelligent tutoring systems, earlier on we were using some data, um, from a fractions tutor and we're applying Markovian techniques and they don't converge. I mean, they converge to something that's just totally wrong and it doesn't matter how much data you have because you're- you're using methods that rely on assumption that is incorrect. So, you need to be able to evaluate whether they're not Markovian or try to bound the bias or do something. Um, otherwise your estimators of what the value is of a policy can just be wrong even in the limit of infinite data. Um, what about converging to the true value in the limit? Let's assume we're in the Markovian case again. So, for Markovian domains, does, um, DP converge to the true value in the limit? Yes. What about Monte Carlo? Yes. Yes. What about TD? Yes. Yes. They certainly do. The world is really Markovian, um, everything converges. Asymptotically no under minor assumptions, all of these require minor assumptions. Um, uh, so under minor assumptions it will converge to the true value of the limit, depends on, like, the alpha value. Um, uh, what about being an unbiased estimate of the value, is Monte Carlo an unbiased estimator? Yes. Yes. Okay. TD is not. DP is a little bit weird. It's a little bit not quite fair question there. DP is always giving you the exact VK minus one value for that policy. So, that is perfect, that's the exact value. If you have K1- K minus 1 more time steps to act, that is not going to be the same as the infinite horizon value function. Yeah. Can you explain how the last two lines are different. Like I don't understand the difference between unbiased estimator of value and something that converges to the true value of order. Your question's great. So, the question is what's the difference between something being unbiased and consistent? Um, so when we say converges to the true value in limit, that's also known as formally being a consistent estimator. So, the unbiased estimate of the value means, if you have a finite amount of data and you compute your statistic, in this case the value, and you compare it to the true value, then on average, that difference will be zero and that is not true for things like TD. But, um, and that can be- that's being evaluated for finite amounts of data. What consistency says if you have an infinite amount of data, will you get to the true value? So, what that implies is that, say for TD, that asymptotically the bias has to go to zero. If you have infinite amounts of data, eventually its bias will be zero. But for small amount, you know, for finite amounts of data and really, you know, you don't know what that N is. Uh, it could be a biased estimator but as the amount of data you have that goes to infinity then it has to converge to the true value. So, you can have things that are biased estimators that are consistent. Yeah. For Monte Carlo, I thought you said that the implementation has an impact on whether or not it's biased is- I thought you said if it's every visit then it is unbiased [OVERLAPPING] Good question. So, I, um, the question is good. So, this is, um, it's an unbiased estimate for the, um, first visit. And for every visit, it's biased. Great. Question? 
Um, this might be a dumb, uh, a dumb question but are there any, uh, you know, model free policy evaluations models that aren't actually convergent? Yes. Question was, are there any model free policy evaluation methods that are not convergent? Yes, and we will see them a lot when we get into function approximation. When you start- so right now we're in the tabular case which means we can write down as a table or as a vector what a value is. We move up to infinite state spaces. Um, a lot of the methods are not even guaranteed to converge to anything [LAUGHTER] Not even- we're not even talking about whether they converge to the right value, they're not even guaranteed to stop oscillating. And they can just keep changing. Okay. Yeah. Question. So, is there any specific explanation why TD is not unbiased? Is- is what? Why TD is not unbiased? Why it's not unbiased? Yeah. Great question. So, the question was to say, you know, why is TD biased. TD is biased because you're plugging in this estimator of the value of the next state, that is wrong. And that's generally going to leave to- lead to some bias. You're plugging in an estimator that is not the true V pi for S prime. It's going to lead to a bit of bias. So, it's really the-the bootstrapping part that's the problem. The Monte Carlo was also sampling the expectation and it's unbiased, at least in the first visit case. Problem here is that, um, you're plugging in unexpected discounted sum of rewards that is wrong. All right. So, um, that just summarizes those there. I think the important properties to think- to compare between them. Um, how would you pick between these two algorithms. So, I think thinking about bias and variance characteristics is important. Um, data efficiency is also important as well as computational efficiency, and there's going to be trade offs between these. Um, so if we think about sort of the general bias-variance of these different forms of algorithms, Um, Monte Carlo is unbiased, generally high-variance, um, and it's consistent. TD has some bias, much lower variance than Monte Carlo. TD zero converges to the true value with tabular representations, and as I was saying it does not always converge once we get into function approximation and um, we'll see more about that shortly. I think with the last few minutes, we won't have a chance to get through, um, little bit about how these methods are related to each other when we start to think about the batch setting. So as we saw in this particular case for the Mars Rover just again contrast them. Um, Monte Carlo estimate waited till the end of the episode and then updated every state that was visited in that episode. TD only used each data point once, and so it only ended up changing the value of the final state in this case. So what if- happens if we want to go over our data more than once. So if they we're willing to spend a little bit more computation, so we can actually get better estimates and be more sample efficient. Meaning that we want to use our data more efficiently so that we can get a better estimate. So often we call this batch or offline, um, mal- policy evaluation where we've got some data and we're willing to go through it as much as we can in order to try to get an estimate of the policy that was used to gather that data. So let's imagine that we have a set of k episodes, and we can repeatedly sample an episode. um, and then we either apply Monte Carlo or TD to the whole episode. What happens in this case? 
Um, so there's this nice example from Sutton and Barto. Um, let's say there's just two states. So there is states A and B, and Gamma is equal to one, and you have eight episodes of experience. So you either the first episode you saw A, 0, B, 0. So this is the reward. In B, you saw, in the sorry- in the, and then another set of episodes you just started in B, and you got one, and you observe that six times, and then in the eighth episode you started in B and you got a zero. So first of all, can we compute what V of B is in this case? So the, the model of the world is going to look something like this. A to B and the B sometimes goes to one, and B sometimes goes to zero and then we always terminate. So in all eight episodes we saw B. In six of those we got a one, in two of them we got a zero. So, if we were doing Monte Carlo, what would be our estimate of B, value of B. So we do a Monte Carlo estimate using these eight episodes and we can go over them as many times as we want. We don't just have to experience each episode once. This is the batch data set. Someone already collected this for us, can do Monte Carlo updates of this data set as much as you want. What will be the estimate of V of B in this case? [NOISE] which is just equal to six divided by eight. In the Monte Carlo estimate or do TD, what will be the TD estimate of B? Remember what TD does is they get this S-A-R-S prime and you bootstrap. You do Alpha times R plus Gamma V of S prime, and then do one minus Alpha of your previous estimate. What is the [inaudible]. Um, here in this case you can make Alpha anything small you're gonna do it in infinite number of times. So this is the batch data settings so for TD you're just going to run over your data like millions and millions of times. Until convergence basically. Somebody have any guesses of what V of B would be for TD. It's also three quarters. It's also three quarters. Okay, so for, for TD it's the same because whenever you're in B you always terminate. So it's really like just a one step problem from B, and so for TD it'll also say V of B is equal to six divided by eight which is equal to three quarters. So the two methods agree in the batch setting for this. If you can go for your data an infinite amount of time, V of B is equal to 3- 6 8ths since so is um, ah, under both methods. Um, does anybody know what V of A would be under Monte Carlo? Okay, yeah. V of A under Monte Carlo is going to be 0. Why? Yeah good. Because the only [NOISE] only trajectory where you have an A and I will be [inaudible] from here. okay there's only one trajectory we have an A and it got a zero reward. What do you think might happen with TD? Is it going to be 0 or non-zero? Its non-zero. Non-zero, why? Because you bootstrap the value from A. Could you bootstrap right so, so yes there's only time you're in A you happen to get zero in that trajectory, but this is- in TD you would say, well, you got immediate reward of zero plus Gamma times V of B, and V of B is three quarters. So here Gamma is equal to one. So your estimate of this under a TD would be three quarters. We don't converge to the same thing in this case. So why does this um, ah, so this is what we just went through and we can think about it in terms of these probabilities. Um, So what is- what's happening here? Monte-Carlo in the batch settings converges to the minimum mean squared error estimate. So it minimizes loss with respect to the observed returns. Um, and in this example V of A is equal to zero. 
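To make the numbers concrete, here is a rough sketch of running both estimators on those eight episodes (illustrative encoding; gamma is 1, "T" stands for termination, and with a fixed small step size the TD values settle near, rather than exactly at, three quarters):

```python
import random

# Eight episodes: one A -> B -> terminate with zero reward throughout,
# six B -> terminate with reward 1, and one B -> terminate with reward 0.
episodes = [[("A", 0, "B"), ("B", 0, "T")]] + [[("B", 1, "T")]] * 6 + [[("B", 0, "T")]]

# Batch Monte Carlo: average the observed return that followed each visit.
observed = {"A": [], "B": []}
for ep in episodes:
    rewards = [r for (_, r, _) in ep]
    for i, (s, _, _) in enumerate(ep):
        observed[s].append(sum(rewards[i:]))
V_mc = {s: sum(g) / len(g) for s, g in observed.items()}   # A: 0.0, B: 0.75

# Batch TD(0): sweep the same data over and over, bootstrapping off V.
V_td, alpha, gamma = {"A": 0.0, "B": 0.0, "T": 0.0}, 0.01, 1.0
for _ in range(20000):
    for s, r, s_next in random.choice(episodes):
        V_td[s] += alpha * (r + gamma * V_td[s_next] - V_td[s])
# V_td ends up near A: 0.75, B: 0.75, the maximum-likelihood-MDP answer.
```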
TD zero converges to the dynamic programming policy with a maximum likelihood estimates for the dynamics and the reward model. So it's equivalent to if you essentially just- just through counting you estimated P hat of S prime given S a. So for this would say the probability of going B given A is equal to 1. [inaudible] Because the only time you've been on A you've went to B and then the reward for B is equal to three quarters and the reward for A is equal to 0, and then you would do dynamic programming with that, and you would get out, get out the value. So, TD is converging to this sort of maximum likelihood MDP value estimation, and Monte Carlo is just converging to the mean squared error. It's ignoring- well it doesn't assume Markovian. So it's not using them this Markov structure. Question. Just to confirm on the previous slide um,if I'm going over data many times because for TD learning on the first iteration V of A would be zero, right? Because V of A has [inaudible] just assuming. But after a while V of B has converged to three quarters? [OVERLAPPING]. So, In, in, in the online setting, um, if you just saw this once, ah, then this- then V of A would be zero for that particular update. It just that if you did it many, many, many times then it would converge to this other thing. So you know which one is better? Well if your world is not Markovian you don't want to converge to something as if it's Markovian so Monte Carlo was better. But if you're world really is Markov, um, then you're getting a big benefit from TD here. Because it can leverage the Markov structure, and so even though you never got a reward from A, you can leverage the fact that you got lots of data about B and use that to get better estimates. Um, I encourage you to think about how you would compute these sort of models like it's called certainty equivalence. Where what you can do is take your data, compute your estimated dynamics and reward model, and then do dynamic programming with that, and that often can be much more data efficient than these other methods. Um, next time we'll start to talk a little bit about control. Thanks. |
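The certainty-equivalence approach mentioned at the end can be sketched directly: count transitions to build a maximum-likelihood model, then run iterative policy evaluation on that model. The code below is illustrative only; transitions are (state, reward, next_state) triples gathered under the policy being evaluated, and terminal or unseen states default to a value of zero.

```python
from collections import defaultdict

def certainty_equivalence_eval(transitions, gamma=1.0, sweeps=1000):
    counts = defaultdict(lambda: defaultdict(int))   # counts[s][s'] = N(s -> s')
    reward_sum = defaultdict(float)
    visits = defaultdict(int)
    for s, r, s_next in transitions:
        counts[s][s_next] += 1
        reward_sum[s] += r
        visits[s] += 1

    # Maximum-likelihood dynamics and expected immediate reward, by counting.
    P = {s: {sn: c / visits[s] for sn, c in nexts.items()} for s, nexts in counts.items()}
    R = {s: reward_sum[s] / visits[s] for s in visits}

    # Iterative policy evaluation (dynamic programming) on the estimated model.
    V = defaultdict(float)
    for _ in range(sweeps):
        for s in P:
            V[s] = R[s] + gamma * sum(p * V[sn] for sn, p in P[s].items())
    return dict(V)
```

On the two-state example above this reproduces the TD answer, V(A) = V(B) = 0.75, since batch TD(0) and dynamic programming on the maximum-likelihood model coincide.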
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_8_Problem_2_Bioenergetics_of_the_Electron_Transport_Chain.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Hi, everyone. Welcome to 5.07 Bio Chemistry Online. I'm Dr. Bogdan Fedeles. I'm going to help you work through some more biochemistry problems today. I have here question 2 of Problem Set 8. Now, this is the question I put together to get you thinking about the electron transport chain. As you know, the electron transport chain is a fundamental redox process through which we convert the chemical energy of the covalent bonds into an electrochemical gradient. This electrochemical gradient is like a battery, and it can be used inside the cell to generate, for example, ATP, which is the energy currency of the cell, or it can be dissipated to generate heat. We're going to see both of these modes in action in this problem. Now in most organisms, the electron transport chain helps to transfer electrons all the way to molecular oxygen. However, in this problem, we're dealing with an organism that lives deep inside the ocean where the atmospheric oxygen is not available. And it turns out this organism transfers its electrons to sulfate. Sulfate is the final electron acceptor. Part A of this problem asks us to write the order of the electron carriers as they would function in an electron transport chain for this organism. Now, for a number of redox processes, the problem provides a table with the electrochemical reducing potentials, as you see here. Now, I've selected the ones that are mentioned in the problem, and I put them into a smaller table here. As you can see, we're dealing with cytochrome A, B, C, C1. This is the flavin mononucleotide. This is the sulfate, the fine electron acceptor, and ubiquinol. Now, on this column here we have the redox potential, which are the electrochemical reduction potentials denoted by epsilon, or e0 prime. Now, e0, as you know from physical chemistry or physics, denotes the electrochemical potential in standard conditions. However, in biochemistry, we use the e0 prime notation to denote that the pH is taken into account, and it's not what you would expect, like of hydrogen ion's concentration equals 1 molar, but rather it's a pH of 7. The hydrogen ion's concentration equals 10 to the minus 7. So therefore, these numbers are adjusted to correspond to pH 7. The electrochemical potentials we see in this table are reduction potentials, and they tell us how easy it is to reduce a particular species. Therefore, the higher the number, the easier it is to reduce that particular species and the more energy the reduction of that species will generate. Therefore, the electron transport chain will go from the species that hardest to be reduce towards the species that are easiest to be reduced. Therefore, the order of the electron carriers will be from the ones that have the lowest reductive potential to the ones that have the highest reductive potential. So now if we're going to sort all these electron carriers in order of their potential, we're going to get the following order as you see here. 
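In code form, Part A is just a sort by reduction potential. In the sketch below, only the flavin value (-0.22 V, used for FADH2/FAD in Part B) and the sulfate value (+0.48 V) are taken from the problem; the remaining numbers are rough placeholders standing in for the problem's table, since the exact figures appear on the slide rather than in the transcript.

```python
# Electrons flow from the hardest-to-reduce couple (most negative E0')
# toward the easiest-to-reduce couple (most positive E0').
carriers = {
    "flavin (FADH2/FAD)": -0.22,   # from the problem
    "ubiquinol (CoQ)":     0.04,   # placeholder value
    "cytochrome b":        0.08,   # placeholder value
    "cytochrome c1":       0.22,   # placeholder value
    "cytochrome c":        0.25,   # placeholder value
    "cytochrome a":        0.29,   # placeholder value
    "sulfate":             0.48,   # from the problem
}

chain = sorted(carriers, key=carriers.get)
print(" -> ".join(chain))
```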
So the electrons are going to flow from the flavin into the coenzyme Q, and then the electrons are going to flow coenzyme Q to cytochrome B, and then Cytochrome C1, C, A, and sulfate. And as you can see, flavin has a negative reduction potential. It's like the hardest to be reduced. And the next one is ubiquinol. It's barely positive. And then the highest number is sulfate 0.48 volts. Now, let's take a closer look how the electrons are going to be transferred through this proposed electron transport chain. In the first reaction, here we have the flavin, I've written the flavin adenine dinucleotide, FADH2, the reduced version, is going to be converted to the oxidized FAD version of it. And in this redox reaction, we're going to use the coenzyme Q, the oxidized version and reduce it in the process. So the electrons get transferred from FADH2 to coenzyme Q. Now, in the next reaction, the reduced version of coenzyme Q is going to get oxidized back to coenzyme Q and in the process cytochrome B is going to go from its oxidized form to its reduced form. Now, this process continues with every single step, every single electron carrier up until we get to the sulfate where the reduced form of the cytochrome A will donate its electrons to the sulfate, and sulfate would get reduced to its reduced form. It's called sulfite. So if we were to draw how the electrons move through this chain, the electrons are going to start at FADH, and then they're going to be transferred to coenzyme Q in the reduced form. And then coenzyme Q is going to pass it to the cytochrome B. That's going to be in its reduced form. And then cytochrome B is going to pass it to cytochrome C1, and then cytochrome C, cytochrome A, and finally, they're going to end up in sulfite. Another thing to notice here is that except for the initial flavin and the final electron acceptor, sulfate, all the other intermediates get regenerated. So we go from the oxidized version to the reduced version and back to the oxidized version. So all these electron carriers are going to be sufficient only in catalytic amounts. So the only thing that gets consumed is the FADH2 and the sulfate. These are two reactants. And we get in this reaction FAD and sulfite. What we just said will help us segue into the Part B of the problem, which asks us to calculate how much energy do we get by converting one molecule of FADH2 and one molecule of sulfate into FAD and sulfite, respectively. Now as we pointed out here, only the FADH2 and sulfate are consumed in this reaction. All the other electron carriers are recycled and regenerated in the course of the electron transport chain. In order to calculate the energy, it's useful first to write the half reaction of the redox processes. Here are the two half reactions of this redox process. FADH2 gets oxidized through FAD and donates its two electrons. And the epsilon, or e0 prime is minus 0.22 volts. Now, this is the potential from the table, and that's a reduction potential. The equation as written is an oxidation, and therefore, the potential that we need to take into account is the minus of this one. Sulfate is then going to accept the two electrons and going to get reduced to the sulfite and water. And the electrochemical potential for this is 0.48 volts. So now when we add these two together, we get the overall process where FADH2 gets oxidized by sulfate to generate FAD and sulfite. And the electromotive force is just the mathematical sum of these two keeping in mind that this has to be taken as a negative sign. 
Because, again, as written, this is an oxidation and this the potential for the reduction reaction. So electromotive force is actually 0.7 volts. Now, we can easily convert from the electromotive force to a delta g0 prime value, and the relationship is written here, delta g0 prime. It's minus nF delta e0 prime and is the number of electrons in the process as we see here. Two, F is the Faraday's constant and delta e0 prime is going to be the electromotive force. And if we go through the number crunching, we get a delta g0 prime minus 135 kilojoules per mole. Notice because it's a negative number that means there's a spontaneous process as written. And as you know, the negative delta g will correspond to a positive electromotive force. Now, we're just one step away from calculating how much ATP we can produce with this energy. As you know, we generate ATP out of ADP and phosphate, and this is the reaction that's catalyzed by ATP synthase. And it takes about 30.5 kilojoules per mole to form ATP out of ADP and phosphate. Therefore, the 135 kilojoules per mole that we generated from 1 mole of FADH2, it's going to be enough for about 4 molecules of ATPs. This is in contrast, which was the normal processes that use oxygen as their final electron acceptor where out of one FADH2 molecule, will generate at most 2 molecules of ATP. So in some ways, sulfate is actually a better electron acceptor and can give us more energy. Part C of these problem deals with a culture of this microorganism in the lab. And we're adding to this culture dinitrophenol, a compound we're told has a pKa of about 5.2. So let's explore what happens to the electron transport chain of the organism when we add dinitrophenol. Here I put together a cartoon representation of the electron transport chain of our organism. So as you can see here, this is the extracellular environment. This is the outer membrane. This is the inner membrane where we have all these complexes I denoted here with these rectangles of the electron transport chain. And FADH2, for example, is going to donate its electrons. They're going to be passed along all the way to sulfate. And in the process, protons are going to get pumped into this intermembrane space. Now, these protons can be used in the ATP synthase as they travel back into the intercellular space. Their energy can be used to convert ADP and organophosphate to ATP as we just discussed in Part 2. Now, to this organism, we said we're going to add dinitrophenol. Here is the structure of dinitrophenol. And we're told the pKa of this proton, right here, the pKa is about 5.2. When this compound diffuses through the membrane, it's going to go through this intermembrane space, which has a very low pH and also in the intercellular space in the cytosol, which has a much higher pH. So because pKa 5.2, it's a relatively low, much lower than 7, pKa, in the intermembrane space where it's more acidic, it's going to be protonated. So we can write, for example, dinitrophenol OH in equilibrium with dinitrophenol O minus plus a proton. Now, because here we have a lot of protons, this equilibrium will be shifted to the left. That is the protonated form of dinitrophenol. However, here in the cytosol, the NPOH, it's going to be in the same equilibrium O minus plus H plus. But because the pH is fairly high, that is there are not a lot of protons, this equilibrium is going to be shifted to the right. This equilibrium is going to be shifted to the left. So now look what happens. 
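Before following the dinitrophenol protons across the membrane, here is a quick numeric check of the Part B arithmetic worked out above (constants as quoted in the problem; the 30.5 kJ/mol cost of making one ATP is the figure used in the discussion):

```python
n = 2                      # electrons transferred from FADH2 to sulfate
F = 96485.0                # Faraday constant, C/mol
delta_E = 0.48 - (-0.22)   # overall electromotive force: +0.70 V

delta_G = -n * F * delta_E / 1000.0     # in kJ/mol; about -135 kJ/mol
atp_equivalents = abs(delta_G) / 30.5   # roughly 4 ATP per FADH2 oxidized

print(f"delta G0' = {delta_G:.0f} kJ/mol, ~{atp_equivalents:.1f} ATP equivalents")
```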
So because this equilibrium has shifted to the left, it's going to keep soaking up a lot of these protons. Then the neutral dinitrophenol molecule is going to diffuse through the membrane as such and enter the intercellular space to cytosol where it's going to be deprotonated. The equilibrium is shifted to the right. So in effect, dinitrophenol is going to carry the protons from the intermembrane space inside the cell. Now it's going to do that in parallel with the protons that are going to be flowing through the ATP synthase to generate ATP. So in effect, we're discharging this battery where the concentration of protons is basically our electrochemical gradient. It's going to be discharging the battery without producing ATP. So as you know, if you short circuit a battery, the battery is going to heat up because you're discharging an electrochemical gradient. Similarly, dinitrophenol, by taking these protons from the intermembrane space and bringing them inside into the intercellular space, it's going to be generating heat. Therefore, we can answer Part C by saying that the medium in which these cells are growing is going to heat up when we add dinitrophenol to it. The processes described in this problem are fairly universal. Now, in eukaryotes, like more evolved organisms, they would happen in the mitochondria. Now, if you look back at this diagram, if this was the double membrane of the mitochondria, this would be the inside of the cell that contains the mitochondria, this would be the intermembrane space, and this will be the inside of the mitochondria or the mitochondrial matrix. Similarly, by adding a compound like dinitrophenol, who can dissipate the electrochemical gradient in the mitochondria and cause the cell to heat up. In fact, this process is actually used by a number of organisms to generate heat instead of chemical energy, or ATP. For example, the brown fat cells in newborns in mammals have a special protein that allows to dissipate this electrochemical gradient in the mitochondria to generate heat. Another good example is the seeds of many plants. When they germinate, they actually generate a lot of heat that can be used to melt the ice or the snow around them. That's why some of the plants can start growing even before the snow has melt in the early spring. I hope that working through this problem will help you understand better the inner workings of an electron transport chain and how it can convert the chemical energy of chemical bonds into an electrochemical gradient, which can then be used to generate high energy compounds like ATP. Or it can be dissipated to generate heat. |
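Looking back at the dinitrophenol argument, the fraction of DNP that carries a proton at each pH follows from the Henderson-Hasselbalch relation. In the sketch below the pKa of 5.2 comes from the problem, but the two pH values are illustrative stand-ins for an acidified intermembrane space and a near-neutral cytosol rather than numbers given in the problem.

```python
def fraction_protonated(pH, pKa=5.2):
    # [HA] / ([HA] + [A-]) for a weak acid, from pH = pKa + log10([A-]/[HA]).
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for pH in (5.0, 7.4):      # illustrative pH values only
    print(f"pH {pH}: {fraction_protonated(pH):.1%} of DNP is in the neutral, membrane-permeant form")
```

The more acidic compartment keeps a much larger share of DNP in the neutral form that can diffuse across the membrane, which is the proton-shuttling behavior described above.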
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_2_Problem_1_Primary_Structure.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Hi, and welcome to 5.07 Biochemistry Online, the biochemistry course on MIT OpenCourseWare. I'm Dr. Bogdan Fedeles. Let's metabolize some problems. Now today we're going to do a really nice problem. This is Problem 1 in Problem Set 2. Now, this is a problem about elucidating the primary structure of a protein. Problems like this one that we're about to discuss are a lot of fun because they're really biochemistry puzzles. We're given a number of pieces of data or clues, if you want, and then we'd have to use, not only our biochemical sense, but also deduction and elimination process in order to come up with a final answer. Well, in practice, elucidating the primary structure of a protein is now a largely automated process and utilizes high resolution mass spectrometry. Some of the traditional chemical methods that we're discussing here are still occasionally useful. But most importantly, the logical step-wise process through which we analyze and use each piece of data to construct a big picture result is really representative of the process by which we make discoveries in biochemistry. Before we begin, let me just say that this problem assumes familiarity with the structures and abbreviations of the 20 natural amino acids. Feel free to pause this video and review the relevant chapters in the book and the lecture notes before continuing. One important tool that we have for elucidating the primary structure of proteins is proteases. Proteases are enzymes that can hydrolyze the peptide bonds of a polypeptide chain. Now, proteases that can cleave in the middle of a polypeptide chain are also called endopeptidases. Now you notice a lot of these names that end in "ase" denote enzymes, and peptidase means an enzyme that acts on a peptide. It's the enzyme that hydrolyzes the peptide bond. Endo, in this case, refers to the fact that it acts in the middle of a polypeptide chain. Now, we're going to be learning in this problem about trypsin and chymotrypsin. These are proteases that cleave in the middle of a polypeptide chain. They are endopeptidases. One important feature of proteases is that they are specific. They don't just cut any which one peptide bond, but rather they recognize a specific sequence of amino acids. In the case of trypsin, for example, we are told that it cleaves adjacent to positively charged amino acid. As you know, positively charged amino acids would be lysine or arginine. So trypsin will always be cutting after arginine or lysine. Let's take a look. Now, here is a polypeptide chain with amino acid residues, R1, R2, R3, R4. Now, let's look at this peptide bond right here in the middle of the chain. Now let's say in order for trypsin to cut, to hydrolyze this peptide bond, it means that R2, the amino acid residue adjacent to it, should be a positively charged one. So if R2 is lysine or arginine, then this bond here becomes a good substrate for trypsin. And what it's going to do, it's going to use a water molecule-- it's going to put here trypsin-- and it's going to hydrolyze forming a carboxyl end and an amine end of this peptide bond. 
So we're getting this is a carboxyl end and this is the amine end of that original peptide bond. So what I want you to remember is that if we're having a reaction with trypsin on a polypeptide chain, then the carboxy end of the peptide that results, which is this particular amino acid, is going to be one of either lysine or arginine. So in other words in a trypsin digest, all the smaller peptides that we obtain are going to end in arginine or lysine except, of course, the very end of the chain, which might have a different amino acid at its carboxyl end. Now, this consideration we've just made about the trypsin digest, in fact, answers part 1 of the problem, which is asking what is common about all these peptides generated by a trypsin digest. And as we've just explained, all these peptides should be ending with a positively charged amino acid such as lysine or arginine. This problem also mentions another protease chymotrypsin, which is a protease that has a different specificity from trypsin. It actually cleaves after amino acids that are either very hydrophobic and large or aromatic. So let's write that down and remember. So if we were to do a digest with chymotrypsin, then our R2, the residue that's recognized by the protease, is going to have to be either aromatic, so phenylalanine, tyrosine, or tryptophan, or something that's large and hydrophobic such as leucine, isoleucine, or even valine sometimes. Put this in parentheses. So if any of these residues are at this position, R2, then this protease, chymotrypsin, is going to generate two peptides. And once again, the resulting peptide, R2 at the carboxy end is going to be one of these amino acids. That's going to be the signature of a chymotrypsin digest. Now, keeping these things in mind, we're ready to tackle the rest of the problem. Question 2 asks about the use of DTT, or dithiothreitol. Now, DTT is a commonly used reducing agent, which can reduce disulfide bridges in proteins. There are many reasons why we want to use DTT. For example, when proteins form disulfide bridges, you may shield certain amino acids from being accessible by proteases, and therefore, they're not going to be cleaved and we'll get a mixture of peptides. So it makes our analysis and our results very difficult. Now, disulfide bridges can also hold two peptides together that have no other covalent attachment between them. So in that case, we get one fragment instead of two fragments, and once again, it complicates our analysis of the protein. But more importantly, because we're typically purifying proteins in the air, in an oxygen atmosphere, proteins can acquire disulfide bonds, which weren't there in the beginning. So in that case, we can get very unusual results, unreproducible and artefactual. That's why using DTT can prevent formation of spurious disulfide bridges. And finally, DTT is also useful to tell if there were any disulfide bridges in the protein to begin with. Because if we're looking at the analysis before and after using DTT, we can tell if the results change, and that will tell us whether disulfide bridges were there to begin with. Now, let's take a look at the DTT chemistry. Here we have a disulfide bond or bridge in a protein, and we're going to treat this with DTT, which looks like this. This is DTT or dithiothreitol. Now, if we substitute the SH groups with OH's, you notice we're going to have four OH's and four carbons. That's just an alcohol derived from a sugar, which is called threose That's why the threitol part of the name. All right. 
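Before finishing the DTT chemistry, the protease cleavage rules described above can be captured in a few lines (a simplified sketch using one-letter amino acid codes; it ignores real-world exceptions such as a proline just after the cut site, and the example sequence is one of the two chains deduced later in this problem):

```python
def digest(sequence, cut_after):
    # Split a peptide string after every residue listed in cut_after.
    fragments, current = [], ""
    for residue in sequence:
        current += residue
        if residue in cut_after:
            fragments.append(current)
            current = ""
    if current:
        fragments.append(current)
    return fragments

example = "VWCFDK"
print(digest(example, cut_after="KR"))    # trypsin-style: ['VWCFDK'] (K is already the C terminus)
print(digest(example, cut_after="FWY"))   # chymotrypsin-style: ['VW', 'CF', 'DK']
```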
So when we do this chemical reaction, this disulfide bridge is going to transfer between the two sulfur atoms of DTT. They're going to form a intramolecular sulfide bond. So our cysteines are going to get reduced, and from DTT, we're going to get this intramolecular sulfide. And because entropical considerations, make the formation of the six-member ring very, very easy, then this equilibrium will typically shift to the right. Of course, we're also using vast quantities of DTT to make sure that the entire protein becomes reduced, and then our agent is going to pick up this [INAUDIBLE] off the bond. This sums up the question 2 of the problem. Next, we're going to look at distinguishing between a couple of different peptides that we generate during our analysis of our mystery protein. So we're told that we're isolating by HPLC peptides of the following composition. One has tryptophan, phenylalanine, valine, aspartate, lysine, cysteine. That's peptide one. Another one has phenylalanine, serine, cysteine, and an unknown amino acid. Finally, the third one has alanine and lysine. All right. Now, obviously, these peptides have different compositions, so if you could just put them through mass spectrometry, we're going to get different masses, so we can tell very quickly which one is which. But if we don't have mass spec available, we can also tell them apart only using UV-Vis spectroscopy. So all you need to remember is that an amino acid like tryptophan that we have here, W, has a very strong absorption in the 280 to 300 nanometers. Whereas, amino acids like phenylalanine, present here or here, they absorb only around 260 nanometers. Most of the other amino acids don't absorb in this range at all. So if we were to plot the UV-Vis spectra of these peptides, this is going to be our absorption, and this is going to be lambda in nanometers, the wavelength. So let's look in the range from 200 to 400. 300 is about here. 250 is about here. And let's label these. Let's say this peptide is red, this peptide is blue, and this one is green. All right. So now the red peptide, as I told you, contains both tryptophan and phenylalanine, so it's going to absorb both around 260 and both around 280 to 300. So it's going to have a pretty big hump in this area from 250 to 300. Now, the blue peptide only has phenylalanine, so above 280 or so, nanometer is going to drop off. So it's going to look more like this, whereas, the green one, well, it has neither phenylalanine or tryptophan, so below 240 or so, it's going to have no UV absorption at all. So it's going to drop off right here. So basically, if we're just looking, say, around 260 nanometers, we should see a stronger peak from the red peptide and a weaker peak from the blue one. But if we're going to look around 300, we should only see the red peptide. So by just using the UV-Vis absorption, we can tell these three peptides apart. Now we're ready to figure out the structure of our mystery protein. This is exactly what the last part of the problem is asking. So we're going to take each one of the clues provided-- A, B, C, and D-- and analyze and see what information we can derive from each one of them. The first piece of information that we're given is the result of a total hydrolysis of our peptide with six molar HCl. So we're told we get the following amino acid composition. We get two phenylalanines, one methionine, alanine, valine, two lysines, one serine, two cysteine, and one aspartate. 
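As a brief aside, the UV-Vis argument above reduces to a simple rule on peptide composition; here is a sketch with one-letter codes and the approximate wavelengths quoted in the discussion (the unknown residue is written as '?'):

```python
def uv_signature(composition):
    residues = set(composition)
    if "W" in residues:
        return "absorbs out to ~280-300 nm (tryptophan) as well as near 260 nm"
    if "F" in residues or "Y" in residues:
        return "absorbs near 260 nm only (phenylalanine/tyrosine)"
    return "little absorbance above ~240 nm"

for peptide in (["W", "F", "V", "D", "K", "C"], ["F", "S", "C", "?"], ["A", "K"]):
    print("".join(peptide), "->", uv_signature(peptide))
```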
Now, recall this is not a comprehensive list because the hydrolysis and six smaller HCl may destroy some of the amino acids. And specifically, we know for sure something like tryptophan, threonine, and tyrosine. They're going to be destroyed, and we're not going to see them here. So these amino acids may still be present in our protein, but we're not going to be able to see them in this situation. Now, let's analyze these amino acids and see what we can derive from this. So first of all, we have cysteines. So two cysteines, it means our protein can form disulfide bridges. So from the get go, two cysteine, it means we can form an internal disulfide bridge or our protein contains a disulfide bridge between two otherwise unconnected peptides. So here are some possibilities. For example, our peptide is one chain, and our cysteines are not actually connected by a disulfide bridge. Another possibility is we do have a disulfide bridge between them, like that. Or yet another possibility is that we have two pieces, two polypeptide chains, and the disulfide bridge is the only thing that's holding them together. Now, in each one of these cases, obviously, these polypeptide chains are oriented. So in one end, they're going to have the amine group. Let me draw it here, NH3. Here we're going to have two, and the other end is going to be the carboxyl group, COO minus. And once again, in this case, we're going to have two of them. All right. Now, how can we narrow down from these couple of possibilities? Well, the fact that we're getting two lysines here is a very important clue. Now remember, we're told that our peptide, this mystery protein, it actually comes as a result of a trypsin digest. And if you recall our discussion earlier, we said that upon trypsin digest, all the smaller peptides that were obtained will end in lysine or an arginine. The fact that we have two lysines here, it means both of them they need to be at the carboxy ends of peptides. Because if they were in the middle of a chain, the trypsin would have cut that chain in half. So two lysines means we've got to have two carboxy ends. So therefore, this seems to be the only possibility in which we can accommodate two lysines. Basically, the two carboxy ends that we see here, each one has to be a lysine. This possibility only has one carboxy end, so the other lysine that we have to place will not be at the end, so would not be compatible with a trypsin digest. Same applies for this case. So these possibilities are not consistent with our data. So we know already that our protein must look like this, two polypeptide chains, each one ends with lysine, and there must be a disulfide bridge. The second clue we're given is that the Edman degradation of the protein yields valine. Now, as you know, Edman degradation is a chemical reaction by which we can digest the protein from the N terminus, from the amino terminus of a polypeptide chain. Now as you recall, we have established in the first part that we have two amino termini in our protein. So the fact that we only get one amino acid and that is valine, it says that the other amino terminus might be blocked or somehow unavailable for the Edman degradation. So let's update the structure of our protein to take into account the second clue. So we said we have two polypeptide chains. There's a disulfide bond in between them. Now, each one of them ends with a lysine. This is a carboxy end, and now we know from the Edman degradation that amino end of one of them has to be valine. 
The other one-- we're going to put a box like that-- is a blocked end. So the amino terminus is not available. So this is as much as we can tell from these first two clues. We're given the products of the chymotrypsin digest of our protein after it was previously treated with DTT. So we know first DTT is going to cleave the disulfide bond, and then chymotrypsin, as we talked previously, is going to cut after large, hydrophobic, or aromatic amino acids. Now, we're told we're getting five smaller peptides. Let's take a look. Here are the five fragments. One is tryptophan, valine; one is cysteine, phenylalanine, another one is aspartate lysine, another one is methionine, alanine, cysteine, and lysine; and finally, the last one has serine, phenylalanine, and something that's not an amino acid. It's going to make it like a small x. All right. Now, from what we know about chymotrypsin digest, we should be able to orient these peptides basically to tell which amino acid is at the N terminus and which amino acid is at the C terminus of each one of them. Now the first one. Well, we know from the second clue that valine is at the N terminus, so that makes it pretty easy. Then the sequence has to be V-W. So valine is the N terminus, W is the C terminus, and as we said, tryptophan is one of the amino acids recognized by chymotrypsin, so it's going to end up at the carboxy terminus. C-F, that has to go C and F, phenylalanine, another chymotrypsin amino acid that's left of this carboxy terminus. D-K. Now, neither of these amino acids is recognized by chymotrypsin, but we remember from clue one that D-K must be the carboxy terminus. So the sequence can only be D-K. Now, here M, A, C, and K, none of these is actually on our recognition list for chymotrypsin, but once again, we know that K must be in the carboxy end. So for now we're going to have M, A, and C in a particular order, which we cannot establish just yet and K at the carboxy end. And finally, we do an S, F and x. Well, x is not even an amino acid, and F is an amino acid that's left of the carboxy terminus by chymotrypsin, so it's probably x, S, and F. Now, what about x? We're told that x is not an amino acid and is hydrophobic and it has a molecular weight of 256 Daltons. All right. So in order to figure out what x is, we have to read back the beginning of the problem, which tells us that we're looking at a protein that's associated with a plasma membrane. And one way for proteins to associate with a plasma membrane is to be modified, to incorporate a fatty acid, such as palmitate. So perhaps x, which is blocking the N terminus of this peptide is a fatty acid. Now, the general formula for fatty acids is something like this, CH3 CH2 repeated, say, n times, and then COOH. So let's see what would be the fatty acid that has the molecular weight 256 Daltons? Well, the mass of a methyl group is 15. The mass of this carboxylate is 45, so we have about 60 plus 14n equals 256 or 14n equals 196. Therefore, n is 14. So the fatty acid that would fit these criteria-- it's hydrophobic, it has a mass of 256-- will be CH3, CH2 14 times COOH, or the fatty acid was 16 carbons. This is palmitic acid. Now, the answer that we got, palmitic acid, was anticipated in the text of the problem because it gave us an example one way to associate proteins to the plasma membrane is to form a covalent linkage with a fatty acid such as palmitate. But how does a protein associate with a plasma membrane when it has a palmitic acid residue as part of it? 
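A quick check of the arithmetic above for the unknown hydrophobic group x, using the same approximate group masses quoted in the discussion (CH3 about 15 Da, COOH about 45 Da, CH2 about 14 Da):

```python
mass_total, mass_CH3, mass_COOH, mass_CH2 = 256, 15, 45, 14

n = (mass_total - mass_CH3 - mass_COOH) / mass_CH2   # number of CH2 groups
carbons = int(n) + 2                                  # plus the CH3 and COOH carbons

print(n, carbons)   # 14.0 methylenes -> a 16-carbon fatty acid, i.e. palmitic acid
```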
Well, let's take a look at a diagram. Here we have a representation of the plasma membrane, where we have the phospholipids like the hydrophilic head pointing inside and outside the cell and the hydrophobic tails of the fatty acids lined up to each other. Now, imagine we have a protein that's modified to contain one of these fatty acids. Then this fatty acid can just insert right next to the phospholipids of the plasma membrane. And that way it tethers the protein right at the plasma membrane. The final clue that we're given in order to figure out the structure of our mystery protein is the digest with an inorganic agent this time. It's called cyanogen bromide. Cyanogen bromide reacts quite specifically with methionines, and it cleaves the peptide bond after methionine leaving behind an unnatural amino acid called a homoserine lactone. Now, let's take a look at what happens when we treat our protein with cyanogen bromide. So we're told we're getting the following peptides, A, K and W, F, V, D, K, C and F, S, C, and an unnatural amino acid. As I just explained to you, the unnatural amino acid probably is this homoserine lactone. So it's most likely, instead of this amino acid in the actual sequence, we had a methionine. So we know these four residues-- F, S, C, and M-- go together. All right. Now, we can start putting together the clues from part 3 and 4 and try to figure out the final structure of our protein. So the peptide A, K, we know has to have K at the C terminus. Now, from part 3, we found out that there was a peptide that contained M, A, C, and K, after the chymotrypsin digest. So now we know K has to be the carboxy end and A has to be right before K, so in order to get an A-K peptide, then A has to be right after methionine because that's where cyanogen bromide is going to cleave the peptide bond. So the only sequence that we can have here is going to be C followed by M followed by A followed by K. So when we treat the cyanogen bromide, we're going to be cleaving this bond between M and A, generating M as a homoserine lactone. Now, we also know that M has to be in this small peptide, and we know it has to be C, M as a sequence. So therefore, S and F have to be on the amino terminus of this peptide. And we also have a clue from part 3, which said that S, F, and this non amino acid moiety were in the same peptide. So from here we said that, well, the sequence there, it's probably x modifying the amino S and then F with a carboxy end. So putting these three things together, we can come up with a sequence for this strand, which is going to be x, S, F, then C, M and then A and K. So this is probably one of the peptide chains in our mystery protein. Then, of course, the other is going to be composed of these amino acids. Now, we already know something about the sequence of these. For example, we know V goes before W. We also know D goes before K, and also know C goes before F. Now, we know this has to be the carboxy end of the peptide chain. Let's write it here, COO minus. And V has to be the amino end. So then C-F, there's no other way, has to be in the middle. So the only possible sequence for our second chain is going to be V, W, C, F, D, and K. Now that we've established the exact sequence of each one of these peptide chains, then we can put together the final structure of our mystery protein. So I'm going to transcribe these here, the first chain V, W, C, F, D, and K. 
And we know the cysteine is going to have our disulfide bridge to the other cysteine, which goes C, M, A, K, and then F, and then S. And now we know the N terminus of this peptide, we're going to have our fatty acid residue CH2 14 times CH3. So let's just mark, once again, the carboxy ends here and the amino end here. So this is the final answer for our problem and the structure of our mystery peptide. Well, that's it for this problem. I hope you enjoyed this little protein mystery hunt. Now, remember that the strategy that we used here in which we logically string together pieces of data to build a big picture, it's really the same strategy that has been used and is being used right now to advance our knowledge about living systems and their underlying biochemistry. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Lexicon_of_Biochemical_Reactions_Cofactors_Formed_from_Vitamin_B12.txt | SPEAKER 1: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Today what I want to do within the lexicon is tell you about nature's most spectacularly beautiful cofactors. And these are formed from vitamin B-12, which you find in your vitamin bottle. OK. So what is the structure of vitamin B-12, and why do I say they are spectacularly beautiful? So it's very hard to see, but if you look at the structure of this, where have you seen a molecule this complicated with five membered rings, each of which has a nitrogen in this? You've seen this when you studied hemoglobin, and you think about heme and proto protoporphyrin IX. If you look at the biosynthetic pathway of heme, a branchpoint of that pathway is to make this ring, which is found in adenosylcobalamin and methylcobalamin, which is what we're going to be focusing on today. And this ring is called the corrin ring. So what I want to do is introduce you a little bit to this corrin ring and what's unusual about it compared to protoporphyrin IX that you've seen before. So the vitamin, as in the case of all vitamins that we've talked about over the course of the semester, is not the actual cofactor used in the enzymatic transformation. The vitamin, which in this case would have this group replaced with cyanide, is vitamin B-12. The actual cofactors that bind to the enzyme have that cyanide replaced with either a methyl group-- and that's called methylcobalamin-- or they have it replaced with 5-prime-deoxyadenosine, and that's called adenosylcobalamin. And so the rest of the molecule is exactly the same. The only thing that's distinct is the axial ligands. So you remember from your transition metal chemistry you had when you were freshman that here you have a cobalt III, and it's coordinated to four nitrogens. OK? So this would be the equatorial plane, and we're going to draw it like this, subsequently, because you can see how complicated this molecule is. And the middle sits right in this plane, but you can see that you don't have complete saturation of these pyrrole- type rings, so there's some pucker in contrast with hemes which are very flat. And then these are called equatorial ligands, and then you have axial ligands. And so there are a number of things that again I wanted to point out that are unusual about B-12. One is the fact that the corrin ring again is much more reduced than a pyrrole ring. And so it's puckered. And if you look at the visible spectrum, or you use your eye and use your eyeball method, you see that cyanocobalamin is bright purple. If you replace this R group with a methyl group, this color turns out to be sort of yellow-orange. And if you replace this with 5-prime-deoxyadenosine, it turns out to be pink. So that's why I say that this is nature's most spectacularly beautiful cofactor. So the other thing is you have two axial ligands. I just introduced you to these guys on the top face. But what you also see on the bottom face-- the bottom axial ligand-- is this unusual structure, which is called dimethylbenzimidazole. 
And it's attached to a ribose, but the unusual thing is in most nucleosides that you've encountered like ATP, this configuration is in the beta position-- it's on the top face of the sugar-- and here it's in the alpha position. So that's distinct. And the other thing that's distinct is the decorations around the side chain compared to what you see in porphyrins. So this is the cofactor we're going to be talking about, and one of the questions that you're interested in is where do we find thes cofactors in metabolism. In a mammalian metabolism, the appearance of these cofactors is quite limited. So the only place you see it in metabolism in mammalian systems is you see methylcobalamin-- OK, where this is a methyl group-- in formation of the amino acid methionine. And so the methyl group from methionine-- I'll show you briefly at the very end of this little presentation-- comes from this methyl group. You use adenosylcobalamin in odd chain fatty acid metabolism. So you have fatty acids that are either an even or an odd chain. When you break the odd chain ones down, you get proprionyl CoA. The proprionyl CoA, through a series of steps, is converted into malonyl CoA, which then gets converted to succinyl COA, which feeds into the TCA cycle. And in that pathway, you use an enzyme-- a mutase-- that uses a adenosylcobalamin. We'll talk briefly about the chemistry of a adenosylcobalamin; also methylcobalamin. OK. So there were a few things that I wanted to say about-- some generalizations that I wanted to make about these cofactors. And at the first one is that again, the corrin ring is much more reduced than the pyrrole ring that you see in protoporphyrin IX, which you've seen in hemoglobin before. The second thing is that you have this unusual dimethylbenzimidazole axial ligand, which you see nowhere else in cofactor chemistry. It's only found in this particular cofactor. The second thing, which I think is the most amazing, is that what you see if you look back here is that you have-- if this is a methyl group, or this is this 5-prime-deoxyadenosine-- you have a carbon cobalt bond. Well, this was discovered in the 1950s, and the first structure was solved of this molecule by Dorothy Crowfoot Hodgkin in 1964, and she won the Nobel Prize for this work. No chemist had ever seen a carbon cobalt blonde bond, and thought in fact biochemists were crazy that they even proposed such a structure. So this structure-- and we'll see that this is where all the chemistry happens-- is completely unique. And people spent 25 years figuring out how this cofactor actually worked to do the transformations that I'll very briefly introduce you to. So this is the first example of a carbon cobalt-- and will see that cobalt is in the plus 3 oxidation state bond-- so it's the first organo-metallic cofactor. And what people also found by studying this molecule is that the cobalt can actually exist in three oxidation states. It can exist in the cobalt I state, where in the d z squared orbital-- if you don't remember what a d z squared orbital is, you need to go back and look at your freshman chemistry-- you have two electrons. And it turns out that cobalt I is a super nucleophile. And we'll see that that plays a key role in the chemistry. And so again, this is the cobalt. The d. We're only looking at one of the orbitals of the cobalt. On the other hand-- and this is found in methyl-- this is going to be used in methylcobalamin. 
So when you have a methyl as the axial ligand, you're going to use cobalt in this oxidation state, which is the plus one state. For almost all other B-12 dependent reactions, you have cobalt II, which has-- you've lost an electron, so you have only one electron by itself. And its d z squared orbital. And this really dictates the chemistry. And so this is found in adenosylcobalamin chemistry. And then in the resting the state you have cobalt III. And cobalt III has none of the electrons in the d z squared orbital. And this is basically the resting state-- its most stable state-- where you find this cofactor. So in cyanocobalamin, which is vitamin B-12, the cobalt is in the plus 3 oxidation state. OK. So we're going to see very briefly-- we're not spending much time on this-- is the cobalt I state. It's a super good nucleophile. It affects the chemistry of the cobalt II state. It has one unpaired electron. You haven't been introduced to chemistry with one unpaired electron, which is radical chemistry-- that most people don't spend a lot of time talking about adenosylcobalamin because it's radicals, and they don't learn much about radicals in introductory organic chemistry. But I'll show you briefly how the enzymes work that use this cobalt II state. The other thing I wanted to mention was the colors. And so if you want to understand how these cofactors work, the different number of electrons govern what colors you can end up seeing. And again I always use this is as an example on MIT'S campus in the spring. Cobalt II has spectacular orange color like the orange azaleas, and cobalt III is like the pink azaleas. And so they're really dramatically different. And this is why I say this is nature's most beautiful cofactor. OK. So what I want to do now is briefly talk about mechanism. And I'm going to mostly talk about mechanism of the cobalt II state, and how adenosylcobalamin works, since that's the one that's most complicated. So what I'm drawing here is if we look at the structure of adenosylcobalamin, you see you have a cobalt with four nitrogen. These are the four nitrogens and these are the two axial ligands. So this is the abbreviation I'm going to be using in all the subsequent chemical descriptions of what I'm talking about, OK? So I'm not going to draw this structure out over and over again. OK. So here's our adenosylcobalamin that we just talked about. And there's DMB. It's a dimethylbenzimidazole axial ligand, and then you have the 5-prime-deoxyadenosine in the top face, which is where all the chemistry is going to happen. So the business end of the molecule is going to be this part of the molecule. And the key to everything is this carbon cobalt bond. Now, what's unusual about a carbon cobalt bond? It's very weak. If you go to 40 degrees, the bond breaks. Most things-- you can go to hundreds of degrees, and the bond is stable. So it's thermally labile. And then the other thing that's unique about this cofactor is that it's light sensitive. So if you put adenosylcobalamin out on the bench top here, within two minutes the whole thing would be destroyed because you would break this carbon cobalt bond. OK? And that's the key to the chemistry of how this cofactor works. OK? So I'm going to give you a generic mechanism, and then I'm going to focus on the mechanism of methylmalonyl-CoA mutase, which you find in human metabolism. Before I get started, let me just show you what the reaction is and show you why it's unusual, and then we'll go back to the actual mechanism. OK. 
So what does methylmalonyl-CoA mutase do? Again it plays a central role in odd chain fatty acid metabolism. OK, so this is CoA. You need to go back and look at your lexicon if you can't remember the structure CoA. But just remember it's a thioester. That's all you need to know. OK. So this is the substrate, and this is called methylmalonyl-CoA. OK. So adenosylcobalamin catalyzes rearrangements. And this reaction has fascinated chemists for 30 years because there was no chemical precedent-- just like there was no chemical precedent for carbon cobalt bonds. When biochemists discovered this, there was no precedent for the reaction I'm going to show you. So what is the reaction? It's a rearrangement. And it's a rearrangement because this hydrogen moves from this carbon to this carbon. And this whole group-- this thioester-- moves from this carbon to this carbon. So that's the actual reaction. It's reversible. And so what you do is you generate succinyl CoA. OK so this hydrogen is moved over here. And this is succinyl CoA. And we haven't talked about it yet, but succinyl CoA plays a central role in metabolism in the TCA cycle. OK. So the question that we want to focus on now is how do you catalyze this weird rearrangement. What is this cofactor? This big, huge molecule with this weak carbon cobalt bond. What does it do? OK. So I'm going to come back to this in a minute, but just let me show you what it does. OK. So this is a generic mechanism, because while I said there's only one enzyme in humans that uses adenosylcobalamin, if you move into fungi, or you move into archaea, or bacteria, you find there are many B-12 dependent reactions that also do rearrangements, but different kinds of rearrangements. OK. So what's the generic mechanism? OK. So the generic mechanism is the following. So here we have our adenosylcobalamin. Here's our substrate. OK. And the idea is we need to move the hydrogen from here to here. OK? Does it just jump through space? And the answer is no. The cofactor adenosylcobalamin is going to remove the hydrogen, and then it's going to transfer it. You're going to generate a reactive speciea, and then it's going to transfer it back to form a new product. So the cofactor-- and this took people a long time to figure out because there was no chemical precedent for this-- is mediating the hydrogen transfer. So that's what I'm going to show you-- how that actually works. So here what happens is the whole key to the chemistry of adenosylcobalamin-- which again, most of you haven't seen something like this before because you're not exposed to radical chemistry-- is that you have homolysis of the carbon cobalt bond. So what does that mean? It means one electron goes to the axial ligand, and one electron goes to the cobalt. So the cobalt III is reduced from cobalt III to cobalt II, and what you're left with is this radical species of 5-prime-deoxyadenosyl radical. OK. So this is the reactive species. Because what you're doing in these transformations is pulling off an amazingly non acidic hydrogen. And normal amino acid side chains cannot do that kind of chemistry. You have to go to these reactive radical species to be able to do this tough chemistry. So here we generated a radical species. It is sitting right next to the substrate in the active site of the enzyme. And so what does it do? It removes a hydrogen atom-- a hydrogen with one electron. OK? And so again, this is free radical chemistry that most of you don't think about that much. 
But what you do is the hydrogen from the substrate is now transferred to this axial ligand and that stays stuck in the active site. So we've seen this guy move over here. And now what you've done is transferred one radical from the cofactor into a second radical-- the substrate. OK. So that's the key. Now it's carrying this hydrogen, and eventually it wants to put the hydrogen back on to form the product. OK? That's part of the rearrangement reaction. So what happens now is-- and this looks like magic. I'm just showing that a substrate radical goes to product radical, and I will show you how this works in the case of methylmalonyl-CoA mutase in a minute, OK? So there's some kind of rearrangement. Remember we had two things rearranging the hydrogen, but we also had a second-- a thioester rearranging. So we have a rearrangement reaction, and this is reversible in the reaction I'm going to be talking about today. And so this is the same is this. Structurally these are the same. I've just written the methyl group. And so we have a substrate radical converting into a product radical. OK? And now what we want to do is generate the product. And so the hydrogen from this carrier-- your axial ligand-- is now going to be transferred by hydrogen atom transfer back to p dot to form the product. So again, it's one electron chemistry, and doing one electron chemistry, the hydrogen is transferred back to p-- and then what do you do? You lose one radical, and you generate another radical. You regenerate the radical we started with-- the 5-prime-deoxyadenosyl radical on the axial ligand. Now I have written here hydrogen in black and in red. Why is that true? Because here's a methyl group. And if you have free complete freedom of rotation around that carbon carbon bond, you can pull off either-- the methyl hydrogens become equivalent, so it can pull off a hydrogen red or a hydrogen black. OK? Can't distinguish in the active site of the enzyme. So what you now generate is the same thing we started with, except we have a product. But we have 5-prime-deoxyadenosyl radical cobalt II. Here we have 5-prime-deoxyadenosyl radical cobalt II. And what you do is you re-form the carbon cobalt bond at the end of every turnover, and now you're ready to start all over again. So that's the reaction. This 5-prime-deoxyadenosyl radical. This axial ligand sitting over here acts as a hydrogen atom transferring agent to remove a hydrogen from the substrate and to transfer it back to the product. OK? I'm going to show you how this works now in the case of methylmalonyl-CoA mutase. So here's our methylmalonyl-CoA, and we're going to be converted into succinyl CoA. So this guy is migrating, and this guy is migrating there. That's the goal. And so what happens here is you cleave the carbon cobalt bond to form cobalt II and 5-prime-deoxyadenosyl radical. And now this 5-prime-deoxyadenosyl radical can remove a hydrogen atom from the substrate. So it leaves you with another radical. This radical goes away. You generate another radical. And now this hydrogen from the substrate is transferred to our cofactor. So it's the hydrogen transferring agent. So now the question is, how does this weird rearrangement reaction end? How does this CoA migrate from one place to another? Again, there was no chemical precedent in the literature for this. And the answer is we still don't know the answer. So there are two mechanistic possibilities. One is that you go through a three membered ring-- cycle propane ring intermediate. 
So what you can picture happening is that one of the electrons from the carbonyl forms a bond with the unpaired electron on this carbon. And you generate this cyclopropane intermediate. So you've gone from this radical to another radical. We haven't lost any radicals. But now what we want to do is we want this group to migrate. So now what happens-- this intermediate can collapse. It can collapse back to form starting material and the 5-prime-deoxyadenosyl radical. Or you can break this bond, in which case you collapse to form the direct precursor to the product. So it can break down in either direction, and the reaction's reversible. And if it breaks down in this direction, you form a new radical. This radical is distinct from this radical. And now what happens is that the hydrogen that you removed from your starting material can be returned back to the product. And what you generate then is 5-prime-deoxyadenosyl radical and cobalt II. And now when you transfer the hydrogen back, you form your product like I showed you in the previous slide. And now you re-form the carbon cobalt bond to re-form the active form of the cofactor. So alternatively you can make this rearrangement happen by another mechanism, which is called a fragmentation mechanism. And I won't go through the details of this, but you can sit and look at this for those of you who are really interested in sophisticated proposals for the chemical mechanisms of the rearrangement. So adenosylcobalamin chemistry was unprecedented every step along the way. And it's now known to be widely used, but not so much so in humans. But really humans are only a small part of the world. We have many, many more bacteria and archaea than we have humans. So this is a pretty important transformation. So hopefully now when you see this again it won't be completely magic. But this is a challenging reaction that most of you haven't been exposed to before. But hopefully some of you get excited about radical-mediated transformations. Just let me close by saying-- from bioinformatic studies in the last five years or so, we now know there are over 50,000 reactions that use free radical chemistry, yet we don't talk about radical chemistry in 5.07. So hopefully some of you will get interested enough in biochemistry to come in and start figuring out how all these radical-dependent reactions occur, not in primary metabolism, but in many secondary metabolic pathways. OK. Thank you. |
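To recap the worked example above in compact form, the overall rearrangement and the radical cycle that carries it out can be written as follows. The condensed line structures are one reasonable way to draw the carbon skeletons (the lecture draws them out on the board), and Ado stands for the 5-prime-deoxyadenosyl group, S-H for substrate, and P-H for product.

\[ \underbrace{\mathrm{^{-}OOC{-}CH(CH_{3}){-}CO{-}SCoA}}_{\text{methylmalonyl-CoA}} \;\rightleftharpoons\; \underbrace{\mathrm{^{-}OOC{-}CH_{2}{-}CH_{2}{-}CO{-}SCoA}}_{\text{succinyl-CoA}} \]

\begin{align*}
&\text{1. Homolysis:} && \text{AdoCbl (Co(III))} \rightleftharpoons \mathrm{Ado^{\bullet}} + \text{cob(II)alamin}\\
&\text{2. H-atom abstraction:} && \mathrm{Ado^{\bullet}} + \mathrm{S{-}H} \longrightarrow \mathrm{Ado{-}H} + \mathrm{S^{\bullet}}\\
&\text{3. Rearrangement:} && \mathrm{S^{\bullet}} \rightleftharpoons \mathrm{P^{\bullet}}\\
&\text{4. H-atom return:} && \mathrm{P^{\bullet}} + \mathrm{Ado{-}H} \longrightarrow \mathrm{P{-}H} + \mathrm{Ado^{\bullet}}\\
&\text{5. Recombination:} && \mathrm{Ado^{\bullet}} + \text{cob(II)alamin} \longrightarrow \text{AdoCbl (Co(III))}
\end{align*}

At the end of every turnover the carbon-cobalt bond is re-formed, so the cofactor is regenerated rather than consumed.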
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Carbohydrate_Biosynthesis_II_Gluconeogenesis.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN ESSIGMANN: The second carbohydrate biosynthetic pathway we're going to look at is called gluconeogenesis. Gluconeogenesis is the synthesis of glucose from noncarbohydrate precursors. Let's look at Panel A of this storyboard, Storyboard 31. There are some organs in the body that require glucose as their metabolic fuel, yet they don't have the capacity to make it, or to make much of it. Examples are the brain, renal medulla, red blood cells, and the testes. To give an idea of the scale of the problem the body faces, we only have about 200 grams of glucose or glycogen stored away in our body, and the brain alone uses more than half of that each day, so we don't have much in the way of a carbohydrate reserve. The body compensates by having certain organs, specifically the liver and renal cortex, act as specialized manufacturing centers to produce and export glucose. The chemical raw materials they use include organic acids such as lactate, some amino acids, and the three-carbon residues from odd-chain fatty acids that enter catabolism. To get an idea of how gluconeogenesis works, let's take a look at the box here at the bottom of Panel A. You see a muscle working very hard, as evidenced by the fact that it's secreting lactate out into the blood. The lactate travels from the working muscle by the blood to the liver, where the lactate is taken in. The pathway of gluconeogenesis, with energy and reducing equivalent input, rebuilds glucose from that lactate precursor. The glucose is then sent back out into the blood, returns to the muscle, enabling the muscle to continue hard work. As I've described earlier in 5.07, this functional relationship between a muscle and the liver is called the Cori Cycle. Let's look at Panel B. Depending on how you look at it, there are seven or possibly eight noncarbohydrate precursors to glucose in gluconeogenesis. They're listed in this panel. And I've indicated these precursors by numbers 1 through 8. The first seven are not carbohydrates, so they qualify as noncarbohydrate precursors as fits the definition for a gluconeogenic compound. The eighth is actually a carbohydrate. It's ribose. I'm semi-illegally squeezing it in here because ribose acts as a major gluconeogenesis precursor by way of the Pentose Phosphate Pathway, or PPP. We haven't covered the Pentose Phosphate Pathway yet, but we'll see, when we do cover it, exactly how this pathway works to manufacture glucose. In other words, ribose from the diet can be converted into glucose by way of the Pentose Phosphate Pathway coupled to gluconeogenesis. A little bit later in this lecture, we'll see how each of these precursors-- that is, numbers 1 through 8-- are converted to glucose by the pathway of gluconeogenesis. The best way to start thinking about gluconeogenesis is to think of it as glycolysis running in reverse. As shown here in Panel C, glycolysis is conversion of glucose, ultimately, to pyruvate. As we know, that pathway goes with a net generation of ATP. That is, it is exergonic. Accordingly. 
If we go from pyruvate back to glucose, we know that in order to make that reverse pathway exergonic, we're going to need to have energy input. As we learned earlier, there are 10 steps in glycolysis going from glucose to pyruvate. Three of those steps are thermodynamically irreversible. Those are the steps that commit molecules to flow from left to right, as drawn in this panel. The thermodynamically irreversible steps of glycolysis are first, hexokinase/glucokinase, that is, the conversion of glucose to glucose 6-phosphate. The second irreversible step is phosphofructokinase 1, the conversion of fructose 6-phosphate to fructose 1,6-bisphosphate. The third irreversible step is pyruvate kinase, the conversion of phosphoenolpyruvate to pyruvate. We have to find a way in gluconeogenesis to bypass these three thermodynamically irreversible steps. The enzymes that were invented to circumvent the three thermodynamically irreversible steps of glycolysis are shown with green asterisks in this panel. For example, glucose 6-phosphatase hydrolyzes the 6-phosphate from glucose 6-phosphate, converting it to glucose. A second asterisked enzyme of gluconeogenesis is fructose 1,6-bisphosphatase, which converts fructose 1,6-bisphosphate by hydrolysis to fructose 6-phosphate. The third irreversible step that we need to bypass is pyruvate kinase. Two enzymes were invented to enable this bypass: pyruvate carboxylase, which we have studied before in several contexts in 5.07, and a new enzyme, phosphoenolpyruvate carboxykinase, or PEPCK. Pyruvate carboxylase and PEPCK work in partnership with several other enzymes, actually in a mini-pathway, to bypass the pyruvate kinase step of glycolysis, and hence, enable gluconeogenesis. Lastly, at the right of this panel is a large inverted L. This is meant to symbolize the 8 precursors to gluconeogenesis. These precursors will enter the gluconeogenesis pathway, and flow by the hatched arrow lines all the way up to glucose. This network of reactions allows the conversion of molecules, such as the amino acid glutamate, all the way to glucose. Now we're going to turn to Storyboard 32. Let's look at Panel A. And let's start out by looking at the mechanisms by which the four key enzymes of gluconeogenesis work. The first two enzymes, glucose 6-phosphatase and fructose 1,6-bisphosphatase, carry out simple phosphate-ester hydrolyses as shown. The third enzyme is pyruvate carboxylase. We looked at this enzyme mechanistically back in the fatty acid catabolism lectures, when I taught about the mechanisms by which CO2 can be captured and added to an organic acid. Pyruvate carboxylase will convert pyruvate to oxaloacetate. Hence, it's an anaplerotic enzyme that helps maintain the levels of TCA intermediates. In the cases we've looked at so far, PC helps maintain specifically oxaloacetate levels. Refer back to the lecture on carboxylases to see how this enzyme works from a mechanistic perspective. Now let's look at Panel B. The last and fourth key enzyme I want to discuss in gluconeogenesis is phosphoenolpyruvate carboxykinase, or PEPCK. We haven't seen an enzyme that works like this before, so let me go through its mechanism in some detail. PEPCK is going to convert oxaloacetate to phosphoenolpyruvate. The enzyme is active in the liver and renal cortex under physiological conditions that mandate gluconeogenesis be turned on. The enzyme takes oxaloacetate, which, as we know, is a beta keto acid. Thus, it's prone to decarboxylation.
When CO2 is removed from oxaloacetate, you get a transient pyruvate enolate, which is an anion. The oxygen anion of the pyruvate enolate will attack the gamma phosphate, that is, the terminal phosphate of GTP, and thus, the pyruvate enolate will become phosphorylated. The product of phosphorylation is phosphoenolpyruvate, or PEP. Hence, the concerted action of two enzymes, pyruvate carboxylase and PEPCK, enables reversal of the otherwise irreversible PK step, the pyruvate kinase step, and allows the cell to do gluconeogenesis. Let's look now at Panel C. Now that we've seen the identities of the eight precursors to glucose in gluconeogenesis, and we've done a little bit of learning about the mechanisms of some of the key enzymes in the pathway, we can turn our attention to a network analysis of the overall gluconeogenic pathway. The pathway in this panel may seem a bit daunting, because it's pretty much everything we've learned about metabolism so far in 5.07. So let's go through it slowly in little pieces. Let's start over on the left, where the glucose molecule is situated in a squiggly box. Note that the glucose in the squiggly box is being sent out of the cell-- that is, it's going in the opposite direction from the direction in which we have dealt with glucose so far in metabolism. Usually, we think of glucose as coming into the cell and entering metabolism. This is a liver cell or renal cortex cell that's in the process of manufacturing glucose from those eight precursors, the ones I mentioned above, and sending the manufactured glucose out into the blood. In this metabolic map, the hatch lines-- that is, the lines that look like railroad tracks-- represent that gluconeogenic pathways. Now let's start walking through the metabolic map. Slightly to the right of the middle of the diagram, you see the amino acid alanine and the three-carbon organic acid lactate. Lactate is precursor number one in my scheme. The numbering, again, is in Panel B of the previous storyboard. And alanine is precursor number two. Let's start with lactate. Lactate would typically come into the liver from the blood, perhaps from a working muscle. Think about the Cori Cycle for a moment. Lactate dehydrogenase will convert the lactate into pyruvate in the liver or in the renal cortex, in the other gluconeogenic organ. Let's look next at alanine. Alanine, gluconeogenic precursor number two, is, of course, an amino acid. And a PLP-mediated reaction will convert it to pyruvate. That reaction involves specifically the loss of alanine's amino group. Now imagine the resulting pyruvate, that is, from either lactate or alanine, translocating to the right into the mitochondrion, where it's carboxylated by pyruvate carboxylase into oxaloacetate. This is the reaction mechanism we looked at a short time ago. Now remember that malate dehydrogenase, or MDH, thermodynamically favors the formation of malate from oxaloacetate. So our two molecules navigating the gluconeogenic pathway-- that is, precursors one and two, transit from oxaloacetate into malate. Malate is in the mitochondrion at this point. In the next step, it exits the mitochondrion and goes into the cytosol. As I've mentioned before, the transporter for malate works in both directions. Malate, now transported into the cytosol, can be converted back to oxaloacetate by the cytoplasmic version of malate dehydrogenase. The oxaloacetate thus produced in the cytosol goes upward, following the vertical arrow. 
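The bypass chemistry just described can be gathered into two equations and their sum; this simply restates the steps above (the CO2 captured by pyruvate carboxylase is released again by PEPCK, and the malate shuttle that moves the carbons between compartments cancels out of the sum).

\begin{align*}
\text{pyruvate} + \mathrm{CO_{2}} + \mathrm{ATP} &\longrightarrow \text{oxaloacetate} + \mathrm{ADP} + \mathrm{P_{i}} && \text{(pyruvate carboxylase, mitochondrion)}\\
\text{oxaloacetate} + \mathrm{GTP} &\longrightarrow \text{PEP} + \mathrm{CO_{2}} + \mathrm{GDP} && \text{(PEPCK, cytosol)}\\[2pt]
\text{net:}\quad \text{pyruvate} + \mathrm{ATP} + \mathrm{GTP} &\longrightarrow \text{PEP} + \mathrm{ADP} + \mathrm{GDP} + \mathrm{P_{i}}
\end{align*}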
Lastly, PEPCK, phosphoenolpyruvate carboxykinase, converts the oxaloacetate into cytoplasmic phosphoenolpyruvate. Let's pause here for a minute and review. Look at where phosphoenolpyruvate is on the metabolic map, and then look to the right. On the right, you'll see pyruvate on the rightward side of the one-way arrow. You will also see lactate and alanine, molecules one and two, respectively, having their carbons flow into the pool of pyruvate. The pyruvate pool, which starts in the cytosol, gets shuttled into the mitochondrion, where pyruvate carboxylase converts the molecules derived from pyruvate-- that is, lactate and alanine-- into mitochondrial oxaloacetate. Then the mitochondrial pool is converted from oxaloacetate into malate, which exits the mitochondrion. And the molecules now represent a malate pool that's in the cytosol. That malate pool, which began as pyruvate, alanine, and lactate, is converted to oxaloacetate. And lastly, our new enzyme, PEPCK, converts the oxaloacetate into cytoplasmic phosphoenolpyruvate. All that this was done in order to bypass the one-way step between phosphoenolpyruvate and pyruvate. In other words, I can go from phosphoenolpyruvate to pyruvate quite easily by pyruvate kinase. But because this reaction is so strongly exothermic, I cannot go the other way. The reverse step is irreversible. The mini-network that we've just navigated with all of its steps, some of which require ATP, was done in order to bypass the irreversible pyruvate kinase reaction. We've just worked very hard to move atoms from pyruvate, alanine, and lactate into the cytosol, where those atoms are now packaged as PEP, or phosphoenolpyruvate. Now, let's focus on phosphoenolpyruvate, and let it flow backward through the reverse of glycolysis all the way up to the next irreversible step at fructose 1,6-bisphosphate. All of the steps between phosphoenolpyruvate and fructose 1,6-bisphosphate are common to both gluconeogenesis and glycolysis. Fructose 1,6-bisphosphate cannot be converted to fructose 6-phosphate by reversal of the PFK 1, phosphofructokinase 1, enzymatic step. That step is too thermodynamically irreversible. Consequently, Nature invented fructose 1,6-bisphosphatase. We saw the mechanism of that enzyme earlier in the lecture. It allows the conversion of fructose 1,6-bisphosphate to fructose 6-phosphate. Then the molecule continues upstream to form glucose 6-phosphate. And once again, a gluconeogenic-specific enzyme, glucose 6-phosphatase, will convert glucose 6-phosphate into glucose. Lastly, the manufactured glucose is exported from the cell by the glucose transporter. So while it was a long passage, the carbon atoms from alanine and lactate have traveled all the way to glucose. And then the glucose has been sent out of the cell. That is, their carbons are now incorporated into the molecules of glucose that go off from the liver and renal cortex to organs that need glucose to meet their energy needs, or their needs for the glucose skeleton for other biochemical purposes. Gluconeogenic precursor three is glutamate. This is the amino acid equivalent of the organic acid alpha ketoglutarate. So look at alpha ketoglutarate at about 3 o'clock on the TCA cycle. You'll see glutamate precursor three being de-aminated by a PLP-mediated process. This converts glutamate into alpha ketoglutarate. Most of the carbons of the alpha ketoglutarate will then move clockwise around the TCA cycle, and end up at about 9 o'clock as malate. 
Then, as we saw for precursors one and two, the malate will be exported from the mitochondrion. And next, most of its atoms will find their way back to glucose, just as we saw with alanine and lactate earlier. Precursor four is aspartic acid. Aspartic acid is the amino acid equivalent of oxaloacetate. Aspartate is de-aminated into oxaloacetate, which enters the gluconeogenic pathway at about 10 o'clock on the TCA cycle. At this point, the oxaloacetate formed from aspartate joins the oxaloacetate made from alanine and lactate, precursors one and two, and continues all the way back to glucose. Gluconeogenic precursor five comes from odd-chained fatty acids, which are depicted way over on the right of the metabolic map. As we have seen, odd-chain fatty acids are catabolized into three- carbon compounds, such as propionyl coenzyme A. One of the carboxlysases, propionyl coenzyme A carboxylase, will add a carbon to this three-carbon molecule. Then methylmalonyl-coenzyme A mutase and its partner, epimerase, will complete conversion of the three-carbon propionyl-CoA to the four-carbon product succinyl-CoA, which will then integrate into the TCA cycle. Succinyl-coenzyme A enters at about five o'clock on the TCA cycle. The carbons then flow clockwise to malate. Just as we've seen before, most of those carbons will flow all the way back to be built into glucose. Gluconeogenic precursors indicated by number six are the amino acids, isoleucine, methionine, and valine. These are, once again, way over on the right-hand side of the figure. These amino acids are also converted to propionyl-CoA. And just as with the odd-chain fatty acids, they will be converted into succinyl-coenzyme A. Then their atoms follow the pathway all the way back to glucose, as we have seen now several times. Slightly to the left of center, you'll see gluconeogenic precursor number seven, which is glycerol. Glycerol is a tri-alcohol produced from the metabolism of complex lipids. It derives from the three-carbon backbone that holds fatty acids and phosphates by ester linkages. Glycerol is first phosphorylated to glycerol 3-phosphate. And then that product is oxidized to dihydroxyacetone phosphate. We've seen this chemistry before. From DHAP, or dihydroxyacetone phosphate, the molecules flow leftward on the chart to fructose 1,6-bisphosphate, and then they continue on through gluconeogenesis to glucose. Lastly, precursor number eight is ribose. We mainly obtain ribose from our diet. In the next lecture, I'm going to cover the pentose phosphate pathway. In this pathway, ribose from the diet will enter what I call the "carbon scrambling phase" of the pentose phosphate pathway. Ribose goes into the carbon scrambling phase of the pathway, and GAP, glyceraldehyde 3-phosphate and fructose 6-phosphate come out. Both of these molecules, GAP and fructose 6-phosphate, will flow from right to left all the way to glucose in the pathway that I've drawn out in Panel C. Hence, ribose is a good gluconeogenic precursor. Let me summarize the functional dimensions of the gluconeogenesis pathway from a high altitude. Imagine that overnight, as you sleep, you're not consuming any food. Consequently, at some point, your blood sugar is going to drop. 
Your gluconeogenic organs, the liver and renal cortex, will sense this drop in blood sugar, and turn on the hatched-line pathways we've seen above, in order to take readily available noncarbohydrate precursors, such as lactate, alanine, ribose, and so on, and convert those noncarbohydrate precursors into glucose. The liver and renal cortex will then send that glucose out into the bloodstream to client organs, for example, the brain, red blood cells, and renal medulla. Ultimately, these client organs will be able to rely on a constant level of glucose. That, in a nutshell, is the role of the gluconeogenic pathway. |
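As a summary of the tour above, here is a minimal sketch in Python that tabulates the eight gluconeogenic precursors and the point at which each one's carbons enter the route back to glucose. The mapping simply restates the lecture; the table is illustrative rather than exhaustive.

# The eight gluconeogenic precursors from the lecture and where each one's
# carbon skeleton enters the pathway back to glucose.
precursor_entry_points = {
    1: ("lactate", "pyruvate (via lactate dehydrogenase)"),
    2: ("alanine", "pyruvate (PLP-mediated loss of the amino group)"),
    3: ("glutamate", "alpha-ketoglutarate, then clockwise around the TCA cycle to malate"),
    4: ("aspartate", "oxaloacetate"),
    5: ("odd-chain fatty acids (as propionyl-CoA)", "succinyl-CoA"),
    6: ("isoleucine, methionine, valine", "propionyl-CoA, then succinyl-CoA"),
    7: ("glycerol", "glycerol 3-phosphate, then dihydroxyacetone phosphate (DHAP)"),
    8: ("ribose", "fructose 6-phosphate and GAP, via the pentose phosphate pathway"),
}

for number, (precursor, entry_point) in precursor_entry_points.items():
    print(f"{number}. {precursor:45s} -> {entry_point}")

For reference, the standard textbook cost of running the full route from two pyruvates up to one glucose, which is not derived in this lecture, is four ATP, two GTP, and two NADH.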
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_4_Problem_2_The_Mechanism_of_HMGCoA_Synthase.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Hello, and welcome to 5.07 Bio Chemistry Online. I'm Dr. Bogdan Fedeles. Let's metabolize some problems. Today we're going to be talking about Problem 2 of Problem Set 4. Here we're going to be discussing in detail the mechanism of HMG-CoA synthase, a key enzyme in central metabolism responsible for making the five carbon building blocks from which all sterols, such as cholesterol and steroid hormones are made. HMG-CoA synthase catalyzes the following reaction. It takes acetyl-CoA, which we're going to encounter a lot in the central metabolism, and combines it with acetoacetyl-CoA, another thioester. In this process, the one molecule of CoA is lost, and we get the product hydroxymethylglutaryl CoA or HMG-CoA, as you notice the initials H, M, and G are part of this name. To help us understand the mechanism, a crystal structure of this HMG-CoA synthase is provided in this problem, and it's shown here. Question 1 of this problem is asking us to provide a detailed curved arrow mechanism of the reaction catalyzed by the HMG-CoA synthase. And for that we're going to use the information provided in the crystal structure. And we are also given an additional hint and that is the acetyl CoA substrate provides the CH2CO2 CO2 minus moiety in the HMG-CoA product. Let's take a look. So the hint is telling us that these two carbons, which I'm going to label in blue from acetyl-CoA, are, in fact, these two carbons, CH2 COO minus in hydroxymethylglutaryl CoA. That means that the other four carbons in the product must be the carbons from the acetoacetyl-CoA. Let's label these red. So now since we have a thioester functionality in both cases, these are probably the same, so the carbons from a acetoacetyl-CoA are as follows. So notice the HMG-CoA synthase, it accomplishes the formation of a carbon-carbon bond, and that is this bond right here. It's between the CH3 carbon acetyl-CoA and the carbonyl carbon of acetoacetyl-CoA. And in the process, this carbonyl will become a hydroxyl group. Now, let's take a look at the crystal structure and see what information we can gather there. Now this is a picture of the active site of the HMG-CoA synthase. We notice the acetyl-CoA moiety is actually now bound to this C111. As you know C is a cysteine-- so it's actually covalently bound to a cysteine-- in the active site. Now, notice this is the other substrate, which is the acetoacetyl-CoA and bound in the active site. So as we've discussed before, we're going to make this carbon-carbon bond, and that's going to happen between this carbon here and this carbon here. So this bond needs to be formed. However, let's take it one step at a time. The reaction will start by forming this thioester between the acetoacetyl-CoA carbons and the cysteine in the active site of the enzyme. In many reactions that use a cysteine in the active site, which is used to form a covalent bond to the substrate, we first need a base to deprotonate the cysteine and make it a really good nucleophile, which will subsequently attack the substrate and form a covalent bond. Now, in a lot of these cases, the base is a histidine. 
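Before looking for that base in the structure, it may help to keep the overall bookkeeping in view. The water shown below enters in the final thioester-hydrolysis step discussed later in this problem, and the carbon counts follow the hint above: two carbons from acetyl-CoA, four from acetoacetyl-CoA.

\[ \text{acetyl-CoA}\ (\mathrm{C_{2}}) + \text{acetoacetyl-CoA}\ (\mathrm{C_{4}}) + \mathrm{H_{2}O} \;\longrightarrow\; \text{HMG-CoA}\ (\mathrm{C_{6}}) + \mathrm{CoA\text{-}SH} \]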
Let's take a look if we see a histidine in our structure. There is a histidine in the structure, H233. But if you look at this histidine, it's quite far from our cysteine here. Well, obviously, because this is a projection, this is a two-dimensional structure so that we only see here a projection of a three-dimensional structure, it's very hard for us to tell if this histidine is, in fact, close enough to deprotonate this cysteine. So even though we have this crystal structure, from this one picture, we do not have enough information to figure out, well, what is the base that will deprotonate cysteine before it reacts with acetyl-CoA. Remember this is often the case with crystal structures that because of the resolution which we can collect them, we cannot see the protons. And if we cannot see the protons, it's very hard to tell which amino acid residues are protonated and can serve as a general acid or general bases. A lot of the researcher's intuition comes into play to draw these kind of structures. And it's only with collecting many different kinds of experimental evidence that we can put together a more definitive mechanism. Therefore, we might not know for sure from this one picture, which is the general base that deprotonates the cysteine at 111. So we're not going to assign it for now. Let's try to write the mechanism for that part. Here is the acetyl-CoA substrate, and here is the active site cysteine 111 with its thiol group. And again, we're not going to assign it, but there will be a base in the active site of the enzyme that will need to deprotonate this cysteine before it can react. Therefore, the reaction first involves activation of the cysteine, is deprotonated, and then cysteine can attack the thioester and form a tetrahedral intermediate. As we've seen it before, for a lot of these reactions involving thioesters. In the first step, we will form four bonds to this carbonyl carbon, and we have formed a new bond between the thiol of the cysteine 111 and the starting material, acetyl-CoA. Of course, there is a negative charge here. And our base in the active site of the enzyme will now be protonated. Now that the tetrahedral intermediate is going to fall apart by kicking off the CoA moiety. Of course, this will get protonated presumably by the same base. Now it's a general acid that will protonate a CoA. So we obtain the thioester with the cysteine in the active site and one molecule of CoA is going to be leaving. And here is our base. Notice the tetrahedral intermediate that we're forming here. We have an oxygen that develops a negative charge. And this is very similar to the kinds of intermediate you've seen in the serine protease mechanism. Now in that case, such an intermediate was stabilized with hydrogen bonds from the backbone of the protein in a structure that was called an oxyanion hole. Now let's take a look at the crystal structure of HMG-CoA to see how a tetrahedral intermediate might be stabilized. Here is the acetyl of the acetoacetyl-CoA substrate. The acetyl now is bound to the cysteine 111 here in the active site. Now, let's take a look if there are any other residues close enough to this oxygen. One of them that's shown here is this NH, which belongs to an amide bond, the backbone amide, and that's part of the amino acid serine 307. So this distance here, C3.06 Angstrom, it's actually close enough for a fairly good hydrogen bond. 
Presumably, this interaction will stabilize the binding of acetyl-CoA in this region of the active site and may also be involved in catalyzing the reaction with the cysteine by stabilizing the tetrahedral intermediate. In that case, this distance will have to become even smaller, that is to form a really good hydrogen bond on the order of 2.6, 2.7 Angstrom. Since this is only one snapshot of the reaction, we don't have enough information to tell if this is a key catalytical interaction. Once acetyl-CoA has reacted with the enzyme, hence, formed the thioester with the cysteine 111, we're now ready to proceed with the reaction and form a carbon-carbon bond. Let's take a look. So in the next step of the reaction, we want to form a carbon-carbon bond between this methyl group here and this carbon of the carbonyl of the other substrate. So first of all, we need to deprotonate this methyl group. We're going to form an enolate. Then the enolate is going to attack this carbonyl. Once again, we look for a suitable base, and we see, for example, this glutamate 79 may be, in fact, serving as a general base to deprotonate the methyl group here and form the enolate. Once the enolate is formed, it's going to attack this carbonyl, and this oxygen is going to start developing a negative charge, which needs to be compensated by a general acid. And it looks like this histidine 233 is close enough to donate a proton and generate a hydroxyl group here. All right. So let's try to write a mechanism based on what we just said. Here is the thioester between cysteine 111 and acetyl. And let's show the general base. It's going to be glutamate 79. Because it's a base, I'm going to put a negative charge on it. There you go. And this will serve to deprotonate the methyl from the acetyl, moiety here, and generate an enolate. So it's picking up a proton, electron move, and it's going to generate, as before, a negative charge on the oxygen, which may be stabilized by that interaction with the [INAUDIBLE] hydrogen that we just discussed. Now, let's try this again. And here is our enolate. In the next step, the enolate now is going to attack the carbonyl of the acetoacetyl-CoA and form the carbon-carbon bond. All right. This is the acetoacetyl-CoA, and we discussed that next to this oxygen there is our histidine 233, which we're going to show it as being protonated. Therefore, the reaction proceeds as follows. Here we're forming the carbon-carbon bond. Then the oxygen is going to pick up the proton from the protonated histidine and it's going to form a hydroxyl group. Therefore, we get cysteine 111 is still attached to what is known the HMG-CoA product. So this is our carbon. This is the new carbon-carbon bond. We have here a methyl group. We have the new hydroxyl, which were formed here, and the rest of the molecule. And our histidine, it was protonated here, now it's going to be deprotonated. So we just saw how the carbon-carbon bond is formed in the course of this reaction. And now we're left with a six-carbon thioester with a cysteine 111 in the active side of the enzyme. Therefore, the last step of the reaction would involve hydrolysis of this thioester to free the product and regenerate the system. So for the last step, we need to hydrolyze this thioester, and for that, we're going to need to activate a water molecule. 
Once again, we don't know-- here is the water molecule-- we don't know what's the general base, the residue in the active site, which removed this proton from water to allow it to attack the carbonyl of the thioester. So we're going to call it a general base attached to the enzyme. So this base is going to pick a proton from water, and then the water is going to add to the carbonyl and once again generate a tetrahedral intermediate. Say, this the OH from water, and this is the rest of the molecule. Once again, this is a tetrahedral intermediate, very similar to the one we saw in the first step when we formed this thioester, and presumably, it's going to be stabilized in a fairly similar manner. Let's also show that this base attached to the enzyme is now protonated, and it would probably donate this proton to reprotonate the cysteine and reform that thiol group. So in the second step, thiol takes the electrons, which picks up the proton from the general base in the active site. So at this point, we're just going to release the thiol of the cysteine 111, and the rest of the molecule is exactly our product, which is the HMG-CoA. The curved arrow mechanism we just wrote answers part 1 of the problem. Second question of this problem is asking about the stabilization of the tetrahedral intermediate, which actually we have just discussed. But let me reiterate as you guys saw in the serine protease mechanisms, whenever we're forming this tetrahedral intermediate, they tend to be stabilized in an oxyanion hole, which is basically a structure of the enzyme, which can form hydrogen bonds with the partial negative charge or full negative charge that develops in a tetrahedral intermediate. Presumably, similar structure exists for this HMG-CoA synthase, but because we're only given one snapshot, one crystal structure, that is not sufficient information to say for sure which are the key interactions to stabilize these tetrahedral intermediates. A lot more work, a lot more experimental data is necessary to figure out which are the key hydrogen bonds and interactions that stabilize the tetrahedral intermediates. Question 3 is asking us to review the mechanisms by which enzymes can achieve their amazing rate acceleration, which is on the order of 10 to the 6 to 10 to 15 times over the uncatalyzed reaction and of course, to point out these mechanisms in the context of the HMG-CoA synthase. As you have seen over and over in this course, the three general mechanisms by which enzymes accelerate reactions are binding energy, general acid/general base catalysis and covalent catalysis. Now let's take a look at our structure and figure out how each one of these mechanisms might be operating. To start off, covalent catalysis, it's pretty obvious here the acetyl-CoA substrate first reacts and forms a covalent bond with the cysteine of the HMG-CoA synthase. So this covalent attachment to the enzyme allows this residue to be positioned just right so that it can react with the other substrate. Now, general acid/general base catalysis, we've seen all these residues that participate. Obviously, in order for this to react, we need a base to deprotonate the cysteine. Then we need a base, presumably, this glutamate 79, to deprotonate the methyl group here to form the enolate. And we need an acid to stabilize and form this hydroxyl group that will be developing on this oxygen. So covalent catalysis and general acid/general base catalysis, that's pretty obvious. 
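Before turning to binding energy, here is an aside on what the 10 to the 6 to 10 to the 15 rate accelerations quoted at the start of this question mean in free-energy terms. The sketch below is a minimal Python calculation using the standard transition-state relation, delta-delta-G = RT ln(k_cat / k_uncat); that relation, and the assumed temperature of 298 K, are textbook bookkeeping rather than part of the problem.

import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # assumed temperature, K

# Rate accelerations quoted in the problem: 10^6 to 10^15 over the uncatalyzed reaction.
for fold_acceleration in (1e6, 1e15):
    barrier_lowering_kJ = R * T * math.log(fold_acceleration) / 1000.0
    print(f"{fold_acceleration:.0e}-fold acceleration ~ {barrier_lowering_kJ:.0f} kJ/mol "
          "of transition-state stabilization")

Even the low end of the range corresponds to roughly 34 kJ/mol of selective transition-state stabilization, on the order of a few good hydrogen bonds.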
Now when it comes to binding energy, this is not something that we can obviously derive from the structure, but we can postulate the number of ways in which binding energy contributes to this reaction. First of all, both of the substrates need to bind to the enzyme. In order to do that, they need to be desolvated, that is to remove all the water molecules that surround them. So that by itself requires energy. The binding energy is also derived by when we align the substrates in the active site of the enzyme, we align them so closely the right geometry and within a few tenths of an Angstrom so that the right orbitals overlap and allow the reaction to happen. So also the ability to align this residue so closely that also contributes to the binding energy. And finally, the binding energy also manifests when we're stabilizing, for example, the transition state of the reaction relative to the binding of the substrates. So if, for example, for our tetrahedral intermediate that will form here, if that transition state or the tetrahedral intermediate is stabilized more than the substrate, then the reaction is accelerated and proceeds towards that pass. Part 4 of the problem is asking us to look up the structure of coenzyme A, or CoA and then contrast the reactivity of say, acetyl-CoA, the thioester with CoA, with the reactivity of a thioester with a much smaller thiol group. Let's see what the coenzyme A looks like. Here we have the structure coenzyme A. The business end of the molecule is this thiol group, which is attached to a substantially long arm. And at the end here, we have, as you can recognize, two phosphates, the ribose and the base adenine. This is a nucleotide is ADP. There's also another phosphate here. But the business end of the molecule is this thiol group, and we're asked to contrast whether a thioester form with coenzyme A would behave similarly to a thioester form with this right-hand portion of the molecule, which I wrote here. So it's a much shorter thiol. Now, it turns out these thioesters will be very, very similar because really it's only the thiol moiety that we need to form thioester. So those thioesters will behave very, very similarly. Now, the advantage of having such a long arm for the coenzyme A is that it provides a way to insert the substrate, which will attach here, to say, acetyl-CoA, to insert acetyl-CoA in very deep into the active site of the enzyme. Having a long arm to guide the thioester may be important. As you will see later in the course, fatty acid synthases, which are these mega dalton complexes, have multiple active sites. So the fatty acid attaches a thioester to coenzyme A allows it to be moved through different active sites and through a same kind of chemistry in multiple steps over and over again. The advantage of having a coenzyme A thioester is that it may provide some additional binding energy. That nucleotide portion of coenzyme A may interact specifically with the enzyme near the active site, whereas, a much smaller thiol like the one we just saw will not have that kind of interaction available. Since we've been talking about thioesters throughout this entire problem, the last question is asking us to rationalize why we're seeing thioesters in metabolism as opposed to oxygen esters. Now, the key message you need to remember here is that the resonance that we observe in oxygen esters is almost completely absent in thioesters. And this fact makes the thioesters less stable and therefore, more reactive. 
Not only does their carbonyl group behave more like a ketone, but their alpha hydrogens are also more acidic. Let's take a look. Here is an oxygen ester that shows a proton in the alpha position. And as you know, the lone pairs on this oxygen can conjugate with the carbonyl group and form certain resonance structures, one of which is this one. So the electrons can move like this, and then we're going to have a negative charge here and a positive charge here. And this is possible because the electrons on both oxygens are found in orbitals of comparable energies. By contrast, in the case of a thioester, we have a sulfur. Now, the electrons on the sulfur are in orbitals that are not of comparable energy with the ones on the carbonyl group, and therefore, this kind of conjugation does not, in fact, happen. And because this doesn't happen, the electrons on the carbonyl stay localized, and that makes the carbonyl a better reactive site. So it's a better electrophile for reacting with nucleophiles-- it behaves more like a ketone. For the same reason, when we deprotonate the alpha position on a thioester, the electron density can delocalize onto the carbonyl to stabilize the enolate, much more so than it would for an oxygen ester. Therefore, the pKa-- the acidity constant-- for this hydrogen on thioesters is close to 18, which is smaller, and therefore more acidic, than the pKa of the alpha position in oxygen esters, which is about 22. This sums up Problem 2 of Problem Set 4. I hope you now have a much better feel for how we can use crystal structure data to propose a reasonable mechanism for enzyme-catalyzed reactions. Keep in mind that writing mechanisms on paper is relatively easy. But to truly confirm that that mechanism is taking place in real life inside the cells, it takes a lot more experimental work and evidence. |
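A short note on what the pKa values quoted above mean quantitatively: because pKa is a logarithmic scale, a difference of four units corresponds to a ten-thousand-fold difference in acid strength.

\[ \frac{K_{a}(\alpha\text{-H, thioester})}{K_{a}(\alpha\text{-H, oxygen ester})} = 10^{\,22-18} = 10^{4} \]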
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Introduction_to_Carbohydrate_Catabolism.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN ESSIGMANN: Welcome to the metabolism portion of Biological Chemistry 5.07. My name is John Essigman and I teach this course with JoAnne Stubbe. I'm primarily a chalkboard teacher, as you'll notice from my notes. In order to make things as simple as possible, I use a lot of abbreviations, and my abbreviations list will be appended elsewhere. Let's start by looking at a metabolic chart, figure 16-1 from the textbook. Obviously it looks very complex. Part of the challenge of both teaching and learning biochemistry is finding a way to look at this pig's breakfast of biochemical reactions and find its underlying structure. For the purpose of this course, the underlying structure we're going to look for is the biochemical pathways that let us function. For example, a pathway could involve making ATP or it could be putting an amino group onto an alpha-keto acid in order to make an amino acid. Although few people think of biochemistry as simple, formatting the pig's breakfast into pathways that have functional meaning helps us simplify them and make them easier to remember. The second simplifying point is that all biochemical pathways are reversible. The gray vertical bar in the middle of this figure is the biochemical pathway of glycolysis going from glucose to pyruvate. There are 10 steps. Some of these steps have a negative delta G. That is, they are favorable in the direction drawn, and some of them have a positive delta G. That is, those steps are favorable in the opposite direction. But overall, when you sum them up, all of the free energies of the glycolitic pathway, you get a negative number, which means that the pathway of glycolysis is overall thermodynamically irreversible, and progresses from glucose to pyryvate. It turns out that there's another pathway called gluconeogenesis. That pathway involves taking non carbohydrate precursors, such as pyruvate-- that's just one example molecule-- and converting that precursor to glucose. In other words, gluconeogenesis, in effect, is the reverse of glycolysis. As I said, the pathway going from glucose down to pyruvate is exergonic. So what about the pathway from pyruvate up to glucose? In order to make the pathway go from pyruvate to glucose, what nature did was invent several biochemical steps that are highly exergonic. That is, what we'll see is they're going to require ATP or some other form of energy input in order to make the entire pathway of gluconeogeneisis exergonic, as is required of all pathways. So we can indeed convert pyruvate to glucose, but it will require energy input to make the pathway of gluconeogenesis have a net negative delta G. That is, be favorable. The third thing that actually makes biochemistry tractable is the fact that nature uses only a very limited repertoire of chemical reactions. JoAnne showed us that despite the complexity of this overall metabolic chart, there are only about nine or 10 discrete chemical reactions that nature uses. So, at any given step, for example, in glycolysis, you really only have about nine or 10 options, and of that nine or 10, only about one or two, perhaps three, would be chemically reasonable. 
If you know your organic chemistry and these nine or 10 reaction types, you should be able to navigate the biochemical chart with comparative ease. The fourth point I wanted to make is perhaps one of the most important. It's that all biochemical pathways are regulated. Chaos would result, for example, if you made glucose and at the same time took that molecule of glucose and degraded it. That's an example of what we call a futile cycle. Sometimes futile cycles can be beneficial to a cell. For example, that could be a way of generating heat, but most of the time we want to avoid them. Because we usually want to avoid futile cycles, nature uses pathway regulation to avoid them. To provide directionality to pathways, what nature does is work thermodynamically irreversible steps into the front and back end of the pathway. Sometimes nature puts in an irreversible step in the middle of a pathway if that pathway has a branch point. That happens with glycolysis, as we'll see. Sometimes regulation is effected by putting covalent functionalities, such as a phosphate, onto an amino acid on a protein. That modification could increase or decrease the biochemical activity of that protein. Often a phosphorylated protein is highly active, turning on a pathway, and it is dephosphorylated when the pathway needs to be turned off. A second way to regulate a step in the pathway is by allosteric regulation of the enzymes of the pathway. In that case, a small molecule will interact with the enzyme in order to increase or decrease its activity. JoAnne taught us about allostery when she taught us how hemoglobin is regulated. To reiterate, nature uses these regulatable C enzymes at key places and overall metabolic pathways to enable us to be able to achieve the function of the pathway without wasting energy or resources. The last introductory point I want to make is that all biochemical pathways tend to be compartmentalized, at least in mammalian cells, although it's increasingly becoming obvious that even in bacteria there is some form of compartmentalization. That is, clustering of enzymes for a particular pathway in a particular area of the cell. For example, in a mammalian cell, the mitochondrian is the site of fatty acid beta-oxidation, or break down. The tricarboxylic acid cycle and enzymes of the pyruvate dehydrogenase complex are also mitochondrial. The cytoplasm is the site where we do fatty acid biosynthesis, most of gluconeogenesis, glycolysis, and the pentose phosphate pathway. Compartmentalisation helps keep the metabolic chart tidy and organized. With that in the way of an introduction, let's turn to my lecture notes, which I present in the format of storyboards. In the first storyboard, panel A, I give the definition of metabolism as the linked set of biochemical reactions by which we obtain and use free energy. That is, delta G, for life. We use that free energy for a lot of things, but the use is really divide into three areas. The first is to do mechanical work, the second is to generate concentration gradients, and the third is for biosynthesis. In a few minutes, I'm going to be giving you an example of all three. Panel B. Biochemists divide metabolism into two subcategories. Catabolism and anabolism. Briefly, catabolism consists of the energy yielding pathways, and anabolism is basically biosynthesis. We use the free energy that we generate through catabolism in order to assemble complex molecules by way of anabolism. Let's look at panel C. 
We're going to start 5.07 with a discussion of catabolism. That is the energy yielding pathways. By way of definitions, a reduced molecule is one that is abundantly supplied with electrons. Examples are carbohydrates, fats, and proteins. These are typically our foods. The process of catabolism involves finding a way to liberate the electrons from those reduced substances and transfer those electrons to mobile electron carriers, such as NAD-plus to form NADH, or NADP-plus to form NADPH. The process by which electrons are removed from a molecule is called oxidation, and the oxidation products are, for example, carbon dioxide that we breathe out, or in some organisms, lactate, or other simple molecules that are excreted. Simple, that is, compared to the complex reduced molecules that we consume as food. NADH and FADH2, molecules that JoAnne taught us, are what I'll refer to as mobile electron carriers. They can usually move around inside the cell, although sometimes they're embedded within enzymes. That is usually the case with FADH2. We consume an enormously large number of reduced substances due to the complexities of our diets, and catabolism involves taking the electrons out of this enormously vast array of substrates and transferring those electrons to a small number of reduced electron carriers. For example, NAD or flavins. In the process of respiration, which we'll come to later, the electrons are further transferred from the reduced electron carriers all the way to molecular oxygen. The reduction of molecular oxygen is a highly exothermic reaction. We're going to be able to use the energy that's generated by oxygen reduction in order to make ATP, and to do some other important biochemical activities. Now I'm going to introduce a physiological scenario and use it in order to introduce the pathway of glycolysis. In class at this point, I usually ask a student to stand up and sit down. As you can imagine, being called on in class creates more than a little bit of anxiety in the student. Let's look at panel D. The signal to stand up and sit down is processed by the brain, and an electrical signal then goes by the nerves to the muscles enabling the student to stand up. The anxiety of being called on by the teacher in class results in a nervous excitation of the adrenal gland, specifically in the adrenal medulla, which releases epinephrine. Epinephrine is also known as adrenaline. It goes to the muscles as well as other organs of the body, as we'll see later in 5.07. And the epinephrine will interact at cell membranes by way of transmembrane receptors in order to turn on pathways of catabolism, in order to generate the ATP that's needed to help the student deal with this stressful situation. This is sometimes called the fight or flight response. At this point in the class, if I really wanted to make my point and have it transferred to every student, I'd announce a pop quiz. The announcement of a pop quiz would cause kind of this cold feeling throughout your gut. That's actually the feeling of adrenaline preparing your body to deal with the stress situation, in that case, of the quiz. Let's look at panel E. As the scenario progresses, let me introduce a few biochemical players. Glycogen phosphorylase, glucose, glucose six phosphate, and glucose one phosphate. In panel F we see a muscle cell. The epinephrine in the blood is only going to reach a concentration of about 10 to the minus 10 molar. That's a very, very low concentration. 
The epinephrine is going to interact at the beta and alpha adrenergic receptors in the muscle cell membrane. This interaction results in the activation of a kinase, that is a molecule that transfers a phosphate group. And that kinase, which is called SPK, for synthase phosphorylase kinase, phosphorylate serine 14 on glycogen phosphorylase. Phosphorylation activates glycogen phosphorylase, which will then begin to degrade glycogen-- the polymeric storage form of glucose in us, that is, mammals-- to produce initially glucose one phosphate. Glucose one phosphate will then go through a number of transformations and eventually be converted into other forms of glucose and then other molecules that will generate energy. In a nutshell, this stand up, sit down scenario results in the generation of energy in a matter of seconds which is one of the things that we use metabolism for. Another thing I said earlier was that you use metabolic energy to generate concentration gradients. And it turns out that generation of concentration gradients is absolutely critical for muscle activity. The sarcoplasmic reticulum in our muscle cells must accumulate calcium and release it at precisely defined moments in order to enable the muscle to be able to work. That concentration grading of calcium has to be created, and it's created with the energy that we get from metabolism. Again, we'll see how calcium gradients help boot up energy generation in the last lecture, which deals with pathway regulation. |
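As a back-of-the-envelope aside on that 10 to the minus 10 molar figure, the sketch below (Python; Avogadro's number is the only added input) converts the circulating epinephrine concentration into molecules per milliliter of blood.

AVOGADRO = 6.022e23          # molecules per mole
epinephrine_molar = 1e-10    # mol/L, the circulating concentration quoted in the lecture

molecules_per_mL = epinephrine_molar * AVOGADRO / 1000.0   # 1 mL = 1/1000 L
print(f"{molecules_per_mL:.1e} epinephrine molecules per mL of blood")
# ~6e10 molecules per mL: vanishingly dilute in molar terms, yet receptors still
# encounter an enormous absolute number of hormone molecules.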
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_5_Problem_5_How_Mannose_an_Isomer_of_Glucose_Enters_Glycolysis.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Hi, I'm Dr. Bogdan Fedeles. I'm a research associate at MIT doing research with Professor John Essigman. I love biochemistry, and I'm going to help you solve some problems today. Let's work together through question 5 or problem set 5. This is an excellent question about sugar biochemistry. Specifically, we're going to find out how mannose an isomer of glucose gets to enter metabolism by being converted into fructose 6-phosphate. Now mannose is a common sugar that's found in polysaccharides and glycoproteins. Here is the transformation that converts mannose, shown here, to mannose 6-phosphate in first step. And then in the second step, to fructose 6-phosphate. So our goal here will be to figure out the mechanism for these two transformations. Let's take a look at the first transformation. We're converting mannose to mannose 6-phosphate. Now, as you guys have encountered in biochemistry, adding a phosphate group is a ubiquitous transformation and it's catalyzed by proteins called kinases. Now, kinases typically use ATP, the energy currency of the cell, as a phosphate donor, and transfer the furthest most phosphate, so-called gamma phosphate, onto the substrate, and leaving behind ADP. A similar transformation will be happening here in part one of the problem, where mannose is going to be converted to mannose 6-phosphate by a kinase. And the source of the phosphate is going to be ATP, which, in the process, will yield ADP. Now, in order for the kinase to work, to react with ATP, we also need magnesium. So a salt of magnesium with ATP is actually essential for this reaction to work. Now, let's take a look at the structure of ATP to gain a little bit more mechanistic insight. As you know, ATP, or adenosine triphosphate, has these three phosphate groups attached to a 5-carbon sugar called ribose and a base, adenine, which we're not going to draw out here. Now, the phosphates in ATP are typically labeled alpha, the one closest to the sugar, beta, and gamma. Now, in order for ATP to react, notice these four negative charges on the phosphates. They need to be neutralized in order for the molecule to become reactive. So this is what magnesium is doing. So at least early on in the reaction, magnesium is going to neutralize two of these four negative charges, while the other two negative charges will be neutralized by positively charged amino acids in the active site of the kinase. Once all the charges are neutralized, then the phosphorus at the gamma position will become available to be attacked by a nucleophile. In our case, the 6 hydroxyl of mannose. So therefore, the reaction proceeds as follows. A base in the active site will activate our hydroxyl, which then can attack the gamma position of the phosphate. And then the phosphate gets transferred to the mannose, and it's going to leave behind ADP. Now, notice during this reaction, magnesium, which was coordinating two of these negative charges, will probably move to coordinate these two charges or the newly formed charge here in ADP. And that would also help stabilize the ADP into the active site of the kinase. 
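Written as a single equation, the phosphoryl transfer just described is the following; the proton released from the sugar's 6-hydroxyl ends up on the active-site base, so it is left implicit here.

\[ \text{mannose} + \mathrm{ATP} \;\xrightarrow{\ \text{kinase},\ \mathrm{Mg^{2+}}\ }\; \text{mannose 6-phosphate} + \mathrm{ADP} \]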
This mechanistic insight basically answers part one of the problem. To talk about part two of the problem, we need to remember something fundamental about sugar biochemistry, namely that 5- and 6-carbon sugars exist in equilibrium between a linear form and cyclic forms. Now, sugars that you've encountered already, like glucose and fructose, they are in equilibrium between linear and cyclic forms. So let's take a look at them first. This is glucose. As you know, it's an aldehyde that has hydroxyl group on all the other carbons in this stereochemistry. Now, this is a linear form of glucose, and as an aldehyde, it can react with good nucleophiles, like these hydroxyls, to form cyclic hemiacetals. If the reaction occurs with a hydroxyl position four, we're going to close a five-membered ring hemiacetal, which is shown here. These kind of structures are called furanoses. So this is the glucofuranose. Now if, instead, we react with a 5-carbon, then we're going to close a six-membered ring, which is called a pyranose. This is glucopyranose. Now, while the sugars are in equilibrium between the linear and the cyclic forms, the cyclic forms tend to be predominant. So in equilibrium, it will be primarily 99% cyclic form and only about 1% linear form. Nevertheless, the presence of the linear form allows the sugars to undergo some interesting transformations, one of which we are exploring in this question. Similarly, here is fructose. Fructose has the carbonyl at the 2 position. It's a keto group, and then hydroxyl groups on all the other carbons. And just like glucose, it can form cyclic hemiacetals. So if the reaction happens between the carbonyl and the hydroxyl at position 5, it's going to close a five-member ring shown here. This is too a furanose. This will be fructofuranose. If the reaction happens with the hydroxyl at the position 6, then we're going to be closing a six-member ring shown here. This is fructopyranose. Once again, just like for glucose, the equilibrium, it's strongly shifted towards the cyclic forms. Nevertheless, there is enough of the linear form to allow certain kind of chemistry to happen. Now, going back to solving part two of our problem, that is converting from mannose 6-phosphate, which is a sugar with a six-member ring cyclic structure, to fructose 6-phosphate, a sugar with a five-member ring cyclic structure. Keeping in mind what we just discussed, that sugars, they're in equilibrium between these cyclic structures and their linear structures, linear forms, one good place to start to figure out how this transformation will happen is to write the linear forms of the two molecules here. Now, for the mannose 6-phosphate, we notice we have a hemiacetal functionality at carbon 1, which is attached to both the hydroxyl group and another oxygen. So this carbon will form an aldehyde in the linear form. So let's write the linear form of mannose 6-phosphate. All right, so notice at carbon 1, we're going to have an aldehyde. And then all the other carbons, we have a 6-carbon chain. Now, in terms of mechanism, this is just a reverse of the hemiacetal formation, and it's going to require a base to deprotonate the hydroxyl here in carbon 1. And then we're going to need to reprotonate at the hydroxyl in position 5 to form this hydroxyl here. 
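To put a number on the roughly 99-to-1 preference for the cyclic forms mentioned above (an illustrative calculation added here, taking that rough ratio at face value and using body temperature):

$$K = \frac{[\text{cyclic}]}{[\text{linear}]} \approx 99, \qquad \Delta G^{\circ} = -RT\ln K \approx -(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(310\ \mathrm{K})\ln 99 \approx -11.8\ \mathrm{kJ\,mol^{-1}}$$

So the cyclic hemiacetal is favored by only about 12 kJ/mol, which is why enough of the linear form is always present for the ring-opening chemistry discussed next.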
Now, something to keep in mind, though, the way this enzyme works is we don't really know if the enzyme is catalyzing this transformation as it's written here, or the enzyme is just binding the linear form that will be in equilibrium in solution of mannose 6-phosphate. Similarly, for fructose 6-phosphate, we can write its linear form. Notice, fructose 6-phosphate has, at carbon 2, a hemiketal functionality. This carbon is attached to both a hydroxyl and an oxygen. So that will open up to form a ketone. All right, this is our fructose 6-phosphate in a linear form. And notice at the 2 position, we have the ketone. Now, it would be a good idea at this point to number the carbons to see what we need to do, going from this linear form of mannose 6-phosphate to this linear form of fructose 6-phosphate. So here we have carbons 6, 5, 4, 3, 2, 1. And we have carbons 6, 5, 4, 3, 2, 1. And if we contrast the two structures, we notice that carbons from 3 to 6 in both cases are, in fact, the same. And we even have the hydroxyls in the right stereochemical orientation, in the right stereochemistry. And all we need to do is move the carbonyl group from the 1 position here in mannose 6-phosphate to the 2 position in the fructose 6-phosphate. So this transformation can be accomplished by an intermediate, which you have seen already in glycolysis, the intermediate that allows us to go from glucose 6-phosphate into fructose 6-phosphate, which is a cis-enediol. So it's essentially enolization with the hydrogen on carbon 2. So if we were to form the enol here-- which, as you know from the carbonyl video, happens when a base deprotonates and allows the electrons to move-- we form another hydroxyl on the 1 position. Because this is a cis-enediol, the two hydroxyls are going to end up on the same side of the double bond. Once again, let's number our carbons. 4, 3, 2, 1. And this is our cis-enediol. So we have two hydroxyls. They're both attached to a double bond. And cis means they're both on the same side of the molecule. And this allows the enzyme to basically switch back and forth between which one of these hydroxyls becomes the carbonyl. If we go backwards, we have the carbonyl at the 1 position. However, if we go forward, we can move the carbonyl to the 2 position. So that's just the reverse of the enolization reaction in which we will deprotonate this hydroxyl and reprotonate at carbon 1 to take us forward to generate the 2-keto group, which is fructose 6-phosphate in the linear form. Now from here, in the very last step, we just need to close the ring. So that's just a hemiketal formation. We have a base deprotonate the hydroxyl at position 5. And this will attack the ketone to form the hemiketal group that we see in fructose 6-phosphate. So now this is basically the curved arrow mechanism for going from mannose 6-phosphate to fructose 6-phosphate. Now, a key point to notice here is that this enolization reaction where the base removes this alpha hydrogen would not have been possible in the cyclic structures. If you look here, this hydrogen, the pKa is about 30. Now, once we form the linear structure here, the pKa of this hydrogen is now about 18. So that's like 12 orders of magnitude more acidic, and that allows the reaction to form this cis-enediol, which allows the formation of the fructose 6-phosphate. So the take home message here is that opening and closing the ring of the sugars allows a lot of interesting chemistry to happen. 
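A quick worked check of that acidity argument (using the two pKa values quoted above):

$$\frac{K_a(\text{open-chain }\alpha\text{-H})}{K_a(\text{cyclic }\alpha\text{-H})} = 10^{\,\mathrm{p}K_a(\text{cyclic})-\mathrm{p}K_a(\text{open})} = 10^{\,30-18} = 10^{12}$$

That is, the alpha hydrogen of the open-chain aldehyde is about twelve orders of magnitude more acidic than the corresponding hydrogen in the cyclic hemiacetal, which is why the ring has to open before the base in the active site can form the cis-enediol.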
Part three of this question asks us to comment on why, when going from mannose to fructose, we have to go through the mannose 6-phosphate intermediate. Now, this turns out to be a very common motif in carbohydrate chemistry, because adding a phosphate group is a way of regulation of which product is being formed. Taking a closer look at our transformation, we see that if we didn't have the phosphate group at the 6 position, this hydroxyl would be available and could potentially be involved in forming cyclic forms. As we recall our discussion on fructose, whenever we have a 6-hydroxyl available, the fructose can form the fructopyranose form in addition to the fructofuranose form. So by blocking this position with a phosphate, we're limiting the reaction to produce only one product, namely the fructofuranose. Additionally, adding a phosphate group to a sugar imparts a negative charge, and that allows the sugar to be trapped inside the cell and allows metabolism to occur much more efficiently. This is something that you've already encountered in glycolysis, where glucose, in the first step, is first converted to glucose 6-phosphate. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_7_Problem_1_Tracing_Labels_through_Pathways.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Greetings, and welcome to 5.07 Biochemistry online. I'm Dr. Bogdan Fedeles. Let's metabolize some problems. Today we're going to be talking about problem one of problem set seven. Now this is a problem where we are chasing labels through biochemical pathways. Although it sounds funny, it's actually one of the established ways through which we can test whether the mechanism we propose for these pathways is in fact consistent with what we observe inside the cells. In part a of this problem, we're going to be looking at glycogen, and try to figure out which carbons in glycogen end up being lost as CO2 in the pyruvate dehydrogenase step of the metabolism. Here is a shorthand representation of glycogen. As you know glycogen is a polymer formed of glucose monomers. Here is a cyclic form of glucose, and in glycogen we have these one four linkages as shown here. Occasionally we'll have one six linkages, as in the case for branches. But for simplicity we're not going to represent them here. Now we want to figure out which one of these carbons in glycogen-- we can label them starting from here. One, two, three, four, five, and six. Which one of these carbons is going to be lost as CO2 in the pyruvate dehydrogenase step? Now let's take a look at the pyruvate dehydrogenase reaction. As you remember, one of the endings of glycolysis is the pyruvate dehydrogenase reaction, in which pyruvate loses a CO2 molecule and forms acetyl-CoA, which later can enter the TCA cycle. Now this carbon in the carboxyl group of pyruvate-- I'm going to label it with a red dot. This is the carbon that is being lost as CO2. So we want to find out which of the carbons in our glycogen molecule ends up being this red dotted carbon that's being lost as CO2. To figure this out we have to backtrack from pyruvate all the way to the beginning of glycolysis to figure out where this carbon is coming from. Shown here is a layout of the entire glycolysis pathway starting from glycogen. Let me walk you through it very quickly. Glycogen, shown here-- we've shown only a couple of monomers attached to the glycogenin protein. It's going to get cleaved (by phosphorolysis) by glycogen phosphorylase to form glucose-1-phosphate, which a mutase then converts to glucose 6-phosphate shown here. And then becomes fructose-6-phosphate, fructose 1,6-bisphosphate. Then the aldolase reaction splits it into dihydroxyacetone phosphate and glyceraldehyde 3-phosphate, or GAP. Then the GAP dehydrogenase converts it to 1,3-bisphosphoglycerate. And then it would go down, 3-phosphoglycerate, 2-phosphoglycerate, phosphoenolpyruvate, and finally, pyruvate. And here, I've also written the pyruvate dehydrogenase reaction. Pyruvate becomes acetyl-CoA by losing the CO2. Once again, this carbon that's lost as the CO2 is the carbon that we want to track. So we're going to put a red dot on it. And as we just said, this carbon is the carboxyl group in the pyruvate. So the way to solve this problem is basically go backward through the pathway and figure out where this carbon is coming from in the original glycogen molecule. For these couple of steps, it's pretty clear. 
It's going to be the carboxyl in each one of these molecules all the way to 1,3-bisphosphoglycerate right there. So now this 1,3-bisphosphoglycerate is coming from GAP. So this carbon is, in fact, the aldehyde carbon in GAP. Now, as you guys know, triose phosphate isomerase interconverts between dihydroxyacetone phosphate and GAP. So this carbon in dihydroxyacetone phosphate is actually this carbon. Because the phosphate group is going to stay unchanged, and then the carbonyl group at C2 can interconvert with C1 to form an aldehyde here. So either of these two carbons, if they were labeled, they would end up being lost as CO2 in the pyruvate dehydrogenase step. Now, if we go backwards in the aldolase reaction, GAP and DHAP, when put together, these carbons are going to be carbons 3 and 4. So counting here, 1, 2, 3. This is a carbon that comes from dihydroxyacetone phosphate. And this is the carbon that comes from GAP, so carbons 3 and 4. And obviously, they're going to be staying carbons 3 and 4 all the way back to glucose, 1, 2, 3, and 4 right there. And also in glucose 1-phosphate, and consequently in glycogen as well. So to answer part one, we can now write here that carbons 3 and 4 of glycogen are going to be lost at the pyruvate dehydrogenase step as carbon dioxide. Part B of the problem deals with the metabolism of glycerol. As you know, glycerol is formed by the hydrolysis of triacylglycerides. Now, we are asked to trace a label from the C2 carbon of glycerol all the way to the first step, in which this carbon is lost as carbon dioxide. Let's first take a look at the metabolism of glycerol. As I've shown here, triacylglycerides can be hydrolyzed to form glycerol, which is 1,2,3-propanetriol. Now, as you know, glycerol is metabolized in two steps. First, we have a glycerol kinase that's going to form a glycerol 3-phosphate, as shown here. And then, we're going to use a dehydrogenase that uses NAD to oxidize the second carbon of glycerol to dihydroxyacetone phosphate. Then it can enter glycolysis very conveniently right here. And then it's going to continue getting metabolized towards pyruvate and acetyl-CoA, as we've seen before. Now, the second carbon in glycerol is C2 right here. I'm going to mark it with a blue square. So this carbon is right here, and it's going to end up right there. Now, this second carbon in dihydroxyacetone phosphate is going to be the second carbon in GAP and then second carbon here, here, here. Isn't this fun? Then the second carbon in pyruvate. Now, once the pyruvate decarboxylates, then it's going to be the carbonyl of acetyl-CoA. It's this carbon right here. Now, so far, this carbon has followed the metabolic pathway, but it has not left yet as CO2. Now, what happens to acetyl-CoA? It's going to enter the TCA cycle. Now let's take a look at the TCA cycle. As we just said, we're looking now at the carbon, the first carbon in acetyl-CoA here, the carbon that has the carbonyl group on it. So as I've shown here in the TCA cycle, the two carbons in acetyl-CoA are marked with this red line. And they will enter and combine with oxaloacetate to form citrate. Then citrate isomerizes to isocitrate. Then we're going to lose a CO2 molecule, which is this middle guy, to form alpha-ketoglutarate. But notice, the two carbons from acetyl-CoA are still in the molecule. Then we're going to lose another CO2 with this bottom one to form succinyl-CoA. But once again, the two carbons that came from acetyl-CoA are still here. 
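Before continuing around the TCA cycle, here is a minimal sketch of the part A bookkeeping (an added illustration, not part of the problem set; the carbon numbering and the aldolase/triose phosphate isomerase mapping are the standard ones used above):

```python
# Illustrative sketch: why glucose/glycogen carbons 3 and 4 are the ones lost
# as CO2 at the pyruvate dehydrogenase (PDH) step. Numbers are glucose carbons.

# Aldolase cleaves fructose 1,6-bisphosphate between C3 and C4:
#   glucose C1-C3 -> DHAP, glucose C4-C6 -> GAP (directly).
# Triose phosphate isomerase then turns DHAP into a second GAP, so each GAP
# carbon traces back to one of two glucose carbons:
GAP_TO_GLUCOSE = {
    "GAP_made_directly_by_aldolase": {1: 4, 2: 5, 3: 6},  # GAP carbon -> glucose carbon
    "GAP_made_from_DHAP_via_TPI":    {1: 3, 2: 2, 3: 1},
}

# GAP C1 (aldehyde) becomes pyruvate C1 (carboxyl), C2 -> C2, C3 -> C3.
# PDH releases pyruvate C1 as CO2, so the glucose carbons behind GAP C1 are lost.
def glucose_carbons_lost_at_pdh():
    return sorted(mapping[1] for mapping in GAP_TO_GLUCOSE.values())

print(glucose_carbons_lost_at_pdh())  # [3, 4], matching the answer to part A
```

Running this just prints [3, 4], the same answer worked out on the pathway diagram.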
So in the first TCA cycle, none of these CO2 will contain the label that came from the glycerol. Now, as we go through the TCA cycle, we reach this step where it's succinate. Now here, I stopped putting this red mark, because succinate is a symmetric molecule. So therefore, if these two carbons were coming from acetyl-CoA, at this point they will scramble. So we won't be able to tell whether it's these two carbons or these two carbons. Now, let me backtrack and put in the labels. So acetyl-CoA, we have this carbon came from the C2 of glycerol. So we'll find it in this carbon, this carbon, this carbon, this carbon. Now, we get to the succinate step, and we said, well, it was here. This would be the carbon that corresponds to succinyl-CoA. But because this molecule's symmetric, by the time we get to the malate step to add this hydroxyl group, it's going to be to the carbon next to the label or the carbon further from the label. Now, because the molecule is symmetric, we can't-- the fumarase enzyme cannot tell which carbon was labeled and which wasn't. Therefore, malate is going to be-- half of the molecules is going to have the label on this carbon, and half is going to have the label on this carbon. So I'm going to write like 1/2 square and 1/2 square. Similarly, when we get to oxaloacetate, the label is distributed 1/2 on one carboxyl group, and the other 1/2 is going to be on the other carboxyl group. So we've gone through the TCA cycle once, and we have not lost the carbon that came from the glycerol. But look what happens when we continue the TCA cycle a second time. So now let's say we combine with an acetyl-CoA that doesn't have any label at all. Now, these two carboxyl groups are going to be these two carboxyl groups in citrate. And as we discussed, both of these two groups are going to be lost as CO2 in these two steps. First is the middle carboxyl group that's being lost here. And this other carboxyl is going to be lost at the alpha-ketoglutarate dehydrogenase step. So the second time we go through the TCA cycle, we lose half of the label at the isocitrate dehydrogenase step and half of the label at the alpha-ketoglutarate dehydrogenase step. So I'm going to circle these. So to answer part B, the C2 carbon of glycerol is going to be lost as CO2 in the TCA cycle. But we have to go once through the cycle first, through which none of the label will be lost. And then the second time, first in the isocitrate dehydrogenase, then at the alpha-ketoglutarate dehydrogenase, we're going to be losing the CO2 that came from the C2 carbon of glycerol. As you might imagine, glycerol can also be used to produce energy. In fact, some bacteria can grow on glycerol using no other carbon source. Now, in part C of the problem, we're going to explore how much energy we can get from one molecule of glycerol. Let's first review the metabolism of glycerol. As we just discussed, glycerol enters metabolism with glycerol kinase, which then in two steps becomes dihydroxyacetone phosphate, which can enter glycolysis all the way to pyruvate, and then acetyl-CoA. Here, the pyruvate dehydrogenase allows us to lose one CO2, and then acetyl-CoA will enter the TCA cycle, where within one cycle, we're going to lose two more CO2s. So that's a total of three carbons that we lose. And that's exactly how many carbons we have in the glycerol. Now, what we need to keep track in order to evaluate how much energy we get from one molecule of glycerol is, whenever we need to, use ATP. 
For example, whenever we need to put in energy, or whenever we generate NADH or FADH2 molecules, which we can then take to the electron transport chain and generate ATPs out of them. So I've put together a list of the steps in the pathway where the energy balance is affected, either we need to use energy or we are generating energy in the form of ATP or in the form of redox cofactors, such as NAD and FAD. So the first step, glycerol kinase, we're going to need to spend one molecule of ATP. So I'm going to put a minus 1 here for ATP equivalents. Now, in the glycerol 3-phosphate dehydrogenase step, which is shown right here, we are generating one molecule of NADH. So plus 1 NADH. Now, for the purpose of this problem, we're going to use the convention that 1 NADH is worth about 3 ATP equivalents. Now, later on in the pathway, we're going to get to the GAPDH step where we generate one more NADH molecule. So GAPDH, another NADH molecule. That's equivalent to 3 ATPs. So moving ahead, we have the phosphoglycerate kinase step, where we generate 1 ATP, and then we have the pyruvate kinase step where we generate 1 more ATP. I have these written here. So plus 1 ATP, and plus 1 ATP. Now, finally, we know that pyruvate, it's going to be decarboxylated by the pyruvate dehydrogenase, and here, too, we're generating 1 NADH. So plus 1 NADH, that's going to be equivalent to 3 ATPs. And finally, the TCA cycle. Now, we have one molecule of acetyl-CoA that enters the TCA cycle. As you guys know, for every molecule of acetyl-CoA that enters the TCA cycle, we're going to be generating 1, 2, 3 NADHs. 1 FADH2, and 1 GTP. So the tally is 3 NADH, 1 FADH2, and 1 GTP. Now, GTP is equivalent to an ATP. FADH2 counts as 2 ATPs, and NADH counts as 3 ATPs. So that's a grand total of 12 ATPs. So putting all of this together from one molecule of glycerol, when fully metabolized to CO2, we get 22 ATP equivalents. So that's the final answer for part C. From one molecule of glycerol, we get about 22 ATP equivalents. In part D of the problem, we're tracing the same labels we had in part A, but instead of tracing them to CO2, we're going to trace them to the amino acid alanine. As you know, one way to produce alanine is by transamination from pyruvate. Since we already tracked the label to pyruvate, we need to know how we convert pyruvate into alanine. Let's take a look at that reaction. As you know, alpha-keto acids, such as pyruvate, can be converted into amino acids by a transamination reaction. Here, we're going to use another amino acid to donate the amino group to the alpha-keto acid pyruvate to form alanine. Now, all these transamination reactions are catalyzed by PLP or pyridoxal 5'-phosphate, which is a cofactor derived from vitamin B6. So what's left to write in this transformation is the other amino acid/alpha-keto acid pair. So basically, where is this amino group coming from? Typically for most transaminases the other pair is glutamate alpha-ketoglutarate. So I'm going to have glutamate donate the amino group, and in the process it's going to become the alpha-keto acid alpha-ketoglutarate. So in this way, pyruvate becomes alanine. Now, we were tracking the label from this carbon, so the carbon that will be lost as CO2 in the pyruvate dehydrogenase reaction. So that is this carbon right here in pyruvate. So in the transamination reaction, this carbon becomes the carboxyl carbon of alanine. 
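Returning briefly to the part C tally above, the arithmetic can be collected in a short sketch (an added illustration using the same conventions as the problem: 1 NADH counts as 3 ATP equivalents, 1 FADH2 as 2, and 1 GTP as 1):

```python
# Illustrative tally of ATP equivalents from one molecule of glycerol,
# following the bookkeeping conventions used in part C above.
ATP_EQUIVALENTS = {"ATP": 1, "GTP": 1, "NADH": 3, "FADH2": 2}

steps = [
    ("glycerol kinase",                    {"ATP": -1}),
    ("glycerol 3-phosphate dehydrogenase", {"NADH": +1}),
    ("GAPDH",                              {"NADH": +1}),
    ("phosphoglycerate kinase",            {"ATP": +1}),
    ("pyruvate kinase",                    {"ATP": +1}),
    ("pyruvate dehydrogenase",             {"NADH": +1}),
    ("TCA cycle, one acetyl-CoA",          {"NADH": +3, "FADH2": +1, "GTP": +1}),
]

total = sum(ATP_EQUIVALENTS[cofactor] * n
            for _, yields in steps
            for cofactor, n in yields.items())
print(total)  # 22 ATP equivalents, the answer to part C
```

The minus 1 for glycerol kinase and the plus 12 from one turn of the TCA cycle are what bring the total to 22.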
So if you were to start with a glycogen that was labeled at the 3 or 4-- the carbons 3 and 4, that label would be lost as CO2 in the pyruvate dehydrogenase reaction that we saw in part A. But also, that label would be incorporated in alanine at the carboxyl group of this amino acid. Parts E and F of the problem deal with tracing labels to the amino acids glutamate and aspartate. Now, both of these amino acids have their corresponding alpha-keto acids as part of the TCA cycle. So let's first take a look at this TCA cycle. Here, we have the TCA cycle where I highlighted alpha-ketoglutarate going into glutamate through a transamination reaction. Alpha-ketoglutarate and alpha-keto acid can undergo a PLP-catalyzed transamination to form glutamate. And similarly, oxaloacetate, another alpha-keto acid can transaminate to form aspartate. Now, in part B of the problem, we were looking at the label present at carbon 1 in acetyl-CoA. And we said that this label will stay inside the intermediates of the TCA cycle for the whole round. Now, when this label gets the alpha-ketoglutarate, it's going to be on this carboxyl group. So in the transamination reaction, the label is going to end completely on the furthermost carboxyl in glutamate. So that takes care of part E. Now, if we continue chasing this label, once we get to the succinate, the label is going to split half and half between these two carboxyl groups, because we cannot tell which one-- because the molecule is going to be symmetric. Similarly, for fumarate and in malate as well. So in oxaloacetate, the label is going to be on both of the carboxyl group. Half of the molecules will have it one. Half of the molecules will have it on the other. So therefore, the aspartate is going to mirror that label distribution, 1/2 label on one carboxyl, 1/2 label on the other carboxyl. So that should answer the final part of this problem. Now, I hope this problem gave you a better understanding of what it means to chase labels through biochemical pathways, and also that chasing labels can help us better understand the mechanisms of biochemical transformations. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | 5_Enzymes_and_Catalysis.txt | SPEAKER: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: OK, so welcome to class today. Today, I'm going to be talking about one of my favorite topics-- enzymes and catalysis. And what I would like to do is give you an outline of where we're going today. First, we're going to define what a catalyst is. And we're going to focus on enzymes as catalysts. Then what we're going to do is describe the theory of catalysis. And we'll show how the theory can account for all experimental observations or most experimental observations. We'll then talk about the mechanisms of catalysis, and we'll see that there are three basic mechanisms. I won't write them out, but we'll come to them. And then if time allows, we'll talk about another property of enzymes. These are all focused with amazing rates of reaction. And the second property of enzymes, besides the fact that they can accelerate reactions by a million to a billion fold, is their specificity. So that's where we're going. And what I'd like to do in the very beginning is show you why-- spend a little time to show you why enzymes are important. Why do you care about enzymes? That's why you care about enzymes. Look at this mess. That's what's going on inside your body. There are thousands of reactions going on inside your body. Without enzymes, no reaction. So you must care about enzymes. So what we're going to see over the course of the semester is that we can break down this mess into a few basic reactions. OK, so here is Waldo. And over the course of this semester, as you've seen many times, we walk through central metabolism and all of the reactions. Now, a second thing about enzymes that I think will be what you guys do for a living if you decide to become biochemists and enzymologists is can we take our understanding of how these amazing catalysts work and design our own protein catalysts to do any reactions we want not involved in the 10 or 12 basic reactions we have going on inside our body? And we can't do that now, but I would argue that understanding catalysis is a key requirement for getting to the point where we can actually do catalytic design. And the third thing that I think is really important is that 40% to 50% of all the drugs we presently use in treatment of antibacterial infections, anti-viral infections, anti-cancer infections are all inhibitors of enzyme-based reactions. And understanding catalysis helps us to design better inhibitors. So understanding catalysis is central to many things that are important to all of us in society. So let me just tell you how I got excited about enzymes. So I went to graduate school. Never had a biochemistry course. They didn't do anything about biochemistry at the molecular level. When I went to graduate school, I went to a lecture in the first year of graduate school that was given by a faculty member at Stanford. And he talked about an enzyme called lenosterol cyclase that converts a linear molecule. So here's the linear molecule, but I have it folded up into four rings. And these four rings provide the basis for all steroids like estrogen and testosterone and cholesterol. And look at what this reaction does. 
One enzyme in a single step converts this linear molecule through a series of cascade transformations in hydride and methyl shifts into this molecule, putting in six asymmetric centers in a single step in 100% yield at 37 degrees at pH 7. I said, my god, why do I want to be a chemist? You sweat. There are no blocking [INAUDIBLE]. You have to sweat to put in any kind of an asymmetric center, and here, this little protein has figured out how to do all of this under really mild conditions. And so this was a transformative experience for me. I remember the lecture clearly because I thought it was so amazing and I'd never seen that enzymes could catalyze reactions like this. So enzymes really are amazing. So what you want to do now is start by defining what a catalyst is. And a catalyst, it can be for those of you who have had more chemistry, it can be an inorganic ion for example. It can be a small organic molecule. But for us, we're going to be focused on large macromolecules. And the macromolecules, we'll see that we're getting focused on, could be proteins or RNA. But most of the reactions found in our body are catalyzed by protein catalysts. And these catalysts increase the rate of the reaction without themselves being changed during the reaction. And furthermore, while they can increase the rate of the reaction, they don't affect the overall equilibrium of the reaction. They just increase the rate of approach to equilibrium. So they have no effect on equilibrium in solution, but increase the rate of approach to equilibrium. So what I want to do now is define some of the basic properties of the catalysts that we'll be talking about for the next 15 minutes or so. So the first thing is that the catalysts we're going to be focused on are enzymes. Remember, we've spent the last few lectures talking about proteins. Enzymes are simply proteins, but then we see that they have special regions in the protein structure which allow them to accelerate rates of defined reactions. I also will mention that we have inside of us a machine called the ribosome. And the ribosome is the machine that makes proteins, makes polypeptide bonds. We're not going to talk about that in 507, but we talk about it in 508. And the amazing observation was made really initially by the seminal experiments by Harry Noller at UC Santa Cruz that you don't need any proteins to make peptide bonds. That was heresy at the time. In 2009, Steitz won the Nobel Prize for the structure of the ribosome and Harry didn't get the Nobel Prize. Bad. He's the one that made the seminal discovery, although the structure of a ribosome, which is 2.3 megadaltons, is really sort of spectacular. I still get goosebumps when I think about that structure that was published in 2001. But Harry didn't get it. Anyhow, so that was a digression. So that took a few minutes off my 50 minutes. Anyhow, we're going to be focused on enzymes as catalysts. So why are enzymes important? They're important because as I already told you, they accelerate the rates of reaction 10 to the 6-- a million-- to 10 to the 15-fold. Whoa. Can you imagine that? Essigmann always used to say to me, give them an example. That's a lot. It's faster than a speeding bullet. Do you know where that came from? Faster than a speeding bullet? See, this is when I have a disconnection with my audience. It's a bird, it's a plane, able to leap tall buildings in a single bound. Superman, course five. That's where our course five logo came from. So let me just give you an example of this. 
And so this is taken from an article by Wolfenden, and this is the expanse of rates of reactions that enzymes can catalyze that also can occur in solution. So if you look down at this end, the half life of adding water to CO2 is five seconds. That's pretty fast. Why do you need water to hydrate CO2? Anybody got any ideas? Where have you seen that in the last few lectures? Hemoglobin. Why do you need that? Because in your tissues, all of the fatty acids and the glucose get broken down to CO2. The CO2, where does it come out? You exhale it. Somehow it has to be carried around and into your lungs from your tissues. And there's a key enzyme called carbonic anhydrase that accelerates even this very fast reaction by a million fold. Let's look at another one that might be familiar to you. Let's think about peptide bond hydrolysis. We just told you the ribosome makes peptides. What about peptide bond hydrolysis, which plays a major role in cell death and blood coagulation and controlling the levels of proteins inside the cell? If you look at the half life of peptide bond hydrolysis, 450 years. That means if we needed this reaction in our lifetime, it wouldn't ever happen. So if you actually look at the rate acceleration, proteases, which hydrolyze peptide bonds, have rate constants of about 50 per second. This rate constant is about 10 to the minus 11 per second. That's a rate acceleration of 10 to the 12. So without these kinds of enzymes and many other kinds of enzymes, we would not be alive and we would not be able to function. So the description of rate accelerations is given by a term we're going to derive in the next lecture-- kcat over KM. A kcat is a turnover number. It tells you how good your catalyst is in terms of per second. KM has a concentration dependence. So this is a second order rate constant-- concentration inverse, time inverse. And this is what we use to think about how efficient enzymes are, as we'll see in the next lecture. So what I want to show you here is another graph that was made by Wolfenden, whose data we were talking about in the previous slide. And what I want to do is show you his comparison of enzyme catalyzed reactions and non-enzyme catalyzed reactions. And we just heard with peptide bond hydrolysis, a rate acceleration of about 10 to the 12. That's a lot. What do you notice immediately about enzyme catalyzed reactions? The kcat over KM is on the order of 10 to the 6 to 10 to the 8 per molar per second. Does that ring a bell with anybody? Where have you seen a number like 10 to the 6 to 10 to the 8 per molar per second? What that is is a diffusion constant of any two molecules finding each other in solution. So what is that telling us? That's telling us that inside the cell, enzymes have evolved to be so efficient that the rate-limiting steps are going to be finding each other in solution. It's physical. It has nothing to do with the chemistry. So you've had billions of years to figure out all this chemistry, and what limits everything-- and we'll come back to this in a minute-- is the enzyme and the small molecule finding each other. And so that's where this number-- of 10 to the 6 or 10 to the 8 per molar per second comes from. If you look at the non-enzymatic reactions, we just talked about hydration of CO2 versus the enzyme catalyzed reaction, what you see is that they're all over the place. So the staggering rate accelerations of 10 to the 6 to 10 to the 15 that you see are really based on the rates of the non-enzymatic reactions. 
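To connect the numbers in this comparison (a back-of-the-envelope check added here, taking the quoted 450-year half-life and the roughly 50 per second protease turnover at face value):

$$k_{\text{uncat}} = \frac{\ln 2}{t_{1/2}} \approx \frac{0.693}{(450\ \text{yr})(3.15\times 10^{7}\ \text{s/yr})} \approx 5\times 10^{-11}\ \text{s}^{-1}, \qquad \frac{k_{\text{cat}}}{k_{\text{uncat}}} \approx \frac{50\ \text{s}^{-1}}{5\times 10^{-11}\ \text{s}^{-1}} \approx 10^{12}$$

which is where the factor of 10 to the 12 for proteases comes from.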
And the enzymes have evolved-- most of them have evolved over billions of years to be incredibly effective at what they do. So the other thing that I wanted to say about enzymes at this stage is that enzymes are usually in addition to being great catalysts, they're also-- you learn, I think, if you've seen enzymes before that they are very specific for the substrates, which I'll call S and we'll come back to this in a minute. So they only react-- you have hundreds of metabolites inside of our body. That only will pick up and react with one of those metabolites. But in reality, I think what we found over the last 15 years or so is enzymes are not all that specific. They are specific for what they encounter inside us. So if you take them out as a biochemist and start messing around with them, they aren't anywhere near as specific. They don't have to be that specific because they never encounter these molecules inside the cell. So they are very specific for substrates in vivo. And in fact, many of them are promiscuous in vitro. And I think that's something that's been under-appreciated. So this is number three. I wanted to talk about specificity. Number four, enzymes in general, if you look at that metabolic chart, almost all those reactions can be subdivided into 10 to 12 reactions. And those 10 to 12 reactions, even though it looks like a jungle and a mess, are found in the lexicon that you have been given in the first lecture. So that lexicon provides a framework to think about all of primary metabolism. Now, in reality, there are many other kinds of reactions. But the ones that you're going to see in 507 can be limited to 10 to 12 reactions. So enzymes have a limited repertoire of reactions in primary metabolism. And so in this case, let me give a plug for the chemists. Chemists have the whole periodic table. Do we have a periodic table here? No. We're in the wrong department. We're in the wrong building. Anyhow, we have hundreds-- not hundreds-- we have 50 elements where we can use to catalyze reactions. We can do all kinds of reactions catalytically, and we can do it with something small, like a proton, or something small, like a metal with a little organic spinach hanging off of it. But what are we doing with enzymes? We have these big huge molecules. So there's a playoff. Enzymes have a very limited repertoire of reactions they catalyze, while chemists actually are limited by their imagination to catalyze these reactions. However, as the world becomes more and more green, chemists are no longer allowed to use metals. For example, they can be toxic to people. And so people are rethinking and refocusing on developing green catalysts. So the question that you can ask yourself, is there any way that enzymes can enhance their repertoire of reactions that they can catalyze? And they can. They do that by using the vitamins on the vitamin bottle. So enzymes have a limited repertoire, but they increase this repertoire using vitamins. This is what we eat out of our vitamin bottle that are converted into co-factors. So the vitamins we eat have to be subtly modified and then get incorporated into the protein catalysts and greatly expand the repertoire. So many of you probably-- how many of you take vitamins? Everybody should be taking vitamins. Why don't you take vitamins? Anyhow, so you can see vitamin B6, vitamin B2, vitamin B1. 
And over the next three weeks or so, we talk about the chemistry of how these vitamins interact with the protein catalyst to increase the repertoire of reactions to 10 that actual enzymes can catalyze. But in addition to the vitamins, I want to make mention of another type of catalyst. So most of the vitamins are organic molecules. One also needs to think about inorganic molecules. Inorganic molecules-- copper, zinc, iron, all those if you look at your vitamin bottle are at the bottom and they're labeled inorganic. And they almost always in introductory biochemistry courses get swept under the table. And in fact, many biologists don't think about metals at all. But 30% to 35% of all the enzymes have metals incorporated. And these metals are essential for the repertoire of reactions that enzymes can catalyze. So without going into any details, I just want to whet your appetite. Look at this guy. Well, what are we looking at here? These yellow things are sulfurs. The purple thing is molybdenum, and the green things are iron. And in the middle of all these irons is this silver thing, which is a carbon bonded to four irons. Most of you probably aren't sophisticated enough yet to think that's amazing, but it was only two years ago that the x-ray crystallography where we can look at things at atomic resolution was good enough so we can see that guy. So what does this guy do? What's its function? Pretty damn important. It converts nitrogen into ammonia. So it turns out to be an eight electron reduction because not only do you produce ammonia-- two molecules of ammonia-- but you also have to produce a molecule of hydrogen during that reaction. So this is the basic way we control nitrogen-- one of the basic ways we control nitrogen in the environment. So chemists would love to understand how this spectacular inorganic molecule can mediate what turns out to be a six electron reduction. Another molecule-- co-factor molecule that's all metal-based that I think is equally amazing is this one. We recently got an atomic resolution structure down to 1.5 angstroms. It has four manganese and a calcium. Anybody have any idea what this does? This is the co-factor that takes water in the presence of light-- sunlight-- and converts it to oxygen gas. Why is that important? Because we need oxygen gas to breathe. So anyhow, on this one co-factor mediates that transformation. Pretty amazing. And that's a major focus of people who want to think about how these catalysts actually work, but we won't be discussing that further. We won't be discussing that further in 507. So I just wanted to point out here that, again, enzymes have a limited repertoire. Their repertoire is much less than what chemists can do, but they're amazingly efficient at what they do. So I would argue if we really could understand the basis of catalysis and how these things evolve to be able to do these amazing transformations, we might, if I was able to come back 50 years from now, see that we had designer proteins all over the place that could catalyze the specific reactions that we're interested in, not the ones that are found in our bodies. OK, so the next thing I want to briefly mention is that enzymes, so if you look at an enzyme, it's a big macro molecule. We've looked at these in the last few lectures. The region where the chemistry or catalysis occurs is called the active site. And we've seen this before in the TIN barrel superfamily of proteins. And so there's a region of about 10 angstroms. 
We have your amino acid side chains that I asked you to try to remember and think about. We'll see those are key to making these rate accelerations so fantastic. This is where the chemistry happens. But I think it's now clear from studies that have been done in the last 15 years or so that this is not true. One can make changes out here or here. One can change the amino acids and totally turn off the enzyme or turn on the enzyme. So chemists use these small little molecules, biology uses big huge molecules. Everybody initially focused on this one little region where you can see the chemical transformation occurring. But what about the rest of the molecule? The rest of the molecule is also important. You cannot remove, in general, all of this spinach and come up with a catalyst that has these amazing rate accelerations. So the active site is very important. But so are specific amino acids outside of the active site. And people have studied this because of the technology of site-directed mutagenesis, which many of you have probably done in either 702 or in 335. So what implications does that have? And I just want to mention one more thing. I don't want to spend a lot of time on this, but our thinking about catalysis is changing dramatically and has changed and continues to change. I continue to study this, even to teach 507. Because it turns out, how do changes out here govern what's going on in this region where you think the chemistry happens? And it governs that chemistry because of conformational changes and movements. So another thing about enzymes that we need to do more thinking about-- and this is a major focus of what people are thinking about now-- is dynamics in enzyme catalyzed reactions. And so if you look at the time scale-- and I made you think about size scale in the first few lectures. Like how long is a hydrogen bond? How long is a carbon nitrogen bond? A carbon oxygen bond? You also need to think about time scales. And this is particularly true in the case of catalysis. What happens on the femtosecond time scale? That's pretty fast. That's a vibration of the bond. But what are you doing during an enzyme catalyzed reaction? You're breaking the bond and you're making the bond. So we'll see that the transition state of the reaction happens on the femtosecond time scale. Yet, if you look at the criterion kcat, which is the turnover number of the enzyme and is given in time inverse, the values are usually 10 per second to 1,000 per second. So they're on the millisecond to second time scale. So catalysis is happening way up here. Now, I've just told you that mutations outside the active site can affect catalysis, and so one also needs to think about the time scales in between these two extremes. I've also told you that finding an enzyme, finding its substrate in solution, can often be the slow step. So here you have nanosecond, microsecond time scales, and I'm not going to spend any time on this, but you come back and look at this and think about it: you've got all these side chains of your amino acids. You might have loops that are moving in and out and covering the active site. All of this dynamic interaction plays a key role in catalysis, making the enzyme as a whole important in the overall catalytic process. So that's my introduction to you for what an enzyme catalyst is. And so now what I want to do is look at the second bullet we were going to talk about, which I've already lost. How do we describe catalysis? 
How do we try to conceptualize in a theoretical framework all of the experimental observations that have been made for decades? And there are many things that are wrong with this theory, but this theory has stood the test of time, not only for biochemists, but also for chemists. And I think it helps us to think about how enzymes are able, with just the amino acid side chains of a protein, to give us these amazing rate accelerations and specificity that we actually observe. So what we want is a theory to conceptualize catalysis. And this is transition state theory. And this is-- many of you have seen this in some form before, either in freshman chemistry or maybe if you've had 560. People go through and derive all of the rate equations. What I'm going to do is just show you a picture of how this theory helps us think about these catalytic transformations and then how this picture helps us think more specifically about these amazing rate accelerations that we actually observe. OK, so I can't remember what's on the next slide, but this is a picture you often see when you're thinking about catalysis. So this is chemical catalysis, but again, chemical catalysis, biological catalysis, really the same basic principles hold that we have some substrates A and B going to products and what's required. So I think all of this is intuitive, but if you have two things coming together, they have to come together in exactly the right way to be able to make a bond. They have to remove all the solvent from outside them. They have to come together with enough force to be able to get over the barrier, whatever it is, to break one bond and to form a new bond. So that's true of all reactions and everybody faces the same issues in terms of conversion of substrate into products. And the highest point along the reaction coordinate-- so this is what we call a reaction coordinate diagram. And this is energy. So the highest point along the reaction coordinate diagram is called the transition state-- TS. This is transition state theory. OK, TS theory. And this is where-- this is the point where we can never isolate it because this is a point where all the chemistry is happening. The bonds are being made and broken. And the lifetime I just showed you on the previous slide is fast-- femtoseconds. So you can never isolate a transition state. Everything needs to be aligned. That doesn't come free of charge. You have to do a lot of work to get to the stage where you can get this chemistry to happen. That's what our catalysts are doing. And then bang, the reaction is over at that time. So this is another way of describing the transition state of the reaction. And in reality, this is the cartoon you see in most introductory textbooks that are describing rates of reaction. But the reaction coordinate is much, much more complicated. And that's true in enzymatic reactions as well. So it's true of chemical reactions, it's true of enzymatic reactions. So you might have a plus b, and they might form two or three intermediates along the reaction pathway where you have many transitions-- you have many transition states along the reaction pathway. And each of these transition states would be non-isolable. But what about these little valleys? These little valleys are where you might have a chance to see an intermediate during the conversion of a plus b into p plus q. 
So an intermediate-- and if you're interested in studying catalysis and the chemistry of the reaction and you need to define what these intermediates are, they can be higher or they can be lower in energy. They may be easy to isolate, not easy to isolate. But they have all covalent bonds intact. So in contrast to the transition state where the bonds are being made and broken, you can never isolate this. You have a chance to be able-- if you're clever and creative, which people that study mechanisms are, you can actually look at the intermediates along the reaction coordinate. So that's a reaction coordinate diagram. We're going to come back to these because I think they really help us to conceptualize how enzymes can go about achieving these fantastic rate accelerations. So from transition state theory, one assumes the following-- I'm not going to go through the details of this at all. But the key point that one needs to think about in transition state theory is that-- and this was first put forth by Linus Pauling. Who's Linus Pauling? He's my hero. OK, Linus Pauling, he's the vitamin C guy. He lost it when he got old, but in the early days, he's the one that could take a polypeptide chain-- just a string of amino acids-- and he sat there and he played with it. And lo and behold, he says, we're going to have alpha helices in proteins. How amazing is that? You've heard me talk about him before. He was the one that I think conceptualized-- first conceptualized-- how an enzyme might catalyze a reaction. What do you want to do to catalyze a reaction? You want to lower this barrier. So how do you lower the barrier? You don't want the enzyme to bind the substrates tightly, and I'll come back to this in a minute. You want to bind the transition state tightly. So he put forth in the 1940s that the way enzymes might be able to catalyze their reactions is by tightly binding-- uniquely and tightly binding the transition state of the reaction. And I think that turns out to be a really good way to conceptualize most enzymatic reactions. Now, transition state theory tells us, which again is not so appealing to me but it works to describe most experimental data, that the ground state-- so this would be the ground state-- is in equilibrium with the transition state. So you might ask yourself, how the heck can you ever be in equilibrium with something with such a short half life? That's a good question to ask. But in fact, this framework-- transition state theory-- allows us to be able to explain almost all the experimental observations that we make as both chemists and biochemists. So this goes through and derives that equation, which I'm not going to do today. In the old days, I used to spend a lot of time deriving equations. Nowadays, I don't derive equations anymore. But the key equation that you need to think about is shown here. And the consequences of this equation are quite simple. It tells you that the rate constant for the reaction-- so from transition state theory, the rate constant for the reaction. And where is this rate constant? Where does this rate constant come from? A is going to some product p. You can measure it experimentally. So k observed, an experimentally measurable parameter, is equal to a bunch of constants: the transmission coefficient-- this should be a kappa-- times Boltzmann's constant, times the temperature in degrees Kelvin, divided by Planck's constant, times e to the minus delta G dagger over RT. So this is the equation. This is a constant. This is Planck's constant, Boltzmann's constant. 
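Written out, the expression being described is the standard transition state theory (Eyring) equation, with the same symbols named in the lecture:

$$k_{\text{obs}} = \kappa\,\frac{k_B T}{h}\,e^{-\Delta G^{\ddagger}/RT}$$

Here kappa is the transmission coefficient, k_B is Boltzmann's constant, T is the temperature in degrees Kelvin, h is Planck's constant, and delta G dagger is the activation free energy.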
This you can measure experimentally. Kappa is telling you basically-- the transmission coefficient is telling you the frequency that this transition state breaks down to form products versus going back to starting materials, and in general, it is on the order of one in most reactions. And so the key thing to remember from this equation, which explains the data and helps us to think about catalysis, is that the rate of the reaction is inversely related to the activation barrier. So what you want to do, this equation tells you, is you lower this barrier. The rate of your reaction becomes faster and faster. So the whole goal is, then, to figure out how to lower the barrier. If you can lower the barrier, this theory predicts that the rate of your reaction will be faster. So that's what we want to be able to do. The rate constant is inversely related to the activation barrier. And so now let's look at an enzyme system specifically. So I'm going to draw the same kind of reaction coordinate that we've drawn over there for a chemical reaction. And I'm going to use a simple equation. E is the enzyme, S is the substrate, and they form an enzyme substrate complex. The substrate binds in the region that we call the active site over here. Somehow, the enzyme is able to convert it into product. Now, most reactions are much more complicated than this. You have many substrates. You have many products. But it doesn't affect anything in terms of thinking about the problem. And then in the end, the product dissociates. So that's a simple reaction. You get something in there, a catalyst works on it, it gets converted to the desired product, and the product is released. So what I've told you now a couple of times is that enzymes have evolved to such an extent that often the physical steps and not the chemistry is rate limiting. So what are the physical steps? Here are the physical steps. Enzyme finding substrate in solution, that's a physical step. What is it limited by? It's limited by diffusion control. How fast can they find each other in solution? That's the number 10 to the 8 per molar per second that limits most enzyme-based reactions that I showed you several slides ago. What about this? We have product dissociation. What about product dissociation? That's a physical step too. You've made the product, and it's sitting around, but in order for the enzyme to turn over, again, to free up the active site, the product has to come off so it can bind another substrate. And here is the chemistry. Ah, that's what I care about. But what happens, now, is that if these steps are rate limiting, then you can't see the chemistry. So it's really challenging, often really challenging, to study the chemistry of a reaction because the rate limiting steps have nothing to do with the chemistry. So let me just draw a diagram. So you can draw a reaction coordinate diagram. And so what you have is some enzyme plus substrate and it can form an enzyme substrate complex. You have a transition state of your reaction. The enzyme product complex can then dissociate to form enzyme plus product. So what you need to think about if you're thinking about how to accelerate the reaction is what is the bottleneck in the overall reaction? You don't want to start mucking around with something that doesn't control the rate of the reaction. So you need to know what the rate limiting step is in the reaction. And the rate limiting step is the highest barrier along the reaction coordinate. OK, now I've already told you that this is a simple case. 
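Before going back to the reaction coordinate picture, it is worth putting rough numbers on that inverse relationship (an added illustrative calculation, not from the lecture). The ratio of two rate constants depends exponentially on the difference in their barriers, where delta delta G dagger is the amount by which the barrier is lowered:

$$\frac{k_{\text{fast}}}{k_{\text{slow}}} = e^{\Delta\Delta G^{\ddagger}/RT}, \qquad RT\ln 10 \approx (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})(2.303) \approx 5.7\ \mathrm{kJ\,mol^{-1}}$$

So each roughly 5.7 kJ/mol of barrier lowering buys a factor of 10 in rate near room temperature. On that scale, a 10 to the 6 rate acceleration corresponds to stabilizing the transition state by about 34 kJ/mol, and 10 to the 15 to about 86 kJ/mol.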
We have one substrate getting converted into product. Most enzymatic reactions are going to have many barriers. And so in order to affect the overall rate of the reaction, you need to figure out what's rate limiting, and somehow the enzyme has figured out how to lower the barrier to make this reaction easier to occur. Remember, I just told you that the rate constant is inversely related to this activation barrier. So if we can lower this barrier somehow, what we're going to see is that now we have a faster overall rate of the reaction. So this theory allows us to think about what we need to do to make these catalysts actually work with rate constants 10 to the 6 to 10 to the 15 times faster than non-catalyzed reactions. And I want to say one other thing before we move on. As with everything, I think it's good that we're in a field that's rapidly changing. Remember, I told you you have to think about dynamics. We no longer think about a single reaction barrier. That's in most of the textbooks now. Really what we think about is we bring dynamics into this. I told you things outside the active site can modulate what's going on inside the active site. What we think about is a reaction landscape. And so one has many barriers that one has to get over. Almost all reactions involve multiple barriers. So you've got to figure out which one is rate limiting and lower that activation barrier. And enzymes, if you think about this, they're huge. Do they all fold exactly the same way? No. So we always think we have a homogeneous enzyme. No. If any of you work in UROPs, you'll find that out pretty fast. You use recombinant technology to fold things inside the cell. They don't all fold right. So you have all mixtures of things. And so you get a reaction landscape. And so this axis is bringing in the dynamics that I told you about earlier on that you need to think about-- the conformational changes that occur every step along the reaction pathway. The enzyme is moving at all kinds of steps, reorienting everything to get the chemistry exactly right. So what I want to do now is-- so that gives you a way to conceptualize rate accelerations. Now what I want to do is tell you what the major mechanisms are that the enzyme uses to enhance the rates of these reactions. How do we lower these energy barriers? So let me see. I need to start erasing somewhere. OK, so we're on the third bullet over here. Mechanisms of catalysis. And what we're going to be talking about is multiple mechanisms of catalysis. We're going to be talking about binding energy, which is the one people have most trouble thinking about. We're going to be talking about general acid, general base catalysis. And we're going to be talking about covalent catalysis. And we will see that over the course of the rest of the semester, when we start talking about metabolic pathways, all of these mechanisms are used in almost all enzyme catalyzed reactions to give us these tremendous rate accelerations. What I want to do-- that's the first time I did that. That wasn't too bad. OK, so what are the mechanisms of catalysis? How do we get 10 to the 6 to 10 to the 15 accelerations? And so the first thing, and I think the one that really is unique to enzyme catalysts compared to small inorganic or organic molecules, is the use of binding energy in catalysis. So this is the one-- and this is also the one that's thought to contribute the greatest amount to these factors of up to 10 to the 15. So binding energy in catalysis, and what does that mean? 
What do we need to think about? So the enzyme binds to a substrate. If we take this simple case, we need it to bind. We need it to bind specifically. So that's a key part of the enzyme that we haven't gotten to yet-- specificity. But what if it bound its substrate really tightly? Do you think that would be good? No. So it's not good because what does it do? If it took all of the side chains hanging off of your substrate and made hydrogen bonds and Van der Waals interactions, all the weak non-covalent interactions we spent half a lecture talking about four or five lectures ago, what would happen is you would have tight binding. You would have lower energy. But what does that then do to the activation barrier? It increases the activation barrier. So the binding energy is the free energy released when enzyme combines with substrate. But the key is that this binding energy is not used to bind completely. It's used for catalysis. So this energy is used both to bind substrate and-- and this is the key thing-- for catalysis. So what do we want to be able to do and how does it do this? So if we look at this, if we look at our reaction coordinate diagram over here, we don't want to bind substrate tightly because this is the biggest barrier-- the rate limiting step along the reaction pathway. What we want to do is lower this barrier. So how can we lower the barrier? We can lower the barrier by stabilizing the transition state. That now makes this barrier-- probably can't read anything now, but that makes this barrier lower. What's another way you could lower the barrier? You could lower the barrier by straining the substrate to look more like the transition state. So you could strain the substrate in this form, and now, again, you would have a lower barrier compared to that barrier. So you're going to use this binding energy to stabilize a transition state. So we want to use binding energy to stabilize the transition state, or to de-stabilize-- any of these or all of these could be true-- de-stabilize the ground state-- G-S. Or what else do you need to do to get a reaction to work if you have one or two substrates or even one substrate? Your molecules in solution are all solvated. What you need to be able to do is get rid of the solvent. If you have two substrates, you have to bring them together. You have to bring them together in the right orientation. That doesn't come free of charge. You have to get the energy from somewhere, and the energy is proposed to come from this binding energy. So the binding energy is not used to completely bind the substrate, but to do all of these things to get your substrates ready to form product. So you can desolvate and bring reactants together. And you can freeze out rotational and translational entropy. So you're getting everything ready for the reaction to happen. So in this case, then, let me just erase this and make this so that this is clearer. What you could have, now, is-- so in the beginning, this is the barrier. If you stabilize the transition state, this becomes the barrier. If you de-stabilize the ground state, then this becomes the barrier. So what we're trying to do is lower this barrier to get the reaction to work. And so the major way that we do this is by using the weak non-covalent interactions between the enzyme and substrate to help us do these things to enhance catalysis. So that's one of the major mechanisms of catalysis. A second-- and this type of catalysis is unique to proteins.
So the other two types of catalysis are used widely in organic or inorganic chemistry when you're designing your catalyst. I mean, when you're designing a catalyst, substrate binding of a small molecule and product release are still issues. If you go and read the organometallic literature, people have trouble with product release all the time. So the issues in catalysis are exactly the same in biochemistry as they are in organic and inorganic chemistry. But now we have to deal with this big protein, which has these unique properties, one of which is that the whole protein is playing a key role in catalysis and allowing everything to align within tenths of angstroms to make these reactions work really efficiently, which chemists can't do yet. And I don't think we'll ever be able to design it, but we can evolve catalysts to become better and better so that they can do the same thing. That's the beauty of proteins: you can evolve them to become better and better catalysts. So the second mechanism-- so the first mechanism is binding energy. The second mechanism-- I can't remember whether they're using 1's or 2's. The second mechanism is general acid, general base catalysis. Now, as a chemist, what do you learn about catalyzing reactions? Well, one way you could do it is with a big fat proton. Protons are pretty good at helping you catalyze reactions, if you go back and think about chemical transformations, and so are hydroxide ions. What is the concentration of protons and hydroxide ions in an aqueous solution at pH 7? 10 to the minus 7 molar. So you don't have many protons or hydroxide ions in the active site. So even though these are very good catalysts that organic chemists and inorganic chemists use all the time, they're using them in organic solvents, and you can argue the active site of the enzyme is more like an organic solvent. But anyhow, this type of catalysis is called specific. So when you see specific acid or base catalysis-- where does the general acid and base catalysis come from? It comes from the side chains of your amino acids. So remember, the second or third lecture I said, oh, here are all the amino acids. Here are all the side chains. You really should know all of your amino acids. It's the basic vocabulary of all of biochemistry, and the pKas of all the side chains. Why? That's why. Because you can't understand anything about catalysis without knowing what these side chains of the enzymes are actually doing. So the general acid and base catalysis come from the side chains of your amino acids. So what side chains do you have? You can have carboxylates. Anybody know what the pKa of a carboxylate is? Hey, Bogdan, what is it? STUDENT: Four to five. JOANNE STUBBE: Good, four to five. See, he remembers. You could have imidazole. This has a pKa near neutral pH. Anyhow, you need to go back and look at what the groups are that can be involved in catalysis. And chemists, for decades, have studied how you can use general acid and base catalysis to give you rate enhancements. Now, what I haven't told you is the amount of rate enhancement. And so people over the years have measured that with binding energy, you can get factors of 10 to the 8. If you look at general acid base catalysis from all the organic and inorganic reactions people have studied for decades, you get factors of 100 to 1,000 fold. Now, we need to get to a factor of 10 to the 15 in some cases. We've already gotten to a factor of 10 to the 6.
So obviously, you're going to have to use multiple combinations of these mechanisms to give you these tremendous rate accelerations. So you will see over the course of the rest of the semester many active sites of enzymes with many amino acid side chains that are playing roles in general acid and base catalysis. And the last type of catalysis is covalent catalysis. And again, covalent catalysis means that you form-- and where have you already seen covalent catalysis? You've already seen this when we talked about, how do you study the primary structure of a protein? We use proteases like trypsin or chymotrypsin that can break down the big protein into small pieces. We went through the mechanism of that reaction. In the active site of that enzyme, there is a serine that forms a covalent bond. So over the course of the semester, you're going to see many examples. And I'll just put in parentheses for those of you who don't remember, go back and look at serine proteases. This is a classic example that's in every textbook. And how do we know how much rate acceleration you get from covalent catalysis versus not having covalent catalysis? We know this, again, because of organic chemists studying the detailed chemical mechanisms of these reactions, and we find out that in this case, we get rate accelerations of 100 to 1,000 fold. So that's what you see with enzymes. And these are the three general mechanisms by which all enzymes catalyze their reactions in some variation. Now, attributing, out of this factor of 10 to the 15, that 10 to the 8 is associated with this and 10 squared is associated with that is extremely challenging. And there are a lot of people still trying to dissect reaction mechanisms in detail. And I would argue that understanding how these different methods work and synergize to give you these accelerations is a key to eventually designing new catalysts that can do what you want them to do that's distinct from biological transformations. And I think I'm probably over. I just want to say one more thing. I just want to give you a feeling for what you have to do. If you're thinking about this reaction coordinate, what you need to do is think about how you would stabilize the transition state relative to the ground state. So what we're talking about is stabilization that's unique to the transition state and not the ground state. If you stabilized them both, what would happen? If you stabilized them both-- if you stabilize this guy and you stabilize this guy, the barrier would be exactly the same. So what you need is some way to uniquely stabilize the transition state over the ground state. So the question is, how much do you think? How much rate acceleration do you think you can get from a hydrogen bond? Does anybody have any idea? One hydrogen bond. So here, you have a protein with 1,500 hydrogen bonds. But if you can get one hydrogen bond that's here in the transition state of the reaction, that's not over here, how much rate acceleration do you think you can get? Anybody got any idea? You can get almost 1,000 fold. I mean, you can do a very simple calculation. I can't remember whether I have this on the-- OK, so that's it. So we can do a very simple calculation, and I'll use this to show you the calculation. Here, we have our rate. This should be Delta G. The dagger should be up in the air. So this is the enzymatic reaction. This is the [INAUDIBLE] equation. One has the same equation for non-enzymatic reactions. So here's a non-enzymatic reaction.
In general, the non-enzymatic reaction can happen by some mechanism. Compared to the enzymatic reaction, it is just much, much slower. So if we assume, for example, that the rate difference between an enzymatic and a non-enzymatic reaction is a factor of 10, how much do you get assuming that all of these terms are the same in the enzymatic and the non-enzymatic reaction? You can calculate a Delta Delta G dagger of 1.38 kilocalories per mole. For those of you who are modern, this is 5.8 kilojoules per mole. Sorry, I'm really old, so I still think in kilocalories per mole. But a hydrogen bond, one hydrogen bond is worth 2 to 7-- compared to no hydrogen bond, is worth 2 to 7 kilocalories per mole. So a factor of 10 is 1.4 kilocalories per mole. So that shows you, then, that if you had 2 to 7 kilocalories per mole from one hydrogen bond, it can give you these factors of 1,000. So I think that's an observation that's something you need to keep in the back of your mind. Because you think about it over and over again. It really doesn't take much to align everything in exactly the right way. And when I say hydrogen bond, these hydrogen bond strengths are really dependent on how everything is aligned. If they're exactly aligned, then you get much stronger bonds. They can even approach-- in the gas phase, they could approach 30 kilocalories per mole. So having everything aligned, that's the job of this whole big protein, to actually give you catalysis. And I think I'm at the end of my lecture now. I won't have time to talk about-- I went over already-- the question of specificity. But let me just say, I think enzymes are really quite amazing. There's nothing like them. Faster than a speeding bullet. They can accelerate rates a million to 10 to the 12 or 10 to the 15-fold. And they use really the simple concepts that chemists have developed over the years. But the key to the enzyme is this big huge molecule, and the dynamics within this molecule that gets everything to align exactly right to be able to lower these barriers so that you can convert your substrate into your product. OK guys, see you next time. The end. |
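As a numerical companion to the Delta Delta G estimate at the end of the lecture above, here is a minimal Python sketch-- an illustration, not part of the course materials-- that converts a rate ratio into a barrier difference via Delta Delta G dagger = RT ln(ratio) and back again; the temperature of 298 K and the value used for the gas constant are assumptions made for the example.

import math

R_KCAL = 1.987e-3   # gas constant in kcal/(mol*K), assumed value for this sketch
T_KELVIN = 298.0    # assumed temperature

def ddg_for_rate_ratio(ratio, temperature=T_KELVIN):
    # Barrier difference (kcal/mol) implied by a given rate ratio: RT * ln(ratio)
    return R_KCAL * temperature * math.log(ratio)

def rate_ratio_for_ddg(ddg_kcal_per_mol, temperature=T_KELVIN):
    # Rate acceleration obtained by lowering the barrier by ddg_kcal_per_mol
    return math.exp(ddg_kcal_per_mol / (R_KCAL * temperature))

print(ddg_for_rate_ratio(10))      # ~1.4 kcal/mol for a 10-fold rate difference
print(rate_ratio_for_ddg(4.0))     # ~860-fold for a single ~4 kcal/mol hydrogen bond
print(rate_ratio_for_ddg(11.0))    # ~1e8-fold, the scale quoted for binding energy

At room temperature this reproduces the lecture's rule of thumb that each factor of 10 in rate corresponds to roughly 1.4 kilocalories per mole of barrier lowering.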
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Special_Cases_in_Fatty_Acid_Metabolism.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN ESSIGMANN: We're now at storyboard 19, panel B. In the last lecture, I mentioned that we're going to be looking at several special cases with regard to the metabolism of fatty acids. The first special case concerns the metabolism of fatty acids that already contain double bond. You remember from last time that a double bond with a trans configuration forms naturally during beta oxidation. In the special case we're going to look at now, the double bond has a Cis configuration. Fatty acids with a Cis-double bond typically come from membranes. One of the ways nature plasticizes our membranes is to make them more fluid by putting in Cis-double bonds that bend the fatty acid molecule and reduce stacking. Ultimately, this reduction in stacking results in a lowering of the overall melting temperature of that part of the membrane. The example I'm going to use is oleyl Coenzyme A. This fatty acyl Coenzyme A comes from Oleic acid, which also has the shorthand notation (18:1) delta 9. That is, there is a double bond nine carbons in from the carboxylate. The reaction I've shown in panel B goes through three rounds of beta oxidation resulting in a Cis delta three enoyl Coenzyme A. The enzyme oleyl Coenzyme A isomerase will then remove the acidic hydrogen from the 2 carbon, repositioning the double bond, and then putting that double bond into trans configuration, which is now amenable to further hydration and beta oxidation. Overall, the isomerase has taken the Cis-double bond and repositioned it to make it a trans double bond that is now able to undergo further oxidation by the classical beta oxidation pathway. Let's turn now to panel C of storyboard 19. The second special case that I'd like to deal with concerns the way that we metabolize fatty acids that have an uneven, that is odd, number of carbons in them. One example would be the C15 fatty acid Pentadecanoic acid, which we get from milk fat. In this case, six rounds of beta oxidation will release six acetyl Coenzyme A's. But note that we're left with a 3 carbon residue, or carboxylic acid called propionyl Coenzyme A. Nature doesn't like to throw anything away. It's going to take this three-carbon molecule, propionyl CoA, that doesn't easily integrate into any other biochemical pathways. Nature's going to add a carbon to it, hence making it into a four-carbon molecule. As we'll see, that four-carbon molecule will integrate seamlessly into the TCA cycle. The conversion of the three-carbon compound into a four-carbon compound begins with the enzyme propionyl Coenzyme A carboxylase. In a few minutes, we'll look at the fine details of how this enzyme works. For now, however, what it does is use the cofactor biotin in order to introduce a carbon dioxide, or CO2, into the 2 carbon of the propionyl Coenzyme A. The 2 carbon is the middle carbon of the propionyl group. This is now a branched molecule that's called methylmalonyl Coenzyme A. Next, the molecule is subjected to a carbon chain rearrangement that takes the branched molecule and linearizes it to form succinyl Coenzyme A. 
Succinyl Coenzyme A will then integrate directly into the TCA cycle to allow its carbons to be metabolized fully to CO2. In summary, nature puts a CO2 into the three-carbon propionyl CoA, forming a four-carbon branched structure. Then, in an amazing rearrangement that we'll look at later, the branched molecule is made linear, forming succinyl CoA, which is a TCA cycle intermediate. So the otherwise useless C-3 molecule, propionyl Coenzyme A, is converted to something of value. Lastly, let's turn to panel D. I'm going to deal more with the details of this reaction in a few minutes. But for now, what I'd like to say is that there are many sources of propionyl CoA, not just from odd chain fatty acids in the diet. Secondly, the introduction of succinate into the TCA cycle results in an increase in the amount of carbon going through the TCA cycle. That is, this is truly an anaplerotic reaction. This is a reaction that we can use to increase the overall rate of carbon metabolism in the TCA cycle. We're now going to turn to storyboard 20. Let's look at panel A. In the last lecture on fatty acid metabolism, I talked about propionyl CoA carboxylase, which uses biotin to add a CO2 moiety to the middle carbon of propionyl Coenzyme A. This lecture is a chemical interlude, in which we're going to look at propionyl CoA carboxylase and several other carboxylases that are used in biochemistry. One of these is acetyl CoA carboxylase. This carboxylase puts the CO2 group onto acetyl CoA to make malonyl CoA, which is the precursor to fatty acids during the fatty acid biosynthesis reactions. Let's also look at panel B. We're also going to be looking at pyruvate carboxylase, which adds a carboxyl group to the pyruvic acid molecule in order to make oxaloacetate. This is one of the anaplerotic enzymes that can increase the quantity of carbon going through the TCA cycle. That is, it facilitates anaplerosis in the biological system. Let's now look at storyboard 21, panel A. The carboxylases all use carbon dioxide as their carboxylating reagent. CO2 itself is not very soluble in water, so cells use carbonic anhydrase in order to hydrate it to form carbonic acid. The biotin carboxylase subunit will then phosphorylate the carbonic acid in order to form a chemically-reactive intermediate known as carbonyl phosphate or carboxy phosphate. Now, carbon dioxide by virtue of its structure-- that is, a carbon with two electronegative oxygens pulling on it-- is pre-activated for nucleophilic attack. And as I said a few minutes ago, CO2 is just not present at high enough concentration in the cell for it to be able to add effectively to nucleophiles. A nucleophile such as the amide nitrogen of biotin would be an example. So what the cell does is use ATP to make carbonyl phosphate, which is a very water-soluble molecule. It's a very efficient delivery vehicle for carbon dioxide into the active site of an enzyme. Now let's look at panel B. The biotin carboxylase subunit of pyruvate carboxylase has a biotin cofactor at the end of a 14-Angstrom-long swinging arm. This subunit of the enzyme breaks the bicarbonate phosphate bond and releases carbon dioxide in high concentration in the active site. Biotin on the swinging arm then attacks the high concentration of carbon dioxide in order to become N-carboxy biotin as shown in this figure.
The N-carboxy biotin takes advantage of its swinging arm to move away from the active site at which it acquired the CO2 and then move to a second site on the enzyme at which the arm releases carbon dioxide, once again at high concentration. But this time CO2 is released at high concentration in the vicinity of the substrate, pyruvate. Pyruvate, in order to be a good nucleophile to acquire the CO2, probably exists as shown in the figure as the pyruvate enolate. Let me review panel B at this point. Overall, this reaction has resulted in putting a carboxylic acid residue on the 3-carbon of pyruvate. And this results in the formation of oxaloacetate. Thus, activation of pyruvate carboxylase will result in taking a molecule of pyruvate and converting it to oxaloacetate. This anaplerotic reaction gives the ability to increase the volume of carbon that's cycling in the TCA cycle, thus allowing you to be able to increase their overall rate of respiration. And as I've said several times in the past, oxaloacetate is present at rather limiting concentrations within the mitochondrial matrix. So this is a very useful biochemical reaction. At this point, let's move on to story board 22 and look at panel A. Let me loop back now and give some bigger picture points with regard to carboxylase chemistry. Pyruvate carboxylase and the other enzymes of this class have two active sites. One of them is the biotin carboxylase domain, where a biotin moiety covalently attached to the enzyme will acquire CO2. Let's call this step 1 of the reaction scheme. Once the biotin on the swinging arm has been N-carboxylated as in step 1, the N-carboxy biotin moves to site 2, where the substrate is. The second site is the biotin transferase domain. In the example I just used, the substrate was pyruvate. But it just as easily could have been another quote unquote "carboxylation substrate," for example propionyl CoA. In the case of pyruvate, as we just saw, it's the pyruvate enolate that becomes carboxylated in site 2 in order to form the final product oxaloacetate. This carboxylation of substrate is the second and final step of the overall reaction. Take a look at panel B. Now let me give you a second example of carboxylase chemistry. In this case, our substrate will acetyl Coenzyme A. We're going to use the same chemistry we described a minute ago for pyruvate, but in this case it is another carboxylate-- acetyl CoA carboxylate-- that picks up the CO2 residue at its carboxy transferase domain. In step 2, which occurs in the carboxy transferase domain, the substrate acetyl CoA picks up CO2 on its 2-carbon-- the one distal to the Coenzyme A group-- to form the three-carbon product malonyl Coenzyme A. We'll see later that the carbon dioxide that's been added to acetyl CoA to form malonyl CoA is an excellent leaving group. Malonyl CoA is the fundamental precursor to all fatty acids through the fatty acid biosynthesis pathway. Again, we'll see the details of this reaction scheme later, but keep in mind now that this is a biotin-requiring reaction, just as the one we just looked at with pyruvate carboxylase. Let's look now a panel C. My last example of carboxylase chemistry will be propionyl Coenzyme A carboxylase. I mentioned earlier that the breakdown of odd chain number fatty acids results in the formation of a 3-carbon linear fatty acid called propionyl CoA. I also said that nature doesn't want to waste any of the carbons. What she is going to do is add a carbon dioxide or a CO2 to the central carbon of the propionyl group. 
As well as fatty acids with odd numbers of carbons, certain amino acids-- such as isoleucine, methionine, valine, and threonine, for example-- also break down to form propionyl CoA. So what I'm going to do now is describe a fairly general method of metabolism that involves restructuring of the carbon chain subsequent to the carboxylase reaction. In this overall reaction, a carboxylase is going to carboxylate the middle carbon of the propionyl group. Then, following the carboxylase reaction, there's going to be a skeletal rearrangement to form succinyl CoA, which will then enter the TCA cycle. As shown in the reaction scheme, propionyl Coenzyme A carboxylase carboxylates propionyl CoA to form a very specific stereoisomer, (S)-methylmalonyl Coenzyme A. Please note the position of the carbonyl group that's highlighted in blue. This carboxylate in blue is attached to a methylene carbon that has an inverted triangle on top of it. Epimerization about the carbon with the triangle results in the blue carboxylate pointed down the way I've drawn it. The resulting epimer is (R)-methylmalonyl Coenzyme A. In the (R)-methylmalonyl Coenzyme A molecule, I put a little lasso around the hydrogen atom at the methylene moiety of the molecule. I've also lassoed an electron in some of the atoms of the Coenzyme A moiety. A vitamin B12-dependent enzyme called methylmalonyl Coenzyme A mutase, which contains an adenosylcobalamin functional group, will now act on this molecule. In this enzyme, a cobalt residue is going to enable the formation of an adenosyl radical. Look at the book for the detailed mechanism. But for now, it's sufficient to say that this adenosyl radical is going to facilitate the homolytic scission of the bond between the carbon and hydrogen of the (R)-methylmalonyl Coenzyme A. That is, you're going to get a free radical formed at the carbon that has the small blue o written over it. That molecule will lose its hydrogen atom. That hydrogen atom will migrate to the carbon that has the small blue triangle. Coordinately, the carbon with the small box that's part of the Coenzyme A carbonyl group will migrate to the position originally occupied by the hydrogen atom on the carbon with the small o over it. The skeletal re-arrangement is shown here in panel C, and it results in a linearization of the original methylmalonyl Coenzyme A. If we redraw it as in the box, you'll see that you form succinyl Coenzyme A. Succinyl Coenzyme A will then feed directly into the TCA cycle. This is an anaplerotic reaction. That is, it will increase the volume of carbon that's rotating in the TCA cycle. Overall, in the way of a review, the three-carbon compound propionyl Coenzyme A, by way of the carboxylase reaction, is converted to a four-carbon branched molecule called (S)-methylmalonyl Coenzyme A. It's epimerization forms the (R)-methylmalonyl Coenzyme A, which is the substrate for a third enzyme, methylmalonyl Coenzyme A mutase, a vitamin B12 enzyme. This enzyme catalyzes a free radical-mediated rearrangement of the side change. Specifically, one hydrogen and one Coenzyme A functionality switch positions. This rearrangement linearizes the molecule forming, ultimately, succinyl Coenzyme A, which then flows into the TCA cycle. |
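To make the odd-chain bookkeeping in the lecture above concrete, here is a small Python sketch-- an illustration under the stated assumptions, not part of the original materials-- that counts beta oxidation products for a fully saturated acyl-CoA of n carbons: each round removes two carbons as acetyl-CoA, an even-chain substrate releases two acetyl-CoA in its final round, and an odd-chain substrate such as pentadecanoyl-CoA (C15) stops at the three-carbon propionyl-CoA.

def beta_oxidation_products(n_carbons):
    # Returns (rounds, acetyl_CoA, propionyl_CoA) for a saturated n-carbon acyl-CoA.
    # Assumes n_carbons >= 4 and ignores the cis-double-bond cases handled by the isomerase.
    if n_carbons % 2 == 0:
        rounds = n_carbons // 2 - 1          # the last round releases two acetyl-CoA
        return rounds, n_carbons // 2, 0
    rounds = (n_carbons - 3) // 2            # stop when three carbons (propionyl-CoA) remain
    return rounds, rounds, 1

print(beta_oxidation_products(15))   # (6, 6, 1): six rounds, six acetyl-CoA, one propionyl-CoA
print(beta_oxidation_products(16))   # (7, 8, 0): palmitoyl-CoA, shown for comparison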
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_1_Problem_1_Sizes_and_Equilibria.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Hello, and welcome to 5.07 Biochemistry on MIT OpenCourseWare. I'm Dr. Bogdan Fedeles. Let's metabolize some problems. This series of videos is meant to supplement some of the other materials on the site, and to give you a more in-depth and more interactive take on some of the topics covered in 5.07 Biochemistry. Specifically, we're going to be working together through one problem from each problem set from this course. Today we're discussing the very first homework problem of 5.07, which is problem one of problem set one. This problem is meant to give you a better sense of the scales and dimensions of the cellular environment. Specifically, we're going to be asking questions like, how big is this cell? What is the volume of a cell? And how many molecules of a given protein are inside a cell? One fundamental idea introduced in this problem is that the cellular environment is very crowded. Things inside the cell are very tightly packed. Now take a look at this picture in your book. As you see here, the constituents of a cell, like proteins, enzymes, the organelles, metabolites, are all in very close proximity to each other. Now, this is also reflected in the concentration that we're given for the proteins inside the cell. The problem tells us there are 350 milligrams per milliliter of protein. Or in other words, 350 grams per liter. Now this is a very high number in the context of biochemistry. Basically, if you think one liter is 1,000 grams then 350 grams of that is protein. So we have 350 grams, 35%, of a cell is protein. And only 60% to 65% is water. Therefore, as the problem says, when we're doing in-vitro experiments using dilute solutions we rarely recapitulate what actually happens inside the cell. Now how big is a mammalian cell? As you will see, sizes of cells in an organism vary considerably. At one extreme we have very tiny cells like endothelial cells, red blood cells. These are very, very small. On the other end, we have reproductive cells, like the egg, which is 100 to 200 microns in diameter. Or even cells that can stretch for centimeter, like the muscle cells and certain nerve cells. In this problem, we're going to be dealing with the red blood cells, one of the most abundant cells in the body. As you can see in this very colorful picture, a red blood cell can be approximated by a cylinder 6 to 8 microns in diameter. So knowing that, we can actually calculate some dimensions of the red blood cells, such as the total volume and their surface area. Now here is a cylinder by which we approximate a red blood cell. And let's say the cylinder has the height, h, and the radius, r. From our problem we know h is about 2 microns and r, well we know the diameter is 6 to 8 microns. So r is going to be, let's go with the middle, 3.5 microns. Now as you remember from geometry, then the volume of the cylinder is going to be pi r squared h, which when we substitute our units, we're going to get about 77 cubic microns. All right. We'll come back to discussing the units in a little bit. 
Now the surface area is just going to be the area surrounding the cylinder plus the two circles-- the top and the bottom. So the two circles are 2 pi r squared, plus the surrounding area is going to be 2 pi r h. And that comes out to be about 120.95 square microns. Now how much is really one cubic micron? Let's try to relate it to a unit that we're more familiar with, such as the liter. Well, one milliliter is actually one cubic centimeter. One microliter is one cubic millimeter. Now, one cubic micrometer is a billion times smaller than 1 microliter, and one microliter is already a million times smaller than a liter, so one cubic micrometer is really 10 to the minus 15 liters. The prefix for 10 to the minus 15 is femto, so it's one femtoliter. Now 77 femtoliters is an incredibly small volume. But as we'll see next when we look at bacteria, bacteria are even smaller. Now, let's take a look at a Staph aureus cell. Now Staphylococcus aureus is a very common bacteria that we often find on our skin or in our respiratory tract. Now this is the same bacteria that you might have heard of in that acronym MRSA, or Methicillin-Resistant Staph Aureus. Now, this MRSA is a pathogenic bacteria that can cause a lot of problems in the hospitals nowadays, because it is resistant to most of the antibiotics that we have. Now, here's a picture of Staph aureus. As you can see, it's a spherical cell. And of course this pretty purple color is added in. This is just an electron micrograph picture of a colony of Staph aureus bacteria. So for it too we can calculate the volume of the cell and the surface area. If we assume Staph aureus to be a sphere of radius r, we are told the diameter is about 0.6 microns, so r is 0.3 microns. Then we can calculate the volume of the cell. The volume is simply going to be 4 pi r cubed over 3. And if we plug in the numbers, we're going to get 0.11 cubic micrometers. Similarly for the surface area, it's the surface area of a sphere. It's 4 pi r squared. And it comes out to 1.13 square microns. Now, 0.11 cubic microns-- and we said a cubic micron is one femtoliter, 10 to the minus 15 liters. So 0.11 cubic micrometers is 110 times 10 to the minus 18 liters. Now the prefix for 10 to the minus 18 is atto. So it's 110 attoliters. So this is an incredibly small volume. So by calculating the volume and the surface area of these cells, we've essentially answered part 1 and part 2 of this problem. Now next we're going to explore the relationship between the volume and the surface area of the cell. As you know from geometry, the volume typically varies with the cube of the radius, whereas the surface area varies only with the square of the radius. Therefore, as the radius of an object increases, the ratio of surface area over volume will decrease because the volume increases more quickly. Now, this is exactly what we observe with cells. Therefore, for a red blood cell, surface area over volume is going to be 120.95 square microns over 77 cubic microns. That comes out to about 1.6 inverse microns. Now for a Staph aureus cell, surface area over volume is going to be 1.13 square microns over 0.11 cubic microns. That's approximately 10. Why is this important? It's because the surface area of a cell controls how fast molecules can go in and out of the cell. It essentially controls the flux of molecules across the cell membrane. Now, if the surface area over volume is a large number, it means the molecules can access that volume fairly quickly and efficiently.
But as you can see, the bigger the cell, the smaller the surface area of a volume number becomes. And therefore, for big cells molecules will have a hard time and will take a long time to get inside the cell or out of the cell. That's why nature has designed ways to transport molecules, to make them achieve the right concentration efficiently. Now this is why, in the case of bigger cells, such as the eukaryotic cells or mammalian cells, in our case the red blood cells, nature has evolved transport mechanisms by which it can deliver small molecules throughout the entire volume of the cell. One example of such transfer molecules is hemoglobin, which is used to deliver oxygen. This is what we're going to take a look at next. We are given that hemoglobin constitutes 95% of the proteins in the red blood cells. So let's calculate the concentration of hemoglobin in a red blood cell. Now let's start with the average protein concentration in the cell, which we mentioned earlier, which was 350 milligrams per milliliter. Now if 95% of this is hemoglobin, then the concentration of hemoglobin is going to be about 322 milligrams per milliliter. Now I'm going to abbreviate hemoglobin as Hb. We're told that hemoglobin has a molecular weight of 67,000 Daltons, which is another way of saying 67,000 grams per mole. So then the concentration of hemoglobin is going to be 322 milligrams per 67,000. Grams per mole is the same as milligrams per millimole and per milliliter. The grams go away and we get 0.0048. It's going to be millimole per milliliter and that's the same as mole per liter. That's the molar concentration. Or we can write it 4.8 millimolar. Now this is a pretty important range to keep in mind, because the most abundant proteins such as hemoglobin, are going to be in the low millimolar range. Most of the other proteins are going to be in the micromolar range in a cell. Now let's calculate how many molecules of hemoglobin we have in the red blood cell. So we calculated before that the volume of a red blood cell is actually 77 femtoliters. That's once again 77 times 10 to minus 15 liters. Now we know in this volume the concentration of hemoglobin is 4.8 millimolar. So we can calculate the number of moles. So the new number of moles is going to be the concentration times the volume. So we have 4.8 millimolar times 77 times 10 to minus 15 liters equals-- now of course, millimolar-- we have to transform this back into molar. We have mole per liters. The liters are going to cancel out so we're going to get moles. And it's 369, or so, times 10 to minus 18 moles. Remember 10 to minus 18 that's attomoles. So 369 attomoles of hemoglobin we have in a red blood cell. Now we know one mole contains the Avogadro numbers of molecules. So if you multiply this with the Avogadro number, we should get the actual number of molecules. So number of molecules is just Avogadro number times number of moles, and Avogadro number is approximately 6.022 times 10 to the 23rd power, so a gigantic number, times 369 times 10 to minus 18 moles. We're going to get about 2.2 times 10 to the 8 molecules. Or in other words, this is 220 million molecules. So the problem was telling us about a Google search in which we came up for different numbers and one of them was 2,000, one of them was 200 million, so obviously the answer we got, 220 million, is closer to the 200-300 million molecules that our Google search returned. That's the answer for that part of the problem. 
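The concentration-to-molecule-count arithmetic above can be restated in a few lines of Python; this sketch simply reuses the numbers quoted in the problem (322 mg/mL hemoglobin, 67,000 g/mol, a 77-femtoliter cell) and is offered as a check rather than as new data.

AVOGADRO = 6.022e23

hb_g_per_liter = 322.0      # ~95% of 350 mg/mL total protein, as quoted above (mg/mL == g/L)
hb_molar_mass = 67000.0     # g/mol
rbc_volume_liters = 77e-15  # 77 femtoliters

hb_molar = hb_g_per_liter / hb_molar_mass        # ~4.8e-3 M, i.e. ~4.8 mM
hb_moles = hb_molar * rbc_volume_liters          # ~3.7e-16 mol, about 370 attomoles
hb_molecules = hb_moles * AVOGADRO               # ~2.2e8 molecules per red blood cell

print(round(hb_molar * 1000, 2), "mM")
print(f"{hb_molecules:.1e} hemoglobin molecules per cell")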
Finally, let's see how the size of a hemoglobin molecule compares to the size of a cell. We're told hemoglobin is roughly spherical in shape, with a diameter of about 55 Angstrom. Now as you recall from Intro to Chemistry, one Angstrom is 10 to the minus 10 meters. That's 0.1 nanometer. So our radius here is going to be half the diameter, so it's 27.5 Angstrom is 2.75 nanometers. So the volume of a hemoglobin molecule is going to be 4 pi r cubed over 3. And with plugging in 2.75 nanometers. So it's going to come up to be 8.7 times 10 to the minus eighth cubic microns. Now, you remember the volume of a red blood cell was 77 cubic microns, so if you look at the relationship between the two, how many volumes of a hemoglobin can we fit in a red blood cell? Well, we just divide the volume of the red blood cell to the volume of the hemoglobin. 77 over 8.7 times 10 to the minus eighth. Both are cubic microns. And that gives us 8.8 times 10 to the eight molecules. So this is 880 million molecules of hemoglobin would fit in the volume of a red blood cell. If only hemoglobin would be in there. Obviously this number is an overestimation, because when you're packing spherical objects, they're not going to pack very tightly with each other. And as we said, the shape is only approximately spherical, but nevertheless it's on the same order of magnitude as the 200- 300 million molecules of hemoglobin that we calculated earlier based on the concentration. So from both the volume standpoint and concentration standpoint we now have calculated how many molecules of hemoglobin can fit in a red blood cell. This result we just got actually highlights a very important take home message, which is, if we look at the molecular and atomic scale, it is as distant from the cellular scale as the cellular scale is different from the macroscopic scale. Now if we take one milliliter of blood, we find a few billion red blood cells inside it. Now within each red blood cell we find hundreds of millions of molecules such as hemoglobin. That's it for this problem. I hope you now have a better sense of the sizes and scales relevant for biochemistry and cell biology in general. Keep in mind our discussion of the surface area to volume ratio and why as the cell size gets bigger, we need transport mechanisms to make sure the nutrients and metabolites get to where they need to go efficiently. Also keep in mind some of the concentration ranges that we discussed, as these will become very important in understanding the biological significance of some of the constants that we're going to calculate for enzymes later in the course. |
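As a closing check on the volume-packing comparison in this problem, the sketch below reuses the figures from the video (a 2.75-nanometer hemoglobin radius and a 77-cubic-micron cell volume) and, purely as an added assumption for illustration, applies a random-packing fraction of about 0.64 to show why the naive count of 8.8 times 10 to the 8 is an overestimate while still landing within an order of magnitude of the concentration-based answer.

import math

hb_radius_um = 2.75e-3      # 2.75 nm expressed in microns
rbc_volume_um3 = 77.0       # red blood cell volume in cubic microns

hb_volume_um3 = 4.0 / 3.0 * math.pi * hb_radius_um ** 3   # ~8.7e-8 cubic microns
naive_count = rbc_volume_um3 / hb_volume_um3               # ~8.8e8 if spheres filled all space
packed_count = naive_count * 0.64                          # assumed random-close-packing fraction

print(f"{hb_volume_um3:.2e} cubic microns per hemoglobin molecule")
print(f"{naive_count:.2e} naive upper bound, {packed_count:.2e} with the packing correction")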
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Lexicon_of_Biochemical_Reactions_Vitamin_B6_PLP.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: Hi. Today, what we're going to do is focus on yet another cofactor that you get out of your vitamin bottle. And today we're going to be looking at vitamin B6. And vitamin B6 is the cofactor that you use whenever you want to metabolize amino acids. Where do you get amino acids? We all eat proteins. The proteins get degraded to amino acids. And we can use the energy released. And trap that energy to do bio synthesis. And we use amino acids, convert them into central metabolism, and use them to make fats, or sugars, depending on what the environment is telling us we need to do. So let me introduce you to this cofactor, the vitamin itself, as you have seen now, over and over again, is not the actual cofactor. It's Pyridoxine. And so that's what you eat out of the vitamin bottle. And as in the case of all vitamins, inside the cell it has to be converted into the active form of the cofactor. And the cofactor we're going to be talking about, the active form is Pyridoxal phosphate, which is called PLP. And the structure of pyridoxal-- there two main structures of the pyridoxal phosphate cofactor, one of them is involved in 80% of all the chemistry. And the one we're going to talk about today, uses both forms of the cofactor. That fits into central metabolism. I'll show you how that fits in a minute. So pyridoxal phosphate has this structure. You have a pyridine ring, the pKa. The pyridine ring is 6 5, so it may or may not be protonated. And the key part of the cofactor-- one of the key parts of the cofactor, is this aldehyde. So pyridoxal means that this is an aldehyde. The second form of the cofactor is pyridoxamine. So this aldehyde is somehow-- and we will look at how this happens-- is converted from this aldehyde, into an amino group. And this is called PMP, or pyridoxamine phosphate. Now the other thing I wanted to show you, before we move on to look at more details of pyridoxal phosphate, is-- while this is the form of the vitamin. You almost never see this form inside the cell. It is always bound in the active site of the enzyme. And it binds in the active site of the enzyme-- so here's your enzyme-- through the epsilon amino group of a lysine. So you never really, when you purify the protein, you never isolate the aldehyde. So this is the aldehyde. And I won't draw out all the rest of the structure, but what you have is an amino group, that's attached to a lysine, in the active site of your enzyme. And what we're going to do is convert this ketone into an imine. So this is chemistry you see over and over and over again. You've seen it already with carbonyl chemistry, with carbon-carbon bond forming reactions, with peptide bond hydrolysis, that we've already talked about. But what you do is, you're going to convert this aldehyde into an imine. You need to go through a tetrahedral intermediate. And the carbonyl is polarized, delta plus, delta minus. You spend a lot of time practicing this kind of chemistry to form a tetrahedral intermediate. And I'm going to have a proton transfer. 
So one of the things that's tricky about this chemistry is that, if you count the number of protons they move around a lot in the active site. And the fact is, that you want to protonate something to make it into a better leaving group, you want to deprotonate something to make it function more like a nucleophile, or as a base, and nature has figured out how to orchestrate all the residues in the active site to do this. In most cases, we don't understand all of the details. And so I'm not going to focus on proton transfer, but you're going to see many proton transfers within the active site. So this gives us a tetrahedral intermediate, or transition stage. You've seen that over and over again. And now what will happen is that you want to lose a molecule of water. So again, you need to pronate that, and what you form then is a new carbon doubly bonded to a nitrogen, rather than doubly bonded to the oxygen, that's covalently bound to the enzyme. And so, this is called an imine. And because it's an imine of an aldehyde, it's called an aldimine. So whenever you isolate your enzyme, whenever you isolate your enzyme, the pyridoxal is always covalently bound. OK, so, this bond is chemically easy to hydrolyze, but it's always covalently bound. OK. So what I want to do now, is make a few generalizations about pyridoxal, but the first thing is, that in all cases, whenever you metabolize-- I want to metabolize an amino acid. Pyridoxal phosphate requiring enzyme is going to be the key player. OK. So here's an amino acid. OK, here's the alpha position, the beta position, and the gamma position. Well what's so amazing-- I remember first hearing about this-- is that you can do chemistry at all these positions. And the way nature figures out how to do this is by orchestrating the active site, with acid based catalyst sitting around in the right place, to allow you to do the chemical transformation that this protein has evolved to do. So let me to show you what the alpha position, so this is the alpha position, you can cleave this bond. I'll show you how this happens. You can do all this chemistry with a few simple chemical transformations, takes practice, but once you sort of get what these transformations are, it's amazing what you can get this cofactor to help you do. And I'll explain that in a minute. So you get cleavage of the carbon carbon bond that's loss of CO2 that's a decarboxylation reaction. We don't talk about that in 507, but that kind of reaction generates all of our neurotransmitters. You're going to cleave this carbon-hydrogen bond. Remember, amino acids are in the S configuration-- but for example in cell wall, in bacteria, they can be either the S or the R configuration, so you can cleave that bond, and put the proton back on the other face, that's a racemization reaction. You can cleave this carbon-carbon bond between the alpha and the beta position, that's a reaction-- remember, we talked about carbon-carbon bond formation-- this is the reverse aldol reaction. And the one where we're going to focus on today, which is the one that fits best into central metabolism, is what happens to this carbon-nitrogen bond. So we have an amino group of our amino acid is going to get converted into a ketone group. So, this group, is going to get converted into this group. And to do that, we're going to use the imine of pyrodoxial phosphate that we have in the active site of the enzyme. So this is sort of like a carbonyl and that's going to get converted into the amine, the pyridoxamine. 
So these are the two forms of the cofactor. So what are we going to be focusing on, is how this reaction, actually, happens, and this is the most complicated of all the pyridoxal prostate dependent reactions. So that I'm going to come back to this in a minute, but what I want to do is make a few generalizations about where you're going to see pyridoxal phosphate chemistry in primary metabolism. So what we're going to see is that the TCA cycle, tricarboxylic acid cycle, or the Krebs cycle, plays a central role. It's found in the mitochondria. You're going to see this over and over again, over the course of the semester. Things feed in and out of the cycle. A cycle means it goes around and around, and if you remove something from the cycle and don't put anything back into the cycle, the cycle stops. And you're in serious trouble. So one way-- one thing-- one way to feed in and out of this cycle is through amino acids. So this reaction, which we're going to call the transemination, or a transamination, is metabolism of amino acid, we'll see into an alpha keto acid. So if you look at the TCA cycle and we look at this reaction, what you'll see is you have this compound, called alpha ketoglutarate. OK, so alpha ketogluterate and that going to interconvert with the amino acid and so this ketone is going to be converted into an amino group, and so we're having the amino group converted into a ketone group. And pyridoxial is going to be converted into pyridoxamine. So we're going to have PLP converted into PMP. I'm going to show you how that works. So, if you feed in glutamate from your diet, it-- by this pyridoxal phosphate dependent reaction-- can feed into the Krebs cycle. If you want to make amino acids, on the other hand, you can suck some of the alpha keto acid out, and convert it into amino acids. And you have to have a way of controlling whether you feed in, or you, actually, remove your metabolites from the cycle. If you come up here and look at oxaloacetic and aspartic acid-- So oxaloacetic acid is also an alpha keto acid, OK. And the amino acid-- can you see? I'm probably too close to the edge-- OK, and so here we have the amino acid. So here we have amino acid alpha keto acid, amino acid alpha keto acid. If you go further up and go pyruvate feeds into the TCA cycle-- pyruvate comes from the glycolysis pathway, which again is breakdown of sugars, pyruvate is an alpha keto acid. So this is a CH3, so this is the simplest, and this can get interconverted into alanine. So what you see is this same theme-- amino group ketone, amino group ketone. I'm being sloppy, here. Most amino groups are protonated, because the pKa, you should remember, is around 9. So they're, mostly, protonated in solution. I should protonate this over here, as well. And so you have a ketone group and amino group. So this is called anaplerotic pathways where things can feed in and feed out of central metabolism. And this is the only time, during the course of 507, that you're introduced to amino acids, and how they're metabolized. So what I want to do now is then, briefly, show you how this transformation, actually, works. OK. So I'm going to give you some general rules. So the transformation I just showed you-- an amino acid into an alpha keto acid-- looks to be complicated. And, I told you, the cofactor changes its structure from an imine into an amino group. And the question is, how does that happen? 
So I want to show you a bunch of simple, basic rules that allow you to think about all pyridoxal phosphate requiring chemistry. So I'm going to write down the rules, and then I'm going to show you how it works on the reactions we were just looking at this these transeminations, or transamination reactions. So how do we think about the mechanism of these PLP enzymes? So the first thing is-- the first step is-- so you're going to start out with an imine bound to the active site of your pyridoxal phosphate and an amino acid. I'll abbreviate it aa. And the first thing you do is, you're going to, remove this imine and form a new imine with the amino acid. So that's called a transimination reaction, OK. So what we're going to do is form a new imine. And so the imine from the amino acid, and I'll show you how this happen, is going to switch with this one. And so what you're left with, in the active site, is the amino group of the lysine. OK. So this lysine-- nature has figured out how to minimize the numbers of acid and base groups constrained in the region, the active site, where all the chemistry happens. And she uses this lysine, which initially is holding covalently the cofactor, in the active site. She then uses this lysine to do general acid, general base catalysis. And she uses it over and over and over again. Now every pyridoxal enzyme is distinct and has additional groups in the active site. But we know a lot about this chemistry. We even know what the groups are, but we're going to look and talk about generalizations. So the first thing, we have to do is, we need to free up our general acid base catalyst, lysine, and we need to covalently bind the amino acid, which is a substrate, into pyridoxal. So that's the first step. And I'm going to come back to this in a minute. The second step in all of these reactions is-- all amino acids have an alpha hydrogen, that alpha hydrogen has a very high pKa. It's very hard for a normal amino acid side chain, in the active site, to remove that proton, because it's not acidic enough. So what I'm going to show you is pyridoxal increases the acidity of that alpha hydrogen, making it easier to do the chemistry. So the second step in almost all pyridoxal reactions, is removal of the alpha hydrogen. And we can do that, because PLP makes the hydrogen more acidic. And I'll show you why that's true in a minute. And then, the third thing is, once you remove that hydrogen, then you get a look at the chemistry you want to catalyze. And we're going to be talking about this transimination transamination reactions. But remember I told-- you can do all this chemistry at alpha, beta, gamma. So you need to assess what the substrate is, and what the product is, and then, within the active site, you're going to have to do a lot of manipulation with acid and base catalysts to get you into the final stage, where you can release the product you want to release. So the last step in this reaction-- in all pyridoxal reactions-- so we do some chemistry in here. And I'll show you what the chemistry is with transamination reactions. The word transimination reactions-- now I keep saying transamination and transimination. That's because most textbooks call it transamination, because they think about pyridoxal as the aldehyde. But, in reality, pyridoxal is always in the imine, covalently bound. And so that's why it's that transimination, rather than a transamination. So the last step then, is, hydrolysis. 
And I'll show you how that happens, or transimination to reform this structure. So we get ourselves back to where we started. So there's three simple steps, and in the middle, depending on what the reaction is, you have to do additional manipulations. But all pyridoxal enzymes go through these three, general steps. The first step, I told you, in all these reactions is transimination. OK. So here's our Schiff-base, and it's your pyridoxal. It's covalently bound to the lysine in the active site. Now one thing that students often find confusing is the protonation state of this imine. And that's because it's right around neutral pH, pH 7. So depending on what's in the active site, it could be protonated, or not protonated, if it's protonated of course it enhances reactivity for nucleophilic attack. So you want it to be protonated. So the active site is going to manipulate itself to put in the protonated state. So here you have an amino acid, and here we have a protonated imine, and so this is the nucleophile, and it can attack the carbon of the imine to form a tetrahedral adduct. So that's what we're doing here. So, this, is going to attack, this, to form this tetrahedral adduct. And, you've seen again, the tetrahedral chemistry over again, when I showed you how you formed this imine in the first place. So tetrahedral chemistry, tetrahedral intermediates, transition states, which collapse to form back imines, or carbonyls, happens over and over, and over again in pyridoxal chemistry. So now what happens is we have this tetrahedral intermediate, or transition state-- I have it in parentheses, because it's a high energy intermediate, it doesn't sit around, and let us look at it. Most of the time you can never see it. It's very high on a reaction coordinate. This, collapses then, and when it collapses, what do you generate? You generate the lysine in the active site. So now we have generated a residue in the active site that can function as a general acid, or general base, catalysis through the rest of the chemical transformations. And what've we done, is we've converted this imine with lysine, now to an imine of the amino acid. So that's what transimination is, one imine to another imine. The imine that's covalently bound to the protein through pyridoxal, to an amino acid imine. OK, so, that's the first step. The second step is that ultimately, in almost all pyridoxal reactions, you want to remove this alpha hydrogen. And that alpha hydrogen, again, is extremely non-acidic but by complexing the amino acid to pyridoxal-- this is what the function of the cofactor is-- you are enhancing the acidity of this alpha hydrogen. You're making it easier for a group in the active site, a general base catalyst, like lysine, can now pull off this proton to generate this intermediate. Now why is this hydrogen more acidic? Well, if you look at the structure, you can draw all kinds of resonance structures which shows that this carbanion is more stabilized, because it's attached to the pyridoxal cofactor. So if you look at this, you can draw this resonance structure, which is shown here, and you can draw 20 other resonance structures. OK, so, the key here is you can remove the alpha hydrogen, because you're able to delocalize these unpaired electrons on this carbon over the entire system. So let me show you what that looks like. So if I draw-- this is called Dunathan's hypothesis-- so here's our pyridine ring, here's our imine, and here's our amino acid. 
Here's the carboxal-- here's the side chain in our group and here's a carboxylate, OK. So the idea is, you have like a benzene ring-- if you haven't seen pyridine rings-- but you have a pi cloud that delocalizes-- where these electrons are completely delocalized over the aromatic ring-- but here, you also have a pi cloud and these things are close enough so you can delocalize. Now what you're going to do-- and the way nature decides what chemistry happens is since we want to cleave, in this case the carbon-hydrogen bond, she places that carbon-hydrogen bond-- by complexing the carboxylate and complexing whatever the R group is-- she places that carbon-hydrogen bond perpendicular to this plane of the pi aldimine system. So what you're doing then is the lysine, that we just liberated through doing the transimination reaction, you now generate an empty P orbital with unpaired electrons in it, generate the carbanion and now this system can completely delocalize. So, again, this pyridine ring is planar. So you can, completely, delocalize the electrons over this whole system. This is a pi cloud. And now this is already set up so that it can delocalize over this whole system. So what you've done then is because of the ability to stabilize this carbanion you're making that hydrogen more acidic. So if you wanted to say, for example, cleave-- remember, I told you at the very beginning, you might be able to decarboxylate-- that enzyme, would place that CO2 perpendicular to the plane of the pi aldimine system and use the same strategy to stabilize the resulting carbanion intermediate. We're not going to talk about that chemistry in 507. So what you've done then-- what the beauty of pyridoxal is that she's increased this acidity, and allows you a great deal of flexibility. Because once you generate that carbanion-- those of you who've had 513-- you'll, immediately, recognize if you have a leaving group on a carbon adjacent to this carbon, you can do an elimination reaction. So it sets up a whole series of transformations. For today, were only focusing on the transimination reaction. How do we convert this to an alpha keto acid, and the pyridoxal to pyridoxamine? That's the question we're asking. So we do this chemistry. And now what we need to do, we can use this resonance structure, we want to ask the question, what is the product? Well, we want to get to an alpha keto acid. OK, and if you look at this structure, this molecule is an imine of an alpha keto acid. So this is, exactly, the state we want to be in, but then we have all of this-- we have this reactive intermediate here. So what we want to do is protonate some place here to generate this state. So that the last step in all pyridoxal reactions is hydrolysis. And now we're set up with the hydrolysis reaction to generate the alpha keto acid. So, we then want to ask the question, where can we protonate? And, so, again, we have this lysine which we now have used to pull off the alpha hydrogen. It's now protonated. So now instead of being a general based catalyst, here, it's functioning as a general acid catalyst. And so now what can happen is you can pick up a proton from this lysine, it's supplying it with that proton, to generate this structure and regenerate lysine that can function now as a general base catalyst. So it's toggeling between general acid and general base catalysis. And now remember, what we want to get in the end is this pyridoxamine, and we want this alpha keto acid, and now we're set up to rapidly go to an alpha keto acid. 
Where have you seen chemistry like this before? You've seen it in the aldolase reaction that we talked about with carbon-carbon bond forming reactions. So the last step in all PLP dependent transaminations is hydrolysis. So here we have the lysine acting as a general base to activate water for a nucleophilic attack on this imine, which is activated to have water add, and now you form again, your tetrahedral transition state. I have all of these unstable species in brackets. We really-- sometimes we see them, sometimes we don't. But you have to work hard to see them. And now, simply, the tetrahedral adduct collapses to form pyridoxamine and forms the alpha keto acid. So that's where we wanted to get. But now what happens is we're in a different form of the cofactor-- instead of being in the aldehyde form we're in the amine form, pyridoxamine. So what we want to do now is reverse this reaction using a different alpha keto acid, and we will generate a different amino acid. So now what can happen-- so in the pyridoxamine form, we then can bind a different alpha keto acid-- remember, in the first slide, I showed you three different alpha keto acids-- oxaloacetic acid, alpha-ketoglutarate, pyruvate-- and you can reverse this whole process. And, in the end, what you end up with is a different amino acid and you regenerate your imine of pyridoxal. So out of all of them, the only pyridoxal phosphate-requiring enzymes that go from the aldehyde, or imine, form to the pyridoxamine form are these transamination reactions. And, so, this is the most complicated. Normally, at the end of your reaction you wind up in this state. And this probably seems extremely confusing to most of you, but after you solve three or four problems where you have to look at the actual transformations, and think about this tetrahedral chemistry, imines, and amino groups, and alpha keto groups, you will, I think, be able to easily see how ingenious nature has been to design pyridoxal phosphate. And, I think, the most amazing thing, of course, is that pyridoxal phosphate without any enzyme-- I told you it can catalyze many, many reactions; it can do all the reactions you do with the enzyme-- and it does it spontaneously, at room temperature, at pH 7. What the enzyme does is allow only one of these reactions, by having everything positioned exactly the right way in the active site. So this is one of the cofactors that I thought was amazingly cool when I was in graduate school, that made me want to become a biochemist. |
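To make the overall exchange concrete, here is a minimal bookkeeping sketch, in Python, of what a transaminase accomplishes: the amino group of one amino acid is parked on the cofactor (PLP becomes PMP) and is then handed to a different alpha keto acid. The amino acid / alpha keto acid pairings are the standard ones mentioned above; the function and variable names are purely illustrative and not from the lecture.

```python
# A minimal bookkeeping sketch of PLP-dependent transamination.
# Each amino acid is paired with the alpha-keto acid it becomes when its
# amino group is handed off to the cofactor (PLP -> PMP).
AMINO_TO_KETO = {
    "alanine": "pyruvate",
    "aspartate": "oxaloacetate",
    "glutamate": "alpha-ketoglutarate",
}
KETO_TO_AMINO = {keto: aa for aa, keto in AMINO_TO_KETO.items()}

def transaminate(amino_acid, keto_acceptor):
    """Overall reaction: amino acid 1 + keto acid 2 -> keto acid 1 + amino acid 2.
    First half: amino acid 1 + PLP -> keto acid 1 + PMP.
    Second half: PMP + keto acid 2 -> PLP (regenerated) + amino acid 2."""
    keto_product = AMINO_TO_KETO[amino_acid]      # PLP form -> PMP form
    amino_product = KETO_TO_AMINO[keto_acceptor]  # PMP form -> PLP form
    return keto_product, amino_product

# Example: alanine + alpha-ketoglutarate -> pyruvate + glutamate
print(transaminate("alanine", "alpha-ketoglutarate"))  # ('pyruvate', 'glutamate')
```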
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Lipod_Catabolism_Fatty_Acid_BetaOxidation.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN ESSIGMANN: Let's look at Storyboard 17, Panel A. So far in 5.07, we've looked in detail at carbohydrate catabolism, and we've seen that the complete catabolism of a molecule of glucose by way of glycolysis, pyruvate dehydrogenase, and the TCA cycle results in the generation of about 36 to 38 molecules of ATP. At this point, we're going to turn to catabolism of another metabolic fuel, lipids. As we'll see, lipids, because they contain more energy per gram, being more highly reduced than carbohydrates, will produce much more ATP per unit weight than carbohydrates. For example, if we were to metabolize hexanoic acid, the six carbon hydrocarbon, the same number of carbons as glucose, we'd get over 50 ATPs rather than about 35 ATPs per molecule, which we would have gotten from glucose. Now let's look at Panel B. Lipids have many roles in biological systems. The first that will be relevant to this lecture is that they are our primary energy reserve. In fact, about 80% of our stored energy is in the form of lipid. Second, as JoAnne taught us, lipids are key components of biological membranes, and thus, they contribute in a major way to the compartmentalisation that's critical for normal biological functions. And the third role is that some lipids are signaling molecules. The one I've pictured here is Estradiol. While Estradiol may not look like a typical lipid, it actually is. Indeed, it's made by a lipid biosynthetic pathway that starts with Acetyl Coenzyme A. And we'll see later that Acetyl CoA also serves as the precursor for the canonical lipid fatty acid. Fatty acids are the classic lipid. They're hydrocarbon chains that are fully saturated or contain a small number of double bonds and sometimes branches. Sometimes the fatty acid moiety is esterified to the backbone of glycerol. If you have three fatty acids on a glycerol backbone, one to each of the hydroxyl groups, that's called a triacylglyceride. That's our primary storage form of energy. With phospholipids, one of the hydroxyl groups of the glycerol backbone is esterified to either phosphate or some kind of decorated phosphate where the decoration could be a sugar or some other moiety. As was seen in JoAnne's lectures, phospholipids are the key building blocks of biological membranes. Now take a look at Panel D. The rest of this lecture will deal with the details of how fatty acids are broken down to carbon dioxide with the intermediate production of reducing equivalents in the form of NADH and FADH-2 and ultimately energy equivalents in the form of ATP. Cells can acquire lipids directly from the blood where typically they're transported by albumin. They can come directly from our diet or from other organs or from breakdown of triacylglycerides within our cells. We're going to be looking at four steps in fatty acid catabolism. The first step involves the appearance of the fatty acid in the cytoplasm of the cell. The fatty acid can come in either from breakdown of a triacylglyceride stored in the cytoplasm, or the fatty acid could appear from transport across the membrane from the blood.
In the cytoplasm, the fatty acid, which is technically a carboxylic acid, will be thioesterified to form a fatty Acyl Coenzyme A. The second step of fatty acid catabolism involves the transport of the fatty Acyl Coenzyme A ester into the mitochondrion, which is the site of fatty acid oxidation. The third step in fatty acid catabolism involves the actual oxidation process itself. The series of reactions is called beta oxidation. Beta oxidation results in the conversion of the carboxylic acid starting material to Acetyl Coenzyme A. There can be several fates for the Acetyl CoA produced by beta oxidation. But the one we're going to be looking at is its entry into the TCA cycle, where the Acetyl CoA is metabolized to carbon dioxide with the generation of reducing equivalents. Those reduced electron carriers have the potential to be converted into energy currency in the form of ATP. The fourth topic or stage in fatty acid catabolism that we'll deal with concerns specialized endings of the catabolic pathway. The first problem that we'll look at concerns the fact that some fatty acids have an odd number of carbons in them, whereas the classical fatty acid beta oxidation system was primarily designed to process fatty acids with even numbers of carbon units. The second problem that we'll look at as an ending of fatty acid oxidation concerns the fact that some of the fatty acids in our diet have a double bond that is either in the wrong stereochemistry or is in the wrong place to enable easy metabolism. Nature has worked out ways to reposition the double bond to facilitate the entry of the molecule into classical beta oxidation schemes. Let's look now at Panel E. The first step in fatty acid catabolism involves thioesterification of the carboxylate residue of the fatty acid. We're going to see in a couple of minutes that placing a Coenzyme A moiety on the carboxylate is going to enable chemistry at the beta carbon. Without the Coenzyme A group, chemistry at the beta carbon would be impossible. The enzyme involved is called Fatty Acyl Coenzyme A synthetase, sometimes called ligase. And it additionally goes by the more common name Thiokinase. This enzyme uses ATP to adenylate the carboxylate residue. And then it allows Coenzyme A to replace the AMP residue with the resulting product being a fatty Acyl Coenzyme A. This reaction happens in the cytoplasm of the cell. Beta oxidation, however, is going to occur in the mitochondrial matrix. So we have to find a way to get this fatty Acyl Coenzyme A into the mitochondrial matrix. Let's go now to Storyboard 18, Panel A. Panel A shows the cytoplasm, the mitochondrial outer membrane, the intermembrane space, the inner membrane, and the mitochondrial matrix. As I just said, the matrix is going to be the site at which beta oxidation occurs. The intermembrane space contains a small alcohol called Carnitine. And the mitochondrial outer membrane contains an enzyme called Carnitine Acyl Transferase I or CAT-I. CAT-I removes the Acyl group from the Fatty Acyl Coenzyme A in the cytoplasm and transfers it to the alcohol group in the center of the carnitine molecule forming an ester of the fatty acid with carnitine. This ester is delivered to CAT-II, which is embedded in the inner membrane on the matrix side. CAT-II will then transfer the Acyl functionality to a Coenzyme A, restoring the fatty Acyl Coenzyme A molecule.
Thus, CAT-I and CAT-II, working in a concerted way, result in the effective transfer of a fatty Acyl Coenzyme A from the cytoplasm into the mitochondrial matrix, the site of beta oxidation, which will be our next step. Let's now look at Panel B. This panel shows an inset with the mitochondrial inner membrane, the electron transfer complex, and ETFP, the electron transferring flavoprotein, which is going to be the entry point of electrons from the initial step of oxidation of the Fatty Acyl CoA into the electron transport chain. We also see in this panel the fatty acid palmitate, the C-16 straight chain carboxylic acid. The hydrogen alpha to the Coenzyme A ester is relatively acidic, therefore this hydrogen will be taken off to form an alkene. And the hydride will be transferred from the beta carbon, the third carbon from the right. Those electrons are transferred to a flavin in the electron transferring flavoprotein, ETFP. Eventually, those electrons are transferred to Coenzyme Q to form the reduced form of Coenzyme Q. Those electrons then travel along through the electron transport chain. The organic product of this reaction is a trans enoyl Coenzyme A. Now let's take a look at Panel C. In the next step, water is added to the 3 Carbon of the enoyl Coenzyme A. The resulting product is a 3 Hydroxy Fatty Acyl Coenzyme A. We've seen oxidation of alcohols that looks something like this many times. For example, malate being oxidized by malate dehydrogenase to oxaloacetate. And as we have seen before, the hydroxyl group is converted to a keto functionality. Hydride transfer goes to NAD+ to form NADH. The enzyme that does this conversion is 3 Hydroxy Fatty Acyl Coenzyme A Dehydrogenase. At this point, we have generated one FADH-2 and one NADH in the overall process of the beta oxidation scheme. The 3 Keto Acyl Coenzyme A is now set up to release a first molecule of Acetyl Coenzyme A. The enzyme beta ketothiolase has a cysteine on it. The thiol of the cysteine will attack the carbon that has the keto group and release Acetyl Coenzyme A. The residue is a thioester in which the residual 14 carbons of the palmitate that we started with are now connected to beta-ketothiolase. Lastly, beta-ketothiolase will transfer these residual 14 carbons to a Coenzyme A molecule, forming the Fatty Acyl Coenzyme A that will be 14 carbons long, that is, it's two carbons shorter than the 16 carbons of palmitate that we started with. Overall, this process is called beta oxidation. The system is now set up to allow the 14 carbon molecule to go to 12 to 10 and so on, until the entire 16 carbon hydrocarbon has been converted through seven rounds of beta oxidation to eight molecules of Acetyl Coenzyme A. If these eight molecules of Acetyl CoA are further oxidized by the TCA cycle, you'll get 96 ATP molecules. And of course, along the way, in each round of beta oxidation, you will also produce seven FADH-2s. The seven FADH-2s will be converted into 14 ATPs. You will also get seven NADHs, and they will be converted into 21 ATPs. So the full conversion of the 16 carbon hydrocarbon palmitate will result in a total of 131 ATPs. In order to put together a full balance sheet, however, keep in mind that we needed to use several ATPs early in the process in order to prime the system. That is, Fatty Acyl Coenzyme A synthetase used 2 high energy phosphate bonds in order to prime the fatty acid for production of the Coenzyme A intermediate that's necessary for beta oxidation. |
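A quick arithmetic check of the palmitate tally above, under the lecture's conversion conventions (each FADH2 worth about 2 ATP, each NADH about 3 ATP, each acetyl CoA oxidized in the TCA cycle about 12 ATP). This is only a bookkeeping sketch; the variable names are illustrative.

```python
# Palmitate (C16) via beta oxidation, using the lecture's conventions.
rounds     = 7          # rounds of beta oxidation
acetyl_coa = 8          # two-carbon units produced
fadh2      = rounds     # one FADH2 per round
nadh       = rounds     # one NADH per round

atp_from_acetyl = acetyl_coa * 12   # 96
atp_from_fadh2  = fadh2 * 2         # 14
atp_from_nadh   = nadh * 3          # 21

gross = atp_from_acetyl + atp_from_fadh2 + atp_from_nadh
print(gross)        # 131, as stated in the lecture
print(gross - 2)    # 129 net, after the 2 ATP equivalents spent by the thiokinase
```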
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Regulation_of_Metabolism.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN ESSIGMANN: Let's go now to storyboard 36, starting with panel A, Regulation of Metabolism. So far in 5.07, we have looked at metabolic pathways from the perspective of the cell in organelles within the cell. Using physiological scenarios, I've tried to show you how pathways within the cell respond to a change in the environment, for example, by running away from a stressor, such as a dog or dealing with the problems of starvation or diabetes. We have looked a little bit at the ways that individual steps and pathways are regulated. We have not, however, looked at the ways that individual pathways and individual organs coordinate their respective activities in order to accommodate the needs of the entire organism. Coordinated pathway networking is the focus of this lecture. Let's turn to panel B. I'm going to start out by talking a little bit about the seven pathways that we have studied in detail in 5.07 and look at how they're regulated. As I've said before, typically pathways are regulated at their rate-determining steps, that is, the step at which you'll find an enzyme that has a large free-energy change associated with the conversion of substrates to products. In the first pathway we studied, glycolysis, there are three irreversible steps, specifically the glucokinase/hexokinase step; secondly, the phosphofructokinase-1 step; and thirdly, the last step, pyruvate kinase. These are all sites of pathway regulation. Our second pathway, the tricarboxylic acid cycle, or Krebs cycle, was regulated at every step at which NADH is produced, that is, citrate dehydrogenase, alpha-ketoglutarate dehydrogenase, and malate dehydrogenase. While pyruvate dehydrogenase is not formally part of the TCA cycle, I'll point out here that it is also regulated by NADH. In all four cases, NADH feedback inhibits the enzymes that produce it. Under conditions of excessive TCA cycle activity, you'll find that NADH levels will drop, and accordingly NAD+ levels will increase. The reduction in NADH will result in activation of the pathway. In other words, the disappearance of NADH results in the uninhibition of the pathway. Our third pathway, gluconeogenesis, is regulated at the pyruvate carboxylase step at the fructose 1,6-bisphosphatase step and at the glycogen synthase/glycogen phosphorylase steps. We're going to be looking at this pathway in some detail in a little while. Our fourth pathway, fatty acid catabolism, is regulated at the CAT, or Carnitine Acyltransferase step, which is inhibited by malonyl coenzyme A. Malonyl CoA is one of the key precursors for fatty acid biosynthesis. It makes sense that the concentration of malonyl coenzyme A, if high, would inhibit the uptake of fatty acids into the mitochondrial matrix. Keep in mind that uptake into the matrix is the job of CAT1. Shutting down CAT1 by the high concentration of malonyl CoA prevents entry of fatty acids into the mitochondrial matrix, where they otherwise would be subjected to beta-oxidation. By turning off CAT1, malonyl coenzyme A prevents the futile cycle of simultaneous fatty acid degradation and biosynthesis. The fifth pathway is fatty acid biosynthesis. 
Acetyl-CoA Carboxylase, or ACC, is the enzyme that makes malonyl coenzyme A out of its precursor, acetyl coenzyme A. This enzyme, ACC, is activated by insulin. As we've seen before, insulin detects the fed state in the organism. Hence, following a meal, insulin levels rise, and that's the signal that tells the cells of the body to take up nutrients. That's just one example. After a meal, glucose levels will rise in the blood. Insulin will be released and drive the glucose into the cell. The cell will then activate pathways by which the glucose is converted to acetyl-CoA. Then the acetyl-CoA will be converted into fatty acids, and then ultimately, into triacylglycerides for energy storage. Our sixth regulated pathway is the pentose phosphate pathway. Glucose 6-phosphate dehydrogenase is the first step in the pathway. And as we often see, an early or committed step is where regulation happens. One of the products of the pentose phosphate pathway is NADPH. If the cell has used a lot of NADPH, for example for biosynthesis, its levels will drop. Coordinately, there will be an increase in NADP+. In the case of glucose 6-phosphate dehydrogenase, we find that NADP+, which is the product of excessive reductive biosynthesis, as well as other activities, such as combating oxidative stress, activates the dehydrogenase in order to enable the synthesis of additional NADPH in order to sustain biosynthesis. The last pathway, the seventh, is electron transport and oxidative phosphorylation. This is our most robust pathway for the production of ATP. When there's a lot of demand for ATP, ADP levels will rise in the cell. And it's ultimately the level of ADP that regulates electron transport in oxidative phosphorylation. More about that in a few minutes. Let's now look at panel C. Organs communicate with one another by the nervous system, of course, but also by hormones, small molecules that are secreted by one organ that will have an effect at one or more organs distal to the first organ. Changes in our environment are detected by the brain with input from our sensory organs. Our internal organs, such as the liver, pancreas, kidneys, adrenal glands, and muscles will detect signals, either independently of the brain or after an instruction set is received from the brain. The resulting signal network will allow the entire organism to be able to adapt to the new environment, be it one of stress, for example, being chased by a dog, or one of, for example, hunger. We'll talk later about the adrenal glands, which are going to respond to signals that come in from sensory organs that tell our muscles to start running and to tell the liver to start to provide the muscles with the glucose they need in order to sustain running. Later we'll be looking in some detail at the adrenal-produced hormone epinephrine, also called adrenaline. Adrenaline will have a profound impact, both in the muscles and in the liver, to allow these organs to do their respective jobs. The pancreas is a very important organ in that it provides exocrine functions that help with digestion and endocrine functions that enable us to regulate or balance fuel metabolism. The alpha cells of the pancreas produce glucagon. The alpha cells sense hunger. They secrete glucagon into the blood, which travels to organs that represent our fuel depots. And fuel from those depots, for example, fatty acids and glucose, will then be supplied to other tissues of the body that are in need of nutrition. 
The beta cells of the pancreas sense what we call the fed state, and they produce insulin. Insulin instructs the various organs of the body to take up fuel in the wake of us having eaten a meal. Now let's look at panel D. There are three general paradigms by which pathways are regulated. The three hormones that I've just mentioned typically will cause the activation of a kinase that will phosphorylate a target protein, resulting in either increased or decreased activity of that protein. We call this hormonal or, more properly, covalent control, because there will be a covalent modification of a protein that will be responsible for pathway regulation. The second major paradigm by which pathways are controlled is allosteric regulation. Earlier in 5.07, JoAnne Stubbe showed us how hemoglobin, the molecule that carries oxygen in the blood, is controlled by bisphosphoglycerate and protons, which act as allosteric effectors. Similarly, other small molecule effectors will control the major pathways of metabolism. We're going to be looking at phosphofructokinase-1 as our prime example. The third strategy of regulation is what I call "acceptor control." This is the way that electron transport and oxidative phosphorylation are regulated. To the right side of panel D is an abbreviated metabolic pathway that I'm going to use to describe each of these three paradigms. At the upper left is a box that contains glycogen synthase and glycogen phosphorylase. Earlier in 5.07, I told you how phosphorylation of a specific serine residue on glycogen phosphorylase results in a dramatic increase in the activity of that enzyme. A little later, we'll see that covalent modification, again, by serine phosphorylation of glycogen synthase, results in a decrease in activity of that enzyme. Hence, covalent modification of these two proteins enables, in one case, activation of the enzyme and, in the other case, inhibition of the enzyme. This reciprocal control prevents futile cycling. Now let's look at the box in the center of this abbreviated metabolic pathway. The glycolytic enzyme, PFK-1, Phosphofructokinase-1, converts fructose 6-phosphate to fructose 1,6-bisphosphate. We're going to see that a small molecule derivative of fructose 6-phosphate, specifically fructose 2,6-bisphosphate, will act as a powerful allosteric stimulator of PFK-1. I'll also point out that AMP, Adenosine Monophosphate, will also stimulate this enzyme. I'll also point out here that PFK-1 is a tetramer. Oftentimes, as JoAnne taught us, proteins that are multimers are the ones that are subject to allosteric regulation. Our third paradigm of regulation is acceptor control, shown in the box at the bottom of panel D. During periods of intense physical activity, ATP is consumed and converted to ADP, and NADH is converted to NAD+. Rising levels of ADP in the mitochondrial matrix activate the proton-translocating ATP synthase. As more protons flow through the synthase, more ATP is made. They call this acceptor control, because the regulatory molecule is ADP, which is the, quote unquote, "acceptor of phosphate" in the synthesis of ATP. Because we covered acceptor control in some detail in the lecture on electron transport and oxidative phosphorylation, I'm going to leave that topic for now and focus on covalent control and allosteric control. Let's turn now to storyboard 37, starting with panel A.
With regard to the paradigm of covalent control, I'm going to focus on the pair of enzymes, glycogen synthase and glycogen phosphorylase, which reciprocally regulate glycogen synthesis and glycogenolysis. Let's consider a scenario in which there's some kind of stressor. The muscles have to be activated in order to be able to deal with the stress, and the liver has to be able to provide the resources to the muscles that will allow the muscles to continue intense physical activity. Now let's look at panel B. In our scenario, you've just seen something frightening. Your brain then sends an electrical signal to the adrenal glands, which then send a chemical signal, specifically adrenaline, to the liver and to the muscles. At the top part of this response scenario, both the liver and muscles are going to be responding in a very similar way. However, at the bottom part of the response scenario, the liver and muscles are going to be activating very different pathways. Specifically, we're going to see that muscles will strongly activate glycolytic pathways in order to enable the production of ATP to keep the muscles running. The liver, by contrast, is going to be activating pathways that are more like those of gluconeogenesis, that is, spilling out fuel from the liver to provide the muscles with the energy-rich resources that it needs to keep us running away from our stressor. Adrenaline, or epinephrine, travels from the adrenal glands through the blood. It takes only a second or two for this to happen. It's received by the liver and the muscles at the beta-adrenergic receptor. The blood concentration of epinephrine is very low, something of the order of 10 to the minus 10 molar at this point. There's going to be a very substantial signal amplification as we move ahead. Keep in mind for later that the initial triggering signal is in the 10 to the minus 10 molar range. The arrival of epinephrine at the receptor results in structural changes in the transmembrane domain of the beta-adrenergic receptor. The signal is received by a heterotrimeric G protein, shown as the circle with alpha, beta, and gamma subunits. This G protein, in its inactive state, has a bound molecule of GDP, that is, guanosine diphosphate. Upon receipt of the signal from the beta-adrenergic receptor, the heterotrimeric G protein ejects the beta and gamma subunits, and the GDP molecule, which is non-covalently bound, is released. The GDP is replaced by a GTP, guanosine triphosphate. This replacement results in the formation of the alpha subunit with a non-covalently bound GTP. This is the active form of the enzyme. It translocates along the inner surface of the membrane until it encounters AC, or adenyl cyclase. The GTP-bound G protein activates adenyl cyclase. The activated AC dynamically starts converting ATP, which is very abundant in the cell, into cyclic AMP. We call cyclic AMP the second messenger. In our scenario, the first messenger is epinephrine, which interacted with the beta-adrenergic receptor. The second messenger is cyclic AMP, which is produced by the activation of adenyl cyclase, which is asymmetrically associated with the inner surface of the cell's membrane. Let's turn now to panel C. In the upper left of panel C, you'll see a box. This box depicts the chemical mechanism leading to the production of cyclic AMP. Let's now look at the main part of the panel. In a very short period of time, a matter of seconds, the concentration of cyclic AMP within the cell goes from about 1 micromolar up to about 5 micromolar. 
So the presence of epinephrine at the 10 to the minus 10 molar concentration outside the cell results in a perturbation of cyclic AMP concentrations inside the cell, bringing cyclic AMP concentration to about 10 to the minus 6 molar. This is a very substantial increase in signal, which an engineer would call gain. The increasing cyclic AMP concentration is going to activate a kinase cascade. And the initial kinase that's going to be activated is called Protein Kinase A, or PKA, which is represented symbolically as a C with a circle around it in my drawing. In its inactive state, protein kinase A is in a complex involving two molecules of itself and two molecules of a regulatory protein, which I've indicated as an R in the middle of a circle. Cyclic AMP forms a tight complex with R. This basically causes R to dissociate from the active subunit C, that is, the catalytic portion of protein kinase A. The catalytic portion of protein kinase A is now free to act as a kinase to phosphorylate the next kinase in the cascade, which is called SPK, or Synthase Phosphorylase Kinase. A serine residue on SPK is phosphorylated, and that event converts this kinase into its active form. I'll also point out that one of the subunits, the delta subunit of SPK, is the calcium-binding protein calmodulin, which gives a second level of regulation to this enzyme. Specifically, this enzyme is regulated not only by the concentration of cyclic AMP, but also by the levels of calcium within the cell. The activated SPK, or synthase phosphorylase kinase, is now going to phosphorylate two other proteins, glycogen phosphorylase and glycogen synthase. As I mentioned earlier, phosphorylation of glycogen phosphorylase results in the enzyme becoming more active. That is, it starts breaking down glycogen by phosphorolysis to produce glucose 1-phosphate, which will then be available for further metabolic processing. In the muscle, glucose 1-phosphate will be converted by phosphoglucomutase into glucose 6-phosphate, which will then be a substrate to initiate glycolysis, with the ultimate generation of lots of ATP for the muscle. In the liver, phosphoglucomutase will convert glucose 1-phosphate into glucose 6-phosphate, but in this case, gluconeogenesis will be activated. Glucose 6-phosphatase will remove the phosphate from glucose 6-phosphate, converting it to glucose. The glucose will then be secreted from the liver, go out into the blood, and then go to the muscle to help the muscle carry out glycolysis. Now let's look at the other target of SPK, synthase phosphorylase kinase, specifically, glycogen synthase. In this case, glycogen synthase phosphorylation results in a less active protein, and hence, glycogen synthesis will cease. This makes sense, because a cell under stress would not want to be making glycogen. It wants to be using energy and not storing fuel. Let's step back and look at the big picture at this point. The interaction of a hormone resulted in the covalent modification of proteins. One of those phosphorylated proteins, glycogen phosphorylase, triggered glycogen breakdown. Phosphorylation of a second protein, glycogen synthase, turned it off. Turning off glycogen synthase helped us avoid the otherwise futile simultaneous synthesis and breakdown of glycogen. So overall, this is an example of a hormone causing covalent modification of proteins in such a way that it altered the activity of those proteins in a manner that made sense given the physiological challenges to the organism. 
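The amplification, or gain, described above can be sketched numerically. Only the two concentrations come from the lecture (epinephrine at roughly 10 to the minus 10 molar outside the cell, cyclic AMP rising to roughly 10 to the minus 6 molar inside); the per-stage turnover numbers in the sketch below are invented purely to illustrate how a cascade of catalysts multiplies a signal, and are not measured values.

```python
# Illustrative sketch of cascade gain; turnover numbers are hypothetical.
epinephrine_M = 1e-10
camp_M        = 1e-6
print(f"cAMP / epinephrine gain: {camp_M / epinephrine_M:.0e}x")  # about 1e4-fold

# Hypothetical turnovers per activated molecule at each stage of the cascade
stages = {
    "adenylyl cyclase -> cAMP": 100,
    "cAMP -> active PKA": 1,
    "PKA -> phospho-SPK": 100,
    "SPK -> phospho-glycogen phosphorylase": 100,
    "phosphorylase -> glucose 1-phosphate": 1000,
}
amplification = 1
for stage, turnover in stages.items():
    amplification *= turnover
    print(f"{stage}: cumulative amplification {amplification:.0e}")
```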
Next, let's turn to storyboard 38, panel A. Our next regulatory paradigm will be the use of allosteric effectors to regulate a pathway. As before, I'm going to use the physiological scenario of stress. But in this case, I'm going to show how stress will cause the production of small-molecule allosteric effectors that will activate glycolysis in the muscle and activate gluconeogenesis in the liver. Our focal point is going to be PFK-1, phosphofructokinase-1, of glycolysis. Of central importance is going to be a small molecule, fructose 2,6-bisphosphate, which we'll see is going to be a very powerful allosteric stimulant of PFK-1. Fructose 2,6-bisphosphate is made by the enzyme PFK-2, phosphofructokinase-2. Let's start out by looking at the very short biochemical pathway shown in panel A. We see glucose converted by hexokinase or glucokinase to glucose 6-phosphate and then some equilibrium steps up until we get fructose 6-phosphate. Ordinarily, we think of fructose 6-phosphate as continuing in the glycolytic pathway, but let's not think of it that way right now. The chemical structure of fructose 6-phosphate is shown slightly to the right. Note that fructose 6-phosphate has a phosphate on the 6-hydroxyl group, and that the 1-hydroxyl functionality has only a hydrogen. This is the alpha anomer of fructose, as evidenced by the fact that the hydroxyl group on carbon-2, the anomeric position, is down, or under the furanose ring. With a little bit of electron pushing, you can change the stereochemistry at the 2-carbon such that the hydroxyl group would be up, or in the beta configuration. In the beta configuration, the hydroxymethyl group of fructose 6-phosphate, which includes its 1-carbon, would be down. The beta and alpha-anomers of fructose 6-phosphate are in equilibrium. That is, they both exist at the same time. The kinase, phosphofructokinase-2, catches the beta anomer of fructose 6-phosphate and phosphorylates it, producing the molecule at the lower right of this panel. That's the actual structure of fructose 2,6-bisphosphate. This is the molecule that's going to serve as the allosteric effector of the glycolysis pathway. The PFK-2, or phosphofructokinase-2 enzyme, is shown to the left. It's a complex enzyme having lots of different functionalities. The hatched circle within the larger circle at the top represents the phosphofructokinase domain, which converts fructose 6-phosphate to fructose 2,6-bisphosphate. That is the structure at the lower right. The hatched domain at the bottom is the fructose bisphosphatase domain. This is an unusual enzyme. In the top domain, it acts as a kinase, in the bottom domain, acts as a phosphatase. PFK-2 has two hydroxylated amino acids, one at about 3 o'clock as drawn and the other at about 6 o'clock. When the hydroxyl group at the bottom is present as a pure, unmodified hydroxyl group, the protein acts as a kinase. When the 6 o'clock domain is phosphorylated, the enzyme acts as a phosphatase. Protein kinase A, or PKA, from the previous storyboard, where we talked about hormonal regulation, is the kinase that phosphorylates the hydroxyl group at 6 o'clock on the protein. As we'll see later, the hydroxyl group at 3 o'clock is targeted by a kinase called AMP kinase in the muscle. Before we go on, I want you to look carefully at the structure of fructose 2,6-bisphosphate at the bottom right of panel A. Compare that structure with the structure of fructose 1,6-bisphosphate at the right in panel B. 
If you squint and look at these two molecules, you'll see that they look remarkably alike. In both cases, the phosphate group is above the plane of the furanose sugar. However, in the case of fructose 2,6-bisphosphate, the phosphate is on the 2-carbon, whereas with fructose 1,6-bisphosphate, the phosphate is on the 1-carbon. The way these molecules look will become important in a few minutes. Now let's take a look at the pathway as shown in panel B. Panel B reflects the liver in its normal, that is, non-stress state. So this is just the liver doing everyday liver business. Glucokinase of the liver is converting glucose to glucose 6-phosphate. The alpha anomer of fructose 6-phosphate is produced. Most of the fructose 6-phosphate is converted to fructose 1,6-bisphosphate. In the liver cell at this time, there's probably some low level of glycolytic activity going on. However, because of natural anomerization, some of the alpha-fructose 6-phosphate is converted to its beta-anomer. Phosphofructokinase-2, or PFK-2, is shown to the lower left, and it's active. Note that neither of its hydroxyl groups, at this point, are phosphorylated. And its kinase site will be converting the beta-fructose 6-phosphate anomer to its phosphorylated form, that is, beta-fructose 2,6-bisphosphate, which has the structure shown at the bottom right of the previous panel. Fructose 2,6-bisphosphate is a positive allosteric effector of phosphofructokinase-1. So it facilitates the glycolytic pathway that is enabling net flux from left to right in the pathway as shown. In addition to stimulating glycolysis, fructose 2,6-bisphosphate strongly inhibits gluconeogenesis. It does this by binding at the active site of fructose 1,6-bisphosphatase, the gluconeogenic enzyme, clogging up the active site, and thus disabling the gluconeogenic pathway. The reason that it's such a good inhibitor of gluconeogenesis stems from an examination, once again, of the structures of fructose 2,6-bisphosphate in the bottom right of the previous panel and fructose 1,6-bisphosphate shown to the right on this panel. Keep in mind that fructose 1,6-bisphosphate, while it's the product in glycolysis, is the substrate for the gluconeogenic reaction. If the active site of the fructose 1,6-bisphosphatase is clogged by fructose 2,6-bisphosphate, then it's not going to be able to process the fructose 1,6-bisphosphate in the gluconeogenic direction. Hence, gluconeogenesis is strongly inhibited. All of the above shows us that fructose 2,6-bisphosphate is really important. It stimulates the forward direction in the liver that is causing glycolysis to have a net flux from left to right. And in addition, it strongly inhibits the back reaction, that is, the reaction in the gluconeogenic direction. Turn at this point to panel C. Now let's ramp up the scenario and think about a situation in which you're being chased by a dog. You want to be able to generate glucose in the liver and then export that glucose to the muscles so that you can run away from your foe. This is a situation in which you would like to have your liver stop doing glycolysis. You want the liver to strongly turn to gluconeogenesis in order to manufacture glucose for the muscle. At this point, refer back to the storyboard that dealt with covalent control, where protein kinase A was activated. Recall the part of the lecture when I talked about covalent control, and we were talking about the glycogen phosphorylase step in the liver. 
In the liver, we generated first glucose 1-phosphate and then glucose 6-phosphate. And then the phosphate was lopped off the glucose 6-phosphate to produce glucose that went out into the blood. In the present case, we're looking at a step that's further down in the gluconeogenesis pathway. Specifically, we want to activate the fructose 1,6-bisphosphatase in order to push even more carbohydrate toward the production of glucose. So in panel C, we see stress inducing the cyclic AMP cascade that results in activation of protein kinase A. And protein kinase A is going to phosphorylate the southernmost, that is, 6 o'clock domain, on the PFK-2, or phosphofructokinase-2 protein. As I indicated earlier, phosphorylation of the 6 o'clock domain results in conversion of phosphofructokinase-2 from a kinase into a phosphatase. The phosphatase produced from PFK-2 will act upon the pool of fructose 2,6-bisphosphate and convert that pool to fructose 6-phosphate. That is, the 2-phosphate will be lopped off of the fructose 2,6-bisphosphate molecule. By doing this biochemistry in the liver, we've done two things. First, we destroyed the fructose 2,6-bisphosphate pool. Hence, this molecule is no longer available to act as an allosteric stimulator of PFK-1. Secondly, removing fructose 2,6-bisphosphate from the pool has resulted in removal of the active-site inhibitor of fructose 1,6-bisphosphatase. This enzyme, therefore, becomes active and is able to push the net flux of carbon from right to left in the direction of gluconeogenesis, producing glucose that will then spill out into the blood. Please turn now to panel D. We just saw that the liver, post-stress, produces glucose. Now we're going to take a look at the muscle, which, of course, is going to be using that glucose. In response to the stress state, the muscle is going to activate an enzyme called the AMP-dependent protein kinase, or AMP kinase, also known as AMPK. AMPK activates metabolic pathways that generate ATP. The 3 o'clock phosphorylated PFK-2 proves to be an exceptionally competent enzyme for converting fructose 6-phosphate into fructose 2,6-bisphosphate. This increases further the concentration of fructose 2,6-bisphosphate in the muscle cytoplasmic pool. And keep in mind that fructose 2,6-bisphosphate itself is a powerful allosteric stimulator of glycolysis by its effect on PFK-1. So in the muscle cell, we're able to achieve extraordinarily high concentrations of fructose 2,6-bisphosphate, thus favoring glycolysis. Let's turn now to storyboard 39. Let's start with panel A. The way that AMP kinase stimulates glycolysis in the muscle post-stress is illustrated in this abbreviated metabolic pathway. Starting at the lower left, the stressor has created the cyclic AMP cascade that activates AMP kinase. AMP kinase phosphorylates the 3 o'clock domain on PFK-2, converting it to its phosphorylated derivative that shows powerful kinase activity to convert fructose 6-phosphate into fructose 2,6-bisphosphate. This enhanced pool of fructose 2,6-bisphosphate will now act as a powerful allosteric stimulator of PFK-1 to favor glycolysis. I want to point out that muscles do not do gluconeogenesis. Only the liver and renal cortex do this pathway. So muscles do not have fructose 1,6-bisphosphatase, the gluconeogenic enzyme. Accordingly, in muscles we don't have to worry about gluconeogenesis being turned on to some extent and thus dampening the glycolytic effect in the muscle that's needed in order for the muscle to generate the ATP needed to outrun your foe. 
On this panel, we also see that there are several other effectors that influence the activity of PFK-1. One is citrate. Another is ATP. Another one is AMP, or Adenosine Monophosphate. I want to say a couple of words about AMP in this regard. Let's look at panel B. AMP is not as powerful an allosteric effector on PFK-1 as is fructose 2,6-bisphosphate. But we know a lot about its activity as an allosteric effector because it's been studied quite thoroughly. PFK-1 is a tetramer, and its activity is regulated by cooperativity. This graph shows the activity of PFK-1 as a function of the concentration of one of its substrates, fructose 6-phosphate. When there's no ATP present, which is not a realistic situation, you see the regulatory pattern following that of a rectangular hyperbola. That's scenario A. Scenario B shows what would happen to the activity of the enzyme when ATP is present at a concentration of about 1 millimolar, which is quite realistic, and what you see is suppressed activity. When the heterotropic allosteric effector AMP is added, you see the curve shift to the left. That is, you get enhanced activity at a given substrate concentration. My guess is that fructose 2,6-bisphosphate would push the curve even further to the left. Let's now look at panel C. I think it's useful to use AMP to help us construct a model for what fructose 2,6-bisphosphate might be doing as an allosteric stimulator of the PFK-1 enzyme. The equilibrium diagram that I've drawn shows the relationship between the relaxed, that is, more active state of the PFK-1 tetramer, and the tense, that is, more inactive state of the enzyme. When ATP is abundant within a cell, it binds more strongly to the tense state, and hence, ATP inactivates PFK-1. This makes sense, because if ATP is abundant, you do not want to be doing glycolysis to make even more ATP. However, when the ATP pool of a cell is challenged by doing heavy work or biosynthesis, the AMP is going to bind more tightly to the R state of PFK-1. And that's going to help convert the enzyme into its more active state. That's a little snapshot of how a small molecule, in this case, AMP or perhaps fructose 2,6-bisphosphate can dramatically regulate the activity of an enzyme at a specific step in metabolism. |
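The behavior sketched in panel B can be imitated with a simple Hill-type model: sigmoidal activity versus fructose 6-phosphate when ATP favors the tense state, with the curve shifting left when an activator such as AMP (or, presumably even more strongly, fructose 2,6-bisphosphate) favors the relaxed state. All parameter values below are invented for illustration only; this is a sketch of the qualitative idea, not the measured kinetics of PFK-1.

```python
# Minimal Hill-equation sketch of PFK-1 activity; parameters are illustrative.
def pfk1_activity(f6p_mM, k_half_mM, hill_n=4.0, vmax=1.0):
    """Fractional PFK-1 activity vs. fructose 6-phosphate (Hill equation)."""
    return vmax * f6p_mM**hill_n / (k_half_mM**hill_n + f6p_mM**hill_n)

f6p = 0.5  # mM, an arbitrary test concentration

no_atp       = pfk1_activity(f6p, k_half_mM=0.3, hill_n=1.0)  # roughly hyperbolic
with_atp     = pfk1_activity(f6p, k_half_mM=1.0, hill_n=4.0)  # suppressed, sigmoidal
with_atp_amp = pfk1_activity(f6p, k_half_mM=0.4, hill_n=4.0)  # curve shifted left

print(f"no ATP: {no_atp:.2f}, +ATP: {with_atp:.2f}, +ATP+AMP: {with_atp_amp:.2f}")
```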
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_9_Problem_1_Catabolism_of_Triacylglycerols.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Greetings, and welcome to 5.07 Biochemistry online. I'm Dr. Bogdan Fedeles. Let's metabolize some problems. I have a good problem for you today. This is problem one, from problem set nine. It is a problem in which we're going to calculate how much energy we get from metabolizing a molecule of fat, more specifically, a molecule of triacylglycerol. Here is a structural representation of the triacylglycerol. Recognize the glycerol molecule in the middle here; it's holding together three fatty acids. Now notice, I picked a short-chain fatty acid, a long-chain fatty acid, and a fatty acid that actually has a double bond. Now when this molecule gets metabolized, it's going to be acted upon by an enzyme called a lipase. It's going to hydrolyze the molecule into its constituents. Obviously the lipase is going to use water molecules, and it's going to break it down into glycerol, shown here. Glycerol. Then, this fatty acid that has two, four, six carbons, a C6 fatty acid. This fatty acid has two, four, six, eight, 10, 12, 14, 16 carbons, a C16 fatty acid. And this one, it's an unsaturated fatty acid, has a double bond, and if you count the carbons it should add up to 15 carbons. So not only does it have a double bond, but it's also an odd-numbered fatty acid. In order to figure out how much energy we can get from this one molecule of fat, we will look at how much energy we get from each one of these constituents-- namely, the glycerol and the three fatty acids-- and calculate what is the maximum amount of ATP we can generate when we metabolize each one of these molecules completely to CO2 and water. In order to keep track of how much energy we get from each of the molecules, let's make a handy table right here. So we're going to put glycerol and the C6 fatty acid, and the C16 fatty acid, and C15 fatty acid. All right. And each one of these, we're going to follow the metabolism, and see how much ATP we're going to need to put in or generate. Also, the redox cofactors, NADH or FADH2. Also, for fatty acids, we're going to be dealing with beta-oxidation. And pretty much every single one of these molecules, when it's going to be burned completely, is going to first generate acetyl-CoA. All right. And here, we're going to tally up the total amount of ATP for each one of them. And then we're going to tally up for the entire molecule. So let's start with the glycerol molecule. Now, if you have watched problem set seven, you might recognize the following pathway. As we just discussed, the triacylglyceride can be hydrolyzed to form glycerol. And the glycerol, then, is first phosphorylated by glycerol kinase, and oxidized by glycerol phosphate dehydrogenase, to generate dihydroxyacetone phosphate, which can then enter glycolysis, and follow glycolysis all the way to pyruvate, and then acetyl-CoA. From there, acetyl-CoA can go into the TCA cycle. So let's tally up how much energy we can get from one molecule of glycerol. Let's take a look specifically at the steps where we are generating ATP, or generating redox cofactors, such as NADH or FADH2. First, we need to put in one ATP, at the glycerol kinase step.
But we're going to get back one ATP in the phosphoglycerate kinase step, and one ATP in the pyruvate kinase step. So the net ATP formation is one. Now in glycolysis, we're also going to generate one NADH, in the pyruvate dehydrogenase step another NADH, and the glycerol-3-phosphate dehydrogenase will also generate an NADH. Now, keep in mind, this NADH is going to be outside the mitochondria, so we're going to have to use a shuttle to bring it in. But we're considering that we're using an efficient shuttle, that gives us the full amount of energy for this NADH. So, once again, the total, it's going to be three molecules of NADH. So, going back to our table, we said the glycerol is going to give us a net one molecule of ATP, three molecules of NADH. There's going to be no FADH2, no beta-oxidation, and we're going to get one molecule of acetyl-CoA. As you've seen by this point many times, acetyl-CoA will enter the TCA cycle, where it's going to be completely oxidized to two CO2 molecules, and in the process is going to generate the equivalent of 12 ATPs. Let's take a look at where those are coming from. Here is a schematic of the TCA cycle. And one acetyl-CoA molecule comes in, and it's going to generate one, two, three molecules of NADH, one molecule of FADH2, and one molecule of GTP. Now, if we keep in mind that for every FADH2 we generate about two ATPs, and for every NADH we generate three ATPs, that's a total of: 3 times 3 is 9, plus 2 is 11, plus a GTP, which is equivalent to an ATP. That's about 12 molecules of ATP per molecule of acetyl-CoA that enters the TCA cycle. So now let's tally up how much energy we can get from one molecule of glycerol. So we know we get one ATP, three NADHs, now each NADH is going to give us three molecules of ATP. Now FADH2, we know these give two molecules of ATP. We don't have any beta-oxidation-- we're going to be talking more about fatty acids-- and acetyl-CoA we just talked about, we get 12 molecules of ATP per acetyl-CoA. So, the total here is 12 plus 9, plus 1, that's going to be 22 ATPs from one molecule of glycerol. Now let's talk about the fatty acids. In order to metabolize them, first we need to activate them into thioesters. These are going to be thioesters formed with coenzyme A, or CoA. Here's an overview of the activation process by which fatty acids become fatty acid thioesters. Of course, this is written for the C6 fatty acid, but would occur for any other fatty acid, regardless of the chain length. Now this process is catalyzed by fatty acyl-CoA synthetase, which uses ATP to first generate this mixed anhydride, with AMP. This process is called adenylation. So this activates the acid, which then reacts with coenzyme A shown here, HS-CoA, which will generate the thioester of the fatty acid. Now, here we are using one molecule of ATP, but we're breaking it at the alpha-phosphate to generate pyrophosphate, which is then broken down into two inorganic phosphates. And the energy released in this hydrolysis, catalyzed by inorganic pyrophosphatase, drives the reaction towards the right. Now, since we're generating AMP in the second step, we need a second molecule of ATP to convert this AMP back to ADP. So, overall, this process requires two molecules of ATP to generate one molecule of the fatty acid thioester. Once a fatty acid is activated into a thioester with coenzyme A, it can now undergo beta-oxidation. This is a set of reactions in which the fatty acid is broken down into a shorter fatty acid and one molecule of acetyl-CoA.
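Before moving into beta-oxidation, here is a quick check of the bookkeeping just described for one acetyl-CoA through the TCA cycle and for one molecule of glycerol, using the problem's conventions (NADH worth about 3 ATP, FADH2 about 2 ATP, GTP about 1 ATP). Names are illustrative.

```python
# Bookkeeping sketch using the problem's conversion conventions.
NADH_ATP, FADH2_ATP, GTP_ATP = 3, 2, 1

# One acetyl-CoA through the TCA cycle: 3 NADH + 1 FADH2 + 1 GTP
tca_per_acetyl = 3 * NADH_ATP + 1 * FADH2_ATP + 1 * GTP_ATP
print(tca_per_acetyl)   # 12

# Glycerol: net +1 ATP, 3 NADH (glycerol-3-phosphate dehydrogenase, GAPDH,
# pyruvate dehydrogenase), and 1 acetyl-CoA into the TCA cycle
glycerol_atp = 1 + 3 * NADH_ATP + 1 * tca_per_acetyl
print(glycerol_atp)     # 22
```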
The process then can repeat over and over, until the entire fatty acid is broken down into acetyl-CoA molecules. So let's take a look at how beta-oxidation works. Here is an overview of beta-oxidation pathway. We're starting with a fatty acyl-CoA thioester that has "n" carbons, and by the end of the process, we're going to get a thioester that has "n" minus 2 carbons, and the remaining two carbons are going to be in the form of acetyl-CoA. Now, this beta-oxidation involves four steps. In the first step, we're going to use a dehydrogenase to oxidize this single bond between the alpha and beta carbons, and make a trans double bond. So this is the alpha carbon, this is the beta carbon. So these fatty acyl-CoA dehydrogenases, there are actually several of them, and they have different specificities for short, medium, long, and very long chain fatty acyl-CoAs. But regardless, there will be some dehydrogenase to act on any length fatty acyl-CoA, and introduce this trans double bond. In the next step, we add one water molecule to generate a beta-hydroxyacyl-CoA, which is subsequently oxidized to generate a beta-ketoacyl-CoA. In this oxidation, we're going to use NAD, generating one molecule NADH. Finally, the thiolase, or beta-ketoacyl thiolase, is going to break down this bond between the alpha and beta carbons, in a reverse Claisen reaction, to generate one molecule of acetyl-CoA. And the remainder of the fatty acid is another thioester. So one round of beta-oxidation is going to generate one molecule of FADH2, and one molecule of NADH, as well as one molecule of acetyl-CoA. Now let's update our table with the information we just learned. So, as we just said, for every fatty acid we need to expend two molecules of ATP, to transform them into the thioesters. That's why I put here minus 2 for each one of the three fatty acids. We also learned that in beta-oxidation, we generate one molecule of FADH2, and one molecule of NADH. So, for every beta-oxidation, we generate the equivalent of five ATP. Now we're ready to calculate how much energy we can get from each of the three fatty acids in our problem. Now, for the C6 fatty acid that's what's represented here, we discussed, we're going to activate it, we're going to need to use two ATP molecules and coenzyme A. We're going to form the thioester. And then we're going to do beta-oxidation Now, for a fatty acid that has six carbons, we're going to do the beta-oxidation, and the molecule is going to cleave there, and we're going to do it one more time, the molecule is going to be cleaved there. So we're going to do two rounds of beta-oxidation. And each one of these two carbons is going to become an acetyl-CoA. So we're going to generate three acetyl-CoAs. Now, similarly, for the C16 fatty acid-- two, four, six, eight, 10, 12, 14, 16. All right, it's going to first activate two molecules of ATP and coenzyme A to form the thioester, with 16 carbons. And this will undergo beta-oxidation. And we're going to do it one, two, three, four, five, six, seven times. OK, so seven rounds of beta-oxidation is going to generate eight molecules of acetyl-CoA. Now let's go back and put in all this information into our table. So for the C6 fatty acid, we expended two molecules of ATP to activate it, and then we did two rounds of beta-oxidation, and we generated three molecules of acetyl-CoA. For the C16 fatty acid, again, we activated then we did seven rounds beta-oxidation, and we generated eight molecules of acetyl-CoA. 
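The counting just done for the C6 and C16 fatty acids generalizes: an even-numbered, saturated fatty acyl-CoA with n carbons goes through n/2 minus 1 rounds of beta-oxidation and yields n/2 acetyl-CoA, at a cost of 2 ATP for activation. The small sketch below, using the same conventions as before, anticipates the totals tallied in the next paragraph.

```python
# Even-chain, saturated fatty acid ATP yield under the problem's conventions
# (NADH ~ 3 ATP, FADH2 ~ 2 ATP, acetyl-CoA via the TCA cycle ~ 12 ATP).
def even_chain_fatty_acid_atp(n_carbons):
    assert n_carbons % 2 == 0 and n_carbons >= 4
    rounds = n_carbons // 2 - 1                 # each round releases one acetyl-CoA
    acetyl = n_carbons // 2
    atp = acetyl * 12 + rounds * (3 + 2) - 2    # TCA + (NADH + FADH2) per round - activation
    return rounds, acetyl, atp

print(even_chain_fatty_acid_atp(6))    # (2, 3, 44)
print(even_chain_fatty_acid_atp(16))   # (7, 8, 129)
```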
So for the C6 fatty acid, we have a grand total of: 3 times 12 is 36, plus 2 times 5 is 10, giving 46, minus 2, is 44 molecules of ATP. For the C16 fatty acid, well, 8 times 12 is 96, plus 35, minus 2, that's 129 molecules of ATP. Now the C15 fatty acid is going to be a little bit more complicated, because on one hand, it has a double bond, so we need to figure out how to deal with that. On the other hand, it's an odd chain fatty acid, and as you can imagine, the beta-oxidation breaks off two carbons at a time. So the last time we do beta-oxidation, we're going to be left with three carbons. That's called propionyl-CoA, and we'll have to figure out what to do with that. Just as with the other fatty acids, the C15 fatty acid is going to need to be activated into a thioester with CoA. So this is going to cost two ATP molecules, and, of course, we need to add the CoA. And now, this is the thioester of our C15 fatty acid. Now, since the double bond is pretty far away from the business-end of the molecule, we can do a number of rounds of beta-oxidation. In fact, we can do beta-oxidation once, twice. So two rounds of beta-oxidation is going to give us this molecule. Two rounds of beta-oxidation. In each one of these rounds we're going to generate one molecule of acetyl-CoA, so two acetyl-CoA. So we get this fatty acyl thioester, which contains a beta-gamma double bond. Now, it turns out there is an enzyme that can isomerize this double bond into an alpha-beta double bond. So this is what is going to happen next. The double bond moves from the beta-gamma to the alpha-beta. Now, this looks a lot like an intermediate in the beta-oxidation. Once again, this reaction is catalyzed by an isomerase. And this alpha-beta unsaturated thioester can continue in a manner similar to beta-oxidation. So, first it's going to add water to form a hydroxyl here at the beta position. Then that hydroxyl is getting oxidized to form a keto group. And the thiolase is going to generate acetyl-CoA, and another thioester shown here. So, from here, we're going to generate one molecule of NADH, and one molecule of acetyl-CoA. All right, now this is a thioester of a completely saturated fatty acid. Now of course, it's still odd-chained; we now have nine carbons. So we can do beta-oxidation actually three times. So, three rounds of beta-oxidation. And it's going to take us to three molecules of acetyl-CoA. And the last portion of the molecule is going to be this molecule, which we call propionyl-CoA; it's a three carbon thioester. So far we have generated two, then one more, and another three here, six molecules of acetyl-CoA. And we've done two, plus another three, five rounds of beta-oxidation. And we also generated an additional NADH molecule. So now let's update our table with this information, and then we're going to figure out what happens to propionyl-CoA. As we just discussed, the C15 fatty acid is going to get activated, so we need two ATPs there. Then it's going to undergo five rounds of beta-oxidation. And we generated a total of six molecules of acetyl-CoA, and one additional molecule of NADH. Let's now take a look at propionyl-CoA, and see how we metabolize it, and how much energy we can generate from it. It turns out the first step is to expand from a three carbon molecule to a four carbon molecule. This happens by adding one CO2. Of course, this process will require the expense of an ATP molecule. We generate this methylmalonyl-CoA, which is a branched four carbon chain molecule.
And another enzyme, a racemase, is going to interconvert this stereocenter from the S-configuration to the R-configuration. Finally, this methylmalonyl-CoA is going to undergo a rearrangement of the groups to generate a linear molecule, succinyl-CoA. Now, this is one of the most fascinating transformations in all of biochemistry, and it involves an enzyme called methylmalonyl-CoA mutase, which requires cobalamin, or the coenzyme derived from vitamin B12. This unusual transformation, catalyzed by methylmalonyl-CoA mutase, the enzyme that requires the vitamin B12 cofactor, is fascinating because it involves a carbon-skeleton rearrangement of the molecule. And this reaction occurs via a radical mechanism. The radical is obtained by breaking a carbon-metal bond. In this case, it's a carbon-cobalt bond. Now succinyl-CoA is a familiar molecule; you've encountered it in the TCA cycle. However, we cannot use the TCA cycle directly to completely metabolize succinyl-CoA, as all the intermediates in the TCA cycle are in fact present in catalytic amounts. So, we're going to use just part of the TCA cycle, to generate a molecule, malate, which then can be converted into pyruvate. And pyruvate can then generate acetyl-CoA to re-enter the TCA cycle and be completely metabolized. Here is the TCA cycle, to refresh your memory. And this is the succinyl-CoA that we can generate from the methylmalonyl-CoA mutase. Now, as we said, succinyl-CoA is going to be converted to malate. Malate can then escape the mitochondria and continue its transformation towards pyruvate. So, in this process, succinyl-CoA is going to generate one molecule of GTP to form succinate. And succinate to fumarate is going to give us one more molecule of FADH2. Then malate will escape the mitochondria. So to summarize what happens in the TCA cycle, succinyl-CoA is going to give us a molecule of GTP, and then one more molecule of FADH2. And it's going to make it to malate, and then malate is going to be converted to pyruvate using the malic enzyme. Now, this is an oxidation and decarboxylation that happens in one step. For the oxidation, we're going to need NADP+, instead of the usual NAD+. So we're going to generate one molecule of NADPH. Now for the purpose of this problem, we're going to treat NADPH as equivalent to NADH. So malate, via the malic enzyme, is going to form pyruvate. And then pyruvate, via pyruvate dehydrogenase, is going to lose one CO2 and form acetyl-CoA. In the process we'll also generate one more NADH, and now acetyl-CoA can re-enter the TCA cycle and be completely metabolized, generating in the process about 12 ATP equivalents. So now let's go back and update our table with all this information that we found out about propionyl-CoA. So as we said, propionyl-CoA required first the loss of another ATP to carboxylate it, to form the methylmalonyl-CoA. But then methylmalonyl-CoA converted to succinyl-CoA. Succinyl-CoA generated one molecule of GTP, so we're going to put plus 1 back here. Then we're going to get one molecule of FADH2 from going from succinate to fumarate. And then, we're going to generate two more molecules of NADH, one at the malic enzyme step, and one at the pyruvate dehydrogenase step, so we have plus 2 here. And, of course, in the end, we get one more molecule of acetyl-CoA as well. So now we have a total of seven molecules of acetyl-CoA, five rounds of beta-oxidation, one FADH2, three NADHs, and a net loss of two ATPs-- the GTP from succinyl-CoA offsets the ATP spent carboxylating propionyl-CoA. So this totals up to 118 ATPs for the C15 fatty acid.
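As a rough, self-contained check of that 118-ATP figure (the variable name is ours; the equivalences are the ones used throughout the lecture, with the GTP and the propionyl-CoA carboxylation ATP listed separately here):

```python
# Full tally for the C15 fatty acid under the lecture's ATP equivalences.
total_c15 = (
    7 * 12      # 7 acetyl-CoA (6 from the backbone + 1 from propionyl-CoA)
    + 5 * 5     # 5 full rounds of beta-oxidation (FADH2 + NADH each)
    + 3 * 3     # 3 extra NADH: isomerase round, malic enzyme, pyruvate DH
    + 1 * 2     # FADH2 from succinate -> fumarate
    + 1         # GTP from succinyl-CoA -> succinate
    - 2         # activation of the C15 fatty acid
    - 1         # ATP spent carboxylating propionyl-CoA
)
print(total_c15)  # 118
```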
Now we can tally the entire ATP yield of a molecule of fat, of this triacylglyceride, and that's going to be 313 molecules of ATP. Now that's a lot of energy from one single molecule of fat. Now, this problem has an additional question, and it asks us to contrast how much energy we get from a six-carbon fatty acid with how much energy we would get from one molecule of glucose, which also has six carbons. Let's take a look at our table here. The six-carbon fatty acid generates about 44 molecules of ATP when completely oxidized. By contrast, one molecule of glucose would generate only about 34 to 36 molecules of ATP. If you want to follow the same analysis that we did here-- remember, for glucose, well, we need to spend two molecules of ATP to activate it, but then we're going to generate four molecules of ATP going to pyruvate. And then there are going to be four molecules of NADH generated, two at the GAPDH step and two at the pyruvate dehydrogenase step, and finally, we're going to generate two molecules of acetyl-CoA. So the total is going to be: 2 times 12 is 24, plus 4 times 3 is 12, is 36, plus 2, is 38. So as you guys can see, the C6 fatty acid generates actually more ATP than one molecule of glucose. And that's probably reasonable, because the C6 fatty acid has a lot more C-H bonds. In other words, the carbons are more reduced. In glucose, we have a lot of hydroxyls, so the carbons are in a slightly higher oxidation state. Therefore, less energy is generated overall. Of course, one molecule of glucose pales in comparison with one molecule of fat, which has hundreds of ATP generated, as we calculated in part A of this problem. Well, that sums up this problem. I hope it helped you realize why fats, or triacylglycerides, are so much more energy dense than other nutrients, such as sugars or amino acids. |
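Here is the same kind of quick check for the glucose comparison (names are ours; the bookkeeping follows the tally above):

```python
# ATP yield of one glucose under the same equivalences as the fatty acids.
glucose_atp = (
    2 * 12      # 2 acetyl-CoA through the TCA cycle
    + 4 * 3     # 4 NADH: 2 at the GAPDH step, 2 at pyruvate dehydrogenase
    + 2         # net substrate-level ATP from glycolysis (4 made - 2 invested)
)
print(glucose_atp)        # 38
print(44 > glucose_atp)   # True: the C6 fatty acid beats glucose
```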
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_6_Problem_2_Mechanism_of_Phosphoglycerate_Mutase.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. DR. BOGDAN FEDELES: Hello, and welcome to 5.07 Biochemistry online. I'm Dr. Bogdan Fedeles. Let's metabolize some problems. Today we're discussing problem 2 of problem set 6. Here we're going to explore in more detail the mechanism of phosphoglycerate mutase, which is the eighth enzyme in glycolysis. It's the enzyme that catalyzes the conversion of 3-phosphoglycerate to 2-phosphoglycerate. Generally speaking, mutases are enzymes that catalyze the shift of a functional group between two similar positions of a molecule. In the case of phosphoglycerate mutase, this enzyme catalyzes the transfer of the phosphate group from the 3 position of glycerate to the 2 position of glycerate. In 5.07, you will encounter several mutases. Similar to phosphoglycerate mutase, there is a bisphosphoglycerate mutase, which converts 1,3-bisphosphoglycerate to 2,3-bisphosphoglycerate. Now, this reaction is very important when it happens in the red blood cells. Another mutase you will encounter is in the glycogen breakdown pathway. It's called phosphoglucomutase and converts glucose 1-phosphate to glucose 6-phosphate. Now finally, the most intriguing of them all is the methylmalonyl-coa mutase, which is a fascinating enzyme that converts methylmalonyl-coA to succinyl-coA. In this reaction, it rearranges this carbon skeleton of the molecule, and it requires adenosylcobalamin, which is a co-factor derived from vitamin B12. Back to phosphoglycerate mutase, this is a fascinating enzyme because it uses a phosphorylated histidine in the active site. And this is actually an example of a phosphorous-nitrogen bond, one of the very few available in biochemistry. Here is a schematic of the mechanism of phosphoglycerate mutase. Now, the reaction starts where the enzyme is already phosphorylated. We'll call it a phospho enzyme. And the histidine in the active site contains the phosphate group. Then the enzyme binds the substrate 3-phosphoglycerate. And then it's going to transfer this phosphate group onto the 2 position, the 2-hydroxyl of the 3-phosphoglycerate to generate the 2,3-bisphosphoglycerate. Then the phosphate at the 3 position is transferred to the histidine to generate the product of the reaction 2-phosphoglycerate and regenerate the phosphoenzyme. Note that the phosphate group is in fact not transferred. This phosphate group here is not the same that ends up on the 2 position. But rather, this phosphate group gets transferred to the enzyme. And the phosphate group from the enzyme ends up on the second position of glycerate. As we just mentioned, the active form of the enzyme has already the phosphate bound to histidine. Now the question is, how did this phosphate get there in the first place? Presumably, the enzyme is first synthesized in a form that we call apo form, which does not have the phosphate. And the phosphate is then added as a post-translational modification. Now, our problem suggests that one source for this phosphate is phosphoenolpyruvate. And there's some experimental evidence that phosphoenolpyruvate can transfer their phosphate and phosphorylate the histidine in this enzyme. 
Now, we are asked to comment on how reasonable this proposal is. We're going to evaluate the proposed transformation between phosphoenolpyruvate and phosphoglycerate mutase from two points of view. First of all, is this transformation thermodynamically accessible? And second, is this structurally feasible? We know that phosphoenolpyruvate contains a high-energy phosphate bond that can release a lot of energy upon hydrolysis. Now, if we look in our book-- this is the Voet & Voet, Third Edition-- it says phosphoenolpyruvate releases about 62 kilojoules per mole upon hydrolysis. Now this is significantly more than the energy released by the hydrolysis of ATP going to ADP, which is only about 31 kilojoules per mole. Now, this should not be surprising to you because PEP, phosphoenolpyruvate, is used in the last step of glycolysis, the pyruvate kinase step, to phosphorylate ADP and generate ATP. So the fact that it has a higher energy of hydrolysis just makes that transformation thermodynamically accessible. Let's now take a look at the arrow-pushing mechanism of how phosphoenolpyruvate can phosphorylate phosphoglycerate mutase. Here is the phosphoenolpyruvate molecule, and here is the histidine in the active site of the enzyme. Now, as you know, histidine has a pKa of about 6, so a significant fraction of the histidine will be protonated at physiological pH. However, for this reaction to work we need the histidine to act as a nucleophile to attack the phosphate. Therefore, we're going to consider it deprotonated. Now, the reaction starts by assuming there's a base in the active site that's going to deprotonate the histidine, and then the histidine is going to attack the phosphate. And finally, the enolpyruvate leaving group is going to depart with the assistance of a general acid. So these are the products that we obtain. This is the phosphoenzyme, with the histidine that now has the phosphate group attached. And this is the enol that is released from the phosphoenolpyruvate, which is the enol form of pyruvate. Now, if we evaluate the starting materials and the products in terms of their ability to stabilize negative charge, such as the charges on the phosphate, by resonance, we notice that there is no significant difference. Here we have two negative charges and a phosphorus-oxygen double bond, so the charge can delocalize onto this oxygen. We also have this carboxylate group, which we also have here. So there's not a lot of difference in resonance stabilization between starting materials and products so far. Therefore, this reaction is thermodynamically close to neutral. However, notice the enol form of pyruvate. Now this is in fact a very unstable product. And it likes to tautomerize-- basically isomerize under acid-base conditions-- to the keto form of pyruvate. The mechanism would be as follows: the base can deprotonate the enol, and then the general acid can protonate the CH2 group to generate the keto form of pyruvate. It turns out the delta G for this transformation is very negative. Delta G here is approximately minus 40 kilojoules per mole. So that means this reaction goes strongly to the right, and strongly favors the keto form. So that means that, overall, the transformation going from PEP and the histidine in the active site to the phosphoenzyme is going to be strongly driven to the right because of this keto-enol equilibrium. Therefore, the entire process shown here is expected to be thermodynamically very favorable. Now let's take a look at some structural considerations.
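To keep the thermodynamic bookkeeping straight, here is a minimal sketch using the numbers quoted above (treating the phosphoryl transfer to histidine as exactly thermoneutral is our simplification of "close to neutral"; all values in kJ/mol):

```python
# Rough free-energy bookkeeping for the proposed phosphorylation of the enzyme.
dG_pep_hydrolysis  = -62   # PEP -> pyruvate + Pi, for reference
dG_atp_hydrolysis  = -31   # ATP -> ADP + Pi, for comparison
dG_transfer_to_his = 0     # phosphoryl transfer to histidine, ~thermoneutral
dG_tautomerization = -40   # enolpyruvate -> pyruvate (keto form)

dG_overall = dG_transfer_to_his + dG_tautomerization
print(dG_overall)  # about -40: the coupled process is strongly favorable
```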
In order for PEP to phosphorylate the enzyme, it has to be able to reach the histidine that's deep in the active site. Notice that the 2-phosphoglycerate is one of the products or substrates of the enzyme. And therefore, it fits very nicely in the active site. Now phosphoenolpyruvate looks a lot like 2-phosphoglycerate. I'm going to sketch it here: we're going to have the phosphate there, and then there's the double bond in this position. So because phosphoenolpyruvate looks a lot like 2-phosphoglycerate, it should have no problem fitting inside the active site of the enzyme and reaching the active-site histidine. Therefore, the chemical reaction proposed in this problem is quite reasonable. First of all, the thermodynamics are excellent, because the hydrolysis of phosphoenolpyruvate gives a lot of energy. And then the sterics are also favorable, because PEP resembles 2-phosphoglycerate, one of the products of the enzyme. Part 2 of this problem asks us to evaluate the consequences, for the major function of glycolysis, of this reaction that we just discussed-- the reaction of phosphoenolpyruvate with phosphoglycerate mutase. Here is the second half of glycolysis, going from glyceraldehyde 3-phosphate, or GAP, all the way to pyruvate. As you know, the main function of glycolysis is to generate ATP. And for each molecule of glucose, we have a net generation of two molecules of ATP. Now, ATP is produced in two places. First, at the phosphoglycerate kinase step, where 1,3-bisphosphoglycerate phosphorylates ADP to generate ATP. And then at the pyruvate kinase step, where phosphoenolpyruvate phosphorylates ADP to generate ATP. Now, since we start glycolysis by investing some ATP-- we need two molecules of ATP to phosphorylate glucose-- we recover those two molecules of ATP at the phosphoglycerate kinase step. So all the net production of ATP that we get in glycolysis comes from the pyruvate kinase reaction shown here. Now, if phosphoenolpyruvate is used to phosphorylate phosphoglycerate mutase, basically, it's going to react to give the phosphate group here. It's going to generate pyruvate, but without generating ATP, right? So the phosphate group goes to this phosphoglycerate mutase, and it generates pyruvate, but we get no net production of ATP. Now if PEP is used to phosphorylate phosphoglycerate mutase, it's not going to be available for the pyruvate kinase step. But we do generate pyruvate, so the whole transformation reaches pyruvate, but without producing a net amount of ATP. Of course, this should not be a significant problem, as in glycolysis we only require the enzymes in catalytic amounts. So initially, we're not going to be generating a net amount of ATP until we phosphorylate the entire pool of phosphoglycerate mutase. After that, now that we have phosphorylated phosphoglycerate mutase, PEP is once again available for the pyruvate kinase reaction to generate ATP. I hope you noticed that there is a more subtle question here. If we're going to use PEP to phosphorylate phosphoglycerate mutase, how are we going to produce PEP in the first place, since we need phosphoglycerate mutase to go from 3-phosphoglycerate to 2-phosphoglycerate, which then produces PEP? Again, it's kind of like one of these chicken-and-egg problems. As you'll find out, many pathways feed into or intersect with glycolysis. And therefore, phosphoenolpyruvate could in principle be made in other ways.
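A small sketch of the ATP ledger described here may help (our naming; per molecule of glucose, assuming both trioses go down the pathway):

```python
# Net ATP per glucose in glycolysis, and the effect of diverting one PEP
# to phosphorylate phosphoglycerate mutase instead of ADP.
atp_invested  = 2   # hexokinase + phosphofructokinase-1
atp_pgk       = 2   # phosphoglycerate kinase (once per triose)
atp_pk_normal = 2   # pyruvate kinase (once per triose)

print(atp_pgk + atp_pk_normal - atp_invested)        # 2: the normal net yield
# If one PEP phosphorylates the mutase, it still becomes pyruvate,
# but one pyruvate-kinase ATP is lost for that pass:
print(atp_pgk + (atp_pk_normal - 1) - atp_invested)  # 1: reduced net yield
```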
For example, in gluconeogenesis you'll see that pyruvate can lead to oxaloacetate, which then can lead to phosphoenolpyruvate using this enzyme called PEP carboxykinase, or PEPCK. So there are ways to produce phosphoenolpyruvate, which can then, say, phosphorylate phosphoglycerate mutase, which then allows glycolysis to flow through in the normal way. Well, that's it for this problem. I hope you found it pretty intriguing how phosphoglycerate mutase works. Now remember, this is one of the very few enzymes in biochemistry that utilizes a phosphorylated histidine. And this is one of the few examples of a phosphorus-nitrogen bond we have in biochemistry. Also remember, phosphoenolpyruvate is the highest-energy phosphate compound we have in the body. But all that hydrolysis energy really comes from the keto-enol tautomerization equilibrium of the pyruvate that gets released upon hydrolysis. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Carbohydrate_Biosynthesis_I_Glycogen_Synthesis.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: In this session, we're going to be looking at carbohydrate biosynthesis. Please look at Storyboard 29. Looking at Panel A, the next set of pathways on which we're going to focus concerns carbohydrate biosynthesis. First, we're going to look at the pathway by which glycogen is made. As you know, glycogen is the polymeric form of glucose that's very readily available for energy production. The second pathway is gluconeogenesis. The organs that utilize glucose as their metabolic fuel prefer to have glucose at a concentration of about 100 milligrams per deciliter in the blood. The challenge comes from the fact that we eat only sporadically, and thus, levels of glucose will go up and down depending upon the time that has passed since our last meal. When glucose levels drop, gluconeogenesis is the pathway that's activated. It takes non-carbohydrate precursors, converts them to glucose, and then secretes the glucose into the blood, ultimately to help maintain a constant glucose concentration. Looking at Panel B, our first topic is the synthesis of glycogen. If you look back at my first two lectures, I describe the structure of glycogen, which is also shown here. The piece of glycogen that I've shown consists of a linear chain of glucose molecules connected together through the 1 and 4 carbons. Chemically, we call this an alpha 1, 4 linkage. To the left of the glycogen molecule is the non-reducing end and to the right is the reducing end, which is connected by a tyrosine residue to a protein called glycogenin. Glycogenin is a variant of the glycogen synthase enzyme we'll talk about later. Glycogenin has the property that it can synthesize a polymeric glucose chain, such as the one shown, to boot up the synthesis of glycogen. In effect, glycogenin forms a primer molecule such as the one shown that provides a non-reducing end that can be extended by its sister enzyme, glycogen synthase. While I've drawn a linear glucose, I want you to keep in mind, the branches off of the six carbon of the glucoses in the chain are possible. As I mentioned earlier, glycogen is designed for fast breakdown. Its glucose units will quickly enter the pathway of glycolysis and generate ATPs in a manner of seconds. Let's now look at Panel C. It's useful to review the biochemical steps by which glycogen is broken down because the exact same intermediates appear in the reverse order in the synthesis of glycogen. The key enzyme for glycogen breakdown, or glycogenolysis, is glycogen phosphorylase. Glycogen phosphorylase progressively will nip off the non-reducing end-- that is the left-most sugar as shown-- producing oxonium ion intermediate. And as we saw in my first lecture, glycogen phosphorylase stereospecifically adds inorganic phosphate to the bottom face-- that is the alpha face of the oxonium ion-- to give glucose 1-phosphate with this stereochemistry shown. Once again, these same intermediates are going to appear in the synthesis of glycogen. Now let's take a look at the way that glucose 1-phosphate interfaces with the other biosynthetic and catabolic pathways. Let's look at Panel D. 
This metabolic map shows glycogen in the lower right-hand corner. We can see how glycogenolysis results in glycogen breakdown to glucose 1-phosphate, and in the reverse direction, we can see how glucose 1-phosphate can be used to make glycogen by way of two enzymes-- UDP Glucose Pyrophosphorylase, UGP, and Glycogen Synthase, GS. Let's look a little deeper at other pathways with which glucose 1-phosphate interfaces. This schematic shows that glucose 1-phosphate can be converted to glucose 6-phosphate by phosphoglucomutase, PGM. In this case, the phosphate is moved from the 1 carbon to the 6 carbon of the glucose moiety. The opposite reaction also occurs. That is, if you have glucose 6-phosphate, phosphoglucomutase will convert it to glucose 1-phosphate. Glucose 6-phosphate is an intermediate in glycolysis, gluconeogenesis-- which is the next pathway we're going to look at-- and another future pathway, the pentose phosphate pathway. All of this shows us that glucose 6-phosphate is a crossroads, and one of the branches that leads from it is by way of glucose 1-phosphate. Let's imagine a scenario in which we eat a meal. Glucose appears in the blood as shown. It is taken into the cell. The enzymes hexokinase or glucokinase will phosphorylate the glucose into glucose 6-phosphate. If the glucose is not needed for glycolysis or the other pathways that I mentioned, phosphoglucomutase will convert the glucose 6-phosphate to glucose 1-phosphate, and then in the pathway of glycogen synthesis, glucose 1-phosphate will be polymerized into glycogen for energy storage. At this point, let's turn to Storyboard 30 and look at Panel A. Now let's take a look at the detailed pathway by which glucose 1-phosphate is converted to glycogen. At the left is the structure of glucose 1-phosphate. The first enzyme involved is UDP Glucose Pyrophosphorylase, or UGP. The second substrate in this reaction is UTP, uridine triphosphate. The UDP Glucose Pyrophosphorylase catalyzes attack by the phosphate on the 1 carbon of glucose 1-phosphate on the alpha phosphorus of UTP. The two products are pyrophosphate and UDP glucose. The reaction is made thermodynamically irreversible by hydrolysis of pyrophosphate by inorganic pyrophosphatase into two molecules of inorganic phosphate. The UDP glucose is the substrate for the next enzyme in the sequence, glycogen synthase. I'm going to divide the glycogen synthase reaction into two parts. In the first, we see cleavage of the bond between the 1 carbon of the glucose and the beta phosphate of the UDP. This reaction liberates the UDP and generates the oxonium ion shown in the brackets. Structurally, this oxonium ion is the same intermediate we saw in glycogen breakdown, but here, we're using it as a biosynthetic reagent. Let's look at Panel B. In this panel, we continue the glycogen synthase reaction. In Panel A, we generated the oxonium ion of glucose, which is shown here, in Panel B, in the lower left. Glycogen synthase activates the hydroxyl group on the 4 carbon of the terminal sugar residue. That is the sugar on the non-reducing end. The oxygen on the non-reducing sugar residue of glycogen then attacks the bottom face of the oxonium ion to give rise to a glycogen unit that has been extended by one glucose residue. I have put an asterisk on the 6 carbon of the glucose residue that's been added to the growing glycogen chain. |
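To summarize the route from dietary glucose into glycogen, here is a compact sketch (the list and its wording are ours, distilled from the steps just described):

```python
# Glycogen synthesis from glucose 6-phosphate, as enzyme -> reaction pairs.
glycogen_synthesis_steps = [
    ("phosphoglucomutase (PGM)",            "glucose 6-phosphate <-> glucose 1-phosphate"),
    ("UDP-glucose pyrophosphorylase (UGP)", "glucose 1-phosphate + UTP -> UDP-glucose + PPi"),
    ("inorganic pyrophosphatase",           "PPi -> 2 Pi (renders the step irreversible)"),
    ("glycogen synthase (GS)",              "UDP-glucose + glycogen(n) -> glycogen(n+1) + UDP"),
]
for enzyme, reaction in glycogen_synthesis_steps:
    print(f"{enzyme}: {reaction}")
```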
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Respiration_TCA_Cycle.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN ESSIGMANN: We're still on storyboard 7. We're on panel C. Panel C is where I introduced the TCA cycle. As I mentioned earlier, respiration consists of three stages. The first is the pyruvate dehydrogenase reaction, which we just covered, taking pyruvate to acetyl-CoA. The second is the TCA cycle, or tricarboxylic acid cycle taking that acetyl-CoA and basically oxidizing it in order to generate CO2 but also to produce more reducing equivalents in the form of mobile electron carriers. And then the third step of respiration is taking those reducing equivalents to the mitochondrial inner membrane, where the molecules containing those reducing equivalents are oxidized. And then, the electrons from that oxidation reaction are used to power proton pumps that ultimately will generate the proton gradient that can be used for generation of ATP or for generation of movement or for other things. The tricarboxylic acid cycle, which is sometimes called the Krebs cycle, takes acetyl-CoA from several different sources. One of those sources is glycolysis to pyruvate and pyruvate to acetyl-CoA, as we've just seen. And the second source of acetyl-CoA is from fatty acid oxidation. We'll come to that pathway somewhat later. Again, looking at panel C, the TCA cycle starts with the reaction of acetyl coenzyme A, a 2-carbon compound with oxaloacetate, a 4-carbon compound, to form the 6-carbon product citrate. Citrate will lose 2 carbons as carbon dioxide, and in the process, there'll be a series of oxidation steps that generate three NADHes one FADH2, and one either GTP or ATP by substrate-level phosphorylation. In terms of the banking system of the cell, the NADHes that are generated in the mitochondrion are exchangeable for about three ATPs. FADH2 is exchangeable for about two ATPs. So if you look up the total number of nucleotide triphosphate, or NTP, equivalents that can be produced in the TCA cycle, you'll get about 12 ATPs for each 2-carbon unit of acetyl-CoA that's oxidized. As a detailed point, I want to mention that the two carbons that entered the TCA cycle as acetyl-CoA are not exactly the same two carbons that come out as CO2 in that cycle. The carbon dioxides from the input acetyl-CoA will emerge in later turns of the TCA cycle. Now we're going to look at panel D. In panel D, we start looking at the details of the TCA cycle. JoAnne explained why nature uses thioesters. The sulfur allows enolization stabilization of a carbanion at carbon 2, the carbon that's distal to the coenzyme A functionality. The carbanion is then able to attack the number 2 carbon, carbonyl, of oxaloacetate in the reaction catalyzed by citrate synthase. An intermediate is formed, citroyl coenzyme A, which loses its coenzyme A moiety by hydrolysis in a very thermodynamically irreversible step, resulting in the product citrate. This step is at the top of the pathway, and as is usually the case, this highly exothermic step makes the pathway, overall, irreversible. Chemically, citrate synthase does a mixed aldol-Claisen ester condensation. 
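The roughly 12 ATP equivalents per acetyl-CoA quoted here come from a simple tally-- NADH worth about 3 ATP, FADH2 about 2, plus one GTP or ATP by substrate-level phosphorylation. A one-line sketch:

```python
# ATP equivalents per acetyl-CoA oxidized by the TCA cycle (lecture's values).
atp_per_acetyl_coa = 3 * 3 + 1 * 2 + 1   # 3 NADH + 1 FADH2 + 1 GTP/ATP
print(atp_per_acetyl_coa)                # 12
```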
The product of the citrate synthase reaction, citrate, is a tertiary alcohol, and tertiary alcohols are relatively difficult to oxidize. The next enzyme in the pathway, aconitase, which is shown in storyboard 8, panel A, removes a water molecule and then adds a different water molecule back to rearrange the hydroxyl group, making a secondary alcohol, which is much easier to oxidize. Looking again at panel A, the hydroxyl group of high isocitrate is oxidized to a ketone, with the transfer of hydride to NAD+ to make NADH. This reaction is catalyzed by the enzyme isocitrate dehydrogenase, ICDH. The intermediate in this reaction is oxalosuccinate, which is a beta-keto acid. As we know from what JoAnne taught us, beta-keto acids are prone to spontaneous decarboxylation. So the second half of the isocitrate dehydrogenase reaction involves the loss of carbon dioxide in what's usually considered to be an irreversible step. I do want to point out, however, that under certain circumstances, you can re-add the carbon dioxide in order to make the reaction go in the other direction. After the ICDH reactions, which generated NADH and resulted in loss of CO2, the product is alpha-ketoglutarate, a 5-carbon keto acid. Take a look at the structure of alpha-ketoglutarate in the upper right-hand portion of storyboard 8, panel A. If you hold your finger over the top 2 carbons of alpha-ketoglutarate, you'll notice that the residue is pyruvate. So pyruvate plus an acetyl functionality equals, effectively, alpha-ketoglutarate. Now take a look back at the mechanism by which pyruvate is oxidized by pyruvate dehydrogenase. It's going to be a very similar mechanism for the oxidation of alpha-ketoglutarate. The product of the alpha-ketoglutarate dehydrogenase reaction is succinyl-CoA. Again, if you look at the structure of this molecule, succinyl-CoA, it's actually an acetyl-CoA with an acetyl group put onto one end. Now let's go back and look at the alpha-ketoglutarate from a different perspective. I want to make a point here in that alpha-ketoglutarate is an alpha-keto acid, and if we were to replace its keto group with an amino group, you'd convert this keto acid into the amino acid glutamic acid. As with most enzymatic reactions involving such nitrogen functionalities, a pyridoxal pyridoxamine phosphate cofactor will be needed to interconvert the alpha-ketoglutarate and glutamic acid. So glutamic acid can serve as a source of alpha-ketoglutarate if the cell is starved for TCA cycle intermediates. Alternatively, alpha-ketoglutarate can be a source for glutamic acid when a cell may need amino acids for protein biosynthesis. Now, let's turn to panel B, where we'll start with succinyl coenzyme A. Looking at the molecule of succinyl-CoA, note that I've labeled each of the atoms with a symbol. The triangle and square were from the original acetyl-CoA molecule that came in at the top of the pathway. The enzyme that processes succinyl-CoA is succinyl coenzyme A synthetase. As JoAnne taught us, synthetases are enzymes that typically need a nucleotide. In this case, the nucleotide involved is GDP in mammalian systems or ADP in bacterial systems. In the traditional clockwise direction of the TCA cycle, they are phosphorylated to form GTP or ATP, respectively. The molecule that you get after the phosphorylation reaction in the hydrolysis of the coenzyme A is succinate, a 4-carbon compound, which is also a dicarboxylic acid. This molecule is perfectly symmetrical and can tumble in three-dimensional space. 
The next enzyme in the pathway, succinate dehydrogenase, cannot distinguish one arm of its substrate, succinate, from the other. So if there were a radio label in the acetyl-CoA at the beginning of the pathway, that radio label would become scrambled at this point, uniformly distributed among the two carbons of the two arms. From this point onward in the TCA cycle, you will note that the label denoted as the triangle and box is scrambled, as indicated by triangle divided by 2 or box divided by 2. The next enzyme in the pathway is succinate dehydrogenase. This is the only membrane-bound enzyme in the TCA cycle. It is a dehydrogenase, and it uses flavin as a cofactor to help remove electrons from the succinate substrate. Flavin picks up electrons from succinate, converting FAD to FADH2 in the mitochondrial membrane. The product of the reaction is the alkene fumarate. The enzyme fumarase adds water to fumarate to form the alcohol product malate. The hydroxyl group of the alcohol malate is primed for oxidation by the next enzyme in the pathway, malate dehydrogenase, or MDH. MDH oxidizes malate, which is an alcohol, to a ketone. The ketone product is oxaloacetate. The hydride removed from malate is transferred to NAD+ to form NADH. As I mentioned, the product of the overall reaction is oxaloacetate, and its ketone functionality is now primed for attack by the next molecule of acetyl-CoA entering the TCA cycle. As a final point, I've mentioned several times that oxaloacetate is present at a very low concentration, only in the micromolar range, inside the mitochondrion of a mammalian cell. So it's always at rather limiting concentration. The cell has to work very hard to preserve enough of the oxaloacetate to enable the next cycle of the TCA cycle, that is the acquisition of the next acetyl coenzyme A group. One of the ways that the cell can generate oxaloacetate is by deamination, or transamination of aspartic acid to the keto acid oxaloacetate. We typically have plenty of aspartic acid and this PLP-mediated reaction helps to maintain a critical level of oxaloacetate. I want to return to storyboard 7 to make some comments about the importance of prochirality in some enzymatic reactions. This short interlude will help explain how to track a radio label in a TCA cycle intermediate as that intermediate progresses through the TCA cycle. As you will note, at first glance, the label does some unexpected things. But at the end of the day, the fact that the label does surprising things helped early biochemists figure out mechanistically how several enzymes work in concert during the linear steps of a pathway. We're going to look at storyboards 7, 8, and 9, starting with the chemical reaction at the bottom right of storyboard 7, panel D. This is the chemical reaction that's catalyzed by citrate synthase. You'll notice that I've highlighted the two carbons with either a triangle or a box. As we have seen, the nucleophile on acetyl-CoA attacks the electropositive carbon of the carbonyl functionality of oxaloacetate. The carbonyl functionality is a flat sp2 hybridized center. So if this were a typical organic chemical reaction, the electrophile could come in from either the top or the bottom, and you'd get two different stereochemistries in the product. That is the citrate at the very bottom would have equally labeled acetyl arms at the top and bottom of the molecule as I've drawn it. 
You would have a delta divided by 2 for the blue methylene group, and a box or square divided by 2 for the red carboxylate group in each of the two acetyl arms. Again, these arms are at the bottom and top of the molecule as drawn in the lower right of panel D. You'll note, however, that only the top acetyl group of citrate has the labels. Initially, the observation that only the top arm acquired label was a puzzle to early biochemists. One way to think about the citrate synthase reaction is to think about the oxaloacetate laying on the surface of the citrate synthase enzyme. Now imagine that the enzyme precludes, or blocks, access to the carbonyl from the bottom and allows access only to the top, giving rise to only one stereochemical outcome, the one that I've shown in the citrate to the right. Now move ahead to the storyboard number 9, panel B. This panel shows a more cartoon-like representation of the molecule of citrate. You can see the acetyl arm on the top, the pro-S arm, as having the labels. And the pro-R arm, the one that came from oxaloacetate, at the bottom, is label free. So while the pro-R and pro-S arms are chemically identical, they're going to be handled by the next enzyme in the series, aconitase, as being chemically different from one another. All biochemistry is going to be occurring on the pro-R arm, that is the arm that came in from oxaloacetate and not the arm that came in for acid 2 with acetyl coenzyme A. In the bottom right of panel B, I've sketched out an imaginary active site for the aconitase enzyme. I show a base picking up a proton from the pro-R arm of the citrate molecule. And you can see the elimination of the water molecule from the 3 carbon. So despite the fact that the citrate molecule is chemically symmetrical, aconitase, the next enzyme in the reaction series, is able to distinguish between the pro-R and the pro-S arms. At this point, I want you to look back at storyboard 8, panel A. Look once again at the citrate that is at the upper left-hand corner of storyboard number 8, and let's imagine that the aconitase chemistry has happened on the pro-S arm, that is the one in the box. Keep in mind that these experiments have shown that this does not happen. This is just a hypothetical scenario. In this hypothetical case, the hydroxyl group would end up on the number 2 carbon, the one with the blue triangle. You have to draw it out, but if you traced this molecule, in which the chemistry happened on the pro-S arm, all the way around to alpha-ketoglutarate, you would find out that the alpha-ketoglutarate dehydrogenase would liberate CO2 from the carboxylate that has the red box on it. This is in contrast to the molecule succinate that appears on storyboard 8, panel B. Succinate is another symmetrical molecule, but for some reason, succinate dehydrogenase cannot distinguish between the two acetyl arms. Because the enzyme is unable to distinguish the two arms, in this case, unlike that of aconitase, the radio label would become scrambled. The reason that I'm belaboring this point is because one of the ways that biochemists work out the chemical reactions involved in a biochemical pathway is by putting in some kind of labeled molecule. It could be a radio label or a heavy isotope. And then they trace the position of the label in the molecule as you move from molecule to molecule along the pathway. So label tracer studies are ones that are absolutely central to all of biochemistry. 
And as I mentioned earlier, you'll get lots of experience in the problem sets in using labels to work out the details of a pathway. Before leaving the TCA cycle, there are a couple of big picture points that I want to make. You put in two carbons as acetyl-CoA, deposit them into oxaloacetate to form citrate, a 6-carbon compound, and then in the cycle, you lose 2 carbons as carbon dioxide. That means that there's no loss or gain of carbon in this cycle. If you had just one molecule of oxaloacetate, you'd be able to complete the TCA cycle. What happens, however, when the cell is in an, let's say, "energy needed" situation, where it needs to buff up this cycle, that is increase the number of molecules cycling to be able to accommodate the processing of more and more molecules of acetyl-CoA? Those molecules of acetyl-CoA might flood in from carbohydrate metabolism or, as we'll see later, from catabolism of lipids. Let's look at panel A of storyboard 11. As shown in this figure, one way to accomplish increasing the carbon content of the TCA cycle is to take an amino acid, such as glutamate, and remove its amino group to form alpha-ketoglutarate. Note that if you increase the concentration of any one molecule in the TCA cycle, for example, alpha-ketoglutarate, you're effectively increasing the concentrations of all molecules in the cycle. Because it's a cycle, many of the molecules are in equilibrium with one another. Going clockwise around the TCA cycle from alpha-ketoglutarate, we come to succinyl-CoA. Succinyl-CoA is an entry point into the TCA cycle from odd-chain fatty acids in certain amino acids, such as methionine. So these molecules can give rise to succinyl-CoA that itself will then increase the concentration of all molecules in the TCA cycle. Our third primary input point is at oxaloacetate. It involves aspartic acid being deaminated into oxaloacetate. This is a very common way to increase the amount of oxaloacetate available for the TCA cycle. This reaction can dramatically increase the rate of processing of molecules by the TCA cycle. Finally, there's an enzyme we'll look at later called pyruvate carboxylase, or PC, which can take pyruvate in the mitochondrion, add CO2 to it, and form oxaloacetate. We're going to come to this enzyme later when we talk about carboxylase enzymes as a class. This is an important enzyme in the pathway of gluconeogenesis. So overall, I think you can see that there are several ways that a cell can increase the overall concentration of the intermediates of the TCA cycle. Increasing any one intermediate increases all of them. And that will increase the rate by which acetyl-CoA molecules can be processed ultimately to generate energy. The general word describing this buffing up of the TCA cycle is anapleurosis, which comes from the Greek word "filling up." |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Glycolysis_and_Early_Stages_of_Respiration.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let's look at storyboard two. We're going to look in more detail at carbohydrate catabolism at this point. We're eventually going to be doing the pathway of glycolysis, but we have to get there first. And it depends on what precursors are available to enter the pathway of glycolysis. One option that we'll look at a little bit later is to take in glucose from the blood by way of a glucose carrier, and then phosphorylate the glucose using either hexokinase or glucokinase. The second option is to take glycogen, the polymeric storage form of glucose, and degrade it to glucose 1-phosphate, which will then be converted to glucose 6-phosphate, which will enter the pathway of glycolysis. We're going to start with this pathway of glycogen breakdown, or glycogenolysis. Panel A shows the structure of glycogen, which consists of glucose monomeric units connected by a bond between the 1 and 4 carbons in the alpha configuration. Chemistry is going to be happening at the non-reducing end, which is to the left. The reducing end, which is to the right, is connected to a scaffolding protein called glycogenin. We'll come back to glycogenin later in the course. Let's look next at panel B. The reaction begins with the protonation of the red oxygen between the terminal two glucose moieties by a proton on the glycogen phosphorylase enzyme. The intermediate product is a resonance-stabilized pair of positively charged species, or cations: an oxonium ion at the ring oxygen, and a carbocation at the 1-carbon. The electrophilic carbocation is attacked by the negatively charged phosphate residue, which is non-covalently associated with glycogen phosphorylase, to form glucose 1-phosphate. Note that the glucose 1-phosphate that forms has the covalently attached phosphate on the alpha, or bottom, face as it's drawn in the figure. The structure of glycogen phosphorylase allows attack of its phosphate from the bottom, giving rise to the alpha isomer only of glucose 1-phosphate. The structure of the phosphorylase precludes access of its phosphate to the top face of the sugar molecule, so you get only one stereoisomer of glucose 1-phosphate out of this reaction. The other product shown at the bottom of panel C is the glycogen chain, which is truncated or shortened by one glucose unit. Now looking at the big picture, the epinephrine molecule produced as part of the stress response interacted with the cell membrane of the muscle cell. And that interaction ultimately activated glycogen phosphorylase to enable it to degrade glycogen, the storage form of glucose, and liberate glucose 1-phosphate, which is going to then find its way into glycolysis to generate fast energy to enable the student to be able to stand up in class and avoid the stressful situation. I'm not going to go through all of the details of the chemical reactions of glycolysis. You can find those in the book, which does a good job of presenting those details. In the first step of glycolysis, we see the phosphorylated hexose, glucose 6-phosphate, being converted by phosphoglucoisomerase into the furanose fructose 6-phosphate.
The next step involves the phosphorylation of the fructose 6-phosphate by phosphofructokinase-1. ATP is the phosphate donor, and you form fructose 1,6-bisphosphate as the product. You'll notice that phosphofructokinase catalyzes a one-way, thermodynamically irreversible reaction. This step, like the hexokinase step we talked about earlier, is a site of regulation of the pathway. The word glycolysis comes from the Greek for "sugar splitting," and we're now at the step where the splitting of the sugar from one 6-carbon compound into two 3-carbon compounds occurs. The enzyme that splits the sugar in half is called aldolase. And, again, I advise you to take a look at the book to see the details of the chemical reaction. Briefly, the enzyme is going to form a protonated Schiff base at the number two carbon of the fructose 1,6-bisphosphate. That protonated Schiff base is going to draw electrons all the way over from the hydroxyl group on carbon 4 of fructose 1,6-bisphosphate. That movement of electrons is going to result in cleavage of the molecule into two parts at the broken line shown in the figure. Carbons 1, 2, and 3 are going to form DHAP, or dihydroxyacetone phosphate, and carbons 4, 5, and 6 are going to form GAP, or glyceraldehyde 3-phosphate. GAP and DHAP are the products of the aldolase reaction. As an aside, this is also a branch point in the pathway. Dihydroxyacetone phosphate is an opportunity for the cell to make the glycerol backbones of lipids that we'll come to later. So we see here that glycolysis is indeed a resource that can be used for things other than energy generation. Now let's look at panel B. Imagine that a cell is committed to make as much energy as it can. For example, the stand-up, sit-down scenario we talked about earlier. In that case, the enzyme TIM, or triosephosphate isomerase, is going to interconvert very quickly dihydroxyacetone phosphate and glyceraldehyde 3-phosphate, and glyceraldehyde 3-phosphate is going to be the molecule that progresses along the glycolysis pathway. In the next step of glycolysis, glyceraldehyde 3-phosphate will be oxidized by glyceraldehyde 3-phosphate dehydrogenase, or GAPDH. Again, take a look at the mechanism in the book. In brief, it involves attack by a thiol residue of a GAPDH cysteine on the aldehydic carbon of glyceraldehyde 3-phosphate. The product is a thiohemiacetal, which is then oxidized. Its electrons are transferred as hydride to NAD+ on the enzyme to form NADH. This is the only oxidation step in the pathway of glycolysis. The oxidation of the thiohemiacetal produces a thioester, and that thioester is a very high-energy compound. The thioester is attacked by inorganic phosphate to form an acyl phosphate, another high-energy compound, which is called 1,3-bisphosphoglycerate. The acyl phosphate of 1,3-bisphosphoglycerate then phosphorylates ADP to form ATP in the first ATP-forming step of glycolysis. Keep in mind that two molecules of GAP have been formed in the upstream part of the pathway. So for each molecule of glucose, you're getting two molecules of ATP at this step. The enzyme that does this phosphorylation is phosphoglycerate kinase. After 1,3-bisphosphoglycerate has lost its terminal phosphate from the 1-carbon, it forms the acid 3-phosphoglycerate. As a reminder, mutases are enzymes that move a functional group from one atom to another on the same molecule.
The next enzyme in the pathway is phosphoglycerate mutase, which in effect moves the phosphate from the 3 to the 2-carbon, forming 2-phosphoglycerate, which is the next intermediate in the glycolysis pathway. Although it's easy to say that the phosphate is quote unquote "moved," that is somewhat inaccurate. If you look at the details of the step in the book, you'll see that the reaction starts with the transfer of a phosphate from the mutase protein to the substrate 3-phosphoglycerate, forming an intermediate bis-phosphorylated product, 2,3-bisphosphoglycerate. As an aside, this is the same powerful allosteric effector that JoAnne described when she taught us about how small molecules can dramatically reduce the affinity of hemoglobin for oxygen. In the case of the mutase, however, the enzyme will take the phosphate off of the 3-hydroxyl of 2,3-bisphosphoglycerate, and the enzyme will re-phosphorylate itself. So the final product of the phosphoglycerate mutase reaction is 2-phosphoglycerate. Now let's turn to storyboard four. In panel A, we see the enzyme enolase, which removes the hydrogen from the 2-carbon of 2-phosphoglycerate. This is not an easy task. The pKa of that hydrogen is about 30. Nevertheless, the reaction does occur and liberates water. The product is phosphoenolpyruvate, usually abbreviated PEP, a very high-energy compound. We're now almost at the end of the pathway of glycolysis. And as I mentioned earlier, one usually looks for highly exergonic steps near the beginnings or ends of pathways to see where the pathway is regulated. Pyruvate kinase, or PK, the last step in the pathway, is such a regulation point. The pyruvate kinase reaction occurs in two steps. In the first step, phosphoenolpyruvate phosphorylates ADP to form ATP. The enol product then undergoes enol-keto tautomerization, yielding the ketone pyruvate. And pyruvate is the end of the pathway. Let's turn now to storyboard five, panel A. Let's take a look at this pathway from a higher altitude. First, as I just mentioned, the pathway is regulated at the top and bottom, specifically at the hexokinase step, the glycogen phosphorylase step, and the pyruvate kinase step. It's also regulated in the middle, specifically at the phosphofructokinase-1 step. Regulation can be allosteric, which we shall see is the case with PFK. It also can be covalent. We saw that this is the case with glycogen phosphorylase, or GP. In the last lecture, we'll take a look at the regulation of these enzymes in great detail. As a second issue, let's now look at the pathway as drawn in summary form in panel B of this storyboard. We start with a single molecule of the 6-carbon sugar glucose. Hexokinase or glucokinase will utilize one ATP to form a phosphorylated intermediate. Phosphofructokinase-1 will use a second ATP to form a doubly phosphorylated hexose, fructose 1,6-bisphosphate. The bis-phosphorylated hexose will split into two trioses, glyceraldehyde 3-phosphate and dihydroxyacetone phosphate. These are interconvertible. The chemical species glyceraldehyde 3-phosphate is subjected to oxidation. Because we get two molecules of GAP per molecule of glucose, GAP oxidation will produce two molecules of NADH. In the next step, we're going to make two ATPs using the enzyme phosphoglycerate kinase. At this point, we're ATP neutral. We've consumed two ATPs and we've generated two ATPs. And lastly, the enzyme pyruvate kinase is going to generate an additional two ATPs.
I put those in a box, because these are the two net ATP's for the whole pathway. So that's the pathway of glycolysis. I want to give you a little bit of a preview of coming attractions at this point. What you'll notice is that the pathway involves an oxidation step in which we consumed two NAD+ molecules and generated two NADH's. NAD+ is derived from a vitamin. We'll only have limited amounts of it. We need to find a way to regenerate the NAD+ in order to process the next molecule of glucose, and we'll see that nature has several ways to solve that problem. Nature actually has three ways to regenerate NAD+. The first we'll call alcoholic fermentation. The second is homolactic fermentation, and the third is respiration. Alcoholic fermentation and homolactic fermentation occur in the absence of oxygen. That is, anaerobically. Respiration by definition is an aerobic process. Now we'll look at each of these mechanisms of regeneration of NAD+ in some detail, but also, in addition to NAD+, you're going to have to generate a number of other products that can be useful to the cell. We'll see those later. Let's look at panel C. Under anaerobic conditions, yeast will take pyruvate and convert it initially to acetaldehyde, and then reduce the acetaldehyde to ethanol. These are the reactions of alcoholic fermentation. Yeast uses an enzyme called pyruvate decarboxylase to process the pyruvate. Pyruvate decarboxylase, or PDC, has on it a covalently attached thiamin pyrophosphate. Thiamin is derived from vitamin B1. As we go through the pyruvate decarboxylase reactions, at the outset I want you to keep in mind that PDC, pyruvate decarboxylase, is very similar to the front end of the chemical reaction series that's conducted by an enzyme present in mammals like us. That enzyme complex has pyruvate dehydrogenase, which we'll come to a little later when we talk about respiration. The thiazole ring in TPP forms an ylide. That means that despite the fact that the PKA of the thiamin pyrophosphate is about 19, you are able to form a carbanion at the carbon of the thiazolium ring system. That carbanion attacks the middle carbon of pyruvate, converting it from a ketone to an alcohol. Now you have the thiazolium ring system with the positive charge beta to the carboxylate of pyruvate. That system readily decarboxylates as shown, liberating thiamin pyrophosphate and the product, acetaldehyde. The next enzyme in this small pathway is alcohol dehydrogenase which utilizes NADH, which came in, in principle, from the GAPDH step of glycolysis. Alcohol dehydrogenase uses the glycolysis-derived NADH to reduce the aldehyde functionality of acetaldehyde to form the product of this pathway, ethanol. Ethanol is an alcohol, hence the name alcoholic fermentation. So looking at this small pathway in total, what you see is that you form CO2 as a first product, which could be the bubbles in a carbonated beverage or what makes bread rise, and form ethanol as the other major product. And, of course, you get your NAD+ back, which you can then return to glycolysis, specifically the GAPDH step of glycolysis, to enable metabolic processing of the next molecule of glucose. So this is the pathway that yeast and other alcohol-forming organisms use to maintain redox neutrality within the cell. I'm on storyboard six, and we're going to start with panel D. The second general mechanism that we're going to look at that concerns the regeneration of NAD+ for glycolysis is called homolactic fermentation. 
This occurs in mammals and in lactic acid bacteria. And like alcoholic fermentation, it is also a process that occurs anaerobically. That is, in the absence of oxygen. As you can see, pyruvate is a keto acid, and the ketone at the number 2 carbon can be easily reduced. In this case, NADH will transfer hydride to the ketone in order to reduce it to the alcohol lactate. The net reaction here involves consumption of one NADH and the production of one NAD+. And this NAD+, of course, can go back and be utilized to enable oxidation of the next molecule of glucose passing through the glycolytic pathway. When a mammal is running hard, this is the pathway by which we achieve redox neutrality in glycolysis. When we exercise intensely, lactate is produced in excess to keep the glycolytic pathway active. The lactate causes the blood pH to go down. That is, the blood becomes more acidic, because lactic acid has a low pKa. I also want to point out that this anaerobic pathway is also the basis for production of lactate by lactic acid bacteria, which is critical to the manufacturing of yogurt. Let's look now at panel E. The third pathway to regenerate NAD+ for glycolysis is respiration. We're going to be going through respiration in some detail later, but right now I'm going to give you a very high-level view of it. By way of an introduction, the mitochondrial inner membrane is very well equipped to be able to transport electrons. Those electrons will travel along an electron transport chain to oxygen, reducing the oxygen we breathe into water. This is a highly energy-generating process, and the energy that's generated is part of the driving force for the synthesis of ATP. The details of how a respiring organism generates ATP are covered later. For right now, however, let's just say that the mitochondrial membrane oxidizes NADH to regenerate the NAD+ needed to sustain glycolysis. And, again, we'll see the details of how this happens later. Later, I'll also cover the ways that redox neutrality is maintained in a mammalian cell. In brief, NAD+ is generated from NADH in aerobes by a series of reactions that I call quote unquote "the shuttles," which will be covered in section 12. Before we go on, let me give you a little recap of where we are. We've seen that there are a couple of optional beginnings for glycolysis. It can begin with intake of glucose from the blood, or it can begin with the breakdown of glycogen by glycogenolysis. The formal pathway takes glucose as glucose 6-phosphate down to pyruvate. We get a total of two ATPs in that process, and we produce two NADHs. Now, however, we've got to have a way to be able to regenerate our NAD+ from those NADHs in order to be able to make the pathway ready to process the next molecule of glucose. Accordingly, nature developed three endings to the pathway that result in the regeneration of NAD+. These endings are alcoholic fermentation, homolactic fermentation, and respiration. Looking at that picture in panel E once again, in us, respiration happens in the mitochondria, primarily in the mitochondrial inner membrane and in the jelly-like mitochondrial matrix. Pyruvate generated in the cytoplasm-- that's the compartment where glycolysis occurs-- goes through the porous outer membrane of the mitochondria. Then, the pyruvate encounters the membrane-bound pyruvate dehydrogenase complex, which is our next topic. In bacteria, which are in many ways like mitochondria, respiration happens in the cellular membrane and in the cell's cytoplasm.
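As a quick sanity check on the redox bookkeeping in this recap, here is the per-glucose ledger under either fermentation (our numbers simply restate the recap: two NADH made at the GAPDH step, two consumed when pyruvate or acetaldehyde is reduced):

```python
# Per-glucose redox ledger for anaerobic glycolysis (a minimal sketch).
nadh_made_at_gapdh = 2         # one per glyceraldehyde 3-phosphate, two per glucose
nadh_used_in_fermentation = 2  # lactate dehydrogenase or alcohol dehydrogenase
print(nadh_made_at_gapdh - nadh_used_in_fermentation)  # 0: NAD+ fully regenerated
```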
Let's take a look at panel A of storyboard seven. We're about to start our discussion of respiration, which is the oxidative metabolism of all metabolic fuels via the common intermediate acetyl CoA. In mammals, as I said earlier, these are mitochondrial reactions. At the outset, I also want to point out that we have seen that carbohydrates can be metabolized either anaerobically or aerobically. As we'll see when we use lipids as our metabolic fuels, they can only be metabolized aerobically. Lipids break down to acetyl CoA, which is then oxidized by the TCA cycle. Let's take a look at panel B. When we talked about alcoholic fermentation earlier, I said that yeast have a pyruvate decarboxylase complex, and I said that the reactions of the pyruvate decarboxylase complex are very similar to the reactions in the early part of the pyruvate dehydrogenase reaction, which is a little bit more complex. Pyruvate dehydrogenase has three activities, E1, E2, and E3. As with pyruvate decarboxylase, E1 has a thiamine pyrophosphate unit, TPP, and an ylide on the TPP attacks the middle carbon of the pyruvate. Specifically, its ketone carbon. At this point, you're going to want to take a look at detailed notes that I've provided as supplemental material. This supplemental material will be referred to as slide one, slide two, and so on. Looking at slide one, you can see that decarboxylation happens exactly the same way that I described for the pyruvate decarboxylase system. Now take a look at slides two through six. In the case of PDH, pyruvate dehydrogenase, unlike the situation with PDC, pyruvate decarboxylase, restructuring of the hydroxyethyl group is going to result in the formation of a carbanion that's going to attack the disulfide of lipoic acid. Looking at slide seven, you'll see the conversion of the hydroxyethyl to a keto functionality jettisons the TPP, resulting in a thioester in which there is an acyl group connected to lipoic acid. At this point, the thiol of coenzyme A attacks the carbonyl carbon of the thioester, producing acetyl CoA, which is going to become a very important molecule as we move ahead. The second product is reduced lipoic acid. Technically, the formation of reduced lipoic acid is the oxidation step of the pyruvate dehydrogenase reaction. The decarboxylation step that happened a few steps earlier is basically the production of CO2 that we eventually will breathe out when we exhale. Now let's look at slide eight. The reducing equivalents on the E2 subunits present as reduced lipoic acid will move across the E2 subunit toward the E3 subunit. The E3 subunit has an oxidized disulfide bond on it, which was created by the connection of two cysteines on the protein. That oxidized disulfide is then reduced by transfer of the reducing equivalents from the reduced lipoic acid to the disulfide. And then finally, the reducing equivalents are passed from the reduced disulfide to FAD to form FADH2. That FADH2 passes along its reducing equivalents to NAD+ forming NADH. This NADH is soluble and will move to its next location. Specifically, this NADH will return to the mitochondrial membrane-- actually to another place in the mitochondrial membrane-- an enzyme called complex one, and be oxidized in the electron transport chain. Overall, one pyruvate enters the PDH complex. We lose its carboxylate as CO2. We generate from pyruvate's residue an acetyl coenzyme A. And at the very end, we get an NADH, which will then go on to the electron transport complex to be oxidized. 
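A quick accounting sketch can serve as a check on that overall summary. This is not part of the lecture; it only restates the net PDH reaction (pyruvate + CoA + NAD+ -> acetyl-CoA + CO2 + NADH) and verifies the carbon count.

```python
# Net PDH reaction, counting carbons only (an accounting aid, not a simulation):
# pyruvate + CoA-SH + NAD+  ->  acetyl-CoA + CO2 + NADH
carbons = {"pyruvate": 3, "acetyl group of acetyl-CoA": 2, "CO2": 1}

reactant_C = carbons["pyruvate"]
product_C = carbons["acetyl group of acetyl-CoA"] + carbons["CO2"]
assert reactant_C == product_C == 3

# Electron bookkeeping: the two electrons removed in the oxidation step travel
# hydroxyethyl-TPP -> lipoamide -> FAD -> NAD+, ending up on the NADH that
# leaves for complex one of the electron transport chain.
print("carbons balanced:", reactant_C, "->", product_C)
```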
The formation of NAD+, as I mentioned above, is critical to allow further oxidation of reduced, that is, energy-rich, molecules such as glucose. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_10_Problem_3_Gluconeogenesis.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Hi, welcome back. I am Dr. Bogdan Fedeles. Let's solve some more biochemistry problems. Today we're going to be talking about problem 3 of problem set 10. Now this is a problem about the spontaneity of gluconeogenesis. As you guys have already learned, gluconeogenesis is the process by which non-sugar precursors are used to produce glucose. This problem should help you think of the following conundrum-- in a cell, when we have glucose we can make energy by doing glycolysis. But at the same time, the cell can take, for example, the endpoints of glycolysis such as pyruvate, or even further metabolites like amino acids, and use the pathway in reverse to make glucose. Now if one process is spontaneous in terms of thermodynamics, then how could we run the reaction in reverse? So here we're going to be looking at a couple of steps of gluconeogenesis that are the exact reverse of the reactions that you've seen already in glycolysis. And then in the end, we're going to be talking about the whole pathway level and make some comments on the spontaneity of both glycolysis and gluconeogenesis. Part A of the problem deals with the aldolase reaction, namely going from fructose 1,6 bisphosphate to glyceraldehyde 3-phosphate and dihydroxyacetone phosphate. Now in gluconeogenesis, we will be going in reverse, starting with dihydroxyacetone phosphate and glyceraldehyde 3-phosphate to regenerate fructose 1,6 bisphosphate. Let's take a look at the reaction. This is dihydroxyacetone phosphate. This is glyceraldehyde 3-phosphate or GAP. And these two come together in the aldolase reaction to form fructose 1,6 bisphosphate, or F 1,6 BP. Now Part 1 of the question asked us to write down the detailed mechanism for this reaction, running in reverse. So as you guys know, aldolase, specifically the Class I aldolases, use an active site lysine residue to covalently bind the substrates during the course of the reaction. So this reaction will proceed by dihydroxyacetone phosphate binding to the active site lysine of the aldolase. Here we have the dihydroxyacetone phosphate and the lysine in the active site. So as a first step to the mechanism, the lysine is going to react with the carbonyl group in the dihydroxyacetone phosphate to form a Schiff's base, or an imine. As you can imagine, the reaction is going to be catalyzed by a general base, which is going to remove one of the protons from the lysine, which then becomes a good nucleophile and can attack the carbonyl, which will then be protonated by a general acid. So this way you obtain an intermediate as shown here, which can then lose the water molecule to form the Schiff base. Once again, we need a base that will take this proton, and the hydroxyl will need an acid to protonate it so it can leave as water. So what we have formed here, this is the protonated Schiff base, or what we call an iminium ion. And if you look at it from the point of view of where we started from, which was a carbonyl group, this is a version of an activated carbonyl. A carbonyl that is now poised to do chemistry. Once again, this is an iminium ion. 
Now as we discussed in the carbonyl video, iminium ions are activated carbonyls that can now undergo some of the carbonyl reactions with more ease. For example, this one is going to do an enolization. So we have highlighted here the alpha hydrogen next to the carbonyl group. So we're going to form the enol by removing this hydrogen, and the carbonyl, with its positive charge on the nitrogen, acts as a very good electron sink. So what we're forming here is an imine that's bound to a double bond. So this we're going to call an enamine. So now this is the reactive version of the dihydroxyacetone phosphate. Now it's poised to react with the other partner in the reaction, GAP-- glyceraldehyde 3-phosphate, which we've shown here. Now GAP is going to be the carbonyl component of this aldol reaction, while the enamine from the dihydroxyacetone phosphate is going to be the enolic component. So the enol is going to be a very good nucleophile and is going to attack the carbonyl. The reaction is once again initiated by this amine group. The electrons move here and this attacks the carbonyl. And we can protonate it with a general acid. So what we obtain here is now a molecule that, from two three-carbon portions, has six carbons. And if you look closely, this is really the fructose 1,6 bisphosphate bound to the active site of the enzyme as a Schiff base. So all that needs to happen now is to hydrolyze the Schiff base and release our fructose 1,6 bisphosphate product. So, we're going to need one water molecule. It's going to be activated by a general base. Then it is going to attack this imine and the electrons will move to the nitrogen. We get to this step, and here we need one more step of catalysis to regenerate the enzyme to its free lysine form and then release the fructose 1,6 bisphosphate product of the reaction. So once again, we started with dihydroxyacetone phosphate and GAP, and we used this covalent catalysis, where the substrates were bound in the active site of the enzyme, to form our product, fructose 1,6 bisphosphate. Now this, as we discussed in the carbonyl chemistry, is an example of a direct aldol reaction. And as the name of the enzyme, aldolase, suggests, this is an aldol reaction, here running in the gluconeogenic mode. Now in Part B of the question, we're going to look at another important reaction that can run both ways-- glycolysis and gluconeogenesis. That will be specifically going from glyceraldehyde 3-phosphate, or GAP, to 1,3 bisphosphoglycerate. This is the reaction catalyzed by GAP dehydrogenase. Here is a representation of the reaction. We have glyceraldehyde 3-phosphate that is converted by GAPDH to the 1,3 bisphosphoglycerate. Now in the glycolysis step, we know we need one NAD equivalent, and one inorganic phosphate to go into the enzyme to be able to oxidize our aldehyde to an acid anhydride in 1,3 bisphosphoglycerate. Now in gluconeogenesis, we're going to be going from right to left. We're starting with 1,3 bisphosphoglycerate. We're going to need NADH and we're going to generate GAP and inorganic phosphate. Now let's take a closer look at the mechanism of this reaction as it would run in gluconeogenesis. Here we have a representation of the active site of the GAPDH, where we highlighted the cysteine group, SH. And we also have a general base in the active site and we're going to call it B. And here is our starting material 1,3 bisphosphoglycerate. 
Now if you remember how GAPDH works in the direct reaction in glycolysis, we're going to have to form a covalent intermediate, in which the substrate is going to bind to the enzyme. That's going to form a thioester with that thiol group of the cysteine. So this is what's going to happen here. So the base in the active site is actually going to assist the deprotonation of the cysteine. Then the thiolate is activated and can attack the phosphoanhydride of the 1,3 bisphosphoglycerate. And in this first step, it's just going to form a tetrahedral intermediate. There you have it. Now we have a negative charge on the oxygen and we spelled out the phosphate with all the atoms here. As you know, this tetrahedral intermediate is now going to fall apart, releasing the best leaving group. In this case it is going to be the inorganic phosphate. So the electrons come down. They're going to be transferred onto the phosphate and presumably protonated. So after the phosphate is released, now we have formed finally the thioester of our substrate in the active site of GAPDH. As you remember, this is a redox reaction so it involves the co-factor NAD. In this case, running the reaction from right to left, we're going to use NADH to reduce the bisphosphoglycerate to the aldehyde group. So here is our representation of the NADH. As you remember, one of these hydrogens, together with its electron pair, is going to be donated as a hydride, or H minus. So the reaction proceeds by rearranging the electrons on the pyridine ring of NAD, and the H minus is going to be the nucleophile that's attacking now the thioester. And once again, we're going to be forming a tetrahedral intermediate, which is shown here. Now this hydride here is the one that came from NADH. And in the process the NADH co-factor is converted to NAD-plus. So now we can see we're just one step away. This is now a hemithioacetal-like molecule. So it's one step away from forming glyceraldehyde 3-phosphate. All we need to do is release the enzyme in its original conformation. So the electrons will flow now to reform the carbonyl, while the sulfur is going to get its electrons from the base and this regenerates the enzyme as we had it in the beginning. And we're releasing GAP, the product of the reaction. So with this mechanistic insight, we have actually addressed how the GAPDH reaction runs in the gluconeogenic mode from bisphosphoglycerate to GAP. Now Part A of the problem also asked us to look up the free energy value for the aldolase reaction, that is, the delta G naught prime. Now, if you look in your favorite biochemistry textbook-- I have here the Voet and Voet Third Edition, the book that we use in this course-- on page 511 you're going to see a table with all the free energies of the glycolysis reactions. And for the aldolase step, you're going to see that it has a positive standard free energy in the glycolytic direction, that is, the reaction is spontaneous in reverse, in the gluconeogenic direction. So now that may be a little surprising, but keep in mind that the reactions in these pathways are more often than not governed by mass action, that is, if we have an excess of the starting materials, the reaction would proceed towards the product, while if we have an excess of the products, the reaction will proceed backward towards the starting materials. So both the aldolase reaction and the GAP dehydrogenase reactions are very susceptible to this mass action. 
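This mass-action point can be put into numbers with the relation delta G = delta G naught prime + RT ln Q. The sketch below is only illustrative: the delta G naught prime used (about +23 kJ/mol for aldolase written in the glycolytic direction) is an assumed, textbook-style value (check the table in your own edition), and the metabolite concentrations are invented solely to show how the sign of delta G flips with the ratio of products to reactants.

```python
import math

R = 8.314e-3   # kJ / (mol K)
T = 310.0      # K, roughly body temperature

def delta_G(dG0_prime, Q):
    """Actual free energy change: dG = dG0' + RT ln(Q)."""
    return dG0_prime + R * T * math.log(Q)

# Aldolase, written in the glycolytic direction: F1,6BP -> DHAP + GAP
dG0_aldolase = 23.0   # kJ/mol, assumed illustrative value
scenarios = {
    "triose phosphates kept low (glycolytic conditions)":      (1e-4, 1e-6, 1e-6),
    "triose phosphates in excess (gluconeogenic conditions)":  (1e-6, 1e-3, 1e-3),
}
for label, (f16bp, dhap, gap) in scenarios.items():
    # concentrations in mol/L, treated relative to the 1 M standard state
    Q = (dhap * gap) / f16bp
    print(f"{label}: dG = {delta_G(dG0_aldolase, Q):+.1f} kJ/mol")
# A positive dG0' does not by itself decide the direction; the
# concentrations (mass action) do, which is exactly the point above.
```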
So they will be controlled- the direction, the spontaneity of this reaction will be controlled by which of the starting materials or products are in excess. Part C of this problem asked us to evaluate the spontaneity of gluconeogenesis at the level of the whole pathway. Now we know glycolysis is a spontaneous process. Not only when it goes from glucose to pyruvate, but also it generates some high energy intermediates like ATP in the process-- like we get two molecules of ATP per glucose. Now if we want to go backwards from pyruvate to glucose, in a gluconeogenesis pathway, can this process be spontaneous? First let's look at glycolysis. Here is a schematic of a glycolysis. We're starting here from glucose and we're going to need a couple of molecules of ATP to activate it, to get to fructose 1,6 bisphosphate. And from there on, we're going to generate actually, two molecules of ATP at this step and then two molecules of ATP in the final pyruvate kinase step. And of course, we're also going to need an NAD- plus going to NADH here at the GAP step. Well, there's going to be two of these NAD-plus needed to oxidize GAP to 1,3 bisphosphoglycerate. And as we just said, even though we have some energy cost in early on, we're actually generating more ATP by the time we reach pyruvate so the pathway is in fact spontaneous. Now what about gluconeogenesis. In gluconeogenesis, we are starting with pyruvate and we want to go back to glucose. Now if we were to reverse exactly every single step in glycolysis, that pathway is not going to be spontaneous. Step four in gluconeogenesis uses these alternate pathways in a couple of the steps in order to make the process spontaneous. For example, going from pyruvate to phosphoenolpyruvate in gluconeogenesis is not a reverse of the pyruvate kinase reaction. It rather occurs in two steps. So the first step we take pyruvate and convert it to oxaloacetate, using pyruvate carboxylase, so it's going to need a molecule of CO2. And oxaloacetate has four carbons. And it's also going to need energy. So we're going to need a molecule of ATP going to ADP. Now oxaloacetate can now be processed inside the mitochondria or it can be taken out of the mitochondria into the cytosol. And let's say that's the course of the reaction. Where it's going to find an enzyme called PEP carboxylic kinase, or PEPCK, that can take oxaloacetate to PEP. This enzyme once again requires energy. This time in the form of GTP going to GDP. And here we're going to lose that carboxyl group that we added on earlier. Now while this might seem like a cumbersome way to reverse one reaction, this allows both the part of a kinase reaction and going from pyruvate going back to PEP, to be controlled in different ways and therefore allow both of these processes to happen spontaneously. The rest of gluconeogenesis will have exactly the reverse of these steps in glycolsis. For example, PEP going to 2-phosphoglycerate. 2-Phosphoglycerate going to 3-phosphoglycerate. 3-phosphoglycerate going to 1,3 bisphosphoglycerate. Here, since in glycolysis we generated ATP here, we're going to need the ATP to come in and go to ADP in order to accomplish this step in gluconeogenesis. 1,3 bisphosphoglycerate going to GAP, this is the step we just discussed in Part 2 of the problem. As we said, we're going to need NADH going to NAD-plus. Then GAP and DHAP going to fructose 1,6 bisphosphate, this is the step we discussed in Part A of this problem, the reverse of the aldolase reaction. 
Now going from fructose 1,6 bisphosphate to glucose, we're not going to do these kinase reactions in reverse, where we would be generating ATP and therefore that would be not spontaneous in the reverse direction. But rather, we're going to use alternate enzymes called phosphatases where we lose the phosphates without regenerating an ATP molecules. So therefore, by using these two tricks we can go back to glucose without regenerating these ATP molecules, and therefore the pathway can be spontaneous. Now the bottom line here is that both glycolysis and gluconeogenesis are spontaneous, but gluconeogenesis uses most but not all of the glycolysis steps to run in reverse. And while glycolysis generates energy, we get a net of 2 molecules of ATP per molecule of glucose used, gluconeogenesis as you guys have seen here, actually uses energy to run spontaneously. That is, we need ATP molecules. Now if you look at our diagram, we need ATP molecules to convert pyruvate to phosphoenolpyruvate, actually 2 of them. And then we're going to need more ATP here to form 1,3 bisphosphoglycerate. In addition to the redox, like NADH co-factor, to run the GAP dehydrogenase reaction in reverse. So given these considerations, gluconeogenesis is in fact spontaneous, but it's going to cost us several ATP equivalents. While glycolysis is spontaneous and generates ATP. Well, that solves problem 3 of problem set 10. Here, I hope you got a better understanding of why both glycolysis and gluconeogenesis are both spontaneous pathways inside the cell. Thank you. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Lexicon_of_Biochemical_Reactions_Introduction.txt | SPEAKER: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: Hi, everybody. You're in 507, and if you look at your syllabus, you'll find one of the things in the front page of your syllabus is called a lexicon. And I'd like to introduce you to why we've chosen to have a lexicon for this course. So if you look at this particular slide or overhead, this is what John is going to teach you this semester-- introductory metabolism. The glycolysis pathway, fatty acid biosynthesis, fatty acid oxidation, amino acid metabolism. A complete jungle. How are we ever going to learn anything out of this mess? Well, that's actually exactly the point. So what we're going to do over the course of the semester is convince you that all of this mess can be simplified to 10 basic reactions. And those 10 basic reactions are what's in the lexicon. So it turns out all of biochemistry for primary metabolism can be described using 10 different sets of reactions and your vitamin bottle. And so if you look at your vitamin bottle, what do you see? Most of you probably take vitamins. You have vitamin B1, vitamin B2, vitamin B3, vitamin B6, vitamin B12. All of those provide enzymes the catalysts for all the reactions in this complex metabolic pathway I showed you on the previous slide. They expand the repertoire of reactions that enzymes can actually catalyze. And so within the lexicon, what we're going to show you is the chemistry of actually how these vitamins work. So again, if we come back to the lexicon, we'll talk about how carbon-carbon bonds are made and broken, fatty acid metabolism, sugar metabolism. We'll talk about oxidation reduction reactions and the vitamins that are used for that transformation. We'll talk about the energy storage and the energy currency in the cell-- ATP, etc. So what the lexicon is meant to do is be an aid when you can't remember what does oxidation and reduction? You can go back to your lexicon and look up what are the redox cofactors that are involved in transformation. And by practicing the chemistry in the first part of the semester of all these reactions, metabolism should be very straightforward in terms of all the connectivities. So hopefully what you will do is look through your lexicon tonight, see what these reactions are, and then keep it by your side during the rest of the semester. And when you're having trouble understanding some chemical transformation, you can use the lexicon as a guide to think about the chemical transformations that you'll be looking at over the course of the semester. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Ketogenesis_Diabetes_and_Starvation.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We're now at storyboard 23. Let's look at panel A. Our last metabolism topic under the general umbrella of catabolism is ketone bodies. Ketone bodies are only covered in two pages in the book, but they're medically very important. The medical relevance of ketone bodies stems from their role in starvation and their role in diabetes. Let's look at panel A. This panel shows various sources of and uses of acetyl CoA. It shows that ketone bodies are made from Acetyl coenzyme A. So let's start this part of the lecture with a discussion of where acetyl CoA comes from, and what its various states are including the formation of ketone bodies. We just finished talking about fatty acid catabolism. So let's start on the right of this figure. We see that a fatty acid can be broken down to form acetyl CoA by beta oxidation, usually with the objective of generating energy. If we follow the acetyl CoA from beta oxidation down into the TCA cycle, we see that it will be fully oxidized to carbon dioxide with the generation of a lot of energy that can be used for mechanical work, biosynthesis, and other things. A second source of Acetyl CoA can be seen to the left, where we see glycogen breakdown to glucose or glucose can be directly imported into a cell. In either case, the glucose that we take in or liberate from its storage depot, glycogen, can be converted into acetyl coenzyme A. Glycerol can be generated from the backbone of a metabolized triacylglyceride. It enters glycolysis as dihydroxyacetone phosphate and then can progress to acetyl CoA by way of glycolysis. Alanine can transiminate into pyruvate, which is converted subsequently to acetyl CoA by pyruvate dehydrogenase. This diagram shows us that many pathways converge to generate acetyl coenzyme A. Carbohydrates, amino acids, fatty acids, all can act as a source of this important precursor to energy. Aside from being processed by the TCA cycle, on the left, we see a broken line pathway involving fatty acid biosynthesis. This is going to be the next topic we come to after this discussion of ketone body formation. The pathway on the right with the broken lines is the pathway leading to ketone bodies which is called ketogenesis. In the box at the top of panel A is a cartoon reminding me to tell you that acetyl CoA cannot escape from the cell. Moreover, it cannot even easily escape from the mitochondrion. In order for acetyl CoA to leave the cell and be transported from one organ to another, it needs to be converted into ketone bodies. Another way to look at ketone bodies is that these are mobile or portable forms of Acetyl CoA that can go from a source organ, which is usually the liver, to a target organ, which may need them in order to generate energy by way of the TCA cycle. The target organ for example could be brain under conditions of starvation, or it could be skeletal muscle if you have to run away from something. Let's look at panel B. There are five key facts that we need to know about ketone bodies. First, Ketogenesis mainly occurs in the liver. The liver manufactures ketone bodies and then exports them to other organs for use. 
These reactions typically happen when the levels of oxaloacetate become limiting in their mitochondrion. And I'll give you an example of why this is the case when I talk about starvation a little later in this lecture. The second important fact is that these are the primary metabolic fuels of the heart and skeletal muscle under normal conditions. The third fact is that ketone bodies become the major metabolic fuel of all cells under conditions of starvation, even cells of the brain after a few days of starvation will convert from glucose being the preferred metabolic fuel to accepting ketone bodies as their major source of energy. The fourth fact I want to talk about with regard to ketone bodies is that they are produced in excess in diabetes. I'll talk a little bit more about that later in the lecture. Finally, ketogenesis occurs in the mitochondrion, primarily the mitochondrion of the liver. So these are mitochondrial reactions. Panel C shows the three classical ketone bodies, acetoacetate, beta hydroxybutyrate, and acetone. From the standpoint of chemical accuracy, it's obvious that beta hydroxybutyrate is an alcohol and not a ketone. Nevertheless, it's lumped in with the ketone bodies for historical reasons. Acetoacetate and beta hydroxybutyrate are what I'll call quote unquote "useful" ketone bodies from the standpoint of serving as precursors to metabolic energy. Acetone by contrast, is not useful by this criterion. Acetone however, is a useful biomarker, because sometimes its presence can help diagnose diabetes. On a personal note, I come from a long line of diabetics. I remember when I was a little kid, my dad before he was diagnosed, would come home from work at the end of the day. He had very poor circulation. So my two sisters and I would try to rub his legs to give him a kind of massage to make his circulation a little bit better. I remember very clearly my older sister saying, "gee, dad smells like mom's nail polish remover," and that's because he was a diabetic producing acetone, which was used at the time at least as nail polish remover. We had no idea at the time what was going on. A few months later, my dad was diagnosed as a type 2 diabetic. I now know that the fruity odor we smelled on his breath was acetone. As I said, acetone is a biomarker of this disease. One last point with regard to the story board. Note that the acetoacetate and beta hydroxybutyrate molecules are acids. In diabetics, these acids can be produced as we'll see later in sufficiently high concentrations to lower the pH of the blood quite substantially. Keep in mind that lowering the pH is the same thing as increasing the concentration of protons in the blood. These concentrated protons will have physiological relevance that I'll discuss later. When a diabetic enters the phase where the pH of their blood is dangerously low, that's called diabetic acidosis. Let's now turn to Panel D. At this point I want to describe the detailed biochemical reactions that give rise to ketone bodies. To make a ketone body we're going to need three molecules of acetyl coenzyme A. One of these molecules is going to be catalytic. That is, it's going to be restored at the overall end of the process of ketogenesis. Let's start by imagining a scenario in the mitochondrion of a liver cell where oxaloacetate becomes limiting. I'll talk about physiological states under which oxaloacetate becomes limiting or sparse a little bit later. 
Acetyl CoA cannot enter the TCA cycle, because citrate synthase lacks oxaloacetate as a reaction partner. The concentration of acetyl CoA starts to rise. Then the beta ketothiolase reaction that is the last step in fatty acid beta oxidation reverses owing to the high concentration of product acetyl CoA. So two acetyl CoAs come together in order to form acetoacetyl coenzyme A. Note that I put markers on each of the carbons of the acetoacetyl coenzyme A. A third acetyl coenzyme A is then added to the gamma carbon of the acetoacetyl coenzyme A. That's the carbon that has the filled-in square. The enzyme that catalyzes this last reaction is HMG Coenzyme A synthase, where HMG stands for hydroxymethylglutaryl. HMG CoA is a six-carbon branched-chain molecule. In the present situation, we're going to look at HMG CoA as the source of ketone bodies in the mitochondrion, but I want you to keep in mind that if this reaction were to occur not in the mitochondrion but in the cytoplasm, the resulting HMG CoA could be used for other pathways. For example, HMG CoA in the cytosol is the precursor to cholesterol. With that in mind, let's return our attention to the mitochondrion and ketogenesis. The mitochondrial enzyme HMG CoA lyase will split the HMG CoA, knocking off an acetyl CoA and liberating as the final product acetoacetate, which is our first of three ketone bodies. Acetoacetate is a beta keto acid and hence prone to spontaneous decarboxylation. Non enzymatically, this will happen at some slow rate in order to liberate CO2 and produce acetone, which is our second ketone body. This acetone gives the fruity smell to the breath of a diabetic whose disease is out of control. Acetone is not going to be biochemically useful to us, for example it's not going to be metabolized to generate energy. The second chemical fate of the acetoacetate is its reduction by NADH using the enzyme beta hydroxybutyrate dehydrogenase. This reduction forms our third ketone body, beta hydroxybutyrate, which is a biochemically useful molecule in that it serves as a good metabolic fuel. Acetoacetate and beta hydroxybutyrate do not need any kind of special transporter to get out of the cell into the blood. They diffuse through the mitochondrial membrane and later through the cell membrane. They are then transported by the circulatory system from the liver to organs that need them for energy. As I mentioned above, ketone bodies are portable forms of acetyl CoA. In a real sense, the liver by making these ketone bodies is acting as a food caterer where ketone bodies represent food that's delivered to other organs. At this point let's look at storyboard 24 panel A. Now let's take a look at what happens when the ketone bodies travel by the blood and are taken up by another organ such as muscle. Acetoacetate, which I'll refer to as ketone body one, is good to go and is ready to enter the mainstream of metabolism. So I'm going to come back to it in a minute. The beta hydroxybutyrate, by contrast, has to be processed in order for it to be useful to the target organ, skeletal muscle in this case. In step two, the muscle form of beta hydroxybutyrate dehydrogenase will use NAD plus to oxidize the beta hydroxybutyrate into acetoacetate, which joins the pool of acetoacetate that came in directly from the blood. We now have to put a thioester group on the acetoacetate, and that comes from an unusual source. 
In step three you'll see a succinyl coenzyme A from the TCA cycle giving its coenzyme A residue to acetoacetate, which results in the formation of acetoacetyl coenzyme A. This reaction happens in the mitochondrion of the cell. At step four, acetoacetyl coenzyme A is converted by beta ketothiolase into two molecules of acetyl coenzyme A. And again, we're going to need another coenzyme A group to come in at this point as part of the beta ketothiolase reaction. Remember that beta ketothiolase is the last enzyme that's operative in beta oxidation of fatty acids. Here it's doing the same chemistry that it does in beta oxidation. It splits acetoacetyl CoA into two acetyl CoA molecules. And in steps five and six, those molecules integrate into the TCA cycle. In the TCA cycle they're oxidized to carbon dioxide with the generation of energy. Let me review for a minute before going into a physiological scenario. Way over to the left at step one, the liver has made acetyl CoA and packaged it into two ketone bodies, acetoacetate and beta hydroxybutyrate. They travel in the blood to target tissues, for example, the muscle, or heart, or the brain. In these target tissues these ketone bodies are internalized, converted to acetoacetate, and then to acetoacetyl coenzyme A and then ultimately to several molecules of acetyl coenzyme A. The acetyl coenzyme A that started in the liver ends up in the target tissue and then can be used to generate energy. This is a particularly important reaction under conditions of starvation and diabetes. Let's now look at panel B of storyboard 24. As you know, I like to look at physiological scenarios because, at least to me, they help make biochemistry real. The scenario I want to look at is that of diabetes. In Type 2 diabetes, which is the type that I have, my cells have become resistant to taking up glucose. My cells are insulin insensitive. After a meal I have very, very high concentrations of glucose in my blood, because the cells of my tissues are not capable of taking it in. Hence, if I do not take my anti-diabetic medication, the sugar concentration in my blood stays high, which leads to some of the medical complications of diabetes. More on that later. Given that there's a lot of glucose in my blood, but it's not getting into my cells, my cells are actually in a technical state of starvation. Take a look at the pathway I've drawn in panel B. Glucose on the left is not getting into the cell. I've used broken lines for the pathway from glucose to pyruvate and then from pyruvate in the cytoplasm into the mitochondrial matrix. These broken lines are meant to indicate that the pathways involved are just not very active. The sparse activity of these pathways means that acetyl CoA levels are becoming somewhat limiting in the mitochondrion. Because pyruvate is also limiting, the enzyme pyruvate carboxylase doesn't have sufficient pyruvate in order to maintain the oxaloacetate concentration within the mitochondrial matrix. Once again, oxaloacetate is the TCA cycle intermediate that's at the lowest, that is, micromolar, concentration. I'm focusing here on the liver, although I should add at this point that all tissues are similarly limited in the pathways indicated by the broken lines. The liver's response to sensing this limitation in carbohydrate processing is to either take in lipid or to break it down from internal stores, for example, triacylglycerides, in order to produce acetyl CoA. 
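The blood-chemistry values quoted a little later in this scenario are given in mixed units (glucose in milligrams per 100 milliliters, ketone bodies in millimolar, acidity as pH), and they are easier to compare on a common footing. The short sketch below is just unit arithmetic; the glucose readings (100 versus 300 mg per 100 mL) and the pH drop (7.4 to about 6.8) are the illustrative values used in the lecture, while the molecular weight of glucose (about 180 g/mol) is an assumed constant.

```python
MW_GLUCOSE = 180.16  # g/mol, assumed value for D-glucose

def mg_per_dL_to_mM(mg_per_dL, mw=MW_GLUCOSE):
    """Convert a blood concentration in mg per 100 mL (mg/dL) to millimolar."""
    grams_per_litre = mg_per_dL / 100.0      # 100 mg/dL = 1 g/L
    return grams_per_litre / mw * 1000.0

for label, value in [("normal blood glucose", 100), ("at diagnosis", 300)]:
    print(f"{label}: {value} mg/dL = {mg_per_dL_to_mM(value):.1f} mM")

# A blood pH drop from 7.4 to 6.8 means the proton concentration rises
# by a factor of 10**(7.4 - 6.8), i.e. roughly 4-fold.
print("fold increase in [H+]:", round(10 ** (7.4 - 6.8), 1))
```

With those magnitudes in mind, back to panel B, where the liver has switched to breaking down lipid to produce acetyl CoA.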
But because oxaloacetate is limiting, the step at beta ketothiolase backs up, producing a large amount of ketone bodies. The ketone bodies are produced in excess, so you can see them escaping out of the mitochondrion and later out of the cell, and they go off into the blood. Consequently, the liver of diabetics produces a lot of ketone bodies, because it senses that the body is starving. To the right of panel B, I have some blood chemistry values that are of relevance to diabetics. In a non-diabetic person, blood sugar concentration, that is, blood glucose, is maintained at about 100 milligrams of glucose per 100 milliliters of blood. When I was diagnosed with diabetes, my blood sugar was over 300 milligrams per 100 milliliters. As I recall, the symptoms were disorientation. I couldn't walk very easily, I was thirsty, and I urinated a lot. Normal ketone body concentrations are less than 0.2 millimolar. In a severe diabetic situation, your ketone body concentrations could be 15 to 25 millimolar, and the pH of your blood could drop from the mid 7 range down to about 6.8. The kidney responds to the high concentration of glucose and the high concentration of protons, that is, the low pH, by increasing urine volume output in order to try to urinate out the glucose and protons. The result is that the diabetic becomes excessively thirsty, which again is one of the biomarkers or symptoms of the disease. The classic historical treatment of diabetes is to give insulin, which will push more glucose into the cell and thus offset the biochemical defect that leads ultimately to ketone bodies and to the high concentration of glucose in the blood. Aside from giving insulin by injection, there are other medications that will result in a sort of reactivation of the beta cells in the pancreas in order to produce more insulin naturally. Alternatively, there are medications that will block gluconeogenesis and thus stop the liver and other gluconeogenic organs from producing glucose. So by blocking gluconeogenesis, one can lower the glucose concentration of the blood. As you see there are many, many ways to treat this disease. Let me add that it can be a very debilitating disease, leading to blindness, amputation, and cardiovascular difficulties. It's a good idea to try to avoid the risk factors for this disease. It's not fully preventable, at least in people who come from families in which nearly everybody gets it, for example, my situation. But by avoiding risk factors you can push off the date of onset by many years. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | PLP_Pyridoxal_Phosphate_Reactions.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Hello and welcome to 5.07 Biochemistry online. I'm Dr. Bogdan Fedeles. This video is about pyridoxal 5 phosphate, or PLP, an essential metabolism cofactor derived from vitamin B6. All animals are auxotrophic for PLP, meaning they need to supplement their diet with vitamin B6 in order to survive. PLP is one of the most ancient cofactors, and surprisingly, it can catalyze chemical transformation, such as a transamination even without an enzyme. PLP is actually involved in a staggering number of biochemical transformations. This video summarizes the most important reactions involving PLP that you will see in 5.07, and will also show you how to write the complete curved arrow mechanisms for these transformations. Let's talk about PLP-catalyzed reactions. As we just mentioned, PLP is the cofactor derived from vitamin B6. This cofactor is very important for a number of reactions. In this course, we're going to look particularly at the transamination reaction. This is a crucial reaction for the metabolism of all amino the acids, and we're also going to encounter this reaction when we replenish the intermediates in the TCA cycle, what we call anaplerotic reactions. And we're also going to see PLP involved in reactions in the malate-aspartate shuttle that transfers redox equivalents, reducing equivalents, between mitochondria and cytosol. Let's take a look at the structure of vitamin B6, also known as pyridoxine. This is the molecule that we ingest when we get our daily vitamin supplement. Now in the body, this gets oxidized to form intermediate called pyridoxal. Notice the aldehyde group here, which is going to be the business end of the molecule. Now, the active co-factor, PLP-- it's actually the phosphorylated version of pyridoxal. This requires one molecule of ATP and the enzyme pyridoxal kinase. And we get PLP, or pyridoxal 5 phosphate. The name PLP comes from the initials as outlined here. Now, this nitrogen on the pyridine ring tends to be protonated because it's pKa, it's close to physiological pH, between 6 and 7. Now, for the rest of this presentation we're going to be abbreviating this phosphate group as such, and throughout the course. Now, a related molecule is pyridoxamine 5 phosphate, which we'll see, it's an intermediate in the mechanism of PLP-catalyzed reactions. Also known as PMP. Now, notice the PMP has an amine group here which replaces the aldehydic group, which is the business end of the molecule. Now, in all reactions with PLP, this co-factor is actually covalently bound to the enzyme that uses it. Typically, there's a lysine in the active site of the enzyme. As you remember, lysine has an amine group, and this can form a Schiff base with the aldehyde. The reaction proceeds in two steps. First, we form a tetrahedral intermediate. As such. And then, we form the Schiff's base. So this will be the enzyme-bound PLP. And this is where all the PLP-catalyzed reactions start. Let's take a closer look at the transamination reaction. Transamination reaction occurs between an amino acid and an alpha keto acid. We have here amino acid 1, where we highlighted the amine group, and alpha keto acid 2. 
As you can see, there's a keto group next to the carboxyl. Now, in a transamination reaction catalyzed by PLP, the amine group moves from the amino acid to the keto carbon of the alpha keto acid. And we obtain a new alpha keto acid, and a new amino acid. So in effect, the PLP-catalyzed transamination reaction facilitates the transfer of the amine group from an amino acid to an alpha keto acid. Now, this reaction actually occurs in two steps. In the first step, the amino acid transfers the amine group to the co-factor itself. So if you remember from the previous slide, the PMP contains an amine group, and that amine group is in fact the one that was taken from amino acid 1. Now in the second step, the PMP will transfer its amino group to a different alpha keto acid to generate a new amino acid. Now, there are enzymes for virtually every single amino acid that can accomplish this first transformation, using PLP to transfer the amine group and form an alpha keto acid and PMP. Now, in the second part of the reaction, however, the alpha keto acid 2 is typically alpha ketoglutarate or oxaloacetate. So in this case, not just any alpha keto acid can function-- it's alpha ketoglutarate or oxaloacetate. Now, let's take a look at an example. For example, glutamate. It's going to be our amino acid. And oxaloacetate is going to be our alpha keto acid. And in a PLP-catalyzed transformation, we will obtain the alpha keto acid corresponding to glutamate, which is alpha ketoglutarate, and the amino acid corresponding to oxaloacetate, which is aspartate. The enzyme that catalyzes this transformation is in fact ubiquitous, and it's found both in the liver and the muscles. Depending on the product, we can call it aspartate transaminase or glutamate oxaloacetate transaminase. In fact, if we find this in the bloodstream, this enzyme acts as a biomarker. And it tells us about some damage that might have occurred in muscle or liver, which forced the cells to spill out their contents. This biomarker is one you'll often see as SGOT-- serum glutamate oxaloacetate transaminase. And this is just one of the biomarkers that are measured in blood tests that tell us about heart disease or liver disease. Let's take a look at the mechanism of the transamination reaction. And in particular, we're going to take a look at part one, in which, as we discussed before, the amino acid reacts with PLP to form an alpha keto acid and PMP. Here is our co-factor PLP, covalently bound to the lysine in the active site of the enzyme via a Schiff's base. And here is our amino acid starting material. So in the first step, the lysine that forms the Schiff base with the co-factor is going to be replaced by the amine functionality of the amino acid, and it will form a new Schiff base with the co-factor. This starts with the amine group attack on the pyridoxal carbon to form a tetrahedral intermediate. And following an additional proton transfer, the lysine can be kicked off to form the new Schiff base. So far, we have started with the Schiff's base corresponding to the PLP bound to the enzyme and we now obtain a co-factor forming a Schiff base with the incoming amino acid 1. So this portion of the mechanism is called transimination, because we're starting with one imine and we're forming a different imine. Now, let's take a look at the alpha proton attached to the alpha carbon, which we're going to highlight here. This proton is now in between two carbonyl-like groups. 
Here is the carboxyl group and here is the Schiff base group. So it becomes acidic enough that it can be removed by an active site base, for example the lysine in the active site. This will generate a carbanion at the alpha carbon. This carbanion can only form because it is resonance stabilized. And indeed, the PLP ring system is highly conjugated, and it's a good electron sink. For this carbanion, we can write, in fact, many different resonance structures. Let's take a look at one of them. This symbol denotes resonance structures. Notice in this structure that the positive charge on the pyridine nitrogen is now gone, which highlights the fact that this is a good electron sink. And the ring now looks more like a quinone. That's why we call this a quinoid structure, or intermediate. This quinoid structure shows us a glimpse into how the reaction will proceed, because the alpha carbon now is doubly bonded to a nitrogen, which anticipates how this alpha carbon will become a keto group, and a product of the reaction will be an alpha keto acid. What happens? The quinoid structure can be re-protonated, but at a different place. For example, on the aldehydic carbon of pyridoxal. To highlight the fact that these structures are in fact resonance structures, we're going to put them in brackets. So let's take a look at what happened in this past couple of steps. We had an alpha proton that was fairly acidic, and it was able to be removed by the active site lysine. And then this proton came back to a different position. So all that's happened in just a couple of steps was a proton transfer. Now, looking at this intermediate, we can see that it is in fact the Schiff's base, or imine, between the PMP form of the co-factor and the alpha keto acid corresponding to amino acid 1. So via a hydrolysis reaction, these two can come apart. So in the first step, an activated water molecule attacks the alpha carbon, forming a tetrahedral intermediate. And one more proton transfer and we're going to kick off the pyridoxamine form of the co-factor. And notice we obtain PMP and the alpha keto acid corresponding to the amino acid 1. This last step is, in fact, just a hydrolysis reaction of a Schiff base. Now, let's take a look at the second part of the PLP transamination reaction. Part one left us off with the formation of PMP. So in this second part, PMP will react with a new alpha keto acid to regenerate PLP and a new amino acid. In fact, this part of the mechanism is the exact reverse of part one. Here is PMP and our alpha keto acid. In the first step, we're going to form-- as we've gotten used to by now-- a new imine between the keto group of the alpha keto acid and the amine group of PMP. As usual, first we're going to get a tetrahedral intermediate. And one more proton transfer, and we can kick off the OH group to form the imine. Now, this portion of the reaction is, in fact, imine formation, which is the reverse of the hydrolysis step that we saw in part one. Now, as you remember, there is an active site lysine which can act as a general base. And it's going to remove one of these two protons on the pyridoxal ring. The reason that we can form this carbanion here is because this negative charge is delocalized throughout the entire ring system. And let's show one important resonance structure. Which is none other than the quinoid structure we saw before. Just as before, the protonated lysine can now donate a proton at a different position. For example, the alpha carbon of the alpha keto acid. 
As you can see here, now the proton is on the alpha position. And now where this starts to look more like the Schiff's base formed by an amino acid with the PLP version of the co-factor. So from here on onwards, we're just going to substitute the PLP-- the amino acid bound to the PLP-- with the active site lysine in the transimination reaction that we saw before. So first, the lysine can attack this carbon, forming a tetrahedral intermediate. And then, following some proton transfer, we can kick off the amino acid and generate the Schiff's base corresponding to the PLP co-factor bound to the enzyme. So here we have PLP, enzyme bound, and the new amino acid 2. We mentioned PLP is a very versatile co-factor, so let's take a look what other reactions besides transamination can PLP catalyze. One interesting reaction, used especially by bacteria, is a racemization. This involves taking an L amino acid, for example L alanine, and converting it via a PLP-catalyzed reaction to D alanine. This is an important reaction for bacteria, because they incorporate the alanine into the cell walls, which make it very hard to recognize by the immune system, and makes it very hard to digest by the host proteases. Let's take a look at a key intermediate in the PLP-catalyzed reaction. As before, L alanine is going to react with the PLP bound to the enzyme, and it's going to form an amine. Here we're highlighting the stereochemistry of the alpha hydrogen. And here is our active site lysine. As we've seen before, this alpha hydrogen is acidic enough that it can be removed by the lysine. And it's going to form a carbanion at this position. Now, this carbanion, as we've seen before, is able to form because the charge is, in fact, delocalized through the entire system of the pyridoxal ring. So for this structure, we can write a number of resonance structures, which we're not going to mention here. Now, this carbanion can be re-protonated. And here we had a-- the hydrogen was pointing up on the top of the plane of the page, but we can re-protonate it from the bottom, and that will change the stereochemistry of this carbon. See if that re-protonation happens from the bottom, we will obtain the Schiff base corresponding to the D alanine. So by being able to generate this carbanion at the alpha position, the PLP reaction and co-factor allows the inversion of the configuration and the alpha carbon, converting L alanine to D alanine. Now, another interesting reaction that requires PLP is de-carboxylation. Here we're looking at an amino acid-- for example, glutamate. In a PLP-catalyzed reaction, you can lose this CO2 group and form this molecule, which is called gamma aminobutyric acid, or GABA. This is, in fact, a very important neurotransmitter and inhibitor, a neurotransmitter that is required in the brain. Let's take a look how this de-carboxylation is catalyzed by PLP. As always, have we seen so far, the glutamate will react with PLP bound in the active site of the enzyme, to form a Schiff base. Here is the Schiff base. Like that. Now instead of de-protonating at the alpha position, the CO2 is activated to leave. Because it will leave behind the carbanion. Such as that. And as we've seen before, a carbanion formed at this position can de-localize throughout the entire pyridoxal ring, and therefore it stabilize and it can exist long enough. And we're not going to draw, but there-- you can imagine, there are a number of different resonance structures. 
Now, this carbanion gets protonated quickly by a general acid, and will generate this structure, which is just a Schiff base corresponding to gamma aminobutyric acid with PLP. And now, from here, a transimination occurs, where the active site lysine will remove the PLP and free up the GABA product of the reaction. In this video we talked about PLP-catalyzed reactions. PLP is the co-factor that comes from vitamin B6. Here's vitamin B6, what we call pyridoxine, which is the molecule that we find in our vitamin pills. Now, in the body, pyridoxine is oxidized to pyridoxal, which is activated by phosphorylation to form pyridoxal 5 phosphate, or PLP. And typically when it reacts, PLP is found as a Schiff's base bound in the active site via a lysine. PLP is very important for transamination reactions, which are essential for the metabolism of all amino acids. We have seen the transamination reaction where an amino acid 1 reacts with alpha keto acid 2, and the PLP-catalyzed reaction forms alpha keto acid 1 and amino acid 2, essentially transferring the amino group from the amino acid to the alpha keto acid. The reaction occurs in two steps, where first the amino acid is transferred to PLP to form PMP. PMP then transfers this group-- the amino group-- to an alpha keto acid to generate a new amino acid. As we saw, the mechanism of transamination involves multiple steps. The first step is a transimination reaction where the Schiff's base that's formed between the active site lysine and the PLP becomes a Schiff's base between the incoming amino acid and PLP. Next, we have a proton transfer, which is allowed by the ability of the PLP ring to stabilize a negative charge, and move the proton from the alpha position to somewhere on the PLP ring via a quinoid structure. And finally, the resulting Schiff base is hydrolyzed to generate PMP and an alpha keto acid. In the second part of the reaction, PMP now reacts with an alpha keto acid to form a new Schiff base, and then the proton transfer happens in reverse, via, again, a quinoid structure, to generate the Schiff base corresponding to PLP and the new amino acid 2, which, via a transimination reaction, will release amino acid 2 and regenerate the enzyme-bound PLP. Finally, we mentioned that PLP is very versatile, and it can catalyze other reactions, such as racemization, for example, switching the configuration of the alpha carbon from L alanine to D alanine, or de-carboxylation, generating gamma aminobutyric acid, or GABA, an important neurotransmitter, from glutamate. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Blood_Sugar_Fluctuations_and_Gluconeogenesis.txt | JOHN ESSIGMANN: There are many different treatments for diabetes. I'll just use myself as a case study. One of the things I do is measure my blood sugar at different times during the day, and my blood sugar this morning was 116. And my blood sugar at about five o'clock this afternoon will probably be in the high 80s or 90. My blood sugar after I have a meal might go up to 160, 170, and it shouldn't go that high. And some mornings, like a couple of days ago, my blood sugar was 146, which is too high. And that reflects two things: what I ate the night before, and the second thing, the one that's my problem, which is gluconeogenesis. Gluconeogenesis technically means new synthesis of glucose from non-carbohydrate precursors. For example, I could take the amino acid aspartate, and I could find a path by which I could convert that into glucose. I could take lactate, and I could convert it into glucose. OK, so during the day, I use my glucose, and I stay active, and I walk around. I ride my bike and so on. So my glucose level, it's pretty reasonable by the time I go to bed. But then at night, when I stop moving around a lot, this gluconeogenesis process continues in me, and that's what causes my blood sugar to go up. The medicine I take is called Metformin. It has a number of targets, but one of them is one of the enzymes, called PEPCK, phosphoenolpyruvate carboxykinase, that's in the gluconeogenic pathway. Let me say a word about gluconeogenesis, another word actually. So we all have dinner, like say six to nine o'clock at night. We go to bed, and there are certain organs in the body, the brain, our red blood cells, that require glucose. They can't work with anything else. So gluconeogenesis, principally by the liver, provides a constant stream of glucose to these organs that absolutely require it, like our brain. Now, when we go to sleep at night, as time gets longer and longer after the meal, our glucose level, our natural glucose level, is going to start to fall off. And the liver then compensates by increasing the amount of gluconeogenesis in order to keep our blood glucose at about 100 while we're not eating. What I do during the night is take this drug, which prevents the switch to producing more and more sugar by gluconeogenesis. |
MIT_507SC_Biological_Chemistry_I_Fall_2013 | Problem_Set_3_Problem_2_Proteases_Mechanisms_of_Inhibition.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. BOGDAN FEDELES: Hi, and welcome to 5.07 biochemistry online. I'm Dr. Bogdan Fedeles. Let's metabolize some problems. Now, I have here problem two of problem set three, which is an excellent exercise about the mechanism of inhibition of enzymes, specifically proteases. Now, this deals with the same protease from problem one of this problem set, which is interleukin converting enzyme, or ICE. Therefore, it's best that you familiarize yourself with the mechanism of action of this enzyme, ICE, by solving problem 1 and then continuing with this video, here. As you found out by solving problem one, ICE is a cysteine protease. It features cysteine and a histidine in its active site. The mechanism starts by the histidine acting as a general base and deprotonates the cysteine SH group. The thiolate anion then can attack the substrate, forming first, a tetrahedral intermediate. When this tetrahedral intermediate collapses, it cleaves the peptide bond and accomplishes the chemical reaction of the protease. Then, the newly formed thioester is hydrolyzed to release the other half of the product. Let's take a look at the mechanism. Here is our peptide substrate. And this is the peptide bond that's going to be cleaved by protease. As we mentioned, we have a cysteine in the active site and we have a histidine that's just going to function as a general base. I'm going to denote it as B. In the first step, the histidine is going to deprotonate the cysteine. The cysteine is then going to attack the carbonyl of our peptide bond, and the electrons are going to go to the oxygen, and form a tetrahedral intermediate. Here, notice the histidine is going to be protonated, and the oxygen is going to have a negative charge. In the next step, this tetrahedral intermediate is going to fall apart by breaking the peptide bond that the protease is supposed to cleave. There is the thioester with the cysteine in the active site and one half of our product is going to be released, here. Once again, the histidine is going to be deprotonated at this step. In the second step, the thioester we just formed in the active site will be hydrolyzed by a water molecule, which will be activated by the same histidine in the active site. Here is our water molecule. So, the histidine is going to deprotonate the water, activating it for attack on the carbonyl, forming once again, a tetrahedral intermediate. I have a negative charge here and a positive charge on the histidine. Finally, this tetrahedral intermediate will collapse, restoring the cysteine in the active site, which can be reprotonated by the protonated histidine, and releasing the second half of our substrate, here. So there you have it. Now we restored the active site with the cysteine and the histidine. And this is the second half of our peptide that we cleaved. So notice the carboxyl side is right here and the amine side was released a little bit earlier. The mechanism that you just saw is very similar to the serine protease mechanism that is described in the book and in the lecture notes. 
Notice, every time we form a tetrahedral intermediate, it is probably stabilized in an oxyanion hole formed by some of the residues on the backbone of the enzyme. Now let's take a look at a couple of strategies for inhibiting a cysteine protease like ICE. This problem is proposing two strategies: one involves an aldehyde inhibitor, the other an acyloxymethyl ketone inhibitor. Let's take a look. Here is the structure of a proposed aldehyde inhibitor. Notice here, this is an aspartate residue, or aspartate-looking residue, which together with the other couple of amino acids forms the recognition portion of the inhibitor that allows it to bind the protease. The R group is going to be an aldehyde, which will be crucial for actually inhibiting the enzyme. Question one of this problem is asking us to propose a mechanism by which the aldehyde inhibitor works. We're given an important clue that this is a mechanism-based inhibitor. A mechanism-based inhibitor means that the inhibitor binds in the same fashion as the normal substrate of the enzyme. Therefore, after we have reviewed the mechanism of the cysteine protease and remembering some of our carbonyl chemistry, we should be able to propose the following chemical reaction. Here is the active site of our protease. This is the cysteine and this is the histidine, which we're denoting as a general base, B. And here's our aldehyde inhibitor, of which I'm going to just show the aldehyde group, right here. Since the R group next to this aldehyde resembles very closely the natural substrate of the enzyme, this aldehyde group will be positioned in the same place where we would normally find the peptide bond that will be cleaved by the enzyme. Therefore, this thiolate group, once it forms, will be in a great position to react with the aldehyde and form a tetrahedral intermediate. So the histidine deprotonates the cysteine, and the cysteine can then attack the carbonyl to form a tetrahedral intermediate. We get this tetrahedral intermediate and a protonated histidine base. Now normally, the reaction would proceed from here to form a thioester, but because this is an aldehyde, the reaction stops here, and that's how the enzyme will be inhibited: the inhibitor is now covalently bound in the active site to this cysteine. An interesting observation, which you can't really tell from the problem: when people looked at the x-ray structure of this tetrahedral intermediate, they noticed that the negative charge on the oxygen is not, in fact, stabilized in the oxyanion hole that would stabilize such tetrahedral intermediates during the normal reaction. The ability of the aldehyde group to react with the cysteine in the active site and form a covalent bond can readily explain why the molecule would function as an inhibitor. Nevertheless, the reaction between the aldehyde and the nucleophile, the thiolate, is readily reversible. So whenever the inhibitor is in its carbonyl form, it can potentially fall off from the active site of the enzyme. Therefore, how good an inhibitor this molecule is will depend on how tightly it binds to the enzyme, and not necessarily on the fact that it forms a covalent bond in the active site. This is an example of a reversible inhibitor, even though it forms a covalent bond with the enzyme. Therefore, its ability to inhibit the enzyme will depend on the relative concentrations of the inhibitor and the natural substrate of the enzyme.
Let's remember the Michaelis-Menten equation written for a reversible inhibitor. As you recall, we have an enzyme reacting with a substrate. We have k1 and k minus 1, the rate constants for forming and dissociating the enzyme-substrate complex, which then, with k2, is going to form the product and re-form the enzyme. But the inhibitor will react with the enzyme in the absence of the substrate in an equilibrium, forming an enzyme-inhibitor complex which does not lead to any product. The constant of this equilibrium I'm going to call Ki, and this is the dissociation constant of the enzyme-inhibitor complex. As you have seen in the notes, taking into account the inhibition constant, the rate v is going to be Vmax times the concentration of the substrate, divided by Km (the Michaelis constant for the enzyme) times 1 plus the concentration of inhibitor over Ki, plus the concentration of substrate, s. In other words, v = Vmax*s / (Km*(1 + I/Ki) + s). Now, this equation tells us exactly how the rate is going to change as we increase or decrease the concentration of the inhibitor. Notice here that the term 1 plus I/Ki is always greater than 1, because the concentration and Ki are going to be positive numbers. So this is 1 plus something positive; it's always going to be greater than 1. Therefore, the denominator is going to be bigger than it would be without the inhibitor. In the absence of the inhibitor, the denominator is just Km plus s. Therefore, when we add the inhibitor, this denominator gets bigger, and therefore the whole fraction gets smaller. We get a smaller rate. This is the basis for why the inhibitor will inhibit the enzyme, and therefore the rate of the reaction is going to be smaller. To see this graphically, we can write the reciprocal of the equation and look at the Lineweaver-Burk plot. The reciprocal of the equation is going to be 1/v equals-- and if we crunch the numbers, we're going to come up with-- Km/Vmax times (1 plus I/Ki), times 1/s, plus 1/Vmax. Let's plot this. I'm going to have 1/v here and 1/s here. Now, if the concentration of substrate is really, really high, 1/s is going to be almost zero. So at the limit, when 1/s is zero, we should get v equals Vmax. So let's say here it's 1/Vmax, and therefore without any inhibitor, we're going to get a line that looks like this. Now, as we're adding an inhibitor, this slope-- that is, the coefficient of 1/s-- is going to be increasing. Therefore, we should get higher slopes. The higher the concentration of I, the bigger the slope. So it's going to look like this. So as the concentration of I increases, the slope of this graph will increase. Notice, however, that all these lines, even though they correspond to slower rates, will, as the concentration of substrate increases-- that is, as 1/s gets closer to 0-- all converge to the same intercept, 1/Vmax; that is, the same Vmax. This is the key feature of competitive inhibition, because the substrate in high quantities can outcompete the inhibitor. Nevertheless, this mechanism only applies when the binding of the inhibitor to the enzyme is fast and reversible. If the binding is not reversible, obviously the enzyme will be inactivated forever, and then we will see a time dependent inactivation of the enzyme. The same phenomenon will happen if the reverse reaction, that is the dissociation of the inhibitor from the enzyme, is a slow process. All these considerations form a comprehensive answer for part one of the problem. Question two asks us to provide several reasons for which aldehyde inhibitors are not actually desirable as therapeutics.
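As an aside not from the problem set: to make the algebra above concrete, here is a minimal Python sketch of the competitive-inhibition rate law, v = Vmax*s / (Km*(1 + I/Ki) + s), and of the corresponding Lineweaver-Burk slope. The numerical values of Vmax, Km, Ki, and the concentrations are invented purely for illustration.

def rate_competitive(s, i, vmax, km, ki):
    """Michaelis-Menten rate in the presence of a competitive inhibitor.

    v = Vmax * s / (Km * (1 + i/Ki) + s)
    Only the apparent Km is scaled by (1 + i/Ki); Vmax is unchanged, which
    is why all the Lineweaver-Burk lines share the same 1/Vmax intercept.
    """
    km_apparent = km * (1 + i / ki)
    return vmax * s / (km_apparent + s)

# Illustrative (made-up) parameters; concentrations in micromolar.
vmax, km, ki = 100.0, 50.0, 10.0
for inhibitor in (0.0, 10.0, 50.0):
    lb_slope = (km / vmax) * (1 + inhibitor / ki)      # slope of 1/v versus 1/s
    v_saturating = rate_competitive(5000.0, inhibitor, vmax, km, ki)
    print(f"[I] = {inhibitor:5.1f}: Lineweaver-Burk slope = {lb_slope:.2f}, "
          f"v at saturating [S] = {v_saturating:.1f} (approaches Vmax = {vmax})")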
Once again, we have to think about the carbonyl chemistry. We saw here that aldehydes can react very well with nucleophiles such as thiols and thiolates. But aldehydes in solution can also react with water and form what we call geminal diols, which look like this. Therefore, the effective concentration of the aldehyde in solution will be diminished because of this equilibrium, and the inhibitor might not be efficient at that lower concentration. Another consideration involves the oxidation of aldehydes. These could be enzymatically or even non-enzymatically oxidized to form carboxylic acids or carboxylates. These would obviously not be very reactive, and any inhibitor that gets oxidized will stop being an inhibitor. Additionally, owing to their reactivity, aldehyde groups could react with many other biomolecules. Think about the amino acid side chains. Many of them are, in fact, capable of reacting with aldehydes. This will also diminish the effective concentration of the inhibitor and it may even cause side effects. Therefore, taking all of this into consideration, aldehydes may not be a great solution for therapeutics. Question 3 is asking us to propose a mechanism by which the second kind of inhibitor, the acyloxymethyl ketone, inhibits the protease. As you see here, the acyloxymethyl ketone is actually very similar to the aldehyde inhibitor. Notice this group is exactly the same as the carbonyl group of the aldehyde, but instead of having just the hydrogen, we have a methylene next to an acyloxy group. This, as you know, is a very good leaving group and provides a second reactive site, this methylene group here, that can react with the enzyme. The second kind of inhibitor is in fact very similar to the tosyl phenylalanyl chloromethyl ketone (TPCK) inhibitor of serine proteases, which is discussed at length in the lecture notes and in the book. These inhibitors fall into a general class of alpha-substituted ketones and they feature two reactive sites. One is the carbonyl and the other one is the methylene group, which has attached a good leaving group. Now, here is a general form of an alpha-substituted ketone. Here is the ketone, here is the methylene group attached to a good leaving group, which I'm going to denote x. Now, here are the residues in the active site. Here is the cysteine with the thiol group, and here is the histidine, which I'm going to draw out. Histidine, all right. So as we saw before, the reaction will start with the histidine acting as a general base. The histidine deprotonates the cysteine, which then can attack the ketone carbonyl. This leads to the formation of a tetrahedral intermediate. And we have a positive charge here on the histidine, and a negative charge on the O minus, here. Now, this tetrahedral intermediate presumably will be stabilized in an oxyanion hole, as you guys have seen before, with some kind of hydrogen bonds to the backbone of the protein. Now, the next step is something that you can't really anticipate or know without doing some experiments. But it turns out this O minus is a good enough nucleophile to displace the good leaving group, x, in an SN2 reaction. So we're going to have an SN2 reaction: this O minus attacks the carbon, and then the x takes the electrons and leaves. What we're going to form here is an epoxide. All right, so this is the epoxide, and we still have our protonated histidine here. And there's a positive charge here, and of course, the x group takes its electrons and leaves as an anion.
Now, in the next step, because the protonated histidine is actually sitting close enough to the epoxide, the epoxide is going to get protonated; it takes the proton from the histidine. So now we have a protonated epoxide, the histidine is back in its deprotonated form, and the epoxide carries a positive charge. Now, because we have a protonated epoxide, this acts as a very good leaving group, and this carbon becomes very susceptible to an SN2 reaction. And it turns out the histidine, this nitrogen, is a good enough nucleophile to react in an SN2-type reaction. And in fact, the reaction is helped by this other nitrogen. I should have used a different color. So what we're getting here is a covalent bond between the histidine and the alpha position of the original alpha-substituted ketone. Now, this reaction will not, in fact, be reversible. So we should just draw one arrow only. So there we have our histidine. Now it's covalently attached to our inhibitor, and this step is, in fact, irreversible. So this prevents the inhibitor from ever dissociating once this reaction has taken place. Now, even though the tetrahedral intermediate around the carbonyl carbon can fall apart, as we saw before, re-forming the carbonyl and the cysteine thiol, the covalent bond to the histidine remains. And this is, in fact, the reason for which these inhibitors are irreversible and will show a time dependent inhibition. Regardless of the more complicated mechanistic detail I just showed you, the take-home message here is that when using an alpha-substituted ketone, the end result will be a covalent bond between the enzyme and the inhibitor that is not reversible. Therefore, we expect to see a time dependent inactivation of the enzyme. The more enzyme is being taken out of the reaction by the inhibitor, the slower the overall reaction will be, until there is no more enzyme left to catalyze the reaction. The problem also provides some kinetic data, which, in fact, supports the time dependent inhibition features of the second kind of inhibitor. In this figure, we see on the y-axis the concentration of the product that is formed, and on the x-axis we see the time. Now, if we look at the dark circles, which represent the control reaction in which we have some substrate but no inhibitor, we see that the product is produced and increases linearly with time. However, once we add the inhibitor at a certain concentration, and this would be the dark squares, then we see that the amount of product increases for a while, but then it grows slower and slower until it eventually stops. Now at this point, if we add an excess of substrate, we see that the reaction does not restart, meaning the entire amount of enzyme has been inactivated, and this inactivation is irreversible. However, we're also provided an additional piece of data. If we run the reaction in an excess of substrate, we see here, the open circles, that the amount of product produced increases linearly with time. But if we add the same amount of inhibitor, these open squares are actually on top of or right next to the open circles on this line, which says that the inhibitor has virtually no effect at this level of concentration. This tells us that the inhibitor is in fact competing with the substrate: when we have an excess of the substrate, the inhibitor does not get a chance to bind and inhibit the enzyme, at least within this interval of time.
Now, this way of plotting kinetic data is perhaps a little misleading, because the amount of product that we obtained from the reaction does not reflect how much of the enzyme gets inhibited. We want, in fact, to look at the percentage of enzyme activity that is remaining after a given amount of time. Therefore, if we were to plot percentage enzyme activity versus time, for a control reaction we expect the reaction to proceed and the enzyme to stay just as active at any given point in time, so we will see a flat line at 100 percent. However, when we add an inhibitor that shows time dependent inhibition, the percentage of enzyme activity that remains at every point in time will be decreasing. And in fact, it will be decreasing with a slope that approaches a maximum as the amount of inhibitor present in the reaction mixture increases. That limiting line represents the fastest that the enzyme can be completely inactivated by the inhibitor, and it is governed by the binding affinity of the inhibitor for the enzyme. This answers the third and last question of this problem. I hope you enjoyed this exploration of the various strategies by which inhibitors can inhibit proteases. This problem highlights, in fact, the importance of understanding the mechanism of action of enzymes. Only then can we begin to design therapeutically useful inhibitors. |
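As an aside not from the problem set: the replotting described above can be illustrated with a simple two-step model of irreversible, time dependent inactivation, E + I <-> E.I -> E-I, in which the remaining activity decays exponentially with an apparent rate constant kobs = kinact*[I]/(Ki + [I]) that saturates at kinact. All parameter values in this Python sketch are invented for illustration.

import math

def percent_activity(t_min, inhibitor_um, kinact_per_min=0.5, ki_um=5.0):
    """Percent of enzyme activity remaining after t_min minutes.

    kobs = kinact * [I] / (Ki + [I]) saturates at kinact, mirroring the
    limiting "fastest possible inactivation" line described above; with
    no inhibitor, kobs is zero and the activity stays at 100 percent.
    """
    kobs = kinact_per_min * inhibitor_um / (ki_um + inhibitor_um)
    return 100.0 * math.exp(-kobs * t_min)

# Control stays flat at 100%; higher [I] decays faster, up to the kinact limit.
for i_um in (0.0, 1.0, 5.0, 50.0):
    trace = [round(percent_activity(t, i_um)) for t in (0, 2, 5, 10)]
    print(f"[I] = {i_um:4.1f} uM -> % activity at t = 0, 2, 5, 10 min: {trace}")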
english_literature_lectures | Mark_Steel_Sylvia_Pankhurst_pt_2.txt | I just want to get out to the speakers yes smash it with him silvia would take new recruits into country lanes where their first task would be to collect flips the right size for smashing supporters were taken off window stretching classes emmalin announced the argument of the broken pane of glass is the most valuable argument from modern politics whatever a suffragette was sent to prison they would go on hunger strike so the prison authorities responded by hitting doctors to force feed them when Sylvia was arrested for smashing windows six of them flung me before my bedroom the man's hands were trying to cross over my mouth and a steel instrument pressed around my gums feeling for gaps in my teeth something gradually forced my jaws apart as they tried to get the tube down my throat silvia started to take issue with the movement it wasn't that she lacked courage if she'd been arrested 15 times I've been on hunger strike more than any other suffragette but she objected as she saw there was less emphasis now on involving lots of women and more on individual heroism and that way she was aware your supporters are left would not very much to do while a few heroes are treated like sites from here the suffragettes went in opposite directions Sylvia's focus strikers meetings whereas the other suffragette leaders called for strikers to be jailed Sylvia caused art fraud on a tour of America by visiting the poorest immigrant communities and by agreeing to speak as a black university in Tennessee and to see why this had such an impact this is an example of a standard textbook used in English schools at that time the prosperity of the West Indies has declined since slavery was abolished a large population is lazy vicious and incapable of any serious improvement a few bananas will sustain the life of a negro he is quite happy and quite useless so she went off to form our own section of the suffragettes in East London a former own newspaper the woman's drink malt she was arrested for organizing a mass booming of the prime minister outside Downing Street that she slipped away and then turned up in disguise to speak to the meeting so the police surrounded the meeting arrested her and took her to prison where she went on hunger strike she was released but on the condition that she didn't make any more speeches so she dressed in disguise again and made another speech this time when the police arrived her friends turned the hose on them and she escaped eventually she was arrested again it was taken to prison again went on hunger strike again and this time when she was released there were even more conditions so she flits to Norway then the government introduced a new law that said if a woman in prison went on hunger strike they would be released and then re-arrested again as soon as they had something to eat and to really take the piss they called this the cat and mouse event as Sylvia was becoming renowned for disappearing whenever she was a mouse she had to come up with Emma Maureen Everett disguises to get around these London and at one meeting in BO she said I reached the hall in disguise and what a triumph it was to be back among my people Akutan mr. 
Crouchback jump Sylvia jump with arms outstretched so jump I did as she wrote an article declaring we have not yet made ourselves a match for the police and we have got to do it the police know jujitsu I advise you to learn jujitsu the reason for this was the increasing violence of the meetings for example at one meeting she said the table was flung to the ground and the chairs were smashed this is eyes the heck nice secretary was beaten with the truncheon sure dude men and women came to the meetings with sticks in their hands retaliating against the blows and release I also began to see a weapon called a Saturday night made of tarde rope closely twisted and sometimes weighted with lead even the anti strike wing of the suffragettes extended its campaign of violence a one night in 1914 they burnt down three Scottish castles and famously they chained themselves to the railings of Buckingham Palace imagine the impact that must have had in those Royalists reverential times whereas eighty years later there's probably been a woman looking out that window game that will never get anywhere I threw myself down the stairs when I was pregnant with the end of the throne and nobody took any notice of that and most famous of all was Emily Davidson she had been a teacher but gave up to work full-time for the suffragettes and once it is say she broke into the house of commons and spent the whole night hiding some way around here in a cupboard but nobody knows which covers and the story is probably just a myth she was arrested for trying to set fire to the post office in Parliament Street and then imprisoned she went on hunger strike and barricaded herself into her cell and then one night in 1913 she laid a wreath on the statue of Joan of Arc and the next day she went to the dog as the King's horse Anmer r an interview just down there Emily ran out of the crowd over the fence onto the course and under the Kings host which apart from anything else must have taken the most meticulous planning to know exactly where the right horse would be at exactly the right time she must have spent ages study in the fall whereas today you could just got to the jockey before any go away there's a score unless over we might Emily was knocked unconscious and later she died in hospital and the incident was noted by the king just as the horses were coming rind technol corner a suffragette dash tight scandalous preceding a disappointing day got home pride 15 an empty in the garden so the government was under siege from the suffragettes from the unions and from the Home Rule movement in Ireland and these issues all came together when there was a general strike in Dublin the leader of the strikers Jim Larkin spoke with Sylvia Pankhurst at the Albert Hall but then Christabel who is living in Paris at the time summoned Sylvia and told her that she and the whole of the reefs London group were all expelled from the women's social and political union the supporting Larkin and the strikers and we're Sylvia complain that this was undemocratic Cristobal replied we do not want democracy here they there was a rail because Sylvia continued to call her group the East London Federation of suffragettes and it'll perk Erza to the sort of rail that plagued groups like bucks fees the other pan curse declare that they were the only ones entitled to use the name soon all these issues were engulfed by one other the coming war the government announced that the Germans were raping nuns and bio netting babies and were around every corner and this fever 
eats every section of society dog homes were full of dak sins that have been abandoned because of their German name across Europe socialist organizations led huge demonstrations in opposition to the war for the day war broke out almost every one of them changed |
english_literature_lectures | Mark_Steel_Sylvia_Pankhurst_pt_3.txt | mind and supported it instead one group that became more fervent than almost anyone was emiline suffragette she announced that all suffragette action would cease because with that patriotism which has nerved women to endure endless torture we ardently desire that our country shall be victorious the war has made me feel how much there is of nobility in men the suffet newspaper denounced a minister at the foreign office because he had a German uncle and Sylvia despaired as the rest of her family went around the country speaking at recruiting meetings for the Army and according to her they handed white feathers to every young man they encountered wearing civilian dress and they always assured Their audience that God was on their side of course he was God's always on your side in a war there is Never As far as I know been a war in which a general has got up and said last night night in this a time of need I prayed to God unfortunately it seems he's backing the Turks on this one one of a handful of individuals across Europe to announce their opposition to the war was Sylvia Pankhurst she wrote that as I saw this clamor to war there was a cry within me stop all this breaking of Bones this mangling of men this making of widows so emilyn wrote her a letter I'm ashamed of where you stand on the wall I only wish Harry was still alive so he could have gone and fought I wonder if you said oh I wish I was like Mrs Wickham over the road eight strapping young boys she had lost a lot at passionale oh I was jealous but apart from the Carnage the war also caused food shortages forcing up prices so Sylvia turned her office into a cheap Cafe for the most desperate on a site which is now a pub with possibly the finest Pub sign in the whole of Britain she even set up a toy factory to give people work women whose husbands had gone away to war would come to work in this little building in a sort of anarchist profit share Collective and from here Sylvia set up marriages between local women and single solders so that the women could carry on getting an allowance but the difference between Sylvia and the old suffragettes was shown when she got one of her old suffet comrades into to help a lad who'd been on suffer jet demonstrations came in destitute looking for help and the sub project told him well why don't you enlist it was in the women's dreadn that SEC Creed Su soon first made an anti-war statement and at one point the paper was selling 40,000 copies a week the women's dreadn called for mutinies in the Army at which point Sylvia was jailed for 6 months for sedition in the summer of 1915 Sylvia received a letter from Kia Hardy which began worryingly dear syfia in which he told her that he was so ill he didn't expect to last a week a few days later while speaking on a demonstration against conscription she noticed a newspaper headline Kia Hardy dead following this despair she became ecstatic when news reached her that Lenin and the Bolsheviks had taken power in the Russian Revolution and captured the Zar ladies and gentlemen we got him Y come on get at this point she changed her paper's name to the workers Dreadnaught the government sent arms to the forces fighting against the Russian Revolution but Dockers in the East End of London refused to load them the river's joint shop stewards movement organized this campaign and Sylvia kept him continuously supplied with Lenin's appeal to the working masses which was printed illegally communist Harry 
poet said my land lady in popler expressed surprise that my mattress seemed to vary in size from day to day she little knew that inside our mattress we kept our copies of Lenin's appeal Sylvia was invited to attend a socialist conference in stutgart but she didn't have a Visa so she had to slip out of the country in Disguise and go to Italy then she traveled along goat paths to get into Switzerland finally reaching Germany having crossed the Alps on foot she traveled to Russia on a tiny Norwegian fishing boat as a stowaway without a passport across the Arctic sea when she got to Russia she was hugely impressed with the revolution but she had an argument with Lenin insisting the British Communist Party should have nothing to do with elections to a parliament Lenin wrote a book arguing against Sylvia panker stance called left-wing communism and infantile disorder that's cool isn't it to have Lenin going trouble with you is your too bloody left wing feel like sitting in a pub with George best and him going I'm going home you're just being silly now mate Lenin insisted that the Communist parties of Europe should participate in elections and wherever possible they should join the labor party Sylvia derided the idea of of joining the labor party and of standing in elections at all saying that the Communists should instead be encouraging power to pass to the local communities so having spent her whole life campaigning for the vote now she was saying there was no point in anybody voting so instead of joining the newly formed Communist Party Sylvia and a few supporters went off to form their own party and over the next few years the workers dreadn became increasingly hostile to the Russian Revolution but it did run a series of lessons for Esperanto as a way of call combating nationalism then the subtitle of the paper for international communism was dropped and replaced with new ones such as for Clear thoughts and plain language and the happy are always good she might as well have had workers do Ed not because I'm worth it ironically as the panker were at war with themselves the government Was preparing to back down following the war it seemed inconceivable that soldiers who F the war should then be denied the vote so a bill was proposed to extend the vote to all adult men but then as the men had gone off to fight 1 and a half million women had taken their place in the factories so it also seemed ridiculous to deny them the vote as they had clearly taken on the traditional men's roles so the vote was granted to all women over the age of 30 and one of the first women to stand for election to Parliament was emilyn panker for the women's party campaigning for policies such as women wearing less lipstick Christel went even more peculiar she became an Adventist and predicted that Europe was about to enter an age of dictators and earthquakes which would end with the second coming why do people feel the lead to join these sorts of religions are they in church listening to stories about how God made woman out of a rib and parted the sea and turned Lot's wife into a pillar of salt while they sit there thinking trouble with this religion it's not mad enough for me two BS emilyn became the Parliamentary candidate for White Chapel backed by the conservatives while several of the most prominent suets went on to work for the British Union of fascists which must have been quite handy they could have given the SS tips on breaking windows in contrast Sylvia fell in love with an exiled Italian Anarchist who worked on 
the workers dreadn called Sylvio Coro this has to be one of the biggest family Rifts of all time Sylvia and Sylvio moved to a house on this site that they called Red Cottage in the Suburban area of Woodford in Essex what a fantastic thing to do for no other reason than to annoy everybody on the neighborhood watch scheme I love the idea of suburban anarchists |
english_literature_lectures | Stream_of_Consciousness_and_Mrs_Dalloway.txt | this lecture is over stream of consciousness it also includes information about modern innovation in writing virginia woolf and her novel mrs dalloway for stream of consciousness is a technique in literature it's a style of narration meant to mimic the flow of human thought we'll talk more in a minute about where the catalyst for this inventive way of writing came from if you think about the way that you think which is called metacognition by the way the human brain when it thinks it doesn't divide things into chapters or into sentences often your thoughts are incomplete or fragmented or you jump to your sister's rabbit who one time your finger which reminds you of the dentist and when you bit the dentist's finger when you were younger or your brain just goes all these different directions at least mine does the human thought process flows and authors try to mimic the way humans loosely associate when jumping from thought to thought this is a contrast to traditional narration where you have a third or first person narrator who is telling what happened whether it's in an emotional way or it's in an objective way or whatever often you would get a character's perspective on the stage through a soliloquy in a soliloquy the speaker addresses the audience or some absent third person so that we can know what the speaker is thinking even though no one else is on stage stream of consciousness then is primarily the character addressing the character's self it's like a soliloquy except the third person is the reader instead of the audience and the character is not actually saying the words as they would in a play with a soliloquy we see the inside of the character's mind and it has little structure it can jump from place to place whereas with a soliloquy you're usually going to get a logical sequence with complete sentences and beautiful words lots of literary devices whereas the human mind doesn't really work like that um primarily this technique is used in fiction instead of drama or poetry though it is used in poetry to a degree after say 1920 or so often this technique of stream of consciousness is also lacking in punctuation or traditional grammatical structure sentences can be whole pages long or more whereas what we see in more traditional works whereas it follows a traditional structure and pattern of grammar now stream of consciousness is often used interchangeably with the term interior monologue an interior monologue performs the functions of stream of consciousness in a more organized way an interior monologue is going to be more structured we see the characters thoughts in a stream of consciousness and they may still be disjointed in an interior monologue or make these associative leaps however inner monologue will maintain the syntax the punctuation often it'll be a more comfortable narration whereas stream of consciousness might make you stop and think maybe you have to reread some of the stuff because you're reading someone else's thoughts and they're not going to think like you necessarily these terms though still are used interchangeably and some consider like stream of consciousness as the genre while interior monologue is the delivery of stream of consciousness because inherently stream of consciousness is the interior speech in someone's brain so it is by definition an interior monologue now this innovation these things in writing when the whole scope of writing changes these things don't happen by 
accident these come from talented authors or new authors or interested authors purposefully seeking to change the craft wolf and her contemporaries virginia woolf we're going to talk more about her in a minute they wanted their work to reflect life they wanted authenticity they wanted to mirror the universal human experience previously characters and books had been written for the upper classes or they had been made for tragedy we talk about you have to be of noble blood you know because you have a long way to fall whereas with what wolf was trying to do what her group was trying to do is to make it accessible to make it literature relatable to everyone also we have to consider that she's writing in between the world wars world war one was unlike anything england or america or anyone had ever seen before it changed how people lived how they saw the world what they thought of as important and valuable and possible and because literature is fundamentally a reflection of the human experience literature had to change too because the human experience changed while we still maintain the same feelings and understandings that we have as a culture as a for thousands of years the way the avenue the vessel had to change because the world war changed things so much now wolfe herself um was born in 1882 post-civil war in america born in england her two most famous works are mrs dalloway and to the lighthouse she was part of the bloomsbury group which is this group of authors living in england who are working together to be innovative and as creative as possible wolfe is considered one of the most influential modern writers she was prolific in her writing even when she was suffering from these nervous episodes or breakdowns she still would write her social life would suffer but she would write and write and write she married in 1912 she had an extremely happy marriage um her husband leonard was a writer in the bloomsbury group and he they set up their own publishing house so a lot of her works were published by herself and her husband in their home she was a feminist and she believed in how to say she believed that often men's narcissism and repression were is what caused women to have these nervous episodes in the first place finally she died in 1941 by committing suicide she put rocks in her pockets got in the river and drowned herself she wrote a letter to her husband and she felt that she says she didn't want him to have to suffer because she was ill she wanted him to be able to live and to work without having to worry about her this was not her first suicide attempt and she addresses suicide in mrs dalloway now wolf's innovation comes from her experimentation with storytelling for example mrs dalloway is a woman's whole life kind of explained within the confines of a single day it starts out in the morning ends in the evening but this whole time we're getting images out of clarissa's brain there's another story that she wrote that takes place two days that are ten years apart and it tells the story in kind of the same way as mrs dalloway where where in real time we have a very small amount of time but in brain time we see memories flashbacks feelings triggered by events she wrote a book length essay discussing a woman's needs in writing fiction called a room of one's own and she uses stream of consciousness an inner monologue in all of these works that she's trying to be innovative and trying to do something different with fiction now mrs dalloway is her most well known novel and it's about 
clarissa dalloway's preparations for a party that she's giving that evening clarissa dalloway is in her mid 50s she's in society well to do very wealthy her husband's in parliament and throughout this day as she's preparing for the party she has various memories and flashbacks of her life choices she regrets or wasn't able to make because um if you watch the importance of being earnest video that i did it talks about how homosexuality is literally illegal in the late 1800s in england and so although mrs dalloway is taking place in the inter-war years that's still a taboo i mean even today to an extent homosexuality is a taboo depending on where you are what company you're in the whole novel takes place within a single day it also deals with another issue that was confronting the world at this time post-traumatic stress disorder after world war one when men came back from the war a lot of them were really messed up by what they saw what they did what they experienced and septimus warren smith in mrs dalloway is one character who is suffering from post-traumatic stress disorder now some things to look for as you read and look at wolf's style does clarissa's stream of thought match the rhythm of your own when you read clarissa's thoughts um you know i think all of life kind of has this rhythm which is sort of weird to say maybe but clarissa too has a rhythm in her thoughts look at it read it out loud feel what she's feeling also um the story is told in a third person point of view is wolf's third person recreation of clarissa's thought authentic does clarissa feel real or does she feel manufactured does septimus feel real or does he feel manufactured and whether they feel real to you or not what is it in their style in the writing that makes them feel authentic or inauthentic and also what moves does wolf make within the text so you'll see clarissa as human as real as virginia woolf saw clarissa dalloway what is she trying to show us with clarissa's thoughts i hope you've enjoyed the lecture and i hope you enjoy mrs dalloway it's one of my favorites |
english_literature_lectures | Faulkner_and_Hemingway_Biography_of_a_Literary_Rivalry.txt | from the Library of Congress in Washington DC well good afternoon I'm mark Sweeney chief of the humanities and Social Sciences Division here at the Library of Congress and I want to welcome you to another in a series of occasional lectures sponsored by HSS our division our division provides reference and research service through three reading rooms here at the Library of Congress that's the main reading room the local history and genealogy reading room and the microform reading room we invite you to avail yourself of our services we're open Monday through Friday 8:30 in the morning to 5 p.m. and we have extended evening hours on Mondays Wednesdays and Thursdays until 9:30 and just to dispel a few myths about research here at the Library of Congress you do not need permission from your congressman to use our collections you don't need to be working on the next great American novel all you need to do is want to use our collections and services for your learning so please visit us and use our services our next scheduled lecture in this series here is with Anne Peacock on the role of libraries in achieving a more inclusive information society for women and that will be next Thursday March 22nd at 1:00 p.m. in the Pickford theatre that's down on the third floor in this building today's program is being co-sponsored with the poetry and literature Center at the library the center is home to the poet laureate of the United States and produces an extensive schedule of poetry readings book programs and literary events coming up on March 26th which is the week after is the next in a series of monthly literary birthday celebrations with an evening devoted to the memoir about Tennessee Williams by William J Smith check the Center's website for more information about upcoming events webcasts past events and their new blog which I'm sure you'll enjoy so enough about the advertisements today so today's lecture was planned by our very capable reference specialist in English and American literature Abby Yochelson and she will introduce today's speaker so thank you Abby for your work in putting together today's program well I do have an unbelievably fabulous job as the English and American literature specialist in the main reading room if you all haven't seen it get your reader card and get on over there it's the most unbelievable place that I get to work 2012 marks the 50th anniversary of Faulkner's death we do prefer at the library to celebrate birth dates rather than death dates but last August we did a program with Sally Wolff from Emory University about William Faulkner she had read some family ledgers and diaries and things that were the basis for a lot of his writing so it was a sold-out program it is on the webcast so you can go on the library's website and find that Faulkner program that preceded this one and at that time I met Joseph Fruscione did I get it right Fruscione who talked about this upcoming book on Faulkner and Hemingway biography of a literary rivalry and so I'm delighted to introduce him today a friend from book club who happens to be here reminded me a couple days ago that Faulkner and Hemingway fans are rabid they are for one or the other and I thought maybe we should have wedding style ushers here today and instead of asking you bride or groom they would seat you according to which one you went for but i think we'll manage to keep the peace
otherwise I think our speaker does not take sides he comes down neutral on this issue but maybe we'll poll the audience afterwards and see how you all feel dr. Fruscione holds a BA in English and Women's Studies from the University of Delaware his ph.d is in English from George Washington University and his dissertation there focused on the modernist dialectic William Faulkner Ernest Hemingway and the anxieties of influence and rivalry he now is an adjunct lecturer of English at Georgetown University as well as an assistant professor of writing in the University Writing Program at George Washington University I looked over his list of courses that he's taught and I want to sign up for all of them you know we English majors never quite get over that and he talks about he teaches about modernism he's also taught about Herman Melville and Frederick Douglass examining race and authorship in the 19th century and also American Civil War poetry and I was particularly interested in the adapting authors courses some Shakespeare courses and their adaptations I'm not going to talk much about his book because he's here to do that but there was this pre-publication review that said this study is the best most balanced account ever produced of the artistic relationship between William Faulkner and Ernest Hemingway their careers dominate 20th century American literature and as this book shows the example and work of each writer informed and influenced that of the other both men recognized the value of the other and Fruscione goes a long way toward explicating a complicated legacy on the part of both so the way this program is going to work is he's going to speak for a little while we're going to have a question discussion session you realize that we're being webcast there's that gigantic camera to remind you so we encourage you to ask a question but by doing so you are giving permission to the Library of Congress to put you up on the web so keep that in mind and we'll turn it over now to dr.
presume I want to thank you all for coming as well many more people than I anticipated so I'm going to give another thanks to Abby for although all our hard work with doing the planning the logistics the press release and everything and I should say from the beginning to any humor or laughter and what I speak is not due to me but due to the authors in this case I am just a messenger of I've worked with a lot of these letters for eight to nine years or so and they still make me laugh as I go through particularly the Hemingway who wants so with this in mind I want us to start in August 1918 this is when a young recently wounded Hemingway writes to his sister Marceline from Italy why a bright beam of an August moon have you not written me is it that you love me not or is it but neglect if but the girls of our village could see me in my dress uniform I am of a great fear that the man would be wifeless is the performer always however for them to appreciate my scars it would be necessary for me to wear no pants ie trousers where else to have flaps sewn the length of the knees that they might be unbuttoned at will to show the marks of Valor which may be many in various aha I will wear nothing but my tank suit then all then we'll all be reviewing Mizpah right among other things this letter reveals a kind of playful repose a month after Hemingway was seriously injured it also anticipates some definitive aspects of the nascent Hemingway persona the war the wounding the romance of the injured veteran the valor both real and constructed of a combat veteran the use of nicknames in linguistic play and in this case erroneous French I think he meant an essay pas but that's a lot of his work does that and the eager performance of masculinity right he's the decorated soldier he's the Casanova he's the ladies man right though he was genuinely wounded by an Austrian mortar in July 1918 Hemingway exaggerated greatly when returning home he noted among other inventions that he had served with an elite division of the Italian military and he was actually a Red Cross ambulance driver among other non-combat duties around the same time William Faulkner whoa similar fictions about his own war experiences his training with Britain's Royal Air Force notwithstanding Faulkner returned to Mississippi in December 1918 in an officer's uniform with an affected limp and a host of invented stories although he finished only as a cadet and never left Canada right performance clearly both authors felt that being a post-war figure with growing artistic aspirations implied among other things certain kinds of creativity and masculinity this of course was well before each new anything of the other as a fellow modernist and rival author and their first mutual awareness seems to have been the mid to late 1920s yet this 1918 Hemingway letter predicts a kind of epistolary persona that would inflect how they rivaled and influenced each other from the early 1930s through their deaths in the early 1960s among excuse me among other issues the rich correspondence reveals the author's a fact or performance their strong sense of masculinity competitive awareness of each other self-confidence an occasional self revelation one late Hemingway letter for instance recasts the Hail Mary prayer to criticize the religious tone of Faulkner's 1954 novel affable I'll refrain from reading that one out loud right no blasphemy for the webcast or another shows him writing a mock Faulkner passage about a mutual acquaintance The Times Book Review critic Harvey 
bright anyway also once referred to Faulkner as old corn drinking mellifluous an attempt to disparage him as both pro stylist and alcoholic writer to say nothing of Hemingway's own trouble with alcoholism yes despite their literary successes both authors felt the pool of the others example and professional standing throughout their mature careers their correspondence among other venues bears this out richly and sometimes humorously as we shall see and they're rich correspondence has been a valuable element of this project which began as my PhD dissertation in 2001 and is now developed into book form and these letters I hope form what I want to be one of the books richest contributions a lot of the letters I've examined particularly the Hemingway letters remain unpublished at this they will be published eventually through a major project housed by Cambridge University Press currently they are in the kennedy Presidential Library in Boston and this to me demonstrates the richness of archival manuscript study some of my recent work has been with unpublished correspondence papers documents newspapers that sort of thing and in addition to the research for the book which was not done at the Library of Congress I've studied with some of the Library of Congress as Ralph Ellison papers just on the first floor of this very building in a just-released book essay looking at Ralph Ellison's complex ways of dealing with Hemingway's influence I looked at some of his unfinished manuscripts his drafts and some of the letters that he and his wife exchanged with Hemingway's widow Mary when both were living in New York and 60s and 70s this seems a particular advantage of working with early and mid 20th century authors given their rich correspondence and typewritten or handwritten manuscripts of course well before email right in Hemingway's case his letter writing style is richly and highly analyzable there are often Corrections or strikethrough sometimes he continues in the margin way up and you have to kind of rotate the page he also you know there's sometimes he would leave things out he would cross over things of course misspelling things getting foreign languages wrong I mean they're very rich they often underscore the thematic content of his letters but they also reveal some of his professional anxieties struggles now I just want to give you a brief overview of the work I try to do in the book hopefully successfully I look at a variety of texts and works spanning the mid-1920s and through the early 60s as far as their fiction their novels and stories Faulkner tended to use those as more of a venue to discuss Hemingway to refer to Hemingway his novels pile-on requiem for a nun and the mansion have characters making conversational references to Hemingway a few of his unfinished screenplays also have the same conversational reference almost always to For Whom the Bell Tolls his story is story collection big woods it's a collection of hunting stories has a story in it called race at morning which features an aged hunter named mr. 
Ernest right it's from the late 50s at that point Hemingway was very well known his 1939 book the wild palms makes several Hemingway references to the Sun Also Rises a farewell to arms to the story Hills like white elephants he also a character also makes a Hemingway --vs pun in addition to another character the narrator's references to matador and aficionados right mean in this context I mentioned Matador aficionados is to call attention to Hemingway and his the persona of the bullfighter and so forth right most of what we'll talk about today is the correspondence particularly the period 1947 to 1955 this is really the richest period of their letters that it encompasses the ranking of his contemporaries Faulkner gave in 1947 the Nobel Prize the question of reviewing the Old Man and the sea and we'll we'll look a little bit from an episode of 1947 where they wrote letters to one another so far those are the only letters I've located where they are corresponding with each other and we will get to this but the sort of the infamous or famous episode depending on your own you're leaning right is the ranking of his contemporaries that Faulkner gave presumably off the cuff in April 1947 at the University of Mississippi he places himself second to Thomas Wolfe he puts Hemingway fourth and he also managed on John Dos Passos and John's and this pretty much stayed with him through his travels to Japan his professorship at the University of Virginia and his his travels to New York and also the talk he gave at West Point in April 1962 I won't talk about these today but their Nobel Prize addresses also shows strong awareness and cross reference to one another but of course in a world literary context Hemingway's major bullfighting works death in the afternoon from 1932 and the dangerous summer which she worked on in the late 50s layer bullfighting writing and this Hemingway persona the firt in the first one death in the afternoon he he has a narrator who was a very Hemingway like figure who essentially makes a crack at Faulkner for being such a prolific writer and he says in there you know but he's prolific too by the time you get them ordered there'll be new ones out right meaning that he doesn't edit well enough and I will do talk a little bit as well about how the authors framed or adapted one another Hemingway edited a collection in 1942 called men at war he included a Faulkner story turn about about what takes place during World War one but as a gesture of one-upmanship he also included his own superior material namely from a farewell to arms and For Whom the Bell Tolls Faulkner also had a hand in adapting Hemingway's work for film he worked with the director Howard Hawks in 1954 to turn to haves and have-nots into the film the one of course with Bogart and Bacall right and he did eventually give a very laudatory review of the old man on the sea in 1952 so analysis of these types of works in these types of texts in addition to relevant biographical and historical context has helped me understand this complex and multifaceted literary relationship which I read is more than a simple rivalry of course my book subtitle notwithstanding and I do want to be clear that even though mine is the first book about these two I'm not the first scholar to talk about points of conflict and contact between the two authors the my hope in it in it is that I give as full as possible a portrait of the narrative of their rivalry and influence so with that in mind now I want us to concentrate on a few particular moments 
in the narrative of the authors' relationship; specifically we can consider letters from the 1940s and 50s. The early 50s marked the interim between their Nobel Prize addresses: Faulkner won his in December 1950, Hemingway his in 1954. Much of Hemingway's correspondence from this period reveals a strong self-pitying element, right — this kind of goes against the persona or the image of the tough, worldly, active writer; he's almost narcissistic and self-pitying in these letters. He was preoccupied with, among other things, particular aspects he associated with Faulkner: the 1947 ranking, which we'll get to; the Nobel Prize — excuse me — which he hoped would portend a loss of creativity, he wanted to see it as a swan song for any writer who won it; the religious themes of some late Faulkner works; and Faulkner's alcoholism, which Hemingway felt weakened his writing — this is where the old corn-drinking Mellifluous term comes in, right. Despite some gestures of respect and camaraderie in this correspondence, language of competition and masculine conflict can be said to dominate these examples, such as their horse racing, boxing and baseball metaphors, or in one case a duel between the authors, which we will get to, right. A particularly complex episode stems from a relatively simple request that the critic Harvey Breit made of Faulkner: they met in New York in 1952 after Faulkner returned from Europe, and Breit asked him to review The Old Man and the Sea for The Times Book Review. Faulkner however rejected the offer, but he took the page proofs back to Mississippi; when he returned to Oxford he wrote a statement about Hemingway and sent it to his editor at Random House. He went on to give praise, albeit reserved praise, of such Hemingway works as A Farewell to Arms, the story collection Men Without Women, and For Whom the Bell Tolls, but he did so from a somewhat loftier place, as if he, then a recent Nobel laureate, was in a position to defend a writer dealing with creative struggles and with tepid reviews of his novel Across the River and into the Trees. Albeit with good intentions, i.e. creating dialogue between these writers, Harvey Breit forwarded Faulkner's comments to Hemingway, who predictably overreacted, as we see from two letters in late June — first from June 27th, and we have an image of this; it's a long typewritten letter, this is just an excerpt from it. Right before this passage he says As I Lay Dying stands up the best, maybe it and parts of Pylon, and then we continue — and if it's easier to read, there's a typed version, this is probably a little clearer: about eight altogether of the stories stand up; that longest sentence in Requiem for a Nun doesn't stand up because it isn't a true sentence; if you just omitted the periods at the end of various sentences it is damn good, but it is not one long sentence; anyway, if you have to write the longest sentence in the world to give a book distinction, the next thing you should hire Bill Veeck and use midgets — there's your baseball metaphor, right — I remember writing a long sentence once in Green Hills of Africa about the Gulf Stream, but I remember how it just got started and went on and where I ended it. I suppose what Bill meant — that I had no courage to take a chance in writing — would be that I would not write a whole book consisting of one sentence; actually I have too much respect for the English language; it is a wonderful thing to be able to work with; sometimes we have to perform certain
operations on it that may have been good for it or bad for it, but I respect it and myself too much to operate on it or anything else while drunk — okay, this is typical. Continuing to use Harvey Breit as a critical sounding board, Hemingway wrote again on June 29th, noting — and here's another image of it, oops, sorry, back up — he is a good writer when he is good and could be better than anyone if he knew how to finish a book and didn't get that old heat prostration like honest Sugar Ray at the end. Do we know what that refers to? Anyone know what that refers to? Sugar Ray Robinson, right, a famous fight, June 25th, 1952, in Yankee Stadium; the temperature was around 100 degrees and he lost to Joey Maxim in the 14th round — this had just happened four days before. Another sports metaphor; there's another letter where he compares Faulkner to a tiring pitcher who's been in for one too many innings, right. Then to continue: now I enjoy reading him when he is good but always feel like hell that he is not better; I wish him luck and he needs it, because he has the one great and incurable defect — you can't reread him; when you reread him — this is, I wish this were my humor, I wish I could take credit for it — when you reread him you are conscious all the time of how he fooled you the first time. One thing I think Hemingway has in mind here is the novel Sanctuary, which has a particularly complex and controversial episode with a corncob — that's another one, Corncob, another nickname Hemingway used to use for him, right. In truly good writing, no matter how many times you read it, you do not know how it is done; Bill had some of this at one time but it is long gone. There is one point later in this letter where he just writes: criticism class is out, right. Among other things, such sports metaphors — namely Bill Veeck, excuse me, as a minor-league baseball owner and manager, and Sugar Ray Robinson — bespeak Hemingway's gender-based sense of competitiveness as well as his view of Faulkner as a threat to his professional ego, and of course the ego was much more insecure and anxious than the image of the big-game hunter, the big-game fisherman, the world traveler, the boxer. The authors had anticipated this kind of writing-as-competition model in the 1940s, in part through a topic common to their own interests and some of their work: horse racing. In the fall of 1945 Hemingway corresponded with the critic Malcolm Cowley about a number of literary matters, including Cowley's work on what became The Portable Faulkner anthology in 1946. As Hemingway wrote to Cowley on October sixteenth: I had no idea Faulkner was in that bad shape and very happy you are putting together the portable of him — this is when Faulkner's reputation was struggling, he was struggling very much — he has the most talent of anybody and he just needs a sort of conscience that isn't there; and then later on: but he will write absolutely perfectly straight and then go on and on and not be able to end it; all right, I wish the Christ I owned him like you'd own a horse and train him like a horse and race him like a horse, knowing how beautifully he can write, and as simple and as complicated as autumn or as spring — consider that a compliment. Now, Faulkner's letter to Robert Linscott at Random House regarding The Portable Faulkner: the idea had been floated there that Hemingway write the collection's preface; both Malcolm Cowley and Faulkner objected — all right, not surprising. Cowley thought it would be, in his words, in dubious taste; Faulkner gave his own reasons on March 22nd,
1946: I am opposed to asking Hemingway to write the preface; it seems to me in bad taste to ask him to write a preface to my stuff; it's like asking one racehorse in the middle of a race to broadcast a blurb on another horse in the same running field; a preface should be done by a preface writer, not a fictioneer, certainly not by one man on another in his own limited field; this sort of mutual back-scratching reduces novelists and poets to the status of a kind of eunuch, capon, pampered creatures in some spiritual Vanderbilt stables. There is the masculinity element, right — he doesn't want to be mindless, he doesn't want to be a symbolic eunuch: mindless, possessing nothing save the ability and willingness to run their hearts out at the drop of a Vanderbilt hat; the woods are full of people who like to make a nickel expressing opinions on the work of novelists, can't you get one of them? Clearly competition, yes — whether seeing themselves as trainers or as racing thoroughbreds who, notably, are not mindless and not subservient, the writers further juxtapose their sense of writing and manhood as competition; each sees himself as autonomous and superior to the other. Hemingway wants to enact a trainer or editor role, whereas Faulkner wants to outrace the field, which arguably he did, and he would also attempt to outrace the field about a year later. And this brings us to mid-1947, which is perhaps the defining moment in the authors' relationship. In this case Faulkner sounded more confrontational than he intended to sound, or perhaps more so than he wanted to seem, when answering questions at a University of Mississippi creative writing class in April 1947. He was asked to rank his contemporaries; after being asked by a student to include himself in the list, he answered: one, Thomas Wolfe — he had much courage and wrote as if he didn't have long to live; two, William Faulkner; three, John Dos Passos; four, Ernest Hemingway — he has no courage, has never crawled out on a limb, he has never been known to use a word that might cause the reader to check with a dictionary to see if it is properly used, all right; five, John Steinbeck — at one time I had great hopes for him, now I don't know. The session was part of a series of talks that he gave at the university; despite his agreement with the English department, faculty were present, students were allowed to take notes, and his comments were not restricted to the classroom — instead the university's press release for these sessions was picked up by the New York Herald Tribune, which Hemingway received in Cuba in May. And when Hemingway received this press release he was going through a lot of emotional and personal struggles: his longtime editor Max Perkins had just died, his middle son Patrick had a very bad case of the flu — I'm sorry, had been in an accident — and his wife Mary had a very bad case of the flu, and he was sort of moody enough anyway, and to hear and read something like this, especially when courage enters the picture, right, he would not have been happy. So the ostensibly private remark was private no longer, and to unpack some of this rich episode I want to look at one of the two letters that each wrote to the other in the wake of this incident. The first letter that Faulkner wrote to Hemingway and to General Charles Lanham, one of Hemingway's war friends, has already been published in Joseph Blotner's Selected Letters collection; in that he's largely apologetic for what has happened. Hemingway's second letter to Faulkner, which was from July 23rd, has been published in Carlos
Baker's Selected Letters collection. Here I want us to consider two previously unpublished letters, both of which are housed in the Hemingway Collection at the Kennedy Library; they will eventually be published, but probably not for some time. As we might guess, Hemingway's initial reactions to Faulkner's ranking and mention of courage entailed aggression; part of his strategy was to mobilize his war friend General Lanham to write Faulkner and attest to Hemingway's battlefield composure during World War Two. Lanham also seems to have sent Faulkner a copy of Hemingway's Bronze Star citation, which he received from the Army in mid-June 1947, and I'll just read a brief excerpt from it — this is also from the Hemingway Collection: Hemingway displayed a broad familiarity with modern military science, interpreting and evaluating the campaigns and operations of friendly and enemy forces, circulating freely under fire in combat areas to obtain an accurate picture of conditions; through his talent of expression Mr. Hemingway enabled readers to obtain a vivid picture of the difficulties and triumphs of the frontline soldier and his organization in combat. Right — he was not in Europe in a combat role per se, but he was very much involved in the action, and you have to imagine Faulkner getting this, as a writer who wanted to go to the First World War, didn't because the war ended before his training was done, but then wove fictions about it — he continued to lie about it through most of his life. Faulkner, though, in the first letter he wrote was apologetic and contrite; he noted that he intended no offense and that his remarks were incompletely printed, and one sees similar civility in Hemingway's July 16, 1947 reply, though one also sees a particularly uncivil suggestion. Dear Bill, he opens — so this is the whole letter right here, and I'm going to read the first, third and fifth paragraphs; right there, I have another version if it's easier to read: the hell with the whole thing; I'm sorry that you were misquoted and that General Buck Lanham went to the trouble of writing the letter on the misquote and that you should have to write to me and to Buck; thank you very much for doing so. The sorry is slightly disingenuous here, because Hemingway was the one who mobilized Lanham to write to Faulkner and prove the battlefield composure, the battlefield courage. Now the third paragraph: please know that none of it means a damn to me now that we know what it was about — this might be easier to read — I hope would fight anytime for your right to call me any sort of son of a bitch as a writer even though might disagree; the same way would be glad to shoot it out over any personal points of honor, only I hope I'd shoot to miss you on account of wanting to keep you as a writer; actually I know I would. And then this is the final paragraph: I hope you're well and that your family are and that you're working good; I'd like to get together with you sometime and drink a little and talk; there are very few of us left. And this is typical of Hemingway's letters where Faulkner is concerned, namely in its moodiness, right — the opening and closing paragraphs have civility, there's even camaraderie with the "us," you know, there are very few of us left; in both the first and the final paragraph he says that they should get together with Lanham and drink and talk. The middle paragraph, though — I think this should come up, yes — the middle paragraph is where there's the masculinity, the conflict; this is the Hemingway code right here, namely glad to
shoot it out over any, you know, points of honor, right. And you might notice from this middle paragraph — I hope it's somewhat big enough to see, right around here — something I think is significant about this letter is what's not here; that is, no strikethroughs, no emendations, nothing's crossed out, especially "only I hope I'd shoot to miss you," right, which to me suggests that the violent symbolism is somewhat intentional; it's also very typical of the Hemingway code, right. So now, three days later, we go to Faulkner's reply; this is July 19, 1947. He does reply with a note of civility, but he doesn't retract the ranking. Dear Brother H, it starts, right — pretty lengthy letter; I'm going to end up reading parts of the first, third and fourth paragraphs. You see the handwritten Bill F at the bottom; this is housed at the Kennedy Library with the Hemingway Collection. Let me skip to this so it's a little easier to read, I hope: thank you for your letter; I feel much better, not completely all right; I owed Lanham an apology and I hope he accepted it, but the bloke I'm still eating at is Faulkner; I cringe a little at my own name in printed gossip; I hate like hell to have flung any other man's into it; damn stupid business, one of those trivial things you throw off just talking, a nebulous idea of no value anyway that you test by saying it. Then the third paragraph — or the second paragraph, rather: take a thing like Madame Bovary — the book, not the woman — or your Alpine Idyll, or that one of Joyce's about the woman playing the piano, i.e. his story The Dead — he also mentions Ring Lardner within this paragraph — it's finished, complete, all the trash hacked off and thrown away, three dimensions and solid like a block of ice or marble, nothing more that even God could do to it; it's hard, durable, the same anywhere in fluid time; you can write another as hard and as durable if you are good enough, but you can't beat it. And the fourth — third, rather — paragraph, which I'm not going to read here: as Faulkner is outlining this aesthetic vision of his, he also mentions Dickens and Henry Fielding as part of what he was trying to get at in the talk and in these comments. Excuse me — in the last paragraph: I wish I'd said it that way, but even then it would have been misquoted, probably, as most sayings in the first place usually are; but what I wish most is I'd never said it at all, or that I could forget having done so, which perhaps I could and would do if it had not been about a first-rate man. Right, so we see here different sorts of performance, or different sources of self-image, right: Faulkner is more civil, he's more conciliatory, he doesn't respond to the idea of shooting it out, right — he seems to think that was sort of beneath him, perhaps, perhaps, right — but you also don't see here any kind of retraction; it seems to have kind of gotten out of his hands a little bit. After Faulkner learned of the release of his apparent literary gossip, he was anxious to clarify what he had said, or what he had meant to say, or what he wanted to appear to have said about Hemingway; he had to revisit this episode in New York, Japan, Virginia and at West Point over the next decade. He attempts clarification and amelioration here by putting Hemingway on a par with Joyce, Flaubert and others. His different attitudes, though, indicate a split in his persona: his reserved side wanted to avoid open confrontation with another writer, particularly one so truculent as Hemingway; his private, demonic side, though, may have wanted to disparage
Hemingway's literary reputation and elevate his own. I should note here that Faulkner's placement of himself second to Thomas Wolfe was somewhat misleading, because Wolfe died in 1938; so at the time Faulkner gives the ranking, the implication is that he himself is the most important living contemporary author, right. So with this in mind, he did feel himself to be the better writer, demonstrated by subsequent commentary and by his never fully retracting the ranking, or unequivocally retracting it, right. Faulkner shared Hemingway's sense of competitiveness, but not his ways of expressing or performing it. And to close, I want to move ahead a decade after the exchange of these letters and guarded praise; we can consider something from a late, unfinished Hemingway work, The Dangerous Summer, written when he was struggling both creatively and personally in the late 1950s. In this nonfiction text about two rival matadors, who were also brothers-in-law, competing during the summer of 1959, Hemingway viewed his writing life through the lens of masculine competition yet again. As with most of his nonfiction, the Hemingway persona presents and even dominates the story; at one point he observes: bullfighting is worthless without rivalry, but with two great bullfighters it becomes a deadly rivalry, because when one does something, and can do it regularly, that no one else can do, and it is not a trick but a deadly dangerous performance only made possible by perfect nerves, judgment, courage and art, and this one increases its deadliness steadily, then the other, if he has any temporary failure of nerves or of judgment, will be gravely wounded or killed if he tries to equal or surpass it; he will have to resort to tricks, and when the public learns to tell the tricks from the true thing he will be beaten in the rivalry and will be very lucky if he is still alive or in the business. Substituting the words writing and writers for bullfighting and bullfighters here reveals much about Hemingway's framing of his profession — that's my symbolic gesture I get to do as a literary scholar: imagine the above passage starting, writing is worthless without rivalry, but with two great writers it becomes a deadly rivalry, and so on. Figuratively speaking, Hemingway had made this move throughout his career, that is to say framing writing as a contest, though the wounded and killed in this sense would of course be symbolic, the duel notwithstanding. For him, competition between various sorts of artists held a certain creative value, in that such one-upmanship could lead to more innovation and more chance-taking. Early and mid-career he spoke in various letters of taking on, or getting in the ring with — his term — such authors as Melville, Dostoevsky, Cervantes, Maupassant, Henry James and others. Faulkner was also competitive and driven, yet in a more understated way, perhaps because it came from a healthier professional ego. That Faulkner was more successful in the 1950s — with the Nobel Prize, a Pulitzer Prize and two National Book Awards, among other honors — surely helped his professional self-image; with the exception of Faulkner's second Pulitzer Prize, which was awarded posthumously for his 1962 novel The Reivers, Hemingway knew that Faulkner received these late awards, which further weakened an already struggling artist. Lastly, did they ever meet? Each author made reference to meeting the other a handful of times, but no direct biographical evidence has yet been uncovered concerning when, where and under what circumstances. I just have a few remarks from each.
Based on the years of certain remarks, this meeting seems to have been after November 1931 but before July 1952. In an interview with the New York Herald Tribune — this is after Faulkner publishes The Sound and the Fury and As I Lay Dying; he's very much a writer on the rise — an excerpt from this interview: you know, he admires the work of Ernest Hemingway, whom he has never met; I think he's the best we got. And then the Hemingway letter — this is one of a series of letters Hemingway wrote in the early 50s, when he was the only non-Nobel laureate between them; this is where there's the competitiveness and the insecurity: I never met him but once, to shake hands, and never to talk. In another letter Hemingway mentioned sending Faulkner a congratulatory telegram for the Nobel Prize, and he says, you know, I cabled him how pleased I was and he would not answer, right — that's very much the sort of self-pitying element here. And then one of Faulkner's last public appearances, this was at West Point in April 1962: he noted to the cadets that the last time I saw him he was a sick man, right. So with this in mind, in September 1947, though, they nearly met: as Hemingway scholar H. R. Stoneback has written in a 1999 essay, Hemingway and his Key West friend Toby Bruce drove that month from Florida to Idaho; en route Hemingway suggested that they stop in Oxford, Mississippi. The meeting was not to be, though: Faulkner was apparently being honored by Oxford on the very day — this is right around Faulkner's 50th birthday — and the hyper-competitive Hemingway clearly would have kept his distance. As of now their meeting seems to be a literary moment waiting to be uncovered. Nevertheless, the authors' nearly absent social relationship is largely immaterial when compared to their very present and very complex literary relationship, one that I hope can enrich what we already know of these two nuanced and multifaceted American writers. Thank you. Are you going to run things? Yep, okay — I'm on spring break, I have to flip into teacher mode here. Yes, please. Yes, mm-hmm, okay, mm-hmm. At this point I had none that I've discovered, but for the comments about the appendix, right, right, right. I lean toward the second; I mean, Cowley certainly in editing had his own particular sort of agenda, right, and some other scholars, among them David Earle, who teaches in Florida, have discussed how Cowley's comment in the introduction — that Faulkner's works were all out of print in the early 1940s — is a little bit misleading; they were out of print in hardcover, but there were various paperback and other versions of them. So I lean a little more toward the second, where Cowley did a little bit of his own reassembling — yes, of course, to bring the spotlight back to himself. The Portable, in some ways, I mean, I think it tries to make Faulkner a little more accessible to a lot of his readers, because we know, I mean, that's a challenge of his work, and I've seen that when I teach Faulkner novels in classes, right — sometimes I get a little bit of an eye roll or a grunt or something, because you see him using so many different names and families. There's even a map in The Portable Faulkner that Faulkner did himself, right, with this apocryphal county, Yoknapatawpha, he invented a lot of this — so I think it improves it a little bit; I mean, it certainly helped bring his reputation back. Okay, yes — the he-man, right, yeah. I mean, I could do an entire other presentation on just images of him, right, but he's — I mean, one of the things I
really noticed, especially with his unpublished letters, is how those show the opposite of that: they're more tender, they're more insecure, self-effacing, right, especially with his third and fourth wives, Martha Gellhorn and then Mary Hemingway; when they were away he wrote these very emotional and self-pitying letters about them leaving him alone and so forth. That's one of the reasons that his marriage to Martha Gellhorn through the 30s and early 40s was so tempestuous, because she was not the type to sit idly by and be the wife of the writer; she was an accomplished journalist, she was on one of the landing craft on D-Day — Hemingway was as well, but she sort of got there first — and that's something he was very much bothered by, and that's something I talk about a lot: it's those who tend to trumpet gender the most who are the ones most insecure about it. I mean, I know that's a trickier thing too with readers of Hemingway, and even in some ways Faulkner as well — he was much more understated and quiet, and he was no less devoted a hunter and outdoorsman than Hemingway was, but you don't have the magazine pictorials of him in Mississippi, right, he just tended to do it; whereas with Hemingway, the images of him in Africa and the big-game fishing and the self-aggrandizing journalism, right. So, you have questions? Yeah — in brief, his widow Mary Hemingway was close friends with Jackie Kennedy, right, and at the time of Hemingway's suicide in July 1961 his reputation was very much — not quite nil, but it was really struggling — and, I hope I have all my facts right, only some of them apparently, she had shopped that around to various other repositories and no one seemed to want it, and that's about how they wound up at the Kennedy Library; that's where most of his things are. There are some here in the Library of Congress in the Archibald MacLeish Papers, there are some at the Harry Ransom Center at the University of Texas, some are at Princeton University Library along with F. Scott Fitzgerald's letters as well. But it's a wonderful facility, because it's a nice, kind of small, very sunny room right off to the side of the museum, and they have photocopies of all the relevant material — so unlike here, where you're looking at originals, there you're looking at photocopies, but you can just go to the shelf and pick them up. So does that answer? I hope I got all my facts right; if not, someone correct me. Yes, please. Mm-hmm. A little bit — I'm going to try to back up, if you'll excuse me: one of the parts I didn't read from the typed Hemingway letter, and I hope I still have it, this fourth paragraph here — he mentions Wolfe and he says, you know, you're so much a better writer than Wolfe that I can't understand how you'd be fooled by the bulk of his stuff; and he talks about Max Perkins as the editor of both Wolfe and Hemingway, and how it was really Max Perkins who made Thomas Wolfe a good or accessible read — Wolfe wrote so much, you know the stories of him showing up with manuscripts of his works just in boxes, right, with so much material, because he just seemed to write and write. And Faulkner — he did come back to this, but as some other scholars have noted as well, one of the differences in his ranking with the Hemingway selection is that he gave the annotation of "he has no courage, he's never crawled out on a limb," right, and so that's something that kind of haunted him, that always followed him, but he always was trying
to sort of deflect it by saying, well, it was taken out of context — but he never said I was wrong, because there was always the implication that he's the better, more accomplished writer. Does that answer? Okay, sure. Hmm — Beatrice was Hemingway's first editor, and then she — also, I could add, most of my work with them is later, because Boni and Liveright was the first house that published Hemingway's book In Our Time, and then famously Hemingway wrote the parody of Sherwood Anderson, The Torrents of Spring, in part to break the contract, because Sherwood Anderson was the marquee author for Boni and Liveright; and Boni and Liveright also published Faulkner's first novel, Soldiers' Pay, in 1926, but I haven't worked as much with that. That's, I think, the earlier period, where they were somewhat aware of each other but not in really the rich, hyper-competitive or, as we saw, humorous or entertaining manner, right. So does that answer your question? I actually don't know if she edited both; it's possible — if they were both at that point young writers trying to publish their first work, they wouldn't have had, obviously, the pull or the influence that they would have had later, because Hemingway's reason for breaking the Boni and Liveright contract in the mid-20s was to go to Scribner's to be with F. Scott Fitzgerald, Thomas Wolfe and Max Perkins as the editor; then Faulkner wound up at Random House doing most of his works. Yes — I think what he meant by that was a kind of editorial conscience, in the sense that it's in that same letter where he says, you know, it seems as if he could never throw away the worthless; remember, Hemingway was so strict with stripping everything down and writing one true sentence, right, and that's why I think with these two authors their styles are so diametrically different, right. So the conscience — what he's mostly referring to there was Faulkner basically writing it and keeping it as it was and not paring it down; and in one of the letters that Cowley wrote back after the horse-racing letter, he said — he wrote to Faulkner and said — you know, Hemingway would be a good editor, he'd be a good trainer, because he knows how to say what he feels and to strip it down. And I mean, you could read some of their signature works and even without an author name tag you kind of know who it is, just because their writing styles were so different. So, good, thank you. Oh, please, yes. Ironically, with Hemingway my favorite works of his are his short stories — the first collection, In Our Time, and then there was Men Without Women and Winner Take Nothing — and to me his style works best with the stripped, spare short story. I do like For Whom the Bell Tolls, though; it's probably my favorite Hemingway novel, I think in part because it's more Faulknerian than anything he had done, because as a reader I lean more toward Faulkner. With Faulkner it is Go Down, Moses, which he published in 1942 as a cycle of seven stories; it has, among other things, his famous story The Bear. So what are yours? Yes — that's a wonderful one as well; it's one of the Hemingway novels that Faulkner owned, right — they did own some of each other's work, and I haven't yet been able to track down and study their copies of each other's works; I would love to see marginalia, I don't know if it's there, but I want to find that out. Yes, please. Yes — to say it all, yes, put it all between one cap and one period; he's too difficult, right, right,
right — good, wrong — good. I do try to slow it down a little bit, sort of embracing the challenge and saying you're not going to read this quickly; this is not something you're gleaning only from SparkNotes, right. Most recently I actually taught As I Lay Dying very successfully in an upper-level course at Georgetown, and that I think was successful in part because it was a wonderful group of students, and in another part as well because that's a more accessible work — you don't have as many of the two-to-three-page sentences going there. But I've had to fill in before and teach Absalom, Absalom!, which is one of his best novels, but you do have the long sentences, and I try to slow it down; I say, of course this is not easy, but most good things of quality are not simple, right. And a lot of it is reading it out loud, and then it's easier to get a sense of what he's doing and how you have these — everything's on the head of the pin, right — the flashbacks, the flashbacks within flashbacks, and the movement in time. But that's something I really enjoy about him as a reader, that's something I enjoy about him. Does that answer? Yes, absolutely — and Hemingway, I mean, I tell this to the students: Hemingway brings his own challenges, in the sense of the extreme subtlety and the implicitness, right, where it might seem to be one thing on the surface, but when you read it, reread it and reread it, you notice a lot of the subtlety, a lot of what's not there, a lot of what's implied. Mm-hmm, yep. Oh, clearly, even when he wasn't always intending to be, right — the kind of — mm-hmm, right, yeah, that's what he tries to work with. But then it's funny — some of his nonfiction, like the articles he did for Esquire in the 30s, that's where the celebrity kind of grew out of; he's always talking about himself as an expert writer, editor, critic, fisherman, hunter — I mean, you have a lot of these — that's where the Hemingway he-man image sort of comes out, all of those, right. Yeah, and it's sort of nicely written — same thing with Death in the Afternoon right before that as well. So, good — was there another? No? Anyone else? Oh, yes, please. More so Faulkner — he struggled a little bit more economically; Hemingway was always the more celebrated and the wealthier writer, in part because so many of his books were sold to Hollywood to make films, and he also commanded large fees from Esquire and other periodicals to publish his work. And that's something that in the book I talk about — whether I think there might have been some moments or some elements in which Faulkner just felt slightly insecure financially or economically. I mean, one thing I enjoyed working with too was looking at the screenplay for To Have and Have Not that Faulkner helped adapt, and for a lot of people, I think, the film To Have and Have Not with Bogart and Bacall is better and more accessible than the novel, in part because a lot of Hollywood changes are made in how it's set up. And Hemingway — you know, all of his major novels were turned into films, and he hobnobbed with Gary Cooper, Marlene Dietrich and others, Joe DiMaggio and others as well in New York. So, good, thank you. Yes — mm-hmm, right — I haven't, I wish I could go there, but I haven't, right, mm-hmm, mm-hmm. That's another book, I think, right. I think a lot of it would have been artistic chances, right, because — and it's in the comments I wasn't able to read that he made in New York and Japan and Charlottesville, Virginia — he often
said, you know — if I paraphrase — essentially that he did one thing very well, the spare, stripped-down dialogue and the clear focus, but didn't try to do more than that. A lot of his later comments have certain key words — pattern, method are some of the other ones — where he says he tried, but at one point he says he didn't try for the impossible, and he actually referred to both himself and Wolfe as failures in the sense that they tried to do so much more than one person could do and felt they didn't reach it; but he feels, he says, to me the failure is better — failure is more of a success than actually reaching it. But I would love to be able to go to Hong Kong and look at some of the Faulkner papers, right — well, that's what I noticed; I mean, a lot of his papers are at the University of Virginia as well. So does that answer? Okay, thank you. This has been a presentation of the Library of Congress. Visit us at loc.gov |
english_literature_lectures | Charles_Dickens_Part_2_of_3.txt | [Music] Probably Dickens's favorite, the pretty city of Rochester held a special place in the novelist's imagery; he described it and referred to it often in The Pickwick Papers, David Copperfield and Great Expectations, but the city appears elsewhere in his work under pseudonyms: Rochester becomes Cloisterham, Mudfog, Dullborough or Winglebury. In Rochester the Royal Victoria and Bull pub was one of numerous pubs that Dickens visited, often in search of interesting characters. This pub here started in medieval times, the Middle Ages, but reached its importance in the late Georgian period, climaxing basically — which is where it gets its name from, the Victoria and Bull. It took its name from Victoria; she stayed here one night. The reason she stayed here is because it was a coaching inn; it was a coaching inn because it took Royal Mail coaches coming from London to Dover, and they stayed, watered, changed horses overnight here, as opposed to other pubs in this street, which is a very long street — it stretches from here to Chatham, and at the time, I think, oh god, there could have been a hundred pubs in this one street catering to different clientele. You can walk towards a hostelry and have a meal, a drink, and it's always there. So Dickens, he used to leave his home at Gad's Hill, travel overland, walking, and visit various pubs on the way — as you do: you walk from your home until lunchtime and stop maybe at the Three Crewes, which is very near Gad's Hill but a good walk, an enjoyable walk, and have a drink; then he would stagger into Cobham, which was maybe a mile's walk, to the Leather Bottle, where he would stay for a further three or four weeks writing books and drinking, and then walk back — no wonder his wife divorced him, actually, because a walk to the pub then may have taken six or seven [Music] weeks. I think Dickens came from a lifestyle where his father was a bankrupt, and he had to work very, very hard; when he saw his house, his future house at Gad's Hill, it was a dream — he wished one day to own this beautiful house, and he said, one day I will own this property. It was just a wish; I don't think he ever realized he would actually achieve that dream, like most of us, but he did. A few kilometres from Rochester, the villa at Gad's Hill held a special place in the author's memory. When I was a child we would often walk, my father and I, from Chatham to here, and I was simply fascinated by this villa that represented wealth, elegance and success; in short, it was everything my family did not have. In The Uncommercial Traveller I wrote word for word what my father told me one day when we had come to Gad's Hill: Charles, if you persevere and you work well and very hard, you can perhaps live here one day. Forty years later Dickens finally realized his childhood dream: in 1857 he became the owner of Gad's Hill Place, this same house that symbolized success in the eyes of his [Music] father [Music] The small coastal town of Broadstairs became the vacation spot for the Dickens family; from 1839 on they spent each summer vacation at this [Music] resort. Faithful to custom, Dickens spent his evenings in a local pub; the Tartar Frigate was the most comfortable little sailors' bar along the coast [Music] After my long walks I enjoyed sitting and listening to the stories of Broadstairs fishermen, and I wrote about them in a piece called Our English Watering-Place; it amused me to immortalize them. Oh yes, one can say that I love Broadstairs, and several of its citizens can be found in my [Music] work
Broadstairs remains a small resort town today; every summer tourists are drawn here by the beaches and its Charles Dickens Festival. During the festivities Broadstairs once again lives in a Victorian mode, to the delight of the holiday crowd; events and historical places recalling Dickens's life and sources of inspiration are everywhere. If the Tartar Frigate was Dickens's favorite pub in Broadstairs, by far his favorite residence was this solid house with its crenellated roof: Dickens spent eight years at Fort House, and after his death it was renamed Bleak House, the title of one of his [Music] novels. It's here, with his writing desk installed in front of a big window facing the sea, that Charles Dickens wrote his most personal and his most famous book, David [Applause] Copperfield. So he took his initials, CD, Charles Dickens, and he reversed them to DC, David Copperfield — do you understand, children? CD, Charles Dickens; DC, David Copperfield — and upstairs you are going to see the very small room where he wrote and formed these famous books; he called it his airy nest — it's magic. In the Victorian days Britain became millionaires; Britain owned more money than any other country, because the rich used the poor, and this is why Dickens is so famous, because he wrote about these conditions: you open any page of any book, it's about the privileged and the underprivileged, the rich and the poor. Now, as you go around you'll find this is a Victorian [Music] house. The Dickens Museum in Broadstairs is another dwelling that held a special place in the writer's work; this house formerly belonged to Miss Mary Pearson Strong, who was the model for Betsey Trotwood, David Copperfield's [Music] aunt. He certainly knew her for many years, because he came for 14 years, and his son describes how they used to come and have tea with this lady in this house; Miss Strong gave him the idea of the donkeys being driven away, because he used to watch her get very cross when donkeys were driven in front of this house, and he used this incident in his novel David Copperfield. The invention of the steam engine by George Stephenson at the beginning of the 19th century had already led to the construction of several thousand kilometres of railway track, but the second half of the century saw an even greater expansion of Britain's rail network. I traveled a lot for my work as a journalist, actor and lecturer; each time I was amazed to watch the countryside race by at unimagined speeds — for a man used to a trotting horse, traveling at 50 miles an hour was simply [Applause] [Music] phenomenal. Returning from Boulogne, awaiting the train's imminent arrival in the station with Ellen Ternan and her mother, Dickens had no idea that a few minutes later he would miraculously escape death. The fast train derailed at Staplehurst above a viaduct, and seven first-class cars plunged into the ravine; one car remained hanging over the void — the one containing Dickens and his companions. Ellen began screaming and her mother shouted, good God; ladies, ladies, there is nothing we can do, I told them, you must remain calm and collected, please don't scream, I beg you. Witnesses say that after successfully helping his companions out of their dangerous situation, the famous novelist, with his customary cool, returned to the train car to find his flask of eau-de-vie and his top hat; holding the flask with one hand and his top hat filled with water in the other, Dickens hurried to bring aid to the wounded and the dying. Dickens's heroic behavior showed an inner compassion that manifested itself in philanthropic activity his whole life. Associated
with Miss Angela Burdett-Coutts, heiress to a large banking fortune, Dickens first built a home for single mothers; later the same two philanthropists constructed the first low-rent housing units in London for working-class families who were displaced by the huge train station construction sites. This rail revolution led to the urbanization of England: if the country was 80% rural at the beginning of the 19th century, by the end of Victoria's reign it was almost the reverse, and Britain became the first country in human history to be mainly [Music] urban. Ending the isolation of certain regions, the railway contributed to unifying the country and helped standardize a British lifestyle and culture; thanks to the appearance of second- and third-class tariffs, travel was no longer the exclusive domain of the rich. Working-class families took advantage of Sunday day trips to escape the pollution of the cities and visit the surrounding countryside and the sea; the train in the Victorian era marks the birth of popular tourism |
english_literature_lectures | Victorian_Era_Charles_Dickens_Oliver_Twist_Lecture.txt | Oliver Twist, Charles Dickens — made into countless movies, TV shows; different versions, you know, the BBC in England has done several; a very famous Hollywood musical called Oliver! I believe won Best Picture in the 60s — so Oliver! the movie, it's, you know, a musical play that people can do and perform, very famous. This particular piece is taken from, you know, the novel, so this is a very short section, and this is part of the exposition, which, as we know from our plot diagrams and triangles and all that, is just basic background stuff that you need to know before we can really get going in the meat and potatoes of the story. I read this piece in my English novels course I took as a grad student and I enjoyed it, and I hated Great Expectations when I was a freshman — you guys read that, I believe — I mean, with a passion; I remember it took up like over 200 pages in my literature book, and I remember putting it off until over Christmas break, because then we came back into January for about two weeks before our semester ended, and so I put it off, and I took it to Florida to read, and I kept putting it off, like, okay, well, I'll sit and read it — when I turned the book in at the end of the year it had some sand in it, I mean — so maybe a lot of it was my fault that I put it off and put it off, but I just never really got into it. With Oliver, this little excerpt is so small you just barely get to meet Oliver and get to see what his situation is. The book is so enjoyable — there's a lot of action and plot. He is an orphan who gets pushed out of the orphanage and really kind of has to live and fend for himself; he's recruited into a gang of thieves — kids, kid thieves. You may have seen the movie Oliver & Company, the Disney movie with the cats and stuff, right, and dogs — that's based on this; it's called Oliver and he's a thief in that, okay. And there's another kid thief named the Dodger — the Artful Dodger is his name — and they all work for one older man named Fagin, who is in charge of all of them; he gives them food and housing in, you know, horrible conditions, but, you know, even though he's in charge of the thieves he's kind of a good guy, because he does care about them, but yet he's in charge of the thieves, so there's that kind of moral question, you know — do we like him or do we not? But anyways, he runs into, he befriends an attractive lady, you know, a prostitute; there's a man who goes and kills and chases Oliver, and he's connected with the prostitute — I mean, it's just, there's a lot of action; I mean, not like, you know, Armageddon action or Die Hard action, but yet I was like, wow, if Great Expectations had a smidge of this I'd have enjoyed it much more. So you may enjoy that if you ever have to read it in college, or you want to read it just for, you know, a classic piece. We're going to listen to it here in a second; again, pay particular attention to how the children are treated, because remember, in this era, the Victorian era, the child labor and all of that — even though it started in the Romantic era, William Blake and all of his poems that we talked about — it still is prevalent here. They're treated, you know, almost like a prison: the amount of food that they're given, because they're considered the lowest of society — these people don't contribute, these kids don't contribute, they don't have any parents — and so it's a real rough survival; the kids are really treated a lot like prisoners, okay, the way that they have to eat, the rations, which we see
in the story, you know — what, half a roll or half a biscuit on Sunday, hey, big spenders, you know, onions, gruel — and they just barely had any. And for a little boy, Oliver, to speak up — and he speaks up because, well, it's pretty much chosen, and fate and all that, but a kid was saying, you know, I'm gonna eat this boy next to me if I don't get more food, I'm gonna die, I've got to do it — and so everybody decided, well, they believed him, and so we need to ask for more, and that's where little Oliver was in charge of figuring that out and being the kind of sacrificial lamb, or the messenger. And upon the conclusion of it, you know, they posted a sign: take this boy — anybody in London can take him, take him, take him, and we will pay you to take him. It's like somebody comes by your house every week to pick up the trash, but they pay you to take away your trash; we'll see how that plays out. But thus we see, you know, the food — what the board has to eat as opposed to what Oliver has to eat — you know, it's not equal, and that's where we see these orphans, who aren't productive to society and don't have parents and, you know, are way at the bottom, and we can see where that hierarchy builds up; and we see greater representation of that in the modern unit, when we get, you know, to talking about that in great detail. And so, just — not too much excitement in this particular chapter or section; I think it's just the background — this is part of chapter 2 in the book — and so we have a lot of exposition that needs to be set up so that the rest of the plot and the rising action and the climax and all that stuff can actually occur later on in the novel. So take particular note of those elements of, you know, how the children are treated; just being able to say, well, they're treated badly, that's not really good enough, okay — look at those specifics of how they live their life. And really it's kind of troubling to think that children were treated in such a way, or that certain expectations were placed upon children in this particular time period — and this isn't the beginning, because remember, we had some experiences with Blake back in the Romantic era, The Chimney Sweeper, you know, kids being forced to, uh, you know, fearful of dying and such because of what they had to go through — and, you know, kids just shouldn't have to go through that, so |
english_literature_lectures | Introduction_to_literature_first_recorded_lecture_part_2.txt | okay, sorry, sorry about that — technology got a little wonky — but this is the second chapter of the lecture, or a short second chapter before the third and final chapter. So as I was saying, Greeley proposed the American frontier, quote-unquote frontier, as a kind of escape chamber, a decompression chamber: all this pressure in the industrialized Northern cities, all this poverty, could be released; you'd have agrarian farmers, democracy would be wholesome, etc., etc. But the reason I put the word frontier in quotes is that it wasn't just some American frontier — the land was the home of native communities. Since the beginning of the quarter, excuse me, the beginning of the semester, we've been talking about the relationship between the European colonial powers and indigenous communities, and if you can remember, one of the first things I asked you to read that represented an indigenous worldview was Mittark's will, which was a claim to land in the Northeast from the 17th century — it was a land claim. Since contact there has been this contestation over land: indigenous groups were in the Americas prior to any European groups, but as the European population grew, inevitably there were conflicts with native groups, who were thinking, okay, these are our homes; if you want to negotiate for treaties to make certain land claims, that can potentially work, but the desire for more and more land wasn't anticipated, I don't think, by most native communities — the numbers of people pouring into the Americas were not anticipated. So the very first text from the Americas we read, Mary Rowlandson's captivity narrative, was a consequence of this expansion and contact and the movement further away from the Eastern seaboard. She was taken captive by the Narragansett tribe in the Northeast as a consequence of King Philip's War; as we talked about in class, from the perspective of the native communities it was, we could say, an anti-colonial war — it was a claim about their land and their life ways. From the Protestant, the Puritan perspective, as we discussed, the captivity represented God's punishment or choosing for potentially spreading too far from the literal Puritan original colony and the figurative faith. Well, by the late 19th century the spread of settlers had moved even further west; this slide here represents how far that spread was moving after the Civil War, if you can look on this map. So I just want us to think about one powerful indigenous nation, and that nation is the Sioux. The term Sioux was actually coined by French Canadians to encompass a culturally, linguistically related nation composed of three different groups: the Yankton, the Lakota and the Dakota. I want to use this group as a specific case study because Zitkala-Sa was Yankton; she was a Sioux Indian. Now if you look at this image here, this map of the United States, this little black square shows us what the larger image represents — which part of the United States — and this was the territory that the Sioux controlled; essentially this is their territory. Here's the Yanktonai, so this was this community here, Lakota down here, Dakota over here, Yankton here — excuse me, this is the Yanktonai group, the Yankton here. One of the things I want to emphasize is that we need to be careful not to assume that the US was this mighty colonial force and native communities were
victims to this imperial colonial power. Retrospectively, with the expansion of America toward the West, it may have seemed like that, but in fact, as I say here, for much of the 18th and 19th century the Sioux were more powerful and more feared by other tribes than the American military. It's important to remember that it wasn't just these European powers coming into the Americas that dictated how things went; in fact it was a delicate process of negotiation, and it wasn't always conflict — I said that from the start settlers wouldn't have survived if it wasn't for native hospitality; to this day we have Thanksgiving to commemorate that, or at least in part it should commemorate that. The French were in Canada to trade for beaver, for fashion in Europe, and they totally relied on powerful tribes like the Iroquois and Huron. So we shouldn't fool ourselves into thinking that the English and the French were totally in control; they were very much controlled by the will, the wills, of native peoples — they had to negotiate the will of native communities. And the Sioux — the Sioux was a hugely powerful nation; in part, their power — they became more and more powerful, interestingly enough, as a result of contact, because of the horse: the horse was introduced into the Americas in the 15th century with the Spanish, and the Sioux adopted that technology, the technology of the horse, really effectively, really efficiently, and became really powerful. And so by the end of the Civil War — and in the wake of it, in the readings that carry us to 1900 — the Sioux still remained a very powerful nation. So here, if you look at this map — this is the contemporary state of Minnesota, the state of Iowa, Nebraska, South Dakota, North Dakota — this is a huge, huge territory, okay, and this was what the Sioux controlled at one time. I took a bus from down here, basically from here in North Carolina, all the way to here, Seattle, and I took a bus that went through the Dakotas, and it took me a day driving on a bus — a day just across this territory — and this is before the automobile; so this nation is super powerful, okay, that's the point. Oops, sorry, technical glitches, okay. As I note in this last bullet point here, as American settlement and the transcontinental railroad cut further into Sioux territory, there were increasing skirmishes between the tribes and the settlers. The railroad was kind of this vision that different politicians really were enamored with, in love with, in the mid and late 19th century: if it could connect the East Coast and the West Coast of the United States and people could take a train all the way across the country, that seemed like a pretty phenomenal thing. But the train had to be built across all kinds of native land, and native communities would sometimes attack train outposts or workers building the train because it was going through their land. You may have seen American western movies; most of the westerns are set after the Civil War, and they kind of glorify or romanticize the frontier lifestyle of the cowboy figure, and usually that's in relationship to the Plains, frequently to the Plains warfare. So this area that this little square down here represents is also called the Great Plains, and these different tribes in this area are known as some of the Great Plains tribes; so in addition to the Sioux you have here, you have the Cheyenne,
the Arapaho, the Pawnee, the Omaha; those are all major native groups of the Great Plains. Just as a last point to note about the power of the Sioux after the Civil War, there was a famous, famous battle. I should say that with the railroad moving west and immigrants moving toward the west, and with these increasing skirmishes, the government wanted to make sure that native communities were settled: basically they no longer had the freedom to move anywhere they wanted within their territory. The government wanted them confined to reservations, to specific allotments of land where they would live. Being confined to a certain area of land was dissonant with, didn't go with, the life and philosophy of many tribes on the Great Plains like the Sioux, because their life was linked to the migration of the buffalo herd. The buffalo was a major source of food and a major source of clothing; they used all of the buffalo, for food, for clothing, for their teepees, they used buffalo skin for their teepees, and so following the buffalo herd was part of their lifeway. They were nomadic as a consequence of this, and so the government asking them to be confined to a certain area of land was asking them to abandon a whole lifeway that was really important to them. So, needless to say, many Lakota, many of the Sioux, didn't want that, didn't accept that, and they fought back, and they fought back very well, because they were masterful horse soldiers. One of the most devastating battles in the American imagination of the late 19th century is known as Custer's Last Stand. This was a battle in which a number of different Plains tribes, the Cheyenne as well as the Sioux, were resisting being forced onto a reservation; they didn't want to do that. And there was a Civil War cavalryman known as General Custer, who was a decorated soldier in the Civil War and supposedly known for his bravery, but who was also not particularly sharp. There's a famous military school in the United States known as West Point, and Custer was last in his class, last in his cohort, at West Point, and some of his tactical decisions in battle weren't particularly wise, I guess you could say, or decisive. And so, I think it was around 1876,
he had followed the Lakota and Cheyenne into the Northwest, into what is now the state of Montana, and he chose to attack a camp. There were a number of different commanders, and he charged, leading about 200 soldiers into this camp. The Cheyenne and Sioux had been kind of on the run, trying to avoid Custer, and Custer didn't coordinate his attack with the other men, and so this famous general was killed; the Sioux warriors and Cheyenne warriors killed these American cavalrymen to a man. And at this time, for kind of the first time, the newspapers could broadcast this information really quickly, because telegraph technology had spread in the period, and it shocked everyone in the eastern United States: in the late 19th century one of the most famous Civil War cavalry generals was devastated, his troops killed to a man, by the Sioux, and this shocked the East. On one hand it shows you how powerful the Sioux were; we shouldn't think that even in the late 19th century the US military was more powerful than they were. On the other hand, it was a sad, unfortunate historic turn, because after Custer's Last Stand, after the Lakota victory at the battle they call the Battle of the Greasy Grass, or the Battle of the Little Bighorn, the American military really worked harder to confine the Lakota to the reservation spaces, and ultimately they were rounded up and forced to return to the Dakota areas, which was in essence their homeland, but they were confined to a large reservation, a reservation nevertheless. Okay, and that leads us to a discussion of legislation that was passed in the wake of these battles, legislation known as the Dawes Act or the Allotment Act, which really shifted the way people thought about native communities in relationship to the American national body and citizenship, and influenced the lives of so many native people, including Zitkala-Ša. I will talk about that in my next chapter, so I'll turn to that in just a moment |
english_literature_lectures | Introduction_to_literature_first_recorded_lecture_part_1.txt | okay so hello everybody I'm trying to imagine our whole classroom as I speak right now this is my first QuickTime lecture and I'm trying to make it like a real lecture so I'm not just reading things and talking to myself and boring all of you so before I get underway I wanted to thank all of you again for being flexible with your schedules and being understanding to my situation I would much prefer to be in class with all of you rather than having this disembodied voice represent my contribution but we can learn like this this is ICT man so so let's get underway let's get underway I thought thinking back to last week I thought many of you did a very good job with your comments on Henry David Thoreau and Ralph Waldo Emerson um I was I was impressed and I thought that you most of you understood the stakes of of their work you understood the stakes of Emerson's self-reliance and nature as well as the stakes of Thoreau's Walden well one of the things I asked you to consider was the work of Emerson and Thoreau who can be called American transcendentalists in relationship to industrialization in the Northeast one of the things if you read my notes thoroughly which I'm sure you all did because I know you have so much time on your hands these last couple of weeks anyway I hope you read my notes early at some point but one of the things I wanted to emphasize was that although we'd been talking about the institutions Lavery in the South things were not totally wonderful and magnificent in the free labor north and we need to think about the fact that even the instant the institution of slavery has characteristic evils that the civil war ultimately put to an end but free labor and liberal capitalism in and of itself does not make inegalitarian equal society so one of the things Thoreau for instance was was critiquing was what happens as a result of the rise of consumer capitalism and and different needs that we think we have but in fact we might not need what happens is the consequence of industrialization will get back to that in just a minute but I wanted to kind of draw a line of continuity between last week and this week because we'll be talking a bit about the northeastern cities in just a second so I'm gonna move this down here so this week we're talking about zip kalus saw a really amazing American Indian writer in my estimation we're looking at a piece that was originally published in the Atlantic Monthly at the turn of the century in 1900 and were we're considering her autobiography I asked you to read a couple excerpts from her autobiography and she was published in a really important literary journal in the United States the Atlantic Monthly before there was anything like an English department where people would say oh yeah you should read this novel this is American literature you should read this short story before there's anything like that the way people talked about American literature was in these journals like Atlantic Monthly or Harper's Weekly they were journals based out of the Northeast and and the journals in the wake of the Civil War were we're trying to understand what it meant to be American and I'll come back to that in a moment but let's talk let's talk about America after the Civil War to understand the work of zocalo saw we need to understand what was happening in the United States after the Civil War which ultimately will help us understand what was happening happening to 
American Indian communities after after the Civil War so check out this the slide and on the left here you can see a political cartoon and it shows a former African American veteran presumably in a former white or northeastern veteran and it says down here it says a man knows a man give me your hand comrade we have each lost a leg for the good cause but thank God we never lost heart in God we never lost heart and I think this publication may have been in something like Harper's Weekly a journal similar to the Atlantic Monthly that is a kolosov was published in I think this is an interesting political cartoon because it it kind of symbolizes an idealized way of thinking about the Civil War of the wake of the Civil War in the sense that you have these two different Americans coming together holding shaking each other's hands and although the war cost them each of Lyne accustomed greatly they they're able to to join each other in a new union and and for a new sense of national identity now this this notion of different people's coming together and in joining in a new union in the wake of the Civil War in some ways was was really idealized federal troops had to occupy the south to ensure that that the southern community didn't antagonize the African American community in a period known as reconstruction so federal troops were stationed in the south for about 15 years after the Civil War and in the North this this this this version of holding hands across race and maybe class that kind of idealistic coming together after the Civil War didn't really manifest itself as people thought it might but quickly to reflect on the Civil War the the the toll on the nation was tremendous and and down here I say the nation was literally and figuratively scarred soldiers like the ones represented in this in this political cartoon the soldiers that survived often had lost limbs families and cities have been destroyed particularly in the south cities have been destroyed the families across the nation recent scholars in fact I think it's the president of Harvard as a historian Harvard University and she has a book that talks it that that estimates that 750,000 Americans died in the Civil War that's over a hundred thousand times more than the previous number thought either in combat or maybe as a result of disease related to combat or in appropriate health care to in today's numbers 750,000 Americans percentage-wise would be seven million so seven million people in the nation would have died in the Civil War if we think about it in in the terms of today's populations it was it it was devastating it was devastating for the nation but it did indeed as I say in this final point here it did indeed put an end to the institution of slavery so what would what would the new America look like after the Civil War this was an important question that northeastern writers politicians well politicians and writers all over the all over the United States but in particular in the Northeast in the kind of cultural political centers people thought are we won the war and our mode of existence our way of life is is the one that triumphed and they felt some impart a responsibility for reimagining what America could or should be or could or should look like who is included in this political vision who will be integrated into the national body as citizens this question obviously matters because with the end of slavery you have four million African Americans who are free freedom south and are now going to be incorporated or marginalized 
or not incorporated depending on different political leaders visions into the national body in the north for sure people were we're interested in in this idea of of integrating African Americans into the national body I don't know if any of you looked at that article I sent out from the New York Times on on Abraham Lincoln but if you if you read that you may have noticed that some early writers and politicians did not imagine an integrated American national body in fact even Lincoln earlier in his political career thought that African Americans should be freed but they should be find their own colony they should be repopulated in Africa and given their own place to live they couldn't imagine European and African descendants living shoulder-to-shoulder in a nation Harriet Beecher Stowe the author of Uncle Tom's Cabin also thought that I don't know even if the in the in the end of her life if she thought that but that this notion of of re of moving the african-american population to a separate colony was was part of abolitionist discourse Lincoln changed changed that position by by I think that the start of the the Civil War and and started thinking that in fact these liberties that we've been talking about all semester we hold these truths to be self-evident that all people are created equal that political institutions should represent everybody and give everyone a fair shot Lincoln thought yeah this this isn't just for European Americans this should be for everybody so including African Americans into political vision in the nation was was a shift around the time of the Civil War so what would this new America look like you have four million slaves that were now free considering how how to include them and into the social order but on top of that you between 1870 and 1920 you also have over 26 million immigrants who entered the country twenty-six million immigrants so there's mass migration into into the north and into northern cities in particular many of the immigrants were Catholic both Italian and Irish on for instance I out you know I always tell you stories about my family because I think the anecdotes help personalize the history we're talking about so it doesn't seem so distant abstract but for instance in the early 20th century mmm my mother's side of the family which is Italian immigrated to the United States so between 1870 1920 you had a lot of immigration including Irish and Italians which meant many new Catholic people were incorporated into the nation now as I say here in this Third Point there was suspicion about the possibility of integrating Catholics into the American national body and this political cartoon on the left demonstrates that that's point of that suspicion here you have Lady Liberty wrapped in an American flag here and she's this is a mortar you know where you grind herbs or something like that to make to make to make a mix of blended mix of spices and this blended mix of spices are all these different people from all over the world that are being incorporated into into America in the United States and the spoon of equal rights is stirring up the pot stirring the pot here so you can kind of see these stereotypical representations of people from around the world but they're all included they're all included in this pot they're all getting mixed together including what looks to be like an African American so is included in this pot but there's one person that X is X cluded and that person is standing in the pot here with a dagger and that is supposed to be an 
Irishman and so this caption down here says them the motor were more mortar excuse me of assimilation and the one element that won't mix so there was there was prejudice and racism that targeted Irish Irish communities Anna Tyne communities as they emigrated in the country in that period particularly the Irish because initially there they had much greater numbers I wanted to remind you when we were discussing Maryland's captivity narrative rowlandson went when her children were taking captives and she was separated from them she was less concerned about their well-being in relationship to the natives compared to the French Canadians the French Catholics rowlandson was really worried that not that the natives had her children take cap captive but that the natives would sell her children to the French Canadians and they would be converted to Catholicism so that's kind of one of the religious strains that we can continue to see here at the turn of the century there was deep suspicion about the Catholic faith and whether or not it would what what it would mean in relationship to two but the Protestant American democracy okay so so this is a political cartoon from 1880 that kind of demonstrates that demonstrates that point okay so where do these immigrants go they go to many of them to major cities major cities like New York New York City if you look at my notes on Thoreau and Emerson I talked about the fact that last year Jennifer and I showed Gangs of New York a film by Martin Scorsese that represents the draft riots in New York City working-class life in New York City at the time of the civil war I showed that in a series we called American culture through film last year some of you were there and might remember might remember the film for those of you that weren't I tried to tell you why I thought it was a fascinating film and why why it's why it helps us understand the 19th century the American late nineteenth century even though it's kind of sensationalist than in a Hollywood film well that film shows Irish immigrants and influx of Irish immigrants into New York City around 1860 in fact at the very time of the Civil War there's one clip from the movie where Irish immigrants are getting off a ship and someone is signing them up for the civil to be a union soldier and civil war as soon as they get off the ship they're not even they maybe just were American citizens for a second or maybe they're getting citizenship if they they fight but it's it seems absurd because what do they even know about the American political situation and one of the interesting things about the film the gangs New York is it shows that for working-class people for poor people in the North the the Civil War seemed like kind of an abstract thing that didn't relate to their lives because they were trying to put food on their table and eat so I called your attention to a scene where they actually went some of the these poor folks go to watch a theatrical representation of Uncle Tom's Cabin and they they're booing it and throwing vegetables at the actors they don't they don't see their own future destiny linked up with the plight of Uncle Tom at all so the film gangs of new york is set in lower Manhattan in this area called the Five Points and this image on the left here represents the five points of New York City for most of the 19th century this is the poorest part of New York the the painting you see looks a bit chaotic you see pigs over here kind of running around you see mixed a mixed community see african-americans 
you see European Americans you see children running through the streets it looks kind of chaotic and slightly disorderly it was called the five points because it was at the intersection of five different major streets you can kind of see I apologize I'm having trouble I can't see the cursor but if you look at the building centered here one Street goes off to the right one Street goes off to the left there's another one kind of off your left so there are five different streets that come together good um I wanted to show you a clip from the film the gangs of New York so you could see how Martin Scorsese represented the five points in in the film and this piece of this historic fiction so here is a scene from the five points when Leonardo DiCaprio who plays the son of an Irish immigrant returns to this area that he grew up in but was taken taken from as a young boy after his father was killed his father was killed in a battle kind of defending the Irish community and his father was killed by this Protestant kind of warrior boss named to build a butcher who's played by Daniel day-lewis so this is Martin Scorsese's representation of five points in lower New York City it in the during the time of the civil war so Leonardo DiCaprio here on the Left plays the son of an Irishman returning to return into five points poverty boys sorry and I'll just pause it here to say so he's getting the lay of the land so so in a way Square says he's kind of introducing us to all the different groups that exist in the five points and and note that these are all different gangs these people are marginalized from the economy in the United States they can't make money really legally they're not included in the national vision economic vision really so what do you do when you're not included you create a shadow economy that's what people do to this day so they steal or they sell you know they sell they steal people's money or they they sell illegal things etc etc so you're being introduced to five points which is a working-class community poor people that are that in different groups are kind of trying to get money you know doing stealing and things like that Street era always lively have an evening where the gangs around now got the daybreak boys and the swamp angels they worked the river looting ships the Prague hollers Shanghai sailors down another bloody angle chart tales was rough for a while but they become a bunch of Jack Rowland dandies the long run around murderers alley looking like Chinamen hellcat Maggie she tried to open her own grog shop but she drunk up all her own liquor got thrown out on the streets and for years now she's under labor I didn't there's a plug out they surf from somewhere deep in the old country got their own language known understands what they're saying they love to fight the cops and the nightwalkers around peckers world they work on their backs and kill with their hands they're so scurvy only to plug uglies to talk to him but who knows what they're saying the slaughterhouses under Broadway twisters they're a fine bunch of bingo bonus I know little 40 things I used to run with him for a while till they got took over by venture to cockroach and Israelite buggers venture carries a charm if you try to leave the gang to say axis grab the true-blue Americans call themselves a gang but all they really do stand around corners damning anyone got the sound of the dead rabbits Selena Nardo DiCaprio says to any of these gangs that are robbing and stealing and make money they have the sand of 
the dead rabbits are they as powerful or courageous as the dead rabbits that was the gang his father his father led full of Irishmen before his father was killed you don't say that name that name died would you it been outlawed I wasn't a black house the chinks told me that the natives celebrate their victory every year is that true I thought they do that's quite the affair butchering okay so that was that was Martin Scorsese's representation of this very painting here that's the five point neighborhoods in New York City so a question becomes then with 27 new immigrants coming into the city they don't have jobs they're living in crowded houses there's not public education for them if there's not public health care what what can you do what can you do and different thinkers across that time came up with different solutions in Europe and the United States and one of the more famous I don't know I guess thinker or influential Americans in this period and then in the mid 19th century mid to late 19th century was someone that owned a newspaper and very one of the most widely successful newspapers in America at a time called the New York Tribune that person was Horace Greeley this man here on the left and Horace Greeley was was interested in reform he was very concerned with the plight of the poor and and so he published articles in his paper that tried to address their plight in fact Greeley published Karl Marx Karl Marx was you know writing in in England and was trying to answer similar questions what do you do with with the working class and poor people in a in a capitalist society excuse me take a little drink water but where Marx thought workers of the world unite nothing to lose but your chains and thought overturning the capitalist system would be the solution to all this poverty and an imbalance of economic power Greeley thought well potentially one solution was was that the American frontier quote-unquote frontier would be like a safety valve to all of this chaos and criminality and destitution in northeastern cities where these poor folks don't have anything what about all this land to the west what about all this land and and Greeley thought these immigrants and people that didn't have anything should be allowed to settle in the West and start little homesteads and plant little farms and and and and become kind of agrarian farming people and make a life out of it on on the frontier that's kind of an old idea actually Thomas Jefferson thought that every a republic a democratic nation would be best served by if everyone was kind of like had a small farm and was lived kind of an agrarian small farming lifestyle and and so really is kind of known for this quote that he may or may not have said but it embodies part of this vision go west young man go west young man which is to say settling the frontier populated frontier would solve these problems this urban poverty and it would it would create opportunity for the have-nots I don't have anything but as I say here but this American frontier or West was the home of Native communities sorry |
english_literature_lectures | An_A_Level_English_Literature_lesson.txt | And so I thought I'd just start off maybe by asking you: is anyone reading anything really good at the moment that they could share for the first five minutes? Anyone? Okay, anyone reading anything? Yeah? Well, the last book I read was Rebecca, by du Maurier; I really, really enjoyed it. I finished that about three days ago. Great, and what kind of made you pick that book up? Well, because I was sitting on Amazon and I was looking at the books that I wanted to read, and she said why don't you try reading that one, because she read it when she was my age, so I picked it up. It's intriguing, isn't it, because of course the title of the book is about Rebecca, and then she's sort of this spirit haunting this poor girl in the marriage, isn't she, and she's never named, she's anonymous throughout it. So will you be reading any more du Maurier? Yeah, I'm looking at that, and also I want to read A Thousand Splendid Suns, okay, completely different; I've been thinking about going back to that. Yeah, quite a few people did that for coursework, didn't they, as a sort of read in parallel to The Kite Runner. That's really interesting, thank you. Talk to Miss Lansbury, she loves du Maurier, it's her thing, so if anyone's going to recommend more, it will be Miss Lansbury. Anyone else read anything good? Does anyone do any reading for English outside class? Put your hands up; I'm not going to be angry with you if you do. Okay, so you're rereading the set text, and that's kind of part of it, okay. What was the last thing anyone read that was any good? The Hunger Games, okay, there's that new film coming out; they're really good, you should go and see it, by the way. Okay. Yes, we've got the frame narrative. What's the frame narrative, Holly? The frame narrative is Frankenstein's story. Oh, the frame narrative, the letters, Walton's letters, yeah, so we've got lots of different narrators coming through, haven't we, telling their story. We were looking at the monster, and you were preparing the monster's narrative in the different chapters for last time. What things was I asking you, Lucy, to look for? Well, this is what you should be looking for: language, structure, form, how the monster is represented. Okay, so language, structure and form, and which assessment objective does that come under? AO2, yeah, very good, so AO2 for the analysis. Okay, and what was the other question that was sort of directing our analysis? What does the creature learn, literally or metaphorically. Okay, so what does the monster (creature, probably, is less semantically loaded) learn, literally or symbolically, and what was the difference, Emma, between perhaps those two? How did you interpret that? Literally is sort of what he learns literally, and metaphoric is stuff that he learns without even knowing he's learning it. Okay, so things we can perhaps see that he's picked up, maybe morals or something like that, morality. Can anyone try to define what you can learn literally without having to say the word literally? Would it not be sort of learning a language, because you have to learn it how it is, there's no other way to do it? Okay, so yeah, actually kind of learning something, learning something new, learning something
Concrete, in a sense that we can all recognize. Right, and it was chapters 11 to 16, wasn't it, so that's what you'll need to be focusing on today as we go through these. Okay, the people who looked at these chapters have picked out the key significant points for us, and we can always open it to the floor afterwards, so you can always add things if you want to. Okay, and we'll work our way through, making notes. What does that mean? That's what we've been talking about: making notes, underlining stuff, writing things down, and trying to look for these focuses. But most importantly we see his sort of connection with the world. He kind of sees people, yet he's pictured like a child, like an innocent child who, without the paternal love of Frankenstein, has to figure out how to actually survive in this world, and that's really important because it kind of shows us how culpable Frankenstein is for the monster's reaction and response to humanity, and it adds this sort of pathos to the monster's circumstances, because Frankenstein is not around to show him what to do. The whole chapter is concentrated with the personal pronoun, to kind of suggest the elements of the earth, like the cold damp substance he's referring to, which is mud, and it's quite sad because we really just have this sort of childlike view of him at the beginning of this chapter. Yeah, can you find that point for us? Just have a little look for it in our editions. So it's interesting: why have we got this symbol of femininity here? Yeah? Could you say it sort of represents the conflict that he feels, because he's sort of torn between two worlds, and even though we know him to be a man, there is this recurring image of it that just shows his conflict over where he's supposed to be. Okay, you might interpret it as that, it's true, it's true. I think it's worth noting as well that in the penultimate paragraph ('night quickly shut in') he watches his human neighbours, and he talks about the man taking up an instrument which produced 'the divine sounds that had enchanted me in the morning.' What do we make of that sort of passage, that little extract? We've got to think about why Shelley decided to write it in this way. Why is he sort of enchanted by the music? It just shows how influenced he is, because he's only just met these people and already, if you think of him as a good character, then obviously when he loses them and Victor sort of dismisses him, he's not got that influence any more and he goes back. So it sort of shows that maybe if Victor had been a bit nicer to him then it wouldn't have ended up how it did. Yeah, absolutely, absolutely. It is also obviously a flashback here in the narrative for us, isn't it? Does anyone remember what that was called in terms of the mode? Analeptic, very good, yeah, so it's in the analeptic mode here, isn't it, the flashback, which I think is interesting. Why do you think Shelley decided to write it like that? Why not just have it as we go along? Any ideas? Yeah, he can tell his story in a nicer way; if we'd been with his story from the beginning he wouldn't have had the words to express it, so we have to go back, and we wonder how he can express himself so beautifully, and go through the story to find out how. Yeah, so it's a kind of delayed action, isn't it, because of the beauty and the poetry of his language.
And yeah, technically he shouldn't know language at the start, should he, so she's got that hurdle to get across. Lizzy? Well, by putting it in the middle, instead of having it as we go along combined with the other stories, like we do with Walton and Victor, it's more like we can chart his downfall; it's like his alienation from society, the way he tries to be involved in society, and then the effects and implications that that has, and the fact that eventually, obviously, he's alienated from it and isolates himself. Okay, interesting ideas. There's a sense of fate and almost dramatic irony, in that we know what's going to happen, and by not taking the reader by surprise we sort of know what's coming next, and that perhaps makes it even more poignant. Yeah, perhaps so; perhaps it just heightens our sense of pathos for the creature. Nice ideas. Right, chapter 12 then, we move on. Obviously he's noticing the changing weather, but he's also noticing emotion and kind of really understanding things like body language; metaphorically, he's recognizing the power of emotion, because he says 'I observed the countenance of Felix was melancholy beyond expression; he sighed frequently.' So he's noticing that the way Felix is acting is controlled by his emotions, that he can see deeper than just what he's doing. Very good, I think there's lots of things like that throughout. We've come to the end there. Well, some really good concentration there, trying to get through these chapters quickly. We'll finish off on Monday with chapter 16, and then we've got a couple more to go, and then I'll set you some work over Easter; we'll finish off over just a couple of weeks after Easter, and then it will be revision from there on in, and we'll have covered the course. Okay, well done, some really good ideas there. Okay, that's it. |
english_literature_lectures | 5_Inferno_IX_X_XI.txt | Prof: Today we are going to look at three cantos. They are connected in a number of interesting ways: Cantos IX, X, and XI. They describe--they focus on the pilgrim and the guide, Virgil, being--approaching the city of Dis. So we are moving--they are moving and we with them, away from the area of incontinence, which is the section of Inferno we read through from Canto V to Canto VIII. They are approaching the gates of the city of Dis in Canto IX and the pilgrim experiences a serious impediment, an impasse, we will call it. He cannot go any further. The guidance of Virgil fails him and we are going to examine why it fails him and what is the problem that the pilgrim will have to solve. Once he is in canto--within the city of Dis, the first sinners he meets are the so-called heretics, heresiarchs, among whom--chief among whom is really Epicurus, the Epicureans, and in many ways you understand already that link between the city and these philosophers. Let me just add one thing so you have-- the idea is a bit clearer: Dante acknowledges in this philosophical text that he writes called the Banquet three schools of philosophy. The so-called academics or Aristotelians, then the Stoics, and the third Epicureans; now he handles--he just examines who these Epicureans are and for him they appear as those who are guilty of some form of pride, if you wish, intellectual pride, since heresy is a question of intellect and not of will, we'll talk about that. They deny the immortality of the soul, and in fact, it's really a problem to figure out why should Dante think of them as sinners at all. In antiquity they were viewed as one more school of opinion, a philosophical opinion: my mind does not convince me, my reason does not find it convincing the belief in the immorality of the soul. Why should I be punished? It's intellect, since the logic of Dante's own idea of sinfulness is that the will has to be involved, the will is at the center of the habit to sin. We'll talk about this, then in Canto XI, there really is no great action. Dante goes on explaining the so-called topography of evil, goes on explaining the arrangement of sins. What is the principle of construction of Inferno? There now he turns to Aristotle, we'll turn to Aristotle's Ethics first of all as the plan, as the model to--for the arrangement of sins and then also we shall see in a very interesting way he will turn to-- he will allude to his Physics. You see he goes on talking about--from a personal problem which we have to understand in Canto IX, the crisis of, then the questions of the intellect and its relationship to the will; and then in Canto XI this idea of what are the dispositions to sinfulness and we shall see--and the turn to Aristotle. Let me go back to--now looking at exactly the crisis of Canto IX, Dante's progressing in this journey. He reaches the gate of Dis and now--this is around lines 40 and following. Three Furies, the so-called three Erinyes of Greek mythology: Alecto, Tesiphone, and Megaera. They appear and they stop him. They say you cannot go into the city, a city described very much as a medieval city. In fact, it's a kind of swamp for reasons that we-- having nothing to do with really ecology but the idea that medieval cities were built near swamps because the land was always more malleable and there was water clearly in the-- that's not the reason for Dante but the reason for the certain ways of understanding medieval cities. 
The three Furies, Alecto, Tisiphone, and Megaera will stop and they call on Medusa who doesn't come, but they summon Medusa. That's why they say, "let Medusa come." They threaten the pilgrim, with the sight of the Medusa, "let Medusa come," she isn't here, I repeat. This is--if you had to translate it into let's say from English into Italian you would use a subjunctive, "may she come, I wish she came, we wish she came, let her come and we will turn him to stone." That's the threat. A threat of petrifaction, because according to the myth, and if you don't know all of it for instance, you may have seen a movie about the Medusa, if you look at the--if you gaze at the face of the Medusa, one who gazes at the head of the Medusa who was a great virginal beauty, a vestal in the Temple of Neptune, according to the myth. It was violated by Neptune and Minerva takes revenge on her by metamorphosing her into this ugly repulsive figure with her hair turned into snakes and yet she has this power, this magic power of turning all the onlookers into stone. That's the threat. "Let Medusa come and we will turn him to stone they all cried looking down. We avenged ill, the assault of Theseus," Theseus who also violated the boundaries of Hell to free the Eurytus, another little story that there were-- Theseus was successful in the liberation of Eurytus. The drama involving the pilgrim directly, this is a menace on him. "Turn thy back," this is Virgil who intervenes, "and keep thine eyes shut, for should the Gorgon" the Medusa, "show herself and thou see her there would be no returning above." And now that's the turn. Listen to this: "My Master said this, and himself turned me round and, not trusting to my own hands, covered my face with his own also." The poet interrupts the narrative and talks to us as a poet. This is the first so-called address to the reader. I will talk about this little technical detail. That is to say, this is no longer part of the action, now it's no longer the pilgrim, the story of the pilgrim, but the poet who is sitting in his study and who says, "you who are of good understanding," the Italian says, "you who have healthy intellect, who you have a good an understanding, note the teaching that is hidden under the veil of the strange lines." The poet assuming authority turns to us readers, and in a sense, he needs readers so that his authority can be constituted and he warns us. He admonishes us, to engage in what clearly appears is as an allegorical operation. We have to read, and the language is the language of allegory. We have to know how to read underneath the veil of language, there's something hidden underneath this. What is the allegory about? Let me just give you more about the story of the myth of the Medusa, so you will see the relevance maybe of that myth and the-- what I left out of the myth to this scene. As you know, the Medusa will be conquered, will be defeated. She will be defeated by the poet, by Perseus who--it's the origin of Pegasus, the horse of poetry. Pegasus--I'm sorry, Perseus who, using the shield of Minerva-- the shield Minerva had given him--and by looking not at Medusa directly, at her face directly, but at a reflected image in this shield, in a mirror, the shield of Minerva, manages to see her and will kill, he will slay the Medusa in the story. Within the Ovidian narrative, this is clearly a means to evoke for us the need for a kind of--not a direct vision but a mediated vision. 
That is to say, through the mediation of poetry for us, for the mediation of--I'll come back to the scene in a moment, but through the mediation of the shield can Perseus really take flight, kill and then take flight on the back of Pegasus. For us, the shield of Minerva is the text, because at this point there is a sort of direct-- let's say, divergence between what the pilgrim is enjoined to do. Virgil says to him, "don't look, shut your eyes," and not trusting the pilgrim, either his quickness, he must have been awed by this situation-- such a situation he doesn't understand. He covers, Virgil covers, the pilgrim's own eyes. In turn, the poet addresses us and tells us to open our eyes. You open your eyes and look. You can because you have the shield of Minerva. You have this textual mediation that will allow you to escape the direct threat and glance of Medusa. So you do know that the story Mercury, who is the messenger, that clearly--the figure of the interpreter. That's what the messenger means, the bearer of messages, the bearer of words. He comes and manages to make a breach within the wall of the city and the pilgrim and the guide can continue their descent. This is really the story. What is happening? What--how do I--can we explain? What is this allegory? Let me say a few things so that you can understand the whole technique of allegories. Whenever you read the Divine Comedy, probably much more than I would ever do, other scholars will tell you that the Divine Comedy is a vision, which it is, and that it's allegorical which at times it is, and the allegory is supposed to explain everything that the pilgrim finds himself lost in the woods. That's not the woods--to me it's the woods, but they say it's a state of sin in a way and there he meets three beasts that stand for pride, and wrath, and what not; they're three beasts and they may stand for other things. The significance of that initial landscape, as you may recall when we talked about that scene is not all that clear and that's part of the problem. That's what I call the land of unlikeness within which the pilgrim will find himself. The inability to join together signs and their significations, the awareness that there are no signs which are so self-transparent as to be understood or de-codified in a particular way. What is this idea of allegory? Dante here is clearly asking--telling us that it is an allegory at work. 'You readers have good understanding of healthy minds look, open your eyes and look underneath the veil of the strange verse.' Why are they so strange? What's so strange? What is the story? What's going on here? What is allegory first of all? Allegory, as you know, the word means to speak otherwise. It's a figure but it related--but not quite, to enigmas within the manuals, the primers of--rhetorical primers from medieval and classical times. Enigmas, irony, when you say one thing and you mean another, but to say this, is to say very little because Dante has been very thoughtful. He has been probing this issue very deeply, and in a couple of places in his works. In the Letter to Cangrande that he writes, about which I will talk later--it's a letter that he sends as an introduction to the first ten cantos of Paradise, Cangrande being the lord of Verona where Dante had lived for awhile. And also in the Banquet, this philosophical text where he explores the idea of what--how a meaning can be arrived at. I have a statement. 
How can I go on drawing a particular significance, or more than one significance out of a statement? He distinguishes two types of allegory. And there is a so-called allegory of poets and allegory of theologians. How does he distinguish them? How would he ask us to distinguish between them? The allegory of poets is an allegory where the literal in which, the literal sense is a fable, is a fiction. To say, that's the example he gives, Orpheus by the power of his language moved stones, that's an allegory of poets. It really means that the power of the voice of the poet manages maybe to edify cities, whether we need poetic myths for the edification of a city. That's understandable. Or to say Orpheus, that by the power of his words, tamed lions. It's to say that whatever ferociousness we may have inside us can partly be tamed by the music, the song, the poetry, and so forth. That's the allegory of poets. The literal sense is a fiction. To say that it's an allegory of theologians is completely different. The example that Dante gives is taken--sends us to Exodus, the biblical story of Exodus. You all know, I take, what the biblical story is, so once again movies help. The biblical story of Exodus, the story where the Jews abandon, leave the bondage, the slavery of Egypt, go through the desert and reach the Promised Land. This is happening historically, it's true. This is not a fiction, this is--the Red Sea did open up and the Jews could pass through the Red Sea, could cross the Red Sea that way. This is history and in the allegory of theologians, the literal level must be historical. It must be an event. So this is the distinction. Of course, the question is what kind of allegories are used here? We'll come to that in a moment, but keep that in mind. This is--I hope--it's more than of archaeological interest. Within the allegory of theologians, they distinguish four levels of exegesis, exegesis being a word meaning interpretation. Four levels: the literal, the moral... An allegory is telling you what to do, teaching you. It has an ethics involved: that you read and there is an ethics when you are reading. You have more or less a text. It's time to direct your will or tell you what you should be doing. A tropological telling you what--tropological meaning what does it mean in terms of your whole life, not just an action in a particular case. And then the so-called analogical or eschatological. So that the story of the Jews crossing the wilderness and going to the Jerusalem means having a kind of a spiritual conversion, moral conversion, means that this is really the way that life ought to go. You go from sin to glory or the peace of the city, and then anagogically, this is the story of the soul. It prefigures what the soul ought to be. In the case of the allegory of poets you only have two levels, the literal and the moral. There are a lot of difficulties with this way of distinguishing between the two types of allegory because both the allegory of theologians and the allegory of poets, even if the allegory of theologians refers to events, it's still words that we are reading. There is a way in which Dante seems to at one point dodge the whole thing of how can you really distinguish between the two modes-- the rhetorical modes--independently one from the other. And actually he goes on saying, really, the difference is in how you take the literal sense. In the kind of act of faith that you may have, that the literal sense--that the Bible is the word of God, then you are reading it theologically. 
But if you decide, and one might, to say that the Bible is really a collection of extraordinary poetic stories, then you are reading according to the allegory of poets. How is Dante circumventing this whole issue? He's circumventing the whole issue by saying, my story may well be taken as a story of allegory of poets, but it's also an allegory of theologians because the literal sense is 'I.' The historical sense is in me. I am the historical cipher moving through these experiences, and therefore, it is my life that they will give a particular sense, a particular truth value, to whatever poetic fable I may be relating. We only have taken care of one little problem here, very external to the story, the allegory of poets or allegory of theologians. It's time to decide that this is what is going on here, but, how are we to understand this threat of petrification. And you cannot really understand it, but I'm here to tell you. You cannot really understand it on your own, so you have to trust my words. The fact is that Dante had written in his youth a number of poems for a so-called Lady Stone. In Italian, it's not as bad as that, though 'stone' could be a good word in English, Donna Petra--petra meaning stone, and the passion--it's a description of a love that was unrequited, but a passion for this woman was such that he felt that his intellect would be petrified, that he could be--in other words he was unable to go anywhere. It's a statement of despair, if you wish, whenever you have this sense of a death that is going to take over and you are going to be paralyzed in your will, then this is a petrification. This is what I think is happening here. Dante is engaged in retrospection, to an experience of his past, and that experience of his past is now ahead of him threatening him once again. He has to cleanse himself, he has to move beyond it, but to explain better this idea of the closing of the eyes-- this is why Medusa, though he talks of Medusa there and this lady Petra is a kind of Medusa. Let me give you--read a little scene from--that I think really explains what's going on and prepare us to move forward to the next canto, Canto X. It's a little scene from the Confessions of Augustine. As you know, a book that Dante knew very well. Dante even goes on quoting it at very strategic places, so there's not issue of bringing it in gratuitously to explain this scene. It's actually direct--it could be viewed--forgive the reversal, but this would be viewed as a gloss on what Dante will go on writing. The Confessions is written with-- it's an intellectual autobiography: the story of a young man who will go on being fascinated by various schools of philosophy. He's a Manichean, and then he will turn into a neo-Platonist, goes on--is very flattered by a skeptical, rhetorical way. He's a professor of rhetoric--a rhetorical way of dealing with values and the world around him. And then in Book IX, he'll go on telling the story in the garden of Milan, the famous story under the fig tree, very emblematic. We could talk about these things, about why the fig tree in Milan, which leads many scholars to go on wondering, were there fig trees in Milan? Isn't the climate too cold? You need really the southern climes for that kind of thing, forgetting that the fig tree is always in the Bible. It appears as the tree under which the prophets go and rest in the mistaken belief that everything is over and that somehow a time for complacency may come. They're denouncing it of course. 
This is all done in a mode denunciation, and that's exactly where Augustine puts himself, under the fig tree and there he goes on experiencing a particular drama by reading St. Paul, etc. Throughout this text, though this is about neo-Platonists and Manicheans. As you know Augustine--as you probably know, Augustine goes on reflecting about his love of shows, the biggest one of them, at the beginning of Book III, is his love of theatre and his critique of the theatre. Now, why do I go to the theatre and how do I explain the fact that there may be some well-meaning young man who sees the maid in distress and jumps on the scene to free her. And he goes on talking about how can I be a spectator, what does it mean to be a spectator? What does it mean to be involved? And then there is another little scene. A friend of his, Alypius--Alypius is a Greek intellectual in the best sense of the word. A man who believes in self-mastery, in intellectual self-mastery, a young man who witnesses Augustine's own experiences. In narratives you always have Sancho accompany Don Quixote; there's always the other, more or less skeptical, who gives authenticity and who exactly will go on making claims for the truth value of what the narrator or the protagonist will go on experiencing. His name is Alypius. Alypius will eventually rejoin him. He's in Carthage. They grew up together. Augustine and he grew up together. Augustine goes to Rome. Alypius will rejoin him in Rome, and they go on from there, eventually going to Milan. When in Rome, Alypius does what nobody-- we would all do, first thing he wants to do is to go and watch the games played at the amphitheatre, at the Colosseum and the games are horrifying games to Augustine. He says, how can an intellectual such as you, want to go to the games where actually human beings are being thrown, for the delight of the crowds, are being thrown to beasts, to lions. Alypius, of course, he tries to justify himself. I really want to go, but exactly because, he'll say, but I really do, because I'll read you the whole passage: 'I will go but because I'm an intellectual, I promise that at the crucial moment when the sign is given for the animal to devour the human being lying there I will not watch. I will--I'm going to turn my eyes away and I will shut my eyes.' Let's see what happens. This is from Book VI, Chapter VIII. It's a great little story. By the way, a scholar of romance philology who used to teach here many years ago by the name of Eric Auerbach, a great Dante scholar and he wrote this book called, Mimesis, he reflects on this scene, not connecting it with Dante, but it doesn't matter. I read it first in his book and says, this is really--it's a little scene that marks the end of Hellenic rationalism. Let's read this; it's interesting just because of that, and then we'll see how we could apply it to Dante. I think it's very clear. "He had gone to Rome to study law," this is Alypius, "and there he was carried away incredibly with an incredible eagerness after the shows of gladiators. For being utterly adverse to and detesting such spectacles, he was one day by chance met by diverse of his acquaintances and fellow students coming from dinner and they with a familiar violence, hailed him vehemently refusing and resisting into the amphitheatre during this cruel and deadly shows. He thus protesting, 'Though you hail my body to that place,' this is Alypius, "And there set me, can you force me also to turn my mind or my eyes to those shows? 
I shall then be absent while present and so shall overcome both you and them. They, hearing this, led him on, nevertheless, desirous per chance to try that very thing, whether he could do as he said." "When they will come thither and had taken their places as they could, the whole place kindled with that savage pastime, but he," Alypius, "closing the passage of his eyes, forbade his mind to range abroad of such evil, and would he had stopped his ears also. For in the fight when one fell, a mighty cry of the whole people striking him, strongly overcome by curiosity and as prepared to despise and be superior to it, whatever it were, even when seen, he opened his eyes and was stricken with a deeper wound in his soul, than the other whom he desired to behold was in his body. And he fell more miserably than he upon whose fall that mighty noise was raised, which entered through his ears and unlocked his eyes to make way for the striking and beating down of a soul. Bold rather than resolute, and the weaker in that it had presumed on itself which ought to have relied on Thee for so soon the-- " the whole confession is addressed to God, for it's a confession, a witnessing to God so in case you are confused about the references. "For so soon as he saw that blood, he therewith drank down savageness, not turned away, but fixed his eye drinking in frenzy, unawares and was delighted with that guilty fight and intoxicated with bloody pastime. Nor was he now the man he came, but one of the throng he came unto. Yea, a true associate of theirs that brought him thither. Why say more? He beheld, shouted, kindled, carried thence with him, the madness which should goad him to return not only with them who first drew him thither, but also before them, yea and to join others. Yet, thence it did start with a most strong, a most merciful hand, pluck him and taught him to have confidence not in himself but in Thee. But this was after," and that's really the story. What is the meaning of this story? In the Confessions, I think that Augustine is very clear: the failure of the mind to master its own will. It's about the crisis. It's about the weakness of the will to begin with, but it's also the pride: the belief that one can rise above the contingency of temptations and be in control of oneself. And yet, it's a story of a temptation which he, Alypius, cannot quite resist. I think that this is exactly what's happening in Canto IX. Dante's dramatizing not only the failure of the intellect; he's already been talking about the early part of the canto, the failure of Virgil to guide him. He's discussing now the failure of his will, at least seen in--as an event of the past but clearly is seen as something that can happen to him again. The passage to the city can take place after this scene and now he enters into the City of Dis and against the walls of the city, he finds the Epicureans--again, those who do not believe in the mortality of the soul. Let me just read in Canto X some passages. By the way, each Canto X of the Divine Comedy, they're really are all cantos intimately related with each other. So if you were looking for a paper you want to connect Canto X of Inferno, Canto X of Purgatorio, and Canto X of Paradise, I would encourage you to do so. Let me just read a few lines. 
Dante asks who these souls are and the answer he gets is this, line 12, "All will be shut in when they return from Jehoshaphat," which is the valley in Jerusalem; the valley of Jerusalem where, according to the law, the resurrection of the dead will take place. That's where they would be meeting. It's interesting then, that there is this contrast in the canto between the so-called Epicureans, who do not believe in the immortality of the soul, and clearly, this view, this opposition as one could call it, between Athens, very classical, Athens, and Jerusalem. You may have heard of this, the city of Athens by the way, the word itself means immortality, the immortality of athanatos, the immortality of wisdom. Wisdom survives, but not the people. There is a kind of--the kind of contrast between the two cities, a very old, very ancient contrast. And now we have in this part, Epicurus and all his followers, what we call the Epicureans. Let me just gloss the Epicureans a little bit further than I did before. There are two types--in the mythography of the Epicureans, there are two types of Epicureans. Whenever we think about the Epicureans, we think about those, the vulgar Epicureans, those who think about--worship their stomach, the pleasures of food, an Epicurean in that sense. I think that Dante has dramatized that kind of Epicurean in Canto VI when he meets, remember, Ciacco, whose name means "pig." In fact we talked about the hogs of Epicurus, the herd of Epicurus. But then there is the noble version of the Epicureans, here in Canto X: those who are interested in intellectual pleasures, the pleasures of conversation, the pleasures of friendship, the pleasures of meditation. And they are those who do not--who remove themselves to the garden, do not seem to really care much about what happens around them, in the belief that they should really take--cultivate their soul and cultivate their own pursuits, take care of their own pursuits. That's what we have here. These are the noble, philosophical Epicureans, not the vulgar sort that believe in the supremacy of bodily pleasures. Nonetheless, pleasure is the aim of an Epicurean ethics, my pleasure. This continues, "in this part Epicurus and all his followers, who make the soul die with the body have their burial place." How fitting, how fitting is the punishment for this crime, this sin. It's perfect because these people never really believed in the immortality of the soul and they are condemned to be dead. That's what they think and they dwell literally in sarcophagus, in sarcophagi, entombed. That's how they appear. There is another little detail I have given you--sometimes we may wonder about the appropriateness of a punishment for a particular sin, but here we have no reason to really be surprised at all by this kind of destiny reserved for the Epicureans. "But for thy question to me, thou shalt soon have satisfaction from within there, and for the desire too about which thou art silent." Then they're interrupted. All of a sudden, now Dante's once again involved, and what's at issue here primarily is no longer the question of the immortality of souls, but the political aspect, the political implications of this kind of belief, of believing in the mortality of the soul: how is this refracted onto the political scene, as it were? Again, therefore, this is almost a Platonic conceit, the relationship now no longer between bodies and cities, as we saw in Inferno VI, but here between soul and city.
Is this a soulless city? What happens? How do we experience it? How livable? Which is--I mean it is also a pun, how livable is this kind of city? What happens and what is the--what are the relationship--what is the relationship between various figures? Dante singles out two people, one a Guelf and one a Ghibelline. We are in the middle of the civil war of Florence once again. It's going to be Farinata, a Ghibelline and Cavalcanti, the father, the old man, who is a Guelf. By the way, they're also related to each other because Cavalcanti's son, Dante's best friend--you remember he dedicates his Vita nuova to him, he calls him 'my first friend Guido Cavalcanti'-- had married the daughter of Farinata. They stand there in their tombs ignoring each other and each ignoring the pangs, worries, and perplexities of the other. It's a little picture of what we call "any city." This is the city in the beyond where everybody's squabbling. Nobody's paying attention to anybody else, and everybody believes that one's own passion, one's own concern is really paramount and foremost. There's nothing that can come near to it, so it's all--it's a canto that interestingly enough is marked by interruptions: one is speaking, the other says forget it, I got to talk now it's my turn. And so it is the--it is a little vignette of Florence in the year 1300 probably, or later but 1300 is a good date for us. So, he's interrupted, Virgil and Dante are interrupted by someone who says, "'O Tuscan who makest thy way alive through the city of fire and speakest so modestly, may it please thee to stop at this point: thy tongue shows thee native of that noble fatherland to which I was perhaps too harsh.' Suddenly this sound issued from one of the chests," and so on. So they go on, "Turn round, what ails thee?"--says Virgil--"See there Farinata who has risen erect; from the middle up thou shalt see his full height." He appears from the navel up in the tomb. Now, a little historical detail. There used to be in Rome, a church is still there, but it was already there in the year 1300, when Dante went on an embassy to Rome: the church so-called of the Holy Cross in Jerusalem which, according to the legend, was built with material, with stones from Jerusalem that had been brought to Rome by Constantine's mother, Helena. In the basement of that church, which would be opened only once a year around the Easter season, there would be a mosaic showing--and you can say it would only-- on Good Friday that the--that basement would be open. And that mosaic, it's no longer there so I cannot--I'm not encouraging tourism; it's just I'm giving you a little detail. There used to be a mosaic of the rising Christ from-- shown from the navel up and it's clear here that the representation of Farinata showing himself from the navel up is meant as a caricature of the belief in the Resurrection. There are two--this is the really--of the story of a man who doesn't believe in the Resurrection, and iconographically Dante will go on focusing, insisting on this--on the counter. This man doesn't believe in the Resurrection. That is another possibility of looking at it so there is a--the description is clearly meant to evoke all of that. There is a great exchange between them: who defeated whom, the continuous battles between Guelfs and Ghibellines and Dante claims that his own family managed to take good revenge when the time came. And clearly the implication is that more revenge, since Dante has been, will be necessary. 
They're interrupted by the sight of--by the old man Cavalcanti. Look at what the story of the canto is: Farinata worries about his ancestors; Cavalcanti worries about his son. So these are the Epicureans who have a sense of continuity somehow, a sense of dynastic continuity: all within the immanence of personal concerns and family. So they go--they move beyond the fragmentations of self from the others. They seem to have a kind of extended idea of themselves, in spite of themselves, in spite of their beliefs. This is what happens, an extraordinary scene: "Then rose to sight," line 55 and following, "beside him a shade showing as far as the chin; I think he had lifted himself on his knees. He looked round about me as if he had a desire to see whether someone was with me, but when his expectation was all quenched he said, weeping: 'If thou goest through this blind prison by height of genius, where is my son? Why is he not with thee?" The reference clearly is to Dante's best friend, to Guido, whose name also means that he should be guiding him. Maybe the old man--the old father was hoping that the son, according to his name, could really be leading the younger poet, as he had led him in his early poetic experiments in Florence. And then he answers, he's disappointed, and Dante answers, I answered him, "I come not of myself, he," doesn't even mention him, meaning Virgil, "he that waits yonder is leading me," so this is the pun on "guido"-- "through here perhaps to..." That's very unclear. The text here, my translation is "to her" and so does yours, but many other translations probably say something different. I would say "to one, your Guido held in disdain." It's unclear because the "her" would mean he's leading me, Virgil leads me to Beatrice, whom Guido held in disdain. Why would Guido hold Beatrice in disdain? Is this really the story of the Vita nuova? The antagonism between Guido and Beatrice? There's nothing that really suggests all of that. "To one" would be to God or to some aim that Guido held in disdain, we'll see what that means. "His words, and the nature of his punishment had already told me his name, so that I replied thus fully. Suddenly erect, he cried, 'How saidst thou 'he held'?" That's--Dante uses the past preterite, "he held." The old man, infers because of the use of the past preterite, that his son is already dead: a mistake, an equivocation. "He held? Lives he not still? Strikes not the sweet light on his eyes?' When he perceived that I made some delay before replying he fell back again and was seen no more." Farinata is unconcerned. He goes on saying, well yeah you drove us back but we drove you back, brings the subject to the political-- strictly to the war between Guelfs and Ghibellines so that we really have to ask-- Dante goes on saying, please before leaving, reassure the old man that his son is still alive, because by the year of the journey Dante-- Guido was supposed to be alive. Though he will die very quickly afterwards. What is this story of political disarray of Florence? And the story of the memory of Dante's friend with whom he had just--the friendship was just--had finished. The friendship was over. Dante, for those of you who are interested in the biography of the poet-- one of the early and toughest decisions he had to make was to banish Guido Cavalcanti from the city because he thought that Guido was the cause of some unrest within the city. Guido went into exile and never made it back. 
He died three months later in the swamps of near--of Liguria, a little bit north of Tuscany. Dante lives in many ways with a kind of guilt, personal guilt, I suppose. He won't talk about it openly about the--what had been happening between them. What is--how are we trying to understand this scene? Let me just give you some details about-- some other details about this canto. "If you go through this blind prison by height of genius..." There's a little bit of irony there. Cavalcanti clearly does not understand, nor can he understand, that Dante's journey in the beyond is not due to height of genius, but these are the philosophers and he has a kind of philosophical idea about how certain experiences are going to be possible. "Why is not my son with you?" Etc. "I come not of myself; he that ... to one whom your Guido held in disdain." Well, what has happened here? Who is Guido really? Guido is what we call an Averroist. Did you ever hear the term, Averroist? Probably not. Averroes, Dante actually mentions him in Limbo. He was an Arab philosopher and a famous commentator of Aristotle. And of all the texts, he commented just as now, in the Middle Ages they would be reading the great classics of philosophy, especially very difficult text such as the-- On the Soul of Aristotle with-- following the commentators. Averroes was known as the "Great Commentator." He was the great commentator of Aristotle's On the Soul. And he argues that Aristotle does not believe in the immortality of the soul. That's the argument that he's going to be challenged by Aquinas and by many others, but that's the primary--Guido Cavalcanti follows Averroes' understanding about the soul. The one who is here, a heretic so to speak, is not just the old man, but also Guido Cavalcanti himself. At this point, before I go any further telling you more about the who is an Averroist, what does it mean to be an Averroist, I really have to raise a point with you. What does heresy mean? Because I did indicate that in antiquity it was never really thought of as a sin because it's a question of mind. The word comes from the Greek haeresis, meaning "to choose." One who is a heretic is someone who makes a particular intellectual choice. To be viewed as a sinner, you have to also indicate some kind, an element of pride behind a particular belief and so Guido is held responsible for spreading, disseminating this idea of Averroism. What is then Averroism? Well one of the ideas that Averroes says is that we-- in the commentary On the Soul-- that we human beings are not even capable, intrinsically capable of thinking. That we are made--remember the famous structure? The diagram about the soul? That we are a concupiscent entities and sensitive entities. We're also rational entities, but rationality occurs to us intermittently. Thoughts, we even say that in English, 'a thought came to me.' That reminds me the best way of understanding Averroism: we don't think all the time, occasionally thought comes to us, and there's no way that really. And when we think we are really existing in a sort of break, a discontinuity from the world of feeling. So there's a fairly tragic understanding, making human beings the object of thought, not subjects of thought. We are not agents capable of producing thoughts, thoughts come to us and also tragic because it sort of presents a break between the sensitive part of our experiences and the rational part of experiences. We live like animals more often than not: we eat, we drink, we sleep and so on. 
Then occasionally we manage to disengage ourselves from all of this and capable of contemplative thoughts. At that point we no longer really live we are just--we are abstracted from ourselves, we are removed from ourselves. Not only Guido believed in these ideas, these ideas shape one of the most beautiful poems written at the time of Dante by Guido Cavalcanti himself, and the poem is called, A Lady Asks Me. I want to tell you what the poem is about. It's a poem where there's a fiction: the poet Guido Cavalcanti imagines that a woman, which may have been the case, asks him to define love. You poets are always talking about love, and I don't understand what you mean by love, and nor do I understand what the effects of love. And he writes a song, this long song, saying that love--a lady asks me to talk about the nature of love, the function of love, and the effects of love. And he goes on almost scholastically, taking one case after the other. He begins by saying, in the exposition, that love is a passion that comes from Mars, not from Venus. That is to say, the nature of love is always to be one of conflict and one of war and chaos, not one of an order, the benevolent Venus. He goes on too saying that it's--it induces death and it's characterized by deliriums of the mind. It's a very clearly, grim idea of love. What Dante's doing in this Canto X is connecting Guido's ideas of love and the politics of civil war. He finds that there is a strict necessary connection, a necessary correlation between the thinking about love of Guido Cavalcanti, whom Dante opposes. As you know from the Vita nuova, he believes that Beatrice can indeed be someone who can lead him to God and to the knowledge of God, in the persuasion that it is not by truth that you come to the knowledge of God. If you cannot come to the knowledge of God by truth, then how do you come to the knowledge of God? By love, by thinking about love: that's the way of the ascent. On the other hand, Dante will have this idea that the political-- that the order of civil--that this disorder, the civil disorder, the civil war is nothing else than the phenomenon of a theory put forth by the Averroist, by Guido Cavalcanti. This is really what I think the double focus of this canto: love and politics and the connection between them. A connection which, by the way, the Averroists whom Dante links with the Epicureans, deny. But he's making this connection, imaginatively, a connection denied by the philosophers themselves. With Canto XI and--Dante will go on. We have a few minutes and I can talk about this. Dante, as I said, explains the order of--on the face of it, the juxtaposition is clear. To the disorder of the city, we are now going to have a reflection, a rational reflection on the order that sustains the City of Hell. If there is any disordered place, it is as if there's a logic even to the disorder of evil. And the idea is that there is a tripartite division to Hell, the plan of Hell. All of the sins are divided into three parts, sins of incontinence that we saw from Canto III, IV, V, actually to Canto IX. The middle area which is called the area of violence and then the third area, the sins of fraud. And Dante calls fraud the sin peculiar to human beings because it's not just a sin of the will, but there is also the premeditation of the mind, the complicity of the mind, the sense of fraud which is also a sense of treachery. Dante sees the conjunction of will and, at the same time, the order of reason in the performance of that evil. 
The canto comes to--ends with a question. Dante says that this is all from the Ethics of Aristotle and then Dante wonders, look, he'll say, lines 90: "Oh Sun that healest all troubled sight, so dost thou satisfy me with a resolving of my doubts that it is no less grateful to me to question than to know. Turn back again a little', I said,' to the point." You know he's been explaining everything; actually he didn't explain everything. He never explained heresy, which we took some time to talk about and he never really explains-- Dante tells him, you never really say anything about usury. The point that's, "the usury offends Divine Goodness, and loose that knot." Why is usury--what is usury exactly? The question is why does Dante ask this question of usury? How does he answer it? We can understand why he asks about it. How does he answer what usury is? Let me just read this passage lines 98 and following: "'Philosophy, for one who understands,' he said to me, 'notes, not in one place only, how nature takes her course from the divine mind and its art, and if thou note well thy Physics," another text of Aristotle, the Ethics is mentioned for us, now the Physics, "thou wilt find not many pages on, that your art, as far as it can, follows nature as the pupil the master, so that your art is to God, as it were, a grandchild. By these two, if you recall to mind Genesis, near the beginning," the biblical book of Genesis, "it behoves mankind to gain their livelihood and their advancement, and because the usurer takes another way he despises nature, both in herself and in her follower, setting his hope elsewhere. But now follow me, for I would go; the Fishes are quivering on the horizon and all the Wain lies over Caurus and farther on there is the descent of the cliff.'" To explain the sin of usury, Dante puts forth a theory of art. That's what's happening, as if usury were a violation of art. How does he understand art? Art, what is art? He understands art as work, that's the best way to explain it. Talking about the beginning of Genesis when a human being-- when Adam was thrown out of the Garden of Eden and was told that in order to recover, retrieve the garden, he had to go back to work, that work becomes an ascetic--not a punishment. Here Dante doesn't see work as a punishment, but an ascetic exercise whereby one can regain or transform the wilderness into paradise. That's really the idea, but I think there is more that is happening here in this connection. This is the general thrust of the canto. What is art in the Middle Ages? You may want to know because first of all, I did say that there is a general coherence. You remember those were my initial words when I started today's class from Canto IX to Canto XI? Art is understood by the Scholastics as a virtue of the practical intellect, in the order of making, a virtue of the practical intellect. You may go, what is this practical intellect? How many intellects do we have? Well there's a speculative intellect. When Dante talks about the immortality of the soul and those who do not believe in the immortality of the soul, that's a question of the speculative intellect. If I went on thinking about God, suppose that I had this weakness of mine to think about justice, for instance, an abstract of idea, justice, not particular cases of justice, then I'm involved in an exercise of the speculative intellect. 
He ends the canto with the practical intellect, an emphasis on the practical intellect is the mind that worries about doing or making, and they are not the same thing. What is the difference? To say that there is a practice intellect in the order of doing would be to worry about when you talk about prudence: a virtue of doing, because it's not the artisan's work. To say that it's a virtue of practical intellect in the order of making, it means that the work of art is a thing that one elaborates. From this point of view, the issue is never really one of--does it tell the truth about whatever. It has its own thingness; it's a thing, the work of art is something made and therefore as made, it has its own reality; it has its own laws; it has its own rigor. That's one thing. It's work, but look at all the images that Dante is using to reflect on this problem. "Philosophy, for one who understands,' he said to me, 'notes, not in one place only, how nature takes a course from the divine mind and its art. And if thou know well thy Physics," which is it's a theory of nature really, it's a theory of motion, it's a theory of how things grow, how things are born, grow, and perish, "thou will find not many pages on, that your art as far it can, follows nature as the pupil the master, so your art is to God as it were a grandchild." I just want to talk about these metaphors here to make you understand what Dante--how Dante understands art. On the face of it, he's saying that art must be an imitation of nature. You have followed nature. Does Dante then have a mimetic idea of art? Not at all, not at all, because look at the metaphors he's using: two metaphors. "Your art, as far as it can, follows nature as the pupil the master, so that your art is to God as it were a grandchild" because art follows nature, nature is the child of God, so art is a grandchild, but it's an image of fecundity and fertility. You can understand why Dante opposes art, finally, to usury. Usury is viewed as the activity that is sterile, an activity that produces what's symbolic, money out of money. So it's a symbolic kind of operation; as opposed to it, Dante--opposes to it, Dante casts the virtue of art as work, but one of production, one of generation: the "grandchild" of God. Art is the grandchild of God. Then there is this other metaphor, "as the pupil follows the master," which is not a gratuitous metaphor, because after all within the context, here Dante has Virgil who is teaching him, so there is almost a kind of flattery, if you wish. One little detail, he's flattering the relationship. He acknowledges his discipleship to Virgil, which he does all the time, but it implies the educational aspect of art, in the most etymological sense of the term. Art educates, in the sense that it leads us out of a particular state of barbarity, ignorance, darkness, etc. So it has an educational and a non-mimetic role, because art imitates the productive rhythms of the natural world. Dante views an art that is open to beginnings, to life. This is the meaning of Genesis, the idea of Genesis, an art that always--that is original, but not in a romantic sense of originality, but an art that leads us back to the thought of origins. The thought of how things come into being because only then, do we understand how--what the ends are. To understand the ends, we got to know something about the beginnings, about the seeds that make us whatever it is that we think we are. 
We have gone then from Canto IX to Canto XI, from concerns about the pride of the mind, which we could even call the wound of the intellect and the weakness of the will, to the view that there really is no distinction between an Epicurean thinking about oneself, the state, and some kind of theorizing--Dante sees a connection between them--to finally an idea of art. And I think that this idea of art is also for Dante remedial for the evils to which we are prone. Dante thinks that should we apply to ourselves the same kind of care and rigor with which an artist produces the work of art, then what we call the cultivation of the soul may indeed begin to take place. Art--Dante's attention to art is part of this ascetic exercise. Let me finish here and we'll give another few minutes--I touched on a number of issues and I'm anxious to hear your perplexities, questions, comments, suggestions, whatever. Yes? Student: With the heretics, he makes it so clear that they sin, because it's both intellect and will. It's because--with the heretics he makes it clear that they sin both because of intellect and will, it's not just because they're thinking something false. Prof: The question is, with the heretics, that we are not really dealing only with the question of thought, or the freedom of thought, but with the fact that the heretics are engaged in acts of intellect which are also supported or shaped by acts of the will; that was exactly what I maintained. Student: Then that makes sense, but then does he ever really try and explain to them--I mean everyone wonders about the virtuous pagans and why they're there, and it seems that theirs is just a failure of intellect and there's really no failure of will. Prof: For the--for Dante you mean? Student: Yes. Prof: The question is about the pagans: does Dante think that theirs is not just the failure--if I understood the question, the failure of the intellect but also the failure of the will. Yes, I think that this is the case. We shall see a number of pagans where probably we can highlight--we can see highlighted some of these concerns that he has. Concerns--you will read Ulysses, who is represented as engaged in a flight of the mind, the wings of the intellect. You know, they are wings of desire, Platonic wings of desire. And then Francesca, Canto V, remember she is like a dove, etc., called by desire, the open wings, clearly the wings of desire. And then in the case of Ulysses, you're going to have the wings of the intellect, the mind that tries to reach, the flight of the mind. He, too, appears as a rhetorician. That is to say, it would seem that we like to believe that there is a distinction between, let's say, a metaphysical--a kind of metaphysical, intellectual flight--and Dante's always saying, look, he is always trying to probe the presence of passions, the rhetorical aspect of the claims to reach the truth, or the plane of truth. Yes, am I answering your question? Student: Yes, I mean it's also just they were put there because they did--Christ hadn't come yet, which is why they were still in Hell, why Virgil came to Hell? I'm just wondering is that like-- Prof: Okay, good. I have been missing the mark in answering and the question really is how--are the pagans in Hell because Christ had not yet come, and in what way did they violate-- Student: That just seems the failure of intellect rather than-- Prof: The failure of intellect? No, the answer is no.
Let me give you a general idea of why it's no, and then a particular idea. First of all, generally, if Dante were to judge--and he does judge--the world, the culture of, let's say, Greece, the classical world, he puts most of it in Limbo. That's a judgment, saying you're really marginal, you're really liminal to my story. That's what he's saying, though in Paradise he will go out of his way to reclaim Plato, Aristotle, and all the possible Aristotelians. If he were to do that from a perspective which is outside, he really would be a boring poet, in my view. You don't believe in the immortality of the soul--as if he were saying to Epicurus--I do, you are a fool and therefore I am saved and you--I put you in Hell. That's really not worthy of a great mind, because one can imagine Epicurus saying, well, you are the fool and you think you are saved; I know what I have been doing. You can take sides. Dante never does that; what he does do is take the perspective of the sinners themselves. He lets them talk and gives them a rope with which they hang themselves. That's really what's happening; that's the best way to present this argument. He's not judging the pagans--now I'm coming straight to your question--he's not judging the pagans from a Christian standpoint alone that is outside of it. For a number of reasons, because he believes actually that whatever--that the pagans are adumbrations of the Christian view, the Christian vision. They are not just "other" to be rejected, on the contrary. And then in particular I can say that Dante will go on praising some pagans among the saved. He even praises one figure, the so-called Ripheus, who was only mentioned once, and we know nothing about him, a Roman who was a sailor in--with Aeneas, and only because Virgil refers to him as justus Ripheus, the just Ripheus, which is, to me, a way in which Dante says that not just the kings but every simple, humble worker can also be saved--Trajan, the emperor Trajan, and so on, and a number of other cases. In the case of Virgil, I'm not going to go there, because I'm going to be on TV for the next six months, I don't know, because reams of books have been written by people who wish to see him saved. I mean, he's such a good guy throughout Inferno and Purgatorio, and there are people who go on arguing that he might--he may be saved. One thing we know is that he comes as far as the Garden of Eden, and just as Beatrice is going to arrive, the pilgrim, the lover now, Dante, trembles at the idea that here she is, the destination of his journey, and he needs the help of Virgil. He turns around and his eyes will never see him again. He had vanished, so we know that--there is a kind--that seems to be the limitation of how far he can go. To say this is really to say very little, because within Dante's cosmos, Dante has an idea of the curvature of space. This is the sphere. Redemption means that all things will go back to the beginning; that's what happens. So only from that point of view--we don't have a lot of thematic reflections about this aspect--who is going to be saved at the end of that, who knows. The whole question is the unfathomable quality of God's justice. Dante wonders, though--we're talking about Christ and Christ--and what about those living near the banks of the Indus? They never heard the name of Christ. Are they going to be saved? They are just; can they be condemned? These are questions that he will not answer. He raises these questions in Paradise.
You're a little bit impatient I say but that's-- I think that from the point of view of eschatology, he must have an idea of--otherwise Redemption has failed. If some people are damned then there is no Redemption, you see what I'm saying? The measure in which evil--if you get into this metaphysical framework, if there is going to be residues of sin, then there is also residue of injustice. I don't know that I answered your question. I think I did but--Yes? Student: Thinking about the Medusa, I understand the idea of Medusa as an allegory for the mediation of poetry, but is it also a connection between the figure of the woman and the act of petrification? Thinking about the Donna Petra, I'm wondering if Dante's also saying something about the dangers of love or the dangers of misplaced love. Prof: Thank you. The question is about the story of the Medusa and the part about the mediation of poetry to conquer Medusa is clear, but the question also wonders whether this has to do with, first of all, the fact that the Medusa is a woman, and also that petrification is the petrification of misplaced love. Is that an accurate paraphrase? Student: > Prof: Okay, the answer is that's a great question. I sense there your awareness of the Freudian reading of the myth of the Medusa as castration, as indeed a kind of literalization of the threat, at least, not castration but the threat of castration on the male from the point of view of Medusa. I will take the second part of your remark to explain also the first part. I think that this is a question of misdirected love, but what Dante has seen--I would urge you to go and read the poems about this lady Petra, the lady of stone--because they are poems where Dante literally engages in fantasies of violence against her. If I could get my hands on her with a passion, unrequited makes him--it's almost like a sort of sadistic coloring about it. I think it's clearly misdirected love. What he understands though is the kind of death that that sort of desperate love has brought to him. You call that--you can view that death as the fear of castration. I think it's an extreme version of that, so I would agree with what I hear is behind your question. Yes? Student: So why does he draw us into that allegory of Medusa and petrified love? Prof: Well he-- I--you mean if it's--the question is why does he draw us into that allegory if it's about petrified love? Right, this is the question and I presume-- I don't want to explain your question, I have to answer your question, but I presume that your question stems from your-- this kind of concern you have. If it's such a private story, why us? In what way are we involved in that? This is his story, right? The answer that I could give is yes, this is--can be seen as Dante's own confession and the confession and that's why I read the passage from Augustine, from Augustine's own Confessions: a confession that can also be exemplary to us. And because he thinks, I think, that there are no experiences that are irreducibly private and therefore unshareable. It's part of the concern of this writing--of this writer, that anything that will happen to him we are--I have to say this. We are going to find moments when he tells us its night dreams, pretty heavy night dreams, but treated with an extraordinary, I think, care. We are going to have his--we are going to enter into his psyche, why part of--way of entering into his psyche? 
Because he has to understand not only how shareable his experience is, but also, what is the root of the way we make decisions? Do--can we really be ever all the time vigilant? This is in the case of the dream. Am I responsible if I perceive the world in a certain way? Am I responsible if dreams come to me in a certain way? Then, am I accountable for this kind of dreams? The concern is always that of trying to delimit an area where there is some common ground between his experiences and ours, and that's the transaction of allegory. Yes? Student: At the end of Canto IX, I don't know if it's significant, but Dante turns to the right as he enters the City of Dis, did you find that significant just going back to what we talked about last time? Prof: Of course, this is fantastic. The question is I have been talking about the fact that in Inferno Dante goes to the left, seems to--which I also said though that that is really very difficult to visualize, so we say counter-clockwise or clockwise, that his descent is clockwise in Inferno. And yet, I'm so grateful to you for noticing this. Here in Inferno IX, Dante actually goes out of the way to say that he turned to the right and the-- I don't have a position of my own on this but there will also be-- well this is to emphasize that heresy is primarily the disease of the intellect. You remember that I specified that there is the will and the intellect, and the will is always the left and the intellect is the right. I think that Dante is--this is the point where Dante is sleeping. It happened to Homer occasionally to fall asleep and this is the only time that you have caught him dozing off. Thank you so much we'll see you next time. |
english_literature_lectures | The_English_Industrial_Revolution_I.txt | Okay, so the last three lectures here, what we're going to look at is the great event in world history, which is the Industrial Revolution. And it turns out that by 1850 Britain had gone from being this tiny peripheral country in Europe to being the major world power, this kind of colossus astride the globe. And the thing that's interesting about Britain is it's only 0.6% of the land mass of the world, and even by 1850 it only had something like 1.8% of world population; probably at the dawn of the Industrial Revolution in 1760, maybe one person in 200 in the world was actually in Britain. And yet by 1850 it's alleged that Britain was producing two-thirds of the world's coal output, half of the iron production in the world, and something like a half of the cotton textiles that were produced in factories. And so, described in this way, it just seems like this amazing event which transformed this country. And at the same time Britain became the world's major military power. It had a navy which was the largest in the world; British naval doctrine actually called for it to be bigger than the next two navies in size, so they could defeat any two other countries combined against it. It had incredible colonial possessions--Canada, New Zealand, Australia, large chunks of India and Pakistan, Ireland, parts of China, parts of Malaysia--and so it had kind of spread across the globe. In 1842, in the Opium War, it forced the proud Chinese Empire to cede Hong Kong and allow the British to import narcotics into China, because its possession of India meant one of the big goods being produced there was opium, and the market for opium was in China. And so this is why there's a long history of a kind of Chinese nationalism, because of the humiliations of the 19th century. There was a question here--1842? Okay. And then by 1860 the British and French together marched into Beijing, captured the capital, and forced even more treaty ports on the Chinese, and by 1855 Chinese tariff revenues were actually collected by the colonial powers and then remitted to the Chinese government; the Chinese government wasn't even trusted to collect its own tariffs, and tariffs were set at these very low levels in China to stop them from blocking the importation of manufactured goods. And it turns out that the British colonial pursuits in this period were actually largely devoted to the prosecution of free trade. The British government actually preferred not to occupy other countries; it just wanted access to the markets of other countries, and it tended to move in when the local rulers became protectionist or were unable to administer the territory. So actually what the British wanted was the world market at that stage, and we'll see why when we look at the growth of the textile industry in Britain. Britain was the major producer for the world textile market, and even as late as the late 19th century, an area within 30 miles of Manchester in England was producing about 40% of the entire world output of cotton textile goods. And so you had this incredible transformation of Britain's position in the world. At the same time, for example, Britain in the case of India entered into a free trade agreement with India which allowed goods and capital to flow back and forth between these two countries, even though in the 19th century Indian wages were
somewhere between a quarter and one 16th of those in Britain and yet the British didn't fear uh entering into this uh free trade uh Arrangement and so it's similar now to say the United States and Mexico where there was all this doubt and worry in the United States about the effects that that would have on the US economy uh the Industrial Revolution actually also transformed Britain itself this's a very rough sketch of Britain all the way from the medieval period the bulk of the wealth and population of England had been concentrated on the more fertile lands in the South and the East here and London was the the capital and the giant City uh of Britain it turns out that most of the growth in the industrial revolution period in terms of the high productivity Industries actually occurred in the north of the country and in Scotland and so this traditionally backward area of Britain became the heart of the British economy uh and towns like Manchester up here uh had incredible growth in the industrial revolution period at the start of the Industrial Revolution period Manchester only had something like 17,000 people by 1830 it's up to 170,000 and so there was this enormous growth of wealth and population in the the north of the country and one of the puzzles actually in explaining the Industrial Revolution if you want to explain it in institutional terms is that had Britain been divided politically here on this line the Industrial Revolution would have been regarded as a phenomena of this country here the south of England had the same relationship it seemed to the Industrial Revolution as France or Germany had and so another puzzle is that it's not just this tiny country that's being transformed it's actually a Subs sector of this uh tiny country that's going through this transformation and interestingly by the way the the north continued to dominate the British economy until World War I somehow the events after World War I caused a relatively rapid collapse in the Northern Industrial sectors of Britain and the e reemergence of the South as the dominant part of the country and so now the wealth and the income is all concentrated in the south of England and it actually shows up in terms of you know on the south coast here there are Towns now in England where life expectancy is 10 time 10 years higher than it is in my hometown in Glasgow in Scotland uh and that's just reflection of now this this this traditional pattern actually returned so that this area had a period of something like 100 150 years where it was this this kind of advanced part of the world and then somehow it all reverted again after World War I and so this account of the Industrial Revolution makes it seem like some incredible detective Story I mean what happened in this tiny part of the world which represented as would say an entirely new phase of human history in terms of the rapidity of economic growth and the rapidity of technological advance and it makes it seem seem like there must be some secret you know it's like uh the the the the Holy Grail or something of economic growth is going to be dug up from a hillside in Manchester and uh and one that's one of the puzzles in explaining the Industrial Revolution is why would it be in just in this particular uh part of the world um but it actually turns out that the Industrial Revolution is actually quite a complicated series of events and in fact we'll see that there are actually Four separate revolutions that somehow coincided in Britain in this period we'll see that sometime 
after 1760 there's the classic events of the Industrial Revolution which was a technological transformation of industries by means of Innovations and we'll see that there's a set of industries that are transformed in this period the most important one being the textile industry and something like 60% of all the economic growth in Britain in this period can be attributed to that transformation of the textile industry and so you get then that's the classic Industrial Revolution where something happened in terms of innovation in this economy and led to this dramatic change right and that growth in the textile industry also was very important in explaining why Britain then had this huge political interest in dominating the rest of the world because very quickly they were producing so many textiles that they had produced enough for the entire British population they needed a market in the rest of the world in order to sell these textiles to and we'll see that just becomes the dominant industry in Britain in the 19th century so that's the classic industrial part of the Industrial Revolution but here's this amazing coincidence British population in 1760 was not much higher than it had been at its medieval maximum in 1315 in 1315 there's about uh 6 million people in England uh on the eve of the Industrial Revolution it's not much beyond that level so there's actually been very little change and this is the kind of static malthusian world and is also suggesting there's been relatively little productivity advance in Britain all through this period because remember population growth is a measure of of productivity Advance population began to grow very rapidly in Britain in the industrial revolution period and British population uh is more than twice as big by the end of the Industrial Revolution and Britain changes it becomes a much bigger country and it becomes also one of the most densely populated parts of the world as a result of this population boom which occurs in the industrial revolution period and one puzzle is why would these two events coincide we'll see that that population boom had to do with just the kind of the basics of marital Behavior by people all across the English economy and the puzzle is why through all of history would you have a coincidence and it's really it's within something like 10 or 20 years The Coincidence of these events uh where suddenly population begins to Boom at the same time that you have this Industrial Revolution and then a third thing also happened in this period which is that that higher population in Britain somehow had to be fed and Britain begins to buy 1850 15 1820 it starts to import food on a large scale and now it's a country that's that needs very substantial food uh Imports uh but most of that population was actually fed by English agriculture in England and that seemed to imply that there also had been an Agricultural Revolution at the same time as the Industrial Revolution because people were richer in the course of the Industrial Revolution richer people as we eat more food they want more meat they want more butter uh there's a much larger stock of people and it seemed that the output of the Agricultural sector in Britain roughly at least doubled and maybe tripled in the course of the Industrial Revolution and so it raises this other and and but it turns out we'll see that that agricultural revolution had nothing to do with mechanical Innovations and also in the industrial revolution we can actually locate the people that launched the Industrial 
Revolution. We know the people, the men principally, who were responsible for that change in the modern world. In agriculture we can't find anyone who was actually responsible for this; it must have been just thousands and thousands of small-scale farmers figuring out how to do things better. And again the puzzle is that the Agricultural Revolution exactly coincides with this revolution and this revolution, and so the puzzle is, why is that happening? Right--agriculture seemingly had very little productivity advance over the past 500 years; now it's being transformed. And so it's raising this puzzle about, well, you know, why is the whole of the British economy energized in this period? What happened to this society that caused this transformation, and how could it be such a relatively sudden transformation? And then the last of the great events in this period is that there's also a transport revolution within the British economy. Transport in pre-industrial Europe was typically, notoriously, slow. We actually have good evidence in terms of speeds of travel, because there were coach timetables that were published starting from at least the mid 17th century, and so we know that the average speed of coach travel in Britain in something like the 17th century might be 2 miles an hour. Right? I can walk at four miles an hour, right? And so the average speed of travel across the economy is actually incredibly slow, and as a result of this people really don't want to travel long distances. So Queen Elizabeth, I believe--I won't have the exact number right here, but she reigned for something like 42 years at the end of the 16th century--never made it further north than about 200 miles from London, even though the northern border is 400 miles from London. In her entire reign there were large chunks of the country that she never even saw. And so travel was very slow; it's very expensive because you have to stop all the time overnight in inns. In the 17th century it took six days to get from London to Edinburgh, which is about the same distance as from here to Los Angeles, and so, you know, it's surprisingly slow, the speed of movement of people. What happens in the 18th century is that the cost and speed of travel on traditional road networks in Britain improves very dramatically, and the thing is, though, this is mainly an organizational revolution. They actually figured out a way of--how will we build good roads? Again, they knew how to build good roads ever since the Romans, right? The Romans left behind a bunch of roads in Britain when they evacuated the place. Then the road system deteriorated in the Middle Ages, and one of the problems simply was that the way the roads were paid for was that the local village that the King's Highway ran through was responsible for maintaining it. The villages often had no interest or no means to do that, and so the road network was terrible, and in particular in winter the whole road system would deteriorate once the rains came. And in some areas actually, when roads ran through open fields, these non-enclosed fields, people would start striking out into the fields trying to find an easier way through, and so in some places the roads would be hundreds of yards wide but just full of rutted, water-filled holes. But they knew actually how to make much better roads in this period; there simply wasn't the organizational structure behind that. And what they actually did was, effectively, they privatized the road network in Britain in the
18th century, and simply said: you can build gates on these roads, and you can charge people to use them, and you can use that money in order to maintain the road network. And with that they actually, very rapidly, in the course of something like 20 years between 1750 and 1770, remade the entire road network in Britain. And again the puzzle comes up: why would they build something like 15,000 miles of new roads just on the eve of this technological advance, this demographic revolution, and this agricultural revolution, all occurring at this time in Britain? Right, I mean, why? Yeah, I mean, and as I say, that's what creates this incredible puzzle: why would you stagger through world history for 100,000 years, never achieving any major sustained period of technological advance, and then get to this minor country on the edge of Europe in 1760 and suddenly, it seems, everything's happening? And yet we actually know that the political system of Britain is largely unchanged, right--that there had been this significant change in 1689, but nothing happened in the British economy for the next two generations until we get to 1760, and it's a period of no major political changes within the British economy. And so the obvious kind of easy institutional explanations just don't look that plausible; it has to be something more protracted, more prolonged. And as I say, one of the puzzles is, if the Industrial Revolution really was such a sudden event, it's going to make it an incredible mystery in terms of why it was that the world was transformed this way. So what we're going to do first, then, is just go through and describe a little bit the details of what actually happened in these different revolutions, and the first one we want to look at is the actual classic Industrial Revolution. And here it turns out that there's actually a list of industries that are transformed. The first one is textiles, and that's the most important; as I say, about 60% of all the growth of output in Britain is going to be attributed to the textile industry. The second one was the introduction of steam power into the economy, and as I say, another of the puzzles here is this is a completely separate technological development from this one here; steam power is a completely different technological idea than textile production. And in fact steam is used to power these textile mills, but you could have actually powered all of these mills well into the Industrial Revolution just by using water wheels, right--there was plenty of water power available in the British economy. And that's one of the things, then, when people try and explain the Industrial Revolution as coming from the fact that Britain had coal, because you use coal for steam engines: one of the problems is that, you know, a country like Japan, which didn't have easy access to coal, had tons of water power, and it had a cotton industry at the same time--a much bigger cotton industry than Britain had on the eve of the Industrial Revolution. It had all the prerequisites in order to have its own Industrial Revolution, right? And similarly China had a very significant cotton industry, India had a very significant cotton industry in this period, and so the question was, what was the particular advantage, if it was going to be some geographic feature? It turns out steam power is very--it is important in this period of the Industrial Revolution, but it itself actually is completely separate, and it's not really required for this. By the way, these early steam engines were actually used to pump water back up through
water wheels, right? So you would run the water down, and then you pump it back up the hill, and then you run it through the water wheel again, because they needed smooth power for these machines and the early steam engines were too shaky--I mean, they would have shaken the machine to bits--so you actually just used them to push the water back up the hill and then run it through the wheel again. The third revolution, the industry here, was iron and steel; the fourth one was the railway; and then the fifth one is coal mining. So these are all sectors of the economy which grew dramatically in this period and had significant technological advances, and what's interesting here is that actually these are all interconnected. And so another argument here might be that it's just accident that you had a set of technological advances that then produced technological advances in other areas--but these are not connected with the main technological advance of the Industrial Revolution, okay? And the reason these are interconnected is Britain had these reserves of coal and was mining significant amounts of coal, right? And coal mining creates a set of technological problems as the mines get deeper. They had huge amounts of coal in the Northeast, for example, but they knew you had to keep going deeper to get it--you exhausted the surface seams and then you go down, and they knew there was more of this stuff down there, but, you know, by the time they're into the 19th century they're running a thousand, 2,000 feet down into the ground. And then the problem you get is, when you drive that pit shaft down, it's running through underground strata that have water in them, and the water leaks into the shaft and it's going to drown your mine workings unless you pump that water out. So the barrier is: the deeper you go, the more water you're going to get in the pit and the more you have to pump out, and so that's what the early use of steam engines was for. And so they had this technological problem; they needed something. They could do it with horses, but it takes a lot of horses to do that. And the thing is, early steam engines were incredibly inefficient in terms of the conversion of the coal into actual usable power--they have something like a half percent efficiency, right? By the late 19th century they have steam engines that are 25% efficient, right? But it took a long time along that technological path to develop those, so that the early steam engines could not work in any area where fuel was expensive. But one of the features of coal mines is there's a huge amount of really cheap fuel--there's a lot of waste coal just sitting at the pit head; most of the cost of coal is actually the cost of transporting it from the mine to the final consumers--and so that's why there's a connection between coal mining and steam power, right? And then it also turns out that once you're producing a lot of coal in the economy, it creates a lot of demand for transportation of heavy goods. Also, railroads actually developed initially in the coal mines--the idea of transporting goods over a railroad; they actually had an extensive horse-hauled railroad system in the northeastern coal mines in the period of the Industrial Revolution. And then the other thing is that iron and steel--to produce that in large quantities was not possible in Britain before the Industrial Revolution, because the traditional iron and steel industry used wood to make charcoal to make iron and steel. Britain had chopped down most of its forests by 1760; there's very little wood left in the economy.
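As a rough illustration of that point about fuel costs, the two efficiency figures quoted above (roughly half a percent for the earliest pumping engines, roughly 25% by the late nineteenth century) imply that an early engine burned on the order of fifty times as much coal per unit of useful work, which is why it only paid where waste coal was essentially free. A minimal sketch of that arithmetic, using only the figures from the lecture:

```python
# Illustrative arithmetic only, using the efficiency figures quoted in the lecture.
early_efficiency = 0.005   # ~0.5% thermal efficiency for the earliest pumping engines
late_efficiency = 0.25     # ~25% for late-nineteenth-century engines

# Coal burned per unit of useful work scales inversely with efficiency,
# so the ratio below is how much more coal the early engine needed.
fuel_ratio = late_efficiency / early_efficiency
print(f"Early engines burned roughly {fuel_ratio:.0f}x as much coal per unit of work")
# -> roughly 50x, viable only where waste coal at the pit head cost almost nothing
```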
left in the economy. Iron and steel then is produced in Europe, in places like Sweden and in Russia, but another of the key technical developments in this industry was to figure out how to use coal in order to produce iron and steel, and once you figure that out, you've got this incredible supply of coal and you can start producing this on a very large basis. So this set of industries actually has these interesting interconnections, where it would be possible to say the big thing the British had was that they were sitting on a mountain of coal, right — they're sitting on a pile of coal, and that's what allowed them to transform their economy — except for the fact that you've got this other, completely separate revolution going on here, which had very little connection with these other industries, and where the only interconnection really was that you used coal eventually to power these mills. But the cost of power for the textile mills is, you know, something like two or three percent of the total cost of production, so even if you'd had to switch to water, and that was two or three times as expensive, it wouldn't have dramatically impacted these mills. So that's the array of industries. Now let's start and talk about textiles specifically. Textiles, and particularly cotton textiles, in this period was the equivalent of the computer revolution that we've had in the recent world, and they had their Silicon Valley, which was the area around Manchester in this period. And to see how dramatic this transformation is: we know exactly how much output they were producing in textiles, because the whole industry depended on cotton, which was all imported into Britain, and we have the complete records of how much cotton was imported, and to produce a pound of cloth takes a fixed amount of cotton, so we know exactly how much output they're producing in different periods. And so in 1760 they had a very small industry which imported about 3 million pounds of cotton for use in England, and as you see, it's a minor textile industry — and a lot of that cotton would actually be used for candle wicks and various other purposes, not actual cloth. By 1850 that industry was annually employing 621 million pounds of cotton. Already by the 1830s cotton alone was something like 20% of the imports of the British economy, and cotton goods were about half of the exports of the British economy. So it just became this giant industry, and it had these global political consequences, because as the industry developed you needed a reliable and cheap supply of cotton, and so the growth of Britain in the Industrial Revolution period is tightly connected to the development of slavery in the US South, and the productivity advances which were made on the slave plantations were also very important in terms of supplying much cheaper inputs to the industry. So again, another argument that is being made about why this revolution occurred in Europe as opposed to in Asia: well, there's this connection then between the colonization of the Caribbean and Brazil and the southern parts of the United States, and the need for a huge supply of this raw material to the industry in Britain. The cotton industry, as I say, was essentially zero percent of GDP in 1760; by the mid-19th century almost 10% of output in the British economy is cotton textile goods.
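A rough back-of-the-envelope calculation — just a sketch using only the two import figures quoted above, and assuming the cotton-per-pound-of-cloth ratio really was roughly fixed — gives a sense of the pace of that growth:

growth multiple, 1760 to 1850: 621 million lb / 3 million lb ≈ 207 times
implied average annual growth over those 90 years: 207^(1/90) − 1 ≈ 0.06, i.e. roughly 6% a year, sustained for nearly a century.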
But it's not just the cotton textile industry, because there's a whole bunch of other industries which use different fibers, which are all then starting to be transformed. So cotton goes first, but next is linen, right, next is the wool industry; there's also a jute industry, which uses jute from India to produce heavier things like floor coverings and sacking; there's also a silk industry, which is similarly being transformed in this period. And so it's a revolution that sweeps through all of the textile industries, and each fiber is different — they have different technological problems in terms of mechanizing their production — but gradually all of them are transformed in this period. And as I say, the interesting thing about this industry is that it's been intensively studied, so that we know the personalities who actually transformed it. And so the next thing we want to do is draw up a little kind of table, which is going to just list — and it's in the book if you don't get any of the details here — the innovator, the device, the year, and the result. Okay, and the first big innovation is actually a little bit before the classic Industrial Revolution, by a guy called John Kay, and this is actually a revolution — but remember, in this period there is no cotton textile industry, it's tiny, so this is actually to do with the woolen industry. And so he's a weaver, an artisan craftsman in the north of England, and the interesting thing about this industry in this period, as we'll see, is that the people who transformed the world are not scientifically trained; they're local mechanics, they're tinkerers — go to the Davis bike store, right, and those are the kind of guys, or, you know, your local car mechanic — somehow these guys are transforming the world, right? It's not actually coming from the universities; the universities have academic learning that is of very little value, mostly, in this period. There are a few exceptions — James Watt is the one person who actually is coming from a university background. So John Kay is there, and his device is actually called the flying shuttle, okay, and I'll explain in a second exactly what the flying shuttle consisted of — and that was actually earlier, it's 1733 — and what was the result? Poverty. And so first of all let me explain exactly what the flying shuttle is, okay, and the reason we want to go through these innovations in detail is because I want to illustrate that these are things that could have been invented a thousand or two thousand years earlier, some of these things, right? That's one of the puzzles here: it's not, you know, that you needed calculus or anything to do this, right? It's got nothing to do with the advances of science; it's simply people thinking, hold on, there's got to be a better way of doing something like this. So what is the flying shuttle? Well, in weaving cloth you have these long threads which are called the warp, and then you have to interweave the cross threads, which are thinner and less strong, which are the weft, right? And so I'm sure in your English classes you've heard the expression about the warp and weft of life, you know, the interweaving of all those elements that makes up the fabric of human existence — no? Okay. Anyway, so you have the warp and you have the weft, and you've got to insert the weft here, and what happens, if you look at the loom horizontally, is all these threads are lying here, and on each insertion you lift up a different set of the threads and then you fire across the weft, okay, and so this is called the shed. And the early way of doing this was that the weft
thread would just be on a little bobbin, and you would throw it through the shed and catch it with your other hand, right? And so that's how weaving was done for thousands of years: you would throw it, catch it, throw it, catch it. There are two problems with that as a technique. One is, if you want to make a cloth that's wider than the distance between your hands, you need two people to do it. And the second is that it kind of slows down the procedure — I mean, even though people get pretty adept at doing this, it's a relatively slow procedure, right? What was Kay's innovation? Kay said, well, why can't we mechanize this — and that's why it's called the flying shuttle. So the shuttle was the thing that was thrown through; it's just a bobbin with thread on it, and the thread unwinds as you throw it through. He built a little vehicle with wheels where the shuttle actually sits, and the thread comes out like this, and then on either side of the loom there are actually little boxes, and there are springs here, and these are connected to strings that the weaver has at the front of the loom, and the weaver jerks the string, it fires the shuttle across, and then the box on the other side catches it, and then you jerk it again and it shoots back across, right? And so it's a very simple device — I mean, mechanically it's difficult to get this to work, right, you have to spend some time working out how exactly this is going to work — but it had quite profound productivity implications, because for a start now you can actually weave wider cloths, and the second thing is it's estimated to have doubled or tripled the speed of weaving. You can just do this much faster once you get into rhythm with this string here, right? And that became the basis — by the way, this kind of shuttle technique is the basis of weaving up until now; they have shuttleless looms now, but it was the basis for the next 200 years, essentially, okay? And so it's a very simple idea, but as I say, the ancient Romans could have done this — I mean, it involves a little bit of metal and wood, and no great kind of technological inspiration, okay? And so what actually happens to Kay is that he then attempts to use the property rights system in order to patent his innovation, and he is impoverished by litigation trying to enforce his patent. And the reason is that once anyone else saw this — this is so simple — once any other craftsman, carpenter, you know, clockmaker or something like that saw this, they could reproduce it, right? And so you couldn't enforce your property rights; it's just like software piracy now. Once you sold a few of these devices to someone, they could get someone competent and say, make me copies of this thing. And weaving at this stage is a domestic industry: weavers typically weave in their own homes, they have these sheds attached or these rooms above the homes, and so to get at these people you'd have to sue all of these individual people, bring them to court, and then often they're just renting their accommodation, they don't have any property, you can't recover big damages, and so basically you couldn't actually recover profits from this industry. And then poor Kay — his house was destroyed by machine breakers in 1753, because the weavers objected to these devices, because by increasing the productivity of weaving it was actually reducing the demand for weavers — or at least they feared it was going to reduce the demand for weavers. And eventually he was actually invited
to France by the French king and set up in a royal manufactory, and they tried then, under royal sponsorship, to introduce the device to France, but he actually died somewhere in France, in poverty, in 1764, right? And it's going to turn out that this is actually pretty much the standard story of these innovators early in this industry, which is that they're trying to make money in this period — I mean, they're clearly motivated by this — but it's very hard actually with these innovations to make a lot of money, because of the poorness of the property rights regime in a place like England in this period. And then, interestingly, this device spreads all the way through the British weaving industry; in France, despite royal sponsorship and royal promotion, it actually fails to spread very widely, and later in the 18th century it has to be reintroduced to France. And so another puzzle is that it's not just that people are producing these innovations; it's also the case that producers are very receptive to trying to use these innovations, right? And so it turns out it's a little more complicated: it's not just that you've got to have an innovation, it's also that people are very willing to say, okay, let's try that, right? Okay, so that's the first one. The second innovators in the Industrial Revolution period are guys — I'll just give their last names — Wyatt and Paul, and they produce a spinning machine, and that's produced in 1738, and what happens to them? Poverty. So Wyatt and Paul actually devise, get the key idea of, mechanical spinning, the idea that later comes to dominate. And so they figure out how you actually can mechanically spin thread, but the problem often with these early innovations is you might have the right idea, but it takes a long time and a lot of capital to actually make this into an economically viable machine, and they simply ran out of money before they could get to that stage. And so they had a factory — and I think Dr Johnson, the famous dictionary author, had a share in that factory — which ended up being worthless, right? And they just couldn't get the device to actually work. But the interesting thing is that when Arkwright comes in, it's essentially a copy of that early device that he eventually patents, and it turns out he's the one guy who makes money — but he makes money because he can figure out the problems of production. What are these guys again? I think Paul is just a dilettante younger son of some relatively rich family; I think Wyatt is a ship's carpenter, right? So again, these are not, you know, people from universities; they're not people with a high engineering background. The interesting thing is that they're just interested in this possibility of spinning thread using a machine in this period. But so you can see that in the period leading up to the Industrial Revolution there is actually this period of kind of failed innovation — or not, I mean, this one is successful — but there's a precursor period, right? And it's just the case that once we get to the 1760s, somehow this rate of innovation really takes off. But that we'll do on Monday. |
english_literature_lectures | Mark_Steel_Sylvia_Pankhurst_pt_1.txt | [Music] Hi. It seems remarkable now that anyone would be so excited about getting the vote that they would dedicate their whole lives to securing it, because most modern politicians — I think you'll agree, like yourself — seem to be the embodiment of passionless, soulless dullness. Do you agree with that? Can we start with what we were talking about? But most young people have so little interest in elections that they don't even know how they work. I know this from when I stood in an election and I was giving out these leaflets, and these two students came up, and one went, yeah, safe man, yeah, I'm going to vote for you man, and his mate went, you can't vote man, you're only 17, and he said, yeah, I can get round it, I've got connections man. So maybe things were different back in the days of the suffragettes, or maybe the suffragettes were about much more than the vote. Sylvia Pankhurst became a hero to thousands of the poorest people in the East End of London; she was attacked by Lenin for being too left-wing; and she ended up living in Ethiopia, revered as a princess under the king of the Rastafarians, Haile Selassie. [Music] Sylvia Pankhurst was born in Manchester in 1882, at a time when, to most people in authority, the idea of women voting was heresy. For example, the MP for Hereford, C.W. Radcliffe Cooke, said: I will oppose the right of women to vote until women are bigger than men. Which is fantastic logic — so what about big women then, do they have the vote? And what about things that are bigger than men, did they get the vote? The Tory MP for Colchester, E.K. Karslake, said: the wife should be absolutely and entirely under the control of her husband; she may not get about, and if she does, her husband is entitled to lock her up. Manchester was the most radical city in England at the time, and two of the most prominent characters in these circles in the 1870s were Emmeline and Dr Richard Pankhurst, who supported causes such as the abolition of the workhouse and votes for women. The Pankhursts had five children, including Harry, who died young; then there was Christabel and Sylvia. Now, despite her liberal parents, Sylvia remembered, under the discipline of the servants, being tied to the bed all day for refusing to take cod liver oil. Only the Victorians could decide that the punishment for not taking something to ease your joints is to strap your joints to furniture. During Sylvia's childhood the radical movement was transformed by a mass agitation for better working conditions by some of the poorest people in the country, including the women match makers in this building in East London who went on strike. The women were especially annoyed because they'd had their wages docked, partly to pay for a statue to ex-prime minister Gladstone. The strike had a huge impact in raising the status of working-class women in the community. Now the factory has been turned into loft-style apartments, but the developers have made a special effort to preserve the history by making sure that each flat is roughly the size of a matchbox. The strikes changed the outlook of the Pankhurst family, and they became involved in the newly formed Independent Labour Party — and it's important to remember that at that time people joined that party in order to make it a radical campaigning organization, whereas if anyone tried to do that with the modern Labour Party they might as well join the RAC and try to turn it into a radical campaigning breakdown service. There was another effect of the Independent Labour Party on Sylvia Pankhurst: the leader of the new organization
was Keir Hardie, who was a passionate supporter of votes for women. Hardie had been brought up in Lanarkshire, where he had to sleep on a dirt floor, and started working in the pit at the age of 10 — until one morning when he turned up late, because he was looking after his dying brother, and he was sacked. Bastards. Keir Hardie became the first Independent Labour Party MP, for the area of West Ham, and one day, when Sylvia came home from school, she found Keir Hardie in the living room talking to her parents. Later on she wrote about this meeting: his eyes were two deep wells of kindness, like mountain pools with the sunlight distilled; I felt I could have rushed into his arms. For several years she spent any time she could with Keir Hardie; she'd help him write his speeches, and in turn he'd read her the works of Shelley, Byron and William Morris. All this would have been scandalous for any unmarried couple at the time, but Sylvia was 21 and Hardie was nearly 50 and married. With Sylvia he could let down for a moment the granite image of the working-class fighter and indulge his artistic side, while she was attracted to the radicalism of a man that was untainted by the peculiarities of a middle-class upbringing. Writing about one of their days out together, she said: he would pick up little stones and play with them as children do. You know, I never played games as a child, I said. Ah, he said, with infinite compassion and tenderness, that is the matter with you — you heard too much serious talk. [Music] [Applause] Sorry. Then in the 1900s the campaign for votes for women came together with the working-class movement in the Lancashire cotton mills, when the weavers launched their petition for the vote, and then a group of people met in this room in the Pankhurst house to decide what to do next, and that's when the Women's Social and Political Union was formed. They decided to start off with some high-profile stunts — good morning — for example, she took a petition and went banging on the Prime Minister's door, at which point she was arrested, and stunts like this attracted national coverage, until the Daily Mail called the women "suffragettes", so they adopted that as their official title. The Pankhursts, and Emmeline in particular, became known as impressive speakers, especially in the poor areas. In 1907 there were 400 meetings at which there were over a thousand people, and the marches also got bigger, until after one march in Hyde Park The Times reported: it is no exaggeration to say that the number of people present was the largest ever gathered together on one spot at one time in the history of the world. The Pankhursts' next tactic was to rush the House of Commons, so Emmeline and Christabel were jailed. The protest did eventually take place, but an inspector informed the women that the Prime Minister wouldn't see them, so Emmeline punched him in the jaw and they got arrested for assault. They should have had somebody doing an advert going: I can throw stones and break windows, but could I punch a copper square on the jaw and get arrested? I don't know if I could do that. If you could, join the suffragettes. In the fighting that followed these arrests, 108 more women were arrested, so that night a group of suffragettes came to the Home Office with stones wrapped in brown paper and smashed all the windows. From that moment onwards Emmeline and Christabel were committed to a strategy of smashing things, and the theoretical basis behind this plan was summed up by the elderly suffragette who said |
english_literature_lectures | Michael_Slater_Charles_Dickens_Part_1of_3.txt | hello and welcome to this podcast from Blackwell Online. My name is George Miller, and my guest today is Michael Slater, emeritus professor of Victorian literature at Birkbeck, who has spent a lifetime studying the life and works of Charles Dickens — so much so that no less a biographer than Claire Tomalin has said that no living person is a greater authority on it than Michael. His previous books include Dickens and Women, a four-volume edition of Dickens's journalism, and the general editorship of the Everyman Dickens series, but the book that he's just published is in a sense the crowning achievement: a richly detailed life of the writer which focuses on the whole of his output — the journalism, the sketches, the letters, as well as the novels. John Bowen, reviewing the book in the Times Literary Supplement recently, wrote that it is a triumph of compression and immediately takes its place as the most authoritative, fair-minded and navigable of modern biographies. So, 20 years on from the last major life of the writer by Peter Ackroyd, what persuaded Michael to embark on a new life of Dickens? I thought that although there wasn't a sudden, you know, discovery of Dickens's secret diary or something, it was a good moment to write — I mean, sufficiently far away from Peter Ackroyd's book, which is now 20 years back, but a completely different approach. But I did have, I may say, unparalleled knowledge of Dickens's journalism; I had the whole range of his edited letters available to me; so I thought something could be done, and I thought what I would like to do would be to focus it very much on Dickens the writer. I mean, that sounds obvious in a way, but I mean all of his writings and how they interleaved and interconnected — I mean, the mere fact, that most people don't know and are kind of astonished to find out, that while he was writing Bleak House or these great novels he was also, in the same month, writing satirical pieces, reminiscent essays, dozens of wonderful letters, making great speeches — I mean, just phenomenal, that's why it's such a fat book — and I wanted to cover all his writing and, as it were, to read horizontally through Dickens, instead of having, you know, chapter on Bleak House, chapter on Our Mutual Friend, and so on we go, but to show how all these things interwove, and that you find the same themes — in, you know, one month he may be writing Pickwick Papers, and he's writing some jolly thing about people getting drunk, and he's also writing a pamphlet against what would have been extremely sort of blue-nosed legislation, and so on. You find all kinds of tie-ups, you know, between the journalism and what he's saying in his letters and speeches and what he's actually writing in the novels, and there are some, you know, very celebrated instances of that. So that was going to be the focus really of it, Dickens the writer. And then I had the problem of how do you begin, because obviously he wasn't born scribbling in his cradle; I didn't want to begin until he was writing, as it were, but you can't really have a biography of Dickens in which he starts at the age of 18 or 19, and I really didn't know what to do. I began writing when he was like 18 or 19 and when he was writing, and wrote, you know, several chapters, but there was no beginning to the book, and I just could not think — I mean, I could not make my hand write 'Charles Dickens was born on the 7th of February 1812', you know, and go through all that, which you have to have, which has to be there in the biography. And I
suppose at the same time you're aware that those unwritten-about years contain things of significance for what is going to come later, aren't you? Oh, of course, yes, yes, I mean, that's right, and he did write about — very memorably wrote about — some of those years later on, notably in the so-called autobiographical fragment. But in the end I found a way to begin the book with the first two pieces of his writing that survive from his childhood: a schoolboy letter and an even younger little formal invitation he wrote, or at his parents' dictation, to some other kids to come to a party. That gave me a start, because these two items, the earliest surviving bits of Dickens's writing that we have, were five years apart, and I could, as it were, fill in the gaps: departing from those two bits of writing, I could create those two episodes of his childhood. And also he himself talks about writing various juvenile tragedies and sketches and so forth, all now lost, but we know something about them, so I could talk about all that. So that gave me — I just called it 'Early Years' — those two first chapters; they were written quite a bit later on. And then I had — and I think this was a touch of genius myself — at the end, of course he dies at the end, as you have to in a biography, but my last chapter is about his last two publications, which of course are posthumous publications, and he knew that they would be published. One was his will — of course he knew that was going to be made public, and Forster actually adds it at the end of his biography — and that is the amazing thing, where he mentions Ellen Ternan: the first legacy he leaves is, boldly, to Ellen Ternan. And so, you know, what's he playing at here? He's concealed this relationship for 12 years and suddenly she's the first person mentioned in his will. Then there's the long piece about Georgina, his sister-in-law, very cold and awful words about his wife, whom he can't even bear to mention. So that's one publication — sort of Dickens justifying himself, really — and there was quite a lot of criticism of it; a lot of people recognized what he was doing. You know, somebody said, well, people often dispute about which of Dickens's works was the best, but I know which was the worst, and that was his will. So it was discussed as a publication. And then after that, when Forster published the first volume of his biography, he published in it the autobiographical fragment, which Dickens had written just before he wrote David Copperfield, in mid-career, about his tenth, eleventh, twelfth years, when he was taken from school and sent to work in the blacking factory, and all that sort of, as we would call it now, traumatic experience — which of course I discuss both in the early part of the book and in the middle part of the book, when he's actually writing it. But again, it's the impact of it, because although he said to Forster, I don't care whether you publish this or not, really, I'm just writing it down so that you know what happened to me, of course he knew that Forster would publish it. And so, like his will, it was something he knew would come out after his death, and which would hugely — in this case hugely — increase public sympathy for him, and love and compassion for him. He was already regarded as a great hero, but nobody had any idea that he'd had such a horrific episode — or as he made it out to be — in his childhood, when, as he said, for any care that was taken of me, I might have become a little robber or a little vagabond — I mean, a kind of Oliver Twist, as it were. And so I think that the
autobiographical fragment and Forster's biography obviously set the image of Dickens — the public image of Dickens — for 60 years or so. It wasn't until the 1930s and the beginning of real revelations about Ellen Ternan that the public perception of Dickens changed, as it of course was doing for all the Victorians, as a result of Lytton Strachey and the revolution in biography in the 1920s and 30s |
english_literature_lectures | 3_Inferno_I_II_III_IV.txt | Prof: We'll begin today with Inferno. Let me say a few things about the poem in general, the structure of the poem, the formal structure, and then we get into the cantos I, II, III, IV which have been assigned to you. I already, in passing, in my introductory remarks, I did say something about the title as you recall, the title of the poem. The poem, we refer to it as the Divine Comedy, it should be "Comedy." That's what Dante calls it and he called it comedy for a number of reasons. The first reason is that it ends in--with--in happiness. It's a story that begins with a kind of disorder, catastrophe if you wish; the pilgrim was lost in the woods and then works himself out toward the light, toward the truth, toward God. That's--so comedy describes the thematic trajectory of the poem. It's going from one condition to another, and from this point of view, it's literally the opposite of the tragic movement. In the tragic movement, you always have some kind of initial state of cohesion or initial state of happiness that then goes on-- going toward some kind of fatality or disaster. There's a second reason for the title "comedy" and the second reason is stylistic. Dante, he calls it a comedy because he adopts the vernacular, first of all. As opposed to--the possibility would have been writing it in Latin in a kind of-- the language of philosophy and the language of great cultural exchanges but he calls it-- instead he uses the Italian vulgar language. It also means that stylistically he adopts a humble style. You know what the theory of styles is coming to us from ancient Greece and Rome. There are three levels of styles. Three levels of style: there is the high tragic style or the sublime style that describes exactly the events that involve kings since style is-- must have some kind of aptness to the situation that the story describes. Then, there is a middle style or an elegiac style, and then there is a low style, the style of comedy, the style that indeed Dante will adopt lowly; but this is a kind of peculiar implication for Dante to call a story such as his, a "comedy" and this is the implication: that, in effect, he undercuts the idea of a rigid hierarchy of reality. There is no such neat separation of the high, the middle, and the low. That which is low and humble, such as the experience he is describing, his own experience--an ordinary human being of living around the year 1300 who manages to have this extraordinary experience of going up to see the face of God and come back-- to return--come back to the earth to tell us about it. It's really a sign that the low can become high and the high can become low. That this--that the classical distinctions that we read-- of which we read in the Poetics of Aristotle that Dante did not know, or in Horace's Poetic Art that Dante did know, are really false, are really illusory. This is not the way to proceed, so Dante has a number of items that he's pursuing in calling this text the "Comedy." The other thing I have to say about the formal structure of the poem, as you all know, it's divided into three parts: Hell, Purgatory, and Paradise. Easy. Each of them has thirty-three cantos, with the exception of Inferno. Inferno has thirty-four cantos, which means there's one--Canto I, which we are going to read in a moment, plus thirty-three. They are neatly separated in that, Canto I represents and stands for a kind of general rehearsal. It's a journey that fails. 
Dante--the real journey will begin with Canto II. It has 100 cantos, but the basic unit of his narrative is the number 3. In fact, it's written in so-called a style or a metrical form, but there're key devices, a so-called terza rima, that I'm sure you recall from your own readings in high school or in other courses, the terza rima, which is the rhyme scheme, it's always going to be A, B, A, B, C, B and so on. Number 3 once again is the fundamental symbolic number of division within this text. What is the reason for this? There is a large aesthetic reason and the aesthetic reason that it can be found formulated, crystallized in a verse from the Book of Wisdom, "You O God, have created everything according to number, measure and weight." And so the Divine Comedy has to duplicate the symmetry and the order, and the harmony that he thinks he sees in the universe. The poem is presented and introduced as a reflection of that superior, divine order of the universe and wants to be part of it. So it's both as this kind of ambivalence to reflect it and become what we call a metonymy: the part that wants to be attached to a larger whole. So these are the--some general concerns that you've got to have about the poem, some general ideas that will help you understand the pattern of parallelisms that we're going to find within the poem. And I can give you one very quickly now. Since I'm really asking you that when you read the poem, you should be aware that it's a kind of linear structure from 1 to 100. And yet, within the triple--the tri-partition of the poem -- Hell, Purgatory, and Paradise -- there are cantos that correspond to one another. Canto VI of Inferno, it prefigures Canto VI of Purgatorio and both of them in turn, will prefigure Canto VI of Paradise, Canto VII and so on. It can be done, Canto X, X and X, it can be done in a fairly systematic way. If you wish sometimes you could really read I, II, III, IV and yet I, I, and I which is-- or maybe you wait for the end of the poem when you have read the whole poem then you can go back and read it horizontally as it were as much as vertically down. These are some ideas, general concerns and implications in Dante's structure of the poem. We begin now with Canto I; it's a very well-known canto where the--it's the general preamble--you know how the poem starts. It's--I'm just reading the famous lines, everybody reads and everybody probably remembers. "In the middle of the journey of our life, I came to myself." Those of you who have a different translation will nonetheless get the gist and the changes, the shifts are really minimal. "I came to myself within a dark wood where the straight way was lost. Ah, how hard a thing it is to tell of that wood, savage and harsh and dense, the thought of which renews my fear! So bitter it is that death is hardly more. But to give account of the good which I found there I will tell of the other things I noted there. I cannot rightly tell how I entered there. I was so full of sleep at that moment when I left the true way; but when I had reached the foot of the hill at the end of that valley which had pierced my heart with fear I looked up and saw its shoulders already clothed with the beams of the planet that leads man straight on every road. 
Then the fear was quieted a little which had continued in the lake of my heart during the night I had spent so piteously; and as he who with laboring breath has escaped from the deep to the shore turns to the perilous waters and gazes, so my mind which was still in flight turned back to look again at the pass which never yet let any go alive." It's a great beginning. It begins in a very extraordinary way; it begins with--in the middle to begin with, right? That's--the beginning is in the middle, the beginning is the present reality of the pilgrim who finds himself lost, but it's more in that first line. "In the middle of the journey of our life"--what is he saying? I think it's fairly clear the first conceit and the fundamental conceit of the poem is that life is a journey, which means that we are always on the way. I don't know where we are going yet. He will find out soon, which means that we are displaced, which means that we are going to have a number of adventures. It means that we are not yet where we want to be. Life is a journey. In fact, he will go on--not only go in the middle of the journey; he also calls it our life. That possessive, "our" life, it's his way of establishing one fact, that this is not really yet a unique experience. It's something we all share and something which might also concern us. That we are also, not only he is in the journey of his life, but we too are in that journey of--the common journey of our life. Then in contrast to that, there is now an autobiographical focus. There is a stress of "I found," "I came to myself within a dark wood" and so on, "I," so this is going to be very much the story, once again, very much as the Vita nuova, an autobiographical story. It can only be a personal story, but here the self is going to see the world--is going to see himself through the prism of the world. He will enter a public space. The great difference, I will say, between the Vita nuova which we just looked at last time, and the Divine Comedy is the following. The Vita nuova was destined to fail as a narrative, exactly because the protagonist went on drawing us and drawing himself within the solitude, the interiority of his own life; a life which was completely disengaged from the concerns for the outside world. The Divine Comedy starts exactly with that kind of shipwreck, the shipwreck of some other intellectual activities that he will go on describing and I will describe with you. As soon as we read this first line, "I came to myself in a dark wood," and if you read a lot of traditional commentators, traditional commentators will tell you, oh well Dante is here in a state of sin, the dark wood is really the condition of spiritual-- maybe despair. He's at an impasse; we know he doesn't know where to go. I think that this allegorizing is a little bit too easy and it's not really--we have no evidence for this in the poem yet. We don't know, but what we do know is that Dante is lost in a landscape that is terrifying. He is caught within it; he's clueless about how he got there. I don't even know how--he goes on to say, I don't know how I got myself in that situation, whatever that situation was, but he knows one thing that he wants to get out of it. He understands that. He does not yet know how he to get out of this situation, but he knows that he has to get out of the entanglement of the wood. Then he goes on, oh how hard it is, etc., "so bitter is that death is hardly more. 
But to give account" -- that's the next tercet -- "of the good which I've found there I will tell of other things I noted there." In a way, the poem is already over. He's now shifting from the narrative of events: I was lost in the dense, savage, harsh wood of the night. His gaze rends this night, tries to find out; he cannot see anything beyond himself, and then immediately says, but to tell you the good I noted there I have to tell you other things. The poem has, right from the start, the double narrative focus. The first focus is that he's going to tell us the story of a pilgrim who is caught in the-- what we call the diachronic, the time-bound, the daily events--a number of encounters which he cannot quite decide as to their meaning. So he's led by Virgil as we're going to find out soon, but he doesn't understand what's happening to him. Then there is the second focus and it's the focus of the poet who has seen it all and enjoys what we would call an omniscient perspective. The whole poem really moves around this double axis: the axis of a synchronic view, synoptic view of the poet who has seen, who has--he becomes a poet because the pilgrim has had this experience, and then he goes on telling us about this experience. That's what I call a synoptic and omniscient narrator and then he looks at the--at what the pilgrim did not know. There's a kind of irony in this discrepancy between the diachronic viewpoint of the pilgrim and the synoptic viewpoint of the poet. It goes without saying that the structure is not that neat, you will see that there are moments when Dante will go on-- one's point of view will encroach upon the other, so I will tell you more about this particular structure and now he starts this narrative. Let's see what has happened. "I cannot tell how," this is the second paragraph of our translation, "I cannot rightly tell how I entered there, I was so full of sleep at that moment." It's a kind of torpid lethargic, the lack of consciousness, the lack of any--everything--his faculties are dormant -- let's paraphrase it like that. "When I left the true way; but when I reach the foot of the hill at the end of that valley which had pierced my heart with fear I looked up and saw the shoulders already clothed within the beams of the planet that leads men straight on every road." What he does -- dawn has come, dawn breaks, and he looks up toward the sun believing that the natural sunlight is going to unveil to him the layout of the land and he may find an escape route from this particular disaster. I'll tell you one thing, that if this were a Platonic narrative the poem would come to an end right here, because this is what happens in a Platonic narrative. You are in a cave, you know we are all--this is a famous myth of the Republic; we see nothing, we only see flickers of light, it's like being at the movies, flickers of light, unreal. They are simulations of the truth; they are being projected on the side of the cave, and we mistake those shadows for realities. If you are really wise and you are a philosopher then you do know that you can turn your neck around and find out where the source of true light is and then you are saved. The whole experience of the cave is predicated on this premise: that knowledge saves you, that knowledge is virtue. And knowledge does save you to the extent in which it really can heal what one could call the wounds of the intellect: ignorance being the wound of an intellect which knowledge, learning, education, philosophy, can really cure. 
Dante will find out very quickly that that is a false promise; that in many ways, there are realities and that his own realities are going to be a little bit more complex than what one can find in books, manuals of philosophy, about how we get saved and how we can save ourselves. Let's see what he says here and what happens. So he sees--he turns toward the light, I saw already that the sunlight and then he feared this passion, because it's a passion of the soul, the fear that paralyzes him and paralyzes his discernment. He cannot--it literally stops him, does not know which way to go and it's the overpowering passion at this point of the poem was quieted a little "which had continued in the lake of my heart during the night I spent so piteously." It's a dark night. Some mystics--Dante is not a mystic, but he's clearly alluding to the dark night. This is the dark night of the soul, this is a spiritual experience that has found now; it's coming to a head, if you wish. "And it is he who with laboring breath his escape from the deep to the shore turns to the perilous waters and gazes, so my mind, which was still in flight." Through the whole passage, I have to tell you, the two paragraphs we read is replete with neo-Platonic language. He is talking about himself as if this were a flight of the soul, flight of the mind. My mind which was still in flight, this is the idea that we have experiences which are purely intellectual, the kind of experiences that we can all find when we are reading a book, we are studying, we are thinking. The mind takes its flight; i.e., this is an intellectual problem. There are many other terms here that remind us of the--his use, his deliberate use of neo-Platonic language. For instance the word wood which is in the Latin-- in the Italian is selva, which translates the Greek hyle, which as you know, is the primal matter out of which the demiurge will form and shape reality. Yet, immediately there is a great shift that I want to focus on briefly. "After I had rested my wearied frame for a little, I took my way again over the desert slope, keeping always the lower foot firm; and lo, almost at the beginning of the steep, a leopard light and very swift, covered with a spotted hide, and it did not go from before my face but so impeded my way that I turned many times to go back." He has been just now talking about the flight of the mind; "my mind was still in flight." It's an experience of a shipwreck, first of all, that many epic stories begin like that. Think of the Aeneid, the shipwreck of Aeneas on the shores of Africa as he is about to enter the city that Dido is about to build and that will bring him to a great revelation and a great love story with Dido. Here there's no such a relief for him. It's the shipwreck of the mind, a mind that seems to be literally unable to define both his whereabouts and his destination, but as soon as he does this, as soon as he understands that that's what the problem is, he shifts the language from mind to body. "My wearied body," and this is the great difference between neo-Platonic narratives and Dante's kind of experience. The intrusion of body and what is the body--what does the body stand for? The body stands for the limit of purely intellectual journeys. The real journey that he has to undertake is the journey of mind and body, and the body stands for the irreducible historicity of one's self. The body stands for one's own reality, the passions; it stands for one's own will. 
This is the difference between what the Greeks understand as the great intellectual adventure, which is one of knowledge, and Dante's idea that the real problems are problems of the will. We may know all where we are and we all may understand that we are not happy with the situation that we find ourselves with, but we cannot quite solve it by knowledge because the problems are problems of willing. What is he doing? Let me just try to make this very simple. There are two schemes, I'm really simplifying it too, because I think this is dramatic enough in and of itself. I don't have to overdo it with unnecessary complications. There is a Socratic scheme whereby all issues are issues of intellectual sorts. I know, and therefore I am virtuous. I know what justice is and the implication is: I'm just. That's a false implication because if I go around the room here and ask you all to give me a definition, of each and every one of you of justice, you'll all tell me something: that justice is a justice within the self, that justice is a way I relate to my family, or justice is a way I relate to the city, or justice is a way I take care of the problems or whatever of the world that I find myself in. Dante will say this is not the knowledge of what--the definition of justice cannot make you just. The issue is one of willing, desire. The other scheme that Dante opposes to the Socratic idea that all issues are issues of philosophy or issues of knowledge is one of the will. In the Letter to the Romans, St. Paul writes, "The good that I will do, that I do not do. The evil that I would do that I do." He draws attention to the essential existential problem, the problem of the self. My will is divided against itself. The only way I may know what to do, but I do not know really, I am not sure that I will, I will it. That's the fatality of life. We all know what's good. How many of us go around choosing and doing what we know it's not good for us, it's not the best in terms of our judgment of situations. So it's willing and the perspective of the will becomes Dante's perspective in coming to terms with the limitations of philosophy and intellectual knowledge. We shall see Dante talks about the will in a number of ways. Last time, talking about the Vita nuova, I began mentioning to you that the will is even better of course for him than unwilling, and yet the best experiences of life seem to be those of one does not want to happen to us; such as a dream, for instance, the dream of falling in love or sometimes in the death of the beloved. You don't--he certainly did not want the death of Beatrice. It's a way of understanding that unwilling is as compelling as willing. And yet he understands that as soon as he focuses on Beatrice as the figure, the real contingent historical figure of love, that he has to give a direction to his own desires, that he has to define the will--it's-- you understand what I'm saying? He understands that too. That's already an anticipation of what--of the problems of the will. Now he starts exploring other problems, which we shall see, both as they relate to the self, to the psychology of the character, but also as they relate to politics for instance. To a vision of the world as an act of and the projection of our own will; that the world is the way we want it to be, and that men are too--to the way in which we relate to problems which are of problems-- necessary problems, the problems of reality, etc. 
For the time being though, Dante takes the will as his own, or the body, his own perspective. He finds out that he cannot go up the hill that he has seen. Three beasts, just paraphrasing the text, a leopard, a she-wolf, a lion, and the lion and the she-wolf will block his journey up and so he's going right back where he finds himself and he's now back into the deepest despair possible because there seems to be now really no exit. Then he sees something, "when I was rushing down to the place below, there appeared before my eyes one whose voice seemed weak from long silence. When I saw him in that great waste," and as you know, because these are the first words that Dante, the pilgrim, will say, "have pity on me whoever thou art, I cried to him, shade or a real man." These are words taken from the Psalms, the "Miserere" of King David, so he's prostrating himself, and I stress this because the voice, the Davidic voice, will constitute an important strain in this narrative. How does Dante talk? That's one of the ways and we shall see how discreetly it will appear further down in the poem, and now the conversation: the figure he meets is Virgil. "Not man, once I was a man." Look at what--we can stop a little bit to reflect on what this experience here has been like and can we define it in some ways. Dante finds himself lost in the landscape. He does not know how he got there. He mistakes the sunlight, the natural sunlight, as the light of truth by following which he thinks he can reach the plain of truth, up to the top of the hill, the top of the mountain from which he can survey the land and find the exit, find an escape, find a transcendence, we'll call it. Let's call it a way of getting out of that disentanglement. Then he comes right back. He sees the three beasts that we don't know what they are. Are they sins? Are they dispositions to sins? They are animals and therefore they stand for animal projections of our desires, of his desires, the she-wolf, the lion and the leopard, that's all we can see about them. Then he meets a figure that he can't even tell is a shade or a man. He's dramatizing what medieval thought, medieval literature, called the land of dissimilitude, the land of unlikeness. He finds himself in a world where things are not what they seem. Where there seems to be a disparity between, let's call it--let's use the literary language, between signs and things--signs and their meanings, things and what they stand for, and part of this effort is to literally stitch this--this break between signs and symbols, meanings and signs--stitch them together. First meeting then is with this figure by the name of Virgil, who will become his guide. No man, you know who he is. He's the author of the Aeneid, but look how he presents himself. I think it's crucial you pay attention: "Not man; once I was man, and my parents were Lombards, both Mantuan by birth. I was born sub Iulio," meaning Caesar, which is true, "though, late in his time and I lived in Rome." And he gives a biography of himself, a self-presentation in terms of his life, the historical circumstances and also his works. That's what we call a miniature biography: a sense that I came into this world and somehow. This is what it all comes down to, this is really the sense of my birth, and he speaks about his own birth. "I lived in Rome under the good Augustus in the time of the false and lying gods.
I was a poet, and sang of that just son of Anchises," Aeneas, "who came from Troy after proud Iliam was burned. But you? Why are you returning to such misery? Why dost thou not climb the delectable mountain which is the beginning and cause of all happiness?"-- meaning the Mountain of Purgatory that will take him to Eden-- which is really what all things according to the biblical version of cosmology, all things started. He'll say, "art thou then that Virgil the fountain which pours forth so rich stream of speech? I answered," etc., "O glory and light of other poets, let the long study and the great love that has made me search thy volume avail me. Thou art my master and my author. Thou art he from whom I took the style whose beauty has brought me honor." This is Virgil, clearly, the poet. And the extraordinary recommendation, which I can tell you as a teacher, every teacher really would love, years after the students have been studying with the teacher, to give them this kind of great acknowledgement: how I remember your teaching. That's what he does, very rhetorical; we'll call it the rhetoric of capturing the benevolence of the interlocutor. It's clear that the exchange between them and the recognition of Virgil is as a poet, and therefore I have to take a few minutes to tell you a little bit about this. It's--because Virgil, we know, he's the poet of the Aeneid, clearly the story of a journey, of a grand epic journey by Aeneas. We'll talk a lot about it; I hope you've been reading that poem, if you haven't already. That's not the way he was known. Virgil was not known this way in, what we call the twelfth-century Europe, in the culture immediately preceding Dante's time. He, Virgil, more than as a poet, was known as a philosopher. He was a neo-Platonic philosopher. That is to say, a neo-Platonic philosopher is one who had written a poem, but the substance of the poem was not the fiction about the burning of Troy, the journey of Aeneas to--as an exile who could go around looking for a new land with his father on his back and the kid-- the Ascanius by his hand; it was nothing like that. It really had some philosophical depth. It was a way of acknowledging that poetry is capable of providing philosophical illumination. What was the philosophical message? I call it neo-Platonic, because it was just very much like what they thought was the Odyssey in that same time: the Odyssey, the story of Odysseus or Ulysses, who goes from Ithaca to Troy, and then after ten years of the war in Troy goes back to Ithaca. It takes him ten more years to do that. The neo-Platonists, the allegorizers, were viewing that poem as the metaphor for the journey and the experience of the soul. It was the story of the soul that goes from the point of origin, one's home, one's home is always--we call once home a place of one's-- where one's soul is--when one can find oneself somehow and one is familiar, comfortable, whatever. He leaves Ithaca, goes to Troy, goes through life, and then he has to purify himself in order to go back, as in a circle, a neo-Platonic circle back to the point of origin. This is the neo-Platonic allegory of the Odyssey. They will do the same thing in the twelfth-century France. I could give you names if you wish: Bernard Sylvester, Fulgentius a little bit earlier, John of Salisbury who was an Englishmen of the twelfth century, who will go on writing about the Aeneid is the story of a hero who is born in Troy and then goes through the stages of life. 
Each book represents a stage of life: childhood, youth, maturity, with all the temptations of the flesh that happened with Dido and then he goes back to-- arrives in Italy and that's the Book VI. They would never really bother reading the other book. The Aeneid was viewed as a philosophical text illustrating the pattern and the movement of life. It was really telling us, and that's really the promise of philosophical knowledge, that we all, like Aeneas, are born, but with care and prudence can reach the Promised Land. Dante changes this interpretation and he changes this interpretation by making and insisting that you--that Virgil is a poet. He replaces the philosophical promises with the idea of poetry in the belief that poetry is better than philosophy. That philosophy cannot quite reach the depth of immense light that makes two general promises for everybody. It tells that we can all be like Aeneas, going from Troy to Latium. And Dante says no; this is not really saying much, but saying at all about the reality and the individuality of my life. Poetic language is for him, the language that addresses these issues, and therefore, poetry here is seen as a version of history. This is--you are the writer, he says to Virgil, who wrote the poem dealing with Roman history. Poetry and history--they deal with the world of contingency and not the world of universal, and therefore, potentially empty promises. That's the great change, the great new interpretation that Dante's advancing about Virgil. What Virgil will tell him is that the--they go on talking about--Dante will call him master and author, this is--and great sage and all that. And Virgil says to him, this is very simple, you must take another road; you are going the wrong way. This is the--that's what I call the idea that Canto I is a rehearsal to the whole poem. It gives the story of a journey, a journey that fails, a journey that is aborted, that's not the way you go. It's not--you find yourself lost, what do you do? You try to go quickly up to the destination, you climb up the hill and you think you can make it. No, no, no. Virgil replies that he has to go in a different way. He may reach the same point, the same destination, the world of justice, the vision of God, the idea of love, the good as he calls it, but he must go down. That the way up is down, he has to go through the whole spiral, through the horrors and the suffering of Hell, then reemerge through Purgatory in order to be able to reach the beatific vision. That's very simple. The way up is down and this is something that marks the difference between philosophical presumption and the notion of a spiritual Christian humility that he has to pursue and wants to pursue. This is--the conversation between them goes on with this prophecy of the so-called hound, page 27, line 100. It says, the world is in such a disarray, and yet now there's very mysterious and enigmatic prophecy that we'll come to and discuss by the time we come to the end of Purgatory and so now the journey begins, "then he set out and they came on behind him." That's the end of Canto I and we come to Canto II, where now it's--what's the time of the year? If this has to be a historical, a reducible, and essentially a biographical experience, Dante has to be very careful. He will give us the time, the precise time and place of this experience. We are--the poem starts on Good Friday, on the evening of Good Friday of the year 1300. 
So it's a very Christo-mimetic experience, as we call it, an imitation of Christ, because he will emerge to the light of Purgatory on Easter Sunday. Dante says, why should I take this extraordinary journey? This is the whole brunt, the whole substance of Canto II. "I'm not Aeneas," he says, "I'm not Paul. Why should I go there?" Why should I go on? "You, poet," I'm on line 10, "who guidest me, consider my strength, if it is sufficient... You tell us of the father of Sylvius as he went," meaning of course Aeneas, etc., and then you also tell us about Paul, and he concludes, "but I, why should I go there," line 32. "Who grants it? I am not Aeneas, I am not Paul." First of all, these are the coordinates, the imaginative coordinates for his own journey: Aeneas, the hero who brings about the foundation of Rome, and then Paul, the Apostle to the Gentiles, the thirteenth Apostle, who also goes to Rome, but who represents the spiritual domain of the Church. So these are the two--so who is he? That's the first question he must answer. "I'm not Aeneas, I'm not Paul." The implication is: who am I? Part of this story is to find out who he is, and part of this journey--the beginning of this journey at least--is indeed to find out that the poem is written in order to bring some degree of redemption and clarity to the world. He really believes in this, in what we call the "salvific" role of his own voice. Not without irony, not without discouragement, not without a sense that this may indeed be a proud, arrogant sort of posture. This whole idea about who he is, is immediately countered by a reflection on his own divided will. I just want to read this thematically: "and as one who unwills what he willed, and with new thoughts changes his purpose, so that he quite withdraws from what he has begun, such I became on the dark slope; for by thinking of it I brought to naught the enterprise that was so hasty in its beginning." It would take me really too long to give you an appropriate gloss to these lines. They are lines about the limitations of willing. Dante begins by claiming the importance of the will in the act of knowledge, and we do see that he is so aware of it--there's one detail that I did not mention. You realize how the pilgrim moves throughout Canto I; he will move in the same way, but he'll never talk about it again. He hobbles toward the light, he goes around hobbling, and that hobbling is part of his being wounded, having a wounded will that he must somehow heal. Someone will say, good, Dante's a voluntarist, Dante doesn't believe in philosophy and the intellect--not quite. He believes that the two faculties of the soul, intellect and will, are like the feet of one's body. If you really want to walk fast, at least, and safely, you have to use both. You have to use the intellect and the will. As soon as he claims the importance of the will, which the Socratic experience had somehow neglected, he goes on reflecting about the limitations of the will. What are the limitations? One of them is that the will can be divided against itself. The second one is that the will needs something to regulate it, because I can will anything, I can will this and that; how do I order what I will? The third limitation of the will--and we'll talk more about some of these aspects--is that I can never really go faster than my own will.
I'm a prisoner of my will--if I believe that that's all I have, my will, I can't go faster than it. If the will is weak and slow, and divided, that's what I am. I can't go past it; those are some of the difficulties. So here he goes, in Canto II, trying to think about the identity and the purpose of this journey--who am I? "I'm not Aeneas, I'm not Paul," then who am I? That's the great question, and as soon as he does this, he sheds light on the first internal issue that he must cure: the divided will. We come to Canto III--I'm sorry that we go a little fast, but that's the way our will moves, fast. Now Dante enters the gate of Hell, with the famous inscription, "Through me the way," pretty scary, and that scares him quite a lot. The first sinners that Dante meets are the so-called neutral angels, Canto III, around line 30 and following: "And I, my head encircled with horror, said: 'Master, what is this I hear, and who are these people who seem so mastered by their pain?' And he said to me: 'This miserable state is borne by the wretched souls of those who lived without disgrace and without praise. They are mixed with that caitiff choir of the angels who were not rebels, nor faithful to God, but were'"--my text says "for themselves." The right translation is "by themselves," because if you are for yourself, you are still for someone. These are the angels called neutral, who in the great cosmic battle with which the world begins, between God and the satanic forces, just became spectators, just watching. In Italian it says "per se stessi," which we translate as "by themselves," a sort of taking of a separation. In other words, Dante begins by dramatizing that which to him is the most serious of sins: being disengaged, not taking sides, in the belief that somehow you wait and see what the outcome is and then you can go on taking sides. That's the start of this experience, and then he goes on--why are they punished so? He responds, "They have no hope of death," etc., "pity and justice despise them. Let us not talk of them, but look at them and pass." He doesn't even name them, he won't even name them, because to name them would be to bring them into reality, and their neutrality stands as a sign of the way in which they de-realize the world. They reduce the world to a pure show for their own spectatorship. So he goes on from here, and the second action is that he sees Charon, the famous ferryman--he goes to the ferry, Charon, who will ferry all the souls--and gives an extraordinary description of this figure and of the souls who blaspheme God, their parents, the human race. And then there is an extraordinary image that I want to read with you, when Dante describes the souls that go onto the boat of Charon. This is toward the end of Canto III, lines 112 and following: "As in autumn the leaves drop off one after the other until the branch sees all its spoils on the ground, so the wicked seed of Adam fling themselves from that shore, one by one at the signal, as a falcon at its recall. Thus they depart over the dark water, and before they have landed on the other side, a fresh crowd collects again on this. 'My son,' said the courteous master, 'all those that die in the wrath of God assemble here from every land and they are eager to cross the river,'" etc. I really want to focus on this image of the autumn leaves, whereby Dante describes the dead souls as the leaves in autumn that have fallen from the tree.
The conceit is that the souls are leaves, and it's an image that Dante takes straight from Virgil's Aeneid. Book VI of the Aeneid focuses on the descent of Aeneas into Hades, and there he too sees the souls waiting for reincarnation, the famous theory of metempsychosis. You may have heard of this term, which means the reincarnation of souls. The souls are waiting to be reincarnated and to come back in an endless cycle. Virgil himself had taken this image from Homer, of course. Dante changes the thrust of Virgil's image, because in Virgil it's quite accurate, since he has a Pythagorean understanding of existence. That is to say, life is a continuous circle, the wheel of becoming, Plato's wheel of becoming. Time goes on and on, returning on itself. We always witness these circles and the cycle of the seasons, and this is also what happens with human life. There is nothing really unique about us, because we die and then we can wait for the reincarnation of our soul. Death in Virgil is an elegiac experience: it's never really tragic and cannot be tragic, because it lacks that edge of uniqueness, the edge that something particular and special has been happening to the world because I am here--certainly to my world, because I am here--and then I may disappear. Dante changes this idea of circularity, the elegiac quality of death and life that we have in Virgil. Why do I call it elegiac? Because you die and yet you can come back, because we're really like leaves: and just as leaves fall in the autumn, you just wait for the spring. We might not be the same leaves, but leaves very much like those that have just fallen will return. Look what Dante does. Dante goes on focusing on the idea of the uniqueness of every leaf. "As in autumn, the leaves drop off one after the other until the branch sees all its spoils on the ground, so the wicked seed of Adam"--there is already some kind of evil acknowledged at the root of our own existence--"the wicked seed of Adam, fling themselves from the shore one by one at the signal, as a falcon at its recall. Thus, they depart over the dark water and before they have landed on the other side, a fresh crowd collects again on this." For Dante, this image shows not that our souls are like leaves, but that a leaf can be described as a soul only if you insist on its own uniqueness and the fact that it will never return. There is one fairly contemporary poet who understood this, and I really want to quote to you a few lines--I don't know that I remember them all--it's Gerard Manley Hopkins. Some of you who read English literature will remember this, you may remember this. He writes his famous poem about Margaret and Goldengrove. Do you know the poem? Do you know what I'm talking about? "Margaret, are you grieving over Goldengrove unleaving? Leaves, like the things of man, you ... care for, can you? As the heart grows older it will come to such sights colder," etc., and it continues. Hopkins is really reading this image. What is he really saying? He understands that what comes back is not the life of human beings; what comes back are the things that human beings may use and waste. So the right analogy for Gerard Manley Hopkins, in a sense really in the wake of Dante, is between the things of man--the things that we have, the things that we can produce--and leaves, not souls.
In formal terms, this really means that Dante is replacing the notion of epic circularity that you find in the classics--the classics of Homer, classics such as the Aeneid--with the idea of a linear novel. The life of human beings is best described formally by a novel, in the sense that we are caught in a journey that goes on. It's unique and it will reach its destination, whatever that will be. We don't know yet, at this point of the poem. Now Dante enters into a circle, into the garden, the first experience, in Canto IV. I will have to deal with Canto IV and then I will stop, so I'll come straight to the point about it. Dante comes to what is called "limbo," a word that comes from the Latin limbus; that's what limbo means, the edge, the hem. In Italian you speak of the lembo, the edge of a dress or a jacket and so on. He comes to the area of Hell which really is outside of Hell. In fact, it's very much like the Virgilian after-life of the virtuous. It's described as a garden, one of the three or four gardens we find, in many ways a prefiguration of the earthly paradise and the gardens in Purgatory, but also a prefiguration of the world of Paradise, where the city, Jerusalem, is described as a garden. The first thing about this: it's called the locus amoenus, in case you want to know the technical term--the kind of idyllic place--and this is a term, a phraseology, that belongs very much to the epic world. In that world--for instance, later in the Renaissance, whether it's Spenser or Tasso or Milton--there is always the story of the hero who reaches this idyllic, bucolic place in order to relax. It's really a place of the breakdown of the errancy of the hero, of the adventurous spirit, and it always shows--or maybe it's just the irony of literary structures--that whenever heroes seem to look for a pause in the quest by reaching the garden and relaxing, that's where they find out they are in the most dangerous situation. Whenever you think that you are safe and you can disarm--there's the running cool water of the river, there is the shade, there is the fragrance of the landscape, that's the wherewithal, the way these bucolic gardens are described--that's exactly when the snake will appear. That's exactly when the enemy will be capable of reaching you and overwhelming you. In other words, these are all places of temptation, and that's what happens here. Dante arrives here and he sees the poets, Canto IV--it's extraordinary--they're all virtuous heathens. First of all he enumerates the characters, scientists, and philosophers from the Greek and Roman world, but they are placed a little bit at the edge; they really don't seem to have much impact on the situation here. But then the dramatic situation comes at lines 80 and following, when he sees the poets, the classical poets from Homer to Horace and Ovid: "'O you who honor both science and art, who are these who have such honor that it sets them apart from the condition of the rest?' And he said to me," Virgil speaking, "'Their honorable fame, which resounds in thy life above, gains favor in Heaven,'" etc. "'Honor the lofty poet; his shade returns that left us.' When the voice had paused and there was silence...
I saw four great shades coming to us; their looks were neither sad nor joyful." As befits limbo. It's clearly like life here. That's what the after-life is in Dante's conception: it's only an extension of what we choose to do on this earth. If you really think that the beauty of life--which is not a bad idea--is having endless seminars on aesthetics or poetry, as these poets do, then that's what your after-life is. You sit down on the grass and go on talking about beautiful things, he says. "The good master began: 'Mark him there with sword in hand who comes before the three as their lord. He is Homer, the sovereign poet. He that comes next is Horace, the moralist. Ovid is the third, and the last, Lucan,'" the famous epic poet whom Dante will again celebrate in Purgatory. "'Since each shares with me in the name the one voice uttered, they give me honorable welcome, and in this do well.' Thus I saw assemble the noble school"--school, in Greek scholē, also means leisure, the leisurely life; the scholar, the world of play and the world of leisure--"of the lord of loftiest song, who flies like an eagle," Homer, "above the rest." And the irony, of course, is that he's blind; the eagle is supposed to have the sharp view, the sharp vision, but Homer's vision is an inner vision. He's blind because he's looking inward in order to know what the song he is to sing will be about. "After they had talked together for a time, they turned to me with a sign of greeting and my master smiled at this; and then they showed me still greater honor, for they made me one of their number, so that I was the sixth among those high intelligences. Thus, we went on as far as the light, talking of things which were fitting for that place and of which it is well now to be silent." Here is Dante. He inscribes himself in the history of Western poetry; from Homer to Dante, he counts himself as sixth among them. Here's Virgil, here's Lucan, here's Ovid, here's Horace, and of course the master of them all, Homer. We go from Homer to Dante; it's a little history, if you wish, of Western poetry, and Dante thinks that he belongs to it. We could say a number of things about what they are talking about. He doesn't say, but we can easily infer that they talk about poetry: they talk about their craft. There may be a little link that I would like you to reflect on, on your own, between the garden and the kind of poetry, the kind of beauty, that they are really talking about. This is the way Dante's imagination works. You have to pull together things that don't seem to be connected. So this is the garden; there's some kind of self-absorption about it. There seems to be a kind of self-enclosure about this kind of poetry. It's a little scene that reflects on what the spiritual condition of these poets may have been like. But what is most surprising, and what constitutes the temptation of this canto, the temptation of the scene for the pilgrim's own spiritual pilgrimage--he's involved in a spiritual descent which turns out to be an ascent--is that he says at one point, "I was sixth among such genius." Do you see the discrepancy? Do you see how this is jarring? Can you hear it? He is going down for redemption. He is descending in humility, and yet now he talks as a poet; his poetic voice is one that elevates itself. There seems to be a kind of discrepancy between the two, and that's the great temptation of Dante.
As a poet he has been claiming that poetry is better than philosophy, that poetry is like history, and the first concern Dante has is to reflect on the boldness of this claim, on how this kind of claim about the importance of poetry can turn out to be a temptation for him. It's a hubris, and I focus on it, as I have focused on it, because in effect that's what we're going to discuss next time. Canto V is an extended reflection, a way for Dante to reflect on this claim, an extended reflection on the dangers of such a claim and on the responsibility of writing poetry. Dante will meet the great heroine of all love stories, Francesca. Everybody knows about this fantastic figure, a lovely figure, Francesca, who dies for love; but Dante is actually encountering a reader of his own poetry, and he witnesses the kind of traps and risks that reading implies. We are going to go on next time thinking about this whole question of reading and the responsibilities of writing. Canto IV, by the way--I cannot leave without saying something about its epic quality, because I have been defining the poem as novelistic vis-à-vis the circular structure of Virgil's understanding of reincarnation, and therefore vis-à-vis the great notion of what an epic is; and now I will say that Dante actually goes on mixing his genres. As soon as you formulate something, Dante has a way of undercutting that formulation. Canto IV ends with a miniature representation of the epic quality of this text. It goes on enumerating all the souls that he can see, which is really, as you know, if you remember from your Iliad, like the scene at the gates of Ilion, where at one point Helen goes on naming the Greek chiefs for the old Priam. You remember that scene? Some of you may remember it, and this is the way Canto IV ends, with lines 124 and following: "There before me on the enamelled green were shown to me the great spirits, by the sight of whom I am uplifted in myself. I saw Electra with many in her company, of whom I knew Hector and Aeneas and Caesar... I saw Camilla, Penthesilea on the other side, and I saw the Latian king who sat with his daughter Lavinia; I saw Brutus who drove out Tarquin, Lucrece, Julia, Marcia, and Cornelia; and by himself ... I saw Saladin"--a great sign of honor, Saladin who sits by himself in a kind of lofty state--"and when I raised my eyes a little higher I saw the master of them that know, sitting amid the philosophic family... Socrates... Plato"--the master of them that know being Aristotle--"Democritus, Diogenes..." all the Greek philosophers, and then Orpheus, Cicero, Linus, Seneca, Euclid, Ptolemy. A version of the classical encyclopedia: all of knowledge is gathered here, and yet Dante has a way of saying that traditional encyclopedias, those formal structures meant to organize the world of knowledge, have something wanting about them. Why? What's wanting about them? They never tell you how you can really educate yourself. They never describe the process of education. To describe the process of education you have to have an encyclopedic poem, where you are shown the phases and the stages of learning, as the pilgrim will be. The second thing I have to say--and with this I will close my remarks for the day--concerns any enumeration such as you have here, an epic enumeration.
Enumerations always imply the wish of a narrative, such as an encyclopedia--because an enumeration is a little bit of an encyclopedic form, I'd say--to encompass the whole of reality: that's what encyclopedias want to do, to encompass the whole of intellectual reality. And yet enumerations, by virtue of being enumerations, tell you that no totalization is possible. There is something that always escapes the formal ordering that the encyclopedias want to reach. And for now I stop with my remarks, and we have a few minutes in case there are questions, as I hope there are. I welcome questions. I will repeat your questions, by the way, for the benefit of the videotape. Please. Student: So when Dante talks about the first group of sinners, who were for themselves, who didn't take a side, do you think that was also political commentary? Prof: Yes. The question is: Dante talks about the neutral angels, he calls them "for themselves" and we say it's "by themselves," and the question really is, is that also a political commentary? Even before repeating the question, I said yes, absolutely. Dante will use the very same experience for himself later in Paradise, where he will say that in many ways being by oneself--there are times when that can become an act of virtue, where the notion of neutrality is going to be redefined. Dante clearly doesn't like neutrality, all right. Why does he object to neutrality? Because it's a language of privation for him; it's the decision not to take decisions. But you are always making a decision, even when you think that you are not making one. To him that is the sign of a great cowardice. He explains that in cosmic terms. This is the metaphysics of the great war in Heaven, which, by the way, enables me now to say that it gives the whole of Hell a kind of symmetrical structure, because it clearly begins with the neutral angels and ends with the encounter with Lucifer at the end. So you see that this is really what frames the narrative of Hell. To go back to your question: when Dante talks about politics, he also believed, for a while, that he had to take sides. He did, and--going back now to the few remarks about his language and his political involvement from the first class, the first seminar--he was banned, he was sentenced to exile, and then he removed himself from all partisan politics. How did he do this and why did he do this? For a while he was a Guelf, thrown out of Florence with many others, also as an act of punishment for a decision he had made to throw out the Ghibellines, among whom was his own best friend, Guido Cavalcanti, who died in exile. He's thrown out, and we do know these historical happenings. The exiles spend the first few years of their exile plotting and going through machinations as to how to get back to Florence and, damn it, really make them pay. He realizes very quickly the wickedness of this plan. He realizes that his own party is no better than that of the Ghibellines, and so he removes himself from them; the act of removal from the criminal violence that he and his accomplices were for a while contemplating--which turns out to be madness--is what he decides on as an act of virtue: absolute solitude, and that is being by himself. There is a kind of judgment here, but let me refine that: how can you go on describing the act of neutrality as bad,
and yet there are times when neutrality can become nothing less than a virtuous decision, like the one he will make? Other questions? Please. Student: I have a question from last Tuesday's lecture. Prof: Last Tuesday--yes, please. Student: Last Tuesday you talked about friendship and love, love and [inaudible]. There was no conversation about lust, and we know from history that both of them, Beatrice and Dante, were married and that he had children, and I'm wondering if the physical aspect of love is irrelevant for the poet. Prof: The question refers to the remark I made--which was picked up by another student as well--about love and friendship as the two are dramatized in the Vita nuova. It asks me to explore more the role of lust, since we know that Dante was married: what does lust have to do with love? That, I take it, is really the question. Let me just say that it's a very complicated question, and I will talk about this next time when we deal with Francesca, who is of course the heroine of lust, if there ever was one. Then we'll see what Dante means by that. It clearly is an issue related to love, fundamentally related to love. But to believe--and I have probably confused some of you here, but don't be confused--to believe that Dante's judgment of Francesca is limited to the representation of Canto V, where he describes the relationship between lust and love, would be a grand mistake. Dante understands that without physicality there could be no love, that there is no soul which is not connected to a body. In fact, some of the remarks that I made today can be construed--I did not make the point, but they can be construed--as his sense of the inevitability of the body: that the intellect in and by itself, the soul in and of itself, without materiality, without some degree of being wedded to the body, is really not part of the human experience. This is what I was saying today about the fact that the journey is not only an intellectual journey. It has to be done in the body; it's part of the sense of the inevitability of the body. After all, this is really what his Christianity is about: it's about the incarnation. It's about being embodied, about the divinity being embodied and therefore entering our human condition. Let me also say that Dante goes on thinking about Francesca. He condemns her, and Francesca is always circling around in this state of permanent desire with Paolo; in other words, by moving around they are really describing their unquiet hearts. When you are at peace you just stop and sit. She just goes on moving around and around; that's their punishment. And yet, he remembers her. When Dante reaches Heaven and the metaphysics of Paradise, and he has to talk about the kiss of creation--how God creates the world by imprinting a kiss on the material that he had at his disposal--the language that he will use is going to be the language of Francesca, and that to me means that the kiss of Francesca can also be construed as the existential counterpart of God's kiss on creation. I don't think that he's redeeming her--there's no such thing as redemption for the infernal souls--and yet there is an idea, there is something that may have gone wrong, and we'll have to see what it is, and it's not lust. We have to see what the situation is; it's not lust.
I don't think it's only lust; there's something even larger about her. The mistake that she seems to be making is of a different sort, and I'll keep you hanging until next time--otherwise what's the point? We have to look at Canto V. Let's read Canto V for next Tuesday. But before you go, there's some time, we have a couple of minutes--other questions? Please. Student: In Canto I, when he is talking to Virgil, he says, "you're the only one from whom my writing drew the noble stuff of which I have been [inaudible]." It's really kind of a lie, isn't it? Prof: He says-- Student: He's saying that he's taken his style from him, and that's the style for which he's gained honor, but he's not honored as an epic poet. What are the implications of what he's saying, because it's really not actually true? Prof: Okay, the question is a reference to the encounter between the pilgrim and Virgil in Canto I of Inferno, where Dante, in what I called the captatio benevolentiae, the capturing of the benevolence of Virgil--he doesn't even know if the figure is a shade or a living man; it turns out to be a shade--says, well, you are the poet from whom I took the style that has brought me honor. The real question is: this is a little bit of a lie, isn't it, because Dante by this time is not really known as an epic poet. I wouldn't really call it a lie; I would say it's part of the rhetorical simulation, of capturing the benevolence of the listener. At the same time it's really an acknowledgement of Virgil's mastery, and what is Virgil's mastery? Dante really thought that Latin is not a dead language, that the vernacular is really what Latin has become in time, so the continuity between Virgil and him is real; there's a kind of poetic continuity. You're quite right that literally it's not true. At that point, Dante is more of a provincial poet, or the poet who follows the stilnovisti, the poets of the Sweet New Style from Bologna and Florence, and then there is Virgil. Thank you so much; we'll see you next time.