Lecture 16: Literary Prophecy (Amos)

Professor Christine Hayes: Let me just briefly recap as we are moving into the literary prophets, or the classical prophets, as they are sometimes called. It is easiest to think of them as being associated with particular crises in the nation's history. We are not going to be looking at them all, and I have picked out some of the main ones that we will be looking at. Really, they are exemplary in a number of different ways. So you have prophets of the Assyrian crisis. This is when the two kingdoms still exist. In the north, prophesying in Israel, you have Amos and Hosea. And in the south you have Isaiah and Micah. So think of those four books together. It will be easier to note the differences among them if you group them together. And we will be doing that. Then the prophets of the Babylonian crisis. By this time the northern kingdom has fallen. We are moving towards the end of the seventh century. The Assyrian Empire has fallen in 612. The prophet Nahum talks about the fall of Assyria. And we move then into the very end of the century and down to the beginning of the sixth century, with the destruction of Judah. So prophets associated with that time: particularly Jeremiah, and also Habakkuk. Then we have the prophet of the exile, who is Ezekiel. And then the post-exilic period, or the Restoration, when the Israelites are allowed to return to their land, and we have several prophets at that time: Haggai, Zechariah, Joel and Malachi will be the prophets we'll be looking at briefly. There are three long prophetic works, and I have circled those: Isaiah, Jeremiah and Ezekiel, one associated with each of the three crises. So again, another mnemonic for you is to think of them as each associated with one of those major crises. And the rest are all much shorter works, I think Obadiah being the shortest, really just a very, very short work.
There has been a long debate over the degree to which these classical or literary prophets were harking back to long-standing Israelite traditions or constructing norms that would later come to be viewed as long-standing Israelite traditions. Kaufman describes these classical prophets as the standard bearers of the covenant. This is his term. And in his view they could be seen as conservatives, but by the same token he says the new prophecy conceived of ideas that Israelite thought of the earlier time had not conceived. And in this sense, Kaufman argues, they are also radical. He describes them as radical conservatives or conservative radicals. As a result of the radical nature of some of their message, the prophets had to speak with great exaggeration. And you will notice this when you read their writing. Great exaggeration, a lot of dramatic imagery, dramatic features. They denounce the people. They chastise the people. And as a result they were often scoffed at or even persecuted in return. But eventually the nation would come to enshrine their words in its ancient sacred heritage, which is testimony to the fact that their message must have served a crucial role at some time in the changing political and religious reality. Now, we have already talked about the Deuteronomistic historiosophy, and how it developed as an interpretation of the historical catastrophes of 722 and 586, and this interpretation made it possible for Israelites to accept the reality of the defeat of the nation, the defeat of Israel, without at the same time losing faith in God. The defeat of Israel, the exile of the nation, was not to be taken as evidence that God was not the one supreme Lord of history, or that God was a faithless God, who would abandon his covenant and his people. The defeat and the exile were interpreted to affirm precisely the opposite. God, as the universal God, could use other nations as his tool.
He could use these nations to execute judgment on his people, and he did this in an act of faithfulness ultimately, faithful to his covenant, which promised punishment and chastisement for the sins of the people, the sins of idolatry. The classical literary prophets, Isaiah, Jeremiah, Ezekiel and the 12 minor prophets, follow the basic thrust of this interpretation of events. They agree that the defeat and the exile are evidence rather than disproof of God's universal sovereignty, and they agree that they are God's just punishment for sin. But they are going to differ from the Deuteronomist in two significant ways. First, they are going to differ in their identification of that sin. For the prophets, it is not just idolatry for which Israel is punished, although that is important, too. And second of all, they are going to differ in their emphasis on a future restoration and glory, a message that we do not find in the Deuteronomistic historian. The individual books of the prophets are really arranged according to two interacting principles: size and chronology. So the first three books are the very large prophetic books: Isaiah, Jeremiah, and Ezekiel, in chronological order of the three crises we have outlined here. And then you have the minor prophets, and the minor prophets, again, are in roughly chronological order, although book size also plays a bit of a role in arranging these materials. That was very common in the ancient world--for size to determine the order of books in a corpus. We are not going to be following the order of the canon, because it does jump around chronologically, first with the three large books and then going back and having some of the smaller books of earlier prophets. We are going to be looking at them in chronological order. We are going to be looking at them against the backdrop of the historical crisis to which they are responding.
So we are going to begin with the first of the literary prophets, even though it is not the first in the order of the Bible, and that is Amos. Amos preached during a relatively stable period of time. This was in the northern kingdom. It was around 750 under the reign of Jeroboam the Second, not the first. And this is at a time before the Assyrian threat is becoming very apparent, and Assyria's empire-building ambitions--before those are becoming very apparent. There are many passages that suggest that Amos was an ordinary shepherd. He came from a small town about 10 miles south of Jerusalem; so he came from the southern kingdom to prophesy in the northern kingdom. He was called to Bethel, which was one of the royal sanctuaries in the northern kingdom, to deliver his prophecies. But despite the suggestion that he was an ordinary shepherd, it seems more likely that he was probably a fairly wealthy owner of land and flocks. He was probably educated and literate. The northerners are said to be very surprised by his eloquence and his intelligence. But they did not like his message, and ultimately he is going to be forced to go back to the southern kingdom. The Book of Amos can be divided structurally into four sections, which I have listed on the board over here. You first have a set of brief oracles of doom. These are in the first two chapters, Amos 1 and 2. And then you have a series of three short oracles: an oracle to the women of Samaria, an oracle to the wealthy of Samaria and Jerusalem, and then an oracle to Israel as a whole. These are in chapters 3-6. This is followed then by five symbolic visions which receive interpretation. These are visions of judgment: first locusts, then a fire, then a plumb line that one uses in building a building, a basket of fruit, and then a vision of God standing by the altar at Bethel. This happens in chapters 7-9, running to about chapter 9, verses 8 and 9.
This section, besides the five visions, also has a little narrative account of Amos' conflict with a priest at Bethel, the priest Amaziah, who accuses Amos of treason. And then there is a concluding epilogue in the ninth chapter that runs for about seven or eight verses to the end of the book. The Book of Amos is a wonderful place to start for us because it contains many features that are going to be typical of all of the classical prophets, all of the literary prophets by and large. And also this book introduces certain major themes. These will become standard themes of prophecy with some variation here and there. So by setting them out in the Book of Amos, then we can really go forward and just look at the variations on some of those themes that are sounded by some of the other prophets. So first some literary features, and then we will talk about the themes of the book. In terms of literary features, I have jotted down a few here. You see in the book what we would call editorial notes. That is to say, you have notes in the Book of Amos which are in the third person. These will very often occur at the beginning of a book. They sort of introduce or set the stage. So we have in Amos: "The words of Amos, a sheep breeder from Tekoa, who prophesied concerning Israel in the reigns of kings Uzziah of Judah and Jeroboam, the son of Joash of Israel, two years before the earthquake." So almost all of the prophetic books are going to contain an introduction of this type, some third-person phrase which will identify the place and the prophet and his time. There is another kind of writing in some of these works, as well, which is in the first person. It is not always in the third person, but you sometimes have first-person passages in which the prophet himself will speak about and describe something about himself. It's a stepping aside from the oracular moment and speaking in some way about some experience that he has had.
So we have these first-person and these third-person passages that give us information about the prophet. The third-person passages, we surmise, may not have been written by the prophet himself; they were probably written by disciples or others who were responsible for collecting the prophet's oracles, inditing the prophet's oracles. In Amos 7, we find an example of this kind of writing, where you have a description of Amos in debate with a priest, the priest Amaziah, at the Shrine of Bethel. So you have the oracular statements, but you also have these other identifying passages as well, and descriptive passages. This brings us then to a second point, which is that the prophetic books are a compilation of a variety of materials. They consist of varied materials that have been collected. They have been revised. They have been supplemented. The prophets' oracles, which were delivered in various situations over a period of time, were apparently saved and then compiled, again perhaps by the prophet himself, perhaps by his disciples. We know that prophetic oracles were written down and transmitted in other ancient Near Eastern societies. We know this about Assyria, for example. These were literary compositions, and the literary nature of these compositions will account sometimes for their ordering. Sometimes it appears that there is not chronological ordering. This is one of the things that can make it so hard to read some of the prophetic writings, because the oracles are not necessarily in chronological order. They are literary works, and sometimes the prophet or the disciple or the editor would combine oracles or juxtapose oracles according to principles other than chronology--literary principles.
So for example, you very often find the principle of a catchword: a prophecy or oracle might end with a particular word in its last line or last verse, and so next to it will be a second prophecy or oracle which echoes that word in its opening line, and so the two have been brought together for literary reasons. So Amos 3:2 reads: "You alone have I known of all the families of the earth." And that is the concluding line of that particular oracle, and that verb "to know" is probably the catchword for the oracle that follows, because the next one opens, "Do two people walk together unless they know each other?" So that may have suggested the juxtaposition of those two. So we need to understand that the prophetic books are really little anthologies, anthologies of oracles. They can be connected for literary rather than substantive or chronological reasons. You can't assume chronological sequence. It is not like reading the historical books of Joshua through 2 Kings. It is very, very different. An interesting question concerns the degree to which the prophetic books preserve the actual oracles of the prophets. Certainly there is no doubt that there has been revision and supplementation of the prophetic books. Not everything in the Book of Amos is from Amos himself. Additions have been made to most of the prophetic books. It was believed that the words of the prophets had enduring significance. Those who received these words believed that they had enduring significance. And so they were supplemented because of the conviction that they had enduring relevance, not in spite of it. And some scholars believe that this accounts for the oracle in Amos 2 that prophesies the fall of Judah. Amos is living around 750, in the middle of the eighth century, not in the sixth century. He is living in the eighth century. But he prophesies the fall of Judah, and most people would assume that this is an addition which was made to the Book of Amos after Judah's fall.
These supplementations and additions and revisions that we will see in some of the prophetic books, and some of them are quite obvious, were not completely promiscuous. I don't want to give you the idea that they were, because there are many instances in which a prophet's words are not updated, are not modified, even though the failure to do this leaves the prophecy woefully out of step with what actually came to be later. So those kinds of inconsistencies between a prophet's words and later fact would suggest that there was a strong tendency to preserve the words of the prophet faithfully. So we will see both tendencies within the literature: a tendency to leave words intact, and at the same time, a tendency to supplement or to add sections to the prophetic writing. A third feature that we will see in many of the prophetic books is what we call "the call." And this is common to most of the prophets. It is the claim to authority as a result of having been called by God to deliver his word. We talked before about apostolic prophecy, this notion of the prophet as someone who is sent by God with a message, not someone who is consulted by a client to find out what God thinks. The irresistibility of the call is a feature of these passages, and we find it illustrated in Amos 3:3-8, which cites a series of proverbs that illustrate inexorable cause and effect. For example, he says, "Does a trap spring up from the ground/Unless it has caught something?" And then the oracle continues, "A lion has roared,/Who can but fear?/My Lord God has spoken,/Who can but prophesy?" There is this irresistible call. We find metaphors used liberally throughout the prophetic writings. And Amos describes his prophecy by means of two types of metaphors, word and vision. So many of the prophetic oracles will be introduced by the phrase "the word of Yahweh came unto prophet X."
The word of Yahweh came--sort of an image of God speaking directly to these prophets in human language, which is then repeated or passed on to the audience, to the listener. This could be understood in a literal sense, or we could take it as a metaphor. Behind it, however, is the simple idea that it is God who is communicating to the prophet, and the prophet then communicates the message to the people. But in addition to hearing, Amos and many of the other prophets also see. So the word of the Lord comes, but in other moments the prophetic oracle will be introduced by verbs or words connected with seeing and vision. Hence the word "seer" as a designation for a prophet also. Amos is shown visions of various kinds, particularly those five visions clumped in chapters 7, 8, and 9. And this is true of the prophets generally. These visions might be visions of God speaking, or visions of God performing some kind of action. They might also be visions of perfectly ordinary objects or events that carry some sort of symbolic significance. So we have five visions in Amos in chapters 7-9, and some of them are visions of ordinary objects, but those objects have some special coded meaning or symbolic significance for Israel. And then we have visions of extraordinary things, as well. So we have a locust plague. It is about to consume the crop right after the king has taken his share, his taxes of the crop. Not such an extraordinary vision. But then there is a vision of a fire that consumes the lower waters that are pressed down below the earth, and which threatens to consume even the soil of the earth itself. So it is an extraordinary vision. We have a vision of a plumb line--the tool that is used by builders. There is a vision of God destroying worshipers in the temple. The vision in chapter 8 is an ordinary vision. It is a vision of a basket of summer fruit. The Hebrew word for summer or summer fruit is kayits, and this is a pun because the word kets means end.
So the vision of kayits is indicating or symbolizing the kets, the end of Israel. And these kinds of symbolic visions will very often typically include puns of this type. So another point to make about the literary features of prophetic writings is that they do contain or employ a variety of literary forms. One commonplace form that you will see over and over again in these writings is a form that we call the oracle, an oracle against the nations. This is found in Amos. It's found also in the three large prophetic writings: Isaiah, Jeremiah and Ezekiel. Amos 1 and 2 contain seven of these oracles that inveigh against the nations. But Amos gives the form a new twist. And this is what's interesting. Six of the seven oracles are directed against surrounding nations, and they are excoriated for their inhumane treatment of others, Israelites and non-Israelites, during wars and conflicts. As punishment for their terrible war atrocities, a divine fire is going to break out and destroy all of their palaces and fortified places. But then the twist comes, because after these six horrific oracles, which condemn the nations for these brutal acts of atrocity in war, Amos then turns to address his own people. And he says the same divine power will consume the people of Yahweh because of the atrocities and inhumanities that they commit even in times of peace! So the seventh, the climactic oracle, announces that God's wrath will be directed at Israel, and this is a very unwelcome, unexpected statement. And you can see how he perhaps would almost draw his audience in, you know, with these images of their enemies getting what they deserve, only to then turn it around (having drawn them in, seduced them if you will with his words)--to turn around and then charge them with something even worse. The term "Israel" that he uses is, of course, ambiguous. That is one of the problems with some of the prophetic writings.
You are never completely sure whether they're prophesying against the northern kingdom, Israel, or the House of Israel--both kingdoms together, the whole tribal confederation. Some passages in Amos would suggest one. Some passages suggest the other. The other thing that we find in Amos is an oracle against Judah, against the southern kingdom. This is in chapter 2. It is just two lines, verses 4 and 5. And many people identify that as a later addition by an editor. First of all, it's written in very standard, sort of Deuteronomistic language. And also, if we leave it out, then we have a nice literary pattern. We have six oracles plus one. We have six oracles against foreign nations, and then we have one against Israel. And that pattern is a very standard literary pattern, particularly in poetic sections of the Bible, and the prophets are written in an elevated poetic style. We very often have a six plus one pattern. That's related to another pattern that we also see in Amos, which is the three plus one pattern; six plus one is just a doubling of it. The three plus one pattern you will recognize. It is quite explicit at times. Amos will say, "for three transgressions of Damascus, for four, I will not revoke it"--the decree, the punishment. A similar kind of language is used in verse 6 for Gaza, in verse 9 for Tyre, in verse 11 for Edom, and verse 13 for the Ammonites, and so on. So we often have this pattern. And so the suggestion by scholars is that without that prophecy concerning the fall of Judah, which post-dates Amos, you would have a nice complete six plus one pattern. And this might be the sign of a later editor updating Amos' prophecy, so that it would look as though he had, in fact, prophesied the fall of Judah. You have other sorts of literary patterns and forms used in the prophetic works. Some of the literary forms we see are hymns. We see songs.
We see laments, particularly laments or mourning for Israel as if her destruction is already a fait accompli. You find proverbs. Very often when the prophets cite a proverb, they will turn its accepted meaning on its head. They'll take an old proverb and they'll apply it to some new situation and give it a radically new kind of meaning, to sort of shock and surprise their audience. And Amos 3:3-8 contains a lot of proverbs. Another literary form that we will see, and this is an important one, is a literary form that is called the riv, r-i-v. I have it up there: a riv, which basically means a lawsuit, specifically a covenant lawsuit. Many of the prophetic books feature passages in which God basically brings a lawsuit against the people, charging them with breach of covenant, breach of contract, if you will. And in these passages, you have legal metaphors being used throughout: people testifying or witnessing against Israel--can she speak in her defense?--and so on. So the riv, or the covenant lawsuit, is a form we will see here. We will also see it again when we get to the Book of Job. So the prophetic corpus draws on the entire range of literary forms that were available in Israelite literary tradition, and that is what gives the books a very rich and varied texture. So Amos is a model for us in terms of its literary features, but it's also a model for us in terms of some of the themes or the content of the book--because Amos will articulate certain themes that we will see resounding throughout the prophetic literature. There will be some variations on these themes, but some standard themes appear here. So we will review those now. Many scholars, Kaufman among them, have noted that the literature of the classical prophets is most clearly and strongly characterized by a vehement denunciation of the moral decay and social injustice of the period. It really does not matter what period.
"Vehement denunciation" of moral decay and social injustice is the way Kaufman phrases it. Amos criticizes the sins of the nation. He is critical of everyone: the middle class, the government, the king, the establishment, the priesthood--they're all plagued by a superficial kind of piety. For Amos, as for all the prophets we will be looking at, the idea of covenant prescribes a particular relationship with Yahweh, but not only with Yahweh: also with one's fellow human beings. The two are interlinked. It is a sign of closeness to Yahweh that one is concerned for Israel's poor and needy. The two are completely intertwined and interlinked. And so Amos denounces the wealthy. He denounces the powerful and the way they treat the poor. I am going to be reading some passages from Amos to illustrate some of these themes. So Amos 4:1-3--and listen to the dramatic rhetoric that is used: "Hear this word, you cows of Bashan/On the hill of Samaria"--that is the capital of the northern kingdom, Israel: Who defraud the poor,/Who rob the needy;/Who say to your husbands,/"Bring, and let's carouse!"/My Lord God swears by His holiness:/Behold, days are coming upon you/When you will be carried off in baskets,/And, to the last one, in fish baskets,/And taken out [of the city]--/Each one through a breach straight ahead--/And flung on the refuse heap. There is a wonderful pun here, because the wealthy women of Samaria are referred to as cows of Bashan. Now Bashan is an area of very rich pastureland in the Transjordan. And also it is very common in Canaanite literature to refer to the nobility, and even to gods, with terms like bull or ram or cow. These were not insulting terms, as they might be in our culture. These were, in fact, terms that did not offend. These were very complimentary terms. So when he refers to the cows of Bashan (he speaks to the women of Samaria as the cows of Bashan), he is flattering them to begin with.
But the pun is quite wonderful because these women are going to end up like fat cows, as slabs of meat in the butcher's basket or in the fish basket which, you know, is flung out on the refuse heap once it is spoiled. So he takes that term "cows of Bashan" and leads it to this horrendous end. Amos 6:1 and 4-7. This is another scathing attack on the idle life of the carefree rich who ignore the plight of the poor: woe to those "at ease in Zion." Of course, that is the capital of the southern kingdom, Jerusalem, and those "confident on the hill of Samaria," the northern kingdom: You notables of the leading nation/On whom the House of Israel pin their hopes;/[…]/They lie on ivory beds,/Lolling on their couches,/Feasting on lambs from the flock/And on calves from the stalls./They hum snatches of song to the tune of the lute--/They account themselves musicians like David./They drink [straight] from the wine bowls/And anoint themselves with the choicest oils--/But they are not concerned about the ruin of Joseph./Assuredly, right soon/They shall head the column of exiles;/They shall loll no more at festive meals. It is a great image of them lying about as the head of the nation. They will be at the head of the nation as it moves into exile! And on an archaeological note, I understand that in Samaria they have, in fact, uncovered all kinds of ivory furniture and ivory coverings that would then be attached to furniture. So the image of them lolling on ivory couches in Samaria apparently makes a lot of sense. So the moral decay, the greed, the indulgence of the upper classes--this is directly responsible for the social injustice that, according to the prophets, outrages God.
Amos 8:4-6: Listen to this, you who devour the needy, annihilating the poor of the land, saying, "If only the new moon were over, so that we could sell grain; the sabbath, so that we could offer wheat for sale, using [a measure] that is too small and a shekel [weight] that is too big, tilting a dishonest scale, and selling grain refuse as grain! We will buy the poor for silver, the needy for a pair of sandals." The Lord swears by the pride of Jacob: I will never forget any of [their] doings. Again, notice that the prophets are prone to extreme formulations and high-flown rhetoric, and sometimes when you strip away the rhetoric, you see that the crimes being denounced are not murder, rape, or horrendous physical violence--the obvious and grievous violations of social morality. Rather, many scholars have pointed out, I think Kaufman chief among them, that the crimes denounced here are crimes that are prevalent in any society in any era. The crimes that are denounced as being utterly unacceptable to God, infuriating God to the point of destruction of the nation, are the kinds of crimes we see around us every day: taking bribes, improper weights and balances, lack of charity to the poor, indifference to the plight of the debtor. A second theme that is pointed out again by many scholars is what Kaufman calls the idea of the primacy of morality. That is to say, the idea or the doctrine that morality is not just an obligation equal in importance to the cultic or religious obligations, but that morality is perhaps superior to the cult. What God requires of Israel is morality and not cultic service. Now, we are going to see many different attitudes towards the cult among the prophets. So allow that to become a more nuanced statement as we go through. Some are going to reject the cult of the entire nation. Others will not. So there is going to be some variation, but certainly morality is primary.
And their words could, at times, be very harsh and very astonishing. Amos 5:21-24. "I loathe"--he is speaking now as God, right? So God is speaking--God says: "I loathe, I spurn your festivals,/I am not appeased by your solemn assemblies./If you offer Me burnt [sacrifices] or your meal [sacrifices]/I will not accept them;/I will pay no heed/To your gifts of fatlings./Spare Me the sound of your hymns,/And let Me not hear the music of your lutes./But let justice well up like water,/Righteousness like an unfailing stream." This is an attack on empty piety, on the performance of rituals without any meaning, perhaps, behind that performance, or in accompaniment to social injustice--the two can't happen at the same time. And that's a theme that is sounded repeatedly throughout prophetic literature. So for Amos, and for all the prophets, injustice is sacrilege. The ideals of the covenant are of utmost importance. That is why the prophets are called the standard bearers of the covenant, harking back to the covenant obligations. And without these, without the ideals of the covenant, the fulfillment of cultic and ritual obligations in and of itself is a farce. That is not to say that these obligations would be rejected were Israel upholding the covenant. So this rejection of the cult depends, of course, on a caricature of cultic and ritual performance. The prophets caricature it as meaningless. They caricature it as unconcerned with ethics or with the ideals of justice and righteousness. But internal cultural conflicts often do involve the caricaturing or the ridiculing of an opponent's beliefs or practices. For some of the prophets, though, rejection of the cult was quite radical. That is an idea that is not yet really fully formed in Amos. We are going to see, again, that some of the prophets will reject the cult of the nation, not just the cult of the wicked, but the cult of everyone.
Even if performed properly and by righteous persons, there will be one or two prophets who believe the cult has no inherent value or no absolute value for God. In some sense, this is a view that we have already encountered in sources devoted to the cult, even in a source like P, the Priestly material. The Priestly material is already moving towards the idea, or establishing the idea, that the cult is an expression of divine favor rather than divine need. It doesn't really have an actual value necessarily for God. It doesn't really affect his vitality. It is given to humans as a ritual conduit, as a way to attract and maintain God's presence within the community, or to procure atonement for deeds or impurities that might temporarily separate one from God. So already in the Priestly source, we have a very complicated notion of the function of the cult for society and humanity. So the prophetic doctrine of the primacy of morality seems to be a reaction against other views of cultic practice; perhaps there were popular assumptions about the automatic efficacy of the cult and its rites. But Kaufman has been joined by many other scholars who argue that the prophets raised morality to the level of an absolute religious value, and they did so because they saw morality as essentially divine. The essence of God is his moral nature. Moral attributes are the essence of God himself. So Kaufman notes that he who requires justice and righteousness and compassion from human beings is himself just and righteous and compassionate. This is the prophetic view. The moral person can metaphorically be said to share in divinity. This is the kind of apotheosis that you find then in the prophetic writings: not the idea of a transformation into a divine being in life or even after death, but the idea that one strives to be god-like by imitating his moral actions, the idea again of imitatio dei.
A third feature of the prophetic writings, this is again underscored by Kaufman, but also many other scholars, and that is the prophets' view of history, their particular view of history, their interpretation of the catastrophic events of 722 and 586. It is an interpretation that centers on their elevation of morality, because the prophets insisted that morality was a decisive, if not the decisive factor, in the nation's history. Israel's acceptance of God's covenant placed certain religious and moral demands on her. Now in the Deuteronomistic view that we have talked about, one sin is singled out as being historically decisive for the nation. Other sins are punished, absolutely. But only one is singled out as being historically decisive for the nation, and that is the sin of idolatry, particularly the idolatry of the royal house. So the Deuteronomistic historian presents the tragic history of the two kingdoms as essentially a sequence of idolatrous aberrations, which were followed by punishment. And this cycle continued until finally there had to be complete destruction. While it is certainly true that moral sins and other religious sins in Israel were punishable in the Deuteronomist's view, it is really only the worship of other gods that brings about national collapse, national exile. And that view is exemplified in 2 Kings 17, which I have read to you. It does not mention moral sins as leading to the collapse of the state. It harps on idolatry. Idolatry was what provoked God to drive the nation into exile. The view of the classical prophets is a little different. Israel's history is determined by moral factors, not just religious factors. So the nation is punished not only for idolatry, but for moral failings. And, of course, the two are to a large degree intertwined. But the emphasis on the moral is striking in the prophets. And it may not be so startling to hear that God would doom a generation or doom a nation for grave moral sins, like murder and violence. 
This is something we have already seen in the generation of the flood. The cities of Sodom and Gomorrah--they were destroyed for grievous violations of morality: murder, violence and so on. The prophets, however, are claiming that the nation is doomed because of commonplace wrongs, because of bribe-taking, because of false scales and false weights that are being used in the marketplace. These are the crimes for which destruction of the nation and exile will take place. Amos 2:6 through 8: Thus said the Lord: For three transgressions of Israel, For four, I will not revoke it [the decree of destruction]: Because they have sold for silver Those whose cause was just [taking bribes in a courtroom setting], And the needy for a pair of sandals. You who trample the heads of the poor Into the dust of the ground, And make the humble walk a twisted course! So this is the first difference really between the Deuteronomistic interpretation of the nation's history--the destruction of Israel--and the prophetic interpretation. For the prophets, the national catastrophes are just punishment for sin, but not just the sin of idolatry, for all sins no matter how petty, no matter how venial, because all sins violate the terms of the covenant code, which is given specially to Israel. And the terms of the covenant--being vassals to the sovereign Yahweh means treating co-vassals in a particular way, and it is breach of covenant not to do that. And, again, how much the prophets were harking back to an older tradition, to ancient traditions about Israel and its covenant relationship, traditions according to which Israel's redemption and election entailed moral obligations; how much they were the ones to actually generate and argue for this idea again is hotly debated by scholars. It is not an issue that we need to decide.
But I would note that the primacy of morality in Israelite religion certainly dates back at least to the times of the earliest prophets, Amos in the eighth century for example, and may indeed have had antecedents. It certainly didn't just arise in the exile as some scholars would have us believe. It certainly was not the invention of the Deuteronomistic historian. It's alive and well in some of these very early prophets. I am going to turn now to the second difference between the Deuteronomistic and the prophetic interpretation of Israel's history. And that is that the prophets coupled their message of tragedy and doom with a message of hope and consolation. And this is something that just simply doesn't come within the purview of the Deuteronomistic historian's writing. First let me say a little bit about the message of doom and then the message of hope and consolation. One of the things that's so interesting in the classical prophets is that they give a new content to older Israelite ideas about the end of days, or what we call eschatology. Eschatology = an account of the eschaton, eschaton meaning the end. So eschatology is an account of the end. The prophets warned that unless they changed, the people were going to suffer the punishment that was due them. And, in fact, the people were very foolish to be eagerly awaiting or eagerly expecting what was popularly known as the Day of Yahweh, or the Day of the Lord. And so the prophets refer to the Day of Yahweh as if it were a popular conception out there in the general culture. It was a popular idea at the time that on some future occasion God would dramatically intervene in world affairs and he would do so on Israel's behalf. He would lead Israel in victory over her enemies. They would be punished. Israel would be restored to her full and former glory. 
And that day, the Day of the Lord or the Day of Yahweh, in the popular mind, was going to be a marvelous day, a day of victory for Israel, triumph for Israel and a day of vengeance on her enemies. Amos 5:18 to 20 talks about the people as desirous of the Day of Yahweh. They are very confident that this is going to be a day of light, a day of blessing, a day of victory, he says. But the prophets, Amos among them, tell a different story. According to them, if there is no change then this Day of Yahweh is not going to be some glorious thing that the people should be eagerly awaiting. It's not going to be a day of triumph for Israel. It will not be a day of vengeance on her enemies. It's going to be a dark day of destruction. It is going to be a day of doom when God will finally call his own people to account. So this is another instance of the way in which the prophets try to radically surprise their audience by taking an older concept and reversing its meaning, changing its meaning. And here they have transformed the popular image of the Day of Yahweh from one of national triumph to one of national judgment. Amos 5:18 through 20: Ah, you who wish For the day of the Lord! Why should you want The day of the Lord? It shall be darkness, not light! --As if a man should run from a lion And be attacked by a bear; Or if he got indoors, Should lean his hand on the wall And be bitten by a snake! [there is going to be no place to hide, in other words] Surely the day of the Lord shall be Not light, but darkness, Blackest night without a glimmer. Or chapter 8:9 through 12: And in that day--declares my Lord God-- I will make the sun set at noon, I will darken the earth on a sunny day. I will turn your festivals into mourning And all your songs into dirges; I will put sackcloth on all loins And tonsures on every head. [mourning rites] I will make it mourn as for an only child, All of it as on a bitter day.
So again at the heart of this idea that the Day of Yahweh is being transformed into this day of judgment, is the old idea that God is the God of history. Right? God can control the destiny of nations. He can control the actions of nations. That is not a new idea. But in the past, or not so much in the past, I suppose--it would have been present to the prophets--the prophets were reacting against a notion that God's involvement with other nations was always undertaken on Israel's behalf. This is the idea they seem to be battling. In other words, they are battling the idea or the assumption that God controlled other nations by exercising judgment on them and punishing them and subjecting them to Israel. And the prophets are challenging this idea. And they are making what would have been heard as a shocking and extraordinary claim. God is, of course, yes, a God of history, of all history. He is concerned with all nations, not only Israel. But his involvement with other nations doesn't extend merely to their subjugation. If need be, or rather if Israel deserves, then God will raise up another nation against her. So the final chapter in Amos begins by proclaiming this idea of utter destruction. I will slay them all, God says, and "not one of them shall survive." Wherever they hide, under the earth, in the heavens, at the bottom of the sea, God is going to haul them out and He is going to slay them. And what about the covenant? Isn't it a guarantee of privilege or safety? Again, for Amos, its primary function is to bind the nation in a code of conduct, and violations of that code are going to be severely punished. So in chapter 9 verses 7 to 8, Amos makes the startling claim that in God's eyes Israel is really no different from the rest of the nations. He elevated her. He can also lower her. To Me, O Israelites, you are Just like the Ethiopians True, I brought Israel up From the land of Egypt, But also the Philistines from Caphtor And the Aramaeans from Kir. 
Behold, the Lord God has His eye Upon the sinful kingdom: I will wipe it off The face of the earth! These are harsh, harsh words. And you also have to remember that Amos was living in a time of relative peace and prosperity, about 750. National confidence is riding high. The people of Israel were pretty convinced that God was with them. They weren't in any real imminent or obvious danger. And Amos was convinced that despite this external appearance of health, the nation was diseased. They were guilty of social crimes and unfaithfulness to their covenantal obligations. And so he says they are headed down this path of destruction. Perhaps because of the optimism of the time, Amos had to emphasize this message of doom; his book is a pretty depressing book. Later prophets who were speaking in a different historical setting, in a more desperate historical setting, would often speak words of much more comfort and hope. But Amos doesn't do this. He does indicate that his purpose is the reformation or the reorientation of the nation. He wants to awaken Israel to the fact that change is needed. Amos 5:14 and 15, "Seek good and not evil,/That you may live,/And that the Lord, the God of Hosts,/May truly be with you,/As you think." Right now you think he is with you. He's not. Change, so that he will truly be with you. "Hate evil and love good,/And establish justice in the gate;/Perhaps the Lord, the God of Hosts,/Will be gracious to the remnant of Joseph." The "perhaps" is important, and it is very indicative of Amos' fatalism. This is very much a fatalistic book. The overriding theme of Amos' message is that punishment is inevitable. It is pretty much inevitable. And this is one of the reasons that most scholars believe that the final verses of the book, verses halfway through 8 down to 15, are a later addition by an editor.
It is an epilogue, and it was likely added in order to relieve the gloom and the pessimism and the fatalism of the prophet's message, because in these verses, Amos does an almost complete about-face. We have just finished the first half of verse 8 in Chapter 9. So 9:8a--you have this oracle of complete and devastating judgment: "Behold, the Lord God has His eye/Upon the sinful kingdom:/I will wipe it off/The face of the earth." But then, the second half of the verse, and the beginning of this epilogue that has been added, immediately dilutes this: "But, I will not wholly wipe out/The House of Jacob--declares the Lord." It seems that an editor has desired to qualify this last oracle of doom. And the editor continues, For I will give the order And shake the House of Israel-- Through all the nations-- As one shakes [sand] in a sieve, And not a pebble falls to the ground. All the sinners of My people Shall perish by the sword, Who boast, "Never shall the evil Overtake us or come near us." In that day, I will set up again the fallen booth of David; I will mend its breaches and set up its ruins anew. I will build it firm as in the days of old, [...] A time is coming--declares the Lord-- [...] When the mountains shall drip wine And all the hills shall wave [with grain]. I will restore my people Israel. They shall rebuild ruined cities and inhabit them; [...] They shall till gardens and eat their fruits. And I will plant them upon their soil, Nevermore to be uprooted From the soil I have given them--said the Lord your God. In other words, according to this epilogue, God's punishment of Israel isn't the end of the story. It is one step in a process, and the affliction and the punishment serve a purpose. It is to purge the dross, to chasten Israel. They are going to be put through a sieve. Only the sinners will really perish.
A remnant, presumably a righteous remnant, will be permitted to survive and in due time that remnant will be restored. To summarize Amos, and hopefully this will give us then some foothold as we move into other prophetic books, we need to understand that the Book of Amos is a set of oracles by a prophet addressing a concrete situation in the northern kingdom. It's been subject to some additions that reflect the perspective of a later editor. Amos' message was that sin would be punished by God and it would be punished on a national level--the nation would fall. When the northern kingdom fell, it was understood to be a fulfillment of Amos' words. The Assyrians were the instruments of God's just punishment. So his words were preserved in Judah. After Judah fell, presumably a later editor added a few key passages to reflect this later reality, most significantly the oracle against Judah in chapter 2, verses 4-5, and the epilogue in chapter 9, verse 8b through 15, which explicitly seem to refer to the fall of the southern kingdom. It refers to a future day when the fallen booth of David will be raised. That reflects a knowledge of the end of Judah, the end of the Davidic kingship. And the phrase "on that day" which is used, is a phrase that often signals what we feel is an editorial insertion in a prophetic book. It is pointing forward to some vague future time of restoration. Okay. On Monday, we are going to be moving on to Hosea and Isaiah.
6_The_New_Criticism_and_Other_Western_Formalisms.txt

Prof: All right. Now last time we were giving examples of what might happen if one takes seriously that extraordinary eleventh footnote in Wimsatt's "The Intentional Fallacy" in which he says that the history of words after a poem was composed may well be relevant to the overall structure of the poem and should not be avoided owing simply to a scruple about intention. Essentially, that's what Wimsatt says in the footnote. So I went back to the great creator raising his plastic arm and suggested that, well, maybe after all there might be some good way of complicating the meaning of Akenside by suggesting that the modern, anachronistic meaning of "plastic" would be relevant to the sense of the poem. This by the way--just because one can make this claim and, I think, make it stick in certain cases, doesn't mean that the proposition is any less outrageous. Just imagine a philologist being confronted with the idea that the meaning of words at a certain historical moment isn't the only thing that matters in understanding the meaning of a poem. So I just wanted to give another example a little closer to home in the poem of Yeats, the 1935 poem "Lapis Lazuli." I began talking about it last time. It's a poem which begins, "I have heard that hysterical women say / they are sick of the palette and fiddle-bow, / of poets that are always gay..." The storm clouds of the approaching war are beginning to gather. A lot of people are saying, "Enough of this kind of effete culture. We need to think about important things, particularly about politics and the social order"-- by the way, a very powerful argument in 1935. In any case, Yeats was on the other side of the controversy and insisted, after all, that there is a continuing role for art, as indeed, on the other hand, there may well be even in such times.
So he's sick of everybody saying they don't want to talk about painting, they don't want to talk about music, and they don't want to talk about poets who are "always gay." All right. So then the poem continues. It involves a stone, a piece of lapis lazuli that has a kind of a flaw in it, which is like a "water-course," and where one can imagine a pilgrim climbing toward increased enlightenment. As the poem goes on, Yeats talks about the way in which civilizations crumble-- that is to say, all things fall apart, but then it's possible to build them back up. He says, "All things fall and are built again / and those that build them again are gay." Now, as I said last time, needless to say, Yeats was not aware of the anachronistic meaning that we may be tempted to bring to bear on the poem. Yeats is thinking of Nietzsche, he's thinking of a word, froehlich, which probably is best translated "joyous, energetically joyous." He is just borrowing that word from the translation of a book by Nietzsche. Well and good but, if you were a queer theorist or if you were interested in making not a weak, but a strong claim for the importance of queerness in our literary tradition, you would be very tempted to say, this enriches the poem-- not just, in other words, that they are energetically joyous as creators, but also that in our contemporary sense of the word they're gay. Now this again, as in the case of Akenside, may or may not raise the hackles of the philologists, but there's a certain sense in which from a certain point of view, it's difficult to deny that it doesn't lend a certain coherence, an additionally complex coherence, to the nature of the poem. All right. Then we have Tony the Tow Truck. You're probably beginning to wish I would refer to it, so why don't I? In the second line of Tony the Tow Truck, we learn that "I live in a little yellow garage." 
Now of course, the denotation of the word "yellow," as Cleanth Brooks would say, is that the garage is painted a certain color. The connotation, which undoubtedly the author had no notion of, wasn't thinking of--this is a book for toddlers-- the connotation is that somehow or another there's the imputation of cowardice, possibly also the derogatory imputation of being Asian. Maybe Tony is Asian. Well--okay. This has nothing to do with the text, we say, and yet at the same time suppose it did. We could interrogate the author psychoanalytically. We could say, "Hey, wait a minute. Okay. So you say it was painted yellow. Why don't you say it's painted some other color?" We could begin to put a certain amount of pressure on the text and possibly, as I say, begin to do things with it which are kind of a five-finger exercise-- we'll be doing a lot more of that sort of thing-- but which might work. All right. These are examples of the extraordinary implications of Wimsatt's eleventh footnote, and also, I think, perhaps in advance of today's discussion, clarify to some extent the importance for critics of this kind of notion of unity. In some ways, everything we have to say today will concern the idea of unity. In other words, a connotation is valuable and ought to be invoked even if it's philologically incorrect if it contributes to the unity, the complex building up of the unity, of the literary text. If, on the other hand, it is what Gadamer would call a "bad prejudice"-- that is to say, some aspect of my subjectivity that nothing could possibly be done with in thinking about and interpreting the text-- then you throw it out. So the criterion is: is it relevant to the unified form that we as critics are trying to realize in the text? 
That criterion, as I say--not just for the sorts of semi-facetious readings we can do with Wimsatt's eleventh footnote but also for readings that may at least have some marginal plausibility-- this sense of unity is what governs interpretive decisions of this kind. All right. Now a word or two about the antecedents of the New Criticism: In the first place, the thirties and forties in the academic world bear witness to the rise of a canon of taste largely introduced by the great Modernist writers, particularly by T. S. Eliot. You may notice that Brooks, for example, has a kind of Donne obsession. He gets that from Eliot's essay "The Metaphysical Poets," which is a review essay of a volume of Donne's poems edited by somebody named Grierson which made Donne overnight, for a great many readers, the central poet in the English tradition. Brooks is still, as I say, very much under the influence of this. Well, Eliot, in "The Metaphysical Poets," says some rather interesting things that had far-reaching consequences for the New Criticism. He says, "Poetry in our own time--such is the complexity of the world we live in--must be difficult." He says that poetry has to reconcile all sorts of disparate experience--reading Spinoza, the smell of cooking, the sound of the typewriter. All of this has to be yoked together in the imagery of a good poem, particularly of a metaphysical poem, and this model of complexity is what matters both for modern literature and for literary criticism. Now by the same token, other Modernists like James Joyce are also contributing to this idea of the independent unity of the work of art. In "Stephen Hero" or "Portrait of the Artist as a Young Man," you remember Stephen in his disquisition on form and Aquinas and all the rest of it argues that the work of art is something that is cut off from its creator because its creator withdraws from it and simply pares his fingernails, in the famous expression. It's very interesting.
You remember that in the Wimsatt that you read last time, Wimsatt argues--I think probably thinking about that passage in Joyce-- that the work of art is "cut off" from its author at birth. This is an umbilical cord he's talking about. It has no more connection with its author from birth on and roams the world on its own. Ideas like this, as I say, are taken from the aesthetic and practical thinking about the nature of the work of art that one finds in Modernism. In the meantime, let's consider the academic setting. In the 1930s, when Ransom in particular is writing his polemical manifestos, The New Criticism and The World's Body, and attacking most of what's going on as it's being done by his colleagues, he has two things in particular in mind: in the first place, old-fashioned philology, the kind of thinking about the literary text that would insist that "plastic" means what it means in the eighteenth century-- and a lot of that was being done. This was the golden age of the consolidation of the literary profession. Standard editions are being created. The great learned journals are in their early phase. Knowledge is actually still being accumulated having to do with the basic facts of the literary tradition. We didn't know a great deal about certain authors until this period of the flourishing of philology in the very late nineteenth and early twentieth century took hold and pretty much created for us the archive that we now use today in a variety of ways. So although the New Critics were fed up with philological criticism, I don't mean to be condescending toward it or to suggest that it didn't play a crucially important role in the evolution of literary studies. Now the other thing that was going on, and here--I don't know, depending on one's viewpoint, perhaps some measure of condescension might be in order, but these two were spectacular figures-- the other thing that was going on was that there was a vogue for what might be called "appreciative teaching." 
That is, the contemporary and colleague of I.A. Richards at Cambridge was the famous "Q," Sir Arthur Quiller-Couch, whose mesmerizing lectures had virtually no content at all. They were simply evocations, appreciative evocations, of great works of literature. I have to say that at Yale, exactly contemporary with "Q" we had a similar figure, the person after whom Phelps Gate is named: the great William Lyon Phelps, who would enter the classroom, begin rapturously to quote Tennyson, would clasp his hands and say that it was really good stuff, and the students were so appreciative that they gave hundreds and thousands of dollars to the university ever after. In other words, this was valuable teaching, but again the New Critics were fed up with it. This was the atmosphere they found themselves in, and what they wanted--and this anticipates the atmosphere that you'll see the Russian formalists found themselves in when we turn to them next week-- what they wanted was something like rigor or a scientific basis or some sort of set of principles that could actually be invoked, so that the business of criticism could become more careful and systematic, less scattershot, less effusive and so on. So this is, in effect, the backdrop against which the New Criticism arose in the American academy--influenced, as we'll now see, by certain trends in the British academy--in the thirties and in the forties. All right. Now the first figure I want to talk a little bit about, and the first figure whom you read for today's assignment, is I.A. Richards. Richards, before he joined the English department at Cambridge, was actually a psychologist, trained as a Pavlovian psychologist, so that when you read in his essay about "stimuli" and "needs," you see pretty much where you stand.
His sense of the way in which the mind reacts to the world, to its experience, and the way in which it's an uncomplicated reaction, a resisting reaction, or an adjusting reaction, all has very much to do with Pavlovian principles. These govern to some extent Richards' understanding even of his literary vocation during the period when in 1924 he wrote Principles of Literary Criticism. For Richards, reading is all about experience--that is to say, the way in which the mind is affected by what it reads. And so even though his subject matter is literature, he's nevertheless constantly talking about human psychology-- that is to say, what need is answered by literature, how the psyche responds to literature, what's good and bad about psychic responses, and so on. This is the intellectual focus, in other words, of Richards' work. Now another aspect of his having been and continuing to be a scientist is that Richards really did believe, seriously believed, in reference-- that is to say, in the way in which language really can hook on to the world. Verifiable and falsifiable statement is for Richards the essence of scientific practice and he cares very much about that. He does not, in other words, share with so many literary critics-- perhaps even with Brooks, who follows him in making the fundamental distinction I'm about to describe-- he does not share with the majority of literary critics and artists a kind of distaste for science. This, by the way, is also true of his student, Empson, who was a math major before he became an English major. Both of them take very seriously the notion that there can be a scientific basis for what one does in English or in literary studies. So another aspect of it for Richards is-- because he takes science so seriously-- is that he actually reverses the idea that we talked about last time in Sidney, Kant, Coleridge, Wilde, and Wimsatt. He actually reverses the idea that it's art that's autonomous. 
If you look on page 766 in the left-hand column, you'll find him saying that science is autonomous, and what he means by that is that scientific facts can be described in statements without the need for any kind of psychological context or any dependency on the varieties of human need. It is autonomous in the sense that it is a pure, uncluttered and uninfluenced declaration of fact or falsehood. Then he says: To declare Science autonomous is very different from subordinating all our activities to it. [Here's where poetry comes in.] It is merely to assert that so far as any body of references is undistorted it belongs to Science. It is not in the least to assert that no references may be distorted if advantage can thereby be gained. And just as there are innumerable human activities which require undistorted references [scientific activities] if they are to be satisfied, so there are innumerable other human activities not less important which equally require distorted references or, more plainly, fictions. Here you see Richards' basic distinction between what he calls "scientific statement" and what he calls "emotive statement," the distinction between that which is truly referential-- that which is incontrovertibly verifiable or falsifiable on the one hand, and that which is emotive on the other. Later on Richards changes his vocabulary, and he no longer talks about scientific and emotive language. Even more dangerously, from the standpoint of anybody who likes poetry, he talks instead of "statement," meaning science, and "pseudo-statement," meaning poetry. You are really out on a limb if you're going to defend poetry-- as Richards kept doing--as "pseudo-statement," but of course "pseudo-statement" is just another expression for what he calls here "fiction." Once we sort of settle into this vocabulary, and once we get used to this clearly unquestioningly scientific perspective, why on earth do we need pseudo-statement or fiction at all?
We know very well, by the way, that there are scientists who simply cannot stand to read poetry because it's false, right? Just as Richards says, there's always something kind of archaic or atavistic about poetic thinking. It's not just that it's not trying to tell the truth, as Sidney said--"nothing lieth because it never affirmeth." It is in fact, Richards goes so far as to say, following Plato, lying. Poetry is constantly getting itself in trouble in all sorts of ways--on page 768, for example. He says, sort of toward the top of the right-hand column, page 768: It is evident that the bulk of poetry consists of statements which only the very foolish would think of attempting to verify. They are not the kinds of things which can be verified. In other words, they're a pack of lies. It usually follows from this that somebody like this points out that whereas we all know that a democratic society is the best society to live in, poetry prefers feudal society: it makes better poetry. Whereas we all know that the universe is of a certain kind-- we can't even call it Copernican anymore-- poetry has this odd preference for Ptolemaic astronomy. In other words, everything about poetry is atavistic. It's a throwback to some earlier way of thinking. There is some kind of latent primitivism in poetic thinking, and Richards seems cheerfully to embrace this idea. That's what he means by "fiction" or "pseudo-statement." So why on earth do we want it? We want it, according to Richards, because it answers needs in our psychological makeup that science can't answer. In other words, we are a chaos of desires. Some of them involve the desire for truth-- that is to say, for what we can learn from science-- but a great many of our desires have nothing to do with any notion of truth but, rather, are needs that require fanciful or imaginative fulfillment, fulfillment of other kinds. 
The reason this fulfillment is important and can be valued is, according to Richards, that these needs-- unless they are organized or harmonized so that they work together in what he sometimes calls a "synthesis"-- can actually tear us apart. Literature is what can reconcile conflicting or opposing needs, and Richards cares so much about this basic idea that in another text, not in the text you've just read, he says, shockingly, "Poetry is capable of saving us." In other words, poetry is capable of doing now what religion used to do. Poetry, you remember--this is a scientist-- is no more true than religion, but it can perform the function of religion and is therefore capable of saving us. And so even despite the seeming derogation of the very thing that he purports to be celebrating in books like The Principles of Literary Criticism, Richards does hold on to an extraordinarily important feeling for the mission of poetry to harmonize conflicting needs. That's the role of poetry and that's what it does, simply by evoking our wishes, our desires-- irrespective of truth--in their complicated, chaotic form and synthesizing them organically into something that amounts to psychological peace. It's a little bit like Aristotle's idea of catharsis, which can be understood in a variety of ways, but Milton at the end of Samson Agonistes understands it in one way when he says, Now we have as a result of this tragedy "calm of mind, all passion spent." That could be the motto for Richards' work. The experience of art, the experience of poetry, and the reconciliation of conflicting needs results in a kind of catharsis, a "calm of mind, all passion spent". All right. Now Richards had a student, an undergraduate student, William Empson, who had, as I say, been a math major who decided he'd switch to English. He went to Richards and he said he had an idea about ambiguity. 
He said he felt there was quite a bit that could be written about it, and so he wondered if Richards would mind if maybe he worked on that. Richards said, "Fine. Fine. Sounds terrific. Go do it." So a few months later Empson brought him the manuscript of one of the greatest books of criticism in the twentieth century, and one of the most amazingly surprising: Seven Types of Ambiguity. The brief excerpt you have in your photocopy packet-- I trust that you have picked it up by this time at Tyco [copy center]-- from Empson is taken from Seven Types of Ambiguity. I think Empson is the funniest person who has ever written literary criticism. I think that his deadpan way of bringing things down to earth when they get a little too highfalutin' involves the skill of a genuine stand-up comic. His timing is perfect. He has, in other words, all of the attributes of a great comic writer. I've enjoyed reading him so much that when I was asked to write a book about him, I agreed to do so. I've always been like that. Byron was the only person I enjoyed reading during the nail-biting and tense period of studying for my orals. So I wrote my dissertation on Byron as a result of that--nothing complicated, no deep reason for doing these things. But Empson I hope you enjoy. He's a page-turner, and his extraordinary brilliance as a critic is really just part of the experience of reading him. I'm particularly interested in the excerpt you have and what he does with his notions-- because this is his way of responding to "enthusiastic" or appreciative criticism. One of the tricks of "Q" and Billy Phelps and all the other sort of authors and lecturers in this mode was to say that they read for "atmosphere," that there was something that one just felt along one's bloodstream or in the pulses when one encountered great literature, and their purpose as lecturers and as critics was to evoke the atmosphere of things.
So Empson says, Well, atmosphere, certainly that exists and we can talk about it in all sorts of ways; but after all, what is the use of atmosphere? What is the use of any aspect of literature if, as good scientists, we can't analyze it or can't somehow or another account for it? If there is atmosphere in the passage I'm about to quote from Macbeth, it must be atmosphere of a certain kind and there for a certain reason. What follows, it seems to me, is one of the most staggeringly beautiful, wonderful, amazing riffs on a passage of literature that you can encounter. I'm sorry if I sound a little bit like Billy Phelps, but I do get excited. He quotes the passage from Macbeth. As Empson says, the murderers have just left the room, and Macbeth is sort of twiddling his thumbs, hoping it's getting dark because it's got to get dark before Banquo can be killed. So naturally he looks out the window to see how the time is going, and this is what he says:

… Come, seeling Night,
Skarfe up the tender Eye of pitiful Day
And with thy bloodie and invisible Hand
Cancel and teare to pieces that great Bond
That keeps me pale!

Empson doesn't mention this word, "pale," but in juxtaposition with the crows and rooks it strikes me that it itself is an interesting moment in the passage.

Light thickens, and the Crow
Makes Wing to th' Rookie Wood.

Empson italicizes that because, while he has something to say about every part of the passage--which all good criticism, by the way, should do: if you quote something, say something about all of it--these particular lines are going to be the true focus of what he'll say later.

Good things of Day begin to droope, and drowse,
While Night's black Agents to their Prey's do rowse.
Thou marvell'st at my words, but hold thee still
[Lady Macbeth has come into the room];
Things bad begun, make strong themselves by ill:
So prythee go with me.

All right.
So Empson is fascinated by this passage, and then he gives you, in the next few paragraphs, the amazing variety of grounds for his fascination. He says, Look. This is what people mean when they talk about atmosphere. It's not just something you feel on your pulse. It's something that can be described, something that can be analyzed. And I just want to touch on the last part of it. He says, "Rooks live in a crowd and are mainly vegetarian…"-- Empson's the person who says that the ancient mariner shot the albatross because the crew was hungry. He points out that in the 1798 edition of The Rime of the Ancient Mariner, biscuit worms had gotten into the hard-tack, so naturally, he says, "The particular kind of albatross that the mariner shot, I am told, makes a very tolerable broth." This is the mode of William Empson. So he begins here:

Rooks live in a crowd and are mainly vegetarian; Crow may be either another name for rook, especially when seen alone, or it may mean the solitary Carrion crow. This subdued pun [this ambiguity--remember, this is a book about ambiguity] is made to imply here that Macbeth, looking out of the window, is trying to see himself as a murderer and can only see himself in the position of the crow: that his day of power now is closing; that he has to distinguish himself from the other rooks by a difference of name, rook-crow, like the kingly title, only; that he is anxious at bottom to be one with the other rooks, not to murder them; that he can no longer, or that he may yet, be united with the rookery; and that he is murdering Banquo in a forlorn attempt to obtain peace of mind.

I'm not at all sure there's anything more to be said about that passage, which I think lays it to rest. It does so by insisting on a complex mode of ambiguity that governs the passage--not atmosphere.
Sure, call it "atmosphere" if you like, as long as you're willing to subject it to verbal analysis, as long as you're willing to show how and why the atmosphere is exactly of the nature that it is, and that it arises, in other words-- and here is the relationship between Richards and Empson-- out of a complex state of mind; that poetry, the poetry of this speaker, this speaker/murderer, is attempting desperately to reconcile and harmonize, just as he is attempting desperately to be reconciled and harmonized with the society from which he has alienated himself and, of course, is failing. Macbeth is not Shakespeare. Shakespeare is representing him in poetry, attempting to do something which in the immediate psychological circumstances poetry can't do, but in the process evoking an extraordinary complexity of effort on the part of the mind to be reconciled through the medium of language. As I say, this is the sense in which Empson follows Richards. But at the same time, there's something rather different between the two. First of all, Empson doesn't really kind of settle into a sense that it's all about the reader-- that is to say, that it's all about the reader's experience of the literary. Richards is actually an avatar of figures like Iser, like Hans Robert Jauss and Stanley Fish-- whom we'll be discussing later in the syllabus-- who are interested in reader response: that is to say, in the way in which we can talk about the structure of reader experience. Empson is sort of interested in that, just as he's fascinated by the texture of textual evidence itself. He is also very interested--much more so than Richards, and certainly more so than the New Critics from whom he sharply diverges in advance in this respect-- interested in authorial intention; that is to say, for him, literary criticism is always an appeal to authorial intention. 
Mind you, he ascribes to authorial intention the most amazingly outrageous things that other critics threw up their hands in despair about, but nevertheless it is for him always still an appeal to authorial intention. At bottom, Empson doesn't really settle into the rigorous consideration of the author, the text, or the reader as if they were separate functions. For Empson, there's a kind of a fluid and easy movement back and forth between what for hermeneutics are three very different phenomena: author, text, reader. For Empson, it's a kind of synthetic mélange that's ultimately an appeal to the author, but certainly involves both working on the text itself and also understanding its effects on the reader. So all of this distances Empson from Richards to a certain extent, but the most important difference, I think, between Empson and the other figures we're discussing-- a difference which makes it even a little bit complex to say that he's a precursor of the New Criticism-- is that Empson very rarely concerns himself with the whole of a text. He isn't really interested in the unity of "the poem." He is simply interested in saying as much as he can about certain local effects, certainly with the implication, possibly, that this has a bearing on our understanding of, let's say, the whole of Macbeth; but he doesn't set about doing a systematic reading of the whole of Macbeth. He always zooms in on something, thinks about it for a while and then goes away and thinks about something else, leaving us to decide whether it has a genuine bearing on the entirety or on the literary wholeness or unity of Macbeth. Empson is interested in the complexity of local effects. 
Another thing to say about Empson's perspective, which makes him differ sharply, I think, from Richards and from the later New Critics, is that Empson is perfectly willing to accommodate the idea that maybe-- just as in the case of the psychology of Macbeth the character-- that maybe poetry doesn't reconcile conflicting needs. Maybe, after all, poetry is an expression of the irreducible conflict of our needs. The last chapter of Seven Types of Ambiguity, his seventh ambiguity, is actually, as Empson said, about "some fundamental division in the writer's mind." There, you see, he diverges from his teacher, Richards. He's fascinated by the way in which literature doesn't unify opposites or reconcile needs but leaves things as it found them, but exposed in all of their complexity. Paul de Man more than once invoked Empson as a precursor of deconstruction, not of the New Criticism. For this reason--for the reason that he's not concerned with unity and that he's not concerned with the idea of the reconciliation of opposites-- Empson, I think, can rightly be understood as a precursor of deconstruction, if only because deconstruction follows the New Criticism, of course, in being a mode of close reading; and there has never been a better close reader than Empson. Before turning away from Empson, whose influence was widespread despite this divergence, it needs to be said that his purposes for close reading are actually very different from the purposes of the New Critics-- the American New Critics, particularly Brooks whose preoccupation with unity is something he freely confesses and something that-- well, we've got ten minutes, so I shouldn't rush ahead prematurely-- but something that you can see to be at the heart of what Brooks is doing. 
Here Brooks, in The Well-Wrought Urn, Modern Poetry and the Tradition, and the other books for which he's well known, uses a variety of different words to describe the way in which the complexity of literature is placed in the service of unification. In the essay you're reading here, he uses the word "irony." He admits that maybe he stretches the word "irony," but he tries to argue that the variety of effects that he focuses on in his essay have to do with irony. In another great essay, the first chapter of The Well-Wrought Urn, he talks about paradox. Obviously, these are related ideas, and elsewhere he takes up other ways of evoking the way in which complex feelings and thoughts are brought together. Empson's word, "ambiguity," continues to play an important role in the work of the New Criticism. It is--at least, it puts itself out there as a candidate to be an alternative term that one might use if one got tired of saying "irony" or "paradox." There are a variety of words, in other words. Another word given by the poet and critic Allen Tate, one of the founding figures of the New Criticism, is "tension"-- that is to say, the way in which the literary text resolves oppositions as a tension; that is, a holding in suspension a conflict experienced as tension. So there are these varieties of ways for describing what's going on in a text. It's interesting I think that if one thinks of Tony the Tow Truck one can think of-- when you go home and study it, you'll see what I mean-- there's a complex pattern of imagery, as it were, between pulling and pushing. There's a tremendous amount of pulling and pushing that goes on in Tony the Tow Truck. We'll revert especially to the notion of "pushing" in other contexts later in the course, but for the moment you can see the way in which there is a tension between that which pulls and that which pushes, which is one of the motive forces of the story.
That, I think, is an example also: if it is ironic that Tony is now stuck and instead of pulling needs to be pushed, if it is in some Brooksian sense ironic that that is the case, we can understand that as irony or as tension or ambiguity. Now there's one way in which Tony is probably not a good proof text for the New Criticism. You remember that in "My Credo," the little sort of excerpt that you get at the beginning of the Brooks section in your anthology, Brooks says, "Poetry should be about moral things but it shouldn't point a moral." Obviously Tony the Tow Truck points a moral and so would be subject to a kind of devaluation on those grounds by the New Criticism-- even though there are ways of reading Tony, as I've been suggesting, New Critic-ally. All right. Now the idea of unity for Brooks, and for the New Critics in general, is that it be complex, that it warp the statements of science, and that it bring to bear a tension between the denotation and the connotation of words. The word "yellow" in the second line of Tony the Tow Truck-- its denotation is that it is a certain color, the color that Tony's garage is painted. The connotation, I have suggested, is of the variety of kinds that one might gingerly approach in thinking about complicating the texture of the story. In any case, the tension between denotation and connotation is part of the way in which irony works. So the question again is--and the question it seems to me raised in advance by Empson-- why should these sorts of tension, these movements of complex reconciliation, result in unity? It's very interesting. Brooks's reading of "She Dwelt Among Untrodden Ways," the wonderful Lucy poem by Wordsworth, emphasizes the irony of the poem. 
Brooks feels that he's on very thin ice talking about Wordsworth and irony at all, but at the same time does bring it out rather beautifully, talking about the irony of the poem basically as the way in which you can't really say that Lucy can be a flower and a star simultaneously. She's a flower, she's perishable, she's half hidden, and she's ultimately dead and in the ground-- whereas a star would seem to be something that she just can't be mapped onto if she is this half-hidden thing. But at the same time, Brooks says, "Well, after all she is a star to the speaker," and he's just saying, "She's a star to me; she's a flower half hidden, unnoticed to everyone else." The relationship between the depth of the speaker's feeling and the obscurity of Lucy in the world is the irony that the speaker wants to lay hold of and that reconciles what seem like disparate facts in the poem. Well, now I just want to point out that close reading can always be pushed farther. That's the difficulty about close reading. It's all very well to say, "Look at me, I'm reconciling harmonies, I'm creating patterns, I'm showing the purpose of image clusters and all the rest of it," but if you keep doing it, what you have yoked together becomes unyoked again. It falls apart, or at least it threatens to do so. A contemporary of Brooks's named F.W. Bateson wrote an essay on this same poem, "She Dwelt Among Untrodden Ways," in which he points out-- the poem's on page 802--that the poem is full of oxymorons, contradictions in terms: "untrodden ways." A "way" is a path, but how can there be a path if it's not trodden? What is the meaning of an untrodden way, or of "there are none to praise" her but "very few to love"? Why call attention not so much to the difference between "few love her" and "none praise her" as the notion that none praise her? This is palpably false because here's the poet praising her, right? So what does he mean, "none"? 
Why is he calling attention, in other words, to this logical disparity? "She lived unknown and few could know"--how can she be unknown if few know anything about her? In other words, the poem is full of complexities, but who says they're being reconciled? They're just sitting there oxymoronically, not reconciling themselves at all. So Bateson's argument is that Wordsworth is calling attention to a conflict of emotion or feeling that can't be reconciled, hence the pathos of the ending, "[O]h, / the difference to me," and so on. This, as I say, is a different use of close reading. It's close reading which is not in the service of unity or of unification but recognizes that the very arts whereby we see a thing as a unified whole can just as easily be put to the purpose of blasting it apart again, and of calling our attention to that which can't be reconciled just as the speaker can't be reconciled to the death of Lucy. Now the New Critics can, I think, be criticized for that reason. The aftermath of--the historical close reading aftermath of-- the New Criticism does precisely that, if one sees deconstruction as a response to the New Criticism. It's not just that, as we'll see, it's a great many other things too. The deconstructive response consists essentially in saying, "Look. You can't just arbitrarily tie a ribbon around something and say, 'Ah ha. It's a unity.'" Right? The ribbon comes off. "Things fall apart," as the poet says, and it's not a unity after all. There is another aspect of the way in which the New Criticism has been criticized for the last forty or fifty years which needs to be touched on. The notion of autonomy, the notion of the freedom of the poem from any kind of dependence in the world, is something that is very easy to undermine critically. Think of Brooks's analysis of Randall Jarrell's "Eighth Air Force."
It concludes on the last page of the essay by saying that this is a poem about human nature, about human nature under stress, and whether or not human nature is or is not good; and arguments of this kind, arguments of the kind set forth by the poem, "can make better citizens of us." In other words, the experience of reading poetry is not just an aesthetic experience. It's not just a question of private reconciliation of conflicting needs. It's a social experience, in this view, and the social experience is intrinsically a conservative one. In other words, it insists on the need to balance opinions, to balance viewpoints, and to balance needs, precisely in a way which is, of course, implicitly a kind of social and political centrism. In other words, how can poetry in this view--how can literature be progressive? For that matter, how can it be reactionary? How, in other words, can it be put to political purposes if there is this underlying, implicit centrism in this notion of reconciliation, harmonization, and balance? That has been a frequent source of the criticism of the New Criticism in its afterlife over the last forty or fifty years. There's also the question of religion. There is a kind of implicit Episcopalian perspective that you see in Brooks's essay when he's talking about the Shakespeare poem, in which, under the aspect of eternity, inevitably things here on earth seem ironic. There's always that play of thought throughout the thinking of the New Criticism as well. Naturally, one will think of things in ironic terms if one sees them from the perspective of the divine or of the eternal moment. |
Literature_Lectures | 23_Queer_Theory_and_Gender_Performativity.txt | Prof: Now, I don't think it's ever happened to me before-- although it might have but I can't recall its having happened-- that I found myself lecturing on a person who had lectured yesterday here at Yale, but that's what happened in this case. You read--let's just call it--the facetious article on the lecture in The Daily News this morning. Some of you may actually have been in attendance. I unfortunately could not be, but as it happened I ran into her later in the evening and talked to some of her colleagues about what she'd said, so I do have a certain sense of what went on. In any case, as to what went on, I'm going to be talking today about the slipperiest intellectual phenomenon in her essay having to do with what she calls "psychic excess," the charge or excess from the unconscious which in some measure unsettles even that which can be performed. We perform identity, we perform our subjectivity, we perform gender in all the ways that we'll be discussing in this lecture, but beyond what we can perform there is "sexuality," which I'm going to be turning to in a minute. This has something to do with the authentic realm of the unconscious from which it emerges. What Butler did in her lecture yesterday was to return to the psychoanalytic aspect of the essay that you read for today, emphasizing particularly the work of Lacan's disciple, Jean Laplanche, and developing the ways in which sexuality is something that belongs in a dimension that exceeds and is less accessible than those more coded concepts that we think of as gender or as identity in general. So conveniently enough, for those of you who did attend her lecture yesterday, in many ways she really did return to the issues that concerned her at the period of her career when she wrote Gender Trouble and when she wrote the essay that you've read for today. All right. Now I do want to begin with what ought to be an innocent question. 
Surely we're entitled to an answer to this question, and the question is: what is sexuality? Now of course you may be given pause-- especially if you've got an ear fine-tuned to jargon-- you may be given pause by the very word "sexuality," which is obviously relatively recent in the language. People didn't used to talk about sexuality. They talked about sex, which seems somehow more straightforward, but "sexuality" is a term which is not only pervasive in cultural thought but also has a certain privilege among other ways of describing that aspect of our lives. In other words, there is something authentic, as I've already begun to suggest, about our sexuality, something more authentic about that than the sorts of aspects of ourselves that we can and do perform. That's Butler's argument, and it's an interesting starting point, but it's not yet, or perhaps not at all, an answer to the question, "What is sexuality?" Now for Foucault sexuality is arguably something like desired and experienced bodily pleasure, but the problem in Foucault is that this pleasure is always orchestrated by a set of factors that surround it, a very complicated set of factors which is articulated perhaps best on page 1634 in his text, the lower right-hand column. He's talking about the difference between and the interaction between what he calls the "deployment of alliance" and the "deployment of"-- our word--"sexuality." I want to read this passage and then comment on it briefly: "In a word [and it's of course not in a word; it's in several words], the deployment of alliance is attuned to a homeostasis of the social body..." The deployment of alliance is the way in which, in a given culture, the nuclear reproductive unit is defined, typically as the "family," but the family in itself changes in its nature and its structure. 
The way in which the family is viewed, the sorts of activities that are supposed to take place and not take place in the family-- because Foucault lays a certain amount of stress on incest and the atmospheric threat of incest-- the sorts of things that go on in the family and are surrounded by certain kinds of discourse conveying knowledge-- and we'll come back to the latter part of that sentence-- all have to do with the deployment of alliance. On the other hand, the deployment of sexuality we understand as the way in which whatever this thing is that we're trying to define is talked about-- and therefore not by any state apparatus or actual legal system necessarily-- but nevertheless simply by the prevalence and force of various sorts of knowledge police. Okay. To continue the passage: In a word, the deployment of alliance is attuned to a homeostasis [or a regularization; that's what he means by "homeostasis"] of the social body, which it has the function of maintaining; whence its privileged link with the law [that is to say, the law tells us all sorts of things about the family-- including whether or not there can be gay marriage, just incidentally: I'll come back to that in a minute]; whence too the fact that the important phase for it is "reproduction." The deployment of sexuality has its reason for being, not in reproducing itself, but in proliferating, innovating, annexing, creating, and penetrating bodies in an increasingly detailed way, and in controlling populations in an increasingly comprehensive way. What he's saying is, among other things, that a deployment of sexuality, which isn't necessarily a bad thing-- these deployments aren't meant somehow or another to be terroristic regimes-- a deployment of sexuality, which for example favored forms of sexuality such as birth control or homosexuality, would certainly be a means of controlling reproduction. 
Just in that degree, the deployment of sexuality could be seen as subtly or not so subtly at odds with the deployment of alliance, alliance which is all for the purpose of reproduction or at least takes as its primary sign, as Foucault suggests, the importance, the centrality, to a given culture-- or sociobiological system, if you will-- of reproduction. These are the ways in which the deployment of alliance and the deployment of sexuality converge, don't converge, and conflict with each other. But in all of these ways, we keep seeing this concept of sexuality; but, as I say, it continues to be somewhat elusive what precisely it is. Just to bracket that for the moment, let me make another comment or two on the concepts in the passage that I have just read. Let's say once and for all at the outset that the central idea in Foucault's text, the idea which he continues to develop throughout the three volumes on the history of sexuality-- the central idea is this idea of "power" as something other than that which is enforced through legal, policing or state apparatus means. This is power which is enforced as a circulation or distribution of knowledge, which is discursive in nature, and which enforces its norms for all of us, for better or for worse--because discourse can release and can constitute sites of resistance as well as oppress-- which, for better or worse, circulates among us ideas that are in a certain sense governing ideas about whatever it is that's in question, in this case, obviously, sexuality. Foucault calls this, sometimes hyphenating it, "power-knowledge." This is absolutely the central idea in late Foucault. I introduced it, you remember, last time in talking about Said.
I come back to it now as that which really governs-- and guides you through--the whole text of Foucault: the distinction between power as it's traditionally understood as authoritative-- as sort of top-down, coming from above, imposed on us by law, by the police, by whatever establishment of that kind there might be-- the distinction between power of that kind and power which is simply the way in which knowledge-- and knowledge is not, by the way, necessarily a good word, it's not necessarily knowledge of the truth-- the way in which knowledge circulates and imposes its effects on us, our behavior, the way we are or the way at least that we think we are-- the way in which we "perform," in Butler's term. All of that in Foucault is to be understood as an effect of power-knowledge. Now notice, however, in terms of our question--What is sexuality?--that Foucault is being quite coy. He's talking about sexuality but he's not talking about it in itself, whatever it "in itself" might be. He's talking about the deployment of it, that is to say the way in which power-knowledge constructs it, makes it visible, makes it available to us, and makes it a channel through which desire can get itself expressed, but a channel which is still not necessarily in and of itself that natural thing that we look for and long for and continue to seek: the nature of sexuality. So when the emphasis in Foucault's discussion is really on deployment, that is, the way in which alliance-- the family, whatever the nuclear social structure might be-- or sexuality--whatever it is that gets itself expressed as desire-- the way in which these matters, these aspects of our lives, can be deployed, we still aren't necessarily talking about the thing in itself. Foucault isn't an anthropologist. He's not talking about the family in itself either.
He's talking about the way in which a basic concept of alliance out of which reproduction arises and gets itself channeled can be deployed by, and understood as manipulated by, the circulation of power-knowledge. The issue of gay marriage sits very interestingly, by the way, between the concepts of the deployment of alliance and the deployment of sexuality, because there's a certain sense in which the deployment of sexuality is at odds with the deployment of alliance. If sexuality is something that is really just looking around for ways to get itself expressed, taking advantage of deployment where that's a good thing and trying to resist deployment where that seems more like policing-- if it's just looking around for a way to get expressed, it's not particularly interested in alliance. It's not interested in the way in which relationships involving sexuality could settle into any kind of a coded pattern or system of regularity, so that there is this tension which, of course, gets itself expressed whenever, within the gay community, people strongly support gay marriage and see that as the politicized center of contemporary gay life; or people also in the gay community, many of them theoretically advanced, think of it as a non-issue or a side issue which loses track precisely of what Foucault calls the deployment of sexuality, simply trying to extend the domain, arguably a tyrannical domain, of the deployment of alliance-- in other words, to redefine the basic concept of alliance in such a way that doesn't really touch very closely on the deployment of sexuality. So it's an interesting and rather mixed set of issues that the whole question, the whole sort of profoundly politicized question, of gay marriage gives rise to. So that's what sexuality is in Foucault. In Butler it's just clearer that to ask the question--What is sexuality?-- is--well, it's just been a false start.
We thought it was an innocent question, but you get into Butler and you see very clearly that you simply can't be a certain sexuality. You can perform an identity, as we'll see, by repeating, by imitating, and by parodying in drag. You can perform an identity, but you can't wholly perform sexuality precisely because of this element of psychic excess to which her thinking continues very candidly and openly and honestly to return. Butler's work, in other words, is not just about "the construction of identity." It's not just about the domain of performance, as one might say. It acknowledges that there is something very difficult to grasp and articulate beyond performance. Its main business is to explain the nature and purview and purposes of performance, but it's nevertheless always clear in Butler, as she returns to the question of the unconscious in particular, that there is something in excess of, or not fully to be encompassed by, ideas of performance. So we've made a false start. We've asked a question we can't answer, but at the same time we have learned certain things. We've learned certainly that sexuality, whatever it is, is more flexible and also in some sense more authentic-- that is to say, closest to the actual nature of the drives. Yesterday Butler made a distinction between instinct and drive which I won't go into because it had to do with her reflections on what is cultural and what is biological or not cultural in the life of the unconscious. For our purposes, whatever role sexuality may play in the unconscious, and however authentic--that is to say, however not culturally determined that role may turn out to be-- it's more flexible. That's the important thing, more than any kind of social coding: the sort of coding, for example, that Foucault would indicate in speaking of alliance or deployed sexuality and the sort of coding that Butler refers to repeatedly as "gendering." 
Still, for both of them--and this is the other thing we've learned-- even sexuality through deployment, or through the way in which it can get expressed in relation to gender and performance, is discursive. It's a matter of discourse. It arises out of linguistic formations, formations that Foucault understands as circulated knowledge and that Butler understands, again, as performance. Foucault sees sexuality as the effect of power-knowledge, power as knowledge. Butler sees it as the effect--insofar as it's visible, insofar as it is acted out--sees it as the effect of performance. So now to take the way in which Butler makes this relationship between what one might suppose to be authentic, actual, all about one's self, and that which is performed, that which is one's constructs toward being a self, let's take one of the most provocative sentences in her essay, which is on page 1711 about a third of the way down: "Since I was sixteen, being a lesbian is what I've been." Now what she's doing--remember at the very beginning of the essay she says that her whole purpose is to reflect, is somehow or another to register a politicized intervention in gender studies in terms of a philosophical reflection-- on ontology, on "being." What is it in other words, she says, to be something? Now what she's doing in this sentence, which is an awkward-seeming sentence, "[B]eing a lesbian is what I've been," is pointing out to us that to be something is very different from to be "being" something. For example, I can say I'm busy. (By the way, I am.) I can say I'm busy and I expect you to take it that there's a certain integrity, there's a certain authenticity in the fact that I'm busy. Yes, I'm busy, but suppose you say, suspecting that I'm not really busy, "Oh, he's being busy." In other words, he's performing busy-ness. He's going around being busy, sort of imposing on me the idea that this lazy person is actually accomplishing something. So, the performance of being busy. 
But here's the interesting point that Butler is making: the ontological realm is supposed to be about the simple being or existence of things, and it's always in philosophy contrasted with agency, with the doing of things, with getting something done, with the performance of things. But what Butler is saying--and that's why she says that she takes an interest in the ontological aspect of the question--what she's saying is that there is an element of the performative which actually creeps into the ontological. Even being, she says, is something that in some measure--perhaps not altogether but in some measure--something we perform. Hence the doubling up of the word "being" in the sentence, "Since I was sixteen, being a lesbian is what I've been." In one sense, yeah, I am--that's what I am, but in another sense I've been performing it. I've been being one. I've been outing myself, if you will. I have been taking up a role that can be understood, as all roles can, intelligibly in terms of its performance. So that's why she puts the sentence that way, and if you made a big mark in the margin and said, "Aha, got her! This is where she says she really is something. No more of this stuff about just constructivism, making oneself up as one goes along. This is where she says she really is something," then you're wrong. She's escaped your criticism because she says, "Oh, no, no, no. I have been being a lesbian: I've been being one, which is a different thing, although not altogether a different thing, from being one." She is deliberately, in other words, on the fence between the sense of the ontological as authentic and her own innovative sense of the ontological as belonging within the realm of performance. She doesn't want to get off the fence.
She really doesn't want to come down squarely on either side because for her-- and this is what I like best about her work, even though it's perhaps the most frustrating thing about it-- because for her, what she is talking about is ultimately mysterious. She has a great deal to say about it, but she's not pretending that in what she has to say about it she's exhausted the "subject." That's why it seems to me to be admirable that she stays on the fence about this, and not simply an occasion for our frustration. So with all of this said--and mystification aside, if you will, as well--with all of this said, it seems plain that Foucault and Butler do have a common political agenda. Foucault is a gay writer who was, in the later stages of writing The History of Sexuality, dying of AIDS; Butler is a lesbian writer. Both of them are very much concerned for the political implications of their marginalized gender roles, while at the same time--of course, being theoretically very sophisticated about them. Their common political agenda is to destabilize the hetero-normative by denying the authenticity, or in Butler's parlance "originality," of privileged gender roles. In other words, who says heterosexuality came first? Who says the nuclear family is natural? Who says sexuality can only get itself expressed in certain ways that power-knowledge deploys for it? These are the sorts of questions, the politicized questions, which these discourses raise in common. So it seems to me that they have a very broad agenda in common, and it also seems to me that they are very closely in agreement. I say that just in order to pause briefly on the moment in which they seem not to be. You've probably noticed that one text is referring to another at one point in your reading, and so let's go there: page 1712, the right-hand margin. 
The context for this, of course, is Butler talking about Jesse Helms having deplored male homosexuality in attacking the photography of Robert Mapplethorpe, and by implication, Butler argues, simply erasing female homosexuality because his diatribe pays no attention to it. Butler then complains that there's a certain injustice in that because, in a way, it's even worse, she says, sort of to be declared nonexistent than it is to be declared deviant. At least the male homosexual gets to be declared deviant: we're simply erased. That's the position she's taking here, and then at that point, what she says is: To be prohibited explicitly is to occupy a discursive site from which something like a reverse-discourse can be articulated; to be implicitly proscribed is not even to qualify as an object of prohibition. Here's where she gives us a footnote on Foucault, footnote fifteen (you know we love footnotes): It is this particular ruse of erasure which Foucault for the most part fails to take account of in his analysis of power. Butler's argument is that in Foucauldian terms, there's got to be discourse for there to be identity. Helms's refusal of the category of "lesbian" simply by omission-- and of course, we know, by the way, that this is a refusal only by omission-- Helms's refusal of this category is, in other words, an erasure of discourse. No discourse, no identity. That is, in other words, what Butler is claiming Foucault's position entails. Discourse creates power-knowledge. Power-knowledge creates identity. Therefore, where there's no discourse, there can be no identity, and since Helms has erased the lesbian by refusing discourse about it, it must follow that there is no such thing as a lesbian. That's the implication of this footnote. 
He almost always presumes [and we must do honor to that word "almost"] that power takes place through discourse as its instrument, and that oppression is linked with subjection and subjectivization, that is, that it is installed as the formative principle of the identity of subjects. Now in defense of Foucault, let's go to page 1632, the upper right-hand column, a passage that's fascinating on a number of grounds. It's rather long but I think I will read it, upper right-hand column. Foucault says: Consider for example the history of what was once "the" great sin against nature. The extreme discretion of the texts dealing with sodomy-- that utterly confused category--and the nearly universal reticence in talking about it made possible a twofold operation. Okay. Here's Foucault saying that this is a category. The homosexual identity, as understood in terms of sodomy, is a category. He's going to go on to say that it's punishable in the extreme by law, but in the meantime he's saying there's no discourse. There's a kind of almost universal silence on the subject. You don't get silence in Dante, as I'm sure you know, but in most cases in this period nobody talks about it. It's punishable, severely punishable by law, and yet nobody talks about it. This would seem to violate Foucault's own premise that discourse constitutes identity but also plainly does contradict Butler's claim that Foucault supposes that discourse always constitutes identity. 
Let's continue: … [T]he nearly universal reticence in talking about it made possible a twofold operation: on the one hand, there was an extreme severity (punishment by fire was meted out well into the eighteenth century, without there being any substantial protest expressed before the middle of the century) [Discourse is here failing also in that it's not constituting a site of resistance, and nobody's complaining about these severe punishments just as on the other hand nobody's talking very much about them: there is, in other words, an erasure of discourse], and [he continues] on the other hand, a tolerance that must have been widespread (which one can deduce indirectly from the infrequency of judicial sentences, and which one glimpses more directly through certain statements concerning societies of men that were thought to exist in the army or in the courts)-- In other words, he's saying there was an identity and that identity was not--at least not very much-- constituted by discourse. As you read down the column, he's going to go on to say that in a way, the plight of the homosexual got worse when it started being talked about. Yes, penalties for being homosexual were less severe, but the surveillance of homosexuality-- the way in which it could be sort of dictated to by therapy and by the clergy and by everyone else who might have something to say about it-- became far more pervasive and determinate than it was when there was no discourse about it. In a certain way, Foucault is going so far as to say silence was, while perilous to the few, a good thing for the many; whereas discourse which perhaps relieves the few of extreme fear nevertheless sort of imposes a kind of hegemonic authority on all that remain and constitutes them as something that power-knowledge believes them to be, rather than something that in any sense according to their sexuality they spontaneously are. 
It seems to me that this pointed disagreement with Foucault, raised by Butler, is answered in advance by Foucault and that even there, when you think about it, they're really in agreement with each other. Foucault's position is more flexible than she takes it to be, but that just means that it's similar to her own and, as I say, that fact together with the broad shared political agenda that they have seems to me to suggest that they're writing very much in concert and in keeping with each other's views. Now in method they are somewhat different. Foucault is a more historical writer, although historians often criticize him for not being historical. The reason historians don't think he's historical is that he never really explains how you get from one moment in history to the next. He talks about moments in history, but he talks about them in terms of bodies of knowledge-- "epistemic moments," as he sometimes says. Then these moments somehow mysteriously become other moments and are transformed. The kind of causality that might explain such a thing from an historian's point of view tends in Foucault's arguments to be left out. He nevertheless is concerned, however, with the way in which views of things change over time, and it's the change in those views that his argument in The History of Sexuality tends to concentrate on; so that he can say that starting in the nineteenth century and continuing to the present, there are essentially four cathected beings around which power-knowledge deploys itself. He describes them as the hysterical woman, the masturbating child, the Malthusian couple-- meaning the couple that is enjoined not to reproduce too much because the economy won't stand for it, which is a way of, you see, of deploying alliance in such a way as to manipulate and control reproduction. 
That's a moment, by the way, in which the deployment of alliance and the deployment of sexuality may be in league with each other, because obviously birth control and homosexual practices can also control reproduction. As you see, it's not always a question of conflict between these two forms of deployment. So in any case, there's the Malthusian couple and then the perverse adult, meaning the queer person in whatever form. He says about this--on page 1634 in the left-hand column-- that you get these four types, and he says that therapy, the clergy, family, parental advice, and the various ways in which knowledge of this kind circulates have to do primarily with the preoccupation with, tension about, anxiety about these four types. The hysterical woman is determined to be hysterical once it begins to be thought that her whole being is her sexuality. The masturbating child violates the idea that children are born innocent and must be-- because it suggests something terribly wrong about the cult of the innocent child that begins in the nineteenth century-- it's something that is subject to extreme and severe surveillance. "Who knows what will come of this?" Scientific thinking about masturbation had to do with the notion that it led to impotence, that by the time you got around to being in a relationship, there wouldn't be anything there anymore. Just terrible thoughts--also it stunted your growth and you died sooner--just terrible, terrible thoughts about masturbation existed. All of this dominated the scientific literature until well into the twentieth century. Then the Malthusian couple, which was primarily a phenomenon of what's called "political economy" in the earlier nineteenth century but has prevailed, by the way, in what we suppose to be, and indeed what is, our progressive technology of the promotion of birth control around the world. 
"We must control population" is still the Malthusian principle on which we base the idea that people really need to be enlightened about the possibility of not just having an infinite number of children. Again you see that Foucault is right still to suppose that the notion of the Malthusian couple prevails among us. Then finally the perverse adult, who is first discoursed about in the nineteenth century, as the earlier passage that I read suggested, and is still, of course, widely discoursed about. Of course it now has a voice and discourses in its own right: a literature, a journalism and all the rest of it, and is in other words very much in the mainstream of discourse and still has controversy swirling around it, precisely because of the discursive formations that attach to it. All of this Foucault takes to be in the nature of historical observation. For Butler on the other hand, as you can tell from her style-- I am sure that, as in the case of reading Bhabha, you recognize a lot of Derrida in Butler's style-- in Butler it's a question of taking these same issues and orienting them more in the direction of philosophy. I've already suggested the way in which she understands this particular essay as a contribution to that branch of philosophy called "ontology," the philosophy of being. In general she takes a particular and acute interest in that. Her basic move is something that I hope by this time you've become familiar with and recognize and perhaps even anticipate. For us, perhaps, the inaugural moves of this kind were the various distinctions made by Levi-Strauss. The one that I mentioned in particular-- as accessible and I think immediately explanatory of how the move works-- is "the raw" and "the cooked." I tried to show that intuitively, obviously, the raw precedes the cooked. 
First it's raw, then it's cooked, and yet at the same time if we understand the relationship between the raw and the cooked to be a discursive formation, we have to recognize that there would be no such thing as the raw if there weren't the cooked. If you talk about eating a raw carrot, you have to have had a cooked carrot. You don't just pick up a carrot, which you've never seen before, and say, "This is raw." The only way you know it's raw is to know that it can be and has been cooked. Well, this is the Butler move, the move that she makes again and again and again. What do you mean, the heterosexual precedes the homosexual? What do you mean, the heterosexual is an original and the homosexual is just a copy of it? Who would ever think of the concept of the heterosexual? You're the only person on earth. You stand there and you say, "I'm heterosexual." You don't do that. You just say, "Well, I have sexuality." You could say that. If you had enough jargon at your disposal, you could say that, but you can't say, "I am heterosexual." You can't have the concept heterosexual without having the concept homosexual. They are absolutely mutually dependent, and it has nothing to do with any possible truth of a chicken and egg nature as to which came first. In sexuality, the very strong supposition is for Butler that neither came first. They're always already there together in that psychic excess with which we identify sexuality, but in social terms the idea that what's natural is the heterosexual and what's unnatural, secondary, derivative, and imitative of the heterosexual is the homosexual is belied simply by the fact that you can't have one conceptually without the other. It's the same thing with gender and drag. Drag comes along and parodies, mimics, and imitates gender, but what it points out is that gender is always in and of itself precisely performance.
This could, of course, take the form of a critique, I suppose, but we're all quite virtuoso when it comes to performing. Here I am. I'm standing in front of you performing professionalism. I'm performing whiteness. I'm performing masculinity. I'm doing all of those things. I'm quite a virtuoso: what a performance! Perhaps it's kind of hard to imagine my standing here sort of exclusively performing masculinity as opposed to all the other things that I am performing, but okay, I'm certainly doing that too. I'm insecure about all of these things, Butler argues, because I keep performing them. In other words, I keep repeating what I suppose myself to be. I'm not comfortable in my skin, presumably, and I don't just relax into what I suppose myself to be. I perform it. It is, in other words, a perpetual self-construction which does two things at once. It stabilizes my identity, which is its intention, but at the same time it betrays my anxiety about my identity in that I must perpetually repeat it to keep it going. All of this is going on in this notion of performance, so what drag does is precisely bring all this to our attention. It shows us once and for all that that's what's at stake in the seemingly natural categories of gender that we imagine ourselves to inhabit like a set of comfortable old clothes. Drag, which is not at all comfortable old clothes, reminds us how awkward the apparel of ourselves that we can call our identity actually is, and so it plays that role. The relationship between identity and performance is just the same. This notion of performing identity should recall for you "signifyin'" in the thinking of Henry Louis Gates. It should recall for you, in other words, the way in which the identity of another is appropriated through parody, through derision, through self-distancing, and through a sense of the way in which one is something precisely insofar as one is not simply inhabiting the subject position of another.
It should also recall for you the "sly civility" of the subaltern in Homi Bhabha's thinking: the way in which double consciousness is partly in the subject position of another, partly in one's own in such a way that one liberates oneself from the sense that it's the other person who is authentic and that one is oneself somehow derivative, subordinate, and dependent. All of these relations ought to gel in your minds as belonging very much to the same sphere of thought. The way in which you can't have the raw without the cooked is the way in which, generally speaking, categories of self and other and of identity per se simply can't be thought in stable terms in and for themselves, but only relationally. Now "why is this literary theory?" you ask yourself, or you have been asking yourself. Of course, Butler gives the greatest example at the end of her essay when she says, "Suppose Aretha is singing to me." "You make me feel," not a natural woman, because there's no such thing as natural. "You make me feel like a natural woman," "you" presumably being some hetero-normative other who shows me what it is really to be a woman. Suppose, however, "Aretha is singing to me," or suppose she is singing to a drag queen. That is reading. That's reading a song text in a way that is, precisely, literary theory. Now obviously I'm thinking of Virginia Woolf's Mr. Ramsay in writing this sentence [gestures to sentence on chalkboard: "The philosopher in a dark mood paced on his oriental rug."]. It's a terrible sentence for which I apologize. Virginia Woolf never would have written it; but just to pass in review the way in which what we've been doing is literary theory: the Marxist critic would, of course, focus on "his" because the nexus for the Marxist critic in this sentence would be possession-- that is to say, the deployment of capital such that a strategy of possession can be enacted. 
The African American critic would call attention to white color-coded metaphors, insisting, in other words, that one of the ways in which literature needs to be read is through a demystification of processes of metaphorization whereby white is bright and sunlit and central, and black, as Toni Morrison suggests in her essay, is an absence, is a negation, and is a negativity. This is bad, a dark mood. For the postcolonialist critic, obviously the problem is an expropriated but also undifferentiated commodity. By "Oriental" you don't mean Oriental. You mean Kazakh or Bukhara or Kilim. In other words, the very lack of specificity in the concept suggests the reified or objectified other in the imagination or consciousness of the discourse. Finally, for gender theory the masculine anger of the philosopher, Mr. Ramsay--you remember he is so frustrated because he can't get past r; he wants to get to s, but he can't get past r-- the masculinized anger of the philosopher masks the effeteness of the aestheticism of somebody who has an Oriental rug. That in turn might mask the effete professorial type, that might mask an altogether too hetero-normative sexual predation and on and on and on dialectically if you read this sentence as an aspect or element of gender theory. Okay. I will certainly end there, and next time we'll take up the way in which what we've been talking about for a few lectures, the construction of identity and of things, which has obviously been one of the common features of this course, is theorized at an even more abstract level, with certain conclusions. |
Literature_Lectures | An_Introduction_to_Anton_Chekhov.txt | Russians refer to Tolstoy and Dostoevsky as great writers, great geniuses, but to Chekhov as a friend. He actually engages with all those big questions of life and death, but he does it with such lightness. Chekhov always denied that he had any kind of agenda; he was interested in human beings and how they interacted, and he did represent people actually as they really are. It's as if he's examining the sort of eternal questions of love and of death and of life in the space of a raindrop. He lived in Taganrog in southern Russia in his childhood, graduated as a doctor from Moscow University, and started writing funny stories to try and help pay the bills. He confounded his critics from the beginning because his stories were so unconventional. He also started writing plays in the late 1880s; he had a kind of success with Ivanov, which was his first properly staged play. Chekhov lived a fairly short life, 44 years, but he packed an awful lot into it. He spent the last four and a half, five years of his life based in Yalta, and that's where he wrote the last great plays, Three Sisters and The Cherry Orchard, and then a couple of his greatest short stories. What Chekhov puts his finger on so brilliantly in all his last plays is the slow decline of this group of people who are being ousted from their houses and their homes and their comfortable positions. There's a whole culture of the Russian country estate, which is the setting for the great plays. Its golden age was perhaps in the 1820s and 1830s. There was something beautiful about them, because in the middle of the Russian steppe there would be this classical mansion, a little sort of pool of culture in the middle of nothingness. The Russian landscape was incredibly important to Chekhov, as it is to so many Russians; it's something that one really wants to understand, and I think Chekhov sees a love of the landscape, over and over again, as being a
redemptive thing. Astrov finds great beauty in the countryside, but the others apparently don't, you know; it's just lost on them. Endlessly they're saying, "Moscow, Moscow, we must go to Moscow." Well, I think everybody feels that, but if you're, as the three sisters were, way out in some small town with an army garrison, then in your mind you idealize what Moscow must be like. These are not members, as it were, of the aristocracy; they are people who belong to the lower ranks of the gentry. They are not extremely wealthy, and they're certainly not powerful or influential in the world. That's what his plays are mostly about: about people feeling that real life is just out of reach, and the realization of their own lives is just out of reach. It was a way of life that was destined to die, because it was built on sand. I mean, the people who enabled it to function were the serfs, and when the serfs were emancipated in 1861, obviously people lost their livelihoods, and these beautiful country estates started to fall into decay, and people became impoverished and abandoned them. Well, I don't think it's a mistake that he called his plays comedies. The vision was essentially a wry and amused view of human weakness; I think the surface of the plays is actually rather bright and animated and energized. Audiences can't have known what to make of it, and in fact the reaction to some of the early plays was quite resentful, particularly at first to The Seagull, because it was as if a trick was being played on the audience: they didn't know what was to be taken seriously and what was funny, and he redefined all those terms. Stanislavski and his partner Nemirovich-Danchenko decided to found a new theatre, and Chekhov turned out to be the ideal author for them. What Stanislavski enjoyed in Chekhov's writing was the fact that, as he once put it, the main meaning of the text is not by any means carried just by the words; it's very often carried by what happens between the words. It
worked wonderfully for the first productions, but for Chekhov eventually it got a bit much, because the verisimilitude actually killed the great magic of the plays. Chekhov was actually moving increasingly towards the abstract; it seems that he was going into something more expressionistic, and you can already see that in The Cherry Orchard. If you look closely at the play's facts and figures, it's a vast, vast area, impossibly big, bigger than any cherry orchard perhaps can be imagined to be. He had this vision of a sea of white, he says: cherry trees in bloom, and the branch of a cherry tree coming through the window into the house, and girls in white dresses. It's almost a kind of hallucination he has of this white, white landscape. Then Ranevskaya for a moment sees her dead mother moving around in the orchard, just for a second; Trofimov sees the souls of Russia's suffering peasants in the trees. These are all things which, you know, suggest something more than the real. |
Literature_Lectures | 11_Deconstruction_II.txt | Prof: I'd like to start with a little more discussion of Derrida before we turn to de Man. I know already that I'm going to forego what for me is a kind of pleasure--perhaps it wouldn't be for you--which is an explication of the last extraordinary sentence in Derrida's essay on page 926 in the right-hand column. I'm going to read it to you just so you can reflect on it. What I'd like to do is suggest to you that if you still haven't determined on a paper topic, you might very well consider this one. You may not find it congenial; but supposing that you are intrigued by Derrida to account for this last sentence, to show how it picks up motifs generated throughout the essay, how it returns the essay to its beginning, and to consider very carefully its metaphors--it reflects on its own metaphors--I think you might find intriguing. The passage is: Here there is a sort of question, call it historical, of which we are only glimpsing today, the conception, the formation, the gestation, the labor. I employ these words, I admit, with a glance toward the business of childbearing--but also with a glance toward those who, in a company from which I do not exclude myself, turn their eyes away in the face of the as yet unnamable, which is proclaiming itself and which can do so, as is necessary whenever a birth is in the offing, only under the species of the non-species in the formless, mute, infant, and terrifying form of monstrosity. Well, there is a sentence for you and, as I say, I don't have time to explicate it but I commend it to you as a possible paper topic if you're still in need of one. Now I do want to go back to the relationship between Derrida and Levi-Strauss.
I suggested last time that while in some ways the essay really seems to stage itself as a critique of Levi-Strauss, to a remarkable degree, confessed or unconfessed, it stands on the shoulders of Levi-Strauss; at the same time, however, having made use of Levi-Strauss, Derrida finds a means of distancing himself from the source text. Take, for example, page 924 over onto 925 when he quotes from Levi-Strauss' introduction to the work of Marcel Mauss on the subject of the birth, event, or emergence of language. What he quotes from Levi-Strauss would seem, on the face of it, to have exactly the same kinds of reservation and hesitation about the emergence or birth of language that Derrida himself has. Levi-Strauss writes: Whatever may have been the moment and the circumstances of its appearance in the scale of animal life, language could only have been born in one fell swoop. Things could not have set about signifying progressively. Following a transformation the study of which is not the concern of the social sciences but rather of biology and psychology, a crossing over came about from a stage where nothing had a meaning to another where everything possessed it. In other words, bam! All of a sudden you had language. You had a semiotic system, whereas before, yesterday, or a minute ago you had no language at all. In other words, there's no notion that somehow or another suddenly I looked at something and said, "Oh, that has a meaning," and then somehow or another I looked at something else and said, "Oh, that has a meaning," and in the long run, lo and behold, I had language--because the bringing into existence of the very thought of meaning, Levi-Strauss wants to argue, instantly confers meaning on everything. In other words, you don't have a gradual emergence of language. You have, like lava emerging from a volcano, a rupture.
You have something which suddenly appears amid other things: something which is latent in those things, although they don't in themselves have it until you confer it on them, namely that which confers meaning--language. So this is Levi-Strauss' argument, and Derrida is interested in it because he recognizes its affinity with his own hesitation in talking about events, births, emergence and so on. At the same time, he points out by way of criticism that to suppose that yesterday there was no language, there were just things as they are without meaning, and that today there is language--that things have meaning as a result of there now being in place that semiotic system we call language-- he points out that this means that culture somehow or another must come after nature. There was nature; now there is culture, which is very much like an event or birth in the older sense. In fact, as soon as we have culture-- Levi-Strauss expresses this feeling especially in a famous book called Tristes Tropiques-- as soon as we have culture, we begin to feel overwhelming nostalgia for nature; but, says Derrida, "What is this nostalgia other than the fact that the very thing we're nostalgic for comes into existence as a result of the nostalgia?" In other words, there is no nature unless you have culture to think it. Nature is a meaningless concept just like the lack of meaning within nature, where there's no culture until culture comes along and says, "Oh, not so much there is nature, but I'm terribly unhappy because before I came along, there was nature." Right? This is the nostalgia or regret of the ethnographer who says, "Now as a result of this terrible Eurocentrism, as a result of the terrible ethnocentrism of the Europeans studying these things, we no longer have a savage mind." That is to say, we no longer have the kind of mind which flourishes in nature, in a natural environment.
You can see ramifications of arguments of this sort for environmentalism as well as for ethnography. It's a fascinating argument, but the bottom line is this. Even this critique--and it is a critique of Levi-Strauss, because he's saying, "Oh, Levi-Strauss, that's very interesting what you say about language, but you've forgotten that this means that you yourself must think nature preceded culture even though culture brings nature into being"--even this very critique leveled against Levi-Strauss, Derrida could have found in Levi-Strauss, and does find on other occasions. Levi-Strauss' famous book, The Raw and the Cooked, essentially stages this critique in and of itself. What do you mean, "raw"? "Well, somebody's sitting in a field eating a carrot. That's raw," you say, but wait a minute: what is this notion of "raw"? You can't have a notion of "raw" until you have the notion of "cooked." I sit in my field. I'm eating my carrot. I hold it up and I say, "This is raw? It's ridiculous. 'Raw' as opposed to what?" Right? So there can be no "raw" without, in a certain sense, the prior existence of "cooked." "Cooked" brings "raw" into being in exactly the way culture brings nature into being. Now to pause over this for a moment, we realize that this basic move--a move that, when you start to think about it, we've been encountering ever since we started the readings in this course--is not so much the inversion of binaries as the calling into question of how they can exist apart from each other. In other words, the question of criticizing the origin of one state of things out of or after another state of things, the process of criticizing that is basically--and I'm sorry to be so reductive about it, but I really can't see any distortion in saying this--saying, "Which came first, the chicken or the egg?" Right?
It is a declaration of absolute interdependency among the things that we understand in binary terms but that we take somehow one to be causative of the other when we think about them. This is really the basic move of deconstruction, but it's a move which anyone who studies philosophy as well as literary theory will encounter again and again and again, all the way from Hegel right on through the post-deconstructive thinkers we encounter for the rest of our syllabus-- perhaps preeminently among them the gender theorist Judith Butler. Again and again and again you will encounter this idea in Butler. It's a question of saying, "How on earth would you ever have the concept 'heterosexual' if you didn't have the concept 'homosexual' in place?" Right? The absolute interdependency of these concepts is, again, central to her argument and to her understanding of things. Obviously, we'll be returning to that in the long run. Now I want to pause a little bit more, then, in this regard over Derrida's distinction between writing and speech--writing, ecriture. This is a distinction which is not meant sort of counter-intuitively to suggest that somehow or another, as opposed to what we usually think, writing precedes speech--not at all. He's not saying that we've got it backwards. He's just insisting that we cannot understand writing to be derivative. We cannot say writing came into being belatedly with respect to speech in order to reproduce, imitate, or transcribe speech. Writing and speech are interdependent and interrelated phenomena which do different things. Last time we spoke about différance. We said that the difference between deference with an e and différance with an a can't be voiced. It's a difference, or différance, that comes into being precisely in writing, and it's only in writing that we suddenly grasp the twofold nature of différance as difference and deferral. 
I'd like to pause a little bit--this will be my segue to de Man--over an interesting example in French which we don't have in English but is, I think, so instructive that it's worth pausing over [writes on chalkboard "est/et"]. You remember from last time that there is a slight voicing difference here, just as there is a slight voicing difference between deference and différance, but it's not a big one. It's not something that's easy to evoke and get across, whereas in writing it's perfectly obvious. For one thing, the s in est, the word for is, is dropped out when you say it, est [pron. ay]--which is also the pronunciation of et, the word for and. Now these two words precisely express in French what Derrida is trying to describe as the double meaning of supplementarity. Is in the sense of the metaphor--"This is that, A is B," understood as a metaphor--is a supplement that completes a whole. It's a means of completing a whole through the declaration that A is B. But is has another sense which is not a rhetorical sense, because metaphor is sort of the heart of rhetoric, the rhetorical sense A is B--when, by the way, we know perfectly well that A is not B. How can A be B? A is only A. In fact, it's even a question whether A is A, but it's certainly not B, right? This much we know. In the grammatical sense there is no sort of mystification about the metaphor. In the grammatical sense, this word is the means or principle of predication whereby we say one thing is another thing: the mare is the female of the horse. Notice that the relationship between the rhetorical is and the grammatical is is basically the relationship between what Jakobson calls the "poetic function" and the "metalingual function."
As you'll see in de Man, there is an irreducible tension between the rhetorical sense of this word, which claims metaphoricity, and the grammatical sense of this word, which makes no such claim but is simply the establishment of predication in a sentence. Now the word et, which sounds almost exactly like est, reinforces the idea of the supplement, not as the completion of something that needs it to be complete--the fulfillment of meaning in a metaphor--but rather "supplement" in the sense of adding on to something that's already complete. The appositional, sort of grammatical, perpetual addition of meaning in the expression and (or et) is after all very much like what Jakobson calls "metonymic": that is to say, the contiguous adding on of things, making no claim to be metaphorical, just like grammatical predication. So the tension or the system of differences that can be established simply by looking at these two similarly voiced words, I think, gives us a kind of emblem or paradigm for what Derrida calls "supplementarity" and what de Man calls the irreducible tension between, difference between, and conflict between rhetoric and grammar. That is the main topic of what we have to say about de Man today. Now last time I said a little bit about the presence of Derrida and de Man together, together with a scholar named J. Hillis Miller, and scholars who associated themselves with them--Geoffrey Hartman and Harold Bloom--in a kind of period of flourishing in the seventies and early eighties at Yale, known abroad as "the Yale school," subject to much admiration in the academy and much vilification both within and outside the academy. But this was a moment of particular and headlined notoriety in the history of academic thinking about literature, and a moment in which academic thinking about literature had a peculiar influence on topics much broader than literature. It began to infiltrate other disciplines and was in general a high-spirited horse for that certain period of time.
Then Miller eventually in the eighties went to Irvine, Derrida followed him there, and in 1983 Paul de Man died, and the main force of the movement began to give way to other interests and other tendencies and trends both here at Yale and elsewhere. Then shortly after de Man's death, there was a revelation--which is mentioned by your editor in the italicized preface to "Semiology and Rhetoric"--about de Man which was horrible in itself and made it impossible ever to read de Man in quite the same way again, but which was also, I have to say, precisely what the enemies of deconstruction were waiting for. That was the fact that in his youth, de Man, still living in Belgium, the nephew of a distinguished socialist politician in Belgium, wrote for a Nazi-sponsored Belgian newspaper a series of articles anti-Semitic in tendency, a couple of them openly anti-Semitic or at least sort of racially Eurocentric, in ways that argued for the exclusion of Jews from the intellectual life of Europe and so on. These papers were gathered and published as Paul de Man's wartime journalism, and there was a tremendous furor about them, similar to the furor over the revelations--which had never been completely repressed but grew in magnitude as more and more was known about them--concerning Heidegger's association with the Nazi government. In the late eighties, there was a furious public argumentation back and forth among those who had read de Man, those who hadn't but were opposed to his work, and those who scrambled in one way or another to attempt to defend it, to preserve his legacy and also the legacy of deconstruction. Now all of this is a matter of record and I suppose needs to be paused over a little bit.
One of the texts of de Man--also in the book called Allegories of Reading, where you'll also find a version of the essay "Semiology and Rhetoric" that you read for today--one of the essays that those who had actually read de Man argued about in a persistent fashion is called "The Purloined Ribbon." It has to do with the passage in Rousseau's Confessions where Rousseau has stolen a ribbon in order to give it to a serving maid to whom he felt attraction, and then when he was asked who had done it, or did he know anything about who had done it, he blurted out her name, Marion. De Man says this really wasn't an accusation--in fact, this was just a meaningless word blurted out--that there is no possibility really of confession, that there is no real subjectivity that can affirm or deny guilt or responsibility: in other words, a lot of things that, needless to say, attracted the attention of a public that wasn't perhaps so much concerned that he had written these articles but that he had never for the rest of his career admitted having done so; in other words, that he had suppressed a past. Nobody really believed he still had these sympathies, but the whole question was, why didn't he fess up? Why didn't he come clean? Of course, they took "The Purloined Ribbon" to be his sort of allegorical way of suggesting that he couldn't possibly confess because nobody can confess, there's no human subjectivity, etc., etc., etc. So, as I say, there was a considerable controversy swirling around this article, and just as is the case with Heidegger, it has been very difficult to read de Man in the same way again as a result of what we now know. Let me just say though also that--and I think this was largely confessed by the people engaged in the controversy, although some people did go farther--there is no cryptically encoded rightism either in de Man or in deconstruction.
There are two possible ways of reacting to what deconstruction calls "undecidability"--that is to say, the impossibility of our really being able to form a grounded opinion about anything--one positive and one negative. The negative way is to say that undecidability opens a void in the intellect and in consciousness into which fanaticism and tyranny can rush. In other words, if there is a sort of considered and skillfully argued resistance to opinion--call that "deconstruction"--then in the absence of decently grounded, decently argued opinion, you get this void into which fanaticism and tyranny can rush. That's the negative response to undecidability, and it's, of course, a view that many of us may entertain. The positive reaction, however, to undecidability is this: undecidability is a perpetually vigilant scrutiny of all opinion as such, precisely in order to withstand and to resist those most egregious and incorrigible opinions of all: the opinions of fanaticism and tyranny. In other words, you can take two views, in effect, of skepticism: on the one hand, that it is, in its insistence on a lack of foundation for opinion, a kind of passive acquiescence in whatever rises up in its face; and on the other hand, you can argue that without skepticism, everybody is vulnerable to excessive commitment to opinion, which is precisely the thing that skepticism is supposed to resist. Now this isn't the first time in this course that I've paused over a moment at a crossroads where you can't possibly take both paths but where it is obviously very, very difficult to make up one's mind. More than one may care to admit, it may ultimately be a matter of temperament which path one chooses to take. All right. Now in any case, while we're on the subject of deconstruction in general and before we get into de Man, let me just say that there is one other way, if I may, not to criticize deconstruction.
It's always supposed popularly that deconstruction denies the existence of any reality outside a text. Derrida famously, notoriously, said "there is nothing outside the text," right? What he meant by that, of course, is that there's nothing but text. That is to say, the entire tissue, structure, and nature of our lives--including history, which we know textually--is all there is: our lives are textual lives. That's what he meant. He didn't mean to say the text is here, the text contains everything that matters, and nothing else exists anyway. What he meant to say is that there is "nothing but text" in the sense that absolutely everything we ordinarily take to be just our kind of spontaneously lived existence is, in fact, mediated--in the ways we've already discussed at length in this course, and will discuss more--by our knowledge, and that our knowledge is textual, right? That's what he meant but, as I say, it's widely misunderstood, and de Man in the fourth passage on your sheet returns to the attack against this popular supposition and says: In genuine semiology as well as in other linguistically oriented theories, the referential [and notice the citation of Jakobson here] function of language is not being denied. Far from it. [In other words, it's not a question of the idealist who was refuted by Dr. Johnson, who kicked a stone and leaped away in terrible pain saying, "I refute it thus." Nobody denies the existence of the stone, right? That is not at all the case. Reality is there, reality is what it is, and the referential function is perpetually in play in language, trying to hook on to that reality.] What is in question is its authority for natural or phenomenal cognition. [That is to say, can we know what things are--not that things are but what things are--using the instrument of language? De Man goes on to say very challengingly:] What we call ideology is precisely the confusion of linguistic with natural reality, of reference with phenomenalism.
[In other words, ideology is nothing other than the belief that language, my language--what I say and what I think in language--speaks true.] That's the position taken up, not at all the same thing as saying what's out there doesn't exist--nothing to do with that. All right. Now de Man's early career--I'm not speaking of the very early career in which he wrote these articles, but the early career involving the essays which were collected in his first book, Blindness and Insight--is mainly influenced by French intellectualism, in particular Jean-Paul Sartre's Being and Nothingness, and the argument of Blindness and Insight is largely to be understood not so much in terms of de Man's later preoccupations with linguistics as with the negotiation of Sartre and existentialism into a kind of literary theory. The texts, in particular the text called "Criticism and Crisis"--the first one that I quote on your sheet--can best be read in those terms; but soon enough, de Man did accept and embrace the influence of Saussure in linguistics and structuralism, and his vocabulary henceforth took these forms. The vocabulary that we have to wrestle with for today's essay is taken in part from Jakobson's understanding of the relationship between metaphor and metonymy, and we will have more to say about that. But in the meantime it's probably appropriate on this occasion, once we accept them both as having come under the influence of the same form of linguistic thinking, to say a little bit about the similarities and differences that exist between Derrida and de Man. As for the similarities: they both take for granted that it is very difficult to think about beginnings, but at the same time, one has to have some way, some proto-structuralist way, of understanding that before a certain moment--that is to say, before a certain synchronic cross-section--things were different from the way they were in some successive moment.
So in the second passage on your sheet, to which I'll return in the end, we find de Man saying, "Literary theory can be said to come into being when"--that is de Man's version of the event, and he agrees with Derrida in saying, "Well, sure God came into being; man came into being; consciousness came into being. That's all very well, but they're just head signifiers in metaphysics. There's something different about language." Right? What both Derrida and de Man say about the difference between thinking of language coming into being and thinking about all those other things coming into being is that language does not purport to stand outside of itself. It cannot stand outside of itself. It cannot constitute itself. It is perpetually caught up in its own systematic nature so that it's a center. We have to resist excessive commitment to this idea of it being a center, but it is at least not a center which somehow stands outside of itself and is a center only in the sense that it is some remote, hidden, impersonal, distant cause. Language is caught up in itself in a way that all of these other moments were not. Then also, I think that you can see the similarity to Derrida in de Man's way of insisting on these binary relations as interdependent and mutual, comparable to the sort of thing that I've been talking about in Derrida. Take pages 891 and 892, for example, the very bottom of 891 over to 892. De Man says: It is easy enough to see that this apparent glorification of the critic-philosopher in the name of truth is in fact a glorification of the poet as the primary source of this truth... Now he does not mean, as Freud, for example, meant in saying, "The poets came before me and the poets knew everything I knew before I knew it." He does not mean that at all. What he means is what he says in the following clauses.
[I]f truth is the recognition of the systematic character of a certain kind of error, then it would be fully dependent on the prior existence of this error. In other words, truth arises out of error. Error is not a deviance from truth. Right? Error is not a poetic elaboration on things which somehow, as it does in Plato's view, undermines the integrity of that truth identified by philosophers. On the contrary, philosophy properly understood is what comes into being when one has achieved full recognition of a preexisting error. That is the way in which de Man wants to think about the relationship precisely between literature and other forms of speech. In saying that, I want to move immediately to the differences with Derrida. Derrida, as I said, believes in a kind of seamless web of discourse or discursivity. We are awash in discourse. Yes, we can provisionally or heuristically speak of one form of discourse as opposed to another--literature, law, theology, science and so on--but the idea that any of these has real independent integrity is easily undermined and demystified. De Man does not believe this. De Man thinks, on the contrary, that there is such a thing as literariness. He follows Jakobson much more consistently in this regard than Derrida does. Again and again he says that the important thing is to insist on the difference between literature and other forms of discourse. There are all kinds of passages I could adduce in support of this. Let me just quickly read a few, page 883, about two thirds of the way down the left-hand column, where he sounds very much like a Russian formalist talking about what literature, in particular, has exclusively that other forms of discourse don't have. He says: … [L]iterature cannot merely be received as a definite unit of referential meaning that can be decoded without leaving a residue.
The code is unusually conspicuous, complex, and enigmatic; it attracts an inordinate amount of attention to itself, and this attention has to acquire the rigor of a method. The structural moment of concentration on the code for its own sake cannot be avoided, and literature necessarily breeds its own formalism. In the interest of time, I'm going to skip over a few other passages that I was going to read to you in reinforcement of this insistence, on de Man's part, that literature differs from other forms of discourse, the remaining question being: literature differs from other forms of discourse how? Well, it is the disclosure of an error that other forms of discourse, supposing themselves to refer to things, remain unaware of. Literature knows itself to be fictive. Ultimately, we reach the conclusion that if we're to think of literature, we're to think of something that is made up: not something that is based on something but something that is made up. In the first passage, the statement about language by criticism, that sign and meaning can never coincide, is what is precisely taken for granted in the kind of language we call "literary." Literature, unlike everyday language, begins on the far side of this knowledge. It is the only form of knowledge free from the fallacy of unmediated expression--in other words, free from the fallacy that when I say "It is raining," I mean I'm a meteorologist and I mean it is raining. Literature, when it says "It is raining," is not looking out of the window, right? This is after all perfectly true. The author may have been looking out of the window, but literature, as we encounter it and as a text, is not looking out of the window. How can a text look out of the window?
When literature says "It is raining," it's got something else, as one might say, in view: All of us [de Man continues] know this although we know it in the misleading way of a wishful assertion of the opposite, yet the truth emerges in the foreknowledge we possess of the true nature of literature when we refer to it as fiction. This is why in the last passage on your sheet, from the interview with Stefano Rosso, de Man is willing to venture on a categorical distinction between his own work and that of his very close friend, Jacques Derrida. He says: I have a tendency to put upon texts [and he means literary texts] an inherent authority which is stronger, I think, than Derrida is willing to put on them. In a complicated way, I would hold to the statement that the text deconstructs itself [In other words, literature is the perpetual denial of its referentiality], is self-deconstructive rather than being deconstructed by a philosophical intervention [that which Jacques Derrida does--that is to say, Jacques Derrida bringing his sort of delicate sledgehammer down on every conceivable form of utterance from the outside--right--rather than being deconstructed by a philosophical intervention from outside the text]. So those are some remarks then on the differences and the similarities between de Man and Derrida. Now "Semiology and Rhetoric" historically comes near the end of the period that "Structure, Sign, and Play" inaugurates. That is to say, it is published in Allegories of Reading, which appeared in 1979, though it was published originally as an article in 1973; but this is also near the end of a period of flourishing that Derrida's essay inaugurates, and other things have begun to become crucial. Even before the death of de Man and the revelations about his past, there were a lot of people sort of shaking their fists and saying, "What about history? What about reality?"
I've already suggested that in a variety of ways this is a response that can be naïve, but it is still very much in the air. In this atmosphere of response, de Man says--at the top of page 883, the left-hand column: We speak as if, with the problems of literary form resolved once and forever and with techniques of structural analysis refined to near-perfection, we could now move "beyond formalism" toward the questions that really interest us and reap, at last, the fruits of the aesthetic concentration on techniques that prepared us for this decisive step. Obviously, I think by this time you can realize what he's saying is if we make this move, if we move beyond formalism, we have forgotten the cardinal rule of the Russian formalists: namely, that there's no distinction between form and content--in other words, that we in effect can't move beyond formalism and that it is simply a procedurally mistaken notion that we can. That's the position, of course, pursued in this essay. The task of the essay is to deny the complementarity--the mutual reinforcement even in rigorous rhetorical analysis like that of Gerard Genette, Todorov, Barthes and others, all of whom he says have regressed from the rigor of Jakobson--to deny that in rhetorical analysis rhetorical and grammatical aspects of discourse can be considered collusive, continuous, or cooperative with each other. Now I've already suggested the problems that arise when you consider this term even in and of itself. I'm actually ripping off, by the way, an essay of Jacques Derrida's called--hm--anyway, it's that essay, and now you'll never know my source. In any case, Derrida, too, in that essay is at pains to argue that you can't reduce grammar to rhetoric or rhetoric to grammar. So as we think about these things, as I suggest, we've already introduced what de Man drives home to us. He says, "Boy, this is complicated theory.
I'm in over my head, so I better just get practical and give you some examples of what I mean." So he takes up "All in the Family" and talks about the moment in which Archie becomes exasperated when Edith begins to tell him the difference between bowling shoes laced over and bowling shoes laced under--this in response to Archie's question, "What's the difference?" In other words, Archie has asked a rhetorical question. "I don't care what the difference is" is the meaning of the rhetorical question. Edith, a reader of sublime simplicity, as de Man says, misinterprets the rhetorical question as a grammatical question: "What is the difference? I'm curious to know." Then she proceeds to explain that there's lacing over, on the one hand, and lacing under, on the other hand. Archie, of course, can't stand this because for him it's perfectly clear that a rhetorical question is a rhetorical question. De Man's point is that a question is both rhetorical and grammatical, and the one cannot be reduced to the other. Both readings are available. He complicates, without changing the argument, by then referring to Yeats' poem "Among School Children," which culminates, you remember--it has a whole series of metaphors attempting, or seeming at least to attempt, the synthesis of opposites--concluding: "How can we tell the dancer from the dance?" Another question, right? Now the rhetorical question completes the usual reading of the poem. The answer to the rhetorical question is that we can't tell the difference between the dancer and the dance.
They are unified in a synthetic, symbolizing, symbolic moment that constitutes the work of art, and all the preceding metaphors lead up to this triumphant sense of unity, of symbolic unity, as the essence of the work of art--a unity which, by the way, entails among other things the unity of author and text: the unity of agent and production, the unity of all of those things which, as we've seen, much literary theory is interested in collapsing. How can we tell the dancer from the dance? Well, de Man says, "Wait a minute though. This is also a grammatical question." If you stop and think of it as a grammatical question, you say to yourself, "Gee, that's a very good question, isn't it, because, of course, the easiest thing in the world is to tell the dancer from the dance. I am the dancer and this is the dance I am doing, and obviously they're not the same thing," right? What nonsense poetry speaks. It's perfectly ridiculous. There is also a grammatical sense which won't go away just because your rigorous, sort of symbolic interpretation insists that it should go away, right? Then de Man, who happens to be a Yeats scholar--he wrote a dissertation on Yeats and really knows his Yeats--starts adducing examples from all over the canon of Yeats to the effect that Yeats is perfectly knowing and self-conscious about these grammatical differences, and that there is a measure of irony in the poem that saves it from this sort of symbolizing mystification. He makes a perfectly plausible argument to the effect that the question is grammatical rather than rhetorical. He's not claiming--and he points this out to us--that his explication is the true one. That's not his point at all. He's claiming only that it is available and can be adduced from what we call "evidence" in the same way that the symbolic interpretation, based on the rhetorical question, is available and can be adduced from evidence--and that these two viewpoints are irreducible.
They cannot be reconciled, as traditional students of the relationship between rhetoric and grammar, studying the rhetorical and grammatical effects of literature, take for granted they can. That's his argument. It's a kind of infighting because he's talking about people who are actually very close allies. He's saying they're doing great work but they forget this one little thing: you cannot reconcile rhetoric and grammar. Every sentence is a predication, and if every sentence is a predication, it also has the structure of a metaphor; and the metaphor in a sentence and the predication in a sentence are always going to be at odds. A metaphor is what we call a poetic lie. Everybody knows A is not B. A predication, on the other hand, usually goes forward in the service of referentiality. It's a truth claim of some kind--right?--but if rhetoricity and grammaticality coexist in any sentence, the sentence's truth claim and its lie are perpetually at odds with each other. Just taking the sentence as a sentence, irrespective of any kind of inference we might make about intentions--we know perfectly well what Edith intends and what Archie Bunker intends. It's not as if we're confused about the meaning of what they're saying. It's just that other meanings are available, and since they're not on the same page, those two other meanings coexist painfully and irreducibly at odds, right? But there are cases--suppose Archie Bunker were an "archie Debunker." Suppose Archie Bunker were Jacques Derrida, and Jacques Derrida came along and said, "What is the différance?" Right? That would be an entirely different matter, wouldn't it, because you would have absolutely no idea whether the question was rhetorical or grammatical, right?
There it wouldn't be possible to invoke an intention because the whole complication of Derrida is precisely to raise the question about not knowing, not being able to voice the différance between difference and différance and not knowing whether Archie is right or whether Edith is right. Proust I don't have time for, but it's a marvelous reading of that wonderful passage in which-- remember that he's set it up at the beginning of the essay with a kind of wonderful, cunning sort of sense of structure by talking about the grandmother in Proust who's always driving Marcel out into the garden because she can't stand the interiority of his reading. Well, later on in the essay de Man quotes this wonderful passage in which Marcel talks about the way in which he brought the outside inside as he was perpetually conscious of everything that was going on out there during the process of his reading, so that ultimately in the charmed moment of his reading, there was no difference between inside and outside. In other words, a metaphor, a rhetorical understanding of the relationship between inside and outside has been accomplished, but then grammatical analysis shows that the whole structure of the passage is additive-- that is, adding things on--and is complicating and reinforcing an argument without insisting on identity, on the underlying identity on which metaphor depends; so he calls this metonymic. By the way, I'm going to leave also to your sections the strange confusion that ensues in taking a rhetorical device, metonymy, and making it synonymous with grammar on the axis of combination. I leave that to your sections. In the meantime he says, "No, no, no then. I guess this passage isn't rhetorical after all. It must be metonymic--but wait! It is spoken by a voice. There is this wonderful overarching voice that unifies everything after all. This is what I call," says de Man, "the rhetoricization of grammar, right--but wait! That voice is not the author. 
That voice is a speaker. That voice is Marcel performing his wonderful sort of metaphoric magic, but we know that the author is painstakingly putting this together in the most laborious kind of composed way, making something up in an additive way that's not rhetorical at all; it's grammatical. This is a supreme writer putting together long sentences and so wait a minute. It must be, after all, a grammaticization of rhetoric," the whole point of which is that the worm of interpretation keeps turning. All right? It doesn't arbitrarily stop anywhere because rhetoric and grammar remain irreducible. We have to keep thinking of them as being uncooperative with each other. Okay, have to stop there--might add a word or two--but on Thursday we turn, I'm afraid with a certain awkwardness (I wish there were an intervening weekend), to Freud and Peter Brooks. In the meantime, we'll see you then. |
Literature_Lectures | 10_J_D_Salinger_Franny_and_Zooey.txt | Professor Amy Hungerford: In light of the fact that I have just sent you paper topics, my lecture today is going to do two things. It is going to give you a way into Franny and Zooey, but it's going to actually give you more than a way into it. It is really going to give you a whole packaged reading of Franny and Zooey. We have just the one day on this novel, and what I'm going to be doing for you is modeling the way literary critics use evidence to advance an argument. It's useful to you when you think about writing a paper to remember, if it's been a long time since you've written an English paper, or even if it isn't a long time since you've written an English paper, that the facts that we, literary critics, and you, writers on literature, the facts that we deal with are the details of the text itself. You may have noticed that I am very fond of reading aloud to you from these novels. I'm very fond of reading out passages. I do it a lot. Why do I do it? Well, there are two reasons, one because I want you to hear literary art. Literary art is a verbal art, and I think too often we only read it silently; probably not since you were children that people read to you so much. So, to get a sense of that, you have to have it in your ear and feel the sound and the rhythm and the quality, the timbre, the expression of the voices that we have in these novels. Our writer for today thinks so highly of that capacity of literature to embody the human voice that he imagines a whole religious world around him. That's going to be the gist of my argument today. But then, there is a second, sort of, less mystical reason, and that's that these are the facts of a literary argument, these words that I give to you. It's like, if you're in an astronomy lecture, they're going to give you some facts about the composition of a planet, or its atmosphere, or whatever. Those are the facts for that field. 
For this field, these are the facts. So, in your papers, if you find yourself writing and you get to the end of a page and you look back, you scan back over your page, and you see that there are no quotation marks, you are not using any of the facts of the novel to produce your argument, to support your claims. So, that's like the eye test, the glance test. Are you supporting your claims? If you have very few quotations, chances are you are not. So, think of this lecture, as I go through it, as a kind of model. Pay attention to what I'm doing in using these textual bits and pieces and putting them together and making claims for them. I do it every week. It just so happens that this argument is more closed, more settled, in my own mind. It's less of an opening argument than it is something that I want to convince you of. So, there's a reason for that and that is that I'm writing about this novel. It's in the introduction to a book that I'm writing about the literature of this period, and so it's very present to my mind as a sort of piece of a larger argument about religion and the American novel in this period, so that's what I'm giving you. When you approach any novel to make an argument about it, if you want to be ambitious, the first thing to think about is well, what's obvious about the novel? What can you observe at first glance about its style, about its form, about its setting, about its character, about its presuppositions? In Franny and Zooey, what did you notice? Tell me what you noticed, at first bat, if any of you have read it. What did you notice about the novel? Uh huh. Student: It doesn't move around very much. It just stays in a limited space. Professor Amy Hungerford: Absolutely. Confined settings, very confined settings, absolutely. Yes. What else? Yes. Student: A lot of dialog? Professor Amy Hungerford: Lots of dialog, yes. What else? Uh huh. Student: [inaudible] Professor Amy Hungerford: Yeah, yeah. Absolutely. Yeah. There's a back story. 
You can feel that back story to the novel. Yeah. What else? What else did you notice? Yes. Student: There's a lot of focus on like little motions that people do, like picking up cigarettes and dropping things. Professor Amy Hungerford: Yeah. A lot of attention to physical detail and physical movement, and that's connected to this point about confined spaces. It's the movement of bodies within confined spaces that really preoccupies this novel. What about the style of the novel? You talked about dialog. Is there anything else about the style that you noticed? Yes. Student: There's a lot of italics. Professor Amy Hungerford: A lot of italics. What does that connote to you? Student: Trying to convey feeling. Professor Amy Hungerford: Yeah. Absolutely. A lot of emphasis, a lot of variation in tone, and the italics are part of the representation of that. Yeah. What else? Yes. Student: A lot of the dialog seems to be combative. There's arguing between two people. Professor Amy Hungerford: Yes. This is a book about arguments, absolutely. What are they arguing about most of the time? All right. Well, that's where I will pick up. Oh, Sarah. Do you want to say? Student: There are a lot of abstract ideas. Professor Amy Hungerford: Yeah. Yeah. Absolutely. They are talking about abstract intellectual ideas, often religious or philosophical ones, and that, plus its setting: I hope you noticed the sort of New Havenish setting of Franny's breakdown. We're told that Lane isn't exactly a Yale man, but he sure looks a lot like a Yale English major, dare I say, such an unpleasant character, and so, so pompous. What you do, when you write a paper or try to advance an argument, is try to write an argument that will attend to all those things that you guys just said, that you take the obvious things, and when you craft an argument, the best thing, the most ambitious thing, to do is to come up with something where, in the end, you can say something about those major aspects. 
You don't have to do it in the paper, but it should be an argument that has something to say back to those obvious things. Why is the style this way? Why is the plot working this way? Why are these particular characters behaving in this way? Why use those confined spaces? So, my argument today is going to try to have something to say back to all those obvious aspects that you pointed out so rightly. But I'm going to start from a much more pointed and local question. And this is the other thing that a good short paper especially does, is that you don't get at all that big stuff by, kind of, taking it head on. You have to come down to the facts that I was talking about, the bits of text, the text itself, the words that author chose; that's where you begin. And part of the genius of a strong paper is choosing the place in the text to begin that pointed analysis. So, my choice for this is that odd introduction in-between the two stories, and this is on 48 and 49. This is, we come to find out, Buddy, Zooey's older brother, narrating the story, and Buddy gives us a little preamble telling us how the real characters in the story, the real people who are then characters in his story, how they felt about the story and what their objections to it were (and this is on 48). We find out that Franny objects to the story's distribution in the world or the movie, the prose home movie as Buddy calls it, because it shows her blowing her nose a lot. His mother, Bessie, objects because it shows her in her housecoat. But Zooey has a more substantive objection. It's the leading man, however, who has made the most eloquent appeal to me to call off the production. [This is in the middle of page 48.] He feels that the plot hinges on mysticism or religious mystification. In any case, he makes it very clear, a too vividly apparent transcendent element of sorts, which he says he's worried can only expedite, move up, the day and hour of my professional undoing. 
People are already shaking their heads over me and any immediate further professional use on my part of the word "God" except as a familiar, healthy American expletive will be taken or rather confirmed as the very worst kind of name dropping and a sure sign that I'm going straight to the dogs. And then, he speaks back to Zooey. He says, "Well, I'm going to still distribute my story. I still want to tell this story," and he does it in a kind of roundabout way. And this is on page 49. Somewhere in The Great Gatsby, which was my Tom Sawyer when I was twelve, the youthful narrator remarks that everybody suspects himself of having at least one of the cardinal virtues and he goes on to say that he thinks his, bless his heart, is honesty. Mine, I think, is that I know the difference between a mystical story and a love story. I say that my current offering isn't a mystical story or a religiously mystifying story at all. I say it's a compound or multiple love story, pure and complicated. What Buddy does, in this passage, is set up this opposition between his own reading of his story and Zooey's. Now, why are we given these objections? I think it's to give us a dynamic sense of the family conversation going on between them, but it also addresses one of those obvious things. They talk a lot, as Sarah said, they talk a lot about abstract questions, and this puts the meaning of the story in that abstract register. Is it a love story, or is it a religious story, a mystifying story? Which is it? I am going to argue that it's both. And I'm going to advance that argument by going straight to the theological question that Zooey is so intent on solving when Franny is having her breakdown in the living room. So, just to review: Franny has her breakdown when she comes into what I suspect is New Haven to attend the Yale-Harvard football game with her boyfriend, Lane. 
So, Franny, when she sees Lane, affects great enthusiasm, and so on, but this is what we hear about Lane from the narrator. This is on page 11. Lane was speaking now as someone does who has been monopolizing conversation for a good quarter of an hour or so and who believes he has just hit a stride where his voice can do absolutely no wrong. [I always read this and I think, "I'm lecturing."] "I mean, to put it crudely," he was saying, "the thing you could say he lacks is testicularity. You know what I mean?" He was slouched rhetorically forward toward Franny, his receptive audience, a supporting forearm on either side of his martini. "Lacks what?" Franny said. She had had to clear her throat before speaking. It had been so long since she had said anything at all. Lane hesitated. "Masculinity," he said. "I heard you the first time." "Anyway, it was the motif of the thing, so to speak, what I was trying to bring out in a fairly subtle way," Lane said, very closely following the trend of his own conversation. "I mean God. I honestly thought it was going to go over like a goddamn lead balloon and when I got it back with this goddamn A on it in letters about six feet high I swear I nearly keeled over." Franny again cleared her throat. Apparently, her self-imposed sentence of unadulterated good listenership had been fully served. "Why?" she asked. Lane looked faintly interrupted. "Why what?" "Why did you think it was going to go over like a lead balloon?" "I just told you. I just got through saying this guy, Brughman is a big Flaubert man or at least I thought he was." "Oh," Franny said. She smiled. Franny is disgusted by his pomposity. This experience, combined with her experience in a religion seminar with this man, Professor Tupper, at school, has convinced her that the world is superficial, that it's impossible to find anything meaningful in the academic discussion of these pseudo-intellectual problems, the "testicularity" of one writer or another. 
And Lane's engagement with literature, specifically, is all about his ego inflation. So, he can't wait to tell Franny that the professor said he should try to publish it, and then my favorite thing: he wants to read it to her over the football weekend. "Hey, come, let's read my English essay." Hello. Student: Hi. Can I interrupt? We have a couple of singing valentines. Can we deliver them now? Professor Amy Hungerford: No, you can't. Sorry. Student: Thank you. Professor Amy Hungerford: And I'm worried about e-mail! Talk about pricking my pomposity. All right. So, she starts saying the Jesus prayer, which is, "Lord Jesus Christ, Son of God, have mercy on me, a sinner." Now, she has taken this prayer from a book called The Way of the Pilgrim. It's a Russian Orthodox religious classic, a very old text, and it depicts the life of a pilgrim. And we get the summary of this a little bit in the novel, as Franny explains it, or tries to explain it, to Lane, who is entirely uninterested. It is about a man who tries to take seriously the Bible's injunction to pray without ceasing, and the prayer for Franny becomes a kind of mantra. She is trying to say it over and over again as she goes about in this world that is so disappointing to her, feels so false to her. And so, finally, the strain of trying to hold out this kind of religious awareness in the face of Lane and his English paper is just too much, and she faints. Now, Zooey has a big problem with her use of this prayer, and this is what gives the book that sort of combative tone that we were talking about a little earlier that somebody mentioned. So, if you look on page 169, my question now is, in my argument: What is Zooey's critique of Franny's use of the prayer? What constitutes that critique? What's wrong with it? So, on 169, he says to Franny as she's sniveling on the couch: "God almighty, Franny," he said. 
"If you're going to say the Jesus prayer, at least say it to Jesus and not to Saint Francis and Seymour and Heidi's grandfather all wrapped up in one. Keep Him in mind if you say it, and Him only, and Him as He was and not as you'd like Him to have been. You don't face any facts. This same damned attitude of not facing facts is what got you into this messy state of mind in the first place, and it can't possibly get you out of it." And then, this argument goes on for a couple of pages, and I'm just going to pick up the end of it here, on the bottom of 171. He is explaining who Jesus was. "If you don't understand Jesus, you can't understand His prayer. You don't get the prayer at all. You just get some kind of organized cant. Jesus was a supreme adept, by God, on a terribly important mission. This was no Saint Francis with enough time to knock out a few canticles or to preach to the birds or do any of the other endearing things so close to Franny Glass's heart. I'm being serious now, goddamit. How can you miss seeing that? If God had wanted somebody with Saint Francis's consistently winning personality for the job in the New Testament, He'd have picked him, you can be sure. As it was, He picked the best, the smartest, the most loving, the least sentimental, the most unimitative master He could have possibly picked. And when you miss seeing that, I swear to you, you're missing the whole point of the Jesus prayer." So, Zooey's critique is that Franny is not being specific in her use of the prayer. She's paying no attention to who Jesus was and what it means to actually pray to that figure. But, to anyone paying attention to the other things that Zooey says and the other things that he does in this novel, this is kind of odd, and it's hard to square. 
So, my next kind of question is: How does that very doctrinally specific understanding of the Jesus prayer relate to the whole religious education that Buddy and Seymour gave him, and that he seems to be thinking so hard about as he reads that letter in the bathtub? The letter in the bathtub tells us about that education, and let's look on 61. Sorry. That's not exactly the right page. This is 65. I'm sorry. In this letter Buddy explains to Zooey what he and Seymour have been trying to do. "Much, much more important though," [Buddy says in the middle of 65] "Seymour had already begun to believe, and I agreed with him as far as I was able to see the point, that education by any name would smell as sweet, and maybe much sweeter, if it didn't begin with a quest for knowledge at all, but with a quest, as Zen would put it, for no-knowledge. Dr. Suzuki says somewhere, that to be in a state of pure consciousness, satori, is to be with God before He said, 'Let there be light.' Seymour and I thought it might be a good thing to hold back this light from you and Franny, at least as far as we were able, and all the many lower, more fashionable lighting effects--the arts, sciences, classics, languages--'til you were both able at least to conceive of a state of being where the mind knows the source of all light." So, the religious education that Zooey's response to Franny comes out of, is precisely not a doctrinally specific Christian education. Rather, it's something more like a Buddhist tradition, a syncretic, mystical tradition. The idea is that there is some state of being with God. Knowledge, all the arts and sciences, literature, all of the religious writings of the world are manifestations of that voice that at its origin is God saying, "Let there be light." It's the voice of creation. Seymour and Buddy want Franny and Zooey to rest at that origin, undistracted by the manifestations of the creation, and know some kind of consciousness of God in that place. 
So, Zooey, pretty much, subscribes to these tenets, and you can see it especially on page 175, when he goes into his brother's old room. Now, let me explain a detail that I think is important, but I think a little lost to us in today's world of technology. There's a phone in Buddy and Seymour's old room that is a private internal line, and it just goes from one room to another in the apartment; it's not an outside phone line. And what's interesting about it, and what indicates its importance to Buddy, is that Bessie gets on him about getting a phone where he's teaching in upstate New York; he's teaching writing as a visiting writer at a college in upstate New York. And Bessie, his mother, keeps saying, "Well, why won't you get a phone, Buddy? You're paying to maintain this interior line in our apartment, and yet you won't get a phone." For Buddy, the phone that's within the family compound, so to speak, family apartment, is the more important line of communication. So, when Zooey goes upstairs to use that phone, it's freighted with all the significance that Buddy has put upon it. But there's a whole ritual involved in Zooey's entrance into this place. This is on 175. At the far end of the hall he went into the bedroom he had once shared with his twin brothers, which now, in 1955, was his alone, but he stayed in his room for not more than two minutes. When he came out, he had on the same sweaty shirt. There was, however, a slight but fairly distinct change in his appearance. He had acquired a cigar and lighted it, and for some reason he had an unfolded white handkerchief, draped over his head, possibly to ward off rain, or hail, or brimstone. So, why does he do this? What's the meaning of this little detail of Zooey's appearance? Well, one thing that a literary argument can do is take something small like this and try to give an account for it, so that's what I'm going to do. He's venerating the room that Seymour and Buddy occupied. 
He's covering his head in a traditional religious fashion, so in order to enter this holy place he covers his head. (The cigar? I don't have an account of that. You guys figure that out. That's the other nice thing about literary arguments. There are always little details that they don't account for, and that's the loose thread that you can pull to make your own.) And so, what does he find when he goes into this holy space? Well, he finds two panels of beaver board, on 178-179, and they have the quotations that Seymour and Buddy have collected from all their favorite religious, philosophical, mystical, literary reading, and I'd like you just to think about one of them. So this is the bottom of 178. This is from Sri Rama Krishna. "Sir, we ought to teach the people that they are doing wrong in worshipping the images and pictures in the temple." Rama Krishna: "That's the way with you Calcutta people. You want to teach and preach. You want to give millions when you are beggars yourselves. Do you think God does not know that He is being worshipped in the images and pictures? If a worshipper should make a mistake, do you not think God will know his intent?" This is, I think, one of the best examples of that syncretic view of religion, that basically all worshippers are worshipping the same god. They may do it in different forms; they may make mistakes; they may be mistaken about where God resides. But, in this view, God is so powerful and so transcendent that God will know the heart of the worshipper. So, if you apply that back to Franny, why does Zooey have this difficulty? Why does he have this difficulty with the specificity of her prayer? What is it that is bothering him? Well, I think you begin to get an answer to this tension between specific doctrine and syncretic religion when Zooey gets to the subject of acting, which is what this second attempt at speaking to Franny, this time over the phone, is concerned with. 
There is a detail here of course that Zooey, in making a second attempt to converse with Franny about this, impersonates his brother, Buddy, on the phone. Now I'm just going to leave that observation, remind you of that. I have a way to account for that, but it's going to take to the end of my argument to do that, so I'm going to argue that that's significant, but I'm not going to talk about why it's significant yet. But let's look at that theology of acting. This is on page 198. This is coming towards the climax of their conversation. Part of what's been bothering Franny is her frustration with acting, and that's one of the things that Lane is so surprised she has given up; it was the only thing she was passionate about. And we know--from reading Buddy's letter over Zooey's shoulder in the bathtub--we know that Zooey had similar concerns about his own acting career, his own commitment to acting that Buddy tried to persuade him out of. "You can say the Jesus prayer" [Zooey says to Franny], "from now 'til doomsday, but if you don't realize that the only thing that counts in the religious life is detachment, I don't see how you'll ever even move an inch. Detachment, buddy, and only detachment, desirelessness, cessation from all hankering. It's this business of desiring. If you want to know the goddam truth, that makes an actor in the first place. Why are you making me tell you things you already know? Somewhere along the line in one damn incarnation or another, if you like, you not only had a hankering to be an actor or an actress, but to be a good one. You're stuck with it now. You can't just walk out on the results of your own hankerings. Cause and effect, buddy, cause and effect. The only thing you can do now, the only religious thing you can do, is act. Act for God if you want to, be God's actress if you want to. What could be prettier?" 
Zooey has this understanding of the cosmos that suggests that strong, specific human desires actually change the course of cosmic futures. So somewhere, maybe in pre-incarnational time before Franny became Franny, she wanted to be an actress. The religious thing, Zooey says, is to inhabit that, to honor that, to follow up on the results of that prior desiring. But why is it acting? Why specifically acting and why this weird comment at the end, "What could be prettier?" What does prettiness have to do with this? Well, if you look at the description, for instance, of Zooey's face, there's a beautiful description of how his face is beautiful, in what way his face is beautiful. We know that Franny is an attractive young woman. We know that she worries about beauty, and especially in poetry. When she's trying to explain what's wrong, part of what's wrong is that when she learns poetry in the classroom none of it seems beautiful to her; it all seems like some other kind of production, not the production of beauty. So prettiness, beauty, the aesthetic is at the heart of the spiritual practice that Zooey is urging upon Franny, the spiritual practice of acting. And I would remind you, looking back to that passage on page 65 and onto 66, that specifically among the figures that Buddy mentions, the religious and literary figures, we find Shakespeare. And I think Shakespeare in this train of figures represents the literary that is also the dramatic. So, in our tradition Shakespeare is the literary name above all others. It's important for Salinger that Shakespeare was a dramatist. It's important for this novel that Shakespeare was a dramatist, not just because Zooey wants Franny to inhabit acting fully as her desire and as her religious practice as opposed to saying the Jesus prayer. Acting has a deeper relation to the novel and here's where we get back to that question of being in closed spaces and the lack of movement. 
If you think about this novel, it has the structures of drama. It takes place in small rooms. If you begin to think about it, you can almost see the set changes: in the diner, on the train platform. That's about the most open place we see. In the apartment, all you do is move from one room to another. These are dramatic spaces. Moreover, the bathroom: completely a dramatic space. It even has a curtain hiding Zooey from his mother. Acting becomes a religious practice for much more than Franny, not just for Franny and for Zooey 'cause Zooey's an actor too. It's a religious practice for the novel itself. And that, I would suggest to you, is where we can begin to bring in some of those obvious things: the prevalence of dialog, those enclosed spaces, the tone, the exaggerated tone, the somewhat histrionic quality, the combativeness of that conversation, its sheer style. These are great talkers! But, I would suggest to you, Salinger is trying to balance something very carefully, that relates back to this question of doctrine versus syncretism in the religious sphere of the novel, in the religious thematic material of the novel. And, for this, I'd like to look at the very end of the novel. Zooey finally suggests that it's attention to the audience that makes an actor really a special actor, a religious actor, and he points back to advice that Seymour had given him about performing on a radio show. So, they were all radio show whiz kids, and one day Zooey had not wanted to shine his shoes and says--this is on page 200--that: "The announcer was a moron, the studio audience were all morons, the sponsors were morons, and I just damn well wasn't going to shine my shoes for them, I told Seymour. I said they couldn't see them anyway where we sat. He said to shine them anyway. He said to shine them for the Fat Lady." Now, why the Fat Lady? 
It's this mythical, incredibly humanly embodied-- whenever you see a fat lady in a novel, one of the first things you want to ask is: why does that person need to be excessively embodied? That's what fatness is in a novel like this. It's excessive embodiment, the human. That's what this woman represents, the human. Connect to the human audience; respect the human audience. Act for them, to them. Don't act as if they are just some bunch of Philistines out there who can't appreciate your art. And then he says to Franny: "I'll tell you a terrible secret. Are you listening to me? There isn't anyone out there who isn't Seymour's Fat Lady. That includes your Professor Tupper, buddy, and all his goddamn cousins by the dozens. There isn't anyone anywhere who isn't Seymour's Fat Lady. Don't you know that? Don't you know that goddamn secret yet? And don't you know--Listen to me, now. Don't you know who the Fat Lady really is? Ah, buddy. Ah, buddy, it's Christ Himself, Christ Himself, buddy." This seems like a completely Christian answer to Zooey's problem, and we're back on the horns of that dilemma. Is this a syncretic religious vision, or is it a Christian one? But look what follows. This is not the last word. For joy, apparently, it was all Franny could do to hold the phone even with both hands. For a foolish half minute or so there was no other words, no further speech, then "I can't talk anymore, buddy." The sound of a phone being replaced on its catch followed. Franny took in her breath slightly but continued to hold the phone to her ear. A dial tone of course followed the formal break in the connection. She appeared to find it extraordinarily beautiful to listen to, rather as if it were the best possible substitute for the primordial silence itself. The dial tone is that state of awareness of the divine that Buddy speaks of when he says- when he speaks of being with God before God said, "Let there be light." 
Zooey's voice breaks that dial tone at the beginning of his phone call, and then the tone resumes afterwards; the dial tone encases Zooey's voice, so that what Zooey says to her is one of the rays of light of God's creation, one of those things, like Shakespeare, that is part of the whole created world, but what Franny can tune into, after hearing his voice, is that very essential divine sound, meaningless sound. And so, this is how Salinger balances the syncretic, the sort of empty mysticism of Seymour and Buddy, with the embodied, doctrinal, specific insistence that we see from Zooey, from the insistence on human specificity, the Fat Lady, the very material human fleshly person. Salinger's own novel performs in this way, and that's how you would want to think about moments like on the bottom of 180. This is describing the bedroom of Seymour and Buddy as Zooey walks in. A stranger with a flair for cocktail party descriptive prose might have commented that the room, at a quick glance, looked as if it had once been tenanted by two struggling twelve-year-old lawyers or researchists. And then if you flip back to--let's see--172 describing Zooey's sweaty shirt, "His shirt was, in the familiar phrase, wringing wet." And there are lots of moments like this, self-conscious moments of style. So, "in the familiar phrase, 'wringing wet,'" he's saying, "I'm about to use a cliché. Here it is. There it is." He says someone with a flair for cocktail party conversation, a witticism, would say this. He gives it to us, but he frames it as an affectation of style. So, what Salinger, I think, shows us is that affectation, without something like love, is just affectation, and that's what Lane represents. That's the affectation of literature without any human connection. That's why when he talks on and on, it's as if Franny isn't even there listening to him. He's been going on a quarter hour, and he's just hitting his stride. 
Franny, Zooey, Buddy, Bessie: they all try to speak directly to each other. The family language is what makes them very human; they embody this very specific family language. And so, I would argue to you that Salinger imagines literature as a performance of this kind, a performance of a language of family love that is nevertheless also an aesthetic language. And I think, actually, probably the best image of that is in Seymour's diary. When Zooey sits down to make that phone call, he opens up Seymour's diary and he sees Seymour's account of his birthday celebration, where the family had put on a vaudeville show right in their living room. Remember, his parents are vaudevillians. And that description, which I won't read just because we're running out of time, it's on 181 and 182. You can look at it yourself. It's brimming with pleasure and love. This is why Buddy really can't insist that Zooey is wrong about this being a religious novel, because being a religious novel and being a love story are finally for Salinger the same thing. It's the performance of human connection. That's the phone line; that's conversation; that's letters. The performance of family conversation is like acting, and that is why Zooey impersonates Buddy; he's acting. But Franny can hear the specific voice, and this is when you know that Franny is not just a sort of empty airhead. She may be mistaken about who she's praying to in the Jesus prayer, but she damn well knows the timbre of her brother's voice and his particularity of speech. And so, when he tries to imitate Buddy, she finds him out very quickly. And this is when you know that Franny really does benefit from Seymour and Buddy's religious education in the same way that Zooey has. And so, if we step back for a minute now from my reading, there are a couple of things I want to say. First of all, I hope you can see, using that as a model, how I went from big claims about the novel into specificity to support those claims. 
That's the structure of any good literary argument. The attention moves from the very small to the large and back again. There is a kind of rhythm to that, that folds in those obvious parts of the novel to a more thematic set of concerns, in this case about the religious philosophy of this novel. So, as you think about writing papers, go through that two-step process of thinking about the large picture of what a novel is doing as a piece of literary art, and then thinking about a focused set of concerns. And, in the final development of your paper, making sure that those two can relate to one another. The second thing I want to say is less about paper writing, and more about the trajectory of this course and what we're seeing in common between these novels. So, you can read this very closely to On the Road. If Dean cared for "nothing but for everything in principle," you could say, conversely, that Salinger cares for everything in particular, and in principle, nothingness. It's nothingness that is the mystical state rather than everythingness. And it's interesting to think about whether those two are really opposites. I think these novels imagined them to be opposites, but it's something for you to think about, about whether they really are. So, Zooey's specificity is the specificity of doctrine, but it's also the specificity and more importantly the specificity of person. So that's the everything that he cares about, person. I will stop there, and please bring both On the Road and Franny and Zooey to section this week. And, by the way, one last thing: If you've been sketchy about your section attendance, I suggest that you try to pull up your socks and go. We will be talking about papers in the section. It will be helpful to you, and it will also give you a chance to talk about these books, so please do go. |
Literature_Lectures | Chaucer_what_is_hidden_in_the_Canterbury_Tales_by_Dolores_Cullen.txt | Have you seen it? Someone's just written a book called Who Murdered Chaucer? You may think that's a joke, but it isn't, and when you hear what I have to say, you may be ready to take that title seriously. Geoffrey Chaucer died in 1400. Forget images of Robin Hood, damsels being rescued, and peasants happy in their work; the fourteenth century has recently been called one of the most turbulent periods in English history. That turbulence is the setting of Chaucer's life. In the sixty or so years that he lived, the Black Plague devastated Europe five times; one attack killed 50,000 in London within a few months. The Hundred Years War between England and France was on-again, off-again. Peasant uprisings were crushed in England and elsewhere. The English crown passed from Edward III to Richard II to Henry IV; one needed to keep one's wits about him as the ruling power changed, so as not to lose one's head. A number of powerful friends of Chaucer lost favor and were executed or fled the country. Then there was the matter of the church: the Great Schism, one Pope in Rome and another in France. It was never resolved while Chaucer lived, and the Inquisition was functioning on the continent and about to cross the channel into England. Although Luther would not make his public protests against the church for more than a hundred years, he surely was not the first to have protesting thoughts. I knew none of the foregoing information when I entered college as a middle-aged student. I sought out those facts and many more because my introduction to Chaucer opened a new and tantalizing vista; it changed my life. As I read The Canterbury Tales, the images created in my mind were double. They were like double-exposed snapshots, one image superimposed on another. It was startling but fascinating. Even more absorbing was the cast of pilgrims: why precisely that combination of characters? After weeks of pondering the 
group, I suddenly saw the answer as the pilgrims merged with another set of images: each character masquerades in the disguise of a pilgrim. I had to tell someone or burst. As I blurted out tales to one professor, he said, "Mind your humility." Another tried to dissuade me from pursuing the idea. Truth to tell, I wasn't discouraged by these professors, because I knew they could be wrong; that insight came from having been the wife of a professor for many years. And then there was Virginia Hamilton Adair, the poet. She was also one of my professors, and she encouraged me to develop what I saw. It was as if I'd been handed a map to a secret treasure. I began researching on my own, and pieces started to fit together. Strong statements against authority had been concealed by the poet. But would this damage Chaucer's reputation? He'd no longer be just a jolly storyteller. The college chaplain advised, "Never fear the truth." Those words still guide me, so I determined to do what I could to bring Chaucer's message to light. Here's a thumbnail sketch of The Canterbury Tales and a hint of the double intention. After an elaborate introduction of each of the thirty or so pilgrims, a character called the Host offers to guide the pilgrims and declares they will tell stories as they travel from London to Canterbury; a banquet is promised at the end of the pilgrimage. When this Host is introduced, it is his actions that are described, not his appearance. He provides the best food and strong wine; he tells the pilgrims what they will do and expects immediate agreement. He is often seen as a pompous boor. That will do for the surface, but what of a hidden meaning? I found no critics commenting on a second meaning, an allegorical structure, of Chaucer's masterpiece. Modern scholars, it seems, hold allegories in disrepute, although a covert meaning was expected by medieval readers; critics had judged Chaucer to have modern tendencies, freeing him from allegories. I got a couple of small articles from my research accepted, but when the 
material I offered was outside the mainstream of current thinking, it was rejected. I soon realized I'd have to write a book to present sufficient support for what needed saying. The search was so exciting that I included that in the book, too. How could a double meaning remain hidden for 600 years? By Elizabethan times, 200 years after Chaucer, his work was already thought to be old-fashioned, and it languished. Later, when scholars of the 1800s took a renewed interest in his writings, they did so with prejudices about Chaucer's religious loyalties, papist or no. The reluctance of these Victorians to deal with bawdiness and obscenities caused distortions as well. Unfortunately, some of the nineteenth-century statements are offered today as if they are fundamental truths instead of old opinions. Then came twentieth-century prejudice. Robertson says, in his often-recommended A Preface to Chaucer, that allegory was almost universally regarded with suspicion if not with contempt. That frustrated me, because I could see splendid double images that begged to be interpreted. The Host, for example, as the central figure of The Canterbury Tales, is always present to the action. The most notable host in Chaucer's day was found in church: the presence of Christ in the Eucharist, that is, in the Host, was of tremendous importance. Celebrations of that belief outshone Christmas and Easter and often involved whole communities in day-long processions and dramas. When the Canterbury Host provides sustenance for the pilgrims, the chosen words are those used to describe the Eucharist. The Host will guide the pilgrims, and a banquet will conclude the pilgrimage. If we see this journey as a pilgrimage of life and focus on Christ within this character, the covert intention of his pronouncements can be read as expressions of God's will. That's just a hint of what's to come. To test my ability to explain what was so fascinating, I joined a writers group. They were my helpful and essential audience. Book one, which is all about Chaucer's Host, developed over 
several years. It tells how Chaucer's work had taken up a life in my intellect: it began gesturing to me, teasing me, cavorting along the paths of my mind. I found it irresistible; when it coaxed, I followed where it led. I invite you to join us, the poem and me, in our adventure together. Chaucer's poetry will, with inexplicable ease, be alive to you as well as to me, if you will extend your mind in a gesture of welcome. When the book was finished (which means it had the approval of my writer friends), I began sending it out to publishers of literary criticism. I never kept track of the rejections: "Thank you for your submission; it doesn't fit our current needs, but good luck." I figured this was a job I had to do, so I just did it. Then, in June of 1997, a publisher replied, "I wish my brother had lived to read your book." The brother had been a medievalist and chairman of a university English department. John, my publisher, had understood every word I said and found it worthy of publication. How wonderful! A professor once objected that I would of course say the Host was Christ because I'm a Catholic. The true emphasis is that I see the image of Christ because Chaucer, a Catholic, presents clues to be recognized, as in the Host's activities for, and demands upon, the pilgrims. This covert element does nothing to replace the popular storyline; the second meaning is a substructure, an added dimension to the already remarkable tales. The book about the Host touches upon many clues scattered through the speeches and scenes in the tales. Each clue does its part to bring the image of Christ into focus. We hear his actual name, Harry Bailey, mentioned in passing just once, but Chaucer prompts us with numerous repetitions of "the host, the host, the host" as a means of prompting us to catch sight of the Christic identity. A scene much is made of for other reasons actually presents one of the strong identifications of Christ within the Host: it's the Pardoner's describing the Host as most enveloped in sin. The 
poet, with one word, distinguishes a picture of Christ, who has taken all of man's sins upon himself. If we mistakenly see the Pardoner's words as indicating a sinful, a sin-filled, person, we fail to grasp the creative subtlety of Chaucer's genius. I had the pleasure of reading the Host's book to Virginia Adair. You may not realize that she had lost her sight. I mention Virginia because, when I began to read the final chapter, the serious and devastating examination of a character that has always been seen as a comic stereotype, she interrupted to say she knew what would be revealed. I was astonished; perhaps, as a poet, she shared the poetic thought pattern with Chaucer. With the publication of the first book, Chaucer's Host: Up-So-Doun (that's "upside down"), because my ideas were unconventional, the next project would be to interest my publisher in a second book, one that concentrated on Chaucer as both real-life and fictional pilgrim. The second volume first looks at facts that had been discovered about Chaucer's life; then we examine his narrative when he is called upon by the Host, who is, as we said, the covert image of Christ. The poet was born about 1340 and served in a royal household as an adolescent. He survived onslaughts of the plague. During military service he was captured by the French and subsequently ransomed by Edward III. He proved to be a trustworthy emissary for Edward as well as for Richard II. While on many missions to Italy, France, and Spain, he had the opportunity to see Gothic cathedrals being built and no doubt observed evidence of the actions of the Inquisition. Back in England he held many responsible positions: as controller of customs, dealing with taxes on imported commodities; as clerk of the works, overseeing repair of bridges, royal buildings, and such; and as a member of Parliament. He was able to survive political machinations at the royal court, although close and powerful friends left the country or lost their heads. Very near the end of his life, Henry IV 
came to the throne. Chaucer was about sixty when he died; his final residence was on the grounds of Westminster Abbey. Sensational events in the life of the real and the fictional Chaucer are given very little circulation. Prudery, no doubt, was the early reason for being hush-hush, but with our scandal-loving world today, it is difficult to understand how a potential scandal remains unexposed, unexploited. In the first instance, the real-life Chaucer was taken to court for rape. This basic fact is ignored in most biographies, diminished in others by presenting the charge as an abduction instead, or turned inside out by casting him as a hero protecting a young woman. A lawyer in the 1940s who analyzed the case, defining fourteenth-century legal terms, said the claimed heroics are entirely unsupported by evidence. Chaucerians, however, are reluctant to tarnish the poet's reputation, even at the expense of truth. And what can we say about the fictional Chaucer, a pilgrim within The Canterbury Tales? Chaucer the writer gives his fictional counterpart a signal honor by placing him at the beginning, middle, and end of the tales: the pilgrim introduces the adventure, closes with a prayer, and at the midpoint in the journey is the one called upon for a story. Oddly enough, all that being said, the stories, two in a row, by the author himself are regularly neglected, one because it's presumably dull, the other because it's ponderous. With the first story, they say Chaucer is poking fun at himself by telling the very dullest story. That opinion can only be perpetuated if we ignore all the double entendre. The ongoing activity of Sir Topas, the main character, is pricking and performance in the saddle. Images of sexual activity are avoided when footnotes define "pricking" as galloping or spurring. The word, however, meant intercourse, as it does today, but that goes unmentioned. When Chaucer, for example, says the hero was pricking on the soft grass, students are told to see him galloping over the grass. The poetic sexing has been 
eliminated. When pilgrim Chaucer finishes one segment of this first story, he asks his listeners if he should go on; then he continues his account of equestrian delights. He has barely hit his stride again when the Host cries halt in the middle of a sentence. The Host, who is Christ, claims his ears are aching from the "drasty" speech (that's filthy speech). The action, as a catalog of seductions, would be expected to raise an objection from Christ. Now let's go back into the story, to the lines where Chaucer asks his audience if he should continue. He says, "If ye wol any moore of it, to telle it wol I fonde." "Fonde" is the only word there that has not survived into modern English. Notes generally say it means strive, "I will strive"; that's definition number seven of the word in the Middle English Dictionary, and it works for the surface plot. But the covert intention is found in the very first meaning: to try the patience of God. That definition astonished me. It discloses a completely different intention on the part of the narrator: if the audience wants more, the storyteller says, to tell it I will try the patience of God. Exactly the limit of Christ's patience is dramatized 26 and a half lines later, as pilgrim Chaucer's account is cut short. Another favorite memory I have of my friend Virginia has to do with Sir Topas. When we accept the sexual basis of the tale, the enemy who threatens the activities of Topas must be a threat to that frequent pastime. The enemy is a giant called Sir Olifaunt. As I was reading, Virginia did a mental leap at this point and exclaimed, "Don't tell me, he's really elephantiasis!" And then she laughed again. She took me by surprise. She was right, of course, but I told her it wasn't fair, because I hadn't built up to it yet. What about the subsequent tale from pilgrim Chaucer, the reputedly ponderous one? It really is ponderous, filled with quotations from homilies and philosophies. A fruitful discovery comes, however, from comparing the Middle English vocabulary to the Old French it's patterned 
after. Small additions and deletions raise the tale to a new level of significance. The original French woman who is purposely left at home is nameless, but Chaucer calls her Sophia, that is, Wisdom, who is purposely left at home. And where the French has only a form of "Lord" to address God, Chaucer expands the reference to Lord God Almighty, the High God, and many other variations. The changes demonstrate a previously unrecognized unity of the two stories from the author-pilgrim. This second book, Pilgrim Chaucer: Center Stage, amused and surprised my publisher, and so was published in 1999. The first story tells us about the Host, the second of Chaucer himself. Now all that was left to explain was Chaucer's plan for all the other pilgrims. What a job that was going to be! There were three parts to getting the third book in print. First, I had to interest my publisher. Next, I needed to gather sufficient background to include, so readers could understand Chaucer's creativity. And finally, I wanted very much to have it published by the six hundredth anniversary of Chaucer's death, in the year 2000. I constructed a sort of game to see if others could recognize the covert identities by the clues Chaucer incorporated. With just general knowledge, groups of my friends delineated images of several of the pilgrims beyond the expected human figure. I sent the game to my publisher, who saw the creative scheme immediately and sent me the contract for book three. The first public test of the game was at an AP class in high school. It was an exciting afternoon for me and for them. I explained Chaucer's setting: many characters who all arrive about sunset and come to stay for the night. It only makes sense, of course, that Chaucer, as well as people today, be familiar with these characters. As soon as I had completed the details of the game, one young man said, "They're stars, of course." Wonderful! Then, from the descriptive oddities, the class recognized many of the veiled identities of constellations. Here are a few as an illustration: a 
Miller has wide black nostrils, broad shoulders, tremendous strength, and crashes into things with his head. Do you see the image of a bull, Taurus? A pair of brothers who travel together hardly needs prompting; there is a constellation that is just such a pair of brothers, Gemini. A hired Cook with a running sore is an attention-getter; it was meant to be. The hideous sore is just one of several clues for the zodiac sign of Cancer. The game became the basis of the introduction to book three. With the publisher's go-ahead, I started the next project: gathering background for the chapter that showed Chaucer's creativity. He'd written a textbook about astronomy, which was merged with astrology when he lived; fables about star groups would be part of his thinking. And because the structure of allegory was common knowledge in the Middle Ages, there would have to be a brief explanation of its characteristics. One rule of allegorical construction is that when you perceive a part or two, the remaining parts must be there. That's a given. Simply put, if Cinderella is the basis, there must be a Prince Charming. So, knowing that we are dealing with zodiac figures, each description needs careful consideration to complete all the identities. Here are a few examples: a woman wearing spurs; a Monk with a golden clasp at his chin; a Friar whose eyes twinkle as do the stars. Doesn't that qualifier tell you that Chaucer loved playing this game? Each odd detail points to major stars in the constellation represented. Covert identities as heavenly figures explain how each pilgrim's distinctiveness is possible. We also recognize that they would naturally arrive simultaneously, with no confusion and no details of belongings or personal arrangements, as the surface story tells. And the covert images are the basis of the seeming equality portrayed among the pilgrims of various ranks: a worthy Knight, a nun who presides over a convent, a man who transports gum. There is a lot more to the pilgrim plan than just seeing alternate images. 
Each clue I mentioned, and many others, are carefully placed. Celestial figures are the core of the plan; they both conceal and reveal. Concealment allows the surface meaning to dominate, and what can be revealed demonstrates Chaucer's protests against oppressive authority. Some say the poet lacked high seriousness, but it simply isn't true; his serious criticisms were carefully, cleverly disguised. Organizing the explanations of medieval thinking took a while. You can imagine how encouraging it was for me to read a review in the Library Journal that said, "She writes well, making difficult ideas accessible to beginners and sharing her excitement about Chaucer studies." That was my aim exactly. The poet, with one of his fanciful moves, has a stranger rush in, join the pilgrims for a while, and then rush off again. If you're tuned to the celestial wavelength, you think: comet. His description and actions tell us that that's what he represents. They say Chaucer failed to have every pilgrim tell a story, but with the celestial pilgrims as the true basis, this is no longer the case: Chaucer did indeed get a story from each planet and constellation. Just as one example, if the Parson and Plowman together are the sign of Gemini, then one story is expected from the pair. So when the Parson tells a tale, even though the Plowman does not, the story requirement is satisfied. There are similar explanations for the other apparently missing tales. My third aim for this book was to get the manuscript ready to publish by Chaucer's six hundredth anniversary. The weekly writers group sprang into action to help me: several would each take a chapter home to read and critique, and then, when they returned the following week, I'd modify the text until precisely what I wanted to say was clear. In addition, reference librarians were particularly helpful in searching out old texts I needed. Somehow it all came together by the deadline, and book three, Chaucer's Pilgrims: The Allegory, was officially published on October 25th, 2000. The three 
books together tell the story of a literary adventure: my pursuit of a trail of clues in The Canterbury Tales. All evidence of scholarly research may be found in the notes at the back of the books, but the text itself was not written with scholars in mind; instead, the audience I'm addressing is my friends at the writers group. I reflect at the end of book three: who could have known how amazing and far-reaching the pilgrim adventure would be when it started? I found the answers to my original questions and much more. I've traveled so many byways, followed so many of the poet's clues in search of treasure. Some clues continue to elude sleuthing, but many have turned out to be pure gold. How exciting! I hope you've shared some of the excitement. In studying Chaucer, you must read his own words, the Middle English he wrote. That frightens some people off, but reading it on the page, that is, bypassing thoughts of proper pronunciation, is easier than you think. For example: "Squier, com neer, if it youre wille be, and sey somwhat of love, for certes ye konnen theron as muche as any man." "Certes," meaning certainly, and "konnen," a form of to know, can almost be guessed by context. You'll find these words in the glossary, of course; that's what it's there for. If you only want the idea of the poet's clever plots and such, modern English versions can give you that, but be forewarned: it's not Chaucer you'll be reading; it's the modern editor, who will take liberties with the text as fits his needs. Back to the real Chaucer. Here is just one all-encompassing aspect of the pilgrimage that in general is simply accepted and then ignored: the lack of specific detail. One of Chaucer's great skills is making you think he gives you all sorts of information, but he really doesn't. You can feel the liveliness of the journey until you sit back and look for details you naturally expect would be there; that's when you discover they're missing. Consider this: we have a group of thirty or so pilgrims. The number is debated because it is never 
stated precisely. They start a long trip with no description of a morning meal, problems with late sleepers, organizing the transporting of personal belongings, or arrangement of who follows whom. As they ride out, are they in single file? Then how do they hear the stories that are told? Or are they a cluster of riders? In that case, the width of the road must determine that; the contour of the cluster changes of necessity at times. There is no clue to any of this. Chaucer never presents a picture of the countryside they apparently traveled through, even though that would be easy enough for him to do. And on the way there is no mention of weather conditions or road conditions that would either create problems or perhaps add to the pleasure of the journey. They never eat any food at an inn or by purchasing supplies from a seller of victuals in a town; we should wonder why the five guildsmen arranged to have that Cook accompany them. They never stop to sleep, although it has been determined that the overland trip would take three days from London to Canterbury. Chaucer certainly knew that. Then why is he so vague? One more item, and then we'll consider the reason: when he says they've reached a village near the end of the journey, he doesn't name the village. All of this lack of information is essential for his underlying structure. It allows the second meaning, a vision of stars moving, to be valid; specifics of human needs and earthly locations would disallow such an interpretation. A planned lack of specifics also explains why, to my complete surprise, they never reach Canterbury. Geoffrey Chaucer lived through plagues and wars and other violence. He traveled far and wide across Europe, dealing with people of authority as a trusted servant of his king. He experienced fourteenth-century life to the fullest. He saw and heard matters of great importance, matters that could change the world. I believe he felt deeply about life, about injustices, and about his own salvation. These concerns are woven into the 
texture of The Canterbury Tales. I'm dedicated to keeping the name of Chaucer before the world; I make his importance an issue every chance I get. There is so much more that lies waiting to be discovered within the language of his final masterpiece. He tells us of the matters close to his heart, if we will hearken to his carefully chosen words. It is a matter, too, of courage on his part. A subtle aspect of the poet's plan is the precise destination he announces before the rest of the travelers join him: it's the shrine of Thomas à Becket, who is renowned for having had the courage to die for what his conscience dictated. Chaucer's often ambiguous vocabulary disguises a profoundly serious message, a message that, if expressed openly, would have brought his career to an end. I'm convinced that when Chaucer envisioned his multi-level plan and determined to write the tales, as his genius supplied the words and his courage supplied the will for the task, he knew he was risking his life. |
Literature_Lectures | 3_Flannery_OConnor_Wise_Blood.txt | Professor Amy Hungerford: We finished Black Boy last time, and one of the big questions coming out of my discussion of that autobiography is: how do you manage the question of context in reading a novel or an autobiography--in reading any text? And we had a very complex publishing history to think about with that text. Flannery O'Connor's work raises questions of a similar kind, but they look very different. And so, my lectures on Flannery O'Connor will highlight the methodologies that we can bring to any reading of a novel, and it will highlight the differences between different methodologies and what they allow us to see in a different text. Flannery O'Connor, as most of you probably know, is a Southern writer. She is very often assimilated to a whole group of southern writers who were working in the 1930s and '40s, the "Southern Agrarians." She was friends with a lot of the major figures of that movement, especially Allen Tate and Caroline Gordon. She lived out her life mostly in a small town called Milledgeville, Georgia. She was born in Savannah in 1925. She studied writing at the Iowa Writers Workshop. She lived in New York for a short time, but she was afflicted with lupus, a very serious illness, and she died at the age of 39 in 1964. So, she lived a pretty short life. Over the course of that life, she wrote mostly short stories, and so she is very much known for her short stories. She has a couple of novels. Of them, this is, I think, the most successful. O'Connor, you may also know, has been understood as a religious writer. She was a Catholic, and she very much made her Catholicism at the center of all the things that she said about her fictional practice. And so, we're going to see a couple of those things today. Let's look--just to begin with, if you brought your books--let's look at the cover of this book. What does this cover say to you? What does this image remind you of? 
What does it look like to you? Do you want to answer that? Student: Is it the Sacred Heart? Professor Amy Hungerford: It's the Sacred Heart, yes. It's the Sacred Heart of Jesus. In Catholic iconography of a certain kind, the figure of Christ is shown usually parting His clothes and His flesh and showing you His Sacred Heart, which is usually crowned with flame and often encircled with thorns. So it's an image of Christ the suffering godhead: the very human, fleshly person who will part His own flesh in order to connect with, in order to redeem, the believer. So right in the packaging of this novel that we have today--this cover has changed over time--nevertheless, even today, that very Catholic iconography is right on the front of the cover. And when you see Wise Blood, that title, right below the Sacred Heart, you can't help but think of: well, this blood is somehow the blood of Christ. That's the kind of blood we're talking about. It's already entered a sort of metaphorical register, religious register, in the way this book is packaged. Then, when we open up the front, we see the author's note to the second edition, and this was something O'Connor added to the novel in 1962. I just want to read that with you today. She says: Wise Blood has reached the age of ten and is still alive. My critical powers are just sufficient to determine this, and I am gratified to be able to say it. The book was written with zest, and if possible it should be read that way. It is a comic novel about a Christian malgré lui, and as such very serious. For all comic novels that are any good must be about matters of life and death. Wise Blood was written by an author congenitally innocent of theory, but one with certain preoccupations. That belief in Christ is to some a matter of life and death has been a stumbling block for readers who would prefer to think it a matter of no great consequence. 
For them, Hazel Motes's integrity lies in his trying with such vigor to get rid of the ragged figure who moves from tree to tree in the back of his mind. For the author, Hazel's integrity lies in his not being able to do so. So, right up front, we are told that Hazel Motes is a Christian in spite of himself, that this is how we are to understand this character who we will come to know. I also want to give another layer to this understanding of O'Connor as a religious writer by looking at what she said in her correspondence to one of her readers who asked her some questions, and this is on the handout that I passed around. This is a letter to a man named Ben Griffith from 1954. So she had just finished Wise Blood, and people are starting to read it, ask her questions. She was a prolific correspondent. She was very generous in her letter writing. She would write to almost anyone who wrote to her. She would write back in a substantive way. I think it's in part that, suffering from lupus, she was very much confined to her house in Georgia, and the letter writing, this kind of correspondence, was certainly a way for her to keep in contact with the world of readers and other writers and friends. So he had written to her. He was teaching writing at a local college. He's obviously been asking her about the sources of some of the images and characters and themes in Wise Blood. So, I want to point out a couple of things. This is the first full paragraph: I don't know how to cure the sourcitis, except to tell you that I can discover a good many possible sources myself for Wise Blood, but I am often embarrassed to find that I read the sources after I had written the book. I have been exposed to Wordsworth's Intimation Ode, but that is all I can say about it. I have one of those food-chopper brains that nothing comes out of the way it went in. The Oedipus business comes nearer to home. Of course, Haze Motes is not an Oedipus figure, but there are obvious resemblances. 
At the time I was writing the last of the book, I was living in Connecticut [actually very close, here at Yale, and one of the people who is still at Yale, Penny Laurens--I don't know if you've met her--she was married to Robert Fitzgerald, and she knew Flannery O'Connor]. When I was living in Connecticut with the Robert Fitzgeralds, Robert Fitzgerald translated the Theban Cycle with Dudley Fitts, and the translation of Oedipus Rex had just come out and I was much taken with it. Anyway, all I can say is I did a lot of thinking about Oedipus. This is very typical in tone for O'Connor. When she talks about education or learning--and if you read more in this letter (I won't go through the whole thing), you will see that--she's very self-deprecating. She says she has "what passes for an education in this day and age." She says that she has read a little bit of Kafka and "doesn't know what to make of him, but it makes you a bolder writer." She reads a little Henry James because she thinks that makes her a better writer, somehow, but she doesn't quite know how. There's always this veneer of innocence, or lack of learning, or lack of sophistication; so, she is presenting herself as a simple person. I think that's important, although not directly connected to her presentation of herself as a Catholic person. I think it does factor into her sense that the truth she is accessing, or the truth that she is trying to present to the world in her stories, is one that even a child might be able to understand. And that fits very comfortably within a New Testament understanding of the teaching of Christ. So, Christ is that one to whom the little children can come, and I think she cultivates that childlike sense in her self-presentation. But then there is this very explicit discussion of her Catholicism, a little further down: My background and my inclinations are both Catholic, and I think this is very apparent in the book. 
Something is usually said about Kafka in connection with Wise Blood, but I have never succeeded in making my way through The Castle or The Trial, and I wouldn't pretend to know anything about Kafka. I think reading a little of him perhaps makes you a bolder writer…" And so on. If you turn over, this is another letter to Ben Griffith written fairly shortly after this first one. She expands a little bit on this sense of her Catholicism. This is in the middle of the page: Let me assure you that no one but a Catholic could have written Wise Blood even though it is a book about a kind of Protestant saint. It reduces Protestantism to the twin ultimate absurdities of the Church without Christ or the Holy Church of Christ without Christ, which no pious Protestant would do, and of course no unbeliever or agnostic could have written it because it is entirely redemption centered in thought. Not too many people are willing to see this, and perhaps it is hard to see, because Hazel Motes is such an admirable nihilist. His nihilism leads him back to the fact of his redemption, however, which is what he would have liked so much to get away from. When you start describing the significance of a symbol like the tunnel, which recurs in the book, you immediately begin to limit it, and a symbol should go on deepening. Everything should have a wider significance. But I am a novelist, not a critic, and I can excuse myself from explication de texte on that ground. The real reason of course is laziness. There is that characteristic self-deprecation. With letters like this--which were published copiously in a beautiful edition that Sally Fitzgerald edited--with letters like this, or her frequent essays and lectures, which are collected in a book called Mystery and Manners, she was expounding a certain reading of her fiction, even while she was still writing it. And those who were close to her have picked up that understanding of her fiction and promulgated it. 
And there's a huge critical industry around Flannery O'Connor, and at the core of it is a body of criticism that finds and articulates and explains the religious meanings of her texts. With that in mind, I want to point up to the two quotations that I put on the board to start us off today: "'I like his eyes. They don't look like they see what he's looking at, but they keep on looking.'" This is Sabbath Lily Hawks. And then, a character you haven't met yet if you stopped at page 100, Onnie Jay Holy: "'I wouldn't have you believe nothing you can't feel in your own hearts.'" These two quotations seem to me a kind of rubric under which we can start to think about what it means to read this novel, and what it means to read it in the light of the religious context that O'Connor herself, critics, marketers, have built up around her work. The first quotation from Sabbath Lily of course focuses on the eyes, and it is not hard to read into Haze Motes's name that the trope of sight is going to be important. "Haze Motes." There is that famous passage in the New Testament (or is it--oh, gosh--now I'm going to forget if it's New or Old Testament; someone will correct me): "Do not try to remove the mote from your neighbor's eye before you have removed it from your own," or "lest you fail to remove the mote from your own eye." So there is this sense of occluded sight; "Haze," that haze. Somehow, something is wrong with Haze Motes's eyes, something wrong with his sight, or rather there is something important about his sight that we're going to have to unpack. But what I want to take out of Sabbath Lily's comment about Haze is this sense of what you look at. What does Haze look at and what does he see? What do we look at when we read this novel, and what do we see? Those are questions that are going to frame the two lectures that I give on this novel. The second quotation, from Onnie Jay Holy, raises the question of sentiment. 
This novel--as you will soon see, once you get to the parts where Onnie Jay Holy begins to preach--this novel is very much a critique of sentimentality. If Richard Wright's ideal response to his fiction was that, for the reader, the words would disappear, and all they would be left with is their emotional response, for O'Connor it's precisely that kind of response--to any call: be it textual, be it an act of reading, an act of audition, hearing someone preach--that kind of response is precisely not the one you are supposed to have. And so I would ask you to think about a couple of simple questions as you move through this book, and as you think about what I have to say about it. One of them could begin with a reflection like this. Would you ever want to sit down to dinner with any of the people in this novel? I see people shaking their heads. They are quite unlikable, and this is consistent pretty much across O'Connor's fiction: short, long, medium, whatever. Her characters are not very endearing. So you want to ask yourself why that's so. This is a conscious decision on her part, and you want to think about that decision. If there is any character who seems kind of endearing, at least for me, it's probably Enoch. And we'll talk a little bit more about him: not today, but in the second lecture and in section. So, with these questions in mind, I want you think about how we can see the novel and how we can think about it in the face of the interpretation that's already layered on to it. And what I want to do is now, kind of just descend in to the text, and read with you the passage when Haze first takes the Essex out for a spin: his wonderful car, the Essex. So, this is on page 73, is about where it begins, and I'm going to read through the next two or three pages. And I'll skip around in the book as things come up that I want to show you in other parts of the book. So let's think about seeing and theology and all the issues that are already on the table for us. 
I'm going to begin at the bottom of 73: "When the car was ready…."-- If you have your book, go ahead and open it up. "When the car was ready, the man and the boy stood by to watch him drive it off." (Is it the wrong page numbers? Shoot. Oh, dear. Sixty-nine. Okay. So, it's four off. Thank you for telling me. You rely on publishers and then they let you down. Okay. Does everyone have it?) When the car was ready, the man and the boy stood by to watch him drive it off. He didn't want anybody watching him because he hadn't driven a car in four or five years. The man and the boy didn't say anything while he tried to start it. They only stood there looking in at him. "I wanted this car mostly to be a house for me," he said to the man. "I ain't got any place to be." "You ain't took the brake off yet," the man said. He took the brake off and the car shot backwards because the man had left it in reverse. In a second he got it going forward and drove off crookedly past the man and boy still standing there watching. He kept going forward, thinking nothing and sweating. I just want to stop there for a minute. Haze sees the car as a kind of home. Well, how are we meant to understand the meaning of that? It has the feeling of a rare moment of explanation from Haze. He almost never explains himself to other people. Here he is accounting for his need for the car. Now, of course, O'Connor was very good at imbuing her writing with repeated symbols that grow and accrue meaning across the text. So we've already seen the trope of the house. And if you look back at 24 (oh, no--try 20; see if we can find it here.), when Haze describes--or, we sort of know through his consciousness--the story of his time in the Army, this is what we learn about how he felt there, after he's wounded: He had all the time he could want to study his soul in and assure himself that it was not there. When he was thoroughly convinced, he saw that this was something he had always known. 
The misery he had was a longing for home. It had nothing to do with Jesus. When the army finally let him go, he was pleased to think that he was still uncorrupted. All he wanted was to get back to Eastrod, Tennessee. The black Bible and his mother's glasses were still in the bottom of his duffle bag. He didn't read any book now, but he kept the Bible because it had come from home. He kept the glasses in case his vision should ever become dim. O'Connor has already put in place in the novel through that little passage the sense that the longing for home and the longing for redemption--or the resistance to redemption--these things are very close to one another. You can mistake--here we find out about Haze--you can mistake the longing for Jesus for the longing for home, or vice versa: the longing for home, for the longing of Jesus. The Bible that he carries around is important to him because it comes from home. I want to suggest to you that the fact that it's the religious book for him, for his culture, for his family, is not of course incidental to the fact that it's what reminds him of home. It's not just that you can mistake the longing for home for the longing for Jesus. You can in some ways see religion and home as conflated. And this gets to a traditional Christian notion of the believer as not being at home in the world: that the believer somehow belongs to God's kingdom, and that this is either countercultural--at odds with the general world in which he or she would find herself--or it is totally incompatible with the world in which we live. The Bible is a physical manifestation of the proximity of the spiritual and the material in this world. So what makes him feel close to home, in a way, has to make him feel close to the religion he's trying to reject. 
That conflation is part of what makes it impossible for Haze to escape the question of redemption, even if he wants to answer it in a way that's at odds with how, for example, his grandfather, the preacher, would answer it. This is why he's continually mistaken for a preacher, no matter what he does. Remember he goes in to the prostitute's house, Mrs. Watts's, and he's got a hat on, and the hat just makes him look like a preacher. There's nothing he can do. He says, "I'm not a preacher," and she says, "That's okay if you're not a preacher." It's just something that's in his body; it's physical. So the car--going back to the passage that I was talking about before with the Essex--the car (even though religion is not mentioned directly right here), the sense of home that it embodies, carries with it all that sense of unhousedness. And, because it's a moving house, it carries the sense of the wandering believer with it as well. And you get that reinforced on the very next page (if you just skip over about a page from there): "A black pickup truck turned off a side road in front of him. On the back of it, an iron bed and a chair and table were tied, and on top of them a crate of Barred Rock chickens." So other cars on the road looked like houses, mobile houses, as well. It's not just Haze's Essex that is imagined as home. So O'Connor is giving us a version of the road, and I want you to keep this in mind because of course we're going to read On the Road, and we are going to see a major road trip in Lolita, actually two of them. So the iconography of the American road is something that is going to come back to us. Well, here is our first example. This is the road of the unhoused, of the spiritually seeking, of the wandering, of the lost. People wander in search of some kind of coherent meaning. I want to now move down a little bit and observe how landscape is presented to us. 
This is after that, "since he was going very fast…": The highway was ragged with filling stations and trailer camps and roadhouses. After a while, there were stretches where red gullies dropped off on either side of the road, and behind them there were patches of field buttoned together with 666 posts. The sky leaked all over all of it, and then it began to leak in to the car. The head of a string of pigs appeared, snout-up, over the ditch, and he had to screech to a stop and watch the rear of the last pig disappear shaking into the ditch on the other side. He started the car again and went on. He had the feeling that everything he saw was a broken-off piece of some giant blank thing he had forgotten had happened to him. So what do we notice about this landscape? First of all, it's very much constructed. It's buttoned together with posts, as if someone had built it, and--what's more--these are described as 666 posts. I think this is probably a size of lumber, but you can't get away from the mythology of that number. In the Book of Revelation it's the number of the beast; it's the number of the devil. It's also a landscape that is full of pigs, wandering pigs, so if the people are wandering through this road, the pigs are equally wanderers throughout these fields. They're unconfined; they seem to cross the road at will. The sky, the world above, is really bound up with the world below. There is very little separation, even if there is a sense that the sky is impinging on the earth and not the other way around. "The sky leaked over all of it." It's really, in a sense, the physical image is of rain; it's raining, so it's leaking all over it. But there's more than that. There is this sense of the concerns of the sky somehow, the concerns of the above; the concerns of the transcendent are seeping their way in to the concerns of the material world below. 
And then you get that sort of lyrical moment of interpretation: "He had the feeling that everything he saw was a broken-off piece of some giant blank thing he had forgotten had happened to him." Now this is where the omniscient narrator comes in quite forcefully, and gives us something to work on as we analyze Haze and we think about who he is as a character and where he finds himself. This connects I think with a whole host of other passages that have to do with nothingness. And one of them is right above on that page, and I read it a little earlier: "thinking nothing and sweating." It's as if "thinking nothing" is not a passive activity, but an active one. So that, to think nothing is something you have to work at; it's something that you can be preoccupied with. And similarly, if you look back at 37--this is again a description of landscape--you can see this connection between (or, well, somewhat of a disconnection between) the above and the below, another description of sky. This is Hazel walking in Taulkinham: "The black sky was underpinned with long, silver streaks" (this is the very beginning of Chapter 3 if you're trying to find it): …that looked like scaffolding and depth on depth behind it were thousands of stars that all seemed to be moving very slowly, as if they were about some vast construction work that involved the whole order of the universe and would take all time to complete. No one was paying any attention to the sky. Here that omniscient narrator, as when that narrator looks in to Haze's mind, offers you a reading of the sky and its separation from the minds of the characters that suggests, or makes you look for, kinds of structure. Here, it's the construction work; she actually uses that word. But you get scaffolding; you get depth or perspective: counting thousands of stars, "moving…as if they were about some vast work that involved the whole order of the universe." 
It vaults the very concrete materiality, the physicality, of these characters and their circumstances. It vaults that discussion into a much larger, metaphysical, transcendent context, the whole order of the universe. It's moments like these when that omniscient narrator lives up to its name, that sense of omniscience that we might associate with God. Another example is at the very opening of the book: The train was racing through treetops that fell away at intervals and showed the sun standing very red on the edge of the farthest woods. Nearer, the plowed fields curved and faded, and the few hogs nosing in the furrows looked like large spotted stones. Mrs. Wally B. Hitchcock, who was facing Motes in the section, said that she thought the early evening like this was the prettiest time of day, and she asked him if he didn't think so too. "Pretty" is not exactly the word that comes to mind--at least not to my mind--when I read this. It's more like "heavy" or "saturated," and there are, again, pigs running around. Again there is a biblical iconography behind this. There are two instances that come to my mind. When demons are cast out by the apostles and sent into a herd of pigs, and the pigs go running off a cliff and die; that's one image. Another is the admonition not to throw your pearls before swine, not to preach to those who can't hear, or won't be perceptive. So these moments of landscape description offer up that consistently Christian-inflected theory of the universe, that sense of transcendence as structure, as something that's moving inexorably, that will take all time to complete. It has a project; it has a teleology. So, that's present in all of these moments, but--equally present--I want to get back to this sense of blankness. There is a vagueness to this language that I think is quite calculated, and it relates, in Haze's case, to his determination to not be converted to evil, but to nothing. 
When he's in the army, he says--he decides--he can get rid of Jesus by converting not to evil, but to nothing, to believe in nothing. So what O'Connor does, is she presents a sense of the world imbued with structure and meaning, but a structure and meaning that looks essentially blank. And I think the task of the novel is to fill that structure in. The last thing I want to point out, in this passage from here to the end of the chapter, is the way that Haze's senses are described. We already talked a little bit about his name and the occlusion of sight. The trope of sight is obviously extremely important here. We have the blind preacher. There are more things, which I won't reveal, that happen at the end of the novel to do with this. If you haven't read, I won't give it away. But here, there are simpler examples: when the truck pulls up in front of Haze and starts moving very slowly, "Haze started pounding his horn, and he had hit it three times before he realized it didn't make any sound." He keeps doing this. When he comes to the roadside sign, "Woe to the blasphemer and whore monger. Will hell swallow you up?" it says "The pickup truck slowed even more, as if it were reading the sign, and Haze pounded his empty horn. He beat on it and beat on it but it didn't make any sound." He doesn't at first hear the horn fail to blow, and then later, when a truck pulls up behind him, he fails to hear a horn that does blow: He was looking at the sign, and he didn't hear the horn. An oil truck as long as a railroad car was behind him. In a second, a red, square face was at his car window. It watched the back of his neck and hat for a minute, and then a hand came in and sat on his shoulder. The driver's expression and his hand stayed exactly the way they were, as if he didn't hear very well. These two characters are as if there is a wall between them, a wall of foam. They can't hear each other. They're insulated from understanding what the other is preoccupied with. 
In Haze's case, he does this over and over and over again. And the most pitiful example of it is on 57. Poor Enoch! I feel so bad for him in this passage. Enoch is trying to hang out with Haze. This is on the bottom of 56, probably your 52. Haze is trying to get rid of Enoch: "Listen," Haze said roughly, "I got business of my own. I seen all of you I want." He began walking very fast. Enoch kept skipping steps to keep up. "I been here two months," he said, "and I don't know nobody. People ain't friendly here. I got me a room and there ain't never nobody in it but me. My daddy said I had to come. I would never have come but he made me. I think I seen you somewheres before. You ain't from Stockwell, are you?" "No." "Melsey?" "No." "Sawmill set there- set up there once," Enoch said. "Looked like you had kind of a familiar face." They walked on without saying anything until they got to the main street again. It was almost deserted. "Goodbye," Haze said. "I'm going thisaway too," Enoch said in a sullen voice. On the left there was a movie house where the electric bill was being changed. "[And then I'm going to skip down.] "My daddy made me come," he said in a cracked voice. Haze looked at him and saw he was crying, his face seamed and wet and a purple-pink color. "I ain't but 18 years old," he cried, "and he made me come and I don't know nobody. Nobody here'll have nothin' to do with nobody else. They ain't friendly. He done gone off with a woman and made me come but she ain't going to stay for long." Okay, and so on and so on. Poor Enoch! Does Haze care? No; not at all. "Haze looked straight ahead, with his face set." Poor Enoch. Nothing can penetrate Haze's imperviousness to other human beings. If Haze is busy looking at something, what he's looking at is manifestly not the person in front of him. He can't hear major elements of the soundscape: the truck horn behind him. He can't hear his own horn, whether it blows or not. He can't hear the voices of other people. 
What he sees is a mystery. As Sabbath Lily says, "his eyes, they don't look like they see what he's looking at." What is he looking at, then? I think we're meant to understand that he is so focused on the question of redemption that he fails to see anything else; he fails to see anyone else in his preoccupation with that problem. Now I want to switch gears, just for the last couple minutes, and ask you: what do you see when you read this novel? And I'm going to suggest to you something to think about. I see body parts. When I read this novel, I see a lot of dismembered body parts. What do I mean by that? Well, let's take a look. On page 32 (try 28; see if you can find it), this is Haze coming to the house of Leora Watts: "He went up to the front porch and put his eye to a convenient crack in the shade and found himself looking directly at a large, white knee." And what's she doing? She's cutting her toenails. "Mrs. Watts was sitting alone in a white iron bed cutting her toenails with a large pair of scissors. She was a big woman with very yellow hair and white skin that glistened with a greasy preparation. She had on a pink nightgown that would have better fit a smaller figure." That large, white knee: the way this narration allows us to see through Haze's eyes begins to take the whole body apart, so what he sees is not Mrs. Watts; he sees a large, white knee. We saw a version of this also in the passage I was reading just before, where "a hand" comes in the window and rests--"lands"--on Haze's shoulder. "A square, red face." And then these things, these body parts, are then referred to with the pronoun "it"; "it," the hand, did this or that. Take a look at page 18. This is Mrs. Hitchcock in the train; it's Haze bumping into Mrs. Hitchcock: Going around the corner, he ran into something heavy and pink. It gasped and muttered, "Clumsy." It was Mrs. Hitchcock in a pink wrapper with her hair in knots around her head. 
She looked at him with her eyes squinted nearly shut. The knobs framed her face like dark toadstools. She tried to get past him, and he tried to let her, but they were both moving the same way each time. Her face became purplish except for little, white marks over it that didn't heat up. It's as if she's rotting; there are mushrooms growing on her, figurative mushrooms growing on her head. Her face is purple except for the white marks. The white marks are little scars, acne scars perhaps. She is a sort of mass of flesh. As with Mrs. Watts, that pink wrapper--actually two pink wrappers, too tight on their bodies--suggests the excess of their corporeality; they're big hunks of flesh. On 62, we get an account of Haze's childhood sin. He goes into the freak show at the fair, and he joins the crowd where his father also is. "They were looking down into a lowered place, where something white was lying, squirming a little, in a box lined with black cloth. For a second he thought it was a skinned animal, and then he saw it was a woman." On 15 (I'm going to skip back to the train; I'm just going to rack these up for you, and then we'll think about them), this is Haze waiting to be seated in the dining car of the train: Haze hesitated and saw the hand jerk again [the hand of the steward]. He lurched up the aisle, falling against two tables on the way and getting his hand wet in somebody's coffee. The steward placed him with three youngish women dressed like parrots. Their hands were resting on the table, red speared at the tips. He sat and looked in front of him--[I'm skipping down a little bit], glum and intense, at the neck of the woman across from him. At intervals her hand holding the cigarette would pass the spot on her neck. It would go out of his sight, and then it would pass again going back down to the table. What do we make of these odd moments of description? Why all these body parts hanging around? Why this sense of disgusting, excessive body matter? 
It's often women who appear in this guise, but it's not always women. What I want to suggest to you is that, when we actually look at the sentences on the page, when we look at the words that O'Connor chose in the moments of the narration, we see something that becomes more complicated than the "Flannery O'Connor is a Catholic writer"; "Haze Motes is a Christian malgré lui." That's a kind of focus. If we think about this, analogize it to how Haze looks, it's a way of looking at O'Connor's fiction that sees nothing but the theology behind it, that sees nothing but the Christian iconography. And I want to ask: what is it that we don't see, when that's all we see? What do we miss? I've begun to point out a few things that I think we miss: the fragmentation of bodies. Why are bodies consistently fragmented--not just here--everywhere in O'Connor's fiction? People are always losing a wooden leg and having parts of their limbs fall off. It's very hard to keep a body together in O'Connor, hard to keep body and soul…well, I won't get in to that. So why is that? What kind of methodology for reading would allow us to have something to say about that? Is it something we need to have something to say about? Is it in the same register of importance in our reading as some of these more theological, structural considerations that have been offered to us in her letters, in her preface, and in the very overt symbology of the landscape scenes, of these other scenes that I was reading to you today, in that image of the unhoused believer trying to find a home in an alien world? So, in my next lecture, what I'm going to do is pretty much contradict most of what I said today. I'm going to set aside theology as the lens through which I read. And, if you felt you were convinced by my reading of that iconography in these passages, then you want to think about why that's convincing. 
You want to think about how much attention and primacy we, as readers, should give to an author's statements about what her work means. Maybe you want to say, "You can't argue with that; we have to accept that. That's really what the writer intends to say, and that's what we should see, and that's what we should strive to understand." Well, I'm going to offer you two different ways--actually more like three different ways--to look at the novel in the second lecture. So, finish the novel for next Wednesday, and we will go from there. |
Literature_Lectures | Harvard_ENGL_E129_Lecture_1_Introduction.txt | Let me get a sense, if I may, of how many of you have studied Shakespeare before. How many--may I even ask this--how many of you have studied Shakespeare with me before? Okay. And how many of you are brand new at the adult level--that is, you haven't studied Shakespeare in college, or more recently than college--for how many of you is this a new thing? Okay, great. Shakespeare is an equal opportunity author. It is both easy and hard, always, to study Shakespeare: it's hard for us and easy for you. That is to say, you keep finding new things. I never get to the bottom of a Shakespeare play, or a Shakespeare passage, or a phrase in Shakespeare, and that's what keeps me going in this. This is my index of what makes great literature. I'm quite nervous about evaluative words like "great," but it seems to me that one of the indices of greatness, if we're going to have that, or of the high value that we put on literature, is that as you return to it again and again it continues to give you things that you didn't see before. This is also, I should say, one of the great values of literary criticism and literary analysis: it continues to open up new paths. Just as you think you have gotten everything you possibly can out of a passage or a scene or a character or a kind of character, a new approach--and we'll discuss some of those during the course of the semester--may open up things that one has not seen, because one has been looking through a lens of this kind, and we want to look through a lens of that kind. Think of it as a set of filters that you might be putting in front of your eyes to show you the image through green or through red or indeed through rose color. So we'll talk about Shakespeare's plays, and we'll talk about approaches to the plays. And when I say "we," this is not the royal we; this is the actual we of those of us who are in this room.
I have been teaching Shakespeare at Harvard for more years than I like to admit. For many years I taught a lecture course in Sanders Theater, which was a two-semester Shakespeare course; sometimes people took the whole year of it, and sometimes they took one semester of it. I wondered, years ago, how I was ever going to bring myself to stop teaching that course, which was a very wonderful course for me to teach--very exhilarating--and which kept showing me new things about Shakespeare and about the plays. And then a publisher came to me and said: how about writing a book about Shakespeare that does that thing that many people had been telling the publisher was wanted, that is to say, provides a chapter on every play? And I did that, and that's what this book, Shakespeare After All, is. Once I published the book, I felt I could no longer teach that course, because what I would have said ex cathedra was now in the book. So this course is deliberately called Shakespeare After All--it could be called "Shakespeare after Shakespeare After All," I think--because it builds what I have to say in writing into the experience that I hope, and that we ask, that you bring to these sessions every week. So each week we ask you to read a play of Shakespeare and also my chapter on that play, and in my remarks I may go over some of the things that I've said there, or I may not. You should also understand that this, like any piece of literary criticism or literary theory, is interpretation. It is my view of things. It is, I hope, supported by evidence that I bring to bear on the arguments that I put forward, but it is not the only truth, and you may think that in some places I'm actually quite far from the truth, or that I'm making an argument with which you want to take issue, or where you need me to explain further in order to persuade you of something. So we will hope that in the discussion part of our time together you'll feel free to
pose questions both about the plays and also about my interpretation of the plays, or indeed about any other interpretation that you may have read or experienced, or any production that you have seen, because certainly every production is an interpretation: every actor, every director, every audience finds something new in these plays. At this very moment, I think, at the Brooklyn Academy of Music, Ian McKellen is doing his King Lear, which got a lot of very interesting reviews; you may have seen them in the New York Times and other major publications. This is a major Shakespearean actor. He was here many years ago on an American tour, and he and I actually did a radio program together on Macbeth--he was then appearing in Macbeth--and we did a radio program together at WB discussing what it was like for him to perform Macbeth. I wish I could do that with him about his King Lear. He has certainly been praised very much for bringing his experience as a lifelong actor, as well as a lifelong person, into this production of King Lear, which is otherwise, so far as I can tell--I have not been able to get a ticket to it--a relatively traditional production of the play. That is to say, it is set on a heath; he is wearing rags; it is not transformed into some other time period or some other place, though I gather the fool in this version of the play is a theatrical entertainer of some kind, and is thought of as somebody who actually stages plays of his own. Closer to home, a production that you will certainly be able to see, and that I might well invite you to see, is a production by the Actors' Shakespeare Project, which is a Shakespeare theater in town here, which did last year a very brilliant Titus Andronicus and is doing Macbeth in October. And they've asked me actually to moderate the discussion about the production, and in this case it's a relatively untraditional
production again I've not seen any rehearsals of it yet but every character in the play is played by a woman so this is an all female MC Beth and uh it will interrogate issues of women and power and issues of gender as performance and a whole range of other things but Titus andronicus was done in what would have been Shakespeare's own the the kind of production of Shakespeare's own time that is say all the parts were played by men uh you may know that in Shakespeare's time uh in England no women appeared upon the public stage so that every female character in Shakespeare including those that we have come to rely upon as icons of Womanhood or femininity whether it's Juliet or lady McBeth or Cleopatra or cresa or Porsche that all of these characters are cast for and played by men in Shakespeare's day which is to say that gender uh is from the beginning within Shakespeare an invented and performed thing not a natural or merely imitative thing uh and what's fascinating about this to me is that so many of our own stereotypes about what women are like a Juliet a lady MCB Beth a Cleopatra uh come from these characters who were initially performed by played by and designed for male actors uh but the the actors shakespear project is turning the tables on this question and producing an entirely female MC Beth and the discussion which will be happening on think the production begins on October 18th and the discussion which I'm asked to moderate is on a Tuesday night October 30th and uh there will be people who are directors and and people who are Scholars of psychology and I I I'll once I get the list of who they are I'll be happy to share this with you but I we we absolutely encourage you to see as much Shakespeare as you can how many of you actually acted in Productions of Shakespeare recently fairly recently good um directed Productions of Shakespeare right uh used Shakespeare in classroom or other teaching situations like what uh what no in what kind of situation oh high 
school high school teaching okay right uh you because shakes the reason I ask is that Shakespeare these days is being used in Business Schools a great deal as kind of case studies for problem solving of one kind or another uh that the plays are so available as kind of the myths of our time stories that people are thought to be quite familiar with that they're often being used to exemplify hard choices or leadership issues or racial discrimination or some other Uh current day issue which is localized in a fictional situation in this case not a you know Jane went to the market and encountered farmer X but a story that would have to do with porsa going to the the the courtroom and encountering and Antonio and so so um the uh Shakespeare has moved into modern culture and into popular culture in this way uh so we would encourage you to experience your Shakespeare our Shakespeare you know the Germans used to say unzer Shakespeare the Shakespeare is a initially German writer uh you should you should make Shakespeare your own in fact the very beginning of this book of mine this Shakespeare after all uh Begins by claiming that we all create every age creates its own Shakespeare that Shakespeare is like a mirror or like a portrait whose eyes follow you around the room uh that people recognize themselves in Shakespearean characters and situations and phrases that we think through Shakespeare that we use the phrases that we encounter in these plays uh to express the situations of common life uh and that Shakespeare is already inside us whether you have ever studied Shakespeare formally or informally before you will come upon things there a famous story about a a probably apocryphal story about a woman who went to the theater and saw a production of Hamlet and walked out at the interval and said that it was all made of quotations and that's because supposedly she had encountered it as you know as sets of quotes beforehand but I think that you'll find uh even in the plays with 
which you may be less familiar that you're that's where that comes from uh uh the uh let me think what what would be a uh Lord uh I'm trying to think of a line from trius and cres that would function in this way and I had one on the top of my tongue I'll I'll think of one but anyway you you you'll find that there there are lines that that that function precisely in this way so this this this course is is is a completely self-contained one you need not have read the earlier play of Shakespeare in order to read these um I I teach the plays as early and late by preference rather than by genre rather than by teaching a course on the comedies and the histories or the tragedies or the romances because it's so interesting to me to see what the playwright and his company seem to have been doing and engaging at a given moment in time that Echoes from play to play uh at a given time period seemed to me absolutely as interesting as anything that we could say was determined by genre and I'll say some things in just a second about these these Shakespearean genres and the degree to which they do or don't actually function but but first if if you just have a look with me at the syllabus I just want to want to indicate to you something about what it is that we are doing and then I'll say something about about what what this actually contains um so today we're going to talk in general about Shakespeare about his times about the the the start Point really of these so-called later plays uh about uh in the second half of this time period we'll look some closely at a couple of passages in order to to come to grips with Shakespearean language and see how rich it is uh and then we plunge into uh Tois and CA and measure for measure again let me take a kind of census here how many of you have read or seen from and Cresta recently in The Last 5 Years great okay um measure for measure fantastic okay U aell Because by the time we get to aell we're getting to the plays that for a long time in 
the last century the 20th century were thought of as you know the great tragedies of Shakespeare uh a fellow King Le Hamlet McBeth Hamlet is written in 1599 1600 uh and does not fall into the later plays category we'll be talking about Hamlet from time to time uh but these other plays the the great tragedies so-called uh became the centerpiece of the study of Shakespeare really partly as a result of the famous Oxford lectures of the critic AC Bradley uh who thought of these as Shakespeare's mour tragedies and the Bradley plays so-called became for the 20th century the kind of measures of Shakespearean great and I just want to say as a kind of parenthesis around this that there have been times in the history the long history of Shakespearean reception when the comedies were thought of as as more important than the tragedies when the romances were which are the latest plays that we'll we come to in a moment and that will come at the end of our semester we thought of as the culmination of his work uh that the the the the different kinds of plays that Shakespeare writes have been valued by different time periods and different places at different at different levels so that that we should not automatically assume because we are ourselves The Heirs of the 20th century that these Great Central tragedies are the best the strongest the mature the the the culmination of his work on the other hand they are fabulous plays and uh they are certainly the work of a playwright working and and a and a a theatrical manager working at the height of his powers uh but I I I will want to try try to demonstrate to you week after week how remarkable each of these plays is on its own and in conversation with the others and with the times uh so so we're going to start um with with these plays that Mark really the Turning Point uh from really from the late 16th century to the early 17th century uh the moving into in historical terms the time when King James actually came to the throne 
replacing Queen Elizabeth in now uh again one can make too much of the break between the Elizabethan plays and the jackan plays uh but once upon a time again a course like this would have been called Elizabethan and jacoban Shakespeare or the plays by by authors who were not Shakespeare writing in the same period would have been called Elizabethan and jacoban drama uh and and this acknowledges the degree to which uh Shakespeare and his company wrote and performed Med in a patronage culture in a culture in which the Monarch and or a a set of leading nobles were very determinative in what was approved in what was was was performed what was disapproved um the this this is a moment when the Renaissance theater is coming into being uh the uh prior to the time of Shakespeare and his contemporaries whether it's Christopher Marlo in the 1600s or Ben Johnson in the early 1700s whether it's marsten or Middleton or Haywood uh but prior to this time uh the theater is very much more related to either the church or to morality uh the in in the north of England in particular there are what are called Miracle plays and morality plays being performed which are either depict scenes from The Bible uh and with a kind of theological function that is to say starting at the beginning of Genesis and moving through uh toward Revelation uh with the the idea now again remember that we're dealing in a culture in which not everybody can read this is not necessarily a literate culture so you see performed scenes from The Bible uh and this is one way that you understand how how uh this founding text functions and uh these performances indeed were personalized by the development among other things of certain kinds of theatrical types so that Mrs Noah uh became a kind of type of the nagging housewife and a type that's still recognizable in situation comedies today and that has its more sophisticated mature versions in a play let us say like The Comedy of Errors of Shakespeare in which there's a 
husband and a wife in which the wife keeps saying why aren't you home for dinner and and and much a leftover of this idea of the sort of the the man being being controlled by the wife and this this this thematic will show up let us say even in uh mcbath in slightly different way but but the so these types developed uh King Herod the the the king of the Jews who who orders the slaughter of the firstborn is thought of as a kind of ranting Tyrant when hamlet in his play cautions the players not to out Herod Herod not not to to uh to overact to throw their hands around to gesture too much and so forth he is making reference to this figure of Herod from the old uh Miracle plays which again were still in the memory of some of the populace um these plays continue to be performed together with the morality plays that is plays like every man plays that where where characters have symbolic names uh like Good Deeds or every man or the four wits or the the seven seven deadly sins uh and and characters like that too are carried over into the Shakespearean drama in for example uh the very famous figure called the vice VI I the embodiment of a certain um resistance to happiness resistance to to to personal success a trickster figure and a figure indeed of viciousness uh and the vice figure uh is mentioned explicitly in a play like Richard iiii it's mentioned in by Fall staff in Henry ivth part one these are both early plays and we'll see that in a play like aell there are in uh Yago some vestiges of this stock character of the vice who is both comic and uh uh antisocial often witty at the expense of others often confiding to the audience in a set of asides and that not only Yago but the figure of Edmund and King leer will carry some of those aspects too so there's a kind of residual cultural memory here it's not necessary necessarily the case that anyone in The Shakespearean audience has ever seen any of these Miracle or morality plays but they are the literary Heritage or one of 
the many literary heritages that are being carried through and we'll see that there are other heritages for example when we come to trus and cresa you'll see you'll see that the recovery of the ancient Greek and Roman texts uh makes a tremendous difference to how it is that Elizabethan and jacoban people and writers imagin themselves that they saw themselves as successor cultures we talk about the American Century this was in a way the Renaissance English Century that thought it was recovering and recapturing the glor stories of ancient Greece and Rome and so they measured themselves to a certain extent against these older cultures uh the in the in the late 1500s Sir Thomas North uh translates plutarch's parallel lives of the ancient Greeks and Romans and plutarch's lives was a book that that compared an ancient Greek hero with an ancient Roman hero these Pluto's lives in the North translation are one of the many sources for Shakespeare's plays and especially for his plays that that speak about ancient Greek Greece or ancient Rome uh but we have to also Imagine uh the the the writers and the audiences of Shakespeare's day as putting a third comparison figure next to the Greek and the Roman and that was the figure of their own time the figure of the Renaissance how is a Renaissance king or queen or Monarch or Duke comp comparable to br brus or to Mark Anthony uh or to timman of Athens uh and so so this comparative mode uh this mode that is both referential and comparative is one of the things that makes it useful to know something about the intellectual background and the theatrical background of these plays at the same time there's a radical difference between those plays and Shakespeare's plays in that The Shakespearean theater or I should say the early modern theater the theater of the Renaissance is a commercial theater being performed in a non-sacred place in a purpose-built place in a in theaters built to be theaters the first one that Shakespeare's company 
played in was called the theater uh and it was built in 1576 and its timers were torn down at the end of the 16th century to build the sh the the theater that you probably most associate with Shakespeare what's the name of that theater The Globe the Globe Theater uh and it is built out of in part the the the disassembled pieces of this earlier theater uh how many of you have any of you been to the globe the reconstructed globe in England on on the bank side of the temps uh very faithful reconstruction uh with the company getting increasingly good I've seen some good things and some not so such good things there but if you've been there you will know that the uh acting is is played in the out ofd doors there's no roof uh there and there's no electricity in 16th and 17th century England either so the uh plays are per performed in the afternoon uh and there it's a very Democratic kind of theater space uh in that the the below the stage stages on a little platform below the stage is an area in which people stand and do not sit and they're called the Groundlings they could bring their own stool but there's no place for provided for them to sit and then there are tears of seats around the sides in which for a little bit more money if you were a person with more to spend uh you could have a seat uh you could even if you were a person with good deal of money sit in a curtained off space and have a sexual diance while you were watching the plays uh I want you to think about this as being as much like a modern sporting event as like a sacred theatrical event because food is being sold uh Delian is of various kinds canoodling is going on uh Behind these these curtains uh people shout out and speak back to the actors uh it's it's a space in which the theater is truly interactive and it has there's no front curtain uh so that there is a uh a an intimacy a potential intimacy between the actor and the audience uh even though the Globe Theater we think could have held about two 
th000 people so we're not talking about a space like this talking about a space in which uh there could be lots and lots and lots of people uh people of different social stations women as well as men apprentices as well as kings and queens and and noblemen all sitting in this space that became a democratic space wrong place take care um and uh the the the when an actor comes out to speak in Soliloquy or you may think of it as monologue Soliloquy the single voice speaking uh he or she is speaking intimately to the audience in the theater uh and whether what's happening in that Soliloquy is confiding one's inner thoughts or confiding a plot or a plan uh that is to say whether it's Hamlet telling you that to be not to be that is the question where you think oh to be or not that is the question you know it's a universal question or whether it's Yago as you'll see coming out and saying I have it it is engendered hell and night will bring this monsterous plot to the world's light uh bring making you his confident making you guilty in the theater because you know what's going to go on that a fellow doesn't know and you were therefore ganging up on him together with Yago because you already know the plan and you because of the convention of the theater cannot warn AOW what's going on uh this device of the Soliloquy was a way of creating among other things a kind of psychological inside for a character shows you how the inner workings of somebody's mind would would function shows you their inner thoughts a figure like yo or a figure Like Richard III or any other quotes villain has got to have two faces has got to be able to make nice on stage while making very much not nice in his own mind or her own mind and sharing that not niceness with you so it creates an inside and an outside of a character and we'll see when we come to look at some passages that it's not only the Soliloquy but the language itself that creates thickness Dimension uh a roundness of these characters 
that these are not merely allegorical characters though they partake of allegory they are not merely symbolic characters though they have become for us deeply symbolic of ambition of self-doubt of true love of a whole variety of things because they're Shakespeare uh but these are characters who have what we think of as personality and personality here is if I can use a technical term here from from from literary study a back formation that is you develop a sense that somebody has a personality from the the contradictions that you sense within their personal from the resistances that they should I do this or should I do that uh if they have no self-doubt if they have no internal dialogue then it's much harder to see uh that they are not merely cardboard cutouts but almost every Shakespearean character from the lowest to the highest has this kind of Dimension has a a a hidden story as well as a present story and this will all be disclosed in language whether it's the language of the individual or the language of dialogue because of course we're dealing here with theater not with lyric poetry and not with novels so that uh every character could represent something that Shakespeare whoever that was uh thought or didn't think every utterance made by a character in these plays has its own validity there is no space and you'll see this very explicitly when we come to Tois and cresa which is a play very much built on dialogues between people somebody says let's go left the other one says let's go right and so forth and there's no way that you can say Shakespeare's on the side of those who are saying let's go right uh and he's against those who are saying let's go left these dialogues are meant to make you understand that there are many sides to these questions and whether the sides are moral or ethical or aesthetic or romantic or erotic or political uh the the the the dialogue is really what you get you never get we're never going to get to a point where say aha 
Shakespeare believes this uh we cannot know what Shakespeare believes and so one of the things that I want want you to sort of put a big mental X through uh as we we come to look at these plays and even as we come to talk today is the question of trying to read Shakespeare's mind what did Shakespeare think what did he have in mind when he did this I can tell you and and Mel and Larry can tell you and we can all guess and hypothesize about what would have been likely in the period it would have been unlikely that he' ever seen a bicycle or been in an airplane I mean there some things that we can say he probably wasn't thinking about because they would be anachronistic which doesn't mean that they wouldn't be we couldn't you know today have as Peter sers famously did a king leer uh at at Adam's house that were which was what was it Cadillac on the stage uh now we I think we can probably say fairly confidently that he wasn't thinking about that Shakespeare didn't have General Motors in mind uh but that that that something anachronistic can function extremely well within a Shakespearean play in fact the plays themselves are deeply anachronistic there's a striking clock in julus Caesar at a time when they were not striking clocks in ancient Rome and so forth and what in fact this business of anachronism does is to show you something about the modernity of Shakespeare again that Shakespeare is always in the present time as well as in an earlier time so no matter what the time period or the nature in which a play is set if it's ancient Greece ancient Rome um medieval England medieval Scotland uh it's set in that time period it is written and initially performed in Shakespeare's own time the late 16th or the early 17th century and it is being read by us and or performed or seen by us in the present day and don't ever erase the present day you cannot make yourself into an Elizabethan or jacoban person you cannot put yourself into a purifying time machine and go back and get 
rid of all your modernity you can learn a lot about earlier periods but you're still going to be thinking about them from from your perspective as a person of today and this is a plus rather than a minus the authentic Shakespeare is a moving Target the authentic Shakespeare is not only the Shakespeare of the early modern period but also the Shakespeare that you experience so on the one hand we want to to be cautious about about certain kinds of anachronisms on the other hand we want to allow interpretively the function of anachronism creatively here but we're not going to be able to read Shakespeare's mind we're not going to be able to uh it's not I should say that it's prob it's not not not profitable uh for us to try to speculate too much about whether he liked blondes or brunettes for example did the dark Lady of the sonnets represent his own personal taste or did he prefer you know Fair ladies or Fair gentlemen uh I mean these are interesting questions these are questions that belong to the world of speculation and biography uh but in fact Shakespeare's plays are full of both blondes and brunettes and their Chief function is to be notional opposites or alternatives to one another it's not about blondness and it's not about brunette in the plays at least it's about an interplay between tall and short or blonde and brunette or old and young or or or uh Greek or or Trojan it's it's it's about interplays and about dialogues and conversations so uh the The Shakespearean drama as we will be entering into it is a Shakespearean drama that has been evolved on the public stage in England for decades already that this is a moment of high theatrical ferment think about as an analogy about the early days of Hollywood for example the early days of film making when it was really exciting to be creating a new medium and seeing what it could do and changing what the medium could do adding sound to film adding plot to film uh adding fiction to film adding color to film so also 
for these early modern writers uh they had a medium the theater uh that they were able to make do lots of different things and part of it is the discovery of how to use language character costume the physical stage uh as a way of of stretching the medium making the medium the theater do something new and so there will always be something self-referential about these plays something meta theatrical iCal as we say theater about theater some some little moment in which people say oh this is just like being in a play or or uh as an actor said walking on the stage or MC Beth MC Beth says you know the uh like a poor player who strs shuts and Frets his hour upon the stage that that there'll always be some gesture some acknowledgment that this is a medium that it's talking about its own own possibilities whether it's talking about language or about character or about costume or about disguise or about impersonation there's always going to be this meta theatrical referential element within the plays this is not the the the reductio of the plays this is not the only thing that they are about but they are about the medium just as they are about the narrative and the characters and the language uh and so by the time sh Shakespeare I should say uh is is part of an acting and Theater Company He Is We we we know an actor as well as a theatrical manager he he belonged to a company that actually made more money than any other theater company but there are a lot of startups a lot of companies that are starting up in this period and they're all called things like the Lord Admirals men or the Lord Chamberlain's men or Lord Strange's men uh they're all called somebody or others men because of a statute uh promulgated under the time time of Queen Elizabeth against what are called masterless men in this culture that is marauding bands of people who were homeless and hungry and and and wandering about England uh and of whom the state was frightened that that they the so the idea that you 
didn't want mobs as use an anachronistic word but that you didn't want masterless men wandering about England got in the way of the the way theater was practiced in this period which is to say that these were traveling players very of often they moved especially in the summer months from great house to great house they performed uh not only in the in the state theaters but also in private homes and especially when there was plague in England which there was frequently especially in the warmer months uh if there was plague for too many days the theaters were s shut down by the state and in order for the players to make any money they had indeed to go traveling so traveling groups of male actors were masterless men unless they were in fact had a master and so if you had a patron of your company Lord strange as your Patron or the Lord Admiral as your Patron you were not Lord you were not masterless you belong to a acting company and you could be legitimate that's why this this this these so and so's men um functions here as a kind of of cover uh to allow these acting companies to function this is kind of early capitalism this is the beginning of you have a product or a service to s and Shakespeare's company which began as the Lord Chamberlain's company and then when King James came to the throne was adopted by the king who made made himself their Patron and became The Kingsmen in 16003 uh this was the most successful company of many successful companies in Shakespeare in in in early modern England uh and and the plays that they wrote and the plays that they performed uh were not it's not like cats it's it's not like something runs for 400 performances 10 is a good number of performances in this period so they're rehearsing they're learning a new play in the morning they're performing in the afternoon they're may be writing a play at night when they're not drinking uh they're they're uh they're and the plays are performed in Repertory the plays are performed not a 
solid run of one thing, because again you've got an audience that's before television, before radio; there are very few books around. This is a major entertainment — a kind of entertainment that, along the banks of the Thames, functioned side by side with bear-baiting. I mean, it's Michael Vick, you know — it's all kinds of animal things: there are dogs, there's fighting, there are all kinds of displays. It's a much more robust era than our own; public hangings were among the most popular events to be attended by audiences. So this is a kind of entertainment that has a major social function, a major collective social function. And just to give you an idea, an apprentice's daily wage would be about the same amount of money as a ticket to the theater. So they are writing and performing and inventing and being rivalrous with one another, and the theater flourishes under Queen Elizabeth, who is very interested in it. The early plays very often have some glancing reference to her; they're full of empowered young women who become strong. And then King James comes to the throne in 1603, and it's about this time — actually a little earlier than that — that we're going to start; it's about this time that our run of plays begins. And James was very different from Elizabeth. James is the son of Mary, Queen of Scots, who was Elizabeth's great rival and whom Elizabeth had beheaded. James was King of Scotland — James VI of Scotland. And since Elizabeth is the Virgin Queen, she does not have a husband and she does not have any heirs of her own body; she kept control of the state for a long time partly because she did not have to share her power with a man and did not have children who would succeed her. And so James is her heir, and he inherits the throne from her in 1603, and the plays that we're going to be looking at are all plays — or mostly plays — that are produced in the time of James. Again, one can overstress this. When we come to Macbeth we'll talk about James's interest in witches and witchcraft, and the degree to which writing a play that was about witchcraft was certainly a gesture that would have been of great interest to the king. And when we come to Measure for Measure I can talk to you even about things that people have said about the Duke in Measure for Measure — the "duke of dark corners," as he's called — and how he seemed to some critics, I should say, to resemble King James in his penchant for spying from afar, having intelligence brought to him of what was going on. James was a very peculiar guy in lots of ways. He was a scholar; he was also very interested in the court masque, in a certain kind of court performance. He had very strong feelings about certain political matters and certain historical matters, and we'll talk all about these as we go through. But the plays that we're looking at are the plays of, roughly speaking, the second half of Shakespeare's career, starting about 1603 and moving through to The Tempest — which is, we think, a play written in 1611 and performed in 1611, and which is often misguidedly described as Shakespeare's farewell to the stage. In fact he didn't say goodbye to the stage in 1611: he wrote at least one other play completely on his own, which we're not going to have time to look at — Henry VIII, or All Is True. How many are familiar with Henry VIII, or All Is True? Okay, great. He also collaborated with other people on other plays at this point. Shakespeare is born in 1564, he dies in 1616, and the First Folio of his plays is published in 1623 — which is to say, he did not design it. It is published as an homage to him by two members of his acting company, Heminges and Condell, and they publish it in folio
form, which is a great big — I wish I'd brought one in — a great big format, normally used in this period for sermons and for learned information of various kinds, not for plays, which were thought of as trash, thought of as the paperback books of their time. Sir Thomas Bodley, when he built his library at Oxford, forbade play scripts to be put into the Bodleian Library because, he said, they were "baggage books": they were trash, they were low, they were too funny and too dirty and not edifying, and so forth. Shakespeare's plays aren't the first plays to be published in folio form: Ben Jonson published his own plays in folio form in 1616. He was a great classical scholar, and he had the word "Opera," meaning "Works," lettered on the spine of the folio, and he took a lot of heat for publishing plays in folio form. It's as if you were to do, you know, an opera out of South Park or something like that — you would be bringing together two things that would sometimes be thought of as not cohering. But the First Folio is, so to speak, the first authorized collective publication of Shakespeare's plays. Before that, the plays are published in quarto form, and a quarto is about this size — a piece of paper folded to be about this size — so they are like little paperbacks. Some of them are authorized and some of them are not authorized, and some of them have the playwright's name on the title page and some don't, because the playwright isn't really the most important person at this point. The person who made money out of the quarto was the bookseller — not the playwright, not the company, but the guy who actually produced the physical object. Copyright appears in English law at the beginning of the eighteenth century — 1710, the Statute of Anne. Before that time, authors don't really own their works in the way that we think of authors as having copyright; it's a lot of borrowing back and forth, a lot of creative borrowing. But the First Folio of Shakespeare's plays is what divides the plays into these hypothetical subareas of comedies, histories, and tragedies. Sometimes plays are sort of on the cusp between one status and another: Troilus and Cressida is one such play. Is it a comedy? Is it a history? It's not quite clear. On the title page of one version it claims to be one thing; on the title page of another version it claims to be something else. These categories are categories that are invented and made to be played with. In other words, there's no genre police out there saying, "Here are the rules of a comedy, and if you disobey these rules then you fail as a comedy." In fact, what we'll see as we come to look at these plays is that all the comedies have bits of history in them and bits of tragedy in them, that all the tragedies have bits of comedy and bits of history, and so forth — often very self-consciously so — and that they kind of push against these genres. So what would be some things that you would hypothetically say would be true of a comedy? Yes — a happy ending. A happy ending: what would that mean? The community comes together. Okay, good. And yet I will be happy to prove to you that every single Shakespearean comedy does not do that: there's somebody left out, some marriage that doesn't take place, some foreboding remark about, you know, what might happen in the future. There's some thread, some loose ending, which makes them such good plays — all the pieces don't come together so perfectly. They move in that direction, and yet there's something left over, something left to go. And the tragedies, we'll see, very often end with somebody — often a person of high rank — coming forward and saying, "Now we've all come together, we've lived through all this together, and now we've learned our lesson," and so forth, and
inevitably you can hear the Jaws music behind that: there's foreboding in the very attempt to bring everything together. It doesn't mean that it's actually dark rather than light — that would be too simple to say as well — but just that it's always nuanced at the end, as it's nuanced in the middle. These things are not perfectly happy or perfectly tragic; in fact, the playwright is playing extremely knowledgeably with these impulses. There's a temptation, in talking about these genres, also to try to align them with ancient Greek structures. A Shakespearean tragedy is not very much like a Greek tragedy. It has certain broad structural things that we could say it has in common — it has recognition, it has discovery moments — but in fact the Elizabethans had a notion of how it was that classical tragedy was written: there was this idea that somehow a good play had unities attached to it. If you go back and you look at Aristotle's Poetics, you'll see that he talks really about only one unity — the unity of action — that is to say, the action shouldn't be disparate. Shakespeare's plays violate this notion of the unity of action all the time by having, for example, double plots. Anybody who knows King Lear knows that there's the Gloucester plot and the King Lear plot, and that the two plots will come together; that's what makes for an interesting Shakespearean structure, but it's not quite the same as unity of action. The Elizabethan-Jacobean period thought also that there should be unity of time and unity of place, and you'll see that these plays violate those supposed precepts all the time. In the late plays that we're going to look at, like The Tempest or Cymbeline or Pericles, there are great blocks of time in the middle: in The Winter's Tale there's a block of sixteen years between act three and act four. This is not unity of time. And in Pericles, the action is set, as the stage direction says, "dispersedly in many Mediterranean countries" — so this is not unity of place. These plays again have, somewhere in the back of their minds, this idea about the unities, but they have it there in order to sort of push against it. And in fact the idea of these three unities was slightly a Renaissance misreading of Aristotle — the idea that time, place, and action were the three unities. Action is really what Aristotle seems to speak about, and action here, for us, would have to do again with whether there was something satisfactory or fitting for us in the ending of the play. So — I meant to direct your attention to the syllabus, and I'm going to go back through it now, just to say out loud that we're going to do Troilus and Cressida and then Measure for Measure, both plays that have sometimes been described — there was a time in, gosh, the early and middle part of the last century, the twentieth century — as "problem plays" or "dark comedies." The phrase "problem plays" — and we'll talk about this more when we get to these plays — was really borrowed from the middle plays of Ibsen or of Shaw, plays that seem to be about social problems, problems of the culture, whether it's water pollution or syphilis or women's rights or something like that. That's where this phrase "problem play" comes from, and these plays really are about that. Measure for Measure, you'll see, is a city comedy: it's about a corrupt society and how you can fix that. It's got whores and pimps in it; it's not kings and queens. And so these plays were sort of diagnosed as problem plays, borrowing this epithet from the early-twentieth-century dramatists. They are problem plays after a fashion, but this is a kind of made-up title, and it's not one that's used very much anymore. Still, they are deeply and interestingly problematic, and they are plays very much about
emergent characters and character types. Othello, King Lear, Macbeth — I don't really need to say anything about why they're here. Antony and Cleopatra is, in my view, one of the most magnificent plays in the entire Shakespearean canon. I never like to play favorites here, but I love this play, and it's great fun and exciting and exhilarating to talk about. It is a quote-unquote "Roman play" as well as a tragedy, because it's set in Rome and in Egypt. And Coriolanus is also a Roman play, but where Antony comes at sort of the end of the Empire, Coriolanus comes at the very beginning of Republican Rome, or nascent Rome. It's a very different moment — it's a long time period in Rome — so we don't think of these as exactly the same time period, even though they're both quote-unquote "Roman." And then we come to this cluster of four plays — Pericles, Cymbeline, The Winter's Tale, and The Tempest — that have been variously called romances, or late plays, or tragicomedies, or a variety of other things, all of these again labels that people have used to try to control them a little bit, to give them a generic expectation that would allow you to know how to read them or how to perform them. When we come to talk about them I'll say some more about the genre of romance — the adventure, the fantasy tale, and so on — but we'll encounter an encapsulated romance much earlier, when we get to Othello, when we find that Othello has told the stories of his adventures as a hero in uncharted territories, fighting against monsters and tribes and sea figures and so forth, in such a way that he has enchanted Desdemona and made her stop what she was doing and stand dumbfounded and listen to him. The stories that Othello tells, even though they're his actual stories, are really romance stories — stories from a kind of romance genre. So "romance" is a term that really begins at the end of the nineteenth century, when these plays become called romances, and it's meant to kind of praise their fairy-tale aspect. This is the great Victorian period, in which fairy literature is interesting to people, in which empire is also interesting to people, and this idea of adventuring beyond the local. "Late plays" was meant to be, on the one hand, a very neutral kind of designation that would not give anything away — these are plays late in Shakespeare's career — but I think that we need also to think about them in the way in which people talk about "late style" these days: say, you know, the late music of Beethoven, or the late work of Rembrandt — the kind of work that is done by an accomplished artistic master toward the end of a career, when what you can see very often, and we'll see this with these plays, is a mere — if it were Rembrandt — a mere pen stroke, a mere gesture. You wouldn't have to paint the whole canvas thickly; a sketch would show you the whole figure behind it, because the gesture is itself so accomplished. We'll see that with Shakespeare: these characters are often sketched rather than fully formed, because they're references to characters we met earlier in Shakespeare's plays. And the last of these designations, tragicomedy, was actually what these plays were often called in the period: a mixture of tragedy and comedy. This was thought of by some people as "mingling kings and clowns," to use Philip Sidney's terms — a kind of mixture that you shouldn't do — but it's actually a highly desirable, highly audience-pleasing genre at the time that Shakespeare is writing them. So these plays all, a little bit, match up to what it was that interested audiences. We will now come and take a little break. We're going to always take a break in the middle, and this is partly, again, to think about the medium — partly a break that is enforced for us by the fact that we have tapes rolling and we come to the end of the tape — but this is also time for you to take a little breath
of air, walk around if you like, use the restroom, and come back in about five minutes, and we'll begin again. Let me say just one or two things. I was asked a question — a question I should have answered before it was asked — about what Shakespeare text you should use, and my answer really is: any good text with notes attached to it. We've ordered the Norton Shakespeare for you. I'm a great fan of the Riverside Shakespeare; the Bantam Shakespeares are fine, the Signet individual volumes are fine, the Penguin volumes are fine. Please use, however, an edition that does have textual notes and glossed notes — notes that explain what words mean and so forth. Don't use the old Yale Shakespeare or something that's got no notes; you're going to need the notes, you're going to need to have that information. Yes — the Pelican's fine, Pelican's absolutely fine. Some of you may prefer to have individual volumes rather than carry a big heavy single volume. I don't care what edition you use. My Shakespeare After All is keyed to the Norton — that is to say, when I give act, scene, and line numbers, they refer specifically to the Norton — but that was just because I had to refer to something. And the reason that line numbers will vary from edition to edition is that Shakespeare writes in prose as well as in verse, and a line of prose depends upon how wide the page is. So you will find that there's a slight variation: if you're using the Riverside or the Pelican and I say act 3, scene 2, line 42, it could be line 46 in your edition, but you will find it easily, and that's not sufficient reason to buy the Norton rather than some other edition. I do, however, insist that you buy this one. But which Shakespeare edition you use really doesn't matter to us. One of the editions that has the most notes is the Arden, which I do recommend if you want to invest in something that's edited very, very fully. But whatever you have is fine with us. I'm now, rather inelegantly, going to take off these various microphones and give them to Mel so that she can explain the web component of the course, which is one of its most exciting aspects. And as Mel suggested, very often we hear — because people become so involved with Shakespeare, and it does feel so familiar and so appealing — people will say, "Well, you know, I get the feeling that," or "I was just wondering whether," and utterances like that, which are absolutely human and instinctive, fall into the same category as "Why did Shakespeare do X?" There's no proof that can be evidenced from that. So "I get the feeling that" is going to be answered by us by saying, "Well, what in the text do you think has given you that feeling?" And we're going to want you to go back to the language or the gestures of the plays in order to say, "Well, I see that — I get the feeling that Hamlet is a stunted adolescent because he is spending so much time worrying about his mother rather than about anything else." That's not a particularly good example, because it's so general an example. The other thing I should say is that it's not so much about what not to do as about the pleasures of what you can do with Shakespeare, and the enormous pleasures of what you can do with Shakespeare are to encounter the plays as intimately and specifically as possible. So we thought we might start with some passages for the time that we have left, and read through them, and try to encourage a general discussion. Larry does have the microphone — so, Larry, you've got a microphone, right? So can you be my other voice on the dialogues and be understood? Okay — you want to be Cleopatra, or you want to be Dolabella? "His face would—" I should say that the topic here — just to set the scene for you a little bit: one of the fascinating things about Antony and Cleopatra, as you'll see, is that these are two very titanic figures. Which is going to subsume
the other? He is the embodiment of a certain Romanness; she is the quintessence of Egypt in all its variations. And the structural answer to who wins is that Antony dies in act four and Cleopatra lives through act five. So here we have act 5, scene 2, and Cleopatra is conversing with one of Antony's soldiers, Dolabella, about her imagination of Antony. You'll see when we come to the play that everybody has a fantasy about Antony — everybody talks about Antony in some extraordinarily symptomatic way — but this is a particularly powerful one, because it comes after his death, so automatically he is aggrandized, and she is taking pleasure — I would say even a kind of erotic pleasure — in having a discussion about Antony with someone else who loved him. "His face was as the heavens, and therein stuck a sun and moon, which kept their course and lighted the little O, the earth." "Most sovereign creature—" "His legs bestrid the ocean; his reared arm crested the world. His voice was propertied as all the tunèd spheres, and that to friends; but when he meant to quail and shake the orb, he was as rattling thunder. For his bounty, there was no winter in't; an autumn 'twas that grew the more by reaping. His delights were dolphin-like: they showed his back above the element they lived in. In his livery walked crowns and crownets; realms and islands were as plates dropped from his pocket." "Cleopatra—" "Think you there was or might be such a man as this I dreamt of?" You remember the next line: "Gentle madam, no." "Gentle madam, no" — this is your fantasy. So let's look at this fabulous passage — and I'm so glad that Mel did the "plates" thing, because silver plate is the sort of medial thing; you've heard about "plate" in that sense, but the idea that plates were coins is one very good example. What do you observe in this passage? It doesn't have to be what a word means — what do you observe about the language of the passage? Whoops — you're going to have to raise your hand and get Larry; I'm sorry, this is quite cumbersome. No, no — down here, down here. There will be two microphones in future. "Somewhat of an arbitrary observation, but it seems very nautical, just right off the bat." Nautical — yeah, show us some nautical. "Well, the descriptive terms, you know: 'his legs bestrid the ocean,' reared above the 'crested world'; let's see — 'his delights were dolphin-like, they showed his back above the element'; 'realms and islands were as plates dropped from his pocket.' It just seems very nautical for some reason — the dolphins and the ocean." I will indeed associate those with the sea, and Shakespeare very often associates the sea both with life-giving forces and with storms that are threatening. "His legs bestrid the ocean" — what kind of a figure is that? I mean, if his legs bestrid the ocean, what does it say about him? Exactly — so we think about, maybe, the Colossus of Rhodes; in any case we think about an enormous figure, because to bestride the ocean is to have one foot on land on either side. And what about the dolphins? — we're not quite finished — what about the dolphins, "his delights were dolphin-like"? "Oh, it kind of juxtaposes: from the colossal image brought about with 'his legs bestrid the ocean,' now you have the very delicate image of a dolphin, you know, porpoising through the ocean." So you're closer to the ocean here — you return to the ocean, an elemental figure rather than a menacing colossal figure. And the semicolon — again, Shakespearean punctuation is all imposed by editors, so that in general you shouldn't make an argument about a dash or a period or something, because these things will vary from edition to edition; punctuation in the Elizabethan and Jacobean period was an evolving art, and there were lots of theories about it — but here the semicolon seems to suggest that "they showed his back above the element they lived in" is an explanation of what "dolphin-like" meant: a kind of physical image. What does "element" mean
here? It's — yes — the water. Now, "element" in this period — well, I guess we're not reading Twelfth Night, but the element could be air, it could be water, it could be the sky. It does have to do, though, with the elements: how many elements, in general, were there thought to be? Four elements — air, water, fire, and earth — and this old structure, very much older than Shakespeare, of thinking of the world in these four parts comes back in play after play after play. We'll see in The Tempest that there seems to be a sort of split between the characters that are airy and fire-like and the characters that seem to be earthy and water-like, and so forth. So exactly: we have a kind of microcosm of the world in the elements in this passage. What else? There was a hand behind you — all right, no more? — this gentleman here. "Well, along the same lines, and in a more general sense, just the use of 'the heavens,' and 'as all the tuned spheres,' and 'as rattling thunder' makes him sound godlike, or attuned to the heavens and things larger than the earth." He's hugely magnified; he has become a creature much bigger than the ordinary human. Remember, she's having a conversation with a human being, and she's describing this apotheosis of Antony, where every single thing — "his reared arm crested the world," "his voice was propertied" — what does that mean, "propertied"? "Does it mean — I don't know — it has the properties of—" Having the properties of — yes, exactly — and it comes from the word "proper," meaning "one's own." So it was "propertied as all the tunèd spheres." Now again, here is the idea of the music of the spheres. This is an old notion about perfection: that the planets travel, and they make music as they travel. There is a wonderful poem called Orchestra, by a poet called John Davies, that writes about this. But because we're fallen, because we're mere mortals, we can't hear that music of the spheres, which is the sign of heavenly perfection. But Antony somehow is that music of the spheres — "his voice was propertied as all the tunèd spheres, and that to friends." It's the perfect sound. All the way over here — sorry, sorry. "It also seems like not only perfection but a kind of tyrannical perfection. You know, she's evoking all these images of nature, but they're not images of human nature. The one thing that struck me was the 'sun and moon which kept their course and lighted the little O of the earth': not only is he sort of shedding his light, but he's also controlling—" Why do you say "tyrannical," though? "Well, he's dominating—" I can see "tyrannical"; say why. "Well, that might just be me being a little hyperbolic." Well, but "tyrant" — even in this period a "tyrant" has not only a controlling but a kind of politically negative valence to it, and I want to be clear whether you're suggesting that there's something inadvertently or deliberately negative about this figure. "Well, 'when he meant to quail and shake the orb he was as rattling thunder' suggests, you know, that he had — one of his methods of, you know, keeping — it seems like she saw him as a—" Let's not do "it seems like" — but go back — no, because we can do it a different way. Go back to the previous line: when "his voice was propertied as all the tunèd spheres," who was he talking to? "To friends." To friends. So this other stuff, the quailing and shaking the orb — who's he talking to? "To anyone he meant to intimidate." Sure — the non-friends. Whether they're enemies, whether they're inferiors — right, we don't know what they are, but they're the others-than-friends. "To quail" — what does "to quail" mean here? To shake — yes. Now, it's not that he is quailing but that he's making somebody else quail; this is a kind of transferred epithet. And "shake the orb" — right. And I think
the "shake the orb," with the "sun and moon which kept their course and lighted the earth," and, you know, "for his bounty, there was no winter in't; an autumn 'twas that grew the more by reaping" — right, let's just pause on that image for a second. "An autumn 'twas that grew the more by reaping": what kind of an image is that? What's happening in that image? Yes — "There's no season of death; there's only the season of autumn, the season of fruition, or when we harvest." Say the first thing that you said, because it wasn't captured on the tape. "There's no season of death." There's no season of death — yes. This is very much like Keats's "Ode to Autumn," and in fact I think Keats's "Ode to Autumn" is very much influenced by Antony and Cleopatra. "Season of mists and mellow fruitfulness" is how that poem begins — the idea that autumn is infinitely fruitful. But there's a paradox built into this: a bounty, an autumn, that "grew the more by reaping." Now, we might ecologically say that we understand that in fact you have to keep things mown in order to keep them growing, and so forth, but there's meant to be a kind of paradox here: somehow the more you pluck from the tree, the more fruit grows on the tree — the more you take, the more he gives. This is not the first time, and we'll see this, that Shakespeare uses this very image; he uses this same image for Romeo and Juliet as he does for Antony and Cleopatra. The idea is that love is infinite, that bounty precisely comes out of love, and that it cannot be measured by ordinary monetary or numerical or space or time limits. It's about something else: it's about infinity in replication, about infinity in space, about infinity in generosity. So, good. The sun and moon — I think it's crucially important to think about this as a kind of cinematic moment, in which she starts — what was he like? well, he was like, you know, something celestial — and kind of zeroes in on the sun and moon, and then you come down. This goes back to what you were saying about the ocean then turning into the dolphins: from way up here, where you see the sun and the moon — you're in the Hood blimp or something — you're not going to see those dolphins. It's only when you see the ocean that you can come down closer and closer and closer, till you see the dolphins, and you get to the plates in his pocket. So you're all the way down now, focused on him. But what's in his pocket? What's in the pocket of this colossal figure? Is it really plate? It's realms: "realms and islands were as plates dropped from his pocket." So you have simultaneously the figure of a man with coins in his pocket and the figure of a colossus with countries and islands spilling out of his pockets, because he owns so many of them. He is a figure of the earth. Look back through this passage and see how many times the word "as" appears: "his face was as the heavens," "his voice was propertied as all the tunèd spheres," "realms and islands were as plates dropped from his pocket." So you have both comparison and metaphor here: you're dealing both with the sense that she is always in control of generating these images and with the overpowering beauty and intimidation of this huge figure itself. It's a wonderful passage, and it's a wonderful passage because it performs what it describes: it starts big and it gets small, and it describes precisely both his majesty and his accessibility. And at the same time she is talking to a realist — she's talking to a Roman soldier — and so he keeps trying to interrupt her, as if she's having some kind of fugue-like fantasy that he thinks it's dangerous for her to carry on and have: "Most sovereign creature—"; "Cleopatra—." And finally, when she says to him, "Think you there was or might be such a man as this I dreamt of?" — "Gentle madam, no" — you know her answer: "You lie, up to the hearing of the gods." So she just rejects his rationalization; she rejects his
idea that no man could be that big, or have that much wealth, or have realms and islands in his pockets. "You lie, up to the hearing of the gods." She's going to make her own reality, and that's what we see her do from this point in the play on to the end. How are we doing for time? We're just about, I think, out of time, so rather than start another passage, which I think we don't have time to do — yes, sir? "What do we infer by the reference to 'the little O of the earth'?" Ah — thank you so much, because this is a great question. The O is a round figure; it is at this point both the letter O and also the number zero. This is precisely the moment in European history when the zero comes from Arabic numerals into use in England, and you'll see in King Lear and all over the place a tremendous fascination with the power of zero, with the cipher — with the idea that a zero with a figure in front of it becomes 10 or 100 or 1,000. But we're also dealing with what in Henry V is described as "the wooden O" — that is, the theater, the Globe theater, which we think had sixteen or eighteen sides, which was as close as they could come to making something actually round — and "the wooden O" became a phrase to describe the theater space. So "the little O, the earth" here is zero; it is O. And the circle was also the figure for everything, because it was perfect: it was round, it had no beginning and no ending. So we have the perfect circle, which is everything; we have the zero, which is nothing; and we have the theater, which is the place that mediates between everything and nothing — and we have all of that in the O. And that maybe is as good a place to stop as any. We'll see you next week for Troilus and Cressida. Please bring your plays with you, and please do come with questions, because we are going to ask for questions and depend upon your questions. Thank you. |
Literature_Lectures | ENGL_3328_LECTURE_4B.txt | okay and I think we have a question don't we well some of them will I mean you know some of them will we're going to start talking about this stuff next week by the way as we begin to get into the Victorians but remember that we're going to be taking one of the issues that in our text will be raised as the Victorian issues and if you take something like industrialism progress or decline or questions about science and religion and so on and so on I mean these things are either directly or indirectly being talked about by some of these writers so there will be some carryover and we can be talking about that more as we go along you know for example somebody I'm just about to talk about just a few seconds Mary Shelley Mary Shelley was the author of Frankenstein as I guess everybody knows and which we have in our textbook and that raises in a very interesting way lots of ethical questions about science and I mentioned this one time before just because you have the scientific knowledge and technological capability to do something should you do that thing is it ethically responsible to do that thing and there's some very very interesting questions that come up and of course in her novel it comes up because it or not by the way she's married to Shelley so I mean she's very much in the midst of this group of people who are studying right now and of course what the basic theme of that pot is is that here we have this scientist who figures out a scientific way of creating life with of course terrible consequences so it raises very interesting questions not so much about science as about the ethic of science so we can talk more about though that though as we go along now if we can go to the tablet please go to the tablet great okay somebody asked me a question about these relationships so I thought I would draw them out here over the break William Godwin I mentioned before we took our break was a very famous and very influential 
radical political and social thinker of the late 18th century and the beginning of the 19th century. He married another very great and very famous radical of the time, Mary Wollstonecraft, who nowadays is probably even better known than Godwin, because her Vindication of the Rights of Woman became one of the principal texts defending the rights of women in the 19th century. Among their children was a Mary Wollstonecraft who, as it happens, married Shelley and became therefore a Mary Wollstonecraft Shelley, or sometimes simply Mary Shelley. And since I realized that could be a little bit confusing, I thought it might be helpful if I actually drew that out in this little diagram. The Shelleys were part of a group, first of all around Godwin and Mary Wollstonecraft, and then later on with Byron and others in Italy. And it was when they were in Italy, by the way, that they were playing a kind of game in which each of them was going to produce a story, and so Mary Shelley produced her story, and her story was the story of Frankenstein, which apparently was based on a nightmare she had. She then expanded on that and of course eventually developed it into the novel, which has become very famous, and we've had numerous film versions of it in modern times, and it continues to be a famous work. By the way, if you've seen some of the movie versions of Frankenstein and then go and actually read the book Frankenstein, you're probably going to be surprised: it's a much more interesting book than many of its film adaptations would make it out to be. So okay, well, having said that, let's go back to Shelley the poet for just a few minutes. I wanted you to look once again at, and think about, that song to the men of England. As I mentioned earlier, Shelley was a great radical in his day, and obviously in his time this would have been a very radical poem, just as Blake's chimney-sweeper poems would have been very radical in his time, and also of course as Blake's little black
boy poem would have been as well. So notice what he's saying to the men of England: you working people of England are being exploited by this new system which is coming into play. You are the "hands" in the factory, to use Dickens's term, which we're going to be looking at a little bit later on when we get to Dickens's novel Hard Times. You are the hands who are actually producing all of these goods and all of the great wealth of England, so why aren't you not only getting, but why aren't you demanding, your share of this? And so in the very beginnings of a labor movement in the early 19th century, Shelley, while he was not a working-class guy himself, became a kind of spokesman for those aspirations, and as I mentioned, this poem of his was set to music and is now sung at labor union meetings in England. Okay, right after that in our anthology is another one. I just want to give you a sense of the other side of Shelley, because sometimes we think of Shelley as this guy who's got his head up in the clouds, simply thinking about the awful Spirit of Beauty and clapping his hands in ecstasy over it and so on. Shelley also had this other side to him, very critical, even politically critical, of his society. "England in 1819": "An old, mad, blind, despised, and dying king." This is George the Third; as the note points out, Americans of course know George the Third because he's the George who is addressed in the Declaration of Independence. "An old, mad, blind, despised, and dying king; / Princes, the dregs of their dull race." Princes, the dregs of their dull race: I mean, this is no bowing down before monarchy, is it? "Who flow / Through public scorn, mud from a muddy spring." That's what the monarchy is, for Shelley: mud from a muddy spring. "Rulers who neither see nor feel nor know, / But leechlike to their fainting country cling / Till they drop, blind in blood, without a blow; / A people starved and stabbed in the untilled field; / An army, whom liberticide and prey / Makes as a two-edged sword
to all who wield; / Golden and sanguine laws which tempt and slay; / Religion Christless, Godless, a book sealed; / A Senate, Time's worst statute, unrepealed, / Are graves from which a glorious Phantom may / Burst, to illumine our tempestuous day." Well, this is anything but acquiescence in the status quo, the political status quo. This would have been somewhat unusual in how strong it is. It's not that there weren't people who criticized the monarch, or even criticized the institution of monarchy, at this time, but there are few who would have been this blunt, and who would have used this kind of even violent imagery to describe what he believes is a corrupt system that badly needs to be changed. And of course he's writing this in 1819. Okay, let's look at one other Shelley poem very quickly, if you'll just follow me back over; this is right after the "Hymn to Intellectual Beauty": "Ozymandias." Famous poem, famous poem. "I met a traveller from an antique land / Who said: Two vast and trunkless legs of stone / Stand in the desert. Near them, on the sand" (having a little trouble with the pages here), "Near them, on the sand, / Half sunk, a shattered visage lies, whose frown, / And wrinkled lip, and sneer of cold command, / Tell that its sculptor well those passions read / Which yet survive, stamped on these lifeless things, / The hand that mocked them, and the heart that fed; / And on the pedestal these words appear: / My name is Ozymandias, king of kings; / Look on my works, ye Mighty, and despair! / Nothing beside remains. Round the decay / Of that colossal wreck, boundless and bare / The lone and level sands stretch far away." And again, while this is not a direct critique of the English monarchy, notice that it is really about how the mighty can be brought low. So let me just say one final thing about this. Shelley tended to confine his political activities mainly to speeches, poems, that sort of thing, writing pamphlets and so on, for which he got himself into trouble; you may have read that he got himself expelled from the University
of Oxford for a pamphlet that he published, and got himself into all kinds of difficulty with his family as well as a result. But his good friend Byron (and they were good friends for quite a number of years, though they unfortunately had a bit of a falling out toward the end of Shelley's life; Byron actually held a kind of service, not a conventional religious service, as you could well imagine from Byron, but nevertheless a kind of memorial service for Shelley after he died in a boating accident, and Byron held this on the shore), Byron actually was a political activist. I mentioned when we were talking about Byron that he served in the House of Lords, at least briefly, and defended certain liberal and radical causes of the time, which don't sound so terribly liberal and radical any longer. But also late in his life (he didn't live to be that old, by the way) he went to Greece and devoted himself to the Greek revolution. It happens that the revolutionaries were not very well trained and they weren't very well equipped, and so Byron reached into his own purse and helped to clothe and feed and equip the troops, and he also helped to train them, and he developed all kinds of interesting leadership abilities that nobody ever would have expected from Byron based on his earlier life. He really dedicated himself to this, to the point where he totally wore himself out; he eventually contracted a fatal disease and he died there, and to this day he is regarded as one of the great national heroes in Greece by the Greeks themselves. So, an interesting kind of background for some of these guys. Yeah? Okay, if you're going to jump up to a microphone, thank you. Okay, let me just pass that on: the question is, at the time of these writings, what was the political prosecution, or persecution, that somebody would face? So when Shelley puts out these pamphlets and whatnot, is he facing jail time, or is it already past that point? Well, he
could face jail time, though somebody as well-placed as he was probably wouldn't. I mean, if you actually came out and advocated that people take up arms and violently overthrow the government, yes, you probably would have been thrown into prison; but somebody who was simply critical of the government would not very likely have been thrown into prison, unless it was in time of war, and then possibly yes. Sure, he got himself into all kinds of trouble for various reasons at one time, and by the way, this had to do with the morality and the social mores of the time, not just the politics. Shelley was married to another woman before he met Mary, and some of you may know this story if you've read the headnote here in the anthology. After a while things did not work out very well in their marriage, and you have to put this in a certain kind of context: while it was possible to get a divorce at that time, it was extremely difficult, and it was also quite rare, and it required a lot of money and a lot of effort. The only grounds on which you could get a divorce would be adultery, and so you had to go to court, you see, and make accusations of that kind, and then you had to have witnesses and co-respondents and all the rest of it, and it would just be a huge scandal. So when people separated, they usually did what the Byrons did: they would have a kind of legal separation, so that the parties would have some kind of legal security even though there wasn't a formal divorce as such. Well, then Shelley took up with Mary; she was quite young, by the way, and that was part of the scandal, and they really pretty much had to leave England, and so they went abroad. And then Shelley, the kind of idealist he was, invited the woman who was still technically his wife, even though they were separated, to come and live with Mary and himself as a kind of sister, and, you know, it's all a very interesting kind of
relationship, and then when Byron showed up it became even more interesting. Then what happened was that Shelley's wife died, so that he was then free to marry Mary, and so he and Mary married; but they had simply been living together up to that time, and that would not sound like the sort of thing that would make front-page news in 2004 or 2005 or 2006 or whatever, but at the beginning of the 19th century that was scandalous indeed, especially for a public figure. So okay, well, let's turn next to John Keats, and what I'd like to look at first of all is his notion of negative capability. Now let's go to the text, and I'm going to explain through the text the points that I'm writing up here on the screen. This is his letter to George and Thomas Keats, and you'll notice that there's a lot of chitchat early on in the letter. The part that I want you to pay special attention to is about 20 lines or so into the letter, where he says, "and at once it struck me what quality went to form a Man of Achievement, especially in Literature." You see where I am? Okay, in the 7th edition this is on page 889, and it's down toward the end of the page; for those of you who use another edition, obviously the pagination is going to be a little bit different. "At once it struck me what quality went to form a Man of Achievement, especially in Literature, and which Shakespeare possessed so enormously." Now, I've mentioned here before briefly that there was a tremendous rise of interest in Shakespeare among the Romantics. Coleridge became one of the greatest Shakespearean critics of all time; he gave a series of lectures on Shakespeare which fortunately have survived in the notes of people who were present, and they've been reconstructed by a Coleridge scholar so that we can now have them. That set of lectures was enormously
popular and powerful and influential on the whole history not only of Shakespearean criticism but of the way in which Shakespeare is performed as a play. Okay, now here's Keats, also caught up in what was sometimes called bardolatry. Bardolatry: you know how Shakespeare is often referred to as the Bard, as in the Bard of Avon? Bardolatry. "And which Shakespeare possessed so enormously; I mean Negative Capability, that is when man is capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact and reason. Coleridge, for instance, would let go by a fine isolated verisimilitude caught from the Penetralium of mystery, from being incapable of remaining content with half knowledge." Okay, let's take that apart. Negative capability: first of all, such a person of genius, according to Keats, would not strive for definite answers to complex problems. You know how there's always a temptation to want to simplify something, to give a simple answer to a very, very difficult and complex problem? "Oh, that's such and such; oh, that's just X, or it's just Y," and so forth, whether it be in politics or religion or philosophy or discussions of history or whatever. So according to Keats, someone like Shakespeare has the capability, and this is a negative capability, in other words, to hold himself back. That's what negative capability means; that's what's negative about this capability: his ability not to jump to definite answers to difficult and complex problems, but the capacity to hold himself back from that temptation. And he says, by the way, that Coleridge couldn't hold himself back from the temptation. Remember the point that we made at the end of "The Rime of the Ancient Mariner," based on Coleridge's report of a dinner conversation in which there was a person at the dinner who said to Coleridge, "you know, I really wish that your Rime of the Ancient Mariner had had more of a moral," and his response was, "I think the problem is rather that it has too much of a moral." And we looked at
those concluding stanzas where he's really beating us over the head with the moral of the whole poem. That is not negative capability. Okay, so here's Keats. Secondly, it's a capacity for dealing with the indefinite and the mysterious, in other words, the ability to live with a certain amount of uncertainty. Now, many people really, really feel uncomfortable about living with lack of clarity and uncertainty; they want things definite, they want things clear, they want things outlined, right? It makes everybody a lot more comfortable. However, what Keats is saying is that the person of real achievement is going to have the capacity to hold back from jumping for the simple explanation, and to be able to live with indefiniteness, lack of clarity, even mystery. Obviously such a person will not win very many elections; as a matter of fact, that's what several candidates, including one that I could mention but will not, have been criticized for: that the person doesn't come out definitely enough and clearly enough, for some people at any rate, to state exactly what his positions are on very clearly defined issues. Well, that's precisely what Keats says is a terrible mistake if you want truly to be a person of achievement, because truth ultimately is going to be united with beauty, and that's something that can't be pinned down, something which can't be pinned down in terms of neat categorical definitions. Okay, now let's look at a couple of Keats's own poems. To what extent does Keats really live out his own dictum? To what extent do we find negative capability in his own poems? Is this something that we can really apply in a kind of practical criticism at all? However interesting it might be theoretically, is it practical in criticism to think about negative capability? Well, let's look first at "Ode on a Grecian Urn." You've seen these urns, right? In our museum here in Houston, the Museum of Fine Arts, we have some Greek urns, and also in the Menil Collection
there are some Greek urns, and I'm sure you've seen pictures of lots of these. There are museums, for example the Metropolitan Museum in New York, which has a very good collection of Greek urns; probably the best collection in the world is in the British Museum in London, and of course we know that Keats actually went and saw some of these. And so he's going to write a poem, his ode, which is typically a celebratory poem. In ancient times it could be a poem celebrating the victory of a general or of an Olympic athlete; there are lots of very famous odes to athletes in ancient Greece. Okay, "Ode on a Grecian Urn": "Thou still unravish'd bride of quietness, / Thou foster-child of silence and slow time, / Sylvan historian, who canst thus express / A flowery tale more sweetly than our rhyme: / What leaf-fring'd legend haunts about thy shape / Of deities or mortals, or of both, / In Tempe or the dales of Arcady? / What men or gods are these? What maidens loth? / What mad pursuit? What struggle to escape? / What pipes and timbrels? What wild ecstasy?" Well, what we're just beginning to get intimations of is what this poem is really all about, and it's not simply the Grecian urn. If you've seen Grecian urns, well, let me try my hand at drawing here. Okay, we're going to go to the tablet here in a second. Great. A Grecian urn typically will look something like this; don't expect any great drawing here, by the way, this is just to give you an idea of what it is that Keats is looking at. So here's the urn, and it could have been decorative, but it also could have been utilitarian in some sense; it could have been used to put some kind of food or drink or whatever in, perhaps for storage purposes. But you will notice that the urns typically have bands around them, and in these bands, let's just assume that what I'm doing is
floral patterns in this band, and then these are branching trees here, and some other kind of figures down here, and then in here we've got someone who is pursuing someone else, and so on. We're going to have whatever going on in the different bands. As he goes through what he's looking at (with my silly little drawing here), he's looking at the different bands on the vase or the urn, right? On one band there would be a floral motif, and then on another band there could be people, and then on another band there might be grapes, because maybe the urn was originally designed to hold wine, so maybe there would be some kind of grapevine motif, and then maybe another band in which there would be more people, with a scene depicted. You've seen some of these where they'll have warriors, and the warriors will be using their spears or their swords and so forth and going after one another, but there are all kinds of other motifs that are used in the various bands of these urns, some of them, by the way, being put in back corners so that most people will never see them unless they know where they are, because they'll have all kinds of sexual activities being depicted on there. The Greeks didn't have the same notions that we do; they thought that all of the activities of the body were natural and healthy, and therefore why should they be hidden away rather than even celebrated in art? Well, anyway, let's look once again. "Still unravish'd bride of quietness" is going to be both the urn itself and a figure on the urn, because what we're going to see is that on the urn are plants, or obviously images of plants, that never fade. In the natural course of things, plants flower in springtime, they mature in the summer, and then in the fall they begin to wilt, and in some parts of the country the leaves turn
different colors and so forth, and then in the wintertime they seem to die off, with the exception of some evergreens, only to be reborn again. So there's this constant sense of flux, of movement and of change, in the world. But when you paint vegetation, flowers, trees, grapes, whatnot, there they are: they are frozen in time, they are changeless, they are made changeless by art. So that's part of what's going to be going on in this poem; it's a meditation on what art does to an otherwise constantly changing nature. Okay, also: how does art represent something which is in motion? Now, that has been a very, very interesting and serious artistic, one might even say aesthetic, problem for as long as people have been drawing, probably. Think about that: what do you do if you want to have, say, a piece of sculpture which represents some kind of violent action, violent motion, and yet the very nature of a sculpture is that it's going to be standing still, right? It's made out of stone, generally speaking; I know sometimes we can use other materials, but let's say stone. In the 18th century there was a famous work by a philosopher and aesthetician named Lessing, and it was entitled Laocoön. Now Laocoön (you don't have to remember this, by the way) was a figure in the ancient epics about Troy, the Homeric and Virgilian epics about Troy and the fall of Troy and the aftermath and so forth. And you remember the famous story of the Trojan horse? Well, the Greeks have had the city of Troy under siege for nine years, actually over nine years, and they haven't been able to defeat the Trojans because they haven't been able to breach the walls of Troy. So what do they do? They come up with the scheme of the Trojan horse, which supposedly is a gift signifying peace between their peoples, given to the Trojans, and of course the trick is that inside this large wooden horse are some Greek warriors who, once the Trojan horse is
brought into Troy, the gates shut behind it, and everybody goes to sleep, will come out of the Trojan horse, open up the gates of Troy, and the whole Greek army will come pouring in. Well, there was only one Trojan who advised against bringing this in, and who made the famous statement which goes down through 2,000 years, actually more than 2,000 years, of our literary history: fear the Greeks when they come bearing gifts. Which, by the way, if any of you are of Greek ancestry, I'm not saying anything about Greeks here; it's simply about the Trojan War. Fear people who come bearing gifts, enemies who come bearing gifts, is really what it's about. Okay, well, anyway, it happens that the gods, under high Zeus, have already decided that the Trojans are going to lose the war and the Greeks are going to win, so naturally the gods do not want Laocoön to be heeded by his fellow Trojans; they want the Trojans to take that horse within the walls of Troy. So they send these huge serpents out of the nearby river that come up and wrap themselves around Laocoön and his two sons and drag them off into the river, where they are drowned. Okay, why do I go through this long discussion of that? Because there's a famous statue of Laocoön: here is this nude male figure, a large figure, with two other nude male figures who are smaller, because they're boys, and they have these serpents wrapped around them, and they're in the middle of a death struggle against these serpents. Now here is something very dynamic and very violent and very much in motion, except it's carved in stone, right? So part of what Lessing is meditating on is the relationship between the static, that which stays the same, and the kinetic, that which is in motion, in art, and how in a static medium, something which doesn't move, you can create the sense or the illusion of movement. That was a very, very interesting
problem, and of course much 19th-century art was devoted to the solution of that problem. Think, for example, of Impressionist paintings. What do you see when you look at Impressionist paintings? You see the effort, in a static medium (static means that things don't move), to create some kind of kinesis or movement. I'm thinking, for example, of one painting of the Luxembourg Gardens in Paris: you're looking out on a summer day across a stretch of land, and then there's some water, and then there's some land again on the other side, and people are out in the park, like on a Sunday, and the women have their parasols and people are walking around and kids are playing and so forth, and it's painted in such a way that as you look across the water on a hot summer day, the light seems to shimmer; everything shimmers in the heat, the light actually seems to shimmer. Or you have some of the famous water-lily garden paintings of Monet, in which there seems to be almost a kind of shimmering quality to the light; or paintings of the Cathedral of Notre Dame at dawn, in which there's a kind of shimmering or movement of the light, or at least it appears that way in the painting. People actually experimented to try to figure out ways of creating that sense of motion in an otherwise static medium, and that was one of the things that absolutely blew people away when they came up with moving pictures. Moving pictures! Can you imagine anything like that? A moving picture was, it appeared, the solution to the ancient problem of how you bring together a static medium and put it into motion, because of course, as you know, what you have when you look at a moving picture is actually a series of static images, right? And there are lots of very early, 19th-century experiments with that. There's the famous one, by the way, of the series of photographs of a
nude man who is running and jumping. Okay, well, did you ever hold one of those little books when you were a kid where you would riffle the pages? Are those still around? Anyway, when I was a kid you'd riffle the pages, and of course if you looked at any individual page the image would obviously be static, would be still, would be unmoving, but when you riffled the pages you would see the figure moving, doing whatever it was doing: jumping up and down or throwing balls or whatever. Okay, so all of these were experiments in putting together a work of art in which one is confined, it would appear, at least so far, to static images, but in such a way that you can create the illusion of motion. A movie film, of course, consists of stills, right, which are sent through the projection mechanism at the proper rate of speed so that our eyes register them as moving in normal ways. But you'll remember in early movies, and you've probably seen pictures of early movies, they would be very jerky in their movements, because they hadn't really worked it out completely yet. Okay, now, having said all of that, what does that have to do with Keats? Well, let's go back to the Grecian urn. "Thou still unravish'd bride of quietness, / Thou foster-child of silence and slow time." Okay, the urn itself is a foster child of silence and slow time. "Sylvan historian," because what's going to be on this is a scene from antiquity; sylvan has to do with the Latin word silva, which means forest or woods. "Sylvan historian, who canst thus express / A flowery tale more sweetly than our rhyme: / What leaf-fring'd legend" (and with my little drawing, the leaf-fringed legend around one of the bands) "haunts about thy shape / Of deities or mortals, or of both, / In Tempe or the dales of Arcady," out of Greek mythology. "What men or gods are these? What maidens loth?" (We're going to see in just a moment.) "What mad pursuit? What struggle to escape? / What pipes and timbrels? What wild ecstasy?" being represented on the bands of this urn. Heard
melodies are sweet, but those unheard are sweeter. It's very interesting; what on earth is he talking about here? "Therefore, ye soft pipes, play on." See, there's somebody playing the pipes depicted on the urn, so that there's a melody, but it's an unheard melody: "Not to the sensual ear, but, more endear'd, / Pipe to the spirit ditties of no tone," of absolutely no tone. "Fair youth, beneath the trees, thou canst not leave / Thy song, nor ever can those trees be bare." Okay, on the urn the fair youth can never leave off playing his song or singing his song, nor can those trees ever be bare, because they're painted now on this urn; this is not a motion picture, you can't represent change here. "Bold Lover, never, never canst thou kiss, / Though winning near the goal; yet, do not grieve; / She cannot fade, though thou hast not thy bliss, / For ever wilt thou love, and she be fair!" Speaking now to the young man who's running after the young woman, and he wants to get a kiss from her, and she's running away from him: he can never reach her, he can never reach her, but the consolation is that she will forever be fair; she'll never change. "Ah, happy, happy boughs! that cannot shed / Your leaves," as real boughs would shed their leaves in autumn, "nor ever bid the Spring adieu; / And, happy melodist, unwearied, / For ever piping songs for ever new; / More happy love! more happy, happy love! / For ever warm and still to be enjoy'd, / For ever panting, and for ever young; / All breathing human passion far above, / That leaves a heart high-sorrowful and cloy'd, / A burning forehead, and a parching tongue." See, nothing is ever going to change in this world of art, and there's something consoling about that, because they're never going to change; they will be forever youthful. "Who are these coming to the sacrifice? / To what green altar" (this would be another band with another set of images on it) "O mysterious priest, / Lead'st thou that heifer lowing at the skies," this is apparently a religious sacrifice, "And all her silken flanks with garlands drest?"
"What little town by river or sea shore, / Or mountain-built with peaceful citadel, / Is emptied of this folk, this pious morn? / And, little town, thy streets for evermore / Will silent be; and not a soul to tell / Why thou art desolate, can e'er return." "O Attic shape!" Attic: Greek, ancient Greek. "Fair attitude! with brede / Of marble men and maidens overwrought, / With forest branches and the trodden weed; / Thou, silent form, dost tease us out of thought / As doth eternity." There's something eternal about the urn, because what is depicted on the urn and in the urn is frozen; it is out of time, and to be out of time is to be in eternity. That's what eternity means. "Cold Pastoral!" A pastoral is a pastoral poem; here the urn is metaphorically treated as a pastoral poem. Pastoral goes back to the Latin word pastor, which means shepherd, which means rural, out in the countryside. So what is being represented here, as in pastoral poetry dealing with people out in the countryside, is this scene. "When old age shall this generation waste," my generation, John Keats's generation, "Thou shalt remain": you're still going to be here when all the rest of us have grown old and eventually died, but you will still be here, "in midst of other woe / Than ours, a friend to man, to whom thou say'st, / Beauty is truth, truth beauty, that is all / Ye know on earth, and all ye need to know." Well, this is one of the most famous poems in the English language, and it's one of those things that you can just go over and over and over, and you just keep finding other little nuances and shades of meaning and other kinds of problems, and you can turn it around, like the Grecian urn itself, and view it from many, many different perspectives. So I ask the question: what about negative capability? Is that a useful concept here? Has Keats followed his own advice? Does he try to give us some sort of moral here, or does he leave it implicit, so it may be
there but it's not stated outright, and we're certainly not hit over the head with it? Well, that's an interesting question, and for comparison I would like us to look over at "To Autumn." But first of all let me put this up on the screen, because it has some of the terms that I've been using. Okay: stasis and kinesis. These are ultimately two Greek words (we've anglicized them), stasis meaning standing in one place, and kinesis meaning being in motion or moving, and of course that's what's happening in the poem, isn't it? I'll just give an example of stasis: there's a statue of Aphrodite in our own Museum of Fine Arts, up on the second level, which is an illustration of a very posed figure. And kinesis: Laocoön (that's Lessing's title, by the way, the philosopher I was telling you about), the statue of Laocoön and his sons, but also Rodin. You've seen the statuary of Rodin; we have three or four pieces in our Museum of Fine Arts, and there are Rodins in other museums around the world, and of course if you get to Paris there's the wonderful Musée Rodin, which has some of the statues inside, but many of them are actually outside, including the absolutely wonderful Burghers of Calais and the Gates of Hell. And what Rodin did (and you may know what I'm talking about; at least some of you, I hope, know what I'm talking about) is he will sometimes have a figure who is emerging out of stone, as if struggling to be born out of a block of stone, half still in the block of stone and half out, which is a remarkable way of representing the kind of problem that Rodin was setting for himself. After all, he's dealing just with a block of stone, but the figure that he is struggling to create is, as it were, struggling out of the stone. Okay, now, this is posed a little bit differently, so that in our poem we have the whole issue of how these figures are apparently in motion and yet they're not in motion. The world is in motion, our world is in motion,
everything changes; you and I are going to change; you and I ultimately will no longer even be here. And yet what is here, by virtue of its being static, will always be here, and the figures on the urn will always be young. So, okay, well then, let's look at "To Autumn" in terms of the kind of pure lyricism that Keats was trying to get in some of his most mature poetic efforts. This is one of his great odes, by the way; there are several of them, including the Grecian urn. Season of mists and mellow fruitfulness, close bosom-friend of the maturing sun; conspiring with him how to load and bless with fruit the vines that round the thatch-eves run; to bend with apples the moss'd cottage-trees, and fill all fruit with ripeness to the core; to swell the gourd, and plump the hazel shells with a sweet kernel; to set budding more, and still more, later flowers for the bees, until they think warm days will never cease, for Summer has o'er-brimm'd their clammy cells. Who hath not seen thee oft amid thy store? Sometimes whoever seeks abroad may find thee sitting careless on a granary floor, thy hair soft-lifted by the winnowing wind; or on a half-reap'd furrow sound asleep, drows'd with the fume of poppies, while thy hook spares the next swath and all its twined flowers: and sometimes like a gleaner thou dost keep steady thy laden head across a brook; or by a cyder-press, with patient look, thou watchest the last oozings hours by hours. Where are the songs of Spring? Ay, where are they? Think not of them, thou hast thy music too, while barred clouds bloom the soft-dying day, and touch the stubble-plains with rosy hue; then in a wailful choir the small gnats mourn among the river sallows, borne aloft or sinking as the light wind lives or dies; and full-grown lambs loud bleat from hilly bourn; hedge-crickets sing; and now with treble soft the red-breast whistles from a garden-croft; and gathering swallows twitter in the skies. Now that may be as close to pure lyricism as you can get. Notice that what we have is a
series of images, with very little attempt to suggest what the meaning of those images might be, okay, with the exception perhaps of the beginning of the third stanza: where are the songs of Spring? Ay, where are they? Think not of them, thou hast thy music too, with perhaps the implication that people tend to think, oh, how beautiful the spring is, and may not think so seriously of how beautiful the autumn is. But notice how this really does achieve a kind of negative capability, and it certainly doesn't do what he accuses Coleridge of doing, whether that's a fair criticism or not. Okay. Well, a wonderful poet, and unfortunately he died at 26, and as one of our editors points out, neither Chaucer nor Shakespeare nor Milton had accomplished anything close to what Keats had accomplished by age 26, and that has always led people to wonder what he would have accomplished had he lived. He unfortunately died of tuberculosis, in a time when they did not yet have any serious means of controlling tuberculosis, and so if you got it you were almost certainly going to die from it, in Keats's time. Okay, well, now let's turn over to Letitia Elizabeth Landon, who published under L.E.L., and that became one of the most famous pseudonyms, if you will, of the nineteenth century: Letitia Elizabeth Landon, L.E.L. Now remember, in part there weren't a whole lot of opportunities for women publishing, and so women sometimes either published under names where you couldn't tell whether it was a man or a woman, or even, as in the case of George Eliot, published under a fictitious man's name. But there were also women writers who published under their own names. Okay, let's look first at "The Proud Ladye." This is in a medieval ballad form, and it relates a tale from medieval romance, and this too is part of what happened in the Romantic period. Remember that I said, in the midst of industrialization and urbanization, there were many who looked back to an earlier time and said, that is a kind of innocence that we
have lost, a kind of golden age back there in the Middle Ages. So here's "The Proud Ladye," which is not unlike Keats's "La Belle Dame sans Merci," by the way, which is also a kind of version of a medieval, or medieval-like, ballad. The Proud Ladye. Oh, what could the ladye's beauty match, were it not the ladye's pride? A hundred knights from far and near wooed at that ladye's side. The rose of the summer slept on her cheek, its lily upon her breast, and her eye shone forth like the glorious star that rises the first in the west. There were some that wooed her for her land, and some for her noble name, and more that wooed for her loveliness; but her answer was still the same. This, of course, is going to be the woman who refuses to fall in love and refuses any lover unless the lover can accomplish an impossible or near-impossible task. But of course the one who does, you see, is supposed to get the lady, and she's supposed to fall in love with him. And this is also the famous theme of The Taming of the Shrew; you know the title of Shakespeare's comedy, The Taming of the Shrew: the woman who is reticent, the woman who refuses to fall in love and so forth, who then does. There is a steep and lofty wall, where my warders trembling stand; he who at speed shall ride round its height, for him shall be my hand. You see, this is a near-impossible task. Many turned away from the deed, the hope of the wooing o'er; but many a young knight mounted the steed he never mounted more. At last there came a youthful knight from a strange and far country; the steed that he rode was white as the foam upon a stormy sea. Okay, he's the one who's going to make it; the others have fallen off, they've died, they're gone. And she who had scorned the name of love now bowed before its might, and the ladye grew meek, as if disdain were not made for that stranger knight. Okay, this is an ironic twist now in the ballad tale. She sought at first to steal his soul by dance, song, and festival; at length on bended knee she prayed he would not ride the
wall. Okay: please don't do it, don't do it, I don't want you to be killed, because now I have for the first time fallen in love. But gaily the young knight laughed at her fears, and flung him on his steed; there was not a saint in the calendar that she prayed not to in her need. She dared not raise her eyes to see if Heaven had granted her prayer, till she heard a light step; bound to her side, the gallant knight stood there. He's done it; he's won. And the ladye Adeline took from her hair a jewelled band, but the knight repelled the offered gift and turned from the offered hand. She offers him, you see, this band, this jewelled band from her hair, and he refuses the gift. And, deem not that I dared this deed, ladye, for love of thee; the honour that guides the soldier's lance is mistress enough for me. So here, notice, the victorious knight not only achieved the quest, but he now triumphs over the lady by rejecting her. Enough for me to ride the ring, the victor's crown to wear, but not in honour of the eyes of any ladye there. And then he reveals something: I had a brother whom I lost through thy proud cruelty, and far more was his love to me than woman's love can be. I came to triumph o'er the pride through which that brother fell; I laugh to scorn thy love and thee, and now, proud dame, farewell. And from that hour the ladye pined, for love was in her heart; there came dreams she could not bid depart. Her eye lost all its shining light, her cheek grew wan and pale, till she hid her faded loveliness beneath the sacred veil. And she cut off her long dark hair, and bade the world farewell, and she now abides, a veiled nun, in St.
Mary's cell. She's become a nun; where else did she have to go? In many of the romances, by the way, at the end, if the lady is separated from her lover, or whatever it may be, she goes off and she lives in a convent as a nun. And then we have another poem, which is very interesting, which we'll close with, which is a poem of grand passion, both in falling in love and in suffering love's loss. And in this poem there are lots of allusions to Byron. This is the passion of a female romantic hero, not a male romantic hero but a female romantic hero, with lots of allusions to Byron and Byron's poems, and there's a certain kind of Byronism in its passionate extremes. Here: teach it me, if you can, forgetfulness, she cries. And look over in line 35 and following: but you first called, she's speaking to a lover, my woman's feelings forth, and taught me love ere I had dreamed love's name. I loved unconsciously; your name was all that seemed in language, and to me the world was only made for you. And she became, as it were, a kind of slave. And then in 46 and following: at last I learned my heart's deep secret; for I hoped, I dreamed, you loved me; wonder, fear, delight swept my heart like a storm. 54 and following: I gave all I could, my love, my deep, my true, my fervent and faithful love. And now you bid me learn forgetfulness; it is a lesson that I soon shall learn. There is a home of quiet for the wretched, a somewhat dark and cold and silent rest; but still it is rest, for it is the grave. Well, after this we have the poet, as it were, speaking in her own voice, flinging aside that scroll and asking why she should write this, why she could write this: a woman's pride forbade to let him look upon her heart and see it was an utter ruin. An utter ruin. So we have the poetic commentator now, who, unlike the speaker at the beginning of the poem, is not naive; she is experienced, and she has experienced the pain of rejection in love. But it is nonetheless painful to her; it is despair. In 95 and following: craving,
scorpion-like, stinging itself; the heart burnt, crushed; passion's earthquakes; scorched, withered up; rise in its desolation: this is love. That's the tale that she can tell. But the one thing she can hope for is at the very end, the final stanza: at length there is somewhat of revenge, for man's most golden dreams of pride and power are vain as any woman's dreams of love; both end in weary brow and withered heart, and the grave closes over those whose hopes have lain there long before. The woman who wants love and is rejected suffers greatly, but here, in her cynicism perhaps, she takes comfort in the fact that men, in their pride and ambition, likewise are going to come at last to the grave. So we'll pick up here next time.
Literature_Lectures | 12_Freud_and_Fiction.txt | Prof: Well, now today is obviously a kind of watershed or transition in our syllabus. You remember we began with an emphasis on language. We then promised to move to an emphasis on psychological matters, and finally social and cultural determinants of literature. So far we have immersed ourselves in notions to the effect that thought and speech are constituted by language or, to put it another way, brought into being by language and that thought and speech have to be understood as inseparable from their linguistic milieu-- language here being understood sometimes broadly as a structure or a semiotic system. Now obviously our transition from language-determined ideas about speech, discourse, and literature to psychologically determined ways of thinking about discourse and literature has a rather smooth road to follow because the first two authors who borrow from Freud and understand their project to a degree in psychoanalytic terms are nevertheless using what is now for us an extremely familiar vocabulary. That is to say, they really do suppose that the medium of consciousness to which we now turn-- the psyche, the relationship between consciousness and the unconscious-- they really do suppose that this entity, whatever it may be, can be understood in terms that we take usefully from verbal thought and from linguistics. Lacan famously said, as you'll find next week, "The unconscious is structured like a language," and Brooks plainly does agree. You open Brooks and you find yourself really apart perhaps--I don't know how well all of you are acquainted with the texts of Freud. We'll say a little bit about Beyond the Pleasure Principle, which is the crucial text for our purposes; but plainly apart from the influence of and the ideas borrowed from Freud, you'll find Brooks writing on what for you is pretty familiar turf. 
For example, he begins by borrowing the Russian formalist distinction between plot and story in trying to explain what fiction is. I feel that I do ultimately have to cave in and admit to you that the Russian words for these concepts, plot and story, are syuzhet and fabula respectively, because Brooks keeps using these terms again and again. I've explained my embarrassment about using terms that I really have absolutely no idea of the meaning of except that I'm told what the meaning of them is in the books that I am reading, which are the same books that you're reading. In any case, since Brooks does constantly use these terms, I have to overcome embarrassment and at least at times use them myself. They're a little counterintuitive, by the way, if you try to find cognates for them in English because you'd think that syuzhet would be "subject matter," in other words something much closer to what the formalists mean in English by "story." On the other hand, you'd think that fabula might well be something like "plot" or "fiction," but it is not. It's just the opposite. Syuzhet is the plot, the way in which a story is constructed, and the fabula is the subject matter or material out of which the syuzhet is made. All right. In addition to the use of the relationship between plot and story, we also find Brooks using terms that are now, having read Jakobson and de Man, very familiar to us: the terms "metaphor" and "metonymy." There's plainly a tendency in modern literary theory to reduce all the tropes of rhetoric to just these two terms.
When needed, they back up a little bit and invoke other terms, but the basic distinction in rhetoric, as literary theory tends to understand it, is the distinction between metaphor-- which unifies, synthesizes, and brings together-- and metonymy, which puts one thing next to another by a recognizable gesture toward contiguity but which nevertheless does not make any claim or pretension to unify or establish identity-- to insist, in short, that A is B. These two terms, as I say, are understood reductively but usefully to be the essential tropes of rhetoric and appropriated by modern theory in that way. Now Brooks then uses these terms in ways that should be familiar to us, as I say. We have now been amply exposed to them in reading Jakobson and de Man. So there is a language of language in Brooks' essay, "Freud's Masterplot," despite the fact that the framework for his argument is psychoanalytic and that he is drawing primarily on the text of Freud's Beyond the Pleasure Principle. So what does he take from Freud? What interests Brooks about Freud? He is, by the way, a distinguished Freudian scholar who knows everything about Freud and is interested, in fact, by every aspect of Freud, but for the purpose of constructing the argument here and in the book to which this essay belongs, the book called Reading for the Plot-- for the purposes of constructing that argument, what he takes in particular from Freud is the idea of structure: the idea that, insofar as we can imagine Freud anticipating Lacan-- Lacan himself certainly believed that Freud anticipated him-- the idea that the unconscious is structured like a language. In terms of creating fictional plots, in terms of the nature of fiction, which is what interests Brooks--well, what does this mean? Aristotle tells us that a plot has a beginning, a middle and an end. "Duh!" of course, is our response, and yet at the same time we can't help sensing a degree of mystery in even so seemingly simple a pronouncement.
A beginning, of course--well, it has to have a beginning. We assume that unless we're dealing with Scheherazade, it has to have an end, but at the same time we might well ask ourselves, why does it have a middle? What is the function of the middle with respect to a beginning and an end? Why does Aristotle say, as Brooks quotes him, that a plot should have a certain magnitude? Why shouldn't it be shorter? Why shouldn't it be longer? In other words, what is the relation of these parts, and what in particular does the middle have to do with revealing to us the necessary connectedness of the beginning and the end: not just any beginning or any end but a beginning which precipitates a kind of logic, and an end which in some way, whether tragically or comically, satisfactorily resolves that logic? How does all this work? Brooks believes that he can understand it, as we'll try to explain, in psychoanalytic terms. So this he gets from Freud, and he also gets, as I've already suggested, the methodological idea that one can think of the machinations of a text in terms of the distinction that Freud makes-- not in Beyond the Pleasure Principle, but in The Interpretation of Dreams in the passages that you read for today's assignment taken from that book, The Interpretation of Dreams, about the dream work. It's there that Freud argues that really the central two mechanisms of the dream work are condensation and displacement. Condensation takes the essential symbols of the dream and distills them into a kind of over-determined unity so that if one studies the dream work one can see the underlying wish or desire expressed in the dream manifest in a particular symbolic unity. That's the way in which the dream condenses, but at the same time the dream is doing something very, very different, and it's called displacement. 
There the essential symbols of the dream-- that is to say, the way in which the dream is attempting to manifest that which it desires, are not expressed in themselves, but are rather displaced on to sometimes obscurely related ideas or symbols, images, or activities that the interpreter, that the person trying to decode the dream, needs to arrive at and to understand. So displacement is a kind of delay or detour of understanding, and condensation, on the other hand, is a kind of distillation of understanding. The extraordinary thing that Freud remarks on as he studies dreams in this book-- published in 1900, by the way--the extraordinary thing about the way in which dreams work is that there seems to be a kind of coexistence or simultaneity of these effects. The dream work simultaneously condenses and displaces that which it is somehow or another struggling to make manifest as its object of desire. Now the first person to notice that there might be-- there are a variety of people who noticed that there might be a connection between condensation and displacement and metaphor and metonymy, most notably Jacques Lacan whom Brooks quotes to this effect: that the work in everyday discourse, in what we say but also in our dreams and in what we tell our analyst, can be understood as operating through the medium of these two tropes. Condensation, in other words, is metaphorical in its nature, and displacement is metonymic in its nature. Metonymy is the delay or perpetual, as we gathered also from Derrida, différance of signification. Metaphor is the bringing together in a statement of identity of the discourse that's attempting to articulate itself.
Again we see in fiction, as Brooks argues in his essay, that these two rhetorical tendencies, the metaphorical and metonymic, coexist-- and of course you can hear the implicit critique of de Man in the background-- and may or may not work in harmony, may or may not conduce to an ultimate unity, but nevertheless do coexist in such a way that we can understand the unraveling of a fictional narrative as being like the processes we see at work in the unraveling of dreams. So it's these two elements that Brooks is interested in in Freud and that he primarily does take from Freud. Now this means, among other things, that Brooks is not anything like what we may spontaneously caricature perhaps as a traditional psychoanalytic critic. Brooks is not going around looking for Oedipus complexes and phallic symbols. Brooks is, as I hope you can see, interested in very different aspects of the Freudian text, and he says as much at the end of the essay on page 1171 in the right-hand column where he says: … [T]here can be psychoanalytic criticism of the text itself that does not become ["This is what I'm doing," he says]-- as has usually been the case--a study of the psychogenesis of the text (the author's unconscious), the dynamics of literary response (the reader's unconscious), or the occult motivations of the characters (postulating an "unconscious" for them). In other words, Brooks is not interested in developing a theory of the author or a theory of character. Now I don't think he really means to be dismissive of Freudian criticism. I think he's really just telling us that he's doing something different from that. I would remind you in passing that although we don't pause over traditional Freudian criticism in this course, it can indeed be extremely interesting: just for example, Freud's disciple, Ernest Jones, wrote an influential study of Shakespeare's Hamlet in which he showed famously that Hamlet has an Oedipus complex. Think about the play.
You'll see that there's a good deal in what Jones is saying; and in fact, famously in the history of the staging and filming of Shakespeare-- as you probably know, Sir Laurence Olivier took the role of Hamlet under the influence of Ernest Jones. In the Olivier production of Hamlet, Hamlet, let's just say, made it painfully clear in his relations with Gertrude that he had an Oedipus complex. Again, there were actual literary texts written directly under the influence of Freud. One thinks of D.H. Lawrence's Sons and Lovers, for example, in which the central character, Paul Morel, is crippled by an Oedipus complex that he can't master, and the difficulties and complications of the plot are of this kind. Moving closer to the present, an important figure in literary theory whom we'll be studying in this course, Harold Bloom, can be understood to be developing in his theoretical texts, beginning with The Anxiety of Influence, a theory of the author-- that is to say, a theory that is based on the relationship between belated poets and their precursors, which is to say a relationship between sons and fathers. So there is a certain pattern in--and of course, I invoke this pattern in arguing that Levi-Strauss' version of the Oedipus myth betrays his Oedipus complex in relation to Freud. Plainly, Freudian criticism with these sorts of preoccupations is widespread, continues sometimes to appear, and cannot simply be discounted or ignored as an influence in the development of thinking about literature or of the possibilities of thinking about literature.
But the odd thing, or maybe not so odd-- the interesting thing, that is, in Brooks' work is that although the text is not there to tell us something about its author or to tell us something about its characters, even though character is important in fiction and that's what Brooks is primarily talking about-- although it's not there to do those things it is nevertheless, like an author or a character, in many ways alive. That is to say, the text is there to express desire, to put in motion, and to make manifest desire or a desire. That is a rather odd thing to think about, especially when Brooks goes so far as to say that he has a particular desire in mind. The text, in other words, the structure of the text, or the way in which the text functions is to fulfill in some way or another a desire for reduced excitation: that is to say, the desire which can be associated with the pleasure principle in sexual terms and can be associated with the idea of the death wish that Freud develops in Beyond the Pleasure Principle that I'll be coming back to as the reduction of excitation that would consist in being dead. In these ways--and it remains to see whether, or to what extent, these ways are cooperative-- Brooks understands the structure, the delay, the arabesque, or postponement of the end one finds in the text to involve a kind of coexistence of the sort that I have been talking about between relations to the possibility through desire of reducing excitation, being excited, and reducing excitation. Now obviously both dreams and stories don't just express this desire; they also delay it. I'm sure we have all had the experience of waking up--it's an experience, by the way, which is an illusion; it hasn't really been the case--and thinking that we have been dreaming the same damn thing all night long: in other words, that we have just been interminably stuck in a dream predicament which repeats itself again and again and again to the point of absolute total tedium. 
Many of the dreams we have are neither exciting nor the reverse but simply tedious. Whatever excitement they may have entailed in the long run, we feel as we wake up that they go on too long. Perhaps fiction does have this superiority over the dream work: that its art, that its structure, is precisely the protraction of delay to a desired degree but not unduly beyond that degree. But it's not just that the middles of fiction involve these processes of delay. It's that they seem also--and this is one of the reasons Brooks does have recourse to this particular text of Freud-- they also have the curious tendency to revisit unpleasurable things. That is to say, it's not that the middles of fiction aren't exciting. We love to read and everything we read is a page turner, all to the good; but the fact is our fascination with reading isn't simply a fascination that takes the form of having fun. In fact, so much of what we read in fiction is distinctly unpleasurable. We wince away from it even as we turn the page. One way to put it, especially in nineteenth-century realism which particularly interests Brooks, is all these characters are just madly making bad object choices. They're falling in love with the wrong person. They're getting stuck in sticky situations that they can't extract themselves from because they're not mature enough, because they haven't thought things through, and because fate looms over the possibility of making a better choice-- however the case may be, the experiences that constitute the middles even of the greatest and the most exciting fiction do have a tendency, if one thinks about them from a certain remove, to be unpleasurable. Why, in other words, return to what isn't fun, to what isn't pleasurable, and what can this possibly have to do with the pleasure principle? Now that's precisely the question that Freud asked himself in Beyond the Pleasure Principle, a text which begins with a consideration of trauma victims.
It's written at the end of the First World War, and you should understand this text as not isolated in the preoccupation of writers in Europe. Almost contemporary with Beyond the Pleasure Principle are novels written in England partly as a result of the making public of findings of psychologists about traumatic war victims as the war came to its conclusion. Most of you have read Virginia Woolf's Mrs. Dalloway, and you should recognize that her treatment of Septimus Smith in Mrs. Dalloway is a treatment of a traumatized war victim. Rebecca West, a contemporary and an acquaintance of hers who wrote a good many novels, wrote one in particular called The Return of the Soldier, the protagonist of which is also a traumatized war victim. So it was a theme of the period and Freud's Beyond the Pleasure Principle contributes to this theme. Brooks himself likes to refer to the text of Beyond the Pleasure Principle as itself a master plot-- in other words as having a certain fictive character. It would be, I think, extremely instructive to read it alongside The Return of the Soldier or Mrs. Dalloway for the reasons I've mentioned. Okay. So anyway, Freud begins by saying, "The weird thing about these trauma victims whom I have had in my office is that in describing their dreams and even in their various forms of neurotic repetitive behavior, they seem compulsively to repeat the traumatic experience that has put them in the very predicament that brought them to me. In other words, they don't shy away from it. They don't in any strict sense repress it. They keep compulsively going back to it. Why is that?
How can that possibly be a manifestation of the only kind of drives I had ever thought existed up until the year 1919, namely drives that we can associate in one way or another with pleasure-- with the pleasure principle, obviously; with a sort of implicit sociobiological understanding that the protraction of life is all about sexual reproduction and that the displacement or inhibition of the direct drives associated with that take the form of the desire to succeed, the desire to improve oneself, and the desire to become more complex emotionally and all the rest of it? All of this we can associate with the pleasure principle. How does this compulsion to return to the traumatic event in any way correspond to or submit itself to explanation in terms of the pleasure principle?" So then he turns to an example in his own home life, his little grandson, little Ernst, standing in his crib throwing a spool tied to a string out of the crib saying, "Fort!" meaning "away, not there," and then reeling it back in and saying, "Da!" meaning "there it is again": "Fort! Da!" Why on earth is little Ernst doing this? Well, Freud pretty quickly figures out that what little Ernst is doing is finding a way of expressing his frustration about the way in which his mother leaves the room; in other words, his mother is not always there for him. So what is this play accomplishing? He's got her on a string, right? Sure, she goes away--we have to understand this: we know our mother goes away, but guess what? I can haul her back in, and there she is again. This is the achievement of mastery, as Freud puts it and as Brooks follows him, that we can acquire through the repetition of a traumatic event.
So maybe that's the way to think about it, but it can't just be the achievement of mastery alone, because nothing can do away with or undermine the fact that part of the drive involved seems to be to return to the trauma-- that is to say, to keep putting before us the unhappy and traumatic nature of what's involved. So the compulsion to repeat, which of course manifests itself in adults in various forms of neurotic behavior-- by the way, we're all neurotic and all of us have our little compulsions, but it can get serious in some cases-- the compulsion to repeat takes the form, Freud argues, especially if we think of it in terms of an effort at mastery, of mastering in advance through rehearsal, as it were, the inevitability of death, the trauma of death which awaits and which has been heralded by traumatic events in one's life, a near escape: for example, in a train accident or whatever the case may be. So Freud in developing his argument eventually comes to think that the compulsion to repeat has something to do with a kind of repeating forward of an event which is in itself unnarratable: the event of death, which is of course that which ultimately looms. Now it's in this context that Freud begins to think about how it could be that the organism engages itself with thoughts of this kind. What is this almost eager anticipation of death? He notices that in certain biological organisms, it can be observed--this by the way has been wildly disputed by people actually engaged in biology, but it was a useful metaphor for the development of Freud's argument: he noticed that there is in certain organisms a wish to return to a simpler and earlier state of organic existence, which is to say to return to that which isn't just what we all look forward to but was, after all, that which existed prior to our emergence into life. The relationship between the beginning and the end that I have been intimating, in other words, is a relation of death. 
I begin inanimate and I end inanimate, and Freud's argument is that there is somehow in us a compulsion or a desire, a drive, to return--like going home again or going back to the womb to return to that inanimate state. "The aim of all life," he then says, "is death." Well, now maybe the important thing is to allow Brooks to comment on that so that you can see how he makes use of Freud's idea, and to move us a little bit closer to the application of these ideas to the structure of a literary plot or of a fictional plot. So on page 1166 in the right-hand margin, the beginning of the second paragraph, Brooks says: We need at present to follow Freud into his closer inquiry concerning the relation between the compulsion to repeat and the instinctual. The answer lies in "a universal attribute of instinct and perhaps of organic life in general," that "an instinct is an urge inherent in organic life to restore an earlier state of things." Building on this idea, page 1169, the left-hand column, about halfway down: This function [of the drives] is concerned "with the most universal endeavor of all living substance-- namely to return to the quiescence of the inorganic world." Kind of pleasant, I guess, right? "The desire to return to the quiescence of the inorganic world." The aim in this context, in this sense--the aim of all life is death. But there's more, and this is why novels are long: not too long, not too short, but of a certain length-- of a certain magnitude, as Aristotle puts it. There is more because the organism doesn't just want to die. The organism is not suicidal. That's a crucial mistake that we make when we first try to come to terms with what Freud means by "the death wish." The organism wants to die on its own terms, which is why it has an elaborate mechanism of defenses-- "the outer cortex," as Freud is always calling it-- attempting to withstand, to process, and to keep at arm's length the possibility of trauma.
You blame yourself as a victim of trauma for not having sufficient vigilance in your outer cortex to ward it off. Part of the compulsion to repeat is, in a certain sense--part of the hope of mastery in the compulsion to repeat is to keep up the kind of vigilance which you failed to have in the past, and therefore failed to ward it off. So the organism only wishes to die on its own terms. If you are reminded here of the passage from Tynjanov that I gave you, where he makes the distinction between literary history as evolving and literary history as modified by outside circumstances, I think it would be a legitimate parallel. What the organism, according to Freud, wants to do is evolve toward its dissolution, not to be modified--not, in other words, to be interfered with by everything from external trauma to internal disease. It doesn't want that. It wants to live a rich and full life. It wants to live a life of a certain magnitude, but with a view to achieving the ultimate desired end, which is to return to an inorganic state on its own terms. So there is this tension in the organism between evolving to its end and being modified prematurely toward an end, a modification which in terms of fiction would mean you wouldn't have a plot, right? You might have a beginning, but you would have a sudden cutting off that prevented the arabesque of the plot from developing and arising. Now what Brooks argues following Freud is that to this end, the creating of an atmosphere in which with dignity and integrity, as it were, the organism can progress toward its own end without interference--what Brooks following Freud argues is that in this process, the pleasure principle and the death wish cooperate. This is on page 1166, bottom of the right-hand column, and then over to 1167, a relatively long passage: Hence Freud is able to proffer, with a certain bravado, the formulation: "the aim of all life is death."
We are given an evolutionary image of the organism in which the tension created by external influences has forced living substance to "diverge ever more widely from its original course of life and to make ever more complicated détours before reaching its aim of death." In this view, the self-preservative instincts function to assure that the organism shall follow its own path to death, to ward off any ways of returning to the inorganic which are not immanent to the organism itself. In other words, "the organism wishes to die only in its own fashion." It must struggle against events (dangers) which would help to achieve its goal too rapidly--by a kind of short-circuit. Again on page 1169, left-hand column, a little bit farther down from the passage we quoted before, Brooks says: … [W]e could say that the repetition compulsion and the death instinct serve the pleasure principle; in a larger sense [though], the pleasure principle, keeping watch on the invasion of stimuli from without and especially from within, seeking their discharge, serves the death instinct, making sure that the organism is permitted to return to quiescence. It's in this way that these two differing drives coexist and in some measure cooperate in the developing and enriching of the good life, and in the developing and enriching of the good plot. An obvious problem with this theory, and Freud acknowledges this problem in Beyond the Pleasure Principle, is that it's awfully hard to keep death and sex separate. In other words, the reduction of excitation is obviously something that the pleasure principle is all about. The purpose of sex is to reduce excitation, to annul desire. The purpose of death, Freud argues, is to do the same thing. Well, how can you tell the one from the other? There's a rich vein of literary history which insists on their interchangeability. We all know what "to die" means in early modern poems.
We all know about "Liebestod" in "Tristan and Isolde," the moments of death in literature which obviously are sexually charged. There is a kind of manifest and knowing confusion of the two in literature-- and Freud always says that the poets preceded him in everything that he thought-- which suggests that it is rather hard to keep these things separate. For example, by the way, the compulsion to repeat nasty episodes, to revisit trauma, and to repeat the unpleasurable-- well, that could just be called masochism, couldn't it? It could be called something which is a kind of pleasure and which therefore could be subsumed under the pleasure principle and would obviate the need for a theory of the death drive as Freud develops it in Beyond the Pleasure Principle. Now Freud acknowledges this. He says that it is difficult to make the distinction. He feels that a variety of sorts of clinical evidence at his disposal warrant the distinction, but it is not an easy one. It's one that I suppose we could continue to entertain as a kind of skepticism about this way of understanding the compulsion to repeat as somehow necessarily entailing a theory of the death wish. All right. Now quickly, as to the plot: desire emerges or begins as the narratable. What is the unnarratable? The unnarratable is that immersion in our lives such that there is no sense of form or order or structure. Anything is unnarratable if we don't have a sense of a beginning, a middle, and an end to bring to bear on it. The narratable, in other words, must enter into a structure. So the beginning, which is meditated on by Sartre's Roquentin in La Nausee and quoted to that effect by Brooks on the left-hand column of page 1163-- the narratable begins in this moment of entry into that pattern of desire that launches a fiction. 
We have speculated on what that desire consists in, and so the narratable becomes a plot and the plot operates through metaphor, which unifies the plot, which shows the remarkable coherence of all of its parts. A narrative theory is always talking with some satisfaction about how there's no such thing in fiction as irrelevant detail. In other words, nothing is there by accident. That is the metaphoric pressure brought to bear on plotting, sort of, in the course of composition. Everything is there for a reason, and the reason is arguably the nature of the underlying desire that's driving the plot forward; but on the other hand, metonymy functions as the principle of delay, the detour, the arabesque, the refusal of closure; the settling upon bad object choice and other unfortunate outcomes, the return of the unpleasurable--all the things that happen in the structure of "middles" in literary plots. The plot finally binds material together, and both metaphor and metonymy are arguably forms of binding. Look at page 1166, the right-hand column, bottom of the first paragraph. Brooks says: To speak of "binding" in a literary text is thus to speak of any of the formalizations (which, like binding, may be painful, retarding) that force us to recognize sameness within difference, or the very emergence of a sjužet from the material of fabula. Okay. Now I want to turn to Tony as an instance of the way in which reading for the plot can take place. I also want to mention that the choice of these materials for today's assignment is not just a way into questions of psychoanalysis as they bear on literature and literary theory, but also a gesture toward something that those of you whose favorite form of reading is novels may wish we had a little more of in a course of this kind-- namely narrative theory: narratology. 
I commend to you the opening pages of Brooks' essay where he passes in review some of the most important work in narrative theory, work that I mentioned in passing when I talked about structuralism a couple of weeks back and work which, for those of you who are interested in narrative and narrative theory, you may well wish to revisit. Roland Barthes, Tzvetan Todorov, and Gerard Genette are the figures to whom Brooks is primarily expressing indebtedness within that tradition. Anyway: Tony the Tow Truck. I would suggest that in the context of Beyond the Pleasure Principle we could re-title Tony the Tow Truck as The Bumpy Road to Maturity. It certainly has the qualities of a picaresque fiction. It's on the road, as it were, and the linearity of its plot-- the way in which the plot is like beads on a string, which tends to be the case with picaresque fiction, and which by the way is also a metonymic aspect of the fiction-- lends the feeling of the picaresque to the narrative. Quickly to reread it--I know that you all have it glued to your wrists, but in case you don't, I'll reread it: I am Tony the Tow Truck. I live in a little yellow garage. I help cars that are stuck. I tow them to my garage. I like my job. One day I am stuck. Who will help Tony the Tow Truck? "I cannot help you," says Neato the Car. "I don't want to get dirty." "I cannot help you [see, these are bad object choices, right?]," says Speedy the Car. "I am too busy." I am very sad. Then a little car pulls up. It is my friend, Bumpy. Bumpy gives me a push. He pushes and pushes [by the way, this text, I think, is very close to its surface a kind of anal-phase parable. In that parable, the hero is not Tony in fact but a character with whom you are familiar if you're familiar with South Park, and that character is of course the one who says, "He pushes and pushes…"] and I am on my way." [In any case that is part of the narrative, and then:] "Thank you, Bumpy," I call back. "You're welcome," says Bumpy.
Now that's what I call a friend. So that's the text of Tony the Tow Truck. Now we've said that it's picaresque. We can think in terms of repetition, obviously, as the delay that sets in between an origin and an end. We've spoken of this in this case as--well, it's the triadic form of the folk tale that Brooks actually mentions in his essay; but it is, in its dilation of the relationship of beginning and end, a way of reminding us precisely of that relation. He comes from a little yellow garage. The question is, and a question which is perhaps part of the unnarratable, is he going back there? We know he's on his way, but we don't know, if we read it in terms of Beyond the Pleasure Principle, whether he's on his way back to the little yellow garage or whether-- and there's a premonition of this in being stuck, in other words in having broken down-- whether he's on his way to the junkyard. In either case, the only point is that he will go to either place because the little yellow garage is that from which he came; in either case--little yellow garage or junkyard-- he's going to get there on his own terms, but not as a narcissist and not as the person who begins every sentence in the first part of the story with the word "I," because you can't just be an autonomous hero. On your journey, and this is also true of the study of folklore, you need a helper. That's part of fiction. You need another hero. You need a hero to help you, and having that hero, encountering the other mind as helper, is what obviates the tendency, even in a nice guy like Tony, toward narcissism which is manifest in the "I," "I," "I" at the beginning of the story. Notice that then the "I" disappears, not completely but wherever it reappears it's embedded rather than initial. It is no longer, in other words, that which drives the line in the story. 
So the arabesque of the plot, as I say, is a matter of encountering bad object choices and overcoming them: neatness, busyness--choices which, by the way, are on the surface temptations. We all want to be neat and busy, don't we? But somehow or another it's not enough because the otherness, the mutuality of regard that this story wants to enforce as life-- as life properly lived--is not entailed in and of itself in neatness and busyness. Resolution and closure, then, is mature object choice and in a certain sense there, too, it's a push forward, but we don't quite know toward what. We have to assume, though, in the context of a reading of this kind that it's a push toward a state in which the little yellow garage and the unnarratable junkyard are manifest as one and the same thing. Now as metonymy, the delays we have been talking about, the paratactic structure of the way in which the story is told-- all of those, and the elements of repetition, are forms that we recognize as metonymic, but there's something beyond that at the level of theme. This is a story about cars. This is a story about mechanical objects, some of which move--remember those smiling houses in the background-- and some of which are stationary, but they're all mechanical objects. They're all structures. In other words, they're not organic. This is a world understood from a metonymic point of view as that which lacks organicity, and yet at the same time the whole point of the story is thematically metaphoric. It is to assert the common humanity of us all: "That's what I call a friend." The whole point of so many children's stories, animal stories, other stories like this, The Little Engine that Could, and so on is to humanize the world: to render friendly and warm and inviting to the child the entire world, so that Tony is not a tow truck--Tony's a human being, and he realizes humanity in recognizing the existence of a friend. 
The unity of the story, in other words, as opposed to its metonymic displacements through the mechanistic, is the triumphant humanization of the mechanistic and the fact that as we read the story, we feel that we are, after all, not in mechanical company but in human company. That's the effect of the story and the way it works. In terms of the pleasure principle then, life is best in a human universe and in terms of-- well, in terms of Beyond the Pleasure Principle, the whole point of returning to an earlier state, the little yellow garage or junkyard, is to avert the threat that one, being stuck, will return to that junkyard prematurely or along the wrong path. Okay. So next time we will turn to the somewhat formidable task of understanding Lacan.
[Literature_Lectures / 2_Introduction_cont.txt]
Prof: Last time we introduced the way in which the preoccupation with literary and other forms of theory in the twentieth century is shadowed by a certain skepticism, but as we were talking about that we actually introduced another issue which isn't quite the same as the issue of skepticism-- namely, determinism. In other words, we said that in intellectual history, first you get this movement of concern about the distance between the perceiver and the perceived, a concern that gives rise to skepticism about whether we can know things as they really are. But then as a kind of aftermath of that movement in figures like Marx, Nietzsche and Freud--and you'll notice that Foucault reverts to such figures when he turns to the whole question of "founders of discursivity," we'll come back to that-- in figures like that, you get the further question of not just how we can know things in themselves as they really are but how we can trust the autonomy of that which knows: in other words, how we can trust the autonomy of consciousness if in fact there's a chance-- a good chance, according to these writers-- that it is in turn governed by, controlled by, hidden powers or forces. This question of determinism is as important in the discourse of literary theory as the question of skepticism. They're plainly interrelated in a variety of ways, but it's more to the question of determinism I want to return today. Now last time, following Ricoeur, I mentioned Marx, Nietzsche and Freud as key figures in the sort of secondary development that somehow inaugurates theory, and then I added Darwin.
It seems particularly important to think of Darwin when we begin to think about the ways in which in the twentieth century, a variety of thinkers are concerned about human agency-- that is to say, what becomes of the idea that we have autonomy, that we can act or at least that we can act with a sense of integrity and not just with a sense that we are being pulled by our strings like a puppet. In the aftermath of Darwin in particular, our understanding of natural selection, our understanding of genetic hard-wiring and other factors, makes us begin to wonder in what sense we can consider ourselves, each of us, to be autonomous subjects. And so, as I say, the question of agency arises. It's in that context, needless to say, that I'd like to take a look at these two interesting passages on the sheet that has Anton Chekhov on one side and Henry James on the other. Let's begin with the Chekhov. The Cherry Orchard, you know, is about the threat that socioeconomic conditions--the conditions that do ultimately lead to the Revolution of 1905--pose to a landed estate, and the perturbation and turmoil into which the cast of characters is thrown by this threat. Now one of the more interesting characters, who is not really a protagonist in the play for class reasons, is a house servant named Yepihodov, and Yepihodov is a character who is, among other things, a kind of autodidact. That is to say, he has scrambled into a certain measure of knowledge about things. He is full of a kind of understandable self-pity, and his speeches are in some ways more characteristic of the gloomy intellectual milieu that is reflected in Chekhov's text really than almost anyone else's. I want to quote to you a couple of them. Toward the bottom of the first page, he says, "I'm a cultivated man. I read all kinds of remarkable books and yet I can never make out what direction I should take, what it is that I want, properly speaking."
As I read, pay attention to the degree to which he's constantly talking about language and about the way in which he himself is inserted into language. He's perpetually seeking a mode of properly speaking. He is a person who is somewhat knowledgeable about books, feels himself somehow to be caught up in the matrix of book learning-- in other words, a person who is very much preoccupied with his conditioning by language, not least when perhaps unwittingly he alludes to Hamlet. "Should I live or should I shoot myself?"--properly speaking, "To be or not to be?" In other words, he inserts himself into the dramatic tradition to which as a character he himself belongs and shows himself to be in a debased form derived from one of those famous charismatic moments in which a hero utters a comparable concern. So in all sorts of ways, in this simple passage we find a character who's caught up in the snare--if I can put it that way--the snare of language. To continue, he says at the top of the next page, "Properly speaking and letting other subjects alone, I must say"--everything in terms of what other discourse does and what he himself can say, and of course, it's mainly about "me"-- "regarding myself among other things, that fate treats me mercilessly as a storm treats a small boat." And the end of the passage is, "Have you read Buckle?" Now Buckle is a forgotten name today, but at one time he was just about as famous as Oswald Spengler who wrote The Decline of the West. He was a Victorian historian preoccupied with the dissolution of Western civilization. In other words, Buckle was the avatar of the notion in the late nineteenth century that everything was going to hell in a handbasket. One of the texts that Yepihodov has read that in a certain sense determines him is Buckle. "Have you read Buckle? I wish to have a word with you Avdotya Fyodorovna." 
In other words, I'm arguing that the saturation of these speeches with signs of words, language, speaking, words, books, is just the dilemma of the character. That is to say, he is in a certain sense book- and language-determined, and he's obscurely aware that this is his problem even as it's a source of pride for him. Turning then to a passage in a very different tone from James's Ambassadors. An altogether charming character, the elderly Lambert Strether, who has gone to-- most of you know--has gone to Paris to bring home the young Chad Newsome, a relative who is to take over the family business, the manufacture of an unnamed household article in Woollett, Massachusetts, probably toilet paper. In any case, Lambert Strether, as he arrives in Paris, has awakened to the sheer wonder of urbane culture. He recognizes that he's missed something. He's gone to a party given by a sculptor, and at this party he meets a young man named Little Bilham whom he likes, and he takes Little Bilham aside by the lapel, and he makes a long speech to him, saying, "Don't do what I have done. Don't miss out on life. Live all you can. It is a mistake not to. And this is why," he goes on to say, "the affair, I mean the affair of life"-- it's as though he's anticipating the affair of Chad Newsome and Madame de Vionnet, which is revealed at the end of the text-- "couldn't, no doubt, have been different for me for it's"-- "it" meaning life-- "[life is] at the best a tin mold either fluted or embossed with ornamental excrescences or else smooth and dreadfully plain, into which, a helpless jelly, one's consciousness, is poured so that one takes the form, as the great cook says"-- the great cook, by the way, is Brillat-Savarin-- "one takes the form, as the great cook says, and is more or less compactly held by it. One lives, in fine, as one can. Still one has the illusion of freedom." Here is where Strether says something very clever that I think we can make use of. 
He says, "Therefore, don't be like me without the memory of that illusion. I was either at the right time too stupid or too intelligent to have it. I don't quite know which." Now if he was too stupid to have it, then of course he would have been liberated into the realm of action. He would have been what Nietzsche in an interesting precursor text calls "historical man." He simply would have plunged ahead into life as though he had freedom, even though he was too stupid to recognize that it was an illusion. On the other hand, if he was too intelligent to, as it were, bury the illusion and live as though he were free, if he was too intelligent to do that, he's a kind of an avatar of the literary theorist-- in other words, the sort of person who can't forget long enough that freedom is an illusion in order to get away from the preoccupations that, as I've been saying, characterize a certain kind of thinking in the twentieth century. And it's rather charming at the last that he says--because how can we know anything--"I don't quite know which." That, too, strikes me as a helpful and also characteristic passage that can introduce us to today's subject, which is the loss of authority: that is to say, in Roland Barthes' terms, "the death of the author," and in Foucault's terms, the question "What is an author?" In other words, in the absence of human agency, the first sacrifice for literary theory is the author, the idea of the author. That's what will concern us in this second, still introductory lecture to this course. We'll get into the proper or at least more systematic business of the course when we turn to hermeneutics next week. Now let me set the scene. This is Paris. It wouldn't have to be Paris. It could be Berkeley or Columbia or maybe Berlin. It's 1968 or '69, spilling over in to the seventies. 
Students and most of their professors are on the barricades, that is to say in protest not only against the war in Vietnam but the outpouring of various forms of authoritative resistance to protest that characterized the sixties. There is a ferment of intellectual revolt which takes all sorts of forms in Paris but is first and foremost perhaps organized by what quickly in this country became a bumper sticker: "Question authority." This is the framework in which the then most prominent intellectual in France writes an essay at the very peak of the student uprising, entitled "What is an Author?" and poses an answer which is by no means straightforward and simple. You're probably a little frustrated because maybe you sort of anticipated what he was going to say, and then you read it and you said, "Gee, he really isn't saying that. In fact, I don't quite know what he is saying" and struggled more than you're expected to because you anticipated what I've just been saying about the setting and about the role of Foucault and all the rest of it, and were possibly more confused than you might have expected to be. Yet at the same time, you probably thought "Oh, yeah, well, I did come out pretty much in the place I expected to come out in despite the roundabout way of having gotten there." Because this lecture is introductory, I'm not going to spend a great deal of time explicating the more difficult moments in his argument. I am going to emphasize what you perhaps did anticipate that he would say, so that can take us along rather smoothly. There is an initial issue. Because we're as skeptical about skepticism as we are about anything else we're likely to raise our eyebrows and say, "Hmm. Doesn't this guy Foucault think he's an author? You know, after all, he's a superstar. He's used to being taken very seriously. Does he want to say that he's just an author function, that his textual field is a kind of set of structural operations within which one can discover an author? 
Does he really want to say this?" Well, this is the question raised by the skeptic about skepticism or about theory and it's one that we're going to take rather seriously, but we're going to come back to it because there are ways, it seems to me, of keeping this question at arm's length. In other words, Foucault is up to something interesting, and probably we should meet him at least halfway to see, to measure, the degree of interest we may have in it. So yes, there is the question--there is the fact that stands before us--that this very authoritative-sounding person seems to be an author, right? I never met anybody who seemed more like an author than this person, and yet he's raising the question whether there is any such thing, or in any case, the question how difficult it is to decide what it is if there is. Let me digress with an anecdote which may or may not sort of help us to understand the delicacy of this relationship between a star author, a person undeniably a star author, and the atmosphere of thought in which there is, in a certain sense, no such thing as an author. An old crony and former colleague of mine was taking a course at Johns Hopkins in the 1960s. This was a time when Hopkins led all American universities in the importing of important European scholars, and it was a place of remarkable intellectual ferment. This particular lecture course was being given by Georges Poulet, a so-called phenomenological critic. That's one of the "isms" we aren't covering in this seminar. In any case, Poulet was also a central figure on the scene of the sixties. Poulet would be lecturing along, and the students had somehow formed a habit of from time to time-- by the way, you can form this habit, too--of raising their hand, and what they would do is they would utter a name-- at least this is what my friend noticed. They would raise their hand and they would say, "Mallarmé." And Poulet would look at them and say, "Mais, oui! Exactement! A mon avis aussi!"
And then he would go on and continue to lecture for a while. Then somebody else would raise his hand and say, "Proust." "Ah, précisément! Proust. Proust." And then he'd continue along. So my friend decided he'd give it a try and he raised his hand and he said, "Voltaire," and Poulet said "Quoi donc… Je ne vous comprends pas," and then paused and hesitated and continued with his lecture as though my friend had never asked his question. Now this is a ritual of introducing names, and in a certain sense, yes, the names of authors, the names of stars; but at the same time, plainly names that stand for something other than their mere name, names that stand for domains or fields of interesting discursivity: that is to say-- I mean, Poulet was the kind of critic who believed that the oeuvre of an author was a totality that could be understood as a structural whole, and his criticism worked that way. And so yes, the signal that this field of discursivity is on the table is introduced by the name of the author but it remains just a name. It's an author without authority, yet at the same time it's an author who stands for, whose name stands for, an important field of discourse. That's of course what my friend--because he knew perfectly well that when he said "Voltaire," Poulet would have nothing to do with it--that's the idea that my friend wanted to experiment with. There are relevant and interesting fields of discourse and there are completely irrelevant fields of discourse, and some of these fields are on the sides of angelic discourse and some of these fields are on the side of the demonic. We simply, kind of spontaneously, make the division. Discursivity, discourse: that's what I forgot to talk about last time. When I said that sometimes people just ultimately throw up their hands when they try to define literature and say, "Well, literature's just whatever you say it is. Fine.
Let's just go ahead," they are then much more likely, rather than using the word "literature," to use the word "discourse" or "textual field," "discursivity." You begin to hear, or perhaps smell, the slight whiff of jargon that pervades theoretical writing. It often does so for a reason. This is the reason one hears so much about discourse: simply doubt about the generic integrity of various forms of discourse. One can speak hesitantly of literary discourse, political discourse, anthropological discourse, but one doesn't want to go so far as to say literature, political science, anthropology. It's a habit that arises from the sense of the permeability of all forms of utterance with respect to each other, and that habit, as I say, is a breakdown of the notion that certain forms of utterance can be understood as a delimited, structured field. One of the reasons this understanding seems so problematic is the idea that we don't appeal to the authority of an author in making up our minds about the nature of a given field of discourse. We find the authority of the author instead somewhere within the textual experience. The author is a signal, is what Foucault calls a "function." By the way, this isn't at all a question of the author not existing. Yes, Barthes talks about the death of the author, but even Barthes doesn't mean that the author is dead like Nietzsche's God. The author is there, sure. It's a question rather of how we know the author to be there, firstly, and secondly, whether or not in attempting to determine the meaning of a text-- and this is something we'll be talking about next week-- we should appeal to the authority of an author. If the author is a function, that function is something that appears, perhaps problematically appears, within the experience of the text, something we get in terms of the speaker, the narrator, or--in the case of plays-- as the inferred orchestrator of the text: something that we infer from the way the text unfolds.
So as a function and not as a subjective consciousness to which we appeal to grasp a meaning, the author still does exist. So we consider a text as a structured entity, or perhaps as an entity which is structured and yet at the same time somehow or another passes out of structure-- that's the case with Roland Barthes. Here I want to appeal to a couple of passages. I want to quote from the beginning of Roland Barthes' essay, which I know I only suggested, but I'm simply going to quote the passage so you don't have to have read it, The Death of the Author. It's on page 874 for those of you who have your texts, as I hope you do. Barthes, while writing this--he's writing what has perhaps in retrospect seemed to be his most important book, it's called S/Z. It's a huge book which is all about this short story by Balzac, "Sarrasine," that he begins this essay by quoting. This is what he says here about "Sarrasine": In his story "Sarrasine" Balzac, describing a castrato disguised as a woman, writes the following sentence: "This was woman herself, with her sudden fears, her irrational whims, her instinctive worries, her impetuous boldness, her fussings and her delicious sensibility." [Barthes says,] "Who is speaking thus? Is it the hero of the story bent on remaining ignorant of the castrato hidden beneath the woman? Is it Balzac the individual, furnished by his personal experience with a philosophy of Woman? Is it Balzac the author professing "literary" ideas on femininity? Is it universal wisdom? Romantic psychology? We shall never know, for the good reason that writing is the destruction of every voice, of every point of origin. 
Writing is that neutral, composite, oblique space where our subject [and this is a deliberate pun] slips away ["our subject" meaning that we don't quite know what's being talked about sometimes, but also and more importantly the subject, the authorial subject, the actual identity of the given speaking subject-- that's what slips away], the negative where all identity is lost, starting with the very identity of the body writing. So that's a shot fired across the bow against the author because it's Barthes' supposition that the author isn't maybe even quite an author function because that function may be hard to identify in a discrete way among myriad other functions. Foucault, who I think does take for granted that a textual field is more firmly structured than Barthes supposes, says on page 913 that when we speak of the author function, as opposed to the author--and here I begin quoting at the bottom of the left-hand column on page 913-- when we speak in this way we no longer raise the questions: "How can a free subject penetrate the substance of things and give it meaning? How can it activate the rules of a language from within and thus give rise to the designs which are properly its own?" In other words, we no longer say, "How does the author exert autonomous will with respect to the subject matter being expressed?" We no longer appeal, in other words, to the authority of the author as the source of the meaning that we find in the text. Foucault continues, Instead, these questions will be raised: "How, under what conditions, and in what forms can something like a subject appear in the order of discourse? What place can it occupy in each type of discourse, what functions can it assume, and by obeying what rules?"
In short, it is a matter of depriving the subject (or its substitute)… [That is to say, when we speak in this way of an author function,] it is a matter of depriving the subject (or its substitute) [a character, for example, or a speaker, as we say when we don't mean that it's the poet talking but the guy speaking in "My Last Duchess" or whatever] of its role as originator, and of analyzing the subject as a variable and complex function of discourse. "The subject" here always means the subjectivity of the speaker, right, not the subject matter. You'll get used to it because it's a word that does a lot of duty, and you need to develop a sense of the contexts in which you recognize that well, yeah, I'm talking about the human subject or well, I'm talking about the subject matter; but I trust that you will quickly kind of adjust to that difficulty. All right. So with this said, it's probably time to say something in defense of the author. I know that you wish you could stand up here and say something in defense of the author, so I will speak on behalf of all of you who want to defend the author by quoting a wonderful passage from Samuel Johnson's Preface to Shakespeare, in which he explains for us why it is that we have always paid homage to the authority of the author. It's not just a question, as obviously Foucault and Barthes are always suggesting, of deferring to authority as though the authority were the police with a baton in its hand, right? It's not a question of deferring to authority in that sense. It's a question, rather, of affirming what we call the human spirit.
This is what Johnson says: There is always a silent reference of human works to human abilities, and as the inquiry, how far man may extend his designs or how high he may rate his native force, is of far greater dignity than in what rank we shall place any particular performance, curiosity is always busy to discover the instruments as well as to survey the workmanship, to know how much is to be ascribed to original powers and how much to casual and adventitious help. So what Johnson is saying is: well, it's all very well to consider a textual field, the workmanship, but at the same time we want to remind ourselves of our worth. We want to say, "Well, gee, that wasn't produced by a machine. That's not just a set of functions--variables, as one might say in the lab. It's produced by genius. It's something that allows us to rate human ability high." And that, especially in this vale of tears--and Johnson is very conscious of this being a vale of tears--that's what we want to keep doing. We want to rate human potential as high as we can, and it is for that reason in a completely different spirit, in the spirit of homage rather than cringing fear, that we appeal to the authority of an author. Well, that's an argument for the other side, but these are different times. This is 1969, and the purpose that's alleged for appealing to the author as a paternal source, as an authority, is, according to both Barthes and Foucault, to police the way texts are read. In other words, both of them insist that the appeal to the author-- as opposed to the submersion of the author in the functionality of the textual field-- is a kind of delimitation or policing of the possibilities of meaning. Let me just read two texts to that effect, first going back to Roland Barthes on page 877. Barthes says, "Once the Author is removed, the claim to decipher a text becomes quite futile." By the way, once again there's a bit of a rift there between Barthes and Foucault. 
Foucault wouldn't say "quite futile." He would say, "Oh, no. We can decipher it, but the author function is just one aspect of the deciphering process." But Barthes has entered a phase of his career in which he actually thinks that structures are so complex that they cease to be structures, and this has a great deal to do with the influence of deconstruction. We'll come back to that much later in the course. In any case, he continues: To give a text an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing. Such a conception suits criticism [and criticism is a lot like policing, right--"criticism" means being a critic, criticizing] very well, the latter then allotting itself the important task of discovering the Author (or its hypostases: society, history, psyché, liberty) beneath the work: when the Author has been found, the text is "explained"-- a victory to the critic. In other words, the policing of meaning has been accomplished and the critic wins, just as in the uprisings of the late sixties, the cops win. This is, again, the atmosphere in which all of this occurs. Just to reinforce this, here is the pronouncement of Foucault at the bottom of page 913, right-hand column: "The author is therefore the ideological figure by which one marks the manner in which we fear the proliferation of meaning." Now once again, there is this sort of skepticism about skepticism. You say, "Why shouldn't I fear the proliferation of meaning? I want to know what something definitely means. I don't want to know that it means a million things. I'm here to learn what things mean in so many words. I don't want to be told that I could sit here for the rest of my life just sort of parsing one sentence. Don't tell me about that. Don't tell me about these complicated sentences from Balzac's short story. I'm here to know what things mean. I don't care if it's policing or not. Whatever it is, let's get it done."
That, of course, is approaching the question of how we might delimit meaning in a very different spirit. The reason I acknowledge the legitimacy of responding in this way is that to a certain extent the preoccupation with-- what shall we say?--the misuse of the appeal to an author is very much of its historical moment. That is to say, when one can scarcely say the word "author" without thinking "authority," and one can definitely never say the word "authority" without thinking about the police. This is a structure of thought that perhaps pervades the lives of many of us to this day and has always pervaded the lives of many people, but is not quite as hegemonic in our thinking today perhaps as it was in the moment of these essays by Barthes and Foucault. All right. With all this said, how can the theorist recuperate honor for certain names like, for example, his own? "All right. It's all very well. You're not an author, but I secretly think I'm an author, right?" Let's suppose someone were dastardly enough to harbor such thoughts. How could you develop an argument in which a thought like that might actually seem to work? After all, Foucault--setting himself aside, he doesn't mention himself--Foucault very much admires certain writers. In particular, he admires, like so many of his generation and other generations, Marx and Freud. It's a problem if we reject the police-like authority of authors, of whom we may have a certain suspicion on those grounds, when we certainly don't feel that way about Marx and Freud. What's the difference then? How is Foucault going to mount an argument in which privileged authors-- that is to say, figures whom one cites positively and without a sense of being policed-- can somehow or another stay in the picture? Foucault, by the way, doesn't mention Nietzsche, but he might very well because Nietzsche's idea of "genealogy" is perhaps the central influence on Foucault's work. 
Frankly, I think it's just an accident that he doesn't mention him. It would have been a perfect symmetry because last time we quoted Paul Ricoeur to the effect that these authors, Marx, Nietzsche, and Freud, were--and this is Ricoeur's word-- "masters." Whoa! That's the last thing we want to hear. They're not masters. Foucault couldn't possibly allow for that because plainly the whole texture of their discourse would be undermined by introducing the notion that it's okay to be a master, and yet Ricoeur feels that these figures dominate modern thought as masters. How does Foucault deal with this? He invents a concept. He says, "They aren't authors. They're founders of discursivity," and then he grants that it's kind of difficult to distinguish between a founder of discursivity and an author who has had an important influence. Right? And then he talks about the gothic novel and he talks about Radcliffe's, Anne Radcliffe's--he's wrong about this, by the way. The founder of discursivity in the gothic novel is not Anne Radcliffe; it's Horace Walpole, but that's okay-- he talks about Anne Radcliffe as the person who establishes certain tropes, topoi, and premises that govern the writing of gothic fiction for the next hundred years and, indeed, even into the present, so that she is, Foucault acknowledges, in a certain sense a person who establishes a way of talking, a way of writing, a way of narrating. But at the same time she isn't a person, Foucault claims, who introduces a discourse or sphere of debate within which ideas, without being attributable necessarily, can nevertheless be developed. Well, I don't know. It seems to me that literary influence is not at all unlike sort of speaking or writing in the wake of a founder of discursivity, but we can let that pass. On the other hand, Foucault is very concerned to distinguish figures like this from scientists like Galileo and Newton.
Now it is interesting, by the way, maybe in defense of Foucault, that whereas we speak of people as Marxist or Freudian, we don't speak of people as Radcliffian or Galilean or Newtonian. We use the adjective "Newtonian" but we don't speak of certain writers who are still interested in quantum mechanics as "Newtonian writers." That's interesting in a way, and may somehow or another justify Foucault's understanding of the texts of those author functions known as Marx and Freud-- whose names might be raised in Poulet's lecture class with an enthusiastic response-- as place holders for those fields of discourse. It may, in some sense, reinforce Foucault's argument that these are special inaugurations of debate, of developing thought, that do not necessarily kowtow to the originary figure-- certainly debatable, but we don't want to pause over it in the case either of Marx or of Freud. Plainly, there are a great many people who think of them as tyrants, right, but within the traditions that they established, it is very possible to understand them as instigating ways of thinking without necessarily presiding over those ways of thinking authoritatively. That is the special category that Foucault wants to reserve for those privileged figures whom he calls founders of discursivity. All right. Very quickly then to conclude: one consequence of the death of the author, and the disappearance of the author into author function is, as Foucault curiously says in passing on page 907, that the author has no legal status. And you say, "What? What about copyright? What about intellectual property? That's a horrible thing to say, that the author has no legal status." Notice once again the intellectual context. Copyright arose as a bourgeois idea. That is to say, "I possess my writing. I have an ownership relationship with my writing." 
The disappearance of the author, like a kind of corollary disappearance of bourgeois thought, entails, in fact, a kind of bracketing of the idea of copyright or intellectual property. And so there's a certain consistency in what Foucault is saying about the author having no legal status. But maybe at this point it really is time to dig in our heels. "I am a lesbian Latina. I stand before you as an author articulating an identity for the purpose of achieving freedom, not to police you, not to deny your freedom, but to find my own freedom. And I stand before you precisely, and in pride, as an author. I don't want to be called an author function. I don't want to be called an instrument of something larger than myself because frankly that's what I've always been, and I want precisely as an authority through my authorship to remind you that I am not anybody's instrument but that I am autonomous and free." In other words, the author, the traditional idea of the author-- so much under suspicion in the work of Foucault and Barthes in the late sixties-- can be turned on its ear. It can be understood as a source of new-found authority, of the freedom of one who has been characteristically not free and can be received by a reading community in those terms. It's very difficult to think how a Foucault might respond to that insistence, and it's a problem that in a way dogs everything, or many of the things we're going to be reading during the course of this semester-- even within the sorts of theorizing that are characteristically called cultural studies and concern questions of the politics of identity. Even within those disciplines there is a division of thought between people who affirm the autonomous integrity and individuality of the identity in question and those who say any and all identities are only subject positions discernible and revealed through the matrix of social practices. 
There is this intrinsic split even within those forms of theory-- not to mention the kinds of theory that don't directly have to do with the politics of identity-- between those for whom what's at stake is the discovery of autonomous individuality, and those who hold such discoveries at arm's length, believing instead-- as Foucault and Barthes thought as they sat facing the police standing over against them-- that the instability of any and all subject positions, the undermining of any sense of that which is authoritative, is in its turn a possible source, finally, of freedom. These sorts of vexing issues, as I say, in all sorts of ways will dog much of what we read during the course of this semester. All right. So much for the introductory lectures which touch on aspects of the materials that we'll keep returning to. On Tuesday we'll turn to a more specific subject matter: hermeneutics, what hermeneutics is, how we can think about the nature of interpretation. Our primary text will be the excerpt in your book from Hans-Georg Gadamer and a few passages that I'll be handing out from Martin Heidegger and E.D. Hirsch. |
Marketing_Basics_Prof_Myles_Bassell | 2_of_20_Marketing_Basics_Myles_Bassell.txt | All right, so let's get started. What we're going to do today is continue our discussion about marketing. We're going to look at the 3M video segment; remember I told you that we're going to try each class to look at a video case study, and today we're going to look at the case about 3M. So I need your cooperation, all right, because we have a big group here, and that means that only one person should be talking at a time, right, because class is for your benefit. The first thing I want to talk about is where we left off. I want to pick up where we left off, which was talking about the BCG model. What does BCG stand for? That's a clothing line, right? Isn't that a brand of clothing? Yes. Who knows what BCG is? Somebody, anybody, go ahead. "Business corpor--" No. "Boston something." Boston something what? "Boston cream?" What? It's the Boston Consulting Group. So the Boston Consulting Group came up with this model; the model is discussed in Chapter 2, and it's related to portfolio analysis. This is what the Boston Consulting Group proposed. They said that when we do portfolio analysis, when we're evaluating SBUs, which are strategic business units, when we're evaluating product lines, when we're evaluating divisions in an organization, we need to classify them. What they suggested is that we classify them using one of four names. There are four names that we could assign to a given product line or to a given strategic business unit: we could classify the strategic business unit or division as a star, a cash cow, a question mark, which, remember, I said is sometimes referred to as a problem child, or, believe it or not, this is actually a dog. It looks like a dinosaur, but it's really a dog, right? So this model is very insightful, because literally on one
page, we can document our portfolio analysis, and this is going to help us make decisions as marketers and businesspeople. This is going to inform our decision to spend on advertising, or not to spend on advertising, for a given strategic business unit or product line, or our decision to, let's say, sell off a given strategic business unit, which is an SBU; remember, we use that acronym. So the model indicates that a star is a product line, for example, that operates in an industry that has high growth. Not just growing, but high growth: 3% growth is not high growth. When we think about growth, think especially about technology industries, because that's where we're seeing a lot of growth. And the beverage industry, remember we looked at the beverage industry a couple of classes ago, that's growing about 3% per year in the United States based on retail dollar sales; that's not what we consider to be high growth. High growth is 50%, 100% growth of a given category or industry. And a star also has high market share. In other words, what is the percentage of the products sold in a given category that carry our brand name? Keep in mind that a product is wrapped in a brand. Every product in a given category has the same generic functionality. What does that mean? For example, in the auto industry, all cars have the same generic functionality, which is that they provide transportation. So if we were to look at cars, let's say these are all cars, they provide the same generic functionality; all of these cars provide transportation. What makes them unique? Now don't be fooled. What makes them unique is that they're each wrapped in a brand. Each of those products is wrapped in a different brand; that's what differentiates one product from another. Do you guys agree? How do we differentiate, how do we distinguish one product from another in a category? How do we distinguish one car from another? The only way that we can do that
and be able to communicate it is through branding. Now think about it; this is really profound, because if it wasn't for branding, advertising wouldn't exist. Do you see why I say that? Because if it wasn't for a brand, what would you talk about in the commercial? What would you talk about in the print ad? What would you say on a billboard? The brand is what distinguishes one product from the other in a given category. "Have you ever seen the movie The Invention of Lying?" I don't think I have. "In the movie that exact same thing happens. It's a movie where no one can lie, the entire movie, so they have a scene where they show the advertising of the time. They're like, 'Our product's not too good, you really shouldn't buy it, but we have to advertise anyway, so here you go.'" Yeah, that's concerning. But remember, when we advertise we have to have proof points. There need to be proof points; sometimes we refer to those as pillars of support. So it can't be smoke and mirrors; it's got to be real. If we say that our product is of a high quality, it really needs to be of a high quality, and then we need to have supporting documentation that substantiates that. And if we don't, that's going to be a problem. Why? Because the customer is going to have an experience with our product, and if we said the product was of high quality and it's not, then the customer is not going to be happy, and our competitors will also challenge us. So for example, our tagline, or in some cases it might actually be the slogan for our advertising campaign... who can tell us quickly, what's the difference between a tagline and a slogan? "Okay, maybe not... a tagline could go underneath the slogan, but a tagline is more of what the company... for instance Oreo, 'Milk's Favorite Cookie,' is a tagline. A slogan is how we represent ourselves..." Well, you've got... you're on the right track. Let me try and paraphrase what you're saying there. The slogan is the theme for our
advertising campaign, and our advertising is going to change, sometimes every three months, every six months. Why is that? Why would you change our advertising campaign every six months? Yeah, because what happens is the ad gets tired, right? It loses its effectiveness. But the tagline is that short phrase, the few words, that captures the essence of our brand and that we link to our brand name and our logo. So if you have a tagline, then you should always show the tagline with the logo, so that you can create an association among those three: the few words that embody, "Just Do It," your value proposition or your brand promise, your brand name, and your logo. Yes, go ahead. "Do a lot of companies get their products from the same factory and just put stickers on saying it's their brand?" That happens, yes. A lot of companies... it's hard to generalize to say a lot of companies, but certainly there's been a significant amount of consolidation in many categories, where you're right, in a given factory they're making product for several different brands. That doesn't mean it's an identical product, because remember, the formulation, for example, of a given product is something that's proprietary. So contract manufacturing has definitely become very popular, because what it does is allow us to overcome a barrier to entry, which is the huge fixed cost associated with having our own manufacturing capability. So instead of us spending billions of dollars building a manufacturing facility, we could go to a company that's manufacturing, let's say, soup, and they produce soup for ten different brands, but their chicken noodle soup for a particular brand is different than for another brand; the recipe is going to be different. But absolutely. So for example, in the large appliance category, like refrigerators and washing machines, there's definitely been a consolidation in manufacturing, so it's correct to say that, for example, a
Kenmore refrigerator and maybe a Whirlpool or a KitchenAid, more than likely those are made in the same factory, but obviously their product designs are different and they have different features and benefits. But yeah, absolutely, the brand is different. And they know that too; it's not like somebody's being fooled. The features and the design and the way it looks and the color and the dials, all those things, they deliberately design to try and make them unique, and then communicate that through advertising. Yeah, go ahead. "I recently saw an ad for Subway. They changed their slogan from 'Subway, Eat Fresh' to 'Stay Fit, Eat Fresh.'" Yeah, so they're changing, or trying to refocus. That's another reason why you want to change your advertising campaign: so that you can refocus people's attention and teach them a new benefit or something new about your product. We've already communicated and educated the target market about that particular feature or benefit of a Subway sandwich; now that we've advertised extensively, now we're going to teach them this new message. Because a typical ad is 15 seconds; there's only so much that you can cover in a 15-second ad. Obviously, that's why, if your product is either very expensive or complicated, you would also run print ads in magazines. Does that make sense? Because in a print ad you can outline all the product specifications, and somebody can read that two times, three times, four times, five times, as many times as they want, until they feel comfortable with the specifications of the product. So you might see a commercial, let's say, for the iPad. It's a lot for somebody to process: you can get the iPad in 16 GB or 32 GB or 64 GB, and it could be 3G or it could be Wi-Fi. Now some of you know what I'm talking about; I'm not sure that I know what I'm talking about with all those specifications, because that's something you have to rehearse before class to be
able to pull that off like that. But if you can't communicate that effectively in a 15-second commercial, that's why you need to have a print ad, so that you can map out very clearly: this version comes with 16 GB of storage, this one has 32 GB, this is 64 GB, and the different price points. And of course you can put that on your company website. So one of the things that we do with commercials, and this is what we call direct-response advertising, is we'll direct people to go to our website. What can you tell them in 15 seconds? "Go to ipad.com." That, people can get, right? So then they go to our website, and then everything is mapped out very neatly, in a very organized way, showing the different models at the different price points, and then you can determine what's a good value. Do you remember we said that marketing is about creating, communicating, and delivering value? But what we didn't mention, I don't think we had a chance to talk about this, is: what is value? Value is a function of price, quality, and benefits; a function of those three components. That doesn't mean there can't be other aspects, but those are three major components of value. So what does that tell us? Why is that important? Because that's something that's subjective. It tells us that value doesn't mean low price. It might be, but take, for example, what did we say? We said price, quality, and benefits. In other words, if the product has numerous benefits and it's of very high quality and the price is also high, then it's a good value, because you get what you pay for. Do you guys agree? Do you see where I'm going with that? So it doesn't mean cheap; there's a perceived value. Now somebody else might say, "But I'm okay with fewer benefits, and I understand it's a lesser quality, but the price is also less." The iPad is a good example of that: they have an iPad at $499, $599, and $699. So you say, why would somebody buy the one that's at $699? Why not just get the one
that's $499? Well, because for $699 you get twice as much storage: instead of 32 GB you get 64 GB. Now, David might say, "Well, you know what, I don't need 64 gigabytes. I don't have that many videos; I don't have as many photos." And 32 gigabytes is quite a bit of storage; I mean, David has a lot of pictures, but he doesn't have as many as somebody else. Question? "Going back, we were saying that with cars, right, it's just the brand name that makes them different, but generally they're all from the same category. What if it's something that has, like, two categories? For example, let's take music devices, like MP3 players, iPods, and then we've got an iPhone or a smartphone that has sort of an iPod and a phone, two functions. How does it work? Does it become a different category, like for hybrids, or is it still part of the MP3 player category in terms of marketing and commercializing it?" So the question is, those products, how do we classify them? "Yeah, like, are they in the same category? Here we've got cars, but what if we've got, like, a car-plane, I don't know." Oh, but that's a good point, because a car-plane is a good example. The category might be transportation, and so we have the markets, right? Segmentation, which we're going to look at in chapter nine, is about dividing a market into submarkets. So your point is excellent: we're dividing the transportation category into submarkets, which is car, plane, train, bus; those are all markets within the transportation category. "But if something is, like, a plane and a car at the same time, then where do we put it? Do we have a separate category for that within transportation?" Yeah, you could have a separate category. If the product is multi-functional, you could have a separate category, or you have to decide what is the
primary feature of the product. So for example, we talked about phones. Well, it's very common now that phones have camera capability. So, to your point, is it a phone or is it a camera? Well, it really depends on who's using the product. Some people have a phone and don't really use the camera, but it comes with that feature. That kind of ties back to our discussion last time about direct and indirect competitors: is our competition camera manufacturers, or is it phone manufacturers? That's a strategic decision that we're going to have to make. But let's go back to this model: high growth, high market share, that's what we were saying. What percentage of the market is our brand? So, for example, we could look at market share in terms of dollars and in terms of units. Now, why would we have, let's say, 25% market share in terms of dollars but only 5% market share in terms of units in a given category? Who can explain that? Because we touched upon this last time a little bit. "It probably means that you're more of a luxury class, where your product is more expensive than the other ones. You have fewer units out there, but they're still worth more than all the other ones." Yeah, so if we're looking at total dollar sales, then of course, if your product sells at a higher price, you're going to have a greater percentage of the dollars spent in that category, but at a higher price you're probably selling fewer units. Do you guys follow? Does that make sense? So in the beverage category, we said that some of the key segments, when we divide the market into submarkets, that's segmentation, and we're going to talk more about that next time; it's a very important concept, and that's why we're taking the time to touch upon it today, since we talked about it last time: it's critical to understand segmentation. We said that some of the markets in the beverage category are alcohol, soft drinks, water, juice. All of those are segments in the beverage category. But if we look at a
percentage of dollar sales alcohol is 60% of the beverage category in terms of dollar sales in the United States so why is that well because a bottle of alcohol cost a lot more than a gallon of orange juice is that true well you're not sure yes and it's um cost is a lot more than soft drinks or water so again market share could be quoted in terms of dollars or in terms of units so what that means is that a star we classify uh strategic business unit or a partic line as a star if we have a significant percentage of the share in that given market and the industry is growing rapidly now I'll tell you this we have to determine how we Define the market to your point is it really an excellent one when we say we have high market share well what does that mean do we have a significant percentage of the beverage Market or do we have a significant percentage of the orange juice market and that's why you you remembered what we had talked about as it relates to Oreo they said that we're America's favorite cookie which means you're the market share leader their competitors pushed back on them and said no you're not the market share leader who sells the most cookies in grocery in different channels of distribution like drug stores in wholesale clubs Etc so they said you might be the market share leader in certain categories like grocery for example EX for Oreo if you go into any grocery store you walk down the aisle right the cookies and crackers you see um first of all their Master brand well actually Oreo is their Master brand the corporate brand is Nabisco So within um if we look at a brand hierarchy you have a corporate brand a master brand and sometimes you might have a sub brand so for example a corporate brand would be Toyota Motor Sales USA the master Brands would be Zion Toyota and Lexus and then some of the sub Brands like for Toyota would be Echo Corolla Solara cam Avalon that's an example of uh of a brand hierarchy so that last subdivision of the of the the brand 
hierarchy would that be for like in terms of Toyota would it be uh sedan SUV and stuff like that or would it be specific oh okay this is a really important point we need to make a distinguish between a brand hierarchy and Market segmentation in Market segmentation we're focusing on product types what you described is a product type A Car an SUV a minivan those are product types that's what we focus on in segmentation in product segmentation because we'll see there's a lot of different ways that we could segment the market the brand hierarchy is where we determine and we use the same type of visual which is this graph not a graph this chart if you will so for segmentation we could have a product segmentation which is what you guys are talking about we could have a geographic segmentation for example but we'll use the same type of diagram for a brand hierachy so don't be thrown off if you see the same type of diagram it's this is a very compelling way to analyze either a portfolio of brands or a particular market so that's an example of a star high growth High market share but then the cash cow remember last time we was saying the cash cow is a product line or strategic business unit that operates in an industry a market that's experiencing low growth which is not horrible especially in this case since although the market is not growing rapidly we have a significant market share so in other words a significant percentage of the products being sold are carrying our brand name so we call that a cash cow why because very often what happens is the company will Milk The Cow which which means that money that's generated from cash cows is invested in Stars so does that make sense remember we said we have a $100 million as a company to advertise how much are we going to spend to promote and advertise strategic business units that are stars that are cash cows that are remember what is this you sure it's not a dinosaur it looks like a dinosaur doesn't it maybe it's a cat or a 
crocodile no but a dog so the dog has low market share and low growth we classify strategic business units and categories and product lines that have low growth and low market share is being a dog so if we're going to spend $100 million on Advertising as a company and the company makes up what's what makes up the company is multiple strategic business units remember last time we looked at the electronics company and we said they had multiple strategic business units is that ring a bell and we talked about that at this electronics company remember we were talking about their mission their Vision to be a worldclass provider of electronics we said that there's different product divisions TVs MP3 players phones laptops game consoles so we have to decide how we're going to classify each of those divisions each of those strategic business units now if we decide that one of those strategic business units let's say laptops is having a small percentage of the market and a category that's not growing or has very little growth remember it doesn't mean that it doesn't have any growth we classify it as a dog how much money are we going to spend to advertise the dog what do you think not much not much not much if anything that's why it's so important because this is going to help us allocate our resources once we know and this is called in Chapter 2 we talk about portfolio analysis this is going to help us allocate our resources so we're going to spend a lot of money on the star which is in a category that's growing very rapidly and we have a high market share we don't want to give that up but you snooze you lose so once we we start to stop spending money stop innovating stop promotions we're going to vacate our leadership position so dogs we're not going to spend very much if any money to advertise the dogs the question mark Or we said sometimes the problem child well the industry is growing very rapidly but we don't have much share so we need to decide look what are we going 
to do here are we going to be happy with 1% of the market or are we really going to be a player are we going to try and get 10% of the market or 20% of the market and we need to ask ourselves is that even possible is it possible to get 20% of the market because we need to do Market sizing once we segment the market then we need to quantify the segments which is what we call Market sizing and then we need to ask ourselves well in order for us to break even we need to have a product line that produces 15 million units maybe the whole category is 15 million units or maybe for the automatic um coffee maker in the United States which is about 25 million units each year maybe it's unrealistic for us to think we're going to all of a sudden we have a very low market share now 1% where how long is it going to take us to reach a level where we're producing 15 million coffee makers producing and selling so that's why this is so critical to our decision-making process is man managers as Executives as marketers questions about that in terms of two questions um by by like for instance Apple that their stars are their handheld devices iPod iPad iPhone right and then um and their their Cash Cow you would say is iTunes would that would that be a um a good would that be a good example of a cash down could be but we don't need to depending on the business you might not have any dogs if you don't that's great but we need to keep an eye out for product lines or strategic business units that are in a low growth industry and have low market share hey who knows we might we might be in a category where and have a um a business where they're all stars now that's not such an enviable position you to be in you don't have a cash that can yeah you don't have a cash cow and that's a different sort of problem so in ter in terms of like a company um could you have a lot of like you have different subdivisions like we said with the example of the electronics company are we saying as a whole we have 
this BCG model of this uh we have as a company as a whole we have a bunch of stars and then we have a bunch of cash cows or is it each department in its own right like for instance the laptop company that we have a couple models that are are stars and then we have a couple couple models that are are cash cows or is it as an in organization as a whole do we have cash cows and stuffs you could use it um both ways I think that's uh perfectly fine you could use it um at the corporate level to evaluate all your strategic business units or as you're suggesting if you have U multiple product lines in a given category um like you're saying um different types of laptops then we could evaluate those and determine which are stars which are cash cows and we refer to that actually as skew rationalization cuz what we're going to find out it's very common that if we do an analysis of our product line like you're suggesting and as part of this portfolio analysis that 80% of our Sal sales are generated by 20% of our products did you guys get that yeah so that means 20% of our products if we have a 100 products 20 of them generate 80% of the sales so we need to ask ourselves what do we do what do we do with the other 80 skus that's stock keeping units so on an ongoing basis companies go through this SKU rationalization and we constantly go through and sort the items in our product line to determine which are the best sellers and then there's some that we're going to discontinue but we all need to understand why why they're not the best sellers maybe it was just introduced so there's judgment that um and insight that needs to be um utilized to make that decision so you don't want to just um drop an item that was um just introduced we need to know but you might have something that's been on the market for a year and it's selling significantly less units than other items what does SKU stand for stock keeping unit um with the case of the the stop keeping units where you have 20% making 
80% of the of the revenue you consider those cash GS or stars for both um we'd have to decide because remember it depends on whether or not we're in a high growth industry or low growth right so these classifications are being based on two Dimensions market share and growth and I don't think we're going to get to it today but we're going to talk about perceptual mapping which is um discussed in chapter 9 as well um where we talk about um positioning because perceptual mapping is a way to visualize the position that we have in a given market right positioning is the space that we occupy in the customer's mind and we could look at that on a variety of different dimensions this model talks about these Dimensions but in in any given category we could develop a perceptual map and when we do perceptual map I can tell you from my experience you don't just develop one perceptual map for a category you develop 10 because you want to look at where our brand is relative to the position of other brands in the marketplace on those Dimensions so we're GNA plot on this map where we are relative to our competition in terms of let's say price and quality so this would be high quality and low quality low price high price so importantly it doesn't just show where we're positioned the key significance of a perceptual map is it shows us where we are positioned relative to the competitors because we need to know who is in the competitive set we need to know who our direct competitors are who our indirect competitor are now these are only two Dimensions price and quality we could look at other dimensions and that's what we do but the dimensions and the importance of different dimensions is going to vary from um category to category so we might look at for example not just price and quality we might look at um Innovation so high Innovation low innovation or maybe in some cases the level of ease of use is important so it might be easy to use difficult to use again the importance is 
relative to competitors and we could do this through um branding research to understand the perceptions of the target market so when we do research when we do branding research we want to find out the level of awareness of our brand but we also want to find out purchase intent remember we talked about that motivation for purchase for example and also what is the customer's evoke set and the consideration set so the evoke set are all brands that come to mind in a given category so I'm going to go around the room and everybody tell me the name of a brand of Beverage alcoholic any beverage type that you want all right we're going to do this take one minute we'll go around the room everybody name one brand of Beverage it have to be different Arizona caol Gatorade H hin go ahead power power raid s Snapple Simply Orange Simply Orange Sprite Gatorade Seven Up Seven Up Fanta Fanta Johnny Walker okay go ahead David Vitamin Water Vitamin Water Chana chaana good good coke coke pepper spray so when we think about thirst when we realize and we'll talk more about problem recognition we need to understand the decision- making process we talk about problem recognition when we realize we're thirsty or we realize that we're hungry for example there's a lot of different brands that are going to come to mind all those brands are the evoked set those are all the brands that come to mind in a given category in importantly though now that's interesting to find out when we're doing research but importantly we need to find out and determine what is the consideration set because the consideration set are only those few or maybe one brand that we would actually purchase so it's not enough to say we're aware you're aware of the Pepsi brand you might think great everybody here said as part of our re research that they're aware of the brand Pepsi but then wouldn't you just fall over if they then said but I would never actually buy Pepsi because none of us drink soda we only drink iced tea and 
the only brand of iced tea that we drink is Snapple so you see the difference it's not enough just to find out the level of awareness of course that's an important marketing metric but we need to understand the consideration set we need to understand what brands you would actually seriously consider purchasing so we all might be aware of the brand Lamborghini but how many of us would actually currently consider purchasing or in the market for a Lamborghini you all right see me after class so do you see the difference between the evoke set and the consideration set okay so we talked about the BCG model that's where we left off remember importantly we said that the marketing mix are the controllable factors the four Ps Price Place promotion and product those are the four Ps and importantly that's something that we manage we influence we can have an impact on we decide right the price if we're going to change the price presumably the price is based upon what customers are willing to pay because remember we said five key marketing activities we're just going to go through this quickly as a refresher one five key marketing activities one identify an unmet need check your notes two develop a concept three determine a price the customer is willing to pay four gain distribution gain distribution and five build awareness build awareness good job all right you guys are awesome awesome so aome coach thank you I appreciate you saying that what about like like most things like food and like drinks like they have like a set price but what about gas where like every day like it changes and your question is I I got your scenario but what what like like address like set price and how that doesn't really it doesn't happen uh with gas oh so um what you're suggesting is that some markets are in elastic some markets are in elastic which means they're not price sensitive but there's varying degrees so some markets are not price sensitive some are somewhat price sensitive so what you're 
suggesting is that if the price of gas goes down 5 that demand is not going to change if the price of gas goes up 5 cents or 10 cents or a dollar is's actually a song like that Cent 5 Cent 10 cent dollar you see you heard that song so that's um a situation where the market is not price sensitive where it's in elastic so people need to drive they need to um get from one place to another so in your scenario if that's the case if the price of gas goes up a dollar and demand doesn't decrease then we would describe that market as in elastic however it's very common that um that we hear that people stop driving not everybody so remember it's not absolute that it's either an elastic Market or an inelastic Market in some cases the number of people that will drive let's say to work might decline as the price of gas goes up a dollar SO gas now is what like so if it went from3 to $4 some people might stop stop driving to work some people might stop driving to Florida some people might startop start taking a plane or a train a car plane yeah we're going we're looking for that so in that scenario the market could be inelastic or it could be um somewhat in elastic and um you could say the same about um a variety of utilities like electricity so let's say the contis company decides that electricity um is too high or maybe um they get some pressure from The Regulators to lower the price of electricity so what does that mean so we all rush home and turn on all the lights and get the air conditioner going and run the fan all night probably not it doesn't mean that some people might um not say well now the price of electricity has gone down now I could run the air conditioning last year I didn't run it but now there's um been a decrease in the price so I'll use the airin conditioner this summer but it depends on the individual so in um as regards to electricity you might be price sensitive the price goes down might say that's it 24/7 AC but you might be less price sensitive price of 
electricity goes down 20% you might think well I made it through the summer without air conditioning last year so I don't know how you did it but you did it so get a good sound yes you want to share something go ahead gas would be considered an inelastic um I mean an elastic thing because people don't have to buy gas they can just take transit cars I don't know how you can make it in in the uh in elastic thing inas means that they have no other choice yeah if you have a car you need gas or drive right so take take train or a bus you well no it depends it depends that you can start car pooling yeah you might you might car pool some people might um might decide that the price of gas is too high now so they're going to um yeah they consumption of gas is going to decline I thought in elastic implies that no matter what the price people are still going to buy but that would mean that they don't have the option of taking a train or a bus well remember when we talk about elasticity of demand it doesn't mean that it's perfectly inelastic or perfectly inelastic do you follow me so it could be not one but um the elasticity of demand could be 08 OR7 or it could be netive .7 or it could be ne4 so that's what I think I'm I'm hearing is that are there markets that are perfectly inelastic or perfectly elastic well we could talk about and try to come up with different scenarios but often that's not the case because some people might reduce their consumption or increase their consumption and others might not so it's hard to generalize to say this Market is inelastic because what you're saying in that case you have no choice right and there's no substitute products so that means you have to drink milk because orange juice is not an option and if you need Vitamin D calcium and vitamin A then if there's no substitute products there's no orange juice then you have to drink milk but what I'm telling you is that some markets are perfectly inelastic and some are not and when we talk about 
elasticity of demand we're looking at um when you we're using this index so the elasticity of demand could be one which is perfect or it could be less than one which is what I think I hear some of you are trying to get your hands around is that yes the elasticity of demand could be 04 that means that consumption is going to vary but it it depends it doesn't mean it's going to a very um in a perfect way it doesn't mean it's directly proportional all right so let's um go over the quiz we're going to go over the quiz for chapter one you ready let's keep rolling we're here till midnight right all right but we're going to just keep rolling all right so the first question who's going to read it somebody anybody go [Music] ahead the common factor among the chairman of the board the stockholders the suppliers the laborers in the factory and finally the customer who purchases the product is C all of the people listed in the question are stakeholders remember we said that a shareholder can be a stakeholder so let's try to distinguish between those terms even though they sound a little bit alike a stakeholder could be a shareholder it could be an employee it could be a supplier or a customer all of those are stakeholders and that's what's described here and that's on a page six of our book question two yes go ahead right um for marketing to occur there must be two or more parties with unsatisfied needs C canbury beverages Incorporated has begun Distributing Country Time Lemonade through the supermarket at a price comparable to that of soft drinks and most likely second party needed for marketing to occur would be be someone with a desire for a beverage other other than soda or one exactly so that's discussed on page seven so B is the best answer so remember for marketing to occur there has to be two or more parties there has to be a desire and an ability to satisfy them there has to be a way for the parties to communicate and something to exchange so there's four components 
there and so this addresses the fact that remember we said one of the major marketing activities is identifying an unmet need so this describes the fact that there's two or more parties with an unsatisfied need and cadburry cadburry is specifically says Beverage Company the categary um beverage company is Distributing lemonade through the supermarket at a price comparable to that of what what we're assuming is a substitute product so either going to buy a lemonade or a soft drink a soda so the most likely second party needed for marketing to occur is what is B which is the component that wasn't mentioned is someone with a desire for a beverage other than S soda or water third question the first three let's go keep going the first objective in the marketing is to discover consumer uh consumer needs d right on page nine D so remember I said first marketing major marketing activity is to identify the unmet need that's what this um question addresses number four yes go ahead if you follow the suggestion of Robert M mcth which answer provides the best advice for Mark preparing a consumer health bage e stud best product right absolutely on page nine e is the best answer study past product failures and learn from them because what he did was he studied 100,000 new product launches and he came up with two suggestions is learn from the past mistakes and focus on the customer benefit so you see why that's important because it's not just that the product failed we need to understand why why did it fail maybe the product was ahead of its time really the most successful products are those that are introduced before before the customer recognizes that they have a problem that requires a solution that's where we're going to be able to achieve what's called first mover advantages that means we're first to market with the product all right number five the United States Army has recently been both praised and criticized for its use of a popular video game America's Army designed to 
reach potential recruits the game creator the games Creator called Casey wardinski wanted to provide a sense of the training and teamwork one could find in the military environment the game is designed for boys 14 years or older these players represent the Army's the target market right the target market remember we said the target market is those that we want to buy our product or those that we want to sell our product to but that's different from the target audience the target audience is just who we want to reach without our advertising and we said that the target audience is very often a subset of the target market you see what that is who could explain that quickly why is a target audience very often a subset of the target market because maybe if you're uh marketing a a toy for a 5-year-old but the you want you want to convince the parent that it's okay for the child to play with that toy and therefore you Market it you advertise it through the parent with inevitably get a buy it for the child well absolutely in that um purchase decision process we definitely we have influencers we have the decision maker we have the purchaser and the end user so what is being suggested here is that we need to advertise not just to the child who's going to play with the toy but we need to also advertise to the parents who would purchase the toy and ultimately are the ones that have decided that it's okay for the child to use the toy but here also it says boys that are 14 years or older so the um the target market is boys 14 years or older but our target audience might be Hispanic boys or Caribbean American boys or Asian-American boys you follow so we want to our target market is old boys that are 14 years or older but for our advertising who do you want to reach with our advertising for a particular campaign so it would be very compelling as part of a multicultural marketing campaign that we have a unique advertising campaign for different ethnicities all right number six that 
page 11 number six the marketing mix the marketing mix refers to marketing mix refers to C the marketing managers controllable factories product price promotion and place that can be used to solve marketing problems right absolutely the four Ps is the marketing mix number seven a business traveler joined the Starwood Preferred Guest program in order to earn points each time he stayed overnight in a Weston or chevan hotel once he has accumulated enough points he can trade his points in for a free night stay as a member of this program The Traveler receives periodic updates on new hotels and learns of ways to earn additional points the marketing term that best describes this scenario is relationship marketing right absolutely relationship marketing on page 13 so the best answer is a so we want to link the organization to the individual customers employees suppliers and other stakeholders to achieve a mutual long-term benefit number eight who's going to do number eight the American Business period that attempts to satisfy consumer needs while achieving organizational goals is called the marketing concept era so before class we were talking a little bit about the four Ps and we said well the marketing mix people have been talking about that since the 60s right and so the point here is that there's a focus on meeting the needs of the customers very well while still being able to achieve the organizational goals remember we said there's a corporate plan there's a business plan and there's a functional plan in the organization so we want to still customize our product to meet the needs of our target market but also achieve our organizational goals number nine customer relationship management which is a very important concept often referred to as CRM is most closely related to D customer relationship era in the US business history that's talked about on page 16 and number 10 which of the following most directly explains why pharmaceutical giant fizer offered and this is an 
excellent example low income senior citizens many of its most widely used prescriptions for just $15 each month now just to give you a sense of perspective most prescriptions most prescriptions are $150 for a month supply that's just not statistical but to give you a sense some could be more some could be less some could be a lot more some could be $1,500 for a month supply but certainly it's not $15 for a month supply so why is fizer doing this because they wouldn't be for they couldn't afford to buy it for 150 so they just would not have bought it in the first place and because there's so many of them it's worth it to still charge them less so well that might be one of the reasons what else what else is driving that decision to responsibility yeah social responsibility we're selling a prescription that normally we could um would be sold at a pharmacy for $150 a month of a one month supply for only $15 and that's based on the company's commitment to social responsibility so our goals are what what are some of the other goals that are discussed sales right we want to achieve a certain level of sales we want to achieve a certain level of profit customer satisfaction customer satisfaction and social responsibility right those are some of the things that are going to influence our our Marketing in a particular organization so this is is an example the fiser example is is a great example of social responsibility now we'll talk about next time in other examples of where the company's decision is based on the desire to maximize sales or maximize profits if they wanted to maximize profits probably depending on their course structure it's not going to be by selling the product for $15 instead of1 50 |
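The BCG classification logic walked through in this lecture — high/low market share crossed with high/low market growth — can be sketched as a small decision function. This is only an illustrative sketch: the cutoff values (relative share of 1.0, growth of 10%) and the numbers for the electronics company's divisions are assumptions for demonstration, not figures from the lecture.

```python
def classify_sbu(relative_market_share, market_growth_rate,
                 share_cutoff=1.0, growth_cutoff=0.10):
    """Place a strategic business unit (SBU) on the BCG matrix.

    Assumed cutoffs: relative share >= 1.0 (at least as large as the
    biggest competitor) counts as high share; annual market growth
    >= 10% counts as high growth.
    """
    high_share = relative_market_share >= share_cutoff
    high_growth = market_growth_rate >= growth_cutoff
    if high_share and high_growth:
        return "star"           # invest heavily to defend leadership
    if high_share:
        return "cash cow"       # milk it to fund the stars
    if high_growth:
        return "question mark"  # decide: push for share or exit
    return "dog"                # spend little, if anything

# Hypothetical numbers for the electronics company's divisions:
# (relative market share, annual market growth rate)
portfolio = {
    "phones":   (1.8, 0.15),
    "TVs":      (1.4, 0.02),
    "consoles": (0.2, 0.20),
    "laptops":  (0.3, 0.01),
}
for sbu, (share, growth) in portfolio.items():
    print(sbu, "->", classify_sbu(share, growth))
```

With these assumed inputs, laptops land in the "dog" quadrant, which is exactly the case where the lecture says the advertising allocation should be "not much, if anything," while the phone division's star classification justifies the heaviest spend.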
Marketing_Basics_Prof_Myles_Bassell | 18_of_20_Marketing_Basics_Professor_Myles_Bassell.txt | so we've been talking about we talked about um the marketing mix and we've been getting into some detail about products types of products we talked about goods and services and this different classifications for products we talked about new product development and why products fail and we've talked about pricing different types of pricing approaches different pricing objectives and today what we're going to talk about is place we're going to talk about distribution so we're going to talk at first about different channels of distribution and then specific retailers so I want to talk broadly about different channels of distribution different types of retailers and then what I want us to brainstorm is a list of retailers that fit within those channels so I want us to come up with examples of retailers that operate in those channels now for us one of our challenges is going to be to decide where we're going to sell our product are we going to sell it at a physical location or a virtual location which retailers are we going to sell our product so that's not something that happens on its own that's our strategic responsibility we need to decide we need to select specific retailers now keep in in mind while we're choosing we're also being chosen which means that even if we want to be on the Walmart planogram they're looking to optimize their shell space productivity so they may not select our product to be in their stores so we have to keep that in mind that we may not be able to get placement at certain particular retailers or in the worst case scenario maybe even at certain channels of distribution so we have to decide where our product is going to be relevant what channels and at what retailers does it makes sense for our products to be sold so we're going to talk about distribution strategy do we want to sell our product everywhere or do we want to sell it at a limited number 
of locations or do we want to sell it at maybe even one particular retailer all of those are strategic decisions um that need to be made and we'll talk about what are some of the metrics that retailers use when they're looking at show space productivity and different types of retail strategies some retailers we refer to as edlp and some as high low for example we're going to talk more about what that means but let's start now and talk about different channels of distribution so let's see if we can identify different channels of distribution what are some of the channels of distribution in the United States how are they defined what are the terms that we use to define some of these channels of distribution how about Mass Merchants the mass Merchant is a channel of distribution a certain type of retailer operates in that channel sometimes we use the term Discounter what are channels what other retail types okay we're not going to talk about specific retailers just yet but what are some of the other retailer types which we also refer to as channels channels of distribution exclusive what is it exclusive well we want to identify the type of channels first let's talk about the specific types of channels before we talk about our approach in dealing with with those channels so what do you think so Mass Merchants is um one type of retailer one channel of distribution so our goal might be to sell our product in this channel of distribution to this retailer type Mass merchants and we're going to talk in a minute about who are some examples of those retailers what about grocery stores grocery stores are a type of retailer that's a channel of distribution again we have to decide does it make sense for us to sell our product in that Channel at those types of retailers now for these apples that would seem to make sense what about for the iPhone 5 should we um should we sell should we try to sell the iPhone 5 at grocery stores CU they have a lot of outlets they have a lot of 
physical stores, a lot of locations? It's something to think about, because grocery stores do have what's known as a scrambled merchandising strategy, which means that they sell things other than groceries. Isn't that true? Don't they sell tea kettles, light bulbs, mops? That's not the same thing as a can of soup or a bag of Oreos, milk's favorite cookie, right? So we need to decide. This is very strategic, because it's going to impact whether or not we're successful. What else? Department stores, convenience stores. Yes, convenience stores. What else? Department stores, what is it, department stores? Yes, department stores. These are all types of retailers; we'll talk about who some of the retailers are that sell in these channels. What do you think? No department stores, you don't like those? No? Oh, okay, that was a sidebar, okay. Anything else? She said on the block. Huh? She said on the block, on the block, on the block. Okay, so, well, yeah, restaurants could be, yeah, let's put restaurants there for now, good, that's good. What else? Online stores. Online, online, yes, so on the internet. So we could have either a physical or a virtual presence. Now do you think that's a no-brainer, like we say, well, why don't we just sell everything online? Because think about it: we have to look at this in terms of where we are as an organization. Are we introducing a brand-new product? Are we a brand-new company, or have we been in existence for a while, are we an ongoing concern? Because that's going to impact whether or not we decide to sell online, right, whether we decide to have a physical storefront or a virtual storefront, because of the startup cost. The startup cost for a virtual storefront is much, much less than for a physical storefront. Now if we have already been in business for a long time, then that's like a non-issue for us; then it's just a supplementary part of our distribution strategy, part of our retailing operations. But when we're first starting out, like out of this class we might
come up with an idea, a product that we want to market, and we don't have enough investment, we don't have enough capital. Maybe by default we have to open up an online store, e-commerce, which, depending on the product, could be a great idea; that could be very effective. I'm not saying that that's bad; I'm just saying that when we have to decide, there are going to be some constraints. To open up a physical store involves a lot more time, a lot more money. Some retailers, for example, have thousands of locations; some have a few hundred locations. But even to open up a few hundred locations takes many years, because you need to first identify the location and then find a shop that's available in that location. So you might think, for example, yeah, I got an idea, I want to have a luxury line of products, let's say clothing, and I want to have a shop on Fifth Avenue. Sounds good; Fifth Avenue would be a good location. But are there any shops available there? And so very often it takes a long time to acquire premium locations, and, depending on the economy, sometimes even secondary locations that have shops available. And then it takes time to retrofit them, because the place that was in there before, they might have been selling apples; now we want to sell clothing, so we have to tear everything down, put in displays, change the flooring, the lighting, everything about the shop. That could take certainly many months and thousands upon thousands of dollars; that could easily be $250,000 right there, and the year passes and we haven't even sold one pair of jeans for $300 yet, the ones that are torn, for $300. But if we open up a shop online, we could start selling pretty quickly. We have a web designer create our website, and we could start processing orders pretty quickly. Now we still have to deal with the issue of inventory. Now one of the advantages of selling online is that it helps us manage inventory. What do I mean by that? Well, for example, in a retail shop, when people come in,
they expect to see 50 pairs of jeans in this size, with this design, et cetera, right? They need to see that the store is well stocked. That could be a few million dollars' worth of merchandise, a few million. I remember I was always very impressed at a small shop, a very small shop, literally the size of this space here where this podium is, in Hong Kong, and the storekeeper told me that he had 1,200 items there. Could you imagine? Now he wasn't selling, you know, bulky items, but he was selling all these little things on what we call J-hooks, these hooks in retail that they have, like, you know, little scissors or nail clippers, all these different things, hair pins, right, all these. He said he had 1,200 items there, I kid you not; it was a tiny, tiny shop, just like this. 1,200 items for a tiny shop like that. Could you just imagine, if we're selling shoes, how many items we would need to have in the store? Different colors, different sizes, shoes for men, shoes for women, shoes for adults, shoes for kids, and how much we're going to have to invest in that inventory. But sometimes, and we could probably think of some products where selling online is not ideal. Why? What would be the issue? Why would you want to sell some products at a physical location? Things like prestige items you might not want to just buy online, because beyond the fact that you might get scammed, it also loses its prestige because it's just sold online, as opposed to, say, a really specific store. So there could be image issues. Yeah, an issue of image, and also credibility: people might have concerns about shopping online for certain products; they might feel that maybe the authenticity of the product is an issue. So with certain products, yeah, you might want to see it for yourself, touch it, feel it, examine it. Automobiles. Automobiles, they don't have a fixed rate; you can't put that on the internet and you have someone... So for an automobile, certainly.
You remember when we talked about the innovation model and we talked about some of the characteristics: we talked about the level of complexity of the product. Remember we talked about how quickly a product can move through the innovation model; we said that the level of complexity of the product is going to impact the rate, and we also said the trialability. So what you raise is a good point. What is your name? Good, Demitri, right. Demitri says that, well, you can't test drive a car online. Well, sure, in terms of trialability, the idea of being able to test drive a car is an excellent solution to that problem, because coffee, we could always sell it in these tiny jars and people could try it; most of the time what large coffee manufacturers do is they give away free samples. Well, what Demitri is saying is, yeah, they understand that you want people to buy cars: let me drive it first; I'm not going to spend $20,000, $30,000 for a car unless I can drive it first. So online, right, that becomes problematic, although it depends on the level of involvement of the purchase. For some people, maybe they feel they don't need to test drive the car, or maybe they've owned a Chevy before, but certainly, yeah, I think that's a good example of one where selling online is going to be an issue for some people, because they can't test drive the car. Like certain clothing or jewelry, for instance, you wouldn't really want to buy online; it's not only trying it on, you want to see how it looks in real life rather than a Photoshopped picture, things like that. Yeah, I agree, there's definitely, for some products, and also depending on the individual, right, we're kind of generalizing a little bit. For Edward that might be an issue; for Molus maybe not; maybe for Christina it's not an issue; maybe for Steven it is an issue. It depends on the product and it depends on the person. So maybe for someone else, let's say, who wants to buy an engagement ring, they know about the four C's: about the clarity, about
the cut, the carat, and the color. So those things you could all specify, and you could say, I want, you know, SI1, and you specify all the different criteria and say, just give me that. Just like if you're going to buy a laptop, for example. So online, you might be reluctant to buy a laptop online, but now, for example, with Dell you could customize a laptop, so it's not like you just have to buy what they have there, what they show. You could say, no, I want the one with the i7 processor, not i5, not i3. They say, okay, no problem, we can give you that. I want one terabyte, not 512 GB or 256 GB. Okay, no problem, we can give you that. And I want 12 GB of RAM instead of eight or four gigabytes. So depending on whether it's a high-involvement purchase or a low-involvement purchase, that's going to impact our receptivity to purchasing the product online. If you know the specifications and you have that confidence, or if you don't have the confidence that they're going to deliver what they say, then you may or may not be willing to shop online. What do you think? I was going to say, would you buy perfume online? You can't smell it, unless it's, like, a known brand. What else? So for a new product, if we introduce a new product, we're going to introduce an item, a perfume, then there needs to be that trial. People need to... you're not going to just buy this Bassell perfume; you never smelled it before. You're going to want to see what it smells like; well, not see what it smells like, you want to smell it, right? You want to experience the scent. Well, you can't; you can't do that online. Now if it was a repeat purchase, you'd say, oh, but I bought Chanel before, or I bought this other fragrance before, then what would you think, would you be willing to make the purchase? So if it's not at the introductory stage, but maybe the growth stage of the new product life cycle, then maybe people are going to be more receptive and say, well, I know what
it smells like and I purchased it before; it's just a repeat purchase, it's not trial. Oh, I was going to say perishable goods, and also shipping. Shipping, because that's a whole other issue in itself: you could miss the item; there's no instant satisfaction like buying from brick-and-mortar stores. And it goes along with the perishable goods: if you miss the shipping and they wait until the next day, it could possibly spoil. You mean if you don't receive it? Yes. Yeah, so then, as the retailer, we need to convince people that that's not an issue, that the way it's shipped, it's in a styrofoam container packed with ice or whatever it is that's going to keep the item fresh. So you're right, we need to anticipate that and convince people that we've solved for that problem: you're right, it might get spoiled, but in our container it'll stay fresh for three days. Anything else you want to add to that? Relationships, like they have Match.com and stuff; you may not want to do that in case the person's crazy. Oh, I see, okay, interesting, interesting, keep that in mind. All right, anything else? Now what did I say about the inventory? How does that address the issue of inventory as an online retailer? Yeah, so we're going to need a distribution center; even if we don't have a retail shop, we're going to need a place to store the product. But also, very often for online retailers, one of the advantages that we have, and we have to manage expectations, is that our operation is based on a strategy known as make to order, which is different than make to stock. So when you order something online and it says that it'll ship in two weeks, that may not be desirable, because very often people want that instant satisfaction: you go to the store, you pick it up, and you leave with it; you could try it on, you could wear it that night, whatever it is, you could start using it. The reason why, when you see those longer delivery times, is
because they're making the product to order. That means they don't hold the inventory; they're not holding inventory. When they get your order, then they're going to get the inventory and ship it to you. Even Amazon, for example, they're not holding the inventory for many of the products; many of the items ship directly from the manufacturer. Their whole business model has evolved over the last 15 years or so. So one of our biggest challenges is around minimizing the amount of money that we have tied up in the business, right, our working capital. Inventory, for retailers, that's one of our biggest challenges: the amount of investment required in inventory. In some cases that could be a barrier to entry, which means that while we might have a good idea about opening up a business, right, opening up a storefront, we don't have enough investment capital to buy the inventory and to furnish it, to buy the merchandisers, the racks and all the other accessories, the cash registers, the counters, and so forth. Are we missing any other retail channels? So we talked about make to order. Make to stock means that we just have a certain amount of inventory on hand, so that when customers either come into the store or order it online, we have the product. Now there may be a point when we are out of stock: we had 10 items, we got 12 requests; that means that we're out of stock, we sold what we had. So some retailers, what they do is they anticipate a certain amount of demand and carry inventory to meet that demand. In fact, in most physical storefronts, that's what's done, right? Just about any major retailer that you go into, like Walmart, Kmart, Target, any of those, Best Buy, all the ones that we're familiar with, they have a store full of inventory. Why? Because they have an expectation about how many units are going to be demanded for each product, for each and every single item on their shelf, and so when they reach a certain level of inventory for each item, then they order more. Some
businesses have, like, a combination of both. For instance, the laptop market: Dell, if you order it online, they only have a certain number of choices, because those are, you know, the products they have in inventory. So it is made to order, but they already have stock items in there, right? So companies can have a hybrid strategy that blends both of these strategies, absolutely. What about other channels? What about drugstores, wholesale clubs, specialty stores? All right, so let's see if we can identify some specific retailers for each of these retail types, each of these channels of distribution, and talk about some of the similarities and some of the differences. There's a reason why we classify them this way; they're not the same. They could have some similarities; there are obviously some differences. So let's start; let's see if we can identify some retailers for each of these channels. What do you think, who's going to help me here? Let's say specialty stores: what's an example of a specialty store, which retailer? Go ahead, Demitri. A fishing supply store. Which one? Well, give me the name of one. Beres? Okay, Beres. Anything else? Staples. [inaudible]. Do you ever have trouble getting out of that place, like do you just keep walking around and around, and it keeps saying exit this way, and you walk past that side and then you come back to the other aisle and you walk back, and you're like, they want to keep you in there forever? Yeah, so in retail we refer to that as the flow. That's not by accident. Every retailer has a store layout, and the store is laid out in a particular way that they believe is going to result in a greater amount of sales. They have a certain flow: so you walk through one section that might have, let's say, shirts, and then you reach a point at the end of the aisle where you could only go right. You go right, and what do you pass by? Pants. So you walk through the area where they have
shirts; now you walk past the area where they have pants. You follow the flow of the store, and then what do you think is there next? Shoes. So you walk through the pants section, the shirt section, the shoes section, and then you end up where they have accessories, where you could get some Ray-Bans, some sunglasses. That's intentional. Retailers spend a lot of time on developing, designing, and implementing the flow in their store and creating what we call a visual theme. So they want the retail experience to be unique relative to other retailers; they want there to be this visual theme so that the store looks and feels a certain way, based on the colors that they use, the lighting, whether they have music or don't have music. What they're trying to control is all the touch points, and one of the things that they look at, we talked about the visual theme, the flow, and touch points: they develop a touch point map that shows all the points of contact that the customer will have with the organization. Everything should be consistent. You want to have the same brand image and same brand identity throughout the entire organization; whether it's on the company's website, whether it's in the store, everything has got to tie together and create this unique experience. Very important to create a compelling visual theme. Any place, like, you mentioned restaurants: think about what it's like when you go into Applebee's, what it's like when you go into the Hard Rock Cafe, what it's like when you go into TGIF. All of these places, for example, restaurants, have their own visual theme. They have different furnishings, different decorations, different flooring; the lampshades are unique; some of them don't even have lampshades, some of them just have recessed lighting. But all of that is by design; that's all intentional. Whether they have tables or whether they have booths, or whether they have both, that's all part of the visual theme in retail, which is very important. You have to design a visual theme: do
you want it to be Caribbean, do you want it to be focused on sports, do you want it to be focused on music? Just some ideas; that's something that we have to decide as marketers. What about mass merchants? Yeah, these are all the big-box retailers. Also, we didn't include general merchandisers like Sears. In the 1970s Sears used to be the nation's largest retailer, but what did they do? They decided to focus on, market penetration? No. Market development? No. Diversification. So in terms of the BCG model, they saw their retail operations as being the cash cow. Their retail operations had been around for a long time; they were very successful, very profitable; they were generating a lot of excess cash. So what did they do? They decided to buy Allstate, the insurance company. Is that diversification, buying Allstate Insurance Company? Yeah. They bought Discover Card, they bought Dean Witter financial services, and their plan was to use the excess cash from their retail operations to fund the growth of those organizations. And they did that for a while, and then they reached a point where what happened? Yeah, so they lost focus. Remember we said companies are being rewarded for focus. Now of course you want to try to extend your brand; you're going to want to extend your brand to other categories, but you have to know your limitations and you have to know your core competencies. And so while they were focusing on growing Allstate, which was a star, that business was growing and had a significant percentage of the market share, and their other businesses were growing, but, like Edward says, while they were focusing on that, their retail sales dropped over time. And so then they reached a point where they didn't have enough resources to fund the growth of these other strategic business units in their organization. So they sold off those other organizations and then decided that they were going to focus again on retail. Well, they're
still not the number one retailer, right? That's gone. In fact, their latest, as of five years ago or so, a little more, survival strategy was that they came together with Kmart: two struggling retailers coming together to try and survive. One of Kmart's challenges has been its lack of identity, because they're trying to compete with Walmart and Target, but Walmart knows who they are; their strategy is very clear in their minds and in the minds of customers, as it is with Target, and those strategies, those positionings, are very well articulated. Kmart is caught in the middle: they're not Walmart, they're not Target. So they want to try and differentiate themselves; they try to charge more by offering more. Like, for a while they had the Martha Stewart brand, so they had multiple product lines that were branded Martha Stewart at Kmart. What do we call multiple product lines that a company has? Right, it's the product mix: when you group product lines together, that's the company's product mix. That generated over a billion dollars a year in sales for Kmart. But is that really Kmart? Is that what you would expect to find at Kmart? It's more like Target: Target is definitely more fashion-forward, more contemporary; you would expect to find something like that at Target, but they had it at Kmart. And this is just an indication: although it was very successful, I mean, to generate a billion dollars in sales, that's an indication that the approach was successful, but it's not there anymore. Yeah, it seems like a mismatch, despite the success, right, despite the sales. Whereas Walmart is what we describe as EDLP, everyday low price, and they're not trying to say that they sell fancy things or that it's especially fashionable or well designed or any of those things. It's not that their products aren't good; they're good, they're reliable. But if you want something that's maybe a little bit more
trendy, then you're probably going to shop at Target, or you might shop at both retailers; it depends what you're going to buy. But you know that at Target you're also going to pay a little bit more. See, Walmart says, we know that it's an elastic market, that the market is price sensitive for the products they sell, and they say, you know what, we're not going to give you any more than what you need; you don't need all these bells and whistles. We can sell you a toaster for $8.97, because why do you need these other features that are going to increase the price? You'll never even be able to figure out how to use them; even your professor doesn't know how to use them. They say it comes with 98 different features, but how would you ever... I'm still trying to figure out how to sync my keyboard to my smart TV. That's great, the TV is smart, but what if the person is dumb, right? Why are they selling you smart TVs? Sell them to smart people. I mean, come on, it says sync, okay, pairing, like, I mean, yeah, I don't get it. Smart Water? Nice, Smart Water, great. The water is smart; that's why maybe I should start drinking that. Yes, smart, that's it, that's what I'll do. All right, what about grocery stores? Like supermarkets and whatnot? Yeah, supermarkets. Who are some of the... what is it? Pathmark. Pathmark. So you know that Pathmark and A&P and Waldbaum's and, is it Food Emporium? Food Emporium, I think, are part of the same company. Those are all examples of supermarkets. Any others? Stop & Shop, yes, Stop & Shop, and then what was the other one I heard? ShopRite. Let's see, Key Food, right. So you get the idea: all of those are different grocery stores, supermarkets if you will. Kroger, Albertsons, Publix. These are actually two of the largest, Kroger and Albertsons: extremely large, not big in this area, in the Northeast, but they're among the top supermarkets in the country. They've been growing, but their origins are in the Midwest.
Kroger and Albertsons. Who is the retailer that sells more grocery items than any other? Didn't I just say? No, actually it's a trick question. So which retailer sells more groceries than any other? Walmart. Yeah, Walmart. Walmart sells more grocery items than any other, even though they're not a supermarket, they're not a grocery store, but they sell more grocery items than any other retailer in the United States. Let's see, what about convenience stores? 7-Eleven. Yeah, 7-Eleven is a great example: thousands of 7-Elevens in Hong Kong; in fact they have a 7-Eleven literally on all four corners in some areas. So 7-Eleven is still growing, even though they have so many locations in the United States and even outside of the United States. Even in Brooklyn they just opened one on Avenue U and Flatbush Avenue; there's a 7-Eleven that opened there. They opened one right over here, right across the street from Target; there's a 7-Eleven there, behind Chase. So they're still growing. We have to decide: do we want to sell our product at 7-Eleven? Does that make sense? Are we going to be successful selling our product at that retailer? The ampm stores, the ones we mentioned? Oh, okay, yeah, that's right, that's a good example, like at, what is it, at Mobil they call them the Bowler Market, right, and Exxon, they call it Tiger Express. Yes, absolutely, they have a local presence, and certainly there are gas stations everywhere. So yeah, they sell milk, they sell bread, they sell orange juice, right, they sell all these things. You might think people would just go there to get gas, but no, they're right there, around the corner from where you live, so if you need some orange juice and Oreos, there you go, you can't go wrong. Yes, good example. Yeah, what about department stores? Good. We talked about, we said that these stores have different visual themes: Applebee's, TGIF, what
else did we say? Outback, et cetera. Olive Garden. Olive Garden, yeah, I love that salad; after I eat that salad, then I wonder, why did I order an entrée? What about drugstores? Yeah, Walgreens, CVS, right, yes, Duane Reade. And then somebody, I think, had mentioned the wholesale clubs, like BJ's, Costco, Sam's Club. Whole Foods. So how would you describe Whole Foods? Would you describe Whole Foods as a grocery store or a specialty store, or maybe both, a combination of both? Now let me ask you this: what's different about specialty stores and department stores, for example? Because there's a reason why we classify them; otherwise we would just call them all retailers. But why do we make that distinction? Why do we say, well, these are specialty stores and these are department stores? What do you think? [A student answers that a department store has an aisle, a department, for everything.] Yeah, so there's definitely a focus: the assortment at specialty stores is very narrow, very focused. So, like what we put up here, for example, Foot Locker. Foot Locker is also a good example of a specialty store. Why? What do they sell there, mostly? Yes, sneakers, right. Now yeah, they sell some T-shirts, they sell some other items as well, but their focus is on sneakers, and they don't just have 10 different types of sneakers; they have a very wide variety of sneakers, am I right? A lot of different brands, a lot of different styles, a lot of different features. Now that's different. Yeah, you could go to Macy's; they have sneakers there, you could buy sneakers, but they don't have all the different brands and all the different styles, all the different designs. But you definitely could get sneakers at Macy's, and you might be happy with the sneaker assortment that they have there and get a pair of sneakers that you like; maybe you bought a pair of sneakers there already. But definitely at specialty stores the depth of
the assortment is greater: you're going to have a much greater assortment, many more items, many more choices within that category. Now maybe sometimes that's the disadvantage: when you have to make a decision, too many choices, you can't decide. So that's definitely a very significant difference between department stores and specialty stores. We have to decide whether or not our product should be sold at these retailers, and then we have to allocate resources to try to get distribution so that our product is going to be on their planogram. So we talked about the visual theme; we said that a retail organization has a visual theme, that the store has a certain flow, and importantly there's what's known, for example in retail at these mass merchants, as a POG, a planogram, which means that there's a layout for each shelf, for each aisle, that they replicate throughout the chain. That's one of the secrets of the success of mass merchants, for example, and also at convenience stores and drugstores. So the layout is the same, the items that they have on the shelf are the same, the design is the same, the visual theme is the same. So basically the idea is, if you go into any Walmart or any Walgreens, you're going to have the same experience: aisle 2 is over there, and what do they have? The same items, the same products. Sometimes they try to regionalize the assortment; they realize that in certain areas of the country these products don't sell as well, so there are exceptions; every store is not identical. But they try, because they realize that that's going to help them be successful, because customers have that expectation that it's going to be the same. So customers want that. Don't you hate it when they do a planogram reset and you go in the store and, that's right, you can't find it? You're like, what happened? And then you start to think, oh, it's just been a long week, and, like, no, I know that the
orange juice was over there, and it's not. But they're trying to continually change the flow, because now they want to put the orange juice next to some other product that's going to help improve the shelf space productivity for the store overall. So doesn't it make sense to put the chips next to the salsa, put the tea bags next to the tea kettles? What was that? Yeah, exactly. That's what we refer to in retail as adjacencies, very important. A lot of time is spent trying to determine what's going to be next to what, so on one side of the aisle is going to be spaghetti and then on the other side is going to be the spaghetti sauce. Now we kind of take that for granted, and when I say that you kind of chuckle, like, yeah, but it wasn't always that way, and maybe that's not the best way to do it, but they continue to track and monitor to maximize their sales. That's known as shelf space productivity. So for every section, remember I told you that their stores have given departments; they look at the shelf space productivity. We're not just talking about what the sales are for the entire store; for the dairy section, or let's say for the pasta section, they're trying to determine: what is the shelf space productivity for this space? How many units are being sold for each item? What is the best-selling item? What is the amount of sales being generated for each item, each shelf? What is the amount of margin dollars, and also the margin percent? So for a given category, retailers have a goal in terms of the margin percent. Remember last time we talked about standard markup: they expect to have a certain standard markup, and that varies by category. In grocery it's generally much lower, can be 8 to 15%; in home furnishings usually 30%, sometimes 50%; apparel, we said, very often is 100% and even 200% markup. So they want to look at how productive that entire section is, because this item might be a 30% markup, but this item might be a 10% markup. Why
would they do that? That's known as margin averaging. Why would they do that? Why would a retailer agree to take this product and accept a 10% markup? On the product, or you mean the different stores? No, the product on their shelf. So if this is their shelf and they have these different products, their goal might be to have a 30% markup; that might be their standard markup, but that doesn't mean that every item on the shelf is providing them with 30%. Why would they have an item on there with a markup of only 10%? It might be a sale; they might be trying to move out some excess inventory, and at a lower price they expect that they're going to sell a lot more, and maybe they'll be able to transition into a new item. Because when we talk about a planogram reset, this item might disappear: if we're going to put our item in, something has got to go away. There's no more room; you just don't add on sections; your store is only so big, that's it. So if you're taking in something new, something else has got to go away, and that's why they're constantly looking and trying to anticipate. Like we've been talking so much about the price elasticity of demand and cost, to try and figure out: if I take in this new item, how is my entire section going to perform? Is it going to be better? So this is our shelf space; we're looking at the productivity: are we selling more units, are we selling more dollars, are we more profitable, what is our margin percent? Now what about if it's a product that's branded with a brand that has a very high level of brand awareness? You might expect that you're going to sell more, that your turns are going to be greater, so at that price you're going to sell more units; even though you're making less margin, your total revenue in this case, we would say, is going to be more. It's a reasonable assumption that a very well-known brand that's selling, even if that's at the suggested retail price, but for us is delivering less margin
percent per unit overall is going to increase our total revenue so in that case it would be make sense to say yeah we're um we're going to sell more units we're absolutely going to sell more units and it's going to compensate for the fact that we're making less perent per unit in that case it would make sense that would be a reasonable assumption sometimes we don't know if that's the case we might lower the price but then it's not going to increase the number of units that we sell enough to even reach the same level of total revenue all right so before we go keep this in mind for next time in terms of our distribution strategy is it going to be intensive selective or exclusive how we going to sell our product everywhere which would mean that our strategy our distribution strategy is intensive so we're going to try and sell our product to all of these retailers in all of these channels for some products that might make sense not for every product but for some or we might choose out of those we might choose only a few only a few channels maybe only 8 to 10 um specific retailers and then our exclusive strategy an exclusive distribution strategy means that they're going to sell even if it's just for a limited time at one retailer can you think of an example where we've seen that happen who's talking yes Mia Alexis okay Alexis yes go ahead with the iPhone and AT&T absolutely that's a great example of where they have an exclusive and for sometime in some cases what we do is we give a lead launch to a particular retailer where they have an exclusive for a certain period of time very often in certain categories we'll give a lead launch let's say to department stores we'll sell it at Macy's to establish a higher price in the market before we start to sell it to Walmart and Kmart and Target so for a period of time there may be one retailer or a few retailers in a given um channel that have an exclusive before it trickles down to other channels at a less expensive price 
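The margin-averaging and turns arithmetic above can be put in a short sketch. All the numbers and item names below are hypothetical illustrations, not figures from the lecture; the point is just that a well-known, low-margin, high-turn item can deliver more margin dollars than a high-margin slow seller, while the shelf's blended margin lands somewhere in between.

```python
# Margin vs. markup, and the margin-averaging idea:
# a retailer's blended shelf margin is a weighted average across items.

def markup_pct(cost, price):
    """Markup as a percentage of cost."""
    return (price - cost) / cost * 100

def margin_pct(cost, price):
    """Gross margin as a percentage of selling price."""
    return (price - cost) / price * 100

# Hypothetical shelf: (name, unit cost, retail price, units sold per period)
shelf = [
    ("house brand",  7.00, 10.00, 100),   # 30% margin, modest turns
    ("famous brand", 9.00, 10.00, 400),   # 10% margin, high turns
]

total_revenue = 0.0
total_margin_dollars = 0.0
for name, cost, price, units in shelf:
    revenue = price * units
    margin_dollars = (price - cost) * units
    total_revenue += revenue
    total_margin_dollars += margin_dollars
    print(f"{name}: margin {margin_pct(cost, price):.0f}%, "
          f"revenue ${revenue:,.2f}, margin ${margin_dollars:,.2f}")

# The blended ("averaged") margin across the whole shelf:
blended = total_margin_dollars / total_revenue * 100
print(f"blended shelf margin: {blended:.1f}%")
```

With these made-up numbers the famous brand earns only 10% per unit but, at four times the turns, contributes more total margin dollars than the 30% item, which is exactly the trade-off the lecture describes.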
Questions? Are we good? You guys rock. |
Marketing_Basics_Prof_Myles_Bassell | 9_of_20_Marketing_Basics_31912wmv.txt | All right, let's get started. What we're going to talk about today is placement. Last time we talked about price, right? So the marketing mix consists of product, promotion, price, and place. Those are the four Ps; we refer to that as the marketing mix. So today we're going to talk about place: marketing channels, channels of distribution if you will. I want to start our discussion by seeing if you can identify different channels of distribution. After we do that, we're going to talk about different retailers that are in those channels of distribution. First I want to identify the different channels of distribution and the different types of retailers; then we'll look at some examples and talk about specific retailers. But first, let's talk about the different channels of distribution. What are they? What are some of the different channels of distribution? Wholesale, right. So as part of the value chain we have processors, we have manufacturers, we have wholesalers and distributors, retailers, and the end user. So when we talk about retailers, for example, what are some of the different types of retailers? We're going to talk about the names of certain retailers, but before we can get to that, let's talk about how we classify those retailers. So, for example, how about mass merchants? Sometimes we refer to that channel of trade as discounters. Mass merchants, or discounters: that's an example of a channel of distribution. Department stores are another channel of distribution, another type of retailer. Grocery stores, drug stores, wholesale clubs, specialty stores: those are all examples of channels of distribution, all different types of retailers. So when we think about how we're going to sell our good, whether it's a consumable or a durable good, we have to think about what channel of distribution, what types of retailers, we're going to try and sell our goods in. And our distribution strategy, if we think of it on a continuum, could run from intensive to exclusive, with selective somewhere in between. When we think about how many different places, how many different channels, how many different retailers we want to sell our goods in, that's the way we classify the level of intensity as it relates to distribution. In other words, if our distribution strategy is intensive, like the word suggests, then our goal is to sell our goods everywhere, so to speak: at every retailer, in every channel of distribution. Now, it's not enough for us to have that as a goal, to say that our distribution strategy is intensive; it's got to be something that's achievable. If we say we want to implement a distribution strategy that's intensive, those channels of distribution have got to be relevant to our product. There's got to be a match, a logical association. So, for example, what do you think if we said, yes, I want to have an intensive distribution strategy, and our company sells sneakers, and you say, all those channels that you mentioned (mass merchants, department stores, grocery stores, specialty stores, wholesale clubs, drug stores, convenience stores), I want to sell sneakers in all of those types of retailers and all of those channels? Do you think that's appropriate, that's realistic? What do you think, Jacob? "You might not want to put the sneakers in, like, a pharmacy, or someplace that's not fitting for your product." Yeah, so there's a disconnect there: typically you don't buy sneakers in drug stores or grocery stores, right? Maybe a selective distribution strategy is more appropriate, which would suggest that
yeah, you're right, Coach, let's leave those out. So we'll scratch convenience stores, drug stores, and grocery stores from the list, and the others we feel are appropriate for our product; they're relevant to that channel of distribution. So instead of intensive, our distribution strategy would be selective. And the other extreme would be if our strategy was exclusive, which would mean that we're only going to sell our goods at specialty stores, or maybe only at our company-owned stores. So that's something we need to decide strategically: is our distribution strategy going to be intensive, selective, or exclusive? Now, what would be an example of an exclusive distribution strategy that we've seen in the marketplace? Sometimes, in fact it's become more common, companies give certain channels of distribution what we call a lead launch. In order to establish a higher perceived value in the marketplace, it's common that manufacturers will sell a given product first in department stores. They might give them an exclusive for a limited period of time, sometimes six months, for example, and after that period has expired, sell it in specialty stores and then discount stores. But that being said, let's see if we can come up with one example of an exclusive distribution arrangement. Go ahead. "Medicines and pharmacies." So tell us about that; explain. "The medicines are exclusive to pharmacies, so for prescription medications, you can't get them at a department store." That's actually an interesting example, because discounters very often have a pharmacy within their store; they understand that it's going to drive foot traffic to their establishment. And grocery stores also very often have a pharmacy within their store. But that being said, from a manufacturer standpoint, from a pharmaceutical standpoint, you could still argue that we're selling to the pharmacy within that store, and that's a type of exclusive distribution strategy, even though it's basically a store within a store, which has become more and more common. What's driving this store-within-a-store approach to retailing? Why has it become so common? For example, at Macy's at Herald Square in Manhattan, they have quite a few stores within the store. Why is that? They have Starbucks there, in fact quite a few Starbucks in Macy's. The Ralph Lauren section is really a store within a store; Tommy Hilfiger is a store within a store; the cosmetics counters are also examples of stores within a store. Why is that? What is it that Macy's is trying to achieve as a business strategy? "You have a lot of different stores that are all in the same category in the same place. It makes more sense to have everything in the same place rather than a separate store here and a separate store there." Oh, absolutely. Macy's is the department store; that's part of their value proposition. So Kevin is saying, yeah, instead of going to a specialty store, come to Macy's: we've got cosmetics, we've got sneakers, we've got shoes, we've got shirts, pants. That's absolutely part of the appeal of that channel of distribution. But think about it from a merchandising standpoint and also an inventory standpoint. What Macy's is trying to do is effectively manage inventory, and also the risk associated with operating a coffee shop within their building. Why would you want to do that, when an alternative approach is to have these types of concessions? So you have a counter, and it's a Lancôme counter, for example, or it's the Bobbi Brown counter, and those organizations pay Macy's for that space. So instead of Macy's taking all this inventory in, and
then, if it doesn't sell, basically being stuck with the merchandise (although of course most manufacturers have return policies and so forth), wouldn't it be better to rent that space to them? So they rent a portion of the counter to these different cosmetics companies. "Could it be similar to bartering?" What was that? "Could it be similar to bartering?" Well, in this case it's not bartering. I see the connection you're trying to draw, but in this particular situation that's not the case. What they're trying to do is reduce the amount of cash they tie up in inventory, reduce the risk associated with the inventory not selling, and also minimize the number of employees they need to hire to run the business. For example, very often the staff is not employed by Macy's; in some cases the staff is employed by the manufacturer, or the marketing organization. So when you go to the Starbucks there and you get a receipt, what do you think it says? Does it say Macy's or does it say Starbucks? "Starbucks." Yeah, in that situation it says Starbucks. So Alan raised a good point, this idea of selling exclusively to pharmacies, but we saw it's not as easy as it seems, because there are very often pharmacies within different channels of distribution. What about the iPhone? Did Apple have an exclusive distribution arrangement with AT&T? Do you think that's a good example? They could have sold iPhones through mass merchants, right? Mass merchants like Walmart sell phones; specialty stores like Best Buy sell phones; Sprint sells phones; Verizon sells phones. But who can tell us more about their distribution strategy? What was the distribution strategy for the iPhone? What did Apple decide? What was their approach? "They only used AT&T as their provider, so anyone who wanted the iPhone had to have AT&T service to use their phone. Then only after, I think it was like six years or something, they allowed Verizon to also distribute it." And so there's something in it for AT&T. Absolutely, it's definitely a benefit to AT&T to be the exclusive distributor of the iPhone. "Because even though, like, Verizon now has the phone, people are so used to their AT&T service, or they're happy with it, they're not complaining, so they're not going to switch back to Verizon. So they just gained a bunch of customers from that." And what's the benefit? For AT&T it's a great deal, because they have the most innovative new phone on the market, and if wireless customers want the iPhone, they have to have AT&T wireless service. So it sounds like a great partnership from their perspective. What about from the perspective of Apple? Why is it a good thing for them? Why would they not say, we're coming out with the iPhone, and we're going to sell it at AT&T, of course, and also at Verizon and Sprint and other places? "If it becomes exclusive, I think people want it more in general. If people can't get it everywhere, you want something like that; you want an item that most people can't get just anywhere on the street, where you have to be, like, an exclusive member of a club to be able to get it. So that creates some appeal; people talk about it. And also Apple gets paid a lot by AT&T: in order to have the deal, AT&T paid a lot to Apple." So tell us about that. How does that work? "If AT&T wants the iPhone exclusively, they have to pay, I don't know how many millions, to Apple." So there's an advantage: if you put the idea of giving an exclusive on the table, then isn't it reasonable that you would expect,
because, like Jonathan said, it's so advantageous to AT&T, that they should expect to have to pay Apple to get that exclusive arrangement. When you sign a two-year agreement for AT&T wireless service, you can get an iPhone for $200. How much does the phone actually sell for? If you had to pay $200 for the phone at the AT&T store when you signed the two-year agreement, what is the actual price of the phone? "$650." Yeah, it's approximately $600. So who do you think is paying the difference? Who has the greater incentive? The carrier, right, because they don't expect to make money on the phones. In fact, not just on the iPhone but on every phone; some phones they give you for free. They say, we'll give you a phone. You think, really? Because what they want to sell is the service; they want to sell the wireless service, and if you don't have wireless service now, they'll even give you a phone. Nothing's free, right? There's no free lunch. But there's an incentive for them; they're thinking long term. They're saying, we'll give you the phone for free, and you sign a contract with us, and we'll provide you the wireless service. What else? If we have some insight about the industry dynamics, what's the appeal of partnering with AT&T? Go ahead, Jason. "If Apple gives AT&T their product, it's another shelf space, an opportunity to sell their product and make more sales, as opposed to just selling in an Apple Store, where people have to come to Apple and then get the service. Whereas this way it's direct, the product and the service together. Or, I guess, co-branding." Co-branding, yeah. So tell us more about that. Why AT&T, though? Why don't you co-brand, then, if that's what you're going to do, with Verizon? Why AT&T? What do we know about the market dynamics in wireless communication? Well, first of all, we know that the wireless communication market in the United States is very highly concentrated, which means that only a few wireless carrier providers control literally 90% of the market, and AT&T is actually the largest; in fact, at the time they had about 50 million subscribers. So it wasn't that they just randomly picked a provider; they wanted to partner with the largest provider of wireless communication in the United States, and the expectation, this was their hope, was that a large percentage of those subscribers would trade up to the iPhone. And as Jonathan was suggesting, AT&T, being a very well-established, respected, and prominent wireless communication carrier, would also be able to steal customers from their competitors. The thought is that if you want the iPhone, you would switch from Verizon or Sprint. At $600 you're not going to sell to everybody; remember we talked about the diffusion of innovation model? Everett Rogers says that the innovators are about 3% of the market. So they didn't think, even if we said, okay, it's going to be intensive, we're going to sell iPhones everywhere, I mean, how many people realistically are going to buy one, even in the United States, where the per capita income is very high relative to other markets around the world? How many iPhones are you going to sell, even with a population of 300 million, which, by the way, represents only 5% of the world's population? You think 300 million is a lot? That's only 5% of the world's population. In China there are 1 billion 300 million people, so China is the most populated country; India is the second most populated, also with a bit more than a billion people. So realizing that they wouldn't be able to sell to the entire market initially, they thought it was realistic to assume that partnering with, and giving an exclusive to, the leading wireless service provider was the best thing to do, because they would have access to the 50 million subscribers that AT&T currently had, and that people who didn't
have AT&T service, those innovators, those early adopters who are willing to pay a premium for the phone, would also be willing to switch carriers. I mean, it's not crazy to think you'd switch from Sprint to Verizon, or Verizon to AT&T, is it? Is that an unheard-of phenomenon? No, it happens. Sometimes if you break the contract you have to pay a fee, sometimes $50, sometimes $150, it depends; but if you really want the iPhone, they thought that wouldn't be a deterrent, that it wouldn't be a reason not to buy just because it's not distributed at Verizon. They said, well, we're not really taking that much of a risk; we're not going to sell to everybody right away anyway. We'll give an exclusive to AT&T; it'll be a win-win partnership. We'll certainly have access to their already very significant installed base (we usually refer to their existing customers as an installed base), which at that time was about 50 million, and those Sprint and Verizon customers who really want the iPhone will switch. So that's a good example of exclusive distribution. Let's go back now. We identified some of the channels of distribution, some of the types of retailers. Who can tell us what they are, what are some of the types of retailers that we identified so far? And then we'll talk about the names of some of those. Let's recap: who can tell us some of the channels of distribution, some of the types of retailers that we identified? You're sticking with that answer, huh? Again, Isaac: mass merchants, we said, also known as discounters; department stores; specialty stores; grocery stores; drug stores; wholesale clubs; convenience stores. So now let's see if we can come up with the names of some retailers in those categories. "Costco, for wholesale clubs." Okay, so for wholesale clubs we have Costco. What else? Let's see, who are some of the other competitors in wholesale clubs? "Sam's Club." Sam's Club. "BJ's." BJ's. So in the United States those are the three key wholesale clubs: BJ's, Costco, and Sam's Club. What's unique about that channel of distribution? Because, remember, we said we like to think about whether or not it makes sense to sell our products at Costco or Sam's Club. So what is unique about that retail format? There's a reason why we classify those channels that way; obviously there are some differences among them. What is it about wholesale clubs that makes them unique relative to discount stores and specialty stores and so on? "A lot of it is business to business for them. Since they have mass quantities of products, they're selling to other businesses a lot of the time." So sometimes businesses shop there, because what Jonathan is saying is they sell in bulk. It's not in a quantity that the average family would need, but if you have an office, then maybe you buy three million staples, right? Three million staples. Exactly: the average person, you wouldn't think, would need that many staples, at least not within the course of a decade, but if you have 50 people in your office, then you might. What else? They do sell in bulk, but is it just business to business, or do they also sell to consumers? They do sell to consumers; consumers shop there and buy certain items in bulk. And don't they have a membership? You have to pay a membership fee, which is about $50 per year, and the thought is that they sell these products in bulk, at a discount relative to what you would pay in other channels of distribution. So they sell cashews, for example, but you have to buy a big jar, which might be $20, where in some other channels, maybe at a discounter, you'd buy just a small jar for maybe only $3. So you have to ask yourself, you buy this big jar, who's going to eat all that? Maybe if
you have, if your family really likes cashews and you're married and have eight kids, then maybe it makes sense. Or cereal, for example: they sell very large boxes of cereal; in fact, there are, like, two bags inside the box. "Family size." Exactly, it's family size, and they have a whole range of products. But like Jonathan is saying, the key is that it's sold in bulk, in large quantities, whatever it is. You normally think, oh, you buy a little pack of M&M's; there they sell, you know, these big bags of M&M's. "Everything you pick up there is, at a minimum, like $8.98. You don't need a bag for anything." Yeah, usually. You want to get a can of soup? Well, you can't just buy one can; you have to buy six cans, and six cans are $7.99. That's six cans of chicken noodle soup, and you think, well, if you have a big family, then maybe that makes sense. Sitting here we think, oh, that sounds like a lot; maybe it goes very quickly in a big household. So certainly bulk is a key part of the way they stock their store. They sell products in bulk; not every item is in bulk, you could buy a pair of pants there, but usually, like Jonathan is saying, everything is in a supersized quantity. You buy laundry detergent, and it's not one of these little 18-ounce containers of laundry detergent; it's a big, like, 96-ounce container of laundry detergent. What about mass merchants? Who are some of the key discounters in the United States? An example of a mass merchant would be Walmart. Walmart is the world's largest retailer, obviously the largest retailer in the United States, and they're considered a discounter, a mass merchant if you will. Kmart is also classified as a mass merchant; Target is also a mass merchant. How is their business model different from Costco's, for example, from wholesale clubs? There's no membership; certainly there's definitely no membership fee, and the price is equivalent; I would say maybe wholesale clubs are less per ounce or per pound. But certainly mass merchants, discounters, have what we call an EDLP approach to retailing, which is everyday low price. Walmart, right? At Walmart they don't run sales per se. Sometimes they mark things down, but part of their retail strategy is that they focus on rollbacks. So after the product is in the store for a certain period of time, let's say a year, and they reset the planogram (the planogram is that standardized layout for their stores and for the products they carry on the shelf; they standardize what products they carry on the shelf in all their stores as much as possible), it's everyday low price, and they try to have these rollbacks. After the product is in the store for a year, the next year they want to sell it for less. Maybe the first year they have the product in the store they sell it for $3.97; next year they want to be able to sell the product for $3.48, and the year after that, $3.27. That's known as rollbacks, and the promise is that they continue to reduce the price paid by the consumer on an ongoing basis. In fact, they often caution us to watch for falling prices; you ever see some of their commercials? Watch for falling prices. That's what the rollbacks are about; that's an important part of their retail strategy, and they expect customers are going to look for rollbacks, price reductions. But that's different from department stores. Department stores are known as high-low retailers. They're not everyday low price: their prices are high, and they have sales often, very often. So they have sales very frequently. "TPR?" Yeah, those are a type of TPR, a temporary price reduction, because after the sale is over, right, they say, come in, the sale is 50% off, and sometimes they have it as a one-day sale. So Jacob is saying, yeah, it's like
this: you had this TPR, this temporary price reduction, 50% off on Wednesday. And yeah, they have a sale for everything: Columbus Day, Presidents' Day, any excuse for a sale. So we refer to those as high-low retailers. When do you think they do most of their volume, most of their revenue? "Christmas, the holidays, the fourth quarter." Yeah, certainly for most retailers the fourth quarter is important, and when they have the sales, they definitely drive a lot of foot traffic. They advertise that they're having a sale, they send their promotional material, whether it's a postcard or a catalog, to your home, and that drives a lot of traffic to the store and generates a significant amount of sales. So what would be some examples of department stores? "Sears." Wow, that's interesting; hold that thought. Is that not a retailer? It is a retailer, but let me come back to that, because we would actually describe Sears as a general merchandiser. Now, Sears, 30 years ago, was the nation's largest retailer in the United States, but we're going to come back to that; we would classify them as a general merchandiser as opposed to a department store. "Would Nordstrom also fall in the same category as Sears?" No, don't say that. "Or Saks?" Oh, no. Saks Fifth Avenue, Macy's, Lord & Taylor, Nordstrom, Neiman Marcus, Bloomingdale's: those are all department stores, all examples of department stores. And certainly some department stores sell merchandise at a higher price than others: Neiman Marcus and Nordstrom are at the higher end, Bloomingdale's and Saks also. Now, we talked about the fact that department stores, besides very often having this high-low pricing strategy, have a variety of departments. They have a lot of different departments, thus their name, department stores, that classification. Why? Because they have a shoe department, a watch department, a cosmetics department, a bedding department, all these departments within the store. So they have something in a lot of different departments. But that's different from a specialty store. What's the difference between a department store and a specialty store? Because a specialty store might also sell sneakers, and Macy's sells sneakers, but what's the difference between Macy's selling sneakers and a specialty store selling sneakers? "The specialty stores just sell sneakers." Yes, absolutely. Specialty stores have a narrow focus, but within that focus they have a wide assortment; they have depth. Sure, you could get sneakers at Macy's (although "Macy's, we're a part of your life," right? You're familiar with Macy's advertising; you want me to sing the song?). At Foot Locker, for example, they have a lot of sneakers. Macy's has quite a few sneakers there too; you could get nice sneakers at Macy's. But at Foot Locker they have a lot of sneakers, a lot of different kinds of sneakers, a lot of different brands, a lot of different styles, hundreds. If you look at the number of SKUs, the number of stock-keeping units, they have a very wide variety; the assortment is very large. "And it's usually newer. Foot Locker will have newer ones, and Macy's will have, like, last season's." Well, that's not something I'm sure Macy's would agree with; as a department store, I'm sure they wouldn't be happy about that. But it happens; it's reality. I mean, as marketers we need to manage these channels, so what we might have done is give Macy's an assortment of sneakers that other retailers don't have. Even if we don't give a retailer an exclusive on a product, on a product line, we could still give them
an exclusive on an item so you might be the only one that has that sneaker in Orange so that's that's very possible so of course what we're trying to do as marketers is differentiate the assortment that's carried at different retailers in different channels one of the challenges that that we face for example if we carry um we ask Walmart to carry our product is that Walmart could sell the product at a much lower price than other retailers so if that's the case well then how are you going to sell that at Macy's how could you sell that same product at Best Buy that's embarrassing to somebody to come in there and say why saw this at um Walmart it's $20 less that's not funny that's like really embarrassing that people think where you've over you're overcharging them but what marketers do is work really hard to differentiate the assortment so to make the item unique so the item especially that Walmart has you want to be a unique SKU whatever that means unique color unique configuration unique features so that other retailers are not going to be embarrassed because there's no way that they could sell the product at the same price as Walmart so if they are going to carry it at Macy's then it's got to be with more features more benefits so that Macy's can say well it's not the identical product of course it's a right whatever the product type is yes it is a sneaker it is a MP3 player but this particular item has these features that the Walmart product doesn't that's our responsibility as marketers to make that happen because when you go to to Best Buy when you go to whoever the um the retailer is they're going to ask well what what is um what are you giving Walmart because they know that Walmart not because of predatory pricing but because their course structure their operating expenses are so much lower than other retailers that they could buy it literally for the same price because the government says you have to sell um the product for the same price they could buy it 
Because of that lower cost structure, they require a smaller markup on the product. But if your item is unique, if it has fewer features or a different configuration, then you can charge a different price. For example, let's say we sell plastic storage bowls, each with a snap-on plastic lid. If we sell to Bed Bath & Beyond and they want the product in a box, a shipper that holds eight plastic containers with the label on and the plastic lids snapped on, because what we ship has to fit on the shelf, then we're going to charge them more than if we sell that product to, say, Dollar General (since we mentioned dollar stores), where they don't require eight in a box: they'll take 50 in a carton, and we don't need to snap the lids on. Then we can charge Dollar General a lower price, because as far as the government is concerned we can justify that our cost is different. In the manufacturing facility we don't have to pay another person on the line snapping the lids onto the containers; Dollar General says, "No, it's okay, just send us 50 bowls and 50 lids, that's the way we're going to sell it; let the customer grab a lid for each bowl." So there's got to be differentiation, and if there is differentiation, we can also charge a different price.

We've talked about mass merchants and department stores; what about grocery stores? Who's the largest grocer in the United States? You're probably thinking of names, so tell me some grocery stores you're familiar with. Fairway, right. Kroger. Wegmans. Wegmans is a nice store.
Publix. You go into Wegmans and you think, apples? They have like 30 different kinds of apples; whoever heard of such a thing? Giant pomegranates too. Pathmark, Albertsons, Whole Foods: all of those are examples of grocery stores, and none of them is the largest grocer in the United States. The largest grocer, not the largest traditional grocery store, is Walmart. Even though Walmart is obviously not a grocery store but a mass merchant, it sells more groceries than any other retailer, any other grocery store, in the United States. Kroger is the largest traditional grocer in the United States.

What is it about grocery stores that's unique, besides the groceries? One thing is the assortment they carry. Their focus is on food, but they have also implemented what we call a scrambled merchandising strategy, which means they carry a variety of diverse products besides groceries. You can go into some of these grocery stores and buy a light bulb or a mop; those are not groceries, but they sell them.

What about drugstores? Walgreens, Rite Aid, CVS, Duane Reade: those are all very well-known drugstores, and Walgreens and CVS are certainly the largest. Each has increased its number of locations very significantly in the last several years on a national basis, and Duane Reade on a regional basis; in Manhattan they're practically on every corner, is that not so? Drugstores also follow a scrambled merchandising strategy, which, believe it or not, was not so common 25 years ago. You can get laundry detergent, shoelaces, just about anything in the drugstores, depending on the drugstore; some are larger than others.

In fact, you might almost say that drugstores, convenience stores, dollar stores, and specialty stores have a rather small format compared to discount stores. Walmart supercenters are about 200,000 square feet, which, to put it into perspective, is huge; 200,000 square feet is an enormous store. Some convenience stores are only about 2,000 square feet. Drugstores have actually increased in size quite a bit over the years as they've broadened their assortments very substantially, so they are definitely not as small as convenience stores, but they're certainly not as big as mass merchants, discount stores like Walmart or Kmart. Not all of the Target stores or the Kmart stores (Kmart refers to its superstores as Big K) are 200,000 square feet; some are a bit smaller, but most are still around 150,000 square feet, which is very large. Specialty stores are also a fairly small format. A Foot Locker, for example, is not really much bigger than this room; they just have wall-to-wall sneakers. It's probably bigger than your bedroom, but how big do you think those stores are? What do you think the dimensions are?

A question from the class: is Home Depot a specialty store? Ah, Home Depot. How would we classify Home Depot and Lowe's? Well, they used to be strictly home improvement, but now they sell furniture too, so it's more of a general home store.
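The footprint comparison can be put in one place. The supercenter, Big K, and convenience-store figures are the ones quoted in the lecture; the drugstore figure is an assumed mid-size number, since the lecture only says drugstores sit somewhere in between:

```python
# Approximate store footprints in square feet. The 200,000, 150,000,
# and 2,000 figures come from the lecture; the drugstore figure is an
# assumption standing in for "bigger than a convenience store, far
# smaller than a mass merchant".
FORMAT_SQFT = {
    "Walmart supercenter": 200_000,
    "Big K discount store": 150_000,
    "drugstore": 15_000,
    "convenience store": 2_000,
}

# A supercenter is a full two orders of magnitude larger than a
# convenience store.
size_ratio = FORMAT_SQFT["Walmart supercenter"] // FORMAT_SQFT["convenience store"]
```

That 100x ratio is the professor's point about formats: "small format" and "large format" are not small differences of degree.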
They've expanded from home improvement into many things. A student draws the distinction: the construction aspect versus the furnishings. Selling the patio furniture is different from selling the bricks to put down the patio, and now they sell both, plus appliances, ovens and so on. So we could describe them as general merchandisers, though they're still very often referred to as home improvement centers. Home Depot has been very successful; in fact, to your point, at some locations they even sell gas. That's right, Walmart too, and Sam's Club.

Coach, is there a reason why Sam's Club sells gas you don't have to be a member for? I'm not sure. I guess they don't want that to be a barrier to customers; they figure they might lose customers because of it. But doesn't that go against their whole model? You're right, it does go against their business model. So do you think it's a good idea? No? You think it should be for their members only, that membership has its privileges, that discounted gas should be one of the things people pay the annual fee for? Think about it, though: maybe it gets more people onto the lot, and people spend. So what happens when you go in there to pay; do you think they try to sell you a membership? A student says that at the Sam's Club pumps they haven't tried, that they don't even ask for your card. And someone suggests it may be a legality issue, that under some state or city regulation you're not allowed to restrict gas sales to members. That's interesting; I'm actually not aware of that, but that may be the case. And it does say Sam's Club right there at the pump.

So what do you think about the idea that they charge a membership fee? It's not a little bit of money, either; it's about $50 a year, all based on the premise that what you buy there is going to be significantly less, that your annual savings will be hundreds of dollars, and in exchange they ask that you pay a fee to enable them to do that. That's another example of exclusivity, where your customers are exclusive. You might argue that if they didn't charge the $50 fee maybe they would have more customers, and maybe that would be better, but at this point their strategy has not wavered.

All right, so we talked about the different types of channels and the different retailers in those channels. Let's see if we can get through this; we have another hour. Here we go. It says two students, Nick and Lee, were studying for an upcoming exam in their Introduction to Marketing course.
While studying the chapter on marketing channels and wholesalers, Nick made the following statement: "If it weren't for wholesalers and other intermediaries in the channel of distribution, the products we buy would cost a lot less." What do you think about this? It's interesting; we talked about this when we discussed selling through company-owned stores versus third-party distribution. After contemplating Nick's statement, Lee said, "Wait a minute. We learned in class that channel intermediaries actually make marketing more efficient by minimizing the number of transactions necessary to sell products." What does Lee's statement refer to? The fact that value is created by channel intermediaries. Nick was really just dismissing the value that the intermediary in the value chain provides. All intermediaries provide certain functions. For example, what is one of the key functions of a distributor or a wholesaler? One of the main functions is that they break case packs. If your business is large enough, you can order directly from the manufacturer, but manufacturers sell in what we would consider bulk. You say, "Well, send me three," and they have a case that holds 20. So what the wholesaler does is send three to Isaac's shop, three to Jacob's shop, three to Noah's shop, and ten to Vitki; don't ask me why, but they do. That's an important function. And of course, as a wholesaler, as a distributor, you make a certain amount of margin.
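The case-pack-breaking function can be sketched in a few lines. The order quantities are the ones from the example above, and the case size of 20 is as stated in the lecture:

```python
import math

def cases_needed(orders, case_size):
    """Number of factory case packs the wholesaler must open to fill a
    batch of small retail orders, plus the units left over in stock."""
    total_units = sum(orders)
    cases = math.ceil(total_units / case_size)
    leftover = cases * case_size - total_units
    return cases, leftover

# Three shops order 3 units each and one orders 10, but the
# manufacturer only ships cases of 20: one opened case covers all four
# orders, with a single unit left on the wholesaler's shelf.
cases, leftover = cases_needed([3, 3, 3, 10], case_size=20)
```

None of the four shops could have ordered direct, since every order is below the 20-unit case minimum; the wholesaler absorbs that mismatch, which is precisely the function it earns its margin on.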
Remember when we talked about vertical integration, we asked why a company like Kenneth Cole, a designer, would have 400 company-owned stores that sell their product and also sell in Macy's. Because they understand that Macy's, as the retailer, is an intermediary. Macy's earns margin, and it's entitled to earn margin, but in a company-owned store Kenneth Cole earns the margin of the retailer on top of the manufacturer's, the designer's. So it makes sense for them to want to vertically integrate. That's why companies vertically integrate: they want to be more profitable, more efficient. In that case they're not only manufacturing the shoes and the clothing but also selling them. Horizontal integration would be this: let's say your organization is a specialty store that sells sneakers, and you also open a specialty store that sells kitchenware, then a specialty store that sells clothing, then a specialty store that sells electronics. That's an example of horizontal integration. Why? Because your business is retailing; if you're expanding that way, then you're opening all these different types of retail shops, these different types of specialty stores. Vertical integration means we're not only the retailer but also the manufacturer, maybe the processor, and so on. All right, so we'll continue next time. Good job, you guys rock.
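The margin logic behind that vertical-integration decision can be made concrete. All dollar figures below are hypothetical illustrations, not numbers from the lecture:

```python
# Why a designer with company-owned stores captures more margin per
# unit than one selling only through a department store. Every figure
# here is an assumed, illustrative number.
WHOLESALE_PRICE = 60.0   # what the designer charges a retailer like Macy's
RETAIL_PRICE = 120.0     # what the shopper pays at the shelf
MFG_COST = 35.0          # designer's cost to make the item
STORE_OPEX = 30.0        # per-unit cost of running a company-owned store

margin_via_retailer = WHOLESALE_PRICE - MFG_COST          # maker's margin only
margin_own_store = RETAIL_PRICE - MFG_COST - STORE_OPEX   # both layers of margin
```

Under these assumptions the company-owned store clears $55 a unit against $25 through the retailer. That's the "more profitable, more efficient" point: the retailer's margin doesn't disappear, it just stays in-house.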
Marketing_Basics_Prof_Myles_Bassell | 10_of_20_Marketing_Basics_32112.txt

So let's continue where we left off. We were talking about marketing channels; we were discussing those two students, Nick and Lee. Remember, they were studying for an upcoming exam in their Introduction to Marketing course and had an interesting conversation while focusing on marketing channels and wholesalers. We said that Nick made a statement: in his opinion, if it weren't for wholesalers (like Barry) and other intermediaries in the channel of distribution, the products we buy would cost a lot less. But after contemplating that statement, Lee said, "Wait a minute. We learned in coach's class that channel intermediaries actually make marketing more efficient by minimizing the number of transactions necessary to sell products." So what's the discrepancy here? What's actually being addressed when he says channel intermediaries make marketing more efficient? The fact that value is created by intermediaries. Remember, we said that the intermediaries in a value chain each provide a certain function, and in exchange for that function they earn a certain amount of margin. Marketers are aware that channel intermediaries create value. Remember, I asked what the role of the wholesaler or distributor is, for example, and we said they break case-pack quantities. In a case pack there might be, let's say, a dozen or two dozen units, but if we own one store, maybe we just need to order three units. If we only need to order three units, we're not going to be able to order direct. If we're thinking about consumable health-and-beauty products, you can't call up Johnson & Johnson and say, "Oh, I have a shop, I'd like to order three baby powders and three baby shampoos." You can only order direct if you're going to order in large quantities. The distributor breaks the case-pack quantity; that's an important function. They say, "It's okay, we'll send you three, we'll send Herzfeld three, we'll send Alexi three, and Brick will get six." So they provide that important function, they add that value, and of course they're entitled to a certain amount of margin. (Max, thanks for showing up today; thanks for the shout-out.)

All right, here we go, next question. It says: in terms of distribution, when marketing channel members are engaged in buying, selling, and risk-taking, they are performing what type of function? Transactional, right; the best answer is E, transactional. By definition, that's exactly what the transactional function is: the buying, selling, and risk-taking, because they're stocking inventory, holding inventory with the expectation of sales. They expect that they're going to get customers that purchase the product, and based on that expectation they carry the merchandise. And what's significant about them carrying the merchandise? Why is holding inventory always a concern for a business person? They don't want it to expire in value, in a sense; they want to flip it. Certainly you want to maximize the number of turns, and you wouldn't want the merchandise to depreciate, you wouldn't want it to be perishable, you wouldn't want it to become obsolete. But there's another issue: we're tying up cash.
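The "tying up cash" point is easy to quantify with a standard inventory carrying-cost calculation. The unit counts, unit cost, and carrying rate below are assumed for illustration:

```python
# Holding inventory costs money even before warehouse rent: capital is
# tied up, plus insurance, shrinkage, and storage. Figures are
# hypothetical, not from the lecture.
def annual_holding_cost(units, unit_cost, carrying_rate):
    """Yearly cost of holding inventory: value of stock on hand times
    the annual carrying rate."""
    return units * unit_cost * carrying_rate

# 5,000 units bought at $8.00 each, carried at a 25% annual rate:
# $40,000 of cash sits in inventory, costing $10,000 a year to hold.
holding_cost = annual_holding_cost(5_000, 8.00, 0.25)
```

That is why the professor links inventory to risk-taking in the transactional function: while the cash is committed, the stock might fail to turn, depreciate, perish, or go obsolete.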
You want to have enough inventory on hand to meet demand, but there are holding costs associated with that. If you purchase merchandise, you need to be able to pay for the inventory, and in some cases there are storage costs associated with the merchandise as well.

The next question is about the different types of utility. What are some of the different types of utility? Form, place, time, absolutely. This question says that enhancing a product or service to make it more appealing to buyers is a function of form utility; that's talked about on page 382. What does that mean? You would change the product or service, modify it, to make it more appealing. Why does that make sense to do? Go ahead, Lexi. Because what you see is what makes your choice at the end of the day; the way it looks is a very strong trigger that makes people more drawn to it, especially with things like food, where we base a lot on what it looks like. Yes: the way the product looks, even the packaging, the shape, the color, the form, is absolutely something the customer processes and ultimately bases the purchase decision on. And we said that if we're going to be successful as marketers, we need to be willing to change the form of our product as necessary so that we can meet the needs of our target market very well. We need to realize there are different segments in the marketplace with different requirements, so we're going to modify the product; maybe that color is too dark, so we change the form of the product so that it's relevant to the target market and becomes a product they're going to adopt. One size does not fit all, so we need to customize the product, tailor the product.

The next question talks about the snack vending machine located in a university building, the one you use between classes to buy ice cream cones when you're hungry. Right, it's time and place utility; that's on page 382. Right here, right now.

Which of the following services or products must be provided by traditional marketing channels? Healthcare, yes. Why is that? Well, for the other items there's a way around the traditional channel, but for healthcare there really isn't: to receive healthcare you need to go to the doctor's office. Education you can do online; music and software you download; even car-rental reservations you make on the internet. All the others you can do online, but for healthcare there's no substitute for visiting the doctor's office. What about WebMD? They can't write a prescription for you; they're just a source of information (and everyone who reads it thinks there's something wrong with them). So out of those five, healthcare is the one that must be provided by traditional marketing channels. This next question is interesting; it's about pharmaceutical companies.
It says that pharmaceutical companies sell to hospitals and clinics directly; they market their products to large retail chains that distribute the medicines to their stores across the nation; and they also sell to drug wholesalers that sell to the remaining independent drugstores in the United States. What method of distribution best describes the one used by the pharmaceutical companies in this example? Who said dual? Alexa, yes: dual, because they're using both direct and indirect channels. In this scenario it's direct to the hospitals and the clinics, and indirect through the other intermediaries. Of course they want to sell their pharmaceutical products to the drugstores, but they realize they need a dual distribution strategy, which is exactly what we were just talking about: how are we going to successfully distribute our product? In this case they need to use multiple channels of distribution. Remember, we also talked about Kenneth Cole. We said they sell direct to consumers through their own company-owned Kenneth Cole stores, and they also sell through third-party retailers such as Macy's. Questions about that? Are we good? Great.

The next question talks about wholly owned extensions of the producer that perform wholesaling activities. What are they referred to as? The best answer is D, the branch office, the sales office of the organization: manufacturer's branches and sales offices, on page 389. Importantly, it says here that it's a wholly owned extension; it's a part of the company. The wholesaler is not a part of the company. You sell your products to a wholesaler, you sell your products to a distributor; they're not part of your company. They buy the product from you and resell it, in that other example to the local drugstores, but they're an intermediary. In this case the company also has sales offices and branches that it owns, with its own people working hard to generate sales for the organization.

Then, last class, remember we talked about the difference between intensive distribution, selective distribution, and exclusive distribution, and we talked about Apple: when they first introduced the iPhone, they had an exclusive distribution agreement with AT&T, and we talked about why that made sense. So this question asks about the type of distribution and market coverage associated with shopping goods. Remember the distinction between shopping goods, convenience goods, unsought goods, and specialty goods? Does that sound familiar? A little bit? Just say yes. With shopping goods, what type of coverage, what distribution strategy, is usually associated? Is it going to be intensive, extensive, selective, exclusive, concentrated? Selective; selective is the best answer. For shopping goods you're not going to be everywhere, in every channel: discounters, department stores, grocery stores, drugstores, specialty stores. You're going to be in a good number of locations, but not everywhere. Who can explain why that makes sense for a shopping good? Go ahead: "A shopping good is like fruits and vegetables?" It could be; it's your example. But if it's fruits and vegetables, would you find those in a department store that sells clothes? There could be a disconnect there.
Absolutely. Last class we said the product needs to be relevant to that channel of distribution. It doesn't mean you can't get an apple in a department store, but that's not the primary focus of that channel. And as Ari suggests, if it were everywhere, it would more likely be a convenience good than a shopping good. For a shopping good, the customer is going to go from store to store, maybe do a bit of research, so it doesn't need to be everywhere, and you wouldn't expect it to be. Customers know this product they could only buy online, or only on TV, or only in department stores; or maybe, as we said, we gave certain retailers, or a channel, an exclusive on the product. If it were a convenience good, then you'd think, yes, maybe you'd have intensive distribution, depending on the actual product; as Jonathan is saying, if it's fruit, then not every channel lends itself to that, but there are other items we could come up with where intensive distribution would make sense. Questions about that?

So who can remind us of the difference, just as a refresher? We talked about convenience products versus shopping, specialty, and unsought. Remember we had that conversation? It was the same day we talked about the different types of products, when we said there are goods and services, and goods can be either consumable or durable, and then we discussed specialty products, convenience products, shopping goods, and unsought goods. So let's take a couple of minutes and recap. Convenience is more like what's in a drugstore: many different types of items, and if you see it, you'll buy it, but you're not going to go out looking for it. So there's definitely a distinction between a convenience good and a shopping good. Shopping suggests that we're going to make the effort; it's not an impulse purchase but something we're going to look for, something we'll drive ten miles to a particular store to find. We might also need, or want, to do research: get input from friends and family, go online, read Consumer Reports. As marketers we need to understand whether our product is a shopping good. Are people going to expect us to have a substantial amount of information about the product, let's say on our website? Is it a high-involvement purchase? A convenience product suggests that more than likely it's low involvement, though it depends on the individual; what I consider a low-involvement purchase, Joseph might consider high involvement. And it's accessible: more than likely it has intensive distribution, so the market coverage is intensive.

And what about specialty and unsought? You see, it's important for us to make these classifications about our goods, because that's going to help define our marketing strategy and tactics. What would be an example of a specialty good? Something you'd wait in line for hours to buy, something you really go out of your way to buy; it could be something like antiques, for example, and that's definitely not intensive distribution. A student suggests a ride at Universal Studios, something you wait in line a long time for. So are we classifying the theme park as a specialty type good or service? What do you think? Someone? The question was whether a theme park counts as a specialty good.
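The classification-to-coverage pairings the class keeps circling back to can be collected into one lookup table. The mapping follows the lecture's discussion; "unsought" has no single coverage level in the lecture, so it is flagged for awareness-building instead:

```python
# Product classification -> usual market-coverage strategy, as
# discussed in the lecture. The unsought entry is a paraphrase: the
# lecture prescribes awareness advertising, not a coverage level.
COVERAGE = {
    "convenience": "intensive",   # accessible everywhere, low involvement
    "shopping": "selective",      # a good number of outlets, not all of them
    "specialty": "exclusive",     # e.g. antiques; the buyer seeks it out
    "unsought": "build awareness first (advertising)",
}

def coverage_for(product_class):
    """Look up the usual coverage strategy for a product class."""
    return COVERAGE[product_class.lower()]
```

A quiz answer like "shopping goods take selective distribution" is just one row of this table.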
A theme park isn't really the strongest example. A good example for you to remember for a specialty good is an antique. If you're an antique collector, antiques are an example of a specialty good, because you don't have intensive distribution and it's something the shopper is going to spend a lot of time locating. That shifts the burden onto us as marketers in a big way. You see how that's different from, certainly a contrast to, a convenience product? And then at the next extreme, the other extreme, would be unsought; those are the two extremes, if you will. Unsought products mean the customer isn't even looking for the product, while on the other side, with specialty products, they're actively and aggressively researching and looking for a particular product, taking the initiative to find the antiques. What about if they're not taking the initiative? What would be an example of an unsought good? Something people don't know about, right, something that may not even be on their radar screen. And that means we need to build awareness: we need to advertise and make them aware that we provide a certain product or service, because presumably we've identified the unmet need, and now we need to make them aware. Good job, you guys rock. Thank you.

The next question is about vertical conflict. Whoa. The best answer is B: conflict that occurs between two different levels in a marketing channel; that's on page 397. Remember, last time we talked about vertical integration and horizontal integration. What's the difference? What's horizontal integration? If a company integrates horizontally, what does that mean? Within their same market? So who can give us an example of what it means if a company horizontally integrates? What about, let's say, a specialty store that sells footwear?
let's say it's Foot Locker and they decide to horizontally integrate what is that what would be evidence that they've horizontally integrated what would they do next and then after that what would they do that would provide provide us with evidence that yes coach they this is an example of horizontal integration what would need to happen if Foot Locker were to integrate horizontally sing a happy song La La Smurf the whole day long so what do you think they do something Smurfs purchase something that helps their business in terms of like manufacturing shoes Prince know answer y he's iring me on this one okay you got answer up oh you were thinking so for example if our Core Business is retail and we horizontally integrate if we sell Footwear sneakers and shoes and we horizontally integrate that means that the next thing that you would expect to see is that we would then open up a specialty store that sells electronics and then after that we would open up a specialty store that sells vitamins so they we're H we're still retailing we're still um operating as a retail organization but at that level in the value chain but we have different types of stores that we're operating they always have to be is it have to be a different store could be part of the same store well if they just opening up more of the same then the question is have they horizontally integrated or are they just growing their business so if they had one store and now they have 10 Stores um that's okay and but is that horizontal integration so that's not horizontal integration no it's horizontal integration I'm saying like they have 100 stores and then in each store they add on a section for vitamins inside their running store inside the shoe store well then you would say that they're not a special store maybe they're a general merchandiser and so they just added a department or a section so isn't that basically what you're saying is that's really how department stores evolve and it just kept that 
another department until you have fifty departments, so you're a department store. But we said, what's the difference between a specialty store and a department store in terms of the retail format? What's unique if we compare the two? A specialty store focuses on one thing and only sells that type of product; a department store sells many different categories. Good, and what else? Closely related to that: if they only sell one thing, then they have much more variety of that product. Absolutely. If you're a specialty store and you just sell sneakers, you would expect them to have a hundred or more different types of sneakers: different colors, different brands. If you're a department store, the value proposition is not that you're going to have this vast variety that Jonathan was talking about. Yes, we have sneakers; if you want, you could buy sneakers here. If you want to buy coats, you could buy coats here. You want to buy shoes, you could buy shoes; shirts, watches, we have those too. But that's different from saying we have a thousand different watches, we're a watch store. So what Ari is suggesting is correct: if it's a specialty store, you focus. You have a limited number of items, maybe just one item, like sneakers, and then you're going to have a very vast assortment: lots of different types of sneakers, different colors, running sneakers and walking sneakers and sneakers for basketball and for golf, and so on. You wouldn't expect that in a department store. The department store channel has a lot of different categories, and a lot of people like that because it's convenient: you go
to one store, and you could get sneakers, but at the same time, while you're there, you could get pants, sweaters, other merchandise. But the largest department stores in the US try to also have a lot of each product: if they have shirts, pants, watches, they're going to have a large variety of each of those inside their department store. Absolutely, and complementary products: shirts and pants and socks and shoes, products that are used together. But in terms of the breadth of the assortment, while they're going to have a variety of pants, just think about how much variety they could really have based on the amount of space they're going to allocate for that particular department. Of course I'm not saying they only have one pair of pants, that you could only get blue pants. Of course you could get pants that are blue, brown, green, some with stripes, some solid. But that's different from a store that focuses just on selling pants; if your whole space is allocated to pants, then you're going to be able to have a lot more variety. For example, there's a place called Tie City that has a good assortment of ties. Macy's actually has a pretty good assortment of ties. Bloomingdale's has a good assortment of ties. Saks has a very extensive assortment of ties. And the question is, how do those compare with a place like Tie City that just sells ties? I'm not saying that Macy's only carries one tie, one color, one size, one designer; they have variety, but not as much. If ties are your business, then instead of 10 different brands you might have 50, and you're going to have a lot more patterns and colors as part of the assortment. Questions? All right, so let's look
at the next question. The next question says a firm can become a channel captain because it is typically the channel member with the ability to influence the behavior of other members. Influence can take four forms: economic, identification with a particular channel member, the legitimate right of one channel member to direct the behavior of other members, and expertise. So expertise is the fourth element; that's on page 398. When we think about marketing, we said that one of the four Ps is promotion. An example of a promotion could be a sweepstakes or a contest. What's the difference between a sweepstakes and a contest? We hear about those quite a bit. Sweepstakes and contests are both types of promotions, but why do we use two different terms? What's unique about sweepstakes versus contests? If a company has a contest, what does that suggest? In a contest there's an expectation that the participant will demonstrate some skill: they need to write a jingle for our advertising campaign, or they need to draw a new logo for our company. Whereas in a sweepstakes you just enter your name and it's random; there's no skill involved. You put your name in the box, and as luck will have it, either they call your name or they don't. But in a contest the participant has to do something that's going to justify them being awarded the prize. Why did they get the all-expenses-paid trip to Israel, or the $10,000, or the lifetime supply of orange juice? Because they demonstrated that they have some skill; they won the contest. Not that their name was drawn from a hat, but that what they did was better than what others were able to do. Now, what's important when we talk about promotions and when we talk about advertising is that we have different mediums in which we can
advertise. We could utilize broadcast; when we talk about broadcast as a medium, we're talking about TV and radio, for example. Print is another medium: it could be newspapers, it could be magazines, and the different mediums have advantages and disadvantages. Outdoor is another medium: billboards, bus shelters, the side of buses, or inside a bus or a train or a train station; those are examples of outdoor. It could be on the internet. But what's important when we talk about promotions and advertising is what's referred to as IMC. In chapter 18 we talk about IMC, which is integrated marketing communications. That means that if we advertise on billboards and on the sides of buses and in magazines and newspapers and on TV and on radio, it all has to be integrated. We need a marketing communications plan that's going to ensure that all of those different aspects are working together, that they're reinforcing each other. Whatever the messaging is in the TV commercials, and whatever brand image is being communicated there, needs to be supported in the print ads and on the internet as well. It's got to have the same look, feel, identity, personality; there's got to be consistency. So what we do is develop a map; we do what's called touch point mapping. Touch point mapping is our attempt to make sure that all the points of contact the customer has with our organization, whether it's in the store, on our website, on the phone, or in ads they see in magazines or TV commercials, are consistent. We map out all the points where the customer has contact with the organization. In some cases maybe we don't have a shop, we don't have any company-owned stores, but if we did, we would need to think about: are we going to
use greeters? What are the greeters going to say to our customers when they come into the store, and how is that going to support who we said we are as a company, as a brand? Is our brand something that's fun and contemporary, or is it professional, is it provocative? All of our marketing communications have to achieve this synergistic effect, because remember, it's likely we're going to have to use multiple mediums in order to maximize the processing of our message, the ability of the receiver to decode our messaging. In advertising we talk about messaging: we have a certain message, and remember we talked about reach and frequency. We need to be able to reach, to expose the target audience to our marketing communications, but importantly, they need to be exposed to them enough times. The frequency has got to be great enough so that they get it, so that they're able to decode the message and remember the features, the benefits, and the brand name. Because we said there's a lot of clutter. What does that mean, clutter? You're watching a TV program; they're talking about Rashi and his commentary, and then they go to a commercial break and we see a commercial. The commercial is 15 seconds; right after that is another commercial, also 15 seconds, and then another commercial, and another, and another. Is that an exaggeration, or is that how programming is created? You have a certain program, and then there are the commercial breaks, and you have five, six, seven, sometimes eight commercials during the break. That's an example of clutter. Six, seven commercials, and then somebody says to you, all right, Alexi, what was the first commercial about? And you say, oh yeah, it's a trick question, it was about orange juice, I know that, I was born
at night but I wasn't born last night; it's got to be about orange juice. And then I say, okay, and what was the brand of orange juice? And he starts to look at Jacob, and he looks over to Joseph, and he's like, well, I know it was about orange juice, but then after that I saw all these other commercials, how am I going to remember? And I say, okay, well, if you can't remember the brand name, tell me, what was the value proposition? What was the promise, the brand promise? Even if you can't remember the brand name, what did they say? So that tells us that there needs to be frequency, a higher level of frequency, so that after seeing it five times or eight times, and very often those are the realistic numbers of exposures, then he'll be able to say, okay, yes, it was about orange juice, and it was Tropicana, and they said that if you drink orange juice you'll be healthy, because orange juice has a high level of not only vitamin C but also vitamin A and vitamin D and calcium, and if you drink this you could stop drinking milk. And I say, wow, this guy, he's got it. Part of the decoding occurred because Alexi saw the TV commercial, he saw the billboard, he saw the ad in a magazine, maybe heard something on the radio. And so we realize that the idea of our target audience being reachable is challenging. It sounds easy to say the target audience has got to be reachable, that would be lovely, but how are we going to reach the target audience? What we know is that very often it can't be through one magazine. In fact, I had suggested that very often we're looking at advertising in 10 or 12 magazines for a given campaign, because it's unlikely that the readership of one magazine is going to be perfectly in line with the demographics of our target audience. You could go online, go to the website of any number of
publications and download their media kit, and they give you information about their readers: the age group of the people who read the magazine and other demographic information. That's going to help us determine in which magazines we're going to advertise, because ideally we want as much of a match as possible between the demographics of our target audience and the demographics of the magazine's readership. But very often that's not the case. Sometimes only 50% of the people who read the magazine are our target audience, so there's a certain amount of waste. And you might say, well, I'm only going to select the magazines where 95% of the readership is my target audience. It usually doesn't happen that way. So we know there's a certain amount of waste, and that's why media buyers and media specialists spend so much time on this; literally, people in media who are responsible just for doing media selection work 90 hours a week. Did we talk about cost per thousand? In order for us to compare the cost of different media, we talk about cost per thousand, CPM. The Roman numeral M is a thousand; that's why we call it CPM. So when you're looking at a variety of different magazines to advertise in, we need to decide which is the most cost-effective, but it's not as obvious as you might think, especially when you have to look at so many different magazines. If we talk about one example, you might say, what do you need a spreadsheet for, what do you need a computer to figure this out? But when you have to do a few dozen of these calculations, it becomes somewhat problematic. So I'll give you an example. We have two choices: we could advertise in magazine A or magazine B. Magazine A is $200,000 for a full-page color ad; magazine B is $400,000 for a full-page color ad in the United States. Which is the better deal? Let me see
who's smart here. Let me tell you again: magazine A is $200,000, not shekels, dollars. Magazine B is $400,000. Which number is greater, 200,000 or 400,000? Good. So in which publication do you think we should advertise? It's not enough information; you have to know how many people read each magazine. Exactly, we don't really know. Yes, 400,000 is more than 200,000; pat yourself on the back if you figured that out. But like Ari is saying, we don't really know which is the better deal, which is the least expensive option, because we don't know the level of circulation for each publication. If I told you that in Better Homes and Gardens, for example, a full-page color ad is $400,000, which actually is a real number, so a full-page color ad in Better Homes and Gardens is $400,000 for one insertion, the readership is approximately 8 million. Now, in print, that's actually a lot. I know we're so accustomed to people throwing around numbers for television advertising, especially the Super Bowl, people talking about 100 million viewers and 200 million viewers. In print, if a magazine has a circulation of 8 million, that's substantial. You don't see magazines with circulations of 100 million; 8 million is a lot, and very few magazines have a circulation of more than 20 million in the United States. The New York Times? We'll come back to newspapers in a second; with newspapers we also need to identify which are national newspapers and which are local, but we'll come back to that. So the circulation for Better Homes and Gardens in the United States is about 8 million, and so for $400,000 you have the opportunity to expose 8 million readers to your ad
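The comparison being set up here is the cost-per-thousand arithmetic, which is simple enough to sketch in Python. The ad prices and circulations are the ones from the lecture; the 50% target-audience share used in the waste adjustment is the hypothetical figure mentioned a moment ago, and the function names are just for illustration:

```python
def cpm(ad_cost, circulation):
    """Cost per thousand readers: (ad cost x 1,000) / circulation."""
    return ad_cost * 1000 / circulation

def effective_cpm(ad_cost, circulation, target_share):
    """Cost per thousand *target* readers, discounting wasted circulation."""
    return ad_cost * 1000 / (circulation * target_share)

# Magazine A: $400,000 full-page color ad, 8 million readers
# Magazine B: $200,000 full-page color ad, 4 million readers
print(cpm(400_000, 8_000_000))   # 50.0 -- same rate per thousand,
print(cpm(200_000, 4_000_000))   # 50.0 -- despite different prices

# If only half of A's readers are in our target market, every
# thousand *target* readers really costs twice as much.
print(effective_cpm(400_000, 8_000_000, 0.5))  # 100.0
```

As a sanity check, realistic magazine CPMs run very roughly $20 to $120, so a result in the thousands is a sign the arithmetic went wrong.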
. Now you might say, $400,000? How come I'm always reading in Ad Week and Advertising Age that when companies introduce a new product and kick off the launch, they're spending $50 million on advertising? Coach, you must have the numbers wrong; I thought in advertising you spend millions. That's one ad in one magazine, one time. If you were going to run a full-page ad in Better Homes and Gardens every month, that's nearly $5 million right there, and that's just one magazine. And remember I told you that very often we advertise in 8 to 12 magazines as part of a campaign. Not everybody runs full-page ads; some people run half-page ads, some run a quarter page. Now, in our scenario, what do you think about the situation where for $200,000 you could reach 4 million people? Let's figure it out. We're going to calculate the cost per thousand for A and B. We have the cost of the ad, times a thousand, divided by the circulation. So for A it's $400,000 times 1,000 divided by 8 million, and for B the cost is $200,000 times 1,000 divided by 4 million. So what's the cost per thousand? $50. Yes, $50 to reach a thousand people; the cost per thousand for ad B is also $50. Now, in the United States that's a realistic number; I wanted to give you real numbers. The cost per thousand varies depending on the publication, and that's why we need to do this calculation: it could be 50, it could be 35. Billboards, for example, are much less expensive; the CPM might be, say, $18, so from about $18 to even $25 is realistic for outdoor in the United States. So if you do this calculation on an exam and you get 2,347.18, something's not
right. Well, if you see that as a choice on the exam, you should think, that definitely can't be it, because Coach said that the range of cost per thousand is approximately, let's say, $20 to about $120. Those are real numbers; it could be a little more or a little less, but it's not 2,138 and it's not 5,247. So that's good perspective for you. Now, what if we change these numbers? What if the circulation was 3 million? In the first case, what we saw is that the cost per thousand is the same. So how does this help us with our decision-making process? In that case we found out that the cost per thousand is the same, and then you might say, oh, so why did we do this? The cost per thousand is the same, so it doesn't matter. Wait a minute, what do you mean it doesn't matter? What did we find out? If the cost to reach a thousand people is the same, then which choice is better? Doesn't it just depend on how much money you're going to spend? We said that the reason for calculating the cost per thousand is so that we can compare apples to apples. You said, well, wait a minute, Coach, you're tricky, you didn't tell us what the circulation is. All right, so now you know the circulation. Now what? Don't you want to go with A because of the larger circulation, even though the CPM is the same? Is that your final answer? I guess, yeah. And what do you think? We have to determine how many of the thousand are our target market. Yes, that's a separate calculation; we need to determine the amount of coverage, how much waste there is. But let's just work with what we have here for now. So what do you guys think? The cost per thousand is $50 for both; which should we choose? I
would say A. I'll go with B, then. What do you guys think? Spend less money? It depends; you're not getting to as many people. It depends how much you want to spend, how many people you want to reach. Don't you always want to reach as many people as possible and spend as little as possible? This says that at $50 per thousand, for $400,000 we could reach 8 million people, or for $200,000 we could reach 4 million people. The same rate, so it's whatever you prefer. Like Ezra is saying, if we knew the readership, then maybe with one of them we reach more of our target audience; but the cost per thousand is the same. So the sticker price is misleading; that's one of the things I want to show you. That's $200,000 and this is $400,000, and you might say, no, this is the better deal, it's cheaper. But the calculation says, no, don't be tricked: this is not really cheaper; the cost per thousand is the same. In this case we'll have to go with what Ezra suggested and pick the publication that is a better match with our target audience. But then we have our friend here who's very clever: 66.67, thank you. This says that for $200,000 the price is half as much, but we'll only reach 3 million people. So instead of reaching 8 million we reach 3 million, and then the cost per thousand is $66.67. Then you go with the one that has the lower cost per thousand. You changed the problem, right? So we looked at two scenarios: one where the cost per thousand is the same and one where it's different. If the cost per thousand is the same, then even though the price of the ad here is less, it doesn't matter; we're still being
charged the same amount per thousand. So you might be saying, no, it's 400,000. Yes, 400,000 is more than 200,000; I'm happy that you guys know that. But the question is, how much is it to reach a thousand people? In the prior example we said it's the same, $50 to reach a thousand either way; but if the circulation were only 3 million, then the cost per thousand would be higher. All right, before we go, something else. I want you to think about strategy. When we think about our advertising, we need to decide on a media strategy; we need to decide whether our strategy is going to be continuous, pulsing, or flighting. When we're advertising and we're using a continuous strategy, that means we're advertising all the time, all year round. For some companies that makes sense, because the product they sell is purchased all year round, and some companies can afford to advertise all year round. But that's different from pulsing and flighting. Pulsing and flighting are different; who's going to tell us? You're going to try this again? I would say pulsing is like, if you own a winter clothing company, then in the winter you're going to advertise for that, and flighting is more up and down, like a flight: at one point in the year you're pounding the advertisements, you're always advertising at that point, and flighting is in between pulsing and continuous, intermediary between the two, more constant
than pulsing but less than continuous? All right. Continuous, we know what that is: we're advertising all the time. Then which strategy is it when we advertise all the time but sometimes we advertise more than others? Pulsing, right. Importantly, when we talk about pulsing as a strategy, that means we advertise all the time, it's still continuous, but there are certain times in the year when we're going to increase the amount we spend on advertising. That's pulsing. Flighting is: January we advertise, February we don't advertise, March we advertise, April we don't advertise, and so on, on and off. What's the challenge with that? Yeah, people are going to forget. You advertise in January, and then what's happening in February when we're not advertising? People forget. Yep, absolutely. So have a good night, do good things. |
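The three scheduling strategies just described can be pictured as a year of relative monthly spend. The numbers below are invented purely to show the shape of each pattern, not real budgets:

```python
# Relative ad spend for Jan..Dec (illustrative values only)
continuous = [10] * 12                  # steady, all year round

pulsing = [10, 10, 25, 10, 10, 10,      # always on, but with heavier
           10, 10, 10, 25, 25, 10]      # spend in seasonal bursts

flighting = [20, 0, 20, 0, 20, 0,       # strictly on/off: the dark
             20, 0, 20, 0, 20, 0]       # months are when the audience
                                        # starts to forget the message

# Continuous and pulsing never go completely dark; flighting does.
assert min(continuous) > 0 and min(pulsing) > 0
assert 0 in flighting
```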
Marketing_Basics_Prof_Myles_Bassell | Marketing_2.txt | Primary research: that's research that we as an organization initiated. That's known as primary research. Secondary research is research that was conducted by somebody else, not somebody who works for our company or a consulting firm that we hired, but a third party that already conducted the research. That's known as secondary research, and it's usually relatively inexpensive. We should use secondary research to prepare for our primary qualitative research. For example, you could get state-of-the-industry reports, say for the beverage category, on the website of one of the beverage associations that supports that industry. Executives are members of industry associations; one of them, for example, is the Beverage Association, and there are also trade publications that executives in those categories and industries read. You can usually find reports on different industry trade association websites and get them for free. You could also download different types of research reports, both focus groups and questionnaires, from market research firms; those can be $500 to $5,000. Now you might think $5,000 is a lot of money, and it is, but compared to spending $50,000 on our own focus groups and $150,000 to do our own questionnaire, $5,000 is not a lot. So the first thing we should do is look at the secondary research that's available, invest in secondary research, and use that to prepare for our primary qualitative research, like focus groups. So when we say
identify an unmet need, you see there's a lot to that. I'm telling you some of the key activities. One of them is identify an unmet need, and the way we're going to do that is through marketing research; I just gave you a bit of an overview of what that is. So let me tell you all five of them now. Five key marketing activities: the first one is identify an unmet need. The second one is develop a product. The third, a separate point, is determine a price. The fourth is to gain distribution; we'll talk about what that means in a second. The fifth is to build awareness. So the five activities are: identify an unmet need, develop a product, determine the price, gain distribution, and build awareness. Another way that we could summarize that is as the marketing mix. What is the marketing mix? The marketing mix consists of the four Ps: product, price, place, and promotion. Questions? Those are the controllable factors. In marketing, as marketing executives, we face controllable factors and uncontrollable factors. The controllable factors are the four Ps, which means we determine what product we're going to sell, we determine what price we're going to sell the product for, we determine the place, in other words the channels of distribution, the retailers where we want to sell the product, whether it's Walmart or Bed Bath and Beyond or Piggly Wiggly, Publix, Walgreens, Duane Reade, Rite Aid. We determine the place where we want to have distribution, and we control the promotion,
which consists of trade promotions, consumer promotions, and also advertising, but advertising doesn't start with a P, so the model is known as the four Ps: product, price, place, and promotion. What I'm trying to do is give you an overview of marketing, and we've come at it from a number of different angles. We said marketing is about creating, communicating, delivering, and exchanging value. We said marketing is about identifying an unmet need, developing a product, determining a price, gaining distribution, and building awareness. We said marketing is about the four Ps: product, price, place, and promotion. So when you look at the book and see 22 chapters, you might think, whoa, what is this all about? I just told you: that's basically what the entire course is about, those concepts, but we're going to look at them in depth. So the marketing mix consists of the four Ps. Questions? Which brings us to an important point, marketing metrics; we're going to come back to that in a second. But before we do, let's keep going. Now, in chapter two we talked about different types of plans. There are three different levels of strategy, three different plans, so an organization is going to have three different plans in place at the same time. The first one is the corporate plan; that's the corporate strategy. The second is a business plan created by the SBUs; SBU stands for strategic business unit. This is in chapter two. And the third plan is
the functional plan. An example of a functional plan would be the marketing plan. So it's three plans: the corporate plan, the business plan, and a functional plan. Who can tell me what the three plans are? Raise your hand. Yes, Madeline said it: three types of plans, the corporate plan, the business plan, and a functional plan, and an example of the functional plan would be a marketing plan. That's a function, right? The marketing department is going to have a marketing plan. Now, what's in the corporate plan? One of the things discussed in the corporate plan is the mission, the vision, and the values of the organization; those are three important things discussed in the corporate plan. The senior management team develops a corporate plan that guides the entire organization; the corporate plan sets the direction for the entire organization. It includes, for example, the mission of the organization. The mission defines the business: what is the business of our organization? So you're saying our mission might be to maximize profit? Well, that's one of the causes of the financial crisis: executives wanted to maximize profit, maximize shareholder wealth. That has wreaked havoc on the economy worldwide, because of the obsession executives had with maximizing shareholder wealth at the expense of all other things, like doing things ethically, being socially responsible, being responsible to the stakeholders, not just the shareholders. The stakeholders include the stockholders, but also the community and the customers and the employees. Look at what Enron did. Besides the fact that they were involved in fraud, they manipulated their accounting procedures, they falsified their accounting records, but
one of the really absurd, bizarre, and obscene things they did was scam their own employees. When the whole thing unraveled and the company actually went bust, the people who lost nearly everything were in large part the employees who had invested in the company, not just outside random stockholders. That's bad enough, and large companies have millions of people who invest in them, but Enron really went the extra mile to con their own employees. People put their life savings into the company, but it was all smoke and mirrors. They exaggerated their earnings and the whole thing unraveled. Why? Because they were focused on maximizing the profit of the company, apparently at any cost. Yes, go ahead. They provided energy; they're an energy company. So, an example of a mission. Go ahead, Chanel. Yes, to provide healthcare to the indigent and needy in the community. So now it's saying to provide healthcare services to those who are indigent, but to do it with dignity. Yes, that's a good example of a mission: we're in the healthcare business, and our focus is on helping those who are economically disenfranchised, another way to describe the indigent, people from low-income households, and to do that with dignity. What about this one: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before. What do you think about that as a mission? You like that one? You don't know what Star Trek is? You don't know what wrestling is? What about tequila? All right. So that's a good example of a mission. Or: to provide high-quality educational devices for K through 12 in North America. To provide high-quality educational devices for whom? Who's our
target market? K through 12 students, and we're in North America. So our product is educational devices, our target market is K through 12 students, and we're in North America. That defines our business, doesn't it? Now, the vision. More and more those terms are becoming synonymous, but they're really not. I know the textbook suggests that they are synonymous, but it's not true. I've been doing this a long time. The vision is forward-looking. The mission defines the business we're in now; the vision suggests where we want our business to be in the future. So watch how we can modify this statement to make it a vision: to sell educational devices to all grade levels, including college, in both North and South America by 2020. That's our vision, to sell educational devices to all grade levels. Before, we just said K through 12; now it's all grade levels, from kindergarten through seniors in college, and not just in North America. Our goal is to expand to sell these educational devices in both North America and South America. That's our vision. That's not where we are right now; we're not selling in South America, but where we want to be as a business in the future includes South America, and it includes our target market expanding to cover not just K through 12 but also college. Questions? Yes, what is your name? Jamal. Okay, yes, absolutely, we could specify that as well. Jamal is saying we might want to have our target market limited to just public schools, or private schools. Definitely, and maybe our vision is to sell to both public and private schools. You guys see what Jamal is saying? Yep, that's a very good point. Anything else? Right, the vision is where we want to
be in the future; the mission is what business we are in now, how we define our business now. There's a lot of information we could use to support that, but the mission statement should be really short, not more than two or three sentences, and it's something that everybody in the organization should be able to internalize. Everybody needs to know what the mission is: not just the CEO of the company, not just the vice presidents, but the marketing managers, the supervisors, the staff in the mailroom, the janitors. Everybody should know, this is the business we're in, this is what we do. What do we do? We sell high-quality educational devices to K through 12 students in North America. Everybody should know that. That's basically one sentence, but we could add to it. Maybe we want to say that we're doing it in a way that's socially responsible and sensitive to the needs of our stakeholders. Stakeholders, remember, includes the stockholders, but it also includes the customers, the employees, and the community. Now, in terms of marketing metrics, how are we going to measure performance? In chapter two we talked about some marketing metrics. We talked about profit: one of the ways we know we're doing a good job as marketing executives is profit, so we measure the level of profit for the organization. That's not our only measure, but certainly we want to be profitable. Even a not-for-profit organization can be profitable. A company that is not-for-profit, that doesn't mean they don't make any money or that they're not allowed to make money. A not-for-profit designation is granted by the Internal Revenue Service, and it just means their income is taxed differently. It doesn't mean you can't make any money. Not-for-profit companies make money,
and some of them make a lot of money, and some of their executives get paid a lot of money. So it's okay to measure the success of the company based on how much profit it generates, and hopefully some of that profit goes to the community and allows the company to do things that are environmentally friendly. Those are important objectives. Should the company be a good corporate citizen? So we talk about values. What would be an example of a value? An example of a value is that the company would be a good corporate citizen, that it would be honest, that it would operate in a way that's transparent. Are those meaningful values? So, greed is not good. Or what do you think, is greed good? Think about that. That's one of the questions on the survey, whether or not greed is good. Greed could be a motivator for some, but we need to think about the implications of people being driven by greed. So marketing metrics include profit, and they include the level of sales. Now, sales can be dollar sales, how many dollars of sales we generate, and also unit sales, how many units we sold. Did we sell 10,000 units, did we sell 10 million units? We need to forecast how many units we expect to sell in the future, because we need to cover both our fixed costs and our variable costs. What are our fixed costs? Sometimes I ask that on the exam, and students write "fixed costs are costs that are fixed." I'm not kidding; you can't make that up. Fixed costs are fixed, of course they're fixed. Yes, that's true, right, Madeline? But really, to expand on that, we need to understand that fixed costs are costs that don't vary with the production volume. The reason we care about that is because it means that whether we sell one unit or one million units, our fixed costs are the same. That's a problem, especially if we only
sell one unit. That means if we only sell one unit and our fixed cost is still 10 billion dollars, we can't be profitable. So we need to sell a lot of units so that we can spread those fixed costs across a lot of units. That's known as fixed cost absorption; it's an accounting term. It means each unit absorbs a part of the fixed cost, so the more units we sell, the smaller the amount each unit has to absorb. Yes, go ahead, tell us your name. Fixed costs are costs that don't vary with the production volume, so the fixed costs are going to be the same. Let's say the fixed cost is 10 billion dollars; the fixed cost is 10 billion dollars whether we sell one unit or 10 million units. The variable costs are costs that vary with the production volume. So let's say the variable cost is $1: say we make a product that's made of plastic, and to make that product we need three ounces of plastic, and three ounces of plastic cost $1. That's our variable cost. It's not the only type of variable cost, but certainly the raw material is a type of variable cost. Why? Because if the variable cost is $1 per unit and we make one unit, our variable cost is how much? $1. If we make 10 units, our variable costs are $10. Wow, you guys are amazing, you are the best students ever, no joke, for reals. Now what about if we make a hundred units? What's the variable cost? A hundred dollars. What about a million units? A million dollars. So if the variable cost is $1 and we make a million units, that means our total variable cost went from $1 to a million dollars. Our total variable cost is now a million dollars. If we make 10 million units, our total variable cost goes from a million dollars to ten million dollars, right? Our
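The arithmetic the lecture walks through (variable cost scaling with volume, and fixed cost absorption shrinking per unit as volume grows) can be sketched in a few lines. The dollar figures are the lecture's own example numbers ($1 variable cost per unit, a $10 billion fixed cost); the function names are just for illustration.

```python
def total_variable_cost(variable_cost_per_unit, units):
    # Variable costs scale linearly with production volume.
    return variable_cost_per_unit * units

def fixed_cost_per_unit(fixed_cost, units):
    # Fixed cost absorption: each unit absorbs a share of the fixed cost,
    # so the per-unit share shrinks as volume grows.
    return fixed_cost / units

# Lecture example: $1 per unit of plastic, $10 billion fixed cost.
print(total_variable_cost(1, 1_000_000))                 # 1000000
print(fixed_cost_per_unit(10_000_000_000, 10_000_000))   # 1000.0
# At one unit, that single unit would have to absorb the entire $10 billion.
```

This is why the lecturer says selling one unit against a $10 billion fixed cost can never be profitable: the variable cost line stays proportional, but the fixed cost line only becomes bearable when spread across many units.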
total variable cost is now 10 million dollars. How much is our fixed cost? Still 10 billion dollars. Fixed costs are costs that are fixed, you see? The fixed costs don't vary with the production quantity, so whether we make 1 million or 10 million or a hundred million units, or even just one unit, the fixed costs are still the same. In our example we said the fixed cost is 10 billion dollars, but the variable cost, which is $1 per unit, is going to change. We said if we make one unit, our total variable cost is $1; 10 units, $10; a hundred units, $100; a million units, a million dollars; 10 million units, 10 million dollars. So you see why we call it a variable cost: it varies with the production volume. The fixed cost doesn't. The fixed cost is what's really scary, because in many situations, for many manufacturing organizations, it has the largest impact on the ability to be profitable. With, say, a ten billion dollar fixed cost, how could we be profitable if we only make one unit, or 10 units, or a hundred units, if it's a consumer product selling for, say, $100? Now, if you have a ten billion dollar fixed cost and you're selling equipment to industry, and each one you sell is 50 million dollars, okay, that's something. Then you might be able to cover your fixed costs without producing ten million of them. Questions? No questions? All right, let's keep going. So, marketing metrics: profit, sales. We said unit sales, and we just talked about why the total units are important: taking into account the variable cost and the fixed cost will allow us to determine our break-even volume, how many units we need to make and sell to be profitable. Market share. Market share is a measure of our success; it's a marketing metric. So if I told you that our shampoo company sold 50,000 cases of shampoo last year, should we be cheering,
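The lecture names the break-even volume but doesn't spell out the formula; the standard version is fixed cost divided by the per-unit contribution (price minus variable cost per unit). A small sketch, using the lecture's illustrative figures (a $100 consumer product versus $50 million industrial equipment, each against a $10 billion fixed cost and $1 variable cost):

```python
def break_even_volume(fixed_cost, price, variable_cost_per_unit):
    # Units needed so that total contribution (price minus variable cost
    # per unit) covers the fixed cost.
    contribution_per_unit = price - variable_cost_per_unit
    return fixed_cost / contribution_per_unit

# $100 consumer product: roughly 101 million units to break even.
print(break_even_volume(10_000_000_000, 100, 1))
# $50 million industrial machine: only about 200 units.
print(break_even_volume(10_000_000_000, 50_000_000, 1))
```

This mirrors the lecturer's point: with a huge fixed cost, a low-priced consumer product needs enormous volume, while high-ticket industrial equipment can cover the same fixed cost with a few hundred sales.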
should we be like, whoo, partying, doing shots of tequila? Do you think 50,000 cases of shampoo is good? What do you think? Yes, go ahead. It depends on our fixed cost. Right, so we don't know; we don't know whether we're going to be able to cover our fixed costs. What else don't we know? We know that we sold 50,000 cases. Yes, Alan. So, what about the year before? Did we sell 80,000 cases last year and 50,000 this year, or was it 40,000 last year, so now we're selling 10,000 cases more? We don't know that. That's a good point. Yes, good. How many cases did our competition sell? Whoa, now this is interesting. Our success is not absolute; it's not just that we sold 50,000 cases. What is your name? Joey. What Joey is saying is, how many cases did our competition sell? Now that starts to get interesting. That's what market share is. Let's say in the shampoo category the total number of cases sold last year was 500,000, 500,000 cases of shampoo. So we own the shampoo company, and I'm the president, of course, that makes sense, right, and the spokesperson also. And the industry means all companies combined in the United States, in all channels of distribution: not just convenience stores, not just drugstores, not just grocery stores, and not just mass merchants. What's an example of a mass merchant? Walmart is an example of a mass merchant, and so is Target. Sam's Club is an example of a wholesale club, BJ's is an example of a wholesale club, Costco is an example of a wholesale club. Grocery stores include what? What are some examples of grocery stores? ShopRite, Key Food, Foodtown, Met Foods, Waldbaum's, Publix, Albertsons, Food Emporium, Whole Foods, Trader Joe's, although we might want to make a clarification about Whole Foods and Trader Joe's maybe being specialty stores, focusing on organic foods. Other types of specialty stores could be Foot Locker; it would
be an example of a specialty store. Best Buy would be an example of a specialty store. Finish Line, yep, absolutely. So we need to understand, across all channels of distribution, how many cases were sold. We said it's 500,000. Our company sold 50,000 cases, which means our share of the market is, anybody know? 10%. Of 500,000 cases, we sold 50,000 cases; that's 10%. 10% of the cases sold were sold by our company. That's our market share; our unit market share of shampoo is 10%. So now should we party? Well, that's a nice business to have. 10% of the market is a nice business to have, and we're probably very profitable, making a lot of money, driving the Lamborghini and the Maserati and the Tesla. Market share is an important metric because it tells us how we're doing relative to the competition. We're not evaluating our performance in a vacuum, just "oh, we sold 50,000 units." We need a sense of perspective: 50,000 units out of 500,000 units, okay, that gives us a sense of perspective. So market share is an important marketing metric, and of course over time what we're tracking is whether it went from 10% to 11% to 15% to 18%. In and of itself we don't need to get too hung up on that 10%, but certainly one of our objectives, one of our goals, is that it would go from 10% over time to 15% or maybe 20%. Does that seem like a reasonable goal? Our objective is to increase our market share. Now, another marketing metric is the level of customer satisfaction. How are we going to determine that? Yes, surveys, marketing research. We're going to do some marketing research this semester. What is the first question going to be about? How many think ethics? Raise your hand. Yeah, ethics, right. So the first question is going to be about ethics. We could measure the level of customer satisfaction by doing questionnaires, asking our customers to complete a questionnaire about
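The market share calculation above is simple enough to state directly: unit market share is the company's units divided by total category units across all channels. A minimal sketch using the lecture's shampoo numbers:

```python
def unit_market_share(company_units, category_units):
    # Company's share of all units sold in the category,
    # across all channels of distribution.
    return company_units / category_units

# Lecture example: 50,000 of 500,000 shampoo cases.
share = unit_market_share(50_000, 500_000)
print(f"{share:.0%}")  # 10%
```

Tracking this ratio over time, rather than the raw 50,000 cases, is what gives the "sense of perspective" the lecturer describes: the same unit sales mean very different things in a 500,000-case category versus a 5-million-case category.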
their level of satisfaction. That's an important marketing metric, measuring the level of customer satisfaction. We can also use the level of social responsibility: for example, how much money do we donate to charity? That's an important marketing metric too. So, examples of marketing metrics: profit, sales, market share, customer satisfaction, and social responsibility. All of those are types of marketing metrics, ways that we can measure our performance. Yes? Social responsibility has to do with being a good corporate citizen, demonstrating responsibility toward the communities in which we operate: for example, giving money to charities in the community, giving money to local hospitals, giving money to build parks in the community, giving money to plant trees, operating in a way that minimizes pollution in the communities in which we operate. Those are some examples of social responsibility. It says that we have an obligation as an organization to society, to behave in a way that's ethical and to give back, to be philanthropists. So think about how much profit, how many billions of dollars in profit, companies make. Is it reasonable that they use some of that money to minimize the amount of toxic pollution that comes from their manufacturing facilities, even though it might cost them twenty-five million dollars a year to do that? Is it reasonable to expect that they spend that twenty-five million dollars so that people in the community don't develop lung cancer? What do you think? But what if that twenty-five million dollars would bankrupt the company? Should they still do it anyway? They spend the twenty-five million dollars so that the people in the community don't get lung cancer, but the company is going to be less profitable and maybe actually go bankrupt. Think about that. That's an ethical
dilemma, isn't it? That's one of the things we're trying to achieve this semester, to enhance your awareness of ethical issues, because maybe your initial reaction is, well, the company can't go bankrupt, that's unacceptable. But then what about all those people who are going to get lung cancer? Is that okay? So what do we do, just pay off their families because that costs less than twenty-five million dollars? When people get cancer, we just give them each a million dollars, and that's way less than twenty-five million dollars a year to minimize the amount of pollution? So we do that cost-benefit analysis. Do you think companies have ever done that? Yes? Well, if they move the people. That's not such a bad idea. The problem with fracking, which is a way to extract natural gas from below the surface, is that they use chemicals to gain access to the gas, and those chemicals are very toxic. So what happens is they go into a community, they try to extract the natural gas, they use all these chemicals, and then people develop cancer. But just think about all that natural gas you're getting. Teresa is saying that sometimes, maybe not intentionally, they try to get people to move from those areas: we'll buy your house, move from here, we're going to do fracking, which is the term used for extracting the gas. Okay, let's talk about the BCG model. As a business, we need to look at our portfolio. BCG stands for Boston Consulting Group; the Boston Consulting Group created this model to do portfolio analysis. Say we operate a business that sells multiple products. We're a global electronics company: we sell TVs, we sell tablets, we sell phones, we sell printers, we sell gaming consoles, we sell cameras, we sell video cameras. Those are all electronics, right, the
products that our electronics company sells in this scenario. So our electronics company sells these products. Who can tell us what the products are that our company sells? One person raised their hand, two people, good. Electronics: tablets, TVs, cell phones, printers, gaming consoles, video cameras, digital cameras. Those are all electronic products sold by our company. And, importantly, each of those is a strategic business unit. In chapter two we talked about SBUs. Each of those is a strategic business unit: we have a strategic business unit for cameras, a strategic business unit for tablets, a strategic business unit for phones, a strategic business unit for gaming consoles, because the target market and the target audience are different, and the competitive set very often is going to be different as well. So the company is organized not as a functional organization but using a product structure, based on the product type. The question is, that's our portfolio, our portfolio consists of all those products, how are those SBUs performing? We have to be able to rate them; we have to find a way to rate them, and that's what the Boston Consulting Group model, the BCG model, does: it is a way to analyze a portfolio. There are four classifications: stars, question marks, cash cows, and dogs. All right, let's see. Stars, question marks, and now I'm going to draw the dog, ready? That's the dog. That's a different model, the dinosaur model. This one is the dog, and the cash cow. You're not going to believe me; it looks like a dinosaur, but it's really a dog. All right, so what does the BCG model tell us? It tells us that for the stars, the growth rate is high and the market share is high. On this axis we're looking at the growth rate, on this axis the market share. So this matrix is a way to
classify a portfolio, sorting it into four quadrants based on the growth rate and the market share. That's a dog, you guys, okay? Just so there's no confusion about that; don't let your imagination run wild. So in the upper left-hand corner we have the stars. What we're going to put here is a list of those SBUs we consider to be stars. How do we determine if they're stars? By how many employees they have? No. By how much their sales are? No, not in this model. Those might be useful measures, and we might want to take them into account, but in this model we look at the growth rate. Stars are those SBUs that have a high growth rate, meaning the category is growing. The growth rate here is the growth rate of the category: is it a new category that's growing very rapidly, or is the category very large but not growing? For example, the beverage industry in the United States at retail is over 200 billion dollars per year. That's a very large category, 200 billion dollars. The category for pots and pans in the United States is only about three billion dollars. It's a nice category, and it's a nice business to have if you sell pots and pans in the United States, but it's not two hundred billion dollars at retail. The beverage category includes water, soda, milk, orange juice, and it includes alcohol, right, and alcohol includes rum, beer, wine, and spirits; I don't need to tell you guys this. Now, that category is very large, but it's not really growing; it would be considered a mature category. We're going to talk more in a different session about the product life cycle, which includes several stages: introduction, growth, maturity, decline, obsolescence, and sometimes even revitalization. But what I'm telling you today is that the
beverage category, for all intents and purposes, even though it's growing 2 to 3 percent per year, is basically a mature category. So in this model, when we talk about the market growth rate, we would say the growth rate is low; it wouldn't be a star if that were our business, if we were looking at the beverage category. In order to be a star, the growth rate for the market has to be high and our market share also has to be high. So according to this model, whichever SBU we classify as a star operates in a market that is growing, and we have a high market share. It might be, say, that for our electronics company the cell phone business is a star, because the market is growing, the rate of adoption in the United States continues to rise, and we have a large market share. That we would classify as a star. It could be Apple, right? If we were doing a portfolio analysis for Apple, we would say that the growth rate of wireless communication, of cell phones, is high; we might even want to be more specific and say smartphones. And their market share is high, so for Apple, their iPhone would be a star. Now, what about the cash cow? The cash cow would be a category that has low growth, but where we have a high market share. In the beverage industry, we said the category is mature; it's growing only 2 to 3 percent per year, which is basically mature, because in other categories, like electronics and other technology-related categories, the category is growing 20, 30, 40, 50 percent per year. That's what we mean by growth. 2 percent, yes, it's growing, but that's not what we consider growth when we're looking at this model; it's not significant growth. That being said, though, we might have a large share of the market even though the growth rate in the market is low. So
the market is not growing, but we have a significant share. There's nothing wrong with that; in fact, that's a cash cow for the company, because the market is mature, it's not growing, but it's not declining either, and we have a large market share. That means we're not selling 50,000 cases, we're selling 250,000 cases, and 250,000 cases of shampoo means we have 50% of a market that's not growing. So what if it's not growing? Growth is not the only criterion. It's not growing, but we're one of the market share leaders, with 50% of the market, selling 250,000 cases of shampoo per year. That's a cash cow. Why? Money. Now, as marketers, we're not going to invest all our money into the cash cows, because the category is not growing. Unless we can figure out a way to reinvent the category so that it does grow, basically we're expecting to sell 250,000 cases of shampoo every year. Now, we'd like to think that with advertising we could sell a little bit more, with promotions we could sell more; there are other things we could do to try to increase the number of units we sell, but the category is not growing. The reason we call it a cash cow is that we're going to milk that cow: we're going to use the profits generated by the cash cow to grow the stars, to invest in the stars, those in high-growth categories. If the category is growing 25, 35, 45 percent per year, don't we want to continue to sell more, not just 1 percent or 5 percent more, but 20 percent more per year, 30 percent more, 50 percent more? Well, that means we're going to have to invest in marketing. We're going to have to spend money on TV commercials and print ads and sales promotions and trade promotions and sweepstakes and contests, the entire integrated marketing communication plan, IMC. Now, there's the question mark; sometimes we refer to that as the problem child. The problem child is
one where we're operating in a high-growth category and our market share is low. The category is growing 20, 30, 40 percent per year, but we have a small percentage of the market. That's a question mark. Do you see why? We don't know where that's going. The market is growing, but we don't have a lot of share. So we need to make a strategic decision: what are we going to do here? Are we going to start investing? We have a foothold in a category that's growing, not a very good foothold, but we are operating in a growing category with a very small percentage of the market. So we need to decide whether we're going to invest in the problem child, in the question mark. We might have some strategic business units in our portfolio that are question marks, also known, like I said, as problem children. Now, the dog. In this model, remember, we're using this matrix to classify our SBUs, so each of our SBUs is going to be in one of these four categories: it's either going to be a star, a question mark, a cash cow, or a dog, not to be confused with a dinosaur or a crocodile. Those SBUs that we would classify as a dog have low market share and low growth: the category is not growing significantly, if at all, and our market share is very low, maybe 1% or 2% share in a category that isn't growing. So we need to decide whether or not to maybe spin off that business, because we have a limited amount of market share and the category isn't growing. So this is a tool that we use to do portfolio analysis. What's in our portfolio? Those SBUs: the SBU that sells tablets, another SBU that sells phones, another that sells cameras, another that sells gaming consoles. We need to classify each of those SBUs so that we can allocate our resources accordingly. How do we know who gets a hundred
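The four-quadrant logic above can be sketched as a small classifier. The lecture only says "high" versus "low" for growth and share, so the numeric cutoffs below are illustrative assumptions, not part of the BCG model itself; in practice each firm picks its own thresholds.

```python
def bcg_quadrant(market_growth_rate, market_share,
                 growth_cutoff=0.10, share_cutoff=0.25):
    # Classify one SBU into the BCG matrix.
    # Cutoffs are assumed for illustration (10% growth, 25% share).
    high_growth = market_growth_rate >= growth_cutoff
    high_share = market_share >= share_cutoff
    if high_growth and high_share:
        return "star"
    if high_growth:
        return "question mark"
    if high_share:
        return "cash cow"
    return "dog"

# Lecture examples: a smartphone SBU in a fast-growing market with big share,
# and the shampoo SBU with 50% of a market growing only 2% a year.
print(bcg_quadrant(0.30, 0.40))  # star
print(bcg_quadrant(0.02, 0.50))  # cash cow
print(bcg_quadrant(0.30, 0.02))  # question mark
print(bcg_quadrant(0.02, 0.02))  # dog
```

The point of the classification, as the lecture goes on to say, is resource allocation: milk the cash cows to fund the stars, decide case by case on the question marks, and consider spinning off the dogs.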
million dollar advertising budget? Do we just hand it out eeny, meeny, miny, moe? If we have ten SBUs and a hundred million dollar budget, do we give them each ten million dollars, or should we give one SBU 50 million dollars and the other nine about 5 million each? How do we decide that? We need to do a portfolio analysis. We need to analyze our portfolio and determine which SBUs are operating in a category, in a market, that's growing, and which ones have a large market share in the category in which they operate. Questions about that? This is known as portfolio analysis. |
Marketing_Basics_Prof_Myles_Bassell | Marketing_Basics_12_of_20_Professor_Myles_Bassell.txt | So let's start off talking about segmentation. What is segmentation? When a company divides their products into smaller groups? Well, let's make a clarification, because it sounds like you're talking a little bit about the portfolio of the company. Remember, in chapter 2 we talked about doing portfolio analysis, and you're right, as part of portfolio analysis we want to look at the different product lines and evaluate them. But segmentation, go ahead, Molus, tell us, what is segmentation? All right, everybody pay attention. Here we go: when you aggregate groups that have similar needs, and they will respond similarly. Absolutely. What we're trying to do when we segment the market is aggregate consumers who have similar needs and wants. That's one of the criteria for segmentation. Molus is telling us that one of the criteria is that the individuals have similar needs and wants, that we're going to aggregate consumers into groups that have similar needs and wants. Another way to look at it is to say that we're going to divide the market into submarkets. What else? Where's Susanna? All right, don't get nervous, I'm going to ask you a question, Susanna. Now that we've established what segmentation is, besides similar needs and wants, tell us, what is another criterion for segmentation? Because when we segment the market, it has to be done in a way that makes sense for our business, in our category. We're going to talk about different segmentation approaches, and you certainly could segment the market a given way, but the question is, is that going to make sense in the beverage category, is that going to make sense in the shampoo category? Right, and Crystal is sitting here thinking, now what does this man know about
shampoo right but still you'd be surprised so we're going to talk about the different ways to segment the market but we still have to say to ourselves does this make sense have we found homogeneous right similar needs and wants with these consumers and we're going to group them together now remember we might have six different segments within those segments they're going to have similar needs and wants but amongst them the needs and wants are going to be different and that's what makes segmentation so important for us and remember it's got to be actionable it can't just be interesting it's got to be actionable something that we could apply to our business so what else so what do you think um Susanna what is another criteria besides similar needs and wants that we're going to look for when we segment the market what's another criteria the same price range like different products for different price ranges like depending on what you want to use it for well it sounds like you're talking about the application because remember after we segment the market and we identify these different categories the key is for us to be able to bring that segmentation approach to life so there's got to be an application remember we said it's not just interesting of course it might be interesting but it's got to be actionable we do segmentation so that we could implement it in the marketplace we could use that as the basis for our marketing strategy and our tactics so what do you think go ahead tell us um potential for profit increase it could mean um so one of the criteria you're saying is that there'll be an increase in profit well the potential for the increase to decide whether or not we want it oh I see what you're saying so you're taking us some place a little bit different which is why would we segment so absolutely one of the reasons why we want to segment a market is because we want to increase our sales and our profitability but Mulan tell us those consumers
need to respond to the market the same way right so they're going to respond to the marketing mix in a similar way who could tell me what is the marketing mix yes the four Ps the four Ps and what are the four Ps price right so the marketing mix is known as the four Ps Price Place Promotion and Product so when we talk about responding to the marketing mix in a similar way that means that in that given segment the consumers have a similar need and want and when we say they respond to the marketing mix in a similar way it means that for example at a certain price they're going to purchase so that's the who was it who just said about the um the price points Susanna right you were saying about the different price points so you're right that would be an example of the application so when we talk about they're going to respond to the marketing mix in a certain way well the application is that we're going to have products at different price points like the iPad right is the iPad a good example how many people here have an iPad okay so quite a few quite a few they're at different price points there's iPads that are at $699 $599 $499 $399 yeah so Jonathan is telling us that there are different versions the reason why they have iPads at different price points is because they've done a segmentation analysis they segmented the market and understand that at $399 a group of consumers are going to have similar needs and wants they're going to respond to the marketing mix in a similar way and what else what's another criteria for segmentation needs to be reachable reachable so Molus says that the segment needs to be reachable who could explain to us what that means if we say that the consumers let's say well the target market is um Annie you want to give it a shot yeah so in other words if the segment is reachable that means that there's a way for us to communicate with them so that we know what programs they watch on TV we know what radio stations they
listen to we know what magazines they read so if we identify a segment we've got to be able to reach them otherwise how are we going to advertise where is our marketing communications going to come from so again it can't just be interesting it can't just be um a segment of people that have tattoos okay that's interesting I'd like to understand how many um people in the United States have tattoos and the different types of tattoos and the different colors of tattoos and the reasons for having a tattoo right there's different motivations for people to have tattoos I'd be interested in knowing that but if we're going to market a product or a service to them it's got to be reachable that's what Annie is saying we've got to be able to reach them we need to find out okay if we're going to want to sell a service to individuals that have tattoos we need to know what magazines do they read what radio stations do they listen to questions about that does that make sense so are we clear on why we say that the segments that we identify have got to be reachable so it can't just be theoretical that's interesting it's okay right to be interesting but if we're going to be successful in marketing our product or service we have to be able to communicate that's what marketing communications is all about how many people here have a tattoo how many here have a tattoo but don't want to raise their hand so tattoos are pretty popular I think that um now tattoos are more popular than they've ever been again people get tattoos for different reasons some are for um memorial purposes there's a lot of different reasons that somebody would have a tattoo so so far we've identified three criteria reachable respond to the marketing mix in similar ways similar needs and wants what's another um criteria yeah large so the segment should be large why is it important for the segment to be large when
we segment the market why is the segment being large important go ahead Jana because you want to be able to target more than just one specific group it's a specific group but it's still more general than one very very specific so can I paraphrase what you said absolutely okay so I'm going to paraphrase what Jana just said in other words we're looking for commonality remember we said we're looking for consumers that have similar needs and wants because we can't target every segment remember we start with segmentation first we're segmenting then we're going to quantify those segments and when we quantify segments what is that called who could tell me Sophia you want to give it a shot no okay no pressure Chloe what do you think when we're quantifying segments there's a term that we use in marketing if we're quantifying the segment right exactly market sizing that's what we're doing here market sizing and yes importantly we're also going to look at the growth rate then targeting after we quantify the size of the market and the growth rate we're going to have to target certain segments maybe we've identified eight segments but then we're going to look at well which segments are small which segments are large because we're going to need to focus today in um the marketplace companies that are on the New York Stock Exchange for example are being rewarded for focus which means that they're not trying to be all things to all people they're focusing on a core competence that the organization has they want to be the best at what they do so remember we talked about the beverage category we said that there's different segments in the beverage category in fact I've done um consulting for beverage companies so this is something I know a little bit about the beverage market in the United States is over $200 billion at retail on an annual basis but there's multiple segments which do
you think is the largest beverage segment in the United States go ahead Jonathan well Coca-Cola is a brand but in terms of the category when we're dividing the market into submarkets go ahead tell us soft drinks water well soft drinks is certainly one segment so we have uh soft drinks soda alcohol who said that you Jonathan you know something about alcohol yes you do all right so everybody see Jonathan after class in the United States 6% of the beverage market is alcohol so we have to decide as an organization when we segment the market which one of these segments we're going to focus on because we can't focus on all the segments even large companies have limited resources Coca-Cola has said many times when they've been asked because Coca-Cola sells soda they sell water they sell orange juice but when asked well what about alcohol they said no we don't want to be distracted we don't want to lose focus on nonalcoholic beverages that's their area of focus non-alcoholic beverages who sells the most alcohol well so then we got to ask ourselves how do we define the category is it just alcohol as a segment or are there different sub segments so for example what about beer liquor and wine we have to understand who's the market share leader right that's something that interests Stephen market share we've been having discussions outside of class about how to determine the market share of different companies so we have to decide how we define the market to understand are you the market share leader in the beer category the market share leader in the wine category and then within liquor we have what vodka what else what else anything else rum what else tequila tequila so market share is important because it helps us understand our performance relative to the competition you know that um the Oreo brand which has annual sales of over $500 million is that a lot does that seem like a
lot to you Oreos yes $500 million worth of cookies each year their tagline for quite a while was America's favorite cookie and the reason why they had to stop using that tagline is because of market share challenges their competitors challenged them in the courts and said we've taken Professor Bassell's marketing course we know what market share is if you're saying that you're America's favorite cookie then that means what that you are the market share leader and they said Sophia they said you're not the market share leader they said we are they said no they said in grocery you're the market share leader but that's only one channel of distribution so that's one of the four Ps Place distribution but distribution doesn't start with a P so um place means distribution yeah I'm just you know from the first day I told you I'm keeping it real right I want it to be practical for you not theoretical so I tell you the real deal so they challenged them based on the market share they said no you're not America's favorite cookie you're only the market share leader in the grocery channel but not in the convenience store channel not in the drugstore channel so grocery is not the only channel of distribution for cookies so now they changed their tagline to milk's favorite cookie clever right a great example of marketing and they are an outstanding um marketing organization so now that we talked about the criteria for segmentation and what's going to happen after we segment the market and we looked at an example let's talk about different ways that we could segment a given market what are some of the ways that we could segment the market geographically geographic absolutely so what does that mean though if we say we're going to segment the market geographically good tell us where's Somaya okay but you're here that's what's important all right I'm listening go ahead right absolutely so different regions countries now certainly we could segment any market
geographically but remember the challenge is to understand whether or not that's relevant to the product and service that we're selling so remember if we um segment the market geographically we're saying that those segments have similar needs and wants and are going to respond to the marketing mix in a similar way so if this is the world and we segment the market by regions right Nrat said we're going to segment the market by regions what are some of the regions in the world go ahead well I like to think so um but at a higher level right North America and I heard somebody over here say Asia South America Europe what else did we say Latin America Australia so depending on our product or service we need to make sure that if we segment the market for let's say soda if we segment the market for soda this way and we identify these segments then importantly we have to make sure that each of these segments is going to have similar needs and wants and they're going to respond to the marketing mix in a similar way now you say okay well what's wrong with that well maybe there is no difference between the needs and wants for soda in North America and South America or between any of these actually so you need to understand that an approach to segmenting markets is geographic so you're right one approach to segmenting a market is geographic but it's got to be relevant to our product and service questions about that go ahead Jonathan how about like when I went to Costa Rica they have like their own line of like beer that they don't have in the states or like other countries like us so is that one of those certain segments and so what happened what you're talking about um Jonathan is what happens after we segment the market which is the implementation so they found out like you said in Costa Rica that the needs and wants are different and they're going to respond to the marketing mix in a different way or in that segment in a
similar way so the takeaway for them is they introduced a brand into that marketplace that they would use to market to that segment does that make sense yeah because yeah so like for example in um the automotive industry what did um Toyota do Toyota they segmented the market and what they did as a result of their segmentation is they implemented a brand hierarchy so remember segmentation is something that we do that's analytical but there's got to be an application for our business there's got to be a takeaway it has to impact our marketing strategy and our tactics and for them what they did because of that analysis was they introduced three master brands what are the three master brands in the United States that Toyota has who can tell me one go ahead Lexus is one of their master brands absolutely right Scion is another and Toyota no Nissan has a different brand hierarchy this is for Toyota Motor Sales Prius is actually um too what was that Prius is now a fourth one they have like three different Prius models oh interesting tell us your name Ian Ian all right and Ian raised an interesting point this here is the corporate brand these are the master brands and what Ian has identified for us is one of Toyota's sub brands so Ian is right Toyota has sub brands so what we have here is a brand hierarchy and Ian says that Toyota has a sub brand called Prius that's right so a brand hierarchy is going to illustrate the corporate brand the master brands and the sub brands this is based on segmentation that the company did so this is the practical application they segmented the market and then what did they do with that information they developed this brand hierarchy well but they actually turned Prius from just being a sub brand with like Camry and Corolla and stuff like that they changed it into a master brand and now they have like multiple Prius models that are different
like what what's they like the Prius c which is like the smaller little like city Prius and they have a normal Prius and the Prius v which is like for families or whatever so my question is that designation c or v is that a sub brand or is that an example of a product code or a model number right so companies as part of their strategic um plan they might have um sub brands and of course each brand is going to have different models different model numbers product codes so yeah I would say that what you described the what was it the c or the v whatever right the v is um simply a model I see you're saying yeah because they're all still Toyotas I guess right so it's all and you look at the way it's marketed Toyota Prius now why would you have what's the benefit of this brand hierarchy why would you have a relationship between the master brand and the sub brand what are you hoping to gain Jana go ahead um well each one of the master brands has a certain um criteria to it that fits certain people um I think you talked about it last time good better best so this is an example so their pricing strategy is good better best and so based on that each of these brands has its own identity it has its own personality its own positioning in the market so positioning is the space that the brand occupies in the consumer's mind so what are some of the other um sub brands that Toyota has Camry Camry Corolla Corolla what else Yaris Avalon right Echo 4Runner which one Tacoma yeah so those are all um sub brands and importantly the master brands have a tremendous amount of equity a very high level of awareness that's why we want to create this relationship between the master brand and the sub brand that's why it's always marketed as Toyota Corolla Toyota Avalon Toyota Echo why because the awareness level for Toyota is very high but let's say we just decided to introduce a car and called it Echo how many cars do you think you would sell called
Echo like if us as a team so me as your coach we form a company and we decide we're going to sell cars and we introduce a car called Echo how many cars do you think we're going to sell and just imagine if we introduced a car that was called Nova then we would really have a problem but if we introduce a car that's called Echo we're not going to sell very many but if you introduce a car that's called Toyota Echo now what do you think you think we're going to sell a lot Jana yeah I actually have a question okay just to go back to beverages for a second um Coca-Cola actually has their version of beer but they only sell it in um Mexico I only know this because I've been to Mexico and I flipped the back of the can and I see it's a so is that like a segmentation specifically for Mexico like that area so they introduced the brand into Mexico a type of beer yes it's called Sol right so as part of their um product portfolio you're telling me that they have a um a brand called Sol a brand of beer called Sol right so is that a type of segmentation it's um it's not a type of segmentation it's part of their product assortment so they decided to um sell beer in Mexico which is interesting I would suspect it's part of a joint venture so in other words they're leveraging the manufacturing facility of a local producer and um there's advantages to doing that for Coca-Cola and also for their partner as well but historically they haven't expressed I mean obviously they're a huge company the Coke brand itself is valued at $70 billion so in the past they expressed no interest in selling alcohol that wasn't consistent with their um their mission and their values and their focus but um they are opportunistic and based on what you're telling me they decided that um they're going to um bottle beer in Mexico well hopefully it's better than Chevy Nova but we'll see what happens but importantly their focus is on the Sol brand the fact that it's distributed by
Coca-Cola most people don't know that now in this case they feel that um they want to have a brand that's more relevant to the local market or maybe they feel that um their positioning their identity the Coke identity is so closely tied to non-alcoholic beverages that they wouldn't be able to sell it under the Coke brand you know what that's called the type of research we do is brand elasticity research what we're trying to find out is how far we could stretch the Coke brand or how far we could stretch the Toyota brand so for example when we think about the elasticity of Toyota what about if Toyota introduced a soda Toyota soda what do you think Samantha she's like yeah yeah bring it right no you see over time brands especially power brands create an identity for themselves and importantly they create associations that are strong unique and favorable so as part of the identity how do you create that identity you create associations with your brand and those associations have got to be strong unique and favorable so once you do that and you have that association with non-alcoholic beverages or you have that association with cars then how are you going to right like who's going to buy Toyota soda even though Toyota represents quality reliability durability but Toyota soda I mean their tagline is moving forward right moving forward the thing that amazes me about that marketing approach and I want to say also that Toyota is actually an excellent marketing organization but there's something that they're not sharing you know they say that they're moving forward what do you think about moving forward okay that sounds kind of innovative but what they don't tell you in their advertisement is moving forward whether you want to or not right you remember only a couple of years ago their cars would suddenly accelerate from 0 to 90 miles an hour so moving forward is yeah maybe it's like kind of a
little bit of an overpromise I think it's going to exceed Denise it's going to exceed customers' expectations what do you think Brandon right I mean isn't that like you would never expect the car to move forward right on its own that's um and in that category yeah that's a little bit of a concern they're actually changing it though yeah companies change their um taglines over time taglines are more enduring than slogans slogans for advertising campaigns could change every three months every six months it depends on how effective the campaign is what is one of the indications of an advertising campaign's effectiveness what do you think how do we know if the advertising is working more sales is one indication and what else the level of brand awareness so how many people are going to recognize the brand when they see it or be able to recall from memory the brand name yes right there's an app on um the iPhone that um quizzes you to see if you recognize different symbols different symbols for different companies so you want to see do you recognize the symbol for Mercedes do you recognize the symbol for Nike so we could measure that we could do um branding research to understand if there have been any changes in the level of awareness and also any changes in the perceptions and attitudes towards our brand so in other words maybe um a year ago we did research and we found out that the perception was that our brand was associated with a low quality product so they thought that our brand means low quality so then we introduce an advertising campaign to try and make people aware of the fact that no actually quality is very important in our organization and that that's a focus area for us and that our products are high quality and we explain to them why we're able to achieve high quality so then a year later we do some research to find out do they still think that our brand is associated with a low quality product so if we see that now
instead of 10% of people thinking our product is high quality now 70% of the people that we surveyed think our brand is high quality then we know that our advertising is being effective so we talked about um demographic segmentation psychographic we talked about benefit segmentation that was the toothpaste example and behavioral well yeah behavioral which let's for now use um the usage rate which is something we didn't talk about so what is the usage rate one of the ways that we could segment the market is based on the usage rate what does that mean how often do you use the product right how often do you use the product what is the frequency that you use the product so what we do is we label the segments in terms of the usage rate as low moderate and heavy that's the usage rate so who could explain this why does this make sense from a segmentation approach what is it that we believe about each one of these segments because you don't want to spend money if they don't use your product well that's going to help us to um make that decision based on this usage rate which of these segments are we going to focus on what are we going to do about this insight so we found out that 20% of the people use our product on a limited basis 30% are moderate users and um 45% are heavy users and then we can even add um 5% for nonusers so our um hypothesis if you will is that each one of these segments has similar needs and wants and responds to the marketing mix in a similar way and so we're going to customize marketing strategy and marketing tactics for each of these segments so what would be for example let's say given that 20% of our customers have a low usage rate what could we do to increase the usage rate you see why this is so important segmentation is critical this is a very important concept in marketing and that's why we're taking the time today to also reinforce
um some of the things that we talked about last time because this is so important to marketing because of the implications that it has on our strategies and tactics for the organization so it's not just analytical theoretical it's going to have an application it's going to impact our branding strategy and our um approach to advertising and the other marketing mix elements so who could tell us if we have a segment here that has a low usage rate let's say our product is orange juice so you have different beverages that you could um consume it could be orange juice it could be milk it could be soda and some of those beverages we might describe as direct competitors and some as indirect competitors what could we do to increase the usage rate for orange juice tell all the benefits of orange juice tell all the benefits absolutely so communicate the benefits which would be what for example like vitamin A vitamin C right so um vitamin C absolutely so orange juice is known to um have a high level of vitamin C relative to other beverages absolutely so communicate the benefits maybe that's one of the reasons I'm sorry what is your name uh Mike Mike so what Mike is saying is that well maybe people aren't buying orange juice because they don't realize that it has a high level of vitamin C right so yeah we need to let people know that and that might increase the usage rate there might be some people who are like oh wow I didn't know Susanna says like put it in a TV show so it shows people using it absolutely so in marketing that's known as product placement so what is one of your favorite shows which one okay that too okay let's say that in that show what you do is you show people drinking orange juice right and if people are drinking orange juice then that's going to help to promote um sales of orange juice now you know that um milk and orange juice are actually competitors yeah they're both um drinks that are
typically consumed for breakfast but calcium yeah go ahead tell us calcium yeah so orange juice companies um realized that people just like we just discussed like why are people drinking orange juice and it has um Mike says that orange juice has vitamin C and people are not aware of that so we need to build awareness that one of the product benefits is that um orange juice has vitamin C but milk is known to be high in calcium and have vitamin A and vitamin D so orange juice companies started promoting their products as having calcium and vitamin A and vitamin D because they were trying to get more people to use the product and then orange juice because milk has a lot of calories and a lot of sugar sometimes in it started saying that they have way more calcium than milk so you should just drink that yeah so there's a difference between indirect and direct competitors a competitor being indirect is not a way for us to group competitors that we're not going to focus on actually the reverse is true because for example the direct competitors for Tropicana is what Minute Maid yeah all these other brands of orange juice so you say well that makes sense who am I competing against Minute Maid well yeah Minute Maid they market orange juice that's our competitor but what about other competitors so for example there's substitutes for orange juice people could buy other beverages if they're thirsty they could drink other beverages and they found out that one of the other beverages is milk and milk interestingly the milk industry also is focusing on orange juice because industry experts see them as being competitors of each other so have you seen the got milk campaign the got milk campaign is an attempt to promote the consumption of milk so they're not talking about different brands of milk just to increase the consumption of milk and to talk about like Mike was saying about orange juice the benefits of milk
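The usage-rate segmentation set up earlier in the lecture (low, moderate, heavy, plus non-users) comes down to bucketing consumers by frequency. A minimal sketch, assuming hypothetical weekly-servings thresholds and made-up survey data (the real cutoffs would come from research):

```python
# Hypothetical sketch of usage-rate segmentation: bucket consumers by how
# many servings of orange juice they drink per week. Thresholds are made up.
from collections import Counter

def usage_segment(servings_per_week: int) -> str:
    if servings_per_week == 0:
        return "non-user"
    if servings_per_week <= 3:
        return "low"
    if servings_per_week <= 7:
        return "moderate"
    return "heavy"

# Made-up survey responses: weekly OJ servings per consumer.
survey = [0, 2, 5, 10, 1, 8, 3, 0, 14, 6]

counts = Counter(usage_segment(s) for s in survey)
total = len(survey)
for segment in ("non-user", "low", "moderate", "heavy"):
    print(f"{segment}: {counts[segment] / total:.0%}")
```

Once every respondent is tagged, the segment shares, like the 20/30/45/5 split in the lecture, fall out of a simple count, and strategy and messaging can then be customized per bucket.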
was it the government or no it was the trade associations in the dairy um market that came together and they paid for the campaigns because they said you know what my competition is not this milk farmer and this milk farmer and this milk farmer and that milk farmer they said yeah I know either they're going to buy milk from him or they're going to buy milk from her or they're going to buy milk from me but you know what I really need to be worried about is orange juice people are drinking orange juice why are they drinking so much orange juice and that was the um the catalyst for them to promote consumption of milk as a category to create what we call category need to create primary demand those terms are interchangeable primary demand and category need so those are some ways that we could increase the usage rate so you see that's an example of taking something that's analytical theoretical and then saying now that we've done this now that we've segmented the market that's what we've done here what are we going to do about it so Mike says we need to advertise and tell people that our product has vitamin C we need to have promotions like buy one get one so when you go to the Pathmark you buy one um half gallon of orange juice and you get another half gallon for free so we're going to have don't get all choked up now see how dear this stuff is to my heart you see I mean I can't even I'm just like so we could have promotions we're going to advertise we're going to have consumer promotions and also trade promotions so consumer promotion is buy one get one but we could also offer promotions to the retailers to carry our product and to promote it what else what else could we do show right absolutely so tell them that it's a beverage that you could drink any time because there's definitely um this perception that orange juice is for breakfast so what we're trying to do is change the use
occasion that's what we call it the use occasion what is the use occasion so you're saying well the perception is that the use occasion is for breakfast but now we need to change that communicate to people and convince people that no it's not just for breakfast but it could be for lunch it could be for a snack it could be during dinner it could be after dinner and you know what maybe after dinner you have some orange juice and then put a little Grey Goose in there that's right you wouldn't want to do that with milk so you want to increase the use of the product yes you could do that advertise mixed drinks so absolutely so complementary products so absolutely so why not have a campaign where you like partner with um a vodka company to promote the use of these two products together so maybe the consumption of both will increase so what would you tell them drink vodka and orange juice and you'll be healthy right you won't get the flu you'll have a high level of vitamin C where is it oh really okay we have to look for it what else what else could we do to increase the usage rate take people to the farm where it comes from and so education absolutely educating the consumer taste better taste so why not have a taste test give them free samples is that a good idea give free samples of orange juice do you want people to drink orange juice maybe they think that it doesn't taste good it can't taste as bad as vodka right so give them free samples why not come to campus and stand in front of Whitehead and let people taste the Tropicana orange juice or give them what coupons why not give them coupons that's going to help to increase trial of our product give them a coupon for a half gallon of orange juice or give them a coupon for $2.50 off do you think that would work do you think if people got a coupon for a free half gallon of orange juice they'd say you know what this week no milk orange juice for everybody what do you think Jonathan with a little right so
this is a practical application of segmentation yeah it's interesting but what are we going to do about it see how all these things that we talked about as part of our marketing strategies and tactics all because right we talked about all these different strategies and tactics because of this usage rate segmentation and importantly some of the strategies and tactics are going to different for those that are low usage and those that are heavy usage do you agree like for example Mike told us that um if we want the low usage consumers to drink more orange juice you need to tell them about the benefits you need to tell them that hey this has vitamin C vitamin C is good for you but for the heavy usage consumers do we need to tell them that do what do we in our advertising campaign remember we said most commercials most TV commercials are 15 seconds we're not going to have the same commercial for each of these segments does that make sense the messaging is going to be different Mike is right for these people hey let's try this let's tell them that this has vitamin C one of the key benefits and now it also has calcium and a vitamin A and vitamin D so we're going to build awareness of those benefits of the product but what about these people they're already buying 6 half gallons of orange juice a week what is the message to them going to be you want to tell them the same thing hey by the way this has vitamin C and calcium they're already buying three gallons a week why do we need to tell them that we need to tell them different messaging that's going to either increase their usage rate or minimize their buyers remorse which is also known as post cognitive dissonance like for go ahead oh um if you're segmenting this into l uh medium and heavy usage and I didn't really see the point of non usage because obviously you know you're just you're just segmenting the people that are using it right so then I I don't really see the point of the 5% because 5% of what you know 5% of 
America 5% of the world like it it didn't really say you know so of the market so of those um of like the beverage market right I didn't really get like 5% of what okay yeah I'm just giving this as an example so we would have to decide you're right is it the beverage Market um yeah it could be it could be United States it could be like somebody said before it could just be New York City it depends where our focus is going to be it could be North America it could be Latin America so maybe in North America 5% of those who um consume beverages don't consume orange juice so you're absolutely right um if this is going to be more meaningful to us we need to Define within the what context within what Market but based on our research we're making the assumption that we found out that 5% don't drink orange juice at all it's not that they only drink one glass a week or they only drink one glass a month it's they don't drink orange juice at all why why don't they drink orange juice that's what we need to find out we need to ask those questions in research to find find out what is the substitute do they drink cranberry juice do they drink pineapple juice do they drink grapefruit juice or do they just not drink juice at all maybe they drink milk you see that's right all you need is milk that's their that's their manra milk and Oreos so why why do you need to even get involved with orange juice I meant orange juice why why bother with that you got milk and Oreos life is brand right what what else do you need and naaka right that's it you need orange yeah but not really you could drink that straight but as long as it's in moderation everything moderation right I know none of you drink alcohol anyway but I'm just saying in case you why you laughing what did I say in case you did in case you did in moderation right all right so what are we going to do with how is our messaging going to be different for those that are heavy users what are we going to tell them probably tell them 
like they live a healthier lifestyle and like they live longer and I don't know they did this with the coffee thing they said people who drink coffee are more um Innovative and all this other you know so what we're trying to do is convince them they made the right decision that yes drinking orange juice is going to make you healthy and you're going to be more productive and you're going to do better on your exams and you're going to be able to um pursue Graduate Studies and importantly get a good job and oh because you drink orange juice now think about think about um have you ever experienced any type of Bio remorse I mean have you have you like gone into Best Buy and purchase a iPad for $699 which by the way by the way um the tax on that is like 50 bucks right so $6.99 sounds nice but then by the time you're out of there it's almost like $800 and then get home and think did I do the right thing is this really why did I do this why did I get the 64 gigabyte one with a retina display why did I just get the 16 gigabyte without the retina display maybe I paid too much money is it really worth it am I really going to be able to do my marketing homework on this can I really watch coach's YouTube videos on this thing yeah yeah you checked it out you see Jonathan he's already check watching coach on uh um on YouTube his iPad some of my students they tell me that they make their children watch the Youtube videos I told them that I I thought that that was cruel and unusual punishment oh we were talking about the um the oranges when we just say it sounded ridiculous like oh you're going to get a better job and stuff but like Sunny Delight I remember Sunny D were like you know they showed a commercial where the kids were little they're like give your kids Sunny D and they showed them that they got older they got better jobs they became like Olympic winners and they'll win the Olympics right definitely so that we're going to have a brand promise what is our value proposition 
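For readers following along in code, the usage-rate segmentation and per-segment messaging discussed above can be sketched as a small program. This is a minimal illustration only: the segment cutoffs, the consumer data, and the message copy are all invented assumptions, not figures or wording from the lecture.

```python
# Hypothetical usage-rate segmentation for an orange juice brand.
# Cutoffs (half gallons per week) and messages are illustrative assumptions.

def usage_segment(half_gallons_per_week: float) -> str:
    """Classify a consumer by weekly orange juice consumption."""
    if half_gallons_per_week == 0:
        return "non-user"
    if half_gallons_per_week < 1:
        return "light"
    if half_gallons_per_week < 4:
        return "medium"
    return "heavy"

# Different message strategy per segment, per the class discussion:
# drive trial for non-users, build benefit awareness for light users,
# promote new use occasions for medium users, and reassure heavy users
# (minimize post-purchase cognitive dissonance).
MESSAGES = {
    "non-user": "Trial: free samples, taste tests, coupons",
    "light": "Benefit awareness: vitamin C, calcium, vitamin D",
    "medium": "New use occasions: lunch, snacks, mixed drinks",
    "heavy": "Reassurance: you made a healthy choice",
}

consumers = {"Ana": 0.0, "Ben": 0.5, "Cal": 2.0, "Dee": 6.0}
for name, usage in consumers.items():
    seg = usage_segment(usage)
    print(f"{name}: {seg} -> {MESSAGES[seg]}")
```

The point of the sketch is the same one made in class: the tactic (sampling, benefit education, occasion expansion, reassurance) is selected by segment, not broadcast identically to everyone.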
We're going to have a brand promise. And you remember I told you about Pepsi: when they introduced Pepsi into China, their brand name was phonetically correct, so in Chinese it was pronounced "Pepsi," but it translated to "we bring your ancestors back from the dead." Now that's a brand promise. Yeah, it lacks some credibility, right? So forget about "it gives you wings." Who wants wings? Drink this, this will bring your ancestors back from the dead. Now the problem is, in China they didn't think that was funny, because Chinese culture places a very high value on ancestry and family and history, so that was a big blunder for them. But I think in the United States that's very clever. We should develop our own energy drink. Forget about 5-hour Energy; we'll have our own tagline for our brand, which is going to be "we bring your ancestors back from the dead." That's it. Take it during midterms, finals. Drink this, drink this, right? All right.
Marketing_Basics_Prof_Myles_Bassell | 8_of_20_Marketing_Basics_Myles_Bassell_31412.txt

So we're gonna continue our conversation about pricing. Remember we talked about pricing constraints, pricing objectives, and different pricing strategies and approaches. We talked about demand-oriented approaches, which include what? Skimming and penetration pricing, we said, were examples, and prestige pricing. Cost-oriented approaches: remember we talked about cost plus, standard markup, and target profit as approaches to pricing. And we talked about competitive pricing. Then we talked about loss leaders. Did we talk about loss leaders as a pricing strategy? Yeah: how are we going to use loss leaders to drive foot traffic into our store, for example. So let's go over some of the questions here. The first question says: the money or other considerations, including other products and services, exchanged for the ownership or use of a product or service is referred to as? Go ahead, Jason. Price. So the best answer is D, price, on page 320. Question two: the practice of exchanging products and services for other products and services rather than for money is? After all of that, you didn't get one of these? You guys that came 20 minutes late, okay, here you go. Oh wow. That's on YouTube, right? Let's see, which one is that, 13? You need one more. 14? Hold on, you do, but you don't have 13. You guys decide. So the best answer is B, barter. So what's the difference here between these two terms? What distinction are we trying to make? Why did we first say price and then barter? Isn't one explicitly money and one traded? Yeah, absolutely. If it's the exchange of money, that's the price that we pay; if it's the exchange of products or services, then it's B, barter. The next question says: to increase value, marketers may do any number of
things, including decreasing price, increasing benefits, or both increasing benefits and decreasing price. And the best answer is D: to increase value, marketers may increase benefits, or decrease price, or both increase benefits and decrease price. Do you see why that is? Remember we had previously talked about value, and we said value is a function of price, quality, and benefits. There could be other dimensions as well, but certainly those dimensions are a key part of value. And we said that value is subjective; it's based on what we perceive to be a good value. Importantly, we said that a good value doesn't necessarily mean inexpensive, and that's what this question suggests: something might be a good value because it has many benefits, and if it has many benefits, we might be willing to pay more for the added benefits, for better quality. Isn't that the reason Sony is able to get a premium for their electronics? Because they have more features, more benefits, and the product is of higher quality, the perceived value is high even though the price is also very high. Next question. Who's going to read the next question? Go ahead, Jason.

"Creative marketers engage in value pricing: the practice of simultaneously increasing product and service benefits and maintaining or decreasing price."

Exactly. So when we talk about value pricing here, it says that the practice of simultaneously increasing product and service benefits while maintaining or decreasing the price is an example of value pricing. Who could explain that to us? What do we mean when we talk about value pricing? Why is that an example of value pricing?

Value pricing is putting a price, quote unquote, an amount, on the value the customer will get from the product. I remember reading in here about the bathtubs that have a door you just step into, for the elderly and kids. People are going to spend more money on that because there's an added feature of safety, or accessibility I guess, for the elderly or other types of situations. So it's added value at a maintained price.

So we could modify a product, exactly. We don't need to lower the price to add value. As part of value pricing we could lower the price, but even if we maintain the current price, if we add more features and more benefits, isn't that increasing the perceived value? So product modification is an example of how we could increase the perceived value of our product or service in the marketplace. We have to think about, as we move through the product life cycle and as we look at the diffusion of innovation model, how we are going to get people to buy the product, how we are going to get more people to buy the product, and how we are going to sustain the growth that we're experiencing in the product life cycle. So we introduce the product, and then ideally we experience a period of growth. How are we going to continue to sustain that growth? What this suggests is that we could lower the price. We've talked about that a number of times; we talked about it last class, too, when we discussed the role of price as it relates to the rate of adoption and the ability, in an elastic market, to sell more units. But value pricing also suggests that we could increase the perceived value by not lowering the price and instead adding more features and benefits. Isn't that what Apple did very effectively with the iPod? Because there are some challenges we need to address when we lower the price, even for a promotion. Remember, we said yes, we're going to have a short-term increase in sales because we've lowered the price, but even though we see that as a TPR, a temporary price reduction, in the minds of most consumers it doesn't work that way, and very often consumers only purchase when the product is on sale. Remember we talked about orange juice, how very often retailers promote at
two for $5, two half gallons for $5. But the manufacturer's suggested retail price for a half gallon of Tropicana orange juice is not $2.50. So we have to be careful how we use price to impact demand, because once you lower the price, it's very difficult to raise the price of the product after that. Let's take the example I mentioned, the iPod. What Apple did was actually very clever. They introduced the iPod, and they had a number of models, product items which made up their product line (remember we talked about that), a product line of MP3 players. And after the introduction, instead of lowering the price of those items, what they did to increase the perceived value was to keep adding features and benefits. For example, they increased the amount of storage space: the same model, instead of being 20 GB, was now 30 GB, or 40 GB, or 60 GB, but at the same price. Then they also decided to introduce the Nano and the Shuffle, which were at a much lower price point and had, what was it, literally only one gigabyte. But for the initial product line, how much were they selling the iPod for when it first came out? Like $299, $249, $199. Importantly, the key takeaway for us is their pricing strategy. By the way, for the other items in their product mix they've used different pricing strategies over time, but with the iPod in particular, they didn't lower the price of those items. What they did was just keep adding more features, more benefits: now it comes with more space, now it comes with an LCD screen in black and white, now it's an LCD screen with color. You see how, by modifying the product, we're increasing the perceived value, which ideally is better than lowering the price, for the reasons we just talked about. So if we could keep adding value and avoid lowering the price, in many cases that would be better. You don't want to get into a price war, because that's going to erode our profits. So although we said price is a powerful tool in an elastic market, we should try to avoid that if we can. You'll see that companies realize this. For example, shampoo, one of my favorite products: you'll notice that if you go into Target, they have a 16 oz bottle of shampoo, but sometimes, instead of lowering the price, they say "now with 20% more free." And of course anybody looking at that understands that the value is more. You're paying the same price, but you're getting a better value, which, if you're able to do that, I would suggest in many cases is better than lowering the price and saying two for $5. So who could tell me, what is the problem with lowering the price for a sales promotion? Don't get me wrong, it's not that we don't do sales promotions. Of course we do sales promotions on a regular basis for a lot of products. But what's one of the issues that we just talked about? Go ahead.

The consumer might get stuck on the TPR, because they'll be like, okay, I only want to buy the product when it's cheap, I don't want to have to spend a lot of money. So instead, like with the shampoo, placing more value on the same product at the same price, people feel like they're getting more, and they're still buying.

Absolutely, so we achieve the same goal. Yeah, people are going to buy more, and they're going to see it as a value. But what happens when we have promotions is that we see a spike in sales, because what we've done is what we call overstocking the customer, or overstocking the trade. That means if you use that particular brand of shampoo, instead of buying one that week, you're going to buy two, or maybe you buy three. Well then, next week you're probably not going to buy any. And it's the same thing with orange juice and other products. Retailers do that as well, too; that's what we mean when we talk about overstocking
the trade: retailers, if they're able to buy products from the manufacturer at a significant discount, of course they're also going to stock up in their warehouse. This week they're going to buy a thousand cases of peanut butter, when maybe usually they only buy 350 cases of peanut butter a week. Well, next week they're not going to buy peanut butter at all.

And in some cases, they'll sell, for example, two-for-one orange juice knowing that it expires in two weeks, so they assume you're not going to drink that much. You're going to finish the one orange juice, the second orange juice you can't even drink, so the next week you have to buy again. It's like certain things they know. For example, Costco, that's one of their big things: they know people tend not to use a lot of the stuff by the expiration date, so people buy it, it sits around in this giant tub of whatever, and it goes off.

But you also raised something else, another option for us as it relates to managing the pricing component of the promotion, which is, instead of saying two for $5, you're suggesting buy one, get one free. So we didn't lower the price. We didn't say, well, before it was $10, now it's $5. We said no, you buy one at $10, and the other one we're going to give you for free. Do you see how the implementation is different? I think it's reasonable to suggest that the result is going to be the same: the impact on sales is going to be the same, the perceived value is going to be increased. But we didn't lower the price, and so we don't have the issue that Jacob was mentioning, which is that now people are going to think, oh, it's $5 now. And that's why I had raised the concern about Mercedes selling a $30,000 car, because when they advertise, they show their symbol, the Mercedes symbol, and $29,999 in every ad. Now, we talked about creating strong, unique, favorable brand associations, which is challenging. Creating a logo is not as challenging; it takes skill, but creating a brand name and developing a logo is not as challenging as creating strong, unique, and favorable brand associations, because that takes a long time to do. And as difficult as it is for marketing executives to do that, it is also a challenge for our target market to make those connections, to make those associations. That being said, how long do you think it takes people to make the association between Mercedes-Benz and $30,000 when every ad that you're running in every magazine shows your logo and symbol and a $30,000 price point? I mean, consumers might be slow to process the messaging, to decode the advertising, but come on, how many times do you need to see that before you say, I got it, Mercedes is a $30,000 car? So while price can be a very effective tool for us in terms of increasing sales, it's a tool that we have to wield carefully, because there are implications. Lowering the price is not a no-brainer decision. Once you lower the price, you have to live with that, because, like we said, next week people don't want to spend $4 for half a gallon of orange juice. And do companies do this? Of course they do. Do we promote products? Yes. Do we have sales? Yes. But again, my word of caution is that you have to manage that carefully, and once you do it, you've opened yourself up to certain expectations on the part of your customers. If you don't promote the product every week, they eventually learn what our schedule is: we promote the product every other week, or once a month, and that impacts their behavior. We could probably talk for quite a long time about consumer behavior; it's important for us to understand how our decisions impact consumer behavior.

Yeah, in terms of what we're talking
about, buy one get one free: why is it possibly a dangerous thing to do, in the sense that, why won't someone perceive it as two for $10 instead? Like the same thing with the orange juice, how we perceive the orange juice as two for $5. So wouldn't that hurt the perceived value of the product?

Yeah, but you're achieving the same goal without actually lowering the price. What we've done is change the perception of the value. See, here we came out and said two for $5; there we said you're going to buy one for $10, and the other one we're not going to charge you for. That's different from saying two for $5, or two for $10, which would be the equivalent situation. We're not saying it's two for $10; we're promoting it as one for $10, and the other one we're going to give you for free.

But can't people perceive it that way? That's what I'm trying to say.

Well, ultimately people do perceive it that way, and that's why they purchase it, because that's what's increasing the perceived value. The difference is we didn't lower the price; what we've done is increase the value that they're getting.

It's the word "free" that makes the customer really buy into it.

Well, free is a very powerful term in marketing, definitely. It's effective in headlines, it's effective in direct mail. Certainly free is something that gets the interest of customers. So let's go back and talk briefly about pricing constraints. The next question says: which of the following statements about demand as a pricing constraint is most accurate? And the best answer here, which is talked about on page 325, is that whether the item is a luxury or a necessity affects the price a seller can charge. So in terms of demand as a pricing constraint, remember we talked about
different pricing constraints. Well, here it says that whether the item is a luxury or a necessity is going to affect the price a seller can charge. The next question talks about pure monopoly: pure monopoly is the competitive market situation in which, what do you think, what's the best answer there? I think I heard it here. Who said it? Yeah, B: one in which one seller sets the price for a unique product. Can you think of any examples?

The iPhone, when it first came out; it was one of the first big smartphones.

There are definitely some interesting market dynamics there, but the question is whether we would consider that to be a pure monopoly. I think both of you are suggesting something like the electric company or the utility company, which is very often something we consider to be a monopoly. Are there any true monopolies?

Ford used to be a monopoly.

Yeah, back in the day, when cars first came out. Well, in this case it says that for a unique product there's one seller that sets the price. So if Ford, back in the day, were the only ones selling cars, and the Model T was a unique product, then they set the price.

Railroad companies. Johnson & Johnson.

Oh, tell us why. What does the book say?

It's about treating heart disease, the stent that they sell.

Okay, so the stent that they sell is something that they're marketing, and the example here is that the product they sell is very unique and they're the only one setting the price. But I want to come back to the issue of utilities. When we talk about utility companies, the gas company, the electric company, I think in general that's a good example of a monopoly, but what's the exception? What are we seeing now in the market? Why might you say, yeah, it is a monopoly, but?

Gas stations. I was thinking about natural gas.

You're saying that there are other ways to provide energy, and that's why it's not a monopoly? No, what I'm thinking of is that the government has allowed competitors to enter the market. Now, how does that tie into the government's original motive? Why would they give somebody a monopoly? Why would they allow that to occur?

Because they have incentives from the government.

Yes, so it is an incentive from the government. But what do you think, Zach? Why does the government allow the electric company, so to speak, to have a monopoly?

If it's government-run or government-owned, they have more control over how many employees they have and what the prices are, and if it becomes competitive, they can lose a lot of that power.

So, as we think about utilities, what the government has realized is that a large infrastructure is needed to provide electricity, and it doesn't really lend itself to replicating that infrastructure. It's really cost prohibitive, isn't it? I mean, think about telecommunications: how many companies could you have running cables? How many different companies could you have with towers for wireless communication? The government realizes that it's impractical, and not economically feasible, to have these vast infrastructures being built, if you could even get investors to invest such an enormous amount of money. We're talking about many billions of dollars to create an infrastructure that's going to be able to provide electricity for a community, and maybe in that case they couldn't even be profitable anyway. If you had three companies providing electricity in a given market, and they each had to invest, let's say, a hundred billion dollars,
it would seem unlikely that all three companies could be profitable with such a huge investment; it might be $500 billion. So the government says, yeah, that doesn't really make sense to us; it's impractical, it's not economically feasible. But what's happened is that they recognize the need for competition, and so what they've done, with utilities and also in telecommunications, which is another good example, is require the monopolies to give access to competitors. In other words, there are organizations that are resellers of energy, resellers of electricity, resellers of natural gas, that are able to come into a particular market without having to invest in the infrastructure. They're not going to run their own gas lines. You see why I'm saying it would be impractical to do that? Three companies each running different electric wires to your house? This company runs theirs to Matthew's house, and for Jason there's another electric company running wires there, or gas; think of the way natural gas is delivered. Are they going to dig up all the streets to run natural gas lines? It's already there. So what the government has said is that, in effect, you're going to sell them a license to use your gas lines, and you're going to sell the gas to them at a price where they can be profitable, and they're going to resell it to consumers. And that's what happens in wireless communication as well. For example, they share towers. You know, Verizon, Sprint: you think all these companies have their own towers? That's crazy. Who could afford it? I mean, how would these companies stay in business? So they share towers, and that's how competition is able to exist in these particular markets, in these categories.

Yeah, exactly, but it's not that the government doesn't want you to have all that stuff; it's kind of the opposite. Like for natural gas lines and things like that, it's not that other companies aren't allowed to build their own pipes. It's that the government gives a huge company the main contract, with requirements that they have to lend it out. Like with phone companies: anybody can start a phone company if they want, and AT&T legally has to rent out its network, has to let somebody use it if you pay for it. So it's not that they're the only people using it, and there's no requirement that you get power from the state utility; if you can generate your own power, you can start your own company doing whatever you want.

That's not really what I said, no.

You said the government gives it to one company just because it costs a crazy amount for all the infrastructure, because not everybody can just dig into the ground and make a pipe, right?

And then I said that what they do is require, and we've seen this is commonplace now, that they lease and sublet the gas lines and the towers to other competitors in the marketplace, so that it's not what we would call a true monopoly. So in this market you could get gas from somebody else; it's still going to be delivered through the same pipes and so on. And I think that's Zach's point: yes, it's not a true monopoly, but certainly the cost of the infrastructure, and being able to construct and deploy that infrastructure, is something of great concern, because even if you did have the $500 billion, what are we going to do, tear up all the streets to run these gas lines? So I think what Zach is saying is definitely a good point, and that reinforces what
we were talking about. All right, next question, number seven. Who's going to read number seven? All right, go ahead.

"Maximum quantity of products consumers will buy at a given price, shown by..." I'll go with demand.

Yes. Final answer? Absolutely, yes: the demand curve. That's exactly what the demand curve is. The demand curve shows the maximum quantity of products that the consumer is willing to buy at a given price. Questions about that? And that's what we've been talking about: what happens if we change the price, and how is that going to impact the quantity demanded? That's why it's so important for us to understand the implications, and the demand curve is a graphic representation of that. The demand curve is obviously going to vary by product, so it could look very different depending on the product and the level of elasticity for that given product in that market. What does that mean? It means that in some cases you might lower the price 10% and sales will increase by 10%. In other cases you might lower the price 10% and sales might only increase 5%, or only 1%. Or you could lower the price 10% and sales might increase by 20%. So when I say the demand curve is not the same for all products, that's what I mean. We need to understand: what is the demand curve for our product? How price sensitive is the market for our product? What is the price elasticity of demand? That's what we need to understand. We need to make some assumptions, and over time you would expect that we would have some data to support those assumptions. So how would we get that data? Go ahead.

Run a sales promotion and see what happens.

Right, 20% off, and then see if we feel there's a strong enough correlation between price and the demand for the product, because remember, there could be other variables impacting demand. You might think, well, the reason I sold more shovels was because I lowered the price 20%. Oh, but the fact that we had three feet of snow the day before has nothing to do with it, right? Maybe you should have raised the price 20%, because now the demand is very high and the supply is the same. Same for batteries. Exactly. The next question says: the average amount of money received for selling one unit of a product, or simply the price of that unit, is referred to as? What's the best answer? Right, it's E, average revenue, on page 331. And remember, we said we want to look at these metrics. It's important for us to look at these metrics. What is the average revenue? What is the average number of units sold per transaction? In retail, for example, we're very interested in the average number of units sold per transaction. For every sale, how many units are sold? Is it one, is it two, is it three? And what is the average sale? Once we know that, we can try to influence it. That's why, remember, we said: did you ever notice that when you go to the register and you're dealing with a salesperson, they say, would you also want to try this? Would you like to try this in this flavor? Do you want to see if you would like this t-shirt in blue? That's not just random. They know their objective; the manager told them that the average number of units per transaction is three, and we should try to get that up to five. And what does that mean? That sales are going to increase. But you need something that's measurable. What does it mean to just say "increase sales"? What do you do with that if you work at Banana Republic? Increase sales? Well, that's a strategy; we need some tactics. So, Joe, you're working in a place and they say you're going to increase sales. Well, what do I do differently now than what I was doing last week? But if you know that you're trying to increase the number of items that you sell, that's something that's measurable. Okay, so yeah, you're right, I think that's
probably based on the data, I believe, but also based on your observation that, yeah, the typical customer buys three items on average. So he said we need to try to get that to five, or we need to get the average transaction, the average sale, higher. So then you think, oh, you're going to trade them up. So they're looking at a $10 sweater, and you say, don't you want to buy a $50 sweater? And then the average sale per transaction is going to go up. Questions? All right, you guys are great, fantabulous. Orange juice? Yes. All right, next question, number nine. Who's going to read it? Which of the following statements about price elasticity of demand is most accurate? What do you think, is it D? What do you think is the best answer? Somebody, anybody? Oh yeah, it's definitely either A, B, C or D. Absolutely, I think definitely one of those. Yes, B. The best answer is B: the more substitutes a product has, the more likely it is to be price elastic. What does that mean? Who could tell us what that means, and why do you think that's true? First of all, what does it mean when you say there's more substitutes? Go ahead, Ari. A substitute would be like, say, margarine: something similar that someone would buy, that replaces it, that satisfies the same need. More substitutes means if you raise the price of a product, the number of people that are going to buy your product is going to be a lot less; they're all going to buy the substitute, because it satisfies a similar need. Wow. What do you think, Joe, do you want to add to that? No, I'm saying that was well worded. So, substitutes: if there's a substitute for the product, like Ari is saying, if instead of butter somebody could buy margarine, if instead of Pepsi somebody could buy Coke, or instead of milk somebody could buy orange juice, then the more likely it is that the product is going to be price sensitive. Which means that if we raise the price, for example, if we decided we were going to try to
raise the price, then demand would go down, and pretty significantly, actually. Why? Because instead of Pepsi they'll just buy Coke, or instead of butter they'll buy margarine. So the more substitutes there are for a product, the more likely it is to be price elastic, so price sensitive. Do you think any of the others come close to being logical in any way? That's discussed on page 333. What about this one? It says price elasticity with inelastic demand must always be greater than one. What do you think? No, the opposite is true, right? Exactly: if it's greater than one, then it's elastic; if it's less than one, then it's inelastic. So "price elasticity with inelastic demand must always be greater than one" is not right. What about this one? It says with inelastic demand, reducing price will result in an increase of total revenue. That's not true at all, right? It says with inelastic demand, meaning the market is not price sensitive, reducing the price is going to increase total revenue. No. If it's inelastic, that means when you lower the price, consumption is not going to increase, demand is not going to increase. And then it says with inelastic demand, reducing price will result in an increase in total revenue, though not necessarily increasing profit. Same thing: if the market is inelastic, that means it's not price sensitive, so whether you increase or decrease the price, demand is not going to change much. Now, we said, importantly, that elasticity is on a continuum. So a product could be very price sensitive or somewhat price sensitive. It's not just either-or, inelastic or elastic; it could be somewhat elastic, it could be very elastic, for example. Questions about that? Are we good? Yeah. Are we great? Wonderful. Yes. All right, next question. More than good: the quantity at which total revenue and total cost are equal. This is important. What do you think, Joseph? I
believe the answer is A, the break-even point. Is that your final answer? Yes, final answer: A, break-even point. Yes. Why is it important for us to understand that? Because this is a very significant part of our decision-making process, to understand the break-even point, which here it says is the point at which total revenue is equal to total cost. Well, you want to minimally break even, because you don't want to take a hit on a certain product. Even though you might have products out there in the product mix that you're not making money on but that are attracting enough attention to help you make money on other products, we don't want to be losing money in any sense. And so for a new product, what it does is it helps us determine whether or not our objective is realistic. It also tells us how many units we're going to have to sell to break even. So is that important? Definitely, yeah. So we need to know: do we need to sell 5,000 units, do we need to sell 10,000 units, do we need to sell 50,000 units, or do we need to sell a million units? What if we have to sell 10 million units? That's going to impact, for example, how much money we spend on advertising. It's going to impact whether or not we're even going to decide to pursue that opportunity. Because what happens if we find out that, based on our cost structure and the price that we set, we would have to sell 10 million units in the first year to break even, but the entire category is only 8 million units per year? So what are we going to say, that we're going to introduce this product and in the first year we're going to get 80% of the market? We've got to ask ourselves: is that realistic? You know, in some mature markets even getting 1% is unrealistic. Because you'll see, when people develop business plans, their assumption is always, I'm realistic, I'm not thinking we're
going to get 80%; our business model is based on the expectation that we'll get just 1% of the market. Well, what does that mean? 1% of the market in the beverage category means you're going to steal share from Coke and Pepsi. 1% of a $200 billion category is substantial, and in some cases that's unrealistic. So the break-even point is something that's insightful and important for us to understand. All right, let's look at the next group. Remember we said that there's different approaches to pricing. On page 346 it says the key to setting a final price for a product is to find an approximate price level to use as a reasonable starting point. Four common approaches to helping find this approximate price level are demand-oriented, cost-oriented, yes, profit-oriented, and competition-oriented. Who could give us an example of demand-oriented? A demand-oriented approach to pricing. What's one of the approaches, one of the pricing strategies? Well, we said that a demand-oriented approach to pricing would include, remember we looked at different examples: penetration pricing is a demand-oriented pricing approach, skimming we said is a demand-oriented pricing approach, prestige pricing is a demand-oriented pricing approach. Which is different from cost-oriented. What were some of the examples of cost-oriented? Cost-plus, and standard markup. And what about competition-oriented, what was a good example that we had talked about? Loss leaders, yes, loss leaders is a good example of a competition-oriented approach. And what about profit-oriented, what was that? MAP pricing? Well, MAP pricing is part of our discussion about the price that, for example, retailers can charge: the lowest price that they're allowed to charge, set by the manufacturer. Because we said the manufacturer has an MSRP, the manufacturer's suggested retail price, and, that being said, manufacturers also want to control the minimum advertised price. So there's a certain level that they
don't want retailers to sell below; there's a certain price that they don't want them to sell below. So, for profit-oriented, target profit could be the amount of gross margin, or the total amount of dollar sales. What about the next question? It says a skimming pricing policy is likely to be most effective when... When is a skimming pricing policy most likely to be effective? So skimming, remember, we said skimming means we start at a high price and lower our price over time. Now, if we introduce a product at a high price, what's one of the things that we might be concerned about? If we sell the product at a high price, people won't buy. People won't buy, good, that's definitely one. And then what's the other concern, closely related? Competition, yeah. If you're selling a product at a high price, then it's going to attract competitors, and competitors might want to sell the product for less, but certainly they're watching. Remember we talked about the VCR: you're selling VCRs at $1,100, and obviously it costs much less than that to make, but they decided to deploy a skimming approach and lower the price over time. Well, don't you think competitors see that? They're like, we know it costs $20 to make the product; we want to sell VCRs at $1,100 too. So the skimming strategy is really most effective when you know that's not a problem, that it's not going to attract competitors. Because if the product is going to provide a significant amount of gross margin, then of course it's going to attract competitors. So we also want to understand if there are any barriers to entry, things that are going to keep competitors out. The next question says a skimming pricing policy is likely to be most effective when... which one? Wait, are we on question two still? Yes, E, right. So question two is E: a high initial price will not attract competitors. All right, the next question is also about skimming, so here we get into it in a little bit more detail, and it says
that there are several situations when using a skimming strategy is going to be most effective. One is that lowering the price has only a minor effect on increasing sales volume and reducing unit costs. Two, the high initial price does not attract competitors, which is what we just said. Three, customers interpret the high price as signifying high quality; important. And then the fourth situation is that customers are willing to buy immediately at the high initial price. So those are the innovators, the innovators and possibly also the early adopters. That's on page 346. So the next question says that in some cases penetration pricing may follow skimming pricing: the skimming pricing would help (C) recoup initial research and development costs, and the penetration pricing would increase market share. So do we follow this scenario? It says in some cases penetration pricing will follow skimming. What that means is that, yes, we introduced a product at a high price, we want to sell to the innovators and the early adopters, and we're trying to recoup the large investment that we have in research and development, for example, and then we follow that by deploying a penetration pricing strategy, which means that we'll then, at some point, sell the product at a low price. So when we talk about pricing strategy, when we talk about any strategy, it's not forever, it's for a given point in time. So with skimming, it's possible to introduce a product at a high price, lower it over time, and then sell the product at a low price to try and sell a lot of units. But the reverse is not true: introducing the product at a low price, penetration pricing, and then trying to say, now we're going to do skimming. How does that work? How does that work? Questions? Does it make sense? Yeah, okay, good. Let's see, let's see. Oh, this is an interesting question: Hallmark cards. All right, so Hallmark cards introduced a line at 99 cents, about half the price of the previously least expensive cards sold by
Hallmark. So they're trying to appeal to a mass market that was price sensitive. What is the pricing strategy that's being used? C, penetration pricing. Who could explain that, why is that an example of penetration pricing? So they have some cards in their product assortment that they sell for $5, and there's some cards that they sell for $2, and they decided, we're going to try and sell cards for 99 cents, because they realized that some people are very price sensitive, and they don't want to spend $5 or even $2 for a card; they want to buy a greeting card for a dollar. And so their objective is to sell a lot of cards to this segment of the market, and the way that they decided to do that is to sell cards at a low price. So they introduced an economy line. When we think about pricing, we think very often about economy, good, better, best; consumers could buy a variety of different cards from Hallmark at different prices. So introducing that card at 99 cents is an example of penetration pricing: it's a new product that they introduced, they want to sell a lot of it, and so they're selling it at a low price. In some cases manufacturers design products for different price points, and retailers apply, the best answer is A, approximately the same markup percentages to achieve the three or four different price points offered to consumers. So what would be an example of that? For example, you might see products in the store that are $4.99, $9.99, $14.99, $19.99. We refer to those as magic price points. Those are magic price points in retail, because at those price points in given categories there's a lot of volume that's done. And so the question is, do people not realize that $19.99 is the same as $20? The JCPenney commercials, yeah. But there's a perception, and a lot of research has been done, that in certain categories a significant amount of volume is done at those price points. And at Walmart, of course, it's a slight variation of that, so they have products that are, like,
instead of $19.99 it's $19.97, or $18.97. So what this is saying is that manufacturers and retailers are trying to work together. When a manufacturer brings a product to retailers, they have to understand that retailers are trying to hit these magic price points, and they need to be able to sell it to the retailers so that the retailers can meet their margin requirements and still sell at those price points. Now, of course, you find yourself in a precarious situation when you've done that and you think you've done your homework, but you need to know your customer, because then Walmart says, well, that's great, but I'm not going to sell that at $19.97, I want to sell the product at $18.97. And so what's going to happen is that if you want to sell to Walmart, then you're going to be less profitable. But aren't you just guaranteed all your stuff is going to sell? Well, sometimes, yeah. Walmart has made a lot of millionaires, and they've also made a lot of poor people, so be careful what you wish for. It could certainly be a great thing to be on the Walmart planogram in more than 2,500 of their stores, but keep in mind that very often other retailers in the mass merchant channel and in other channels don't want to carry what Walmart has, because Walmart is the low-cost producer of retailing. Their operating structure is such that they could sell products for less than other retailers and still be profitable, because their selling, general and administrative expenses, their operating expenses, are much less than other retailers'. So retailers don't want to be in a situation where somebody comes into their store and says, well, I saw the same thing at Walmart. Our responsibility then is to differentiate the product for other channels, for other retailers. All right, the next question: the most commonly used pricing method for business products is cost-plus pricing. So in fact, in order for partners to be successful,
and I say partners, because if you're selling to other businesses, then those are your partners; their success and your success are tied together. Like wholesalers? Absolutely. So in business-to-business it's very common: the customer realizes that the other party in the value chain adds value to the process, and the costs are not something that's secret. They know what your costs are, and they say, to your costs we're going to add 15%; that's the margin that we're going to allow you, to perform the role that you perform in this process. And so it's important, because you might think, well, what if you try and squeeze them and say, instead of 15% we'll give them 5%? But then you're not a partner; they can't be successful. Successful companies understand this. Motorola, for example, and other companies, they understand: yeah, what you're doing is an important part of the process, and so you need your partners to be successful, because what if those partners go away? Then that's a problem. So they need to be able to earn a living. Maybe there are other organizations that could provide the service for less; that's possible. But you have to keep in mind that it's got to be a win-win. You're not fooling anybody by thinking, ha, we got them to sign for 5%, but they can't stay in business. How are they going to cover their expenses? So we need to keep that in mind when we think about how we sell to other businesses, and that's talked about on page 350. All right, number eight: product line pricing involves determining, one, the lowest-priced product and price, two, the highest-priced product and price, and three, the price differentials for all other products in the line. So that's another way of explaining the idea I shared with you about good, better, best pricing. Isn't that what they're talking about here? The lowest price in the product line, the highest price in the product line. Right, so
companies can have a good, better, best pricing strategy, or sometimes we talk about economy, good, better, best, and premium, and then sometimes ultra-premium. And basically what we're recognizing is that we sell products at different price points, and that's what this question talks about: product line pricing, that we could have products in the line that are going to be at different prices; they're not all going to be at the same price. Questions about that? The challenge for us, though, is to determine how reasonable it is, how wide the gap could be in terms of price points. So for example, Toyota: they sell the Echo, Corolla, Camry, Solara, Avalon. Those are all sub-brands, and they sell them at 15,000 and 18,000 and 22,000 and 27,000 and 32,000. That's okay, but that's different from saying, well, we're going to sell cars at 30,000 and we're going to sell cars at 130,000. There's only so far that you could stretch your brand. All right, so let's see quickly before we go. The next question says that different brands within a company's product line generally have different profit margins, for example Nike's variety of tennis shoes, and so the best answer is C: Nike is using a product line pricing strategy. That's on page 356. And the next question: when a seller represents a price as reduced, the item must have been offered in good faith at a higher price for a substantial previous period. The best answer there is B: a high price was set for the purpose of establishing a reference for a price reduction. All right, have a good night. Orange juice, orange juice, already.
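The two calculations at the heart of this lecture, price elasticity of demand and the break-even point, can be sketched in a few lines of Python. This is a minimal illustration; the fixed-cost, price, and unit-cost figures are made up, not from the textbook:

```python
def price_elasticity(pct_change_quantity, pct_change_price):
    """Price elasticity of demand: |% change in quantity / % change in price|.
    Greater than 1 means elastic (price sensitive); less than 1 means inelastic."""
    return abs(pct_change_quantity / pct_change_price)

def breakeven_units(fixed_cost, unit_price, unit_variable_cost):
    """Units at which total revenue equals total cost:
    fixed cost divided by the per-unit margin (price minus variable cost)."""
    return fixed_cost / (unit_price - unit_variable_cost)

# Lower the price 10% and sales rise 20%: elastic demand.
print(price_elasticity(20, -10))   # 2.0
# Lower the price 10% and sales rise only 5%: inelastic demand.
print(price_elasticity(5, -10))    # 0.5

# Hypothetical new product: $1M in fixed costs, sold at $25 with
# $15 of variable cost per unit -> 100,000 units to break even.
print(breakeven_units(1_000_000, 25, 15))   # 100000.0
```

If the whole category is only, say, 80,000 units a year, that 100,000-unit break-even figure tells you immediately the objective is unrealistic, which is exactly the decision the lecture describes.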
Marketing_Basics_Prof_Myles_Bassell | 15_of_20_Marketing_Basics_Prof_Myles_Bassell.txt

all right, so we're going to continue our conversation about marketing. Today we're going to make more of a transition from talking about products to brands. Up until now we've been talking about them simultaneously, right, Melinda? But today we're going to move from our discussion of products and talk a lot more about brands and branding. So last time, when we finished up, we were talking about the product life cycle. Who remembers what the product life cycle is? Right, so there's several stages, Milan says, there's several stages. Introduction: introduction is the point when we launch the product into the marketplace, and at that point in time, which we could call time zero, sales are also zero. And that's what this curve shows: on this axis we're looking at time, and on this axis we're looking at sales. So we're moving this way, we're moving forward, just like Toyota, whether we want to or not, we're moving forward through time, and at this point sales are zero, and time is also zero. The line shows that that's the introduction stage of the product life cycle. Then as we move through time, what's happening? Who could tell us what's happening? We're moving through time like this, right? That's what's happening. And, assuming that we did an outstanding job with our marketing plan, as we move through time, whether it's month one, month two, month three, or year one, year two, year three, year four, year five, what we're anticipating is that sales are going to increase. Does that make sense? Now, if we did a bad job, like we talked about last time, why products fail: if we didn't develop a compelling marketing plan that had meaningful strategic and tactical components, we're not going to be successful. If our timing is bad, we're not going to be successful. Those are some of the reasons why products fail.
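One crude way to make the life-cycle stages concrete is to classify a product by its period-over-period sales growth. The heuristic and its thresholds below are my own illustration, not anything stated in the lecture:

```python
def life_cycle_stage(prev_sales, sales):
    """Rough stage classification from period-over-period unit sales.
    Thresholds are illustrative: the lecture's 'mature' beverage-style
    category grows only 2-3% a year, while a growth market may grow 50%+."""
    if prev_sales == 0:
        return "introduction"   # time zero: no prior sales yet
    growth = (sales - prev_sales) / prev_sales
    if growth >= 0.10:
        return "growth"
    if growth >= 0.0:
        return "maturity"
    return "decline"

print(life_cycle_stage(0, 1_000))          # introduction
print(life_cycle_stage(1_000, 1_500))      # growth   (+50%)
print(life_cycle_stage(100_000, 102_500))  # maturity (+2.5%)
print(life_cycle_stage(100_000, 90_000))   # decline  (-10%)
```

The point of the curve, as the lecture stresses, is not that every product follows it exactly but that each stage calls for different strategies and tactics.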
What else? What is another reason why a product might fail? Yes, absolutely. What else? Yeah, the product is not necessary, right, so it doesn't meet a customer need. Well, it doesn't meet a need that the customer is aware of having, right? So we could be ahead of the curve, which has a lot to do with timing, either because our product is too innovative, it's too new, or maybe the economy is in a recession. What else? Yeah, the target market. So we could be targeting the wrong group of customers for our product, and that target market, the people we want to sell our product to, is not seeing a need for the product. What else? Go ahead. Right, the execution of our plan. So remember, I said it's not enough to have a great plan, because you might actually have an outstanding plan, but if it's not flawlessly executed, then we're not going to be successful. So when we think back as to why products are not successful, why product launches are not successful, we need to understand why. So you go into an organization and you say, hi, my name is Amulonga, and I'd like to suggest this... how was it? I'm going to keep practicing. This is my product idea. And what happens? The people that have been working there for 10, 15 years say, we've tried that already, Mulan, it didn't work. But you need to understand why it didn't work. It might have been a great product, it might have met the customer need, but the plan was not flawlessly executed. Too much or too strong competition, absolutely. Remember we talked about the competitive set: if we're going to introduce a car, we need to know who our direct and indirect competitors are. If our direct or indirect competitor is Toyota, we're facing some pretty significant challenges. That's why, when you look at that perceptual map, you need to think about where we're going to be positioned. Is it going to be here, where there's competition? Here? Maybe we want to be over here, where it's just us. So definitely, we
might have had a great product that met a customer need, but the competition was too well entrenched in the market. They might have already had significant market share. Maybe the category is mature. So this is growth, where we're seeing sales increasing, and then we have maturity. If a market is mature, we're not seeing growth in that market, and it doesn't need to be zero growth; it could be two percent, three percent. That's not really the kind of growth we're thinking of when we say that a market is growing; we're talking about fifty percent, a hundred percent, and some markets are growing 300% per year. Remember we said the beverage category is very mature in the United States: 200 billion dollars a year in sales at retail, but it's only growing two to three percent per year. That's a mature category. Now, that doesn't make it bad. I mean, we'd like to have a piece of that 200 billion dollar category. Coca-Cola is very profitable; they have billions of dollars in sales and profits. So it's not bad, we just need to understand the competitive dynamic. We need to do an analysis to understand the market attractiveness, and that market attractiveness could be low or it could be very high. At some point, you know, we're just looking at what could be. So with all the things that we mentioned so far about why products fail, we may not get to here, we may not get to this point, we may not get to that point. The product might be a bust. This curve tells us what will happen if things go well; this is what we can anticipate. That's why this is so insightful and so meaningful. Because why? Why is this so insightful, even if our curve doesn't look exactly like this? Why is this so meaningful to us as marketers? If we're managing a business, if we're the vice president of marketing or we're the brand manager in an organization, why is it helpful to understand the life cycle of a
product as it's illustrated here? Because at different stages of the life cycle we're going to have to deploy different strategies, different tactics. Our objectives are going to vary throughout the life cycle of the product, which is absolutely true, what Jana is saying. So our strategy when the category is mature is going to be different than when the category is not mature and we are still seeing a significant amount of growth. Because when the category is mature, there's really two key ways that we're going to increase our sales. What are they? How are sales going to increase, if I'm telling you that the category is mature and it's not growing? How are we going to increase our sales? Add features, yes. You could add features. So if we add features and add benefits, if we modify the product, what we're doing is enhancing the perceived value. And if we're enhancing the perceived value... trading them up? In what way, tell us. Yeah, you're trading up your customers. So what we anticipate happening, like what's happening with the iPhone, is that they're going to buy the iPhone 4, they're going to buy the iPhone 4S, they're going to buy the iPhone 5.
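The arithmetic behind "grow the whole category or steal share" can be sketched quickly. The category size, growth rate, and share figures below are hypothetical, chosen only to echo the lecture's flat-category example:

```python
def required_share(target_sales, category_size):
    """Share of the category you must hold to hit a given sales target."""
    return target_sales / category_size

# Hypothetical mature category: $200B today, growing only 2% a year.
category_now = 200e9
category_next = category_now * 1.02

# Suppose we hold 10% of the category ($20B) and want to grow our
# sales 20%, to $24B.
our_sales_target = 0.10 * category_now * 1.20

# In a nearly flat category, that growth has to come out of
# competitors' share: we'd need roughly 11.8% of next year's pie.
print(required_share(our_sales_target, category_next))
```

That jump from 10.0% to about 11.8% share is exactly the "steal customers from competitors" path; the alternative the lecture describes is to make the whole pie bigger by adding features and benefits.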
So we're enhancing the features and the benefits of the product, and we're going to sell more units. So what we're expecting to happen in a mature category is that we're growing the entire category; the entire category would have to grow. If I'm telling you that sales are flat, that the category is not growing, then one way we would be able to increase our sales is if we grow the whole category. Another way to increase our sales in a category that's not growing is to steal customers from our competitors. Do you see why that is? If the category is not growing, that means this is the size of the category now, and next year this is the size of the category. Well, we'd have to make the category this big for us to increase our sales, and then we could get a bigger slice of that pie. If the size of the category doesn't change, then we'd have to steal the share that our competitors have, steal their customers. That's very different from when a category is growing and everybody's selling more units. If everybody's selling, let's say, 20 or 30 percent more units, then you're growing with the category, which is what happens very often. But what if the category isn't growing? Then you're either going to need to add new features, new benefits, modify the product in some way, so that your sales are going to grow and the entire category is going to grow, so the entire number of units being sold in the category would increase. Or we could introduce a line extension, or we could introduce a brand extension. But if we introduce a brand extension, then we're going into a new category, so the sales of the company will increase, but the original category is going to stay the same. Now, we could be in maturity for a long time; there's nothing wrong with that. We might be selling products in a category for 50 years, 100 years, and the category is mature. That's not a bad thing. Like I said with Coke,
they're very profitable, they're making billions of dollars, but the category is mature. But remember, this curve is just a foreshadowing of what could be. It may not work this way. In fact, one of the key things I want you to remember about the product life cycle is that we can influence the product life cycle. Our hands are not tied behind our back. In fact, our job as marketers is to influence this cycle. And what is it that we want to happen? We want growth; we want to be able to sustain growth, we want sales to continue to grow. And if we reach a point where the level of household penetration is very high, which means what? That means, for example, that everybody in the United States has a television, or has two televisions, or even three televisions. So what's going to happen to your sales of TVs? Sales of TVs are going to go down. If the household penetration is very high and everybody already owns a TV, then sales are going to go down, unless the category innovates. And what have we seen happen in the market for TV monitors? 3D, yeah, adding new features, right. So plasma has seen a significant downturn, then they introduced LCD and now LED, and now there's more features: smart TVs, where you could access the YouTube lectures for this class from your TV. Some of them have a widget on the screen where you can click on YouTube and watch the lectures for our class; some of them have a browser, so you could actually, on your TV, using the remote, type in youtube.com. Somebody said 3D. So if you continue to innovate, you can continue to get people to buy, even in a market where there's a high level of household penetration. If they already have a TV, our challenge as marketers is, well, how do we get them to buy another one? How do they get us to buy clothes, right? Promotions, sales, but they also keep changing the designs. So you say, well, you already have a suit, you already have a blue suit. They say, but you don't have a blue suit like this, with powder blue stripes, thin
powder blue stripes like pinstripe and what about this suit where it's not pinstripe it's actually wider like chalk stripes and you got blue suits that's great what about gray suits what about brown suits so we need to find ways to get the consumer to purchase so again important takeaway for us is that we could influence this product life cycle and we have to continue to modify the marketing mix what's the marketing mix the marketing mix is the four ps right price place promotion and product we need to continue to modify those throughout each stage of the product life cycle so when we say modify the marketing mix what does that mean well it means that if we're going to modify the price then we have to decide if we're either going to increase the price or lower the price or the price is going to stay the same that's a strategic decision that's not a no-brainer we have to decide when we introduce the product if we're going to use a penetration pricing strategy which means that we introduce the product at a low price to try and sell a lot of products in a very short period of time or we could use a skimming pricing strategy which means that we introduce the product at a higher price and then lower the price in a planned way over time so what's the advantage of a skimming strategy why would we introduce the product at a high price because you might say well wait a minute whoa wait a minute at a high price aren't less people gonna buy what do you think well first of all you just make the money back from all the advertisements so just in case you can make all your money back so we're not just ripping people off right adam when we introduce the product at the high price you're saying there's a very legitimate reason why you could charge a higher price at the beginning because we might have spent literally billions of dollars in developing and researching the product and advertising heavily to promote the launch of the product but then once those initial
startup costs are recovered then it would be reasonable that we could lower the price a little bit and then a little bit more and a little bit more oh so the level of newness you're talking about right so absolutely so if we differentiate ourselves then we definitely have the opportunity to charge more so even if what you are saying i'm sorry what is your name michael so even if what michael said is true about us having a significant amount of startup costs because of advertising and research and development as marketers we realize that our positioning may not be what we call head-to-head it might be a differentiation strategy if we differentiate ourselves that means if we're unique in the market then we have the ability to charge more so if we differentiate ourselves then we could charge more we could charge a premium for our product absolutely so everybody get what she is saying she says yeah that's gonna be a short-lived phenomenon because competitors are also doing analysis they're also doing swot analysis they're watching what we're doing they're trying to identify their direct and indirect competitors and so they're gonna look at our product line and they're gonna say that is crazy they're selling the product for 600 we could make that product for 80 bucks and so we're gonna attract competitors into our category and they're going to rely on us to create that category need so there's definitely advantages to being first in fact very often marketers believe that it's better to be first than it is to be better so in other words it's better to be first to market than it is to afterwards come out with a product that might be better that's not always the case because look at what apple did they were not the first to the market with mp3 players i know it's like hard to believe right like there was a time when mp3 players were being sold and it wasn't branded ipod you were alive then wow isn't that crazy but that's true so they learned
first so it shows you the power of marketing they just did a much better job of marketing the technology so that being said understanding the different types of products and the life cycle of a product we need to realize that products in any category take any category you want the products in any category are the same they provide the same functional benefit what makes them unique is the brand products are wrapped in brands now if there was no such thing as brands there would be a very limited number of advertisements because the only thing you could talk about is category need right that would be our objective is to create category need examples of that we talked about remember we said well if we're not going to build awareness of our brands we're going to just talk about the product then we're going to advertise the category which is what the got milk campaign is about they don't talk about brands in the got milk campaign the only thing they're talking about is drinking milk why because they don't want you to drink orange juice right isn't that what we said because they see orange juice interestingly as more of a direct competitor they said you know we're not as worried about the other dairy farmers we're more worried that people are drinking orange juice instead of milk they feel that a good number of people believe that orange juice is a substitute for milk all right so we're going to talk about different branding elements and the criteria so one branding element now we're talking about branding elements is the logo the logo is a graphic representation of the brand name so the logo is what's called a word mark right jenny all right so a word mark contains words what type of words does it contain the name of the brand that's a logo that's a graphic representation of the brand name this is a logo well this is like their logo now this was their logo before that's the logo that's the pepsi logo like i said a
couple of years ago yeah yeah this is back in the day so these are examples of logos companies very often use symbols now a symbol is not a word mark josh you good what about ford what about ford is that a logo or ah interesting so why would you think that's not a logo if this is their logo that's fine your logo could be does it look like that work with me people you guys hey anybody want to come up here give me the marker no problem come on come on joseph what are you guys uh oh who's gonna come up where's stephen come on you guys anytime all right so your logo absolutely could be encapsulated you know it's no problem we could put this you know in a rectangular shape where you could put little stars here whatever you want that's fine that could be our logo it doesn't need to be just you know these letters by itself no it's fine when you're developing a branding strategy you don't need to include a symbol that's okay so some companies don't have symbols so like for coca-cola they might have some personality symbols like for example the polar bears but that's not the same thing as like what pepsi has which is this that's the symbol for pepsi and this right that's the symbol for mercedes these don't contain words what's so compelling about a symbol even though granted there's successful companies that don't use symbols as part of their brand architecture that's fine but definitely there's also companies that have used symbols in a very compelling way that is recognizable worldwide so the symbol is what's called a non-word mark so a non-word mark is a graphic representation of the brand period not the brand name the brand itself see because we say the logo is a graphic representation of the brand name because it includes the brand name so with the symbol what we're trying to do is create an association between the symbol and the logo and we want to
have associations with our brand that are strong unique and favorable is the logo do you mean do we have trademark protection let's say outside of the us no they're not supposed to and other countries though are starting to realize that there's value in creating powerful brands and so now they're trying to do the same because they recognize that the coke brand for example is very valuable who remembers what the value of the coke brand is 270 billion how much 70 billion yeah 70 billion according to businessweek the value of the coke brand is approximately 70 billion u.s dollars doesn't that blow your mind because that's not their inventory of soda has nothing to do with the inventory of soda or their corporate headquarters in atlanta georgia or their office equipment or the bottling equipment nope none of those tangible assets we're talking about an intangible asset just this they say is worth 70 billion dollars so governments want to have their organizations their companies developing powerful brands and so they realize that if they want other countries to respect their trademarks then they're going to have to respect the trademarks of other countries as well so for coke you'll see the circle r which means it's a registered trademark but even before it's actually registered you could get trademark protection so one of the criteria for developing a branding element is that it needs to be protectable so if we develop a logo if we develop a symbol if we develop a tagline a slogan packaging or attempt to create trade dress then it needs to be protectable that should be one of the metrics that we use to determine whether or not the branding element is worthwhile whether or not we did a good job is can we get trademark protection so in order to get trademark protection our branding element needs to be unique so you can't trademark words in the dictionary now you're thinking well wait what do you mean yeah you can't trademark words
in the dictionary so what some companies do is they make up their own words why well one of the main reasons is because they want to be able to get trademark protection they don't want people to copy their logo so the question is is the word apple trademarkable no but that's the name of the brand so the brand name is also a branding element but what's trademarkable is this that's an apple right that's an apple you want me to draw a picture of the dog again it was the dog yeah cows and dogs too you guys you want to mess around the candy go ahead come on that has got to be empty go ahead finish it off suzanna come on work with me brandon you look like you want some candy come on come on so you might be surprised you're like what yes big companies make big mistakes so your brand name one of your branding elements is your brand name now you may not be able to get trademark protection for your brand name if your brand name is a word that's in the dictionary now if you have trademark protection that means that other people can't use that name so that's why you can't trademark words in the dictionary because how can you keep people from using the word apple but that being said if you do decide to have a brand that's called apple or orange or pear then what you need to make sure that you do is have a logo that you can get trademark protection for and your logo needs to be something that's stylized and unique what do you think about this this was the logo before up until literally only a couple of years ago almost three years ago and this is the logo now which do you think is more memorable more unique more protectable is it this design here that has this like 3d effect it's like glowing you know what do you think how many people think that this logo the old logo is more protectable than the new logo what do you think raise your hand if you think the older logo is more memorable more unique more
stylized all right let me try a little trick here let me see how many of you want to get 100 on the exam oh your arm does work amazing amazing i just wanted to check i wasn't sure you know all right so you could raise your hand that's good good good good so most people would think that the prior logo was more stylized more unique well yeah so a logo whether it's the design or the color is a stimulus right when we see the logo or the symbol or the packaging what's gonna happen is we're gonna process that sensation we're going to interpret that sensation so our eye sees that we could see hear smell taste whatever the sensation is we have the ability to interpret that sensation that's known as perception now we may decide not to process or interpret the sensation so for example we might actually have a perceptual filter what's a perceptual filter i'll give you a couple of examples of a perceptual filter and let me help out chantal here an example of a perceptual filter here's a great example perceptual vigilance christina perceptual vigilance perceptual vigilance is an example joseph you're following me perceptual vigilance is an example of a perceptual filter what that means is that we filter out certain sensations we can decide whether or not to process a sensation so for example perceptual vigilance says that we only pay attention to things that are relevant to us so for example if you see an ad for burger king or mcdonald's or taco bell right everybody knows the taco bell dog the little chihuahua yeah yeah right if we're not hungry then we're gonna filter that out that's known as perceptual vigilance so perceptual vigilance means that if we're not hungry if we're not thirsty then we're not going to pay attention as advertisers and as marketers we need to make sure that we understand that very well so we might be advertising for cars for mp3 players for fast food
for 1080p high definition 3d smart led monitors but if people don't have a need for that their perception is not going to occur do you think that's true what do you think does perceptual vigilance happen you see all these ads for let's say kfc taco bell pizza hut white castle but if you're not hungry you're not gonna be paying attention they might be there to make you hungry but then when you see the ad and you are hungry then you realize that right that becomes part of your options your consideration set so all those different choices that come to mind when we're hungry are known as the evoked set all the different fast food restaurants or all the different beverage brands so we're gonna have the evoked set and the consideration set the consideration set is only those brands that we seriously consider purchasing you see why that's important to us as marketers because when we talk about brand awareness there's two types there's brand recognition and brand recall so when we're doing research to try and understand the level of brand awareness we're trying to understand for example the ability for us to retrieve from our memory the brand name remember the brand name is one of the branding elements we have to retrieve from our memory the brand name the first brand that comes to mind in a category when we're doing research is what we refer to as top of mind awareness which is the most enviable position so when people say give us tell us the names of ten beverage brands the first one that comes to mind is what we call top of mind awareness brand recall is important for us to measure and we want to track our level of brand recall and brand recognition over time why why is that what is that going to tell us if we're doing branding research if we're doing brand awareness research over periods of time every six months for five years 10 years 20 years what is
that going to help us determine why would we do that why would we do branding research why do we keep doing research whether it's through phone surveys or internet surveys mail surveys why would we keep track of that what is the purpose go ahead brandon you want to get a sense of where people are at with your brand whether they like it or what could be improved what could be made better so it's going to help us with our positioning right so with the perceptual map we're gonna try and understand where we are on the perceptual map and then if there's any changes so we're gonna look at not just the brand recognition and the brand recall but we're gonna look at attitudes and perceptions so we want to know how do they perceive our brand what is our brand image our brand personality and do they remember our brand name that's gonna go ahead tell us it's also about relevance to know like what's relevant right now so if you do like a cell phone survey over the next four years and the first thing that comes up is iphone you know that iphone is doing something to remain relevant so that may be your biggest competition so it kind of helps you figure out who your competition is and what you need to do right absolutely so it's not enough to have a brand name a logo and a symbol even if you have brand awareness it has to be relevant that's an excellent point like for example when at&t took over cingular wireless cingular had worked so hard to create a unique brand identity that was like what she is saying louis right oh that's you wow why are you playing on me like that trying to confuse the professor let me say oh i know williams lauren warren oh come on that was close kiamani right your name all right that's how i'm going to grade your exam too well no close but no cigar no i'll give you a break i'll give you the benefit of the doubt don't worry kiamani right mulan okay so cingular wireless was very relevant they connected with the younger
generation their brand was fun it was cool they had that orange jack so they had done a terrific job and then at&t acquired them and they decided well they're going to rebrand everything at&t well unless you really live under a rock then you would have heard of at&t but the concern that marketers had is that even though people had heard of the name at&t was it still relevant did it have a positive brand identity so people are aware of your brand that's not it just to have awareness is not enough you have to be relevant you have to have a brand personality you have to have a brand image that people can connect with what do you think sofia you don't agree you do no okay so think about if we're doing that beverage research let's go around real quickly i want everybody to tell me the name of a beverage like a soda we're not talking about grey goose belvedere patron right we're not going to go there that's after class a soda the name of a soda go ahead right dr pepper some kids coca-cola ginger ale seven up sprite what else so now look at this think of all those brands that we just mentioned remember that's our evoked set then there's only a limited number of brands that we're seriously going to consider purchasing that's the consideration set so why is that important because if we just know brand awareness but we don't understand purchase intent then we didn't do a good job in our research because of course if you ask them to tell you the name of a soda brand or you ask them if they've ever heard of let's say the coke brand of course they're going to say yes like who hasn't heard of it except jenny but most people have yeah okay she says she has all right i apologize so the problem is that even if they're aware of the brand that doesn't mean that there's purchase intent that means that of course they're aware of coke but then wouldn't you just literally fall out of your seat if they said but i don't drink soda so you see why
it's important to ask questions that are probing it's not just enough to know the level of brand awareness yeah that's going to help us determine whether or not our advertising is effective because advertising is going to build awareness and we want people to learn the messaging we want them to remember it we want it to become a part of their memory so the features and benefits that our branded product provides so importantly they have to remember the brand name because if they see our commercial and they could remember the jingle right that's great but if they can't remember what brand that jingle is associated with if they can't remember what symbol is associated with what brand and the features and benefits and the unique selling proposition then we weren't successful in our advertising campaign so we need to find out the purchase intent and we need to determine the consideration set so it's great that they're aware of our brand but if at the end of the day they say well yes i'm aware of the coke brand but i only drink lemon lime i don't drink cola or i only drink root beer or i don't drink soda at all i only drink orange juice for example all right or water like chantal is that really water in there it is josh check it make sure any questions any questions oh wow we still got like two hours okay so aided awareness is when we ask are you familiar with the coke brand or if we go into a restaurant and the server asks us would you like a coke or a sprite that's a type of aided awareness unaided awareness is when they say well what would you like to drink that's an example of unaided awareness all right so if you go into applebee's and they ask you what do you want to drink that's an example of what type of awareness huh unaided and it's an example of brand recall so we have to retrieve from our memory the brand name which if our objective is to have a high level of brand recall then we're gonna
have to spend a lot more money on advertising it's going to take more exposure to our ads for our target audience to be able to process that message and to learn the message and for them to store it in their memory because recognition just means that when you see this symbol you recognize that symbol as being associated with the mercedes brand name that's brand recognition when you see the packaging and you recognize the packaging you recognize the logo and possibly the symbol and the trade dress so some companies have a very strong trade dress they have a color that's associated with their brand name like mcdonald's has a strong association with their brand name and the colors red and yellow which one tiffany absolutely they have like a sort of a teal blue color like my shirt that's part of their trade dress that's very memorable chase that was very impressive a few years ago they used the color blue as part of their trade dress which now many of us are very familiar with we recognize that when we see it what about kodak yellow kodak yellow kodak yellow and red okay and red red like this what do you like marina that kind of red you know the color we have to specify a specific pantone number pantone is a color matching system it's not just it's red we have to specify what shade of red because there's a lot the tagline the tagline is a few words or a very short phrase that we associate with the brand we're almost done don't worry a couple of minutes couple of minutes hang in there it embodies the positioning or the unique selling proposition of our brand it's very enduring it's not something that we change on a regular basis so we have our logo our symbol and then we have a tagline that's not the same thing as the slogan the slogan is the key message for an advertising campaign the slogan we're going to change on a regular basis because remember we're talking about perceptual filters there's perceptual vigilance there's
perceptual defense perceptual defense means we block out messaging that makes us feel uncomfortable like for example if you're a smoker and you see a billboard that says smoking causes cancer you're not gonna read that right you're gonna filter that out who would want to so we have to think about that and understand that as marketers as advertisers if you wanna have a billboard that says smoking kills who is your target audience is it really smokers because the concept of perceptual defense says they're gonna block that out maybe the families maybe the families non-smokers adaptation is a type of perceptual filter that says that when we become so familiar with the ad we stop paying attention we get used to it so we might get used to it for a variety of different reasons adaptation can happen for a variety of different reasons but one of the main reasons is because we see it too many times and after a while when you keep passing that same billboard every day on the way to the college whether it has anything to do with smoking or food we stop paying attention and so that's why the slogans and the advertising keep changing because companies realize that wear out occurs advertising wear out occurs the message gets tired and so they keep changing the slogans that's the theme for an advertising campaign they keep changing the theme and the commercials and the print ads and the billboards for a particular company because they know adaptation is going to occur people are going to stop paying attention all right wait don't go yet i want to tell you something about the exam the tagline is a couple of words like for example we bring good things to life is the tagline for general electric that's something that is not changed on a regular basis that could be your tagline for five years ten years but the slogan you might change that every three months every six months
why because for a given campaign your message is going to focus on a limited number of things so maybe in one advertising campaign your slogan focuses on the fact that your food is very nutritious but then in another campaign you're going to focus on the fact that your food is healthy or low priced rather so first you focus on nutrition and you said that your food has a high nutritional value and then once perception occurs once people process that message then you're gonna teach them something else because in advertising there's got to be perception learning and then memory so you gotta process the messaging and then learn the message and then importantly they have to remember it so you can expect the theme for an advertising campaign to change on a regular basis but not the tagline once you have something that's very compelling that embodies your positioning you want to stay with that because you're not going to change your positioning sure remember we said you can reposition your brand but you can't do that on a regular basis that's not something that you do on a regular basis sure you could try and change perceptions but you can't keep repositioning yourself from a luxury brand to an economy brand to a luxury brand there's got to be a crystal clear perception of our brand in the marketplace
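The awareness funnel the professor walks through in this lecture (unaided brand recall, top-of-mind awareness, then the consideration set and purchase intent) can be sketched as a simple survey tally. This is only an illustrative sketch: the respondent data, brand names, and the `brand_metrics` function are hypothetical, not part of the lecture or any real study.

```python
# Hypothetical survey: each respondent answers the unaided question
# "name some soda brands" (first brand named = top of mind) and then
# lists the brands they would seriously consider buying.
responses = [
    {"recalled": ["coke", "pepsi", "sprite"], "consider": ["coke"]},
    {"recalled": ["pepsi", "coke"], "consider": ["pepsi", "coke"]},
    {"recalled": ["dr pepper", "coke", "7up"], "consider": ["dr pepper"]},
    {"recalled": ["coke", "ginger ale"], "consider": []},  # aware, no purchase intent
]

def brand_metrics(responses, brand):
    """Share of respondents who recall, name first, and consider a brand."""
    n = len(responses)
    recall = sum(brand in r["recalled"] for r in responses) / n        # unaided recall
    top_of_mind = sum(r["recalled"][:1] == [brand] for r in responses) / n
    consider = sum(brand in r["consider"] for r in responses) / n      # consideration rate
    return {"recall": recall, "top_of_mind": top_of_mind, "consider": consider}

print(brand_metrics(responses, "coke"))
```

The gap between the recall number and the consideration number is exactly the professor's point: everyone here recalls "coke", but only half would seriously consider buying it, which is why awareness research without purchase-intent questions is incomplete.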
Marketing_Basics_Prof_Myles_Bassell | Marketing_5.txt | [Music] all right so here we go so welcome to the world of marketing welcome back what we're going to do right now is just take a couple of minutes to review some of the key aspects of the syllabus everybody should have a copy of the syllabus with them so take that out all right so on the first page of the syllabus you have my contact information you already realize that I know how to use email by now you've probably gotten quite a few emails from me is there anybody here who hasn't gotten an email from me so far this semester is there anybody here who wishes they hadn't gotten email from me so far this semester all right so I send emails to help keep us on track to remind you about key due dates but everything is in the syllabus so the syllabus that you have before you this ten page syllabus is our roadmap this is our roadmap for success so you already have the textbook we're using the 10th edition not the 12th edition because you don't have the 12th edition right wrong I have the 12th edition and if I want to see what the 12th edition looks like here party people this is the 12th edition but the reason we're using the 10th edition is so that you can save $200 turn to your neighbor say $200 $200 so we're using the 10th edition so you could save $200 normally this book is $225 you should have been able to get the book for less than $20 most students have told me that they pay less for the postage than they do for the book but now that we're in week 6 hopefully everybody has the textbook so that's the textbook
that we're using I'll let you know more about the simulation that we're gonna do the New Shoes simulation so you're gonna have the opportunity to manage a business that's something that you could put on your resume because you're going to be managing this virtual business and you're gonna every week make decisions about how much money to spend on advertising how much money to spend on promotions how many sales force people to hire for example how much to allow for coupons and how much to invest in product development and after you enter your decisions what the simulation is going to do is tell you your level of sales the number of units that you sold the dollar amount of sales your gross margin your net income your market share so it's a very realistic simulation I've used this before it's gonna give you hands-on practical experience in marketing so your inputs your decisions about how much to spend on advertising how much to spend on promotions how much to spend on a sales force organization are gonna have an impact on the level of sales so everybody's not going to get the same level of sales don't think that whatever you put in there your sales are going to be the same as everyone else's you're not gonna have the same level of sales you're not gonna have the same number of units sold you're not gonna have the same gross margin you're not gonna have the same level of net income and your market share is not going to be the same so this is a terrific and very sophisticated simulation that's gonna interpret your decisions it's called New Shoes it's on the front of our syllabus right underneath the textbook I'm gonna have to send you our course code before you can access it it's unique to our class because I'm gonna be able to track your results so I'll be able to log in you won't be able to see the decisions of other people on our team which by the way we have a hundred and twenty-five students on our team a hundred and twenty-five so you
won't be able to see their decisions and their performance but I will so you're part of my executive team I'm the president of the company and you guys are the vice presidents so I need to decide by the way the prior management team was fired yep they were fired so now I hired this team to manage our shoe business and what I'm looking at in terms of performance is who on our team is gonna have the best performance who's gonna have the highest level of sales who's gonna have the most profitable business because I need to pick somebody who's gonna succeed me so I want to retire hello I want to retire so I need to know that there's somebody on our team that's gonna be able to replace me when I retire so I'm looking very carefully at the performance and there's a really easy-to-use interface that I'm able to access that lets me see your performance at a glance for every student on our team so that's gonna be really I think meaningful but also I think it's gonna be fun because there's gonna be a little bit of trial and error so your decisions are gonna carry over from week to week so if you didn't spend enough money last week on advertising then this week you're gonna have a chance to increase the amount of money you spend on advertising so it remembers your prior decisions but right now you can't access it because again I have to give you our course code our unique course code for our team so you know that we're having an exam on the 13th and I'm expecting that everybody here is going to get a hundred raise your hand if you want to get a hundred you can do it yes you can you can get a hundred if you study and I believe that you could do it all right on page three on page three is a list of our assignments so basically going forward there's an assignment due every Tuesday all right so on page three in our semester at a glance is the entire semester so before you
even registered for the class I already had a plan for your success because your success is my number one priority right so there's a plan every Tuesday we have an assignment and every week we are doing a case study analysis that's going to help you remember the key concepts in the course you may not remember all the definitions and all the key terms at the end of the semester or a year from now or two years from now but I really do believe that you remember the BMW case and the Prince case and what's important is to develop your critical thinking skills so one of the learning goals for every course in the School of Business is to develop the students critical thinking skills so that whatever problem you face in business you'll be able to solve because you have strong critical thinking skills and strong written communication skills another important learning goal is to build the ethical awareness the learning goal for every school of business course is for students to enhance their ethical awareness their ethical reasoning so that's the focus of some of our research that we're doing this semester is about ethics ethics and the consequences of unethical behavior we already completed a questionnaire about the financial crisis of 2008 is that right and why is that important because that's a great example of the consequences of unethical behavior so you might say coach this is about a finance course well you're right that's insightful it's our course is a course in marketing keep in mind that very often the metrics that are used the metrics that are used to measure the performance of marketing executives are financial how do we know if a marketing executive is doing a good job when we look at the dollar sales that they were able to generate the number of units they're able to sell the gross margin for their strategic business unit the net income for their strategic business unit all those are financial metrics now what we found is that unethical and illegal behavior on 
the part of executives resulted in the global financial crisis so ethics I encouraged you last time not to think of ethics as something that is academic theoretical philosophical ethics is not something that's philosophical unethical behavior has real consequences a global financial crisis why why did we experience a global financial crisis Roka tell us why did the banks collapse they were loaning a lot of money and they were taking a lot of risk a lack of regulation there was a lack of regulation in many cases so there was no one checking what they were doing I really think it's very likely that she might get an A in this class Nerissa very likely unethical behavior has consequences so I know we always hear about ethics we talk about ethics in every class right but it's important please don't think of it as something that's philosophical sure there's philosophical aspects of ethics but there's real consequences millions of people in the u.s. lost their homes because of unethical behavior on the part of executives the economy the entire US economy was at the brink of failure because of unethical and illegal behavior on the part of executives so the consequences are real all right on the bottom of page 4 and this was in the email that I sent you at the bottom of page 4 you could see the requirements when we're taking the exam you have to bring photo ID bring several number-two pencils with erasers I don't want you to doubt yourself but it's it's a good idea to bring an eraser you might change your mind and it's gonna be four different versions of the exam pink blue green yellow you know why I have to do that right wrongdoing on the part of students right so I have to have four different versions of the exam you can't use any electronics during the exam so you have to turn your phone completely off no texting during the exam I know that's a bizarre requirement no texting during the exams you can't leave the room during the exam you can't talk during
the exam I know I'm pushing my luck here right and you can't use any notes or books during the exam all right so we're in good shape |
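The weekly report the simulation produces (units sold, dollar sales, gross margin, net income, market share) follows standard accounting definitions. Here is a minimal sketch of how those metrics relate, with made-up numbers; this is an illustration of the formulas, not the actual New Shoes model, whose internals aren't shown in the lecture.

```python
# Sketch of the weekly metrics the simulation reports, using standard
# definitions. All figures below are hypothetical, not New Shoes output.

def simulation_report(units_sold, price, unit_cost,
                      advertising, promotions, sales_force,
                      total_market_units):
    """Return the performance metrics described in the lecture."""
    dollar_sales = units_sold * price                        # revenue
    gross_margin = dollar_sales - units_sold * unit_cost     # after cost of goods
    net_income = gross_margin - (advertising + promotions + sales_force)
    market_share = units_sold / total_market_units           # unit share
    return {
        "units_sold": units_sold,
        "dollar_sales": dollar_sales,
        "gross_margin": gross_margin,
        "net_income": net_income,
        "market_share": market_share,
    }

report = simulation_report(units_sold=10_000, price=80, unit_cost=45,
                           advertising=60_000, promotions=40_000,
                           sales_force=100_000, total_market_units=200_000)
print(report["net_income"])    # 150000
print(report["market_share"])  # 0.05
```

The point the lecture makes falls out directly: two students with different advertising, promotion, and sales-force decisions feed different inputs into this kind of calculation, so their gross margin, net income, and market share will differ.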
Marketing_Basics_Prof_Myles_Bassell | Marketing_Basics_13_of_20_Professor_Myles_Bassell.txt | one defining the size of the market and targeting which is ultimately us selecting markets that we're going to penetrate so market sizing matters in part because we're gonna look at the size of the market in terms of the number of units sold the number of people in that segment the dollar size of that market the growth rate for example so in looking at those different criteria we then select markets that we're gonna focus on because most of the time it's not feasible for a company to focus on all the markets at the same time so let's say for example we decided that we're gonna start a clothing company well we could start a clothing company and we might say well what are some of the things that we would sell what are some of the things that we could throw around shirts pants sweaters cardigans see all these different items clothing for men clothing for women shirts long-sleeve shirts short-sleeve shirts solid shirts white shirts blue shirts green shirts yellow shirts and non-solid shirts so there's a decision that we have there that leads to deciding how to start our clothing company all of those items that we identify are categories maybe what we're going to do is decide to focus on jeans for example and we're going to use as our criteria the items that we mentioned the size of the market the growth rate the number of people that buy jeans the dollar volume for that particular product so that is why if we started category-wide it wouldn't be feasible for us when we start a company and decide what we're going to introduce we have to somehow decide on which segments to focus and that's why market sizing is so important it's because that's one of the main criteria that we're going to use to help us decide and when we do decide that's called targeting targeting is selecting particular segments and importantly when we pick the product that
we're going to introduce we need to understand positioning so we need to understand segmentation targeting and positioning positioning is a very important concept positioning is the space that we occupy in the customer's mind so the positioning is the space that we occupy in the customer's mind now in terms of our positioning we might position ourselves head-to-head with other competitors or we might try to differentiate ourselves so remember the first thing we talked about points of parity and points of difference we need to decide in terms of positioning we need to decide how we're going to position ourselves in the market is it going to be head-to-head with the competition so we're going to focus a lot on the points of parity or is it going to be differentiation are we going to focus more on the points of difference relative to the competition and a way that we could visualize that is known as a perceptual map a perceptual map is a graphic representation of our positioning relative to the competition so importantly importantly when we construct a perceptual map what we're going to show here is where we are on the map relative to our competitors so remember we had talked about market share and we said well it's very impressive if we found out that we sold 500,000 gallons of orange juice but by itself that's not as significant as if we knew what percentage of the total number of gallons of orange juice sold was sold by our company so if the total market was 1 million gallons of orange juice and we sold 500,000 that means we sold 50% of the orange juice in the market now that's very compelling but what about if the market was not a million gallons but let's say the market was 10 million gallons then what percentage of the market did you sell right so now we're not selling 50% of the market now the amount of orange juice sold that carries our brand is only 5% of the market that's very different in terms of our performance and
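The market-share arithmetic in the orange juice example is a simple ratio; a quick sketch using the lecture's own numbers:

```python
def market_share(our_units, total_market_units):
    """Units we sold as a fraction of all units sold in the market."""
    return our_units / total_market_units

# 500,000 gallons out of a 1-million-gallon market: 50% share
print(market_share(500_000, 1_000_000))   # 0.5
# the same 500,000 gallons in a 10-million-gallon market: only 5%
print(market_share(500_000, 10_000_000))  # 0.05
```

Same sales volume, very different performance once the denominator changes, which is exactly the lecture's point.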
evaluating our portfolio of products and brands so the reason why I bring us back there is because when we look at positioning certainly we want to understand where we're positioned based on certain dimensions but even more importantly we want to understand have a weak position relative to our competitors and keep in mind that because we position ourselves so again when we're preparing the perceptual map it's not just interesting I'd be interesting to Jennifer it might be interesting to Jessi it might be interested to Steven but importantly what we're going to do is use that information to determine the marketing strategies and tactics for the organization so we can reposition ourselves what we're trying to understand is how the customers perceive our brand in the market place we're trying to understand how they view us as a brand relative to the competition once we know that then we could decide if we want to change their perception so they might have a perception that our product is a low-quality how do we change that we could cut out samples and say yep tried this out he won't die to drink this don't believe what the competitors say well it could be true it tastes a store like I'm saying give samples yes you can have testimonials maybe celebrity endorsements to help change the perception that our product is a low quality so if we have that absolutely so we can advertise so we can advertise and communicate with the target audience what is about a product that is unique what are the features and benefits of our product or service and maybe we're going to use a celebrity endorsement as part of that commercial so maybe a good commercial we're gonna have a celebrity speak in the commercial and saying that basically that they use the product and that's why they have a full head of hair now they drink orange juice and they're on Olympic gold medal winner what is it that people believe as it relates to an Jordans what is it really when you think about it what sort of 
the brand promise whether it's a subliminal message or not whether it's conscious or unconscious what people buy for example and Jordan sneakers what are the expectations that if you buy if you buy these speakers and wear them that Joker's are being able to I think maybe what you said is probably more realistic not jump as high as Michael Jordan but jump higher that these sneakers are going against your performance on the court so using a celebrity to endorse a product can be very compelling so we can change our positioning we have to identify we talked about this briefly before the competitors and why why do we care about identifying the directing and direct competitors why is that significant for us why is it significant to understand for example the rationale for the Got Milk campaign so the Got Milk campaign was a campaign that promoted milk they may talk about any particular brands what they were trying to do is create category need create primary demand so it was dairy farmers that got together and pull in resources to pay for the development and the airing of those commercials to get people to drink more milk why did they do that how did they come upon that idea to say you're not my diamond competitor or even if you are my direct competitor because you're also a dairy farmer and you're a dairy farmer and you're a dairy farmer and chantal is a dairy farmer they identified indirect competitors they said if people aren't drinking milk what are they thinking our gyms or their drinking water chantal right they're drinking water more than drinking iced tea or than drinking coffee there's other beverages that they're drinking so they said it would be smart for us strategically to focus on increasing the usage rate from Alaska I totally disagree to increase the usage rate to get people to drink warm milk then research showed that there are certain reasons why people are drinking orange juice and vice versa so for the orange juice companies they feel that their 
direct competitors are let's say for example Tropicana you know that Minute Maid is a direct kind of burn what else right simply orange but they also understand that they have some indirect competitors so does that make sense if you're an orangey juice company if you're Tropicana and you're thinking about who is your competition to remember with Tropicana you said who's that competition right are you Mike midnight simply garbage like all these arms juice cramps but importantly they succeeded are aware of some indirect competitors and when we say that a brand is an indirect competitor that's not to forget about them to scratch them off our radar screen that's to help us focus on those companies because they said we realize that milk is a competitor maybe not a direct competitor or mainly you might strategically say you know what milk is the direct competitor but that's a strategic decision that needs to be made I remember I think we agree that milk is an entire competitor at a minimum I think I read competitor and so what did the orange juice companies do with that remember not just interesting but actionable what did they do so they started promoting orange juice that has calcium vitamin A vitamin D what that idea from who sends up why website what does that have to do it well milk is best known best for its health qualities like calcium beginning 9000 yeah absolutely they see it as a substitute and so they're trying to give air like that a lot of people do drink milk and so to increase the usage rate of orange juice they are trying to promote their product as having some of the same benefits as milk so you see both the orange juice companies as the milk companies are looking at the category from the same perspective how interesting right both of them are trying to increase the usage rate but their approach is a little bit different our industries companies are definitely trying to increase the use of great memories that usage rate is a behavioral type of 
segmentation product benefit is a behavioral segment type of segmentation so they're trying to make peace to use a drink and the way that they're doing that is like you said they recognize that milk is a substitute in some cases for orange juice so they're gonna promote their product is having some of the same benefits vitamin A vitamin D calcium and milk right which is closely related to Oreos right because Oreos tagline is milks favorite cookies but we did a great show right yeah also I'm trying to increase their usage rate and focusing on the core benefits of their product which is kind of unusual there's most of the time we don't focus on creating category need or another term that we use another chart category need and primary demand is the same thing we focus on it's called selective demand which means that we're going to focus on a particular brand so primary the man who are advertising means that we're focusing on creating demand for an entire category like milk or what about I have you seen the commercials for beef it's what's for dinner pork the other white meat all these are campaigns that are really in the minority in terms of advertising because animal kisses on primary demand trying to create more demand and increase the usage of the poor milk most of the time you focus on selective demand to get customers to buy a particular brand whether it's Minute Maid or traffic kana or simply orange so in terms of the competitive set we need to know who are our direct and who are our indirect competitors what about let's take another example before we go on and we're going to look at a perceptual map we're going to create a perceptual map together fun times right all right so who do you think nothing about this this is strategic let's think about who farms direct and indirect competitors for McDonald's at any time all right so we want to think about first for the direct competitors and then for the indirect competitors from economists what do you think Brandon 
Guyer cool so I wanted them to erect competitors Burger King Wendy's so you super Brandon is Dominus somebody else so we have for McDonald's we're saying that indirect competitors or Burger King Wendy's what else thank you said my castle okay and White Castle now who can tell me what these four best food restaurants have in common yeah they sell hamburgers and they're some type of meat we're not really a hundred percent sure all right so let's say so this is a strategic point of view he was saying he's a fast fast food burger places looks like that's what we come up with here and then another way Stephen is saying that well these are fast food restaurants but that maybe we need to look at let's see like yeah yeah to go or another way right yes well that's something we need just to decide strategically I think definitely these are fast-food and then we would also have those that are not like you're saying like Applebee's right they sell burgers and gif right they sell burgers and and other places but that's something we need to decide I'll be focusing on just fast-food restaurants or do we need to say you know something they sell a burger that's $9.99 is that something that's of concern to us that's something that's a strategic decision that executive team needs to make so if we stay within its fast-food right if we're saying that this is where where the line is being drawn then if these are the direct competitors and Subway might be an indirect competitor because they're a fast-food restaurant and I heard somebody say dominoes dominoes and what else Taco Bell KFC so Walter we have one two three four these eight companies are all fast food restaurants but we're making this continued decision to classify these as direct competitors because those are that product is burgers and these are also fast-food restaurants but the item and we're going to talk about I notice the product line and product mixes their items are different they sell chicken they sell tacos they sell 
pizza so should we be concerned about KFC yes exactly so these are substitutes so if you want fast food and you're not going to get a burger then you might get chicken or you might get pizza so you see why I think this is a good example we're not saying okay we're just gonna forget about the indirect competitors right we're not going to forget about them identifying indirect competitors is for us to keep them on the radar screen for us to understand strategically who are our competitors but it's important to say some are direct and some are indirect so absolutely if I'm McDonald's I'm very concerned about KFC because I know if they don't get a burger then they might get chicken or they might get pizza so we're talking to the same customer basically at the same price point so the same value proposition fast food at a low price and heart disease basically right now they are all going for the same value proposition what happens if you look at the way some of these companies are organized you'll see that what we're saying has been implemented in the marketplace so for example KFC Pizza Hut and Taco Bell are owned by the same company does anybody know the name of the company when I tell you you're gonna think he just makes this stuff up the name of the company is Yum I'm not kidding Yum you gotta believe me they don't they don't believe me really the name of the company is Yum and these companies used to be a part of Pepsi they spun them off and formed this company Yum and what's interesting is that these restaurants are co-located so if you go for example to Kings Plaza you come in from the parking lot there and they have a KFC by Taco Bell and Pizza Hut in the same store so doesn't that support what she was saying yes doesn't that support what we were saying that we recognize that those are substitutes for each other so one store even sells all three all right because if you come in here you're
beaten gonna buy the chicken the pizza or the tacos and at the end of the day it doesn't really matter to us because that money is going all back to the corporate organization to the young company you'll see you'll you'll see check later so that that shows that they understand that yes these items are substitutes for each other now when we use a term product and we're not going to do a perceptual map before we go but when we use a term product a product includes both of good and a service now you might from your own personal experience or purchasing habits you might think of products that way but we have to be familiar with this terminology in marketing when we talk about products we're talking about goods and services so a good is something that's tangible and a service is something that's intangible but whether it's a tangible product or intangible product it's still a product so both goods and services are products now those products could either be consumer products or business products so again productivity goods or services good is something that's tangible service is a tangible and those products could be business records or consumer products and there's some classifications that we use when we talk about these different types of products and let me say this that the reason why it's important to classify these products this way so for consumers convenience products shopping specialty I'm sort you like why did that matter I just I went on board TV I bought an ipad I bought orange juice why does it matter come on coach you make it too difficult it's a product why do I need to know if it's convenient or shopping or specialty I'm sore the reason is who knows besides me please who knows the question is why do we need to understand the classification of different consumer products why doesn't it just enough to say it comes Edward Edward yes everybody's here everybody yes anybody want to answer that so once we classify the product as an organization that absolutely 
so once we understand that the product is a convenience product so we need to understand the consumer behavior we need to understand if it's a convenience product a shopping proper a specialty product or an uncertain because that's going to determine our marketing strategy our marketing tactics that's been influenced our marketing mix so again is not just like interesting although our product is a convenience product who cares no as an organization as marketers we care so what's a convenience product to facing the good example yes so fast food is that convenience product orange juice these are purchases that are made frequently so they have treatment purchases that are usually inexpensive and then what we would describe as low involvement purchases so there's not a lot of research that's done it's pretty much a routine purchase it's inexpensive you purchase the product frequently so we go into the store we buy a half a gallon of Tropicana orange juice and we might do that twice a week or we go over to the store and every morning we by Red Bull Red Bull you drink that you know bring your ancestors back from the dead those are examples of convenience products any questions about that if I see why that is why you would designate those items as convenience products so they're purchased frequently they're generally inexpensive they're blue jean purchases low involvement we don't do an extensive research those are convenience products now that's different from a shopping product a shopping product is more expensive a shopping product is more expensive that's more appropriate so for example a shopping product is one that we consider a variety of alternatives like for example a TV suppose you are going to buy a TV which was something you have TVs that are plasma you have TVs that or LCD you have TVs that are LED alternatives you have TVs that are smart TVs you have smart TVs that have access to a web browser and you have SWAT TVs that have access only to web applications 
so you see all smart TVs are not the same on some smart TVs you can navigate on the screen and click on the icon for YouTube or Netflix but on other smart TVs you actually have a web browser that comes up and you could type in our course website or you could type in the Brooklyn College website or you could type in the website for Macy's alternatives so a shopping product is one that's more expensive so if you want to buy a high-definition LED 1080p smart television at Best Buy it's roughly about a thousand dollars but what about if tomorrow you want one that's not a smart one that instead of being LED is LCD or plasma if you want to buy a plasma television that's not smart you could buy those for like $400 and then you have different screen sizes you can buy ones that are 32 inches ones that are 40 inches 42 inches 46 inches 50 inches 55 inches 60 inches so somebody could literally go to Best Buy on Bay Parkway right and stay there like four hours trying to figure out what TV to buy so that's an example right that's an example of a shopping good we get to spend a lot of time it's a fairly expensive good we're talking about spending something like around a thousand dollars very different than buying a gallon of orange juice that's an example of a shopping good does that make sense I mean could you see yourself going in there Chantal and be there for like four or five hours looking at the different brands Sony Samsung Sharp plasma LCD LED smart TVs non-smart TVs 27-inch 40-inch 22-inch so that's more of a high involvement purchase we have all these alternatives now any questions about that now that's different from a specialty good a specialty good is usually something that's quite expensive and it could be for example let's say a Rolls-Royce it could be for example an expensive watch let's say see that's like for example my unique selling proposition as a teacher is that the students that take my classes the expectation is that in their
career they're going to be successful and be able to own a bonus those who are okay mechanics they take the other guy but those who want Rolex they take the classic coach right so expected watch or expensive car is an example of a specialty good folks yeah I mean this is my ultra luxury because you're thinking like maybe a a Ferragamo bag right so this is a nice bag but that's not in the category I'm going to consider that in the category of specialty because these are usually very expensive and importantly so we're talking about pricier prices one of the marketing mix elements but in terms of distribution this has inconvenience goods have a very high level of distribution shopping books also have a very high level of distribution like with the TVs you can go to Best Buy you can go to Walmart you go to Target you can buy $1,000 $2,000 TVs add a lot of source is that right no you're not sure I mean TVs you could find it pretty much in a lot of different places in a lot of different stores convenience goods right in Dunkin Donuts 7-eleven Starbucks everywhere we could go in and buy a soda or a bottle so those have high levels of distribution specialty products which are usually very expensive have a limited amount of distribution so that means there's not a lot of stores that sell Rolex orange juice for some people great boost is a convenience product which you could also buy them like every corner because there's literally a liquor store in every corner but not third rolls-royce yeah was there some hope for dealerships in New York there's certainly places in in the metro area where you could buy Rolex and other expensive watches like whitening for example for me yeah something this place but there's not a lot of places you don't have that high level of distribution that you have with shopping and convenience products questions so I would say some of those luxury cars have been thinking of like if it isn't about let's say like green and things like that I retain 
those are more shopping and specialty although I mean you know those products are quite expensive but not in the same price category as when we think of specialty goods that have limited distribution any questions so far yeah well we're going to use a different classification for services all right so we're going to come back to that but there's one last classification for consumer goods which is unsought so with unsought the prices vary so an unsought good could be expensive or not so expensive but what makes it different from the other goods is that it's something that we're not aware of or we're not looking for we're not actively looking for so for many of those other products there's a lot of research you're looking for a bottle of water might be right down the hall but you're still looking for it you're still looking for a TV you're still looking for a watch but an unsought good is one that you're not looking for like for example what what might be something that you're not looking for a burial plot right a burial plot it's not something that tell me you might even be unaware and some of you are just like well I'm like I'm just not looking for that maybe for example let's say you know some types of insurance but let's say for example maybe life insurance life insurance you may not be looking for but car insurance you are so we can't just like classify all of insurance as unsought I think a lot of people drive cars so a lot of people I would say are looking for insurance maybe not everybody here has a car but shopping for car insurance is certainly more common than people in your age group looking for life insurance so like the church you might say the burial plot is unsought yeah oh yeah I'm not saying that it's not I'm just saying that I'm just saying that it depends right so it might be something that that is unsought for some individuals so there's still a market for that
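The four consumer-good classes discussed above each come with rough tendencies for price, involvement, and distribution, and those tendencies are what drive the marketing mix. The lookup below is an illustrative sketch of the lecture's generalizations (the class names and examples come from the lecture; the table layout itself is just for illustration, not a formal taxonomy).

```python
# Illustrative mapping of the four consumer-good classes from the lecture
# to the marketing-mix tendencies tied to each.
CONSUMER_GOOD_CLASSES = {
    "convenience": {"price": "low", "involvement": "low",
                    "distribution": "very wide", "example": "orange juice"},
    "shopping":    {"price": "higher", "involvement": "high",
                    "distribution": "wide", "example": "smart TV"},
    "specialty":   {"price": "very high", "involvement": "high",
                    "distribution": "limited", "example": "Rolex"},
    "unsought":    {"price": "varies", "involvement": "not actively sought",
                    "distribution": "varies", "example": "burial plot"},
}

def distribution_strategy(product_class):
    """Look up the distribution breadth implied by a product's class."""
    return CONSUMER_GOOD_CLASSES[product_class]["distribution"]

print(distribution_strategy("specialty"))    # limited
print(distribution_strategy("convenience"))  # very wide
```

This captures the point that classification is actionable, not just interesting: classify the good first, and the distribution (and the rest of the mix) follows from that classification.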
there's still markets for those products and actually life insurance as we're going to describe later is a service but what do you think do you agree in terms of unsought goods specialty goods could be unsought there's differences in the types of products and specifically now we're talking about goods that we buy and we need to understand as marketers the consumer behavior we need to understand the behavior of consumers because if we're selling our product if our good is unsought then we need to figure out how through our marketing strategies and tactics we're going to get people to buy that product and if our product is a specialty good that's quite expensive but has limited distribution our challenge is how do we get people to buy our product and you can see how the challenges for us vary so for a specialty good we have a challenge because we have limited distribution but for a shopping good there's a greater amount of distribution and for a convenience good it's pretty much everywhere so do you think it's a lot easier as marketers to sell convenience goods that are very inexpensive and on every corner so our strategy and approach our tactics our marketing mix is going to be impacted by how we classify the products and in this case we're specifically talking about goods so we're talking about consumer goods now in terms of products we turn to the item the product line and the product mix so as a marketing organization we're gonna sell an item an example of an item would be a 16 gigabyte iPad that's an example of an item one item one SKU do you know what SKU is yes SKU stands for stock keeping unit so every item has its own stock keeping unit number so our example would be a 16 gigabyte iPad that's right an example of a product line would be a group of items so a product line a product line is a pool of items so we said an example of an item is a 16 gigabyte iPad that's an item one item one stock keeping unit a product
line is a group of items. So a product line in this case would be a 16 gigabyte iPad, a 32 gigabyte iPad and a 64 gigabyte iPad. That would be a product line, which is a group of items. The 16 gigabyte iPad is an item, the 32 gigabyte iPad is an item, the 64 gigabyte iPad is an item; when we put those items together, that's what makes up our product line. Questions about that? So we see what the items are: the 16 gigabyte iPad is an item, the 32 gigabyte iPad is an item, and when we group them together, that's our product line. This is the Apple iPad product line: we have a 16 gigabyte, a 32 gigabyte and a 64 gigabyte. Now, the product mix is the company's product lines grouped together. All right, so follow me: the item, as we said, is identified by a single stock keeping unit; the product line is a group of items; and the product mix is a group of product lines. Everybody got it? The product mix would be the iPad, the iPhone and the iPod. For the iPad we have the 16 gigabyte, 32 gigabyte and 64 gigabyte; for the iPhone we have the 4 and the 4S; for the iPod we have the Nano, the Shuffle and the different models. And if a model is discontinued? Well, then the product line is going to change: at that point it's not in the product line, but we could replace items in our product line. As part of the new product development process, identifying opportunities, creating concepts, manufacturing products, launching those products and tracking their performance, we're going to continue to innovate and introduce new items, and those new items we're going to group together and classify as a product line, and then all the product lines in the company we put together and call the product mix. So an organization could have multiple product lines: we could sell phones, we could sell tablets, we could sell mp3
players. Is this a good example? Do you want to take another example? What about Sony; would Sony be a good one? Give us an example of an item that Sony sells. PlayStation 3? All right. Or let's say we look at the TVs: an LCD TV that's 40 inches, that's one item. Their product line could be LED TVs, LCD TVs, plasma TVs. This is just one item, but in their product line they have a lot of different items, and in their product mix they have TVs and cameras. What else? Computers. Games. So this is another example: we have an item, we take all the items and put them together and we have a product line, then we group together all our product lines and that's our product mix. The reason we go through the trouble and take the effort to classify them, which is what we're doing here, classifying our products, is that this is what we're going to use, in large part, as the basis for our marketing plan. That's what's going to define our strategies and our tactics, that's what's going to influence our marketing mix, because if we just have one product line, that's going to be a different type of marketing plan than if we're responsible for advertising TVs, cameras, computers and gaming consoles. That's why we need to understand this: what are our product offerings, what are our product commitments, so that we could put together an effective advertising campaign, for example, or distribution strategy or pricing strategy. See, when you just look at this as an item, it's out of context. We might be asking, well, how much should we charge for this? But if we look at the product line, then it might make more sense to have a strategy like this, which is a good-better-best pricing strategy. So you see, just looking at an item didn't really give us the chance to develop a strategy for the organization, but when we look at it in the
context of a product line, then we see there's an opportunity for us to effectively manage the marketing mix. Now, this is very compelling, isn't it, to have TVs at $400 and $700 and $900, and of course I'm sure they have ones at 500, 600 and 800 as well. This is what we would refer to as a good-better-best pricing strategy, and the idea is that customers pay more and get more: at a higher price you'll have a product that has more features and benefits. Now remember, we looked at the brand hierarchy and we said there's a corporate brand, a master brand, and very often companies use sub-brands as part of their branding strategy. That's also something that can come out of this analysis. So, for example, the name used here is PlayStation, right? And what about for computers? The Vaio, yeah. These are examples of sub-brands, because at the product line level we have a master brand, which is Sony. The corporate brand is Sony Corporation, or however the organization is structured, and by the way, companies can change their names over time, but the one part that stays is Sony, because there's so much brand equity in the Sony name. Importantly, their master brand is Sony, without the "Corporation," just Sony; that's the master brand you see on all their products. But they do use sub-brands: we identified the Sony Vaio as a laptop the company sells, and if you want to buy a Sony Vaio that has a third-generation i7 Intel processor, one terabyte of hard drive space and 12 gigabytes of RAM, that's going to cost you quite a bit. That's on their S product line; on another product line they actually offer 16 gigabytes of RAM but a much slower processor. So think about the marketplace: we need to identify who our direct competitors are
and our indirect competitors, and some may argue that a given company is not a direct competitor, or maybe they'd say that it is, but for different reasons. Questions? Now, business products. Business products fall into two classifications: components and support. A component is something that goes into the final product. It could be some type of raw material, which would be a component; it's part of the manufacturing process. So the Intel chip is a component, and the reason we describe that as business-to-business rather than business-to-consumer is this: up until now we were just talking about Sony selling to consumers, Tropicana selling to consumers, Apple selling to consumers. That was business to consumer; now we're going to switch gears and talk a little bit about business to business, businesses selling to other businesses. A component is something that one business sells to another. Intel is a great example of that: Intel sells their chips, their processors, to Sony, to Toshiba, to other companies, and those are incorporated into their product, into their computer, into their laptop. That's a component part. Another type of business product is what we refer to as a support product. What would be an example of a support product? Geek Squad? But is Geek Squad providing a service to consumers or to businesses? It could be both. That's an interesting situation, because they might actually be providing a service to both, but let's focus on products for now. Maybe a keyboard? A keyboard, yes, absolutely, that could be an example of a component. You're suggesting it as a support product? Yeah, as a support product: a mouse or a keyboard for a computer might be something that one business sells to another after the computer has already been purchased. We're not talking about manufacturing the computers anymore; now we're talking about office supplies. You need to buy a mouse for your computer, you need to buy a keyboard,
you need to buy paper clips. Now you might think, come on, you can't make any money selling paper clips. Well, what about if you sell paper clips to an organization like Brooklyn College? There are different types of organizations: there are businesses, there are institutions, there's government. What about if you wanted to sell paper clips to the State of New York? When we use the term business to business, it's not just corporations or what we normally think of as businesses; it could be a variety of organizations, as I mentioned. It could be a business as we normally think of it, an office, or it could be the government, or it could be institutions like hospitals, for example. Don't hospitals buy paper clips? Maybe that's not the first thing we think of, but what about Kings County? They have their offices, and they need to buy paper clips, and they need to buy bleach, they need to buy cleaning products for the floors and the windows, they need to buy mops and brooms. Well, those are what we describe in business to business as support products. The mop or the paper clip is not part of the product; it doesn't go into the final product. And let's hope that if you ever go to Kings County, you don't end up with a paper clip inside you, because that would be above and beyond what you would have expected |
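Looking back over this lecture, the item / product line / product mix hierarchy it built up can be sketched as a small data model. The class names and SKU strings below are illustrative assumptions, not real Apple SKU numbers:

```python
# Sketch of the lecture's hierarchy: an item has one SKU, a product line
# is a group of items, and a product mix is a group of product lines.
from dataclasses import dataclass, field

@dataclass
class Item:
    sku: str          # stock keeping unit: one number per item
    name: str

@dataclass
class ProductLine:
    name: str
    items: list = field(default_factory=list)   # group of items

@dataclass
class ProductMix:
    company: str
    lines: list = field(default_factory=list)   # group of product lines

ipad = ProductLine("iPad", [
    Item("IPAD-16", "16 GB iPad"),
    Item("IPAD-32", "32 GB iPad"),
    Item("IPAD-64", "64 GB iPad"),
])
iphone = ProductLine("iPhone", [Item("IPH-4", "iPhone 4"),
                                Item("IPH-4S", "iPhone 4S")])
apple_mix = ProductMix("Apple", [ipad, iphone])

# each level aggregates the one below it
assert len(ipad.items) == 3
assert len(apple_mix.lines) == 2
```

A good-better-best pricing decision, as in the Sony TV example, is then naturally made at the product-line level, across the items, rather than item by item.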
Marketing_Basics_Prof_Myles_Bassell | 1_of_20_Marketing_Basics_Myles_Bassell.txt | All right, so we're going to talk about marketing. Are you guys ready? Marketing. All right, so today what we're going to do is talk about what marketing is, and we're going to talk about some business strategies and some different growth strategies, for example market penetration, market development, diversification and new product development. We'll talk about that later on in the class. But first I want to talk, and get your input, as to what marketing is, because that's what we're going to be talking about on an ongoing basis. And what I want to share with you is something that's going to enable us to get our arms around the idea of marketing, which we refer to as the marketing mix. The marketing mix consists of the four Ps. So if somebody asks, well, what is marketing? It's about the four Ps. Although that sounds simplistic, as a way to describe marketing it's really rather complex, but it's a good place for us to start, because I think it's something that enables us to understand the scope of what we're going to be talking about. The marketing mix is those factors that we can control, and the four Ps include price, product, place and promotion. Promotion also includes advertising. Advertising doesn't start with a P, but in the industry it's normal, when we think about advertising, to see it as part of promotion: sales promotion, trade promotion, consumer promotions, all of that, plus advertising. So the four Ps, that's an important buzzword, if you will, in terms of marketing. That's really what marketing is all about: how we as executives and business people manage the four Ps, because remember, the marketing mix is those things that we can control. We could control the price. We determine the price, not the invisible hand; business executives, managers, we determine the price at which we're going to sell our product
or service. We determine the features and benefits of the product, we determine the messaging for our advertising campaign and how much we're going to spend on advertising, and we determine where we're going to distribute our product. "But we don't really set the price, though, because the consumer sets the price; it's whatever they're willing to pay." So, absolutely, we want to identify the price that consumers are willing to pay. And there are five key activities in marketing. The first activity is to identify an unmet need. This is also a broad look at marketing, because you can take lots of courses in marketing, you could read many books in marketing, you could read thousands and thousands of pages of marketing, but I want to start our discussion at a place where we can get the big picture, so that you understand where we're starting from and where we're going to end up. So the first step in marketing is to identify an unmet need, and in order to do that we're going to do marketing research. We're going to do qualitative research and quantitative research, we're going to do primary research, and we might also purchase secondary research. Who could tell us the difference between qualitative and quantitative research? Go ahead. "Is qualitative how much of a quality object the thing you're selling is, its actual value versus what it's sold for, and quantitative how many you're selling?" Well, think about it from a research perspective. For example, a good example of qualitative research would be focus groups. In focus groups we have 10 or 12 people that presumably are in our target market (the target market is those people that we want to buy our product), and we get their input on what are some of the problems they're experiencing in, let's say,
cooking, or in using cooking products, for example, and we'll share with them a variety of concepts to try to understand whether or not those concepts are going to solve the problems that they have. But after doing four rounds of focus groups we're going to have interviewed basically 48 people. We don't really have anything statistically significant there where we could say 87% said that one of the problems they have is food sticking to the pot. Now, if consumers say that in our research, that's helpful to us, because then what we're going to do is test it in quantitative research. We're going to do a survey. It could be a mail survey, a phone survey, an internet survey, but with that survey we're going to try to get about 1,500 respondents, and with 1,500 respondents, in most categories and in most markets, that's statistically significant if it's a representative random sample, that is, if the people that completed the survey are representative of our target market. So we have to have a proportional number of the men and women that make up our target market. Now, it might be that we need to interview all women; maybe it's a product that's purchased and used only by women, and then it's appropriate for the sample to be only women. And sometimes you want to get information about women in a certain age group; that's okay too, but it just needs to be representative of who it is that we want to buy the product. So qualitative research and quantitative research are different, but they work hand in hand, because the qualitative research is going to be the basis for our quantitative research. But we're going to come back to that; market research is in chapter 8, and we're going to talk quite a bit more about how we identify an unmet need. But you made a good point about price. The next marketing activity is to develop a concept, and once we identify and develop a concept, then we're going to determine a price that the customer is willing
to pay. So far we have three activities: identify an unmet need, develop a concept, determine a price that the customer is willing to pay. Number four is to gain distribution, and five is to build awareness. Everybody got that? Who could tell me, what are the five key marketing activities? Go ahead, tell me your name. "Mosa." Okay, Mosa, go ahead. "Identify a key need, something everyone would want; then develop a concept for it, come up with a model of how it would work; three, come up with a price that would be fair, fair for everyone to buy but also allowing a profit; number four is to find distribution, who's going to sell it; and five is to build awareness." And importantly, what we want to do is get distribution first, before we start to advertise. We need to be on the shelf, so to speak, literally and figuratively, in Walmart, Kmart, wherever it is appropriate to sell our product: Macy's, Bloomingdale's, Best Buy, Pathmark, whatever stores are appropriate for our product. We should have the product available before we start to advertise, because what we don't want is to spend a lot of money on advertising and then have the customer go into the store and find out that the product is not available. Now, in some industries it's common to create some hype where the product is not available, and that's intentional, to create this image of scarcity, and sometimes that makes a product more desirable. But we have to determine whether or not the category is one that's prone to high-involvement or low-involvement purchases. If it's a high-involvement purchase, then people will go back. For example, music is something that people are very engaged in, or gaming, do you agree? If they don't have it, even though they said the release date was January 15th, you go there and they don't have your game or they don't have the CD, then very often people will go back a few days later or
the next week. But in some categories that's not the case; in some categories it's low involvement, and if you go there and they don't have the product, then you might leave and you may not come back, and what that means is we have to spend more money on advertising to get people to go back into the store to search for the product. So it's always better to have distribution first. The order is important, before you start spending 10, 15, 20 million dollars on advertising to make people aware of our product or service, create interest and desire, and ultimately get them to take action, only to find out that their action was in vain. And that model is this: what we do is try to get people's attention, create interest, develop desire, and ultimately get them to take action. But this is a cycle that has to occur, and getting from attention to action involves a significant amount of marketing communication and a significant investment. We're spending millions of dollars to make that a reality, so if they go to the store and the product is not there, that's a big problem. Go ahead. So the first one is to get people's attention. Our goal when we're advertising is to get people's attention, and that's why some of the ads you see are quite creative. Even if you don't like the ad, that's okay; even if the ad is annoying, if it gets your attention, if it creates some stopping power that makes you say, what's going on here, and it's able to communicate the key features and benefits of the product, or create interest, so you develop a need to know more about the product or service, and ideally it makes you want the product, right, it creates a desire for the product. Although, if we've done our marketing research properly, we've already identified the unmet need, so now we're just making people aware of the product or service, we're just making them aware of the solution. It shouldn't be a tough sell for us to get people to buy the product, since we've already done the research and we know what their problems
are. But still, we want to instill in them this desire for the product and ultimately get them to take action, which means either they log on and search for the product on the internet, so they go to amazon.com to buy the product, or they leave their house and go to Walmart or some other store, if not immediately then the next day, but at some point shortly after they saw our ad. So that's why it's important to make sure the product is available, because it's only in unique circumstances that they would actually go back and look for the product again after they went to the store and it was sold out. Do you agree that in some categories it makes sense? Any of you guys gamers? What do you think? If the game is not there the day they said it was going to be released, then you're going to go back, right? If it's something that you're really enthusiastic about, then that's going to be a high-involvement purchase for you. But other products, not so much, and it depends on the individual: what might be a high-involvement purchase for you may not be a high-involvement purchase for somebody else. So it's definitely personal, and usually the price level is associated with high-involvement purchases, although it's not the only indicator. Then again, keep in mind that what's considered expensive for one person may not be expensive for another. But the idea is that we need to understand the consumer behavior: what behavior do we anticipate if our product is considered a high-involvement or a low-involvement purchase? That's why we need to do the research, that's why we need to understand consumer behavior, so that we can plan accordingly. "Sometimes don't businesses or companies only release a certain amount to create tension among the consumers? Like Apple: you have to sign up before you get the iPhone, and if it's not there when you get there, you didn't get it, you have to return
the next day. It's part of building tension; they sort of create this drama for you." Yeah, they try to create this pent-up demand, this hype, if you will, but I would think that that type of product is something people would consider high involvement, would you agree? Like the iPhone: yeah, people will come back, people will stand in line for 15 hours to get the product, or to get the Xbox 360. Now maybe that's not you, but we need to understand that a certain percentage of the market behaves that way, so we need to have strategies and tactics that are going to address that dynamic in the marketplace. "Yeah, I saw an interesting ad last night. I was watching a basketball game and I saw something for Taco Bell saying that you can get the new PlayStation system before it even comes out on the market, but through a contest. So that's a way that, once someone has it, there's so much hype, like your friend who has it, and the rest of the market can't get it, so it's like a coveted thing now, right?" Absolutely. And it's interesting that they picked a gaming console. Right, so I think that's very relevant to what we're talking about. Definitely, that's who their target market is, because whoever eats that fast food is, I guess, teenagers, and who plays games? Teens, yeah, it could be. Absolutely, we need to understand all of that. We need to understand the consumer profile: who is our target market? And when we ask who our target market is, part of what we're trying to understand is the psychographics, the lifestyle, which is what Jason is talking about. What is the lifestyle of our target market: that they eat fast food, that they eat at Taco Bell, that they golf, whatever it is that's part of their lifestyle, as well as their age, their occupation, their gender, their ethnicity, their religion. How are those things going
to help us? What if we find out that... go ahead. "Yeah, it will help us find the market where we're going to sell with the most profitability, I guess, because if we sell a burrito to an 85-year-old, they're obviously not going to go for that." You understand why? Well, maybe the 85-year-old is not going to eat the burrito, or maybe they will, but they're just not going to be into gaming. I don't know; it depends, we have to see what the research tells us. So it's not what we think; it's what the research reveals: what consumers view favorably, what they're willing to purchase, what they like, because they vote with their dollars. There's no such thing as a great idea. You don't have any great ideas, I don't have any great ideas; the only great idea is the one that the customer says they will buy. That's the only great idea, and the only way we can find that out is through research. So that's an overview of marketing. Those are the five key activities, and each of those activities is vast, but that gives you a sense of the entire process; that's what we mean when we talk about marketing. And of course, closely related to that is the marketing mix; that's our toolkit, those are the controllable factors. Now, there are uncontrollable factors, like for example environmental change, government regulation, the economy, technological advances. We can't control whether the economy is in a recession, and if the economy is in a recession, of course that's going to have an impact on the demand for our product, but that's not something that we control. But if there is a recession, what could we do? "Lower our prices?" Yeah, we could lower our price. See, that's something that we could do; that's part of the marketing mix, that's a controllable factor. Then we have to determine how much we would lower the price, because we want to understand, if we lower the price 10%,
how much will total revenue increase, and how much will our net income increase as a result. Now, an elastic market is a market that's price sensitive. That means when the price goes down, the demand is going to increase; the question is, by how much? That becomes a bit more of a challenge for us to determine. It's something that we need to model, to try to understand the nature of that behavior. Is it directly proportional? Because we're going to have to make decisions on how many units we're going to produce based on what we anticipate demand to be. See, sometimes this issue of scarcity is not really a deliberate strategy on the part of the company. It's not really their attempt to create hype or this pent-up demand, as you were suggesting; sometimes they just didn't forecast correctly and they don't have enough product, because forecasting is very difficult. What we're trying to do is determine how many units we're going to need to meet demand. None of us has a crystal ball. I don't mind telling you, I've been in business 20-plus years, and I'm not ashamed to say that forecasting is difficult. Hundred-billion-dollar companies struggle with forecasting demand. It's very challenging to anticipate what demand is going to be, and what we anticipate demand to be is going to influence our production schedule. Now, how long does it take to make a particular product? Because when we get an order from Walmart for 100,000 units, for most products that's not something you can make in a weekend. You know that each holiday season there's some new toy that comes out, some kind of new teddy bear or electronics, like Tickle Me Elmo. When they make Tickle Me Elmo, they start making it a year in advance of the fourth quarter, so 10 to 12 months before they're going to ship the product from China is when they start making
it. So they've already started producing the Tickle Me Elmos that they're going to ship in September of this year, because if you're going to sell, let's say, 25 million units, and you need to produce that many, the production period could be months, even years. And that's why it's so challenging when you think about the demand for the new iPhone and the new iPad: if it's going to be available on February 1st, that means they had to start making it in the summer to be able to meet demand. How long do you think it takes to assemble one of those iPads? Five minutes? I mean, just imagine if you have to make 25 million of them; it's going to take you many months to produce that many. So we also have production limitations; we have a certain limit to our capacity. And that's why there's this dilemma, if you will, for managers: you don't want to make too much, but you're also limited in how many you can make, because of either the number of employees that you have or the number of pieces of equipment. But you don't want to have too much equipment either. Look at what happened to the auto industry, for example. One of the biggest reasons why General Motors, Ford and Chrysler have struggled over the last 10 and 20 years is that in the 1970s (were any of you alive then? No, probably not), in the 1970s they were the market share leaders. We didn't have Toyota dominating the US auto market, so they had this huge capacity, the ability to make millions and millions of cars. But then, as foreign competition entered the market, they sold fewer and fewer and significantly fewer cars, and what didn't change was their huge manufacturing capacity, and that's a huge fixed cost for their organization, and fixed costs have got to be accounted for; you can't ignore them. Questions? Are we good? Are we great? Yes. All right. So the question is, how are we going to achieve our objectives? In any organization there have to be three plans.
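The iPad numbers the professor tosses out (25 million units, maybe five minutes of assembly each) can be turned into a back-of-envelope calculation. The number of parallel assembly stations and the hours per day below are my assumptions, just to show why production has to start months ahead of launch:

```python
# Back-of-envelope version of the lecture's production arithmetic.
# Station count and shift length are hypothetical assumptions.
def production_days(units, minutes_per_unit, stations, hours_per_day=16):
    """Days needed to build `units` with `stations` working in parallel."""
    total_minutes = units * minutes_per_unit
    minutes_available_per_day = stations * hours_per_day * 60
    return total_minutes / minutes_available_per_day

days = production_days(25_000_000, 5, stations=1_000)
print(round(days))  # about 130 days, over four months, even with 1,000 stations
```

So even under generous assumptions the lead time is on the order of months, which is the lecture's point about forecasting and capacity.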
There are three basic levels in an organization: the corporate, business and functional levels. So we're going to talk a little bit now about business strategy. We need to have a plan in order to make our business strategy real. Where does it start? At the top. There's got to be a corporate plan; that's what defines the business plan, and that's what defines the functional plan. So what is the corporate plan? The corporate plan is the plan developed by the senior management team that addresses the mission, the values and the vision of the organization. So the corporate plan includes the mission, the vision and the values of the organization. Those are three key components; it's not limited to that, but those are certainly three of the key components: mission, vision and values. And I should add that there's a tendency nowadays to define mission and vision as the same, but really they're not. All right, the intent is different, but sometimes those terms are used interchangeably, so let me clarify that for you. The mission is the business that the company is in now. So what is your mission as an organization? It's to provide, let's say, educational learning devices to high school students in North America. Now, also keep in mind that the mission and the vision of the organization should be short. This is not your entire strategic plan. Everybody in the organization should be able to communicate what the mission is, everybody from the president of the company down to administrative assistants and janitorial staff; everybody should be able to internalize what it is. If somebody is asked what the mission of your organization is, even somebody at the switchboard should be able to communicate it. So it needs to be what I would call deceptively simplistic: it needs to encompass the organizational goal in a broad way. But the vision is where we want to be. See,
that was the original intent of having the mission and vision: the mission is a definition of the business in which we currently operate, but the vision is where we want to be in the future. So our vision might be something like: to have the number one market share in educational learning devices for high school students around the world. Now, you see how that's different from the mission, or no? What do you think, are they the same? From what I described, right, the mission simply said that we're in the business of developing educational learning devices for high school students in North America, but then we said our vision is to be the market share leader, the number one educational device company worldwide. See, that's where we want to be. We're not there now, but that's where we want to be in the future. So you see the difference: one defines our business now, and the vision is where we want to be in the future. Even in our textbook they sort of blend those terms; that was never really the intent, and the intent is the way that I described it to you, with the vision being forward looking, where we're going to be in the future. "Usually when you describe a mission, like when a team goes on a mission, it's something that hasn't occurred yet. I feel like, in a way, in order to accomplish the mission you need to finish it, which has a lot to do with the vision." Yeah, oh absolutely, they're definitely interrelated. But the vision is definitely more aspirational, as you described, like where we want to be. What I'm saying is that the mission is actually where we're starting from. You're saying that the mission is to get to someplace, but in this case we're saying that, well, we're starting from here, and then we have an aspirational goal to achieve another objective. So we need to define our business: what is it that we do, what is it
we do on a daily basis on a daily basis we produce Educational Learning devices in the North American Market that's what we do and there should be focus and you'll see there's um we're going to talk about growth strategies there used to be um many companies that focused on diversification they didn't have that that kind of focus and that was very popular in the 70s tobacco companies owned food companies those types of things were very common Sears used to be the nation's largest retailer in the United States and they acquired um an insurance company Allstate they acquired a brokerage firm Dean Witter they uh um acquired Discover card and that was that was very common but now um Wall Street is rewarding companies for being focused and now you see companies are shedding these other organizations and they're trying to focus in fact that's actually what Sears did ultimately they sold them off although Allstate was very successful um and we're going to talk about how it relates to this model because what happened is um Allstate for example was a star in this model we're going to talk about the BCG model but the cash cow was Sears retail operations so they used the profits that they generated from Sears retail to fund the growth of Allstate and Discover card and Dean Witter services but then they came full circle and then they ultimately decided that they were going to refocus on being the best at what originally was the key to the company's success which was retail and that's something that they've been struggling to do for like the last 15 or 20 years in fact uh a few years ago several years ago now um they um came together with Kmart so Kmart and Sears are one company which is um both companies um had been struggling for quite a while so you might wonder if that was a brilliant thing to do right for two weak companies to come together but that's um that's what they did um and that was really key to their survival
because um if that didn't happen um both companies um would have um gone out of business yeah just a question on that I don't know I I feel feel like like most of the time it doesn't always work out that way when two weak sources combined to to work together Try to Make a Better product why do you think that is that when when two two weaker sources combine their resources why why why doesn't it necessarily like enhance it so much why is it just like like as like with this it kind of stayed the same I know Syms and uh and uh Filene's Basement combined and that didn't work out different companies a lot of times Sprint combined with Nextel didn't do anything for them yeah well you ideally what you want to do is is um combine with a partner that has complementary skills or some sort of competitive Advantage so um in other words your strength is their weakness and their weakness is your strength but for companies that are really struggling they have so many weaknesses that it's just sort of like the blind leading the blind right that they they can't help each they can't help themselves much less help each other but you'd like to think that there would be some synergistic effect from them coming together sometimes um that happens like um for example Johnson and Johnson they're known for having a portfolio of um of companies so but they're decentralized but what makes up Johnson and Johnson is these group of companies also Newell Incorporated is also made up of a couple of dozen companies and the key to their success has been the centralization of their operating systems P&G so Procter and Gamble um is also a good example very successful they acquire um other companies and integrate them but in order for that to happen you need to be operating from a position of strength and then take a weak company and show them how to do it better Big Brother type right right but I mean it doesn't mean that it couldn't happen to um small companies or struggling companies that they
couldn't come together and together be bigger than they were um the you know operating independently because certainly two companies operating independently are going to have redundancy so one of the advantages of coming together is well now you only need one HR department and now well how many um how many um manufacturing facilities do you need so you might be able to combine manufacturing facilities you might be able to you know reduce the number of employees significantly so you have to look for those types of efficiencies and sometimes um that happens um sometimes it doesn't happen at the level that people anticipate because there certainly there's a cost associated with that merger but it depends you got to take it on a case by case basis but I wouldn't say categorically two weak companies coming together are doomed but it does seem like very often it's hard for them to emerge um successfully from from their troubles because um very often they wait um to a point where their situation is so bad that um even combining um is really just a an act of desperation but you know really depends on the on on the case yes go ahead um so few questions um number one when this happens when they combine how does it Define like who is the sort of the boss like who becomes like the weaker company and the well yeah that's something that the um the parties have to agree upon um and that's one of the also the the issues is um is really in integrating those companies that's one of the biggest challenges in um two different corporate cultures coming together and you're right there is this power struggle sometimes it has to do with the um the level of assets is one of the ways that usually they decide um which company is going to have the decision-making um Power so one company might have $50 million in assets another company might have $25 million in assets so they might say well we're not merging we're acquiring you right that's that's different right than to say oh well we're
both you know the same size company um and we're really we're on equal footing um as opposed to saying well it's not um it's not a really a partnership per se we've actually bought your company out and other question is um when like when can this be done in in like in order to like promote like when for example two famous companies or one is famous and another one is not so much can this be like just promotion just in terms of like image because if two great companies united and like in terms of for example stock shares would that go up that effect on that just like just names not like that before we see the results when we just hear about the company oh and anticipation absolutely so um the market will anticipate that there's going to be a reduction in the number of employees there's going to be um you know other um savings and efficiencies that are going to be achieved and certainly that's going to impact the stock price is more positive or it's more like uh let's wait or people are right away more like yeah this is probably going well I would like to think that it's going to be perceived as positive but then the question is did we um were our expectations set too high so it should be an agreed upon outcome the two companies agreed that the best strategic thing to do is for us to combine and help each other and together we could be successful so presumably the general the marketplace at large right Wall Street is in agreement with the strategy that the senior management team of these organizations came up with that yes that's the right thing and you're right overall the company's going to be more profitable and uh stock price um would go up but it depends on what the actual proposal is um is there ever a possibility that the mission statement can change once you achieve your vision or even oh absolutely in this case the business company of the um Educational Learning device right so let's say you say your your mission is that you provide Educational Learning devices
to schools high schools and your view and your vision is that you provide worldwide once you achieve that your mission statement is still that you want to provide that you want to provide um Educational Learning devices to high schools it's just the fact that you're now doing it worldwide right so it's okay to um to adjust your mission statement to reflect um changes in the environment um if you achieve certain goals or maybe you have certain setbacks then you could um you could adapt it yeah that's that's okay and in some cases I think what you're saying is that your vision becomes your mission so once you achieve that yeah I think that's plausible and then we have to decide well where do we go from there so absolutely so that's what we talk about in the corporate plan but the thing is that Senior Management doesn't have operational power so in other words once the senior management team addresses some of these issues and that's not the only thing that's in the corporate plan but certainly three of the key things that they talk about are the mission the vision and the values for the entire organization then the Strategic business units known as SBUs the Strategic business units are then tasked with making that Vision a reality making that mission a reality so in of itself the key is that in of itself it's not enough just to have a mission statement it's got to be real how do you bring that to life so then you're going to rely on the business units which could be now that's different from the functional units which are right these are three levels in the organization and three plans what we talked about in chapter two is three plans in an organization an organization is going to have all three plans operating simultaneously the corporate plan the business plan and the functional plans the functional plans would be like the plan that the marketing department has is an example of a functional plan or the manufacturing Department there needs to be shared goals
and objectives so whatever the key goals and objectives are of the corporate plan has got to be part of the business plan because the business plan is the way that the Strategic business unit that division is gonna make the mission and vision a reality yeah just uh uh I don't really understand fully like uh what what business I know they're supposed to like you said bring it to life but like how how do they do that what what is that you mind giving an example of that so for example let's say um in a given um company let's say an electronics company so an electronics company like Sony for example they have their corporate plan but then they have a variety of strategic business units so they have a group of Divisions like for example um TVs computers laptops right what else gaming consoles cameras what did I miss MP3 players so let's say that um one of our um strategies or part of our mission is to be the leading or the number one Electronics producer worldwide all right well that's interesting I mean that's yeah Vision let's say that's the vision for the organization right where they want to be then it's up to each of these strategic business units to make that a reality so then the division that produces flat panel monitors they've got to produce the product that's going to outsell other producers of monitors right if they want to be the market share leader and then the same with laptops and game consoles and DVD players right they have to develop strategies and tactics that are going to make that a reality so that means that if we're going to be the world's largest and leading market share producer of electronics that means that all our strategic business units all our divisions have got to be the leading producer that means I mean we could try I mean maybe we won't be in TVs but our goal is to be leading producer of all of those categories of all of those strategic business units basically to make the mission come to reality yes right they the the
the um strategic business units are going to make the mission and the vision a reality that's where it becomes operationalized because really what is the corporate plan for the most part it's just words saying this is our goal but then well how do you that's nice great the senior management team has set the direction for the entire organization and there may be some um some strategies sometimes um in a centralized organization they might actually um provide Direction to each one of these strategic business units and tell them some key strategies or areas of development or Focus but every day each of these strategic business units has got to be working to achieve the mission and vision of the organization and that's why I said it's so important everybody when they come to work they need to know like why am I here why am I here because we are going to be the single largest the most successful electronics company in the world yeah right so that's what you need to happen that's why it's so important I say everybody needs to internalize that there's going to be a lot of complicated reports and strategies and tactics but you need to the mission and vision has got to be something that everybody can grab a hold of and know like oh that's why I'm here I I this is my purpose this is my role just in a case like Sony where their vision is to be the number one Electronics retailer in the world what would be their mission that they're Electronics seller that they're Electronics retailer yeah that they're a provider their mission is I would say well we could we could go get the annual report and find out but I would think it's to um be a worldwide provider of electronics and Technology Solutions so I would think that their goal is to um to be in everybody's home right to have a very high level of household penetration and um you know the different strategic business units they might have you know goals like that to say that you know 70% of Americans will own a Sony Vaio
laptop those are the kinds of goals um like that we need to measure the level of household penetration and and market share so we discussed the vision and Mission but what exactly are the values like like the like guidelines that the company or organization goes by like we're not going to make cheap materials and sell for right what's important to the organization so for example to um to respect um diversity and cultural differences of our employees and our suppliers that would be an example of a value that the company has but what I would caution you is is that it's got to be real so it can't just be words on the website actions speak louder than words so if that's true then you should be giving scholarships to um minorities in the community in which you operate your business just to check is this right the functional plan implements the business plan well we have these are all shared objectives so um I mean like the business plan is basically how we're going to do and then the functional put it into action yeah the functional is you know you said that um we're going to achieve a high level of uh brand awareness then you need to then your marketing team needs to go to work and they're going to develop advertising campaigns print ads commercials outdoor ads to increase the level of awareness for our brand yeah absolutely and importantly I want to emphasize this again that these three they're it's not one or the other all of them all of them you have to have the corporate plan the business plan and the functional plan it's not like oh maybe you have one of these no you need to have um all three and we often refer to as shared objectives and goals so everybody's trying to achieve the same goals and objectives but how they do that how they contribute to achieving that goal is going to vary well you work in the marketing department so the question is how do you um contribute to that particular goal how do you make um respect for diversity real and
how it's done in the finance department could be different or how it's done in the laptop division could be different than in the DVD player division but we're all trying to achieve the same thing the individual strategic business units and functional teams may have different tactics different ways um to go about that and part of that might be determined by the market in which they operate so these are different the um laptop Market is going to be different in terms of in relation to DVD players so thinking for example um who the competitive set is so who are our competitors do we have the same competitors in laptops as we face in DVD players not necessary yeah not necessarily right you could have um a different group of companies that manufacture DVD players versus laptops so we need to understand who are our direct competitors and who are our indirect competitors and those things are going to influence how we are able to achieve the mission and the vision and the values of the organization so it's strategic to determine the competitive set because that's not necessarily what you could describe as a right or wrong answer it's strategic and you need to provide a rationale for why that company is a direct competitor or indirect competitor take for example the beverage industry what do you think milk and orange juice are they direct competitors direct or indirect competitors direct direct so tell us tell us why because both things you have them in the morning so they're uh what you're suggesting is that they're substitute for each other they're they're against each other and they could be against each other as a drink so like same thing like say like Coke and Pepsi that's a direct an indirect would be well looking at now you're deviating we could look at a lot of different scenarios but yeah no absolutely it it could be and that's something strategically that we need to decide or as Executives is that a little general General though yeah but you might say well um our 
direct competitors we might Define as all Orange Juice companies let's say so if we're Tropicana we say Minute Maid Simply Orange that those are direct competitors but with the orange juice and the milk they're two totally different things you're not going to be pouring orange juice into your cereal all right so that so they're two so I would say that they're two different markets because you're not using them for the same granted you can drink them just like you drink sodas and then be in the same Market as a soda or or water but not using them in the same thing that's something we need to look at because the thing about indirect competitors is we don't classify them as indirect competitors to forget about them the reason we classify them as indirect competitors is so we don't forget about them because look at what um what the Dairy Farmers did with the Got Milk campaign see what does that tell us about the way they view competition see to me that says that they don't view other other Dairy Farmers they don't view other milk producers as direct competition per se because the Got Milk campaign is a campaign that's paid for by the milk farmers of America I think that's the name of their trade Association or maybe it's the Dairy Farmers of America but the idea is that the Dairy Farmers right the milk producers they share the cost so they're focusing on creating category need or what we call sometimes primary demand for milk so what they're saying is that well wait a minute XYZ milk producer is not our competition ABC milk producer is not our competition who's our competition orange juice right isn't that who they view as the competition because they're banding together because they realize yes like you were saying that orange juice is a substitute for milk and they did further they did research to understand the benefits and the reasons why people buy milk and that's why you've seen um orange juice that has and they promote this very aggressively that
it has calcium orange juice has calcium but why do you think they do that because they believe that milk and orange juice are substitutes and people drink milk for calcium well if orange juice has calcium they believe people would drink more orange juice and it has vitamin A and it also has vitamin D and well it sounds like wait is this not milk that we're talking about and they say yes and so from both from both perspectives whether it's orange juice companies or milk companies they both seem to believe that they're substitutes for each other that's what so many varieties of each kind milk and orange juice yeah absolutely appeal to so many other people definitely is something you wanted to add in the back I saw your hand there I was just at the beginning of the class you're talking about uh different marketing schemes you said one of them is like a delayed release date um and people will show up to the store is that not illegal in any way like isn't that false advertising to tell people you're going to have something in stock and yet your books show the entire time you have no plans of having it in stock on that day oh yeah that's a problem um because usually what that suggests is what usually happens in that case is what's called bait and switch so you advertise something like you said that you never plan on having in stock or you only have one and you advertise it for $50 and people come in and of course you don't have it and then you try to sell them something that's not $50 but $150 yes the government um does not approve of that that's definitely um illegal all right so these are the three types of plans three levels in the organization let's see if we could talk now before uh before we finish up oh what we have like three more hours okay we're doing good so um let's talk about the BCG model this is a star right skills if you could do this then you also have skills this is a star this is a question mark this this is a dog okay yeah this is a dog not to be confused with a
dinosaur but yeah this is a dog see there right this is a dog right okay and this is a dog this is a question mark sometimes um this is referred to as a problem child so there's a variation of the model but traditionally the model indicates that this quadrant is the question mark this is the star and this is the cash cow so I put a dollar sign there because in view of my dog drawing skills I thought a cow would that would just be maybe yeah pushing it a little bit too far so let's talk about how we read this um this chart what this looks at is the level of growth in an industry so this is what we use to do what we call portfolio analysis so what we want to do is classify our strategic business units as either Stars cash cows dogs or question marks on two dimensions and the two dimensions are the level of growth in the industry and the market share questions you follow me so far so this is about portfolio analysis and this is very helpful because literally what you could do is do that on one page now you could um have a 100 Pages as backup that's going to include your market research but what we want to do is to be able to capture that right have a snapshot of the performance of our strategic business units or our product lines all right so market share and industry growth so industry growth so we're going to have the growth rate and we're going to have the market share and this we'll talk about next time growth strategies as indicated include Market penetration Market development diversification and new product development all right but we're going to we'll talk about that next time but let's let's finish this first all right we got a couple of more minutes all right so star in terms of growth rate in terms of growth rate the star has a high growth rate the cash cow has a low growth rate so it may be a product line or a strategic business unit that's operating in a mature category but the market share is high all right so we see how to read this Matrix this is a
four box Matrix this says that the star has a high growth rate and a high market share that's the reason very often what companies do is they use the cash cows to fuel the growth of the Stars so if you're growing if you have a star in your portfolio that's what let's say a star would be like a product type would be um a tablet right like the iPad so you have high market share and high growth so then you're going to take something that's um not growing as much but it's producing a lot of profit so what do you do if it if the industry is not growing then should you keep investing heavily in a mature category I mean you need to maintain where you are but what very often happens is companies reallocate their resources so that a large proportion of the profits from the cash cow they use to fuel the growth of the stars because that's the category that's growing rapidly does that make sense right that seems plausible but there there's some consequences um of that which is you know if you milk the cash cow for too long then what starts to happen is you start to lose share so you have to have a strategy that's going to allow you to maintain your position which is your um a cash cow which means you have a high market share but you don't want to give that up so you're going to use some of the income to fuel the growth of your stars questions does this make sense so this is how again this is portfolio analysis so what we're trying to do is we're trying to classify our different strategic business units we're trying to classify our different product assortments we're trying to determine which are the stars and which are the cash cows so this model the Boston Consulting Group model says that a cash cow is one in which the market has low growth so it's not growing or is growing very little but we have a high market share the star is in a high growth category and we have high market shares does that make sense right you're the star why because you're in a high growth category and
you have a high market share but in some cases what do we do here with the dog the dog has we have very little market share so we classify a particular um product line of ours as a dog that means that we don't have much market share and the industry isn't growing so we need to determine whether or not we should reduce the amount of money we're investing in these dogs in these product lines where we have very little market share and in a category that's not growing one of the things that makes a a market attractive is well a number of things but certainly the growth rate is the market growing that means there's future potential and certainly also the size of the market is um an element that um many find attractive so this is what we do we look at all the product assortments all our product lines our strategic business units and determine which are stars why is that helpful because that tells us where we're going to allocate our resources where we're going to spend our money so if we have a hundred million to spend on Advertising how much do we give to the stars and how much do we give to the dogs and the question marks so the question marks are those where it's high growth Market high growth industry but we have very little share so certainly low share which is dogs and question marks we're in a weak position we have a very small percentage of the market but in one case it's really bad because not only do we have a small percentage of the market but the market isn't growing that's what we describe as a dog the other is a little bit better we don't have much market share but at least the market is experiencing a significant amount of growth so that means we need to determine these question marks the reason they call it question marks is because well it could go either way right we have to decide the market is growing but we have very little share so do we invest to try and get more share do we invest to try and get a bigger share of the market so this
helps us with our um strategic decision-making process so we could talk about this a little bit more next class and well |
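The quadrant rules of the BCG matrix described above, classifying each strategic business unit by industry growth rate and market share, can be sketched in a few lines of code. This is a minimal illustrative sketch only: the cutoff thresholds, product names, and numbers are assumptions made up for the example, not figures from the lecture.

```python
# Minimal sketch of BCG growth-share matrix classification.
# Thresholds and the example portfolio are illustrative assumptions.

def classify_sbu(market_growth, market_share,
                 growth_cutoff=0.10, share_cutoff=1.0):
    """Classify a strategic business unit on the two BCG dimensions.

    market_growth -- annual industry growth rate (0.12 means 12%)
    market_share  -- relative market share (our share / largest rival's)
    """
    high_growth = market_growth >= growth_cutoff
    high_share = market_share >= share_cutoff
    if high_growth and high_share:
        return "star"           # high growth, high share: invest to lead
    if high_share:
        return "cash cow"       # low growth, high share: harvest profits
    if high_growth:
        return "question mark"  # high growth, low share: invest or divest?
    return "dog"                # low growth, low share: candidate for exit

# Hypothetical electronics-company portfolio: (growth rate, relative share)
portfolio = {
    "tablets":     (0.25, 1.4),
    "DVD players": (0.01, 1.2),
    "camcorders":  (0.02, 0.3),
    "e-readers":   (0.20, 0.4),
}

for sbu, (growth, share) in portfolio.items():
    print(f"{sbu}: {classify_sbu(growth, share)}")
```

In a real portfolio analysis the classification would feed the resource-allocation decision the professor mentions, for example routing a share of cash-cow profits into the stars.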
Marketing_Basics_Prof_Myles_Bassell | 20_of_20_Marketing_Basics_Professor_Myles_Bassell.txt | So today we're going to continue our conversation about integrated marketing Communications so we said it's not just about advertising advertising of course is important what are some of the key takeaways last time just briefly to recap we talked about different advertising mediums so once we decide who our target audience is who are the people we're trying to reach with our advertising that's what target audience is is the people we want to reach with our advertising and remember we said that usually the target audience is a subset of the target market who could explain why that is because sure your target audience is who you want to reach with your advertising but you also want them to buy your product isn't that the distinction we made we said the target market is who you want to buy your product and the target audience is who you want to reach with your advertising but I said but they're not always the same in fact I said very often the target audience is a subset of the target market why is that remember we talked about the $100 bottle of perfume so why is the target audience for a given campaign going to be different than the target market the people who can't afford it know about it then the people who can't afford it want it more it's like like you say with the BMW like everyone knows it's a BMW or Mercedes or whatever so so you said that um one of our objectives certainly is to build awareness that's what Stephen is telling us is that we want to build awareness we want to build brand awareness that's certainly one of our objectives but we said that's the unwritten objective really you don't need to tell an advertising firm that um you want to build brand awareness because if it wasn't for Branding it wouldn't be advertising what there what do you think I mean what would you say in the ad unless you're just going to advertise to create category need for
orange juice then what is the ad going to talk about if it's not going to talk about the unique selling proposition of a particular brand we're not going to talk about the brand promise we're not going to talk about the points of difference and the points of parity for our particular brand in a category all the products in a given category provide the same functionality or the same benefit cars we said all provide transportation but some cars are $20,000 and some cars are $220,000 why well the product is wrapped in a brand and the brand is what communicates that point of difference it's what differentiates one product from the other so our objective absolutely is to create awareness we want to achieve a high level of brand recognition and brand recall so we say awareness you say well what is awareness well awareness is when we're trying to create brand awareness we're trying to create recognition which means at the point of purchase people will recognize our logo our symbol our packaging and that's why in every ad you always see the logo for the company not all companies have symbols but you always see the logo for the company that's a must you can't leave off the logo for the company so for example at Brooklyn College if we're going to promote an event the promotional materials should have the Brooklyn College logo on there that's important people need to know that it's a Brooklyn College event and also be able to recognize the Brooklyn College logo when they see it so the target audience is who we want to reach with our advertising and to be more specific let me say this it's not something mysterious our target audience could be women between the age of 18 and 39 who have at least high school education that are of any race or it might be that we want to specify a particular race we say that um our target audience is women 18 to 39 that are African-American or Asian and the reason why it's important to specify because our target market is all women we want to
sell our $100 bottle of perfume to all women, from 18 to 88. Be ambitious; you've got to think of these things, don't rule anybody out: 18 to 88, all religions, all nationalities, all income levels, all those demographics. So when we say, what is the definition, coach, when I actually write it out, what do I put down when I say this is my target market? Well, you specify all the demographic characteristics of the people that you want to buy your product, and in some cases it might be all religions, all races, all income levels, all levels of education. Now, that being said, once we've decided that's our target market, everybody that we want to buy our product, who is going to be the focus of this particular advertising campaign? We're going to have multiple advertising campaigns in place simultaneously, but the one that we're going to be developing is going to focus not on women 18 to 88, but on women 18 to 29 who have an income of at least $20,000 and are Asian. So once we know who the target audience is, that's going to determine what media is best to use to reach them. Two important aspects of advertising are reach and frequency, and before we can reach them, we have to know who they are. So of course, Michael, reach is important, I get it; reach is how many people are going to be exposed to our ads, but we have to define who those people are. Once we define that it's women 18 to 29 who are Asian and have an income of at least $20,000, then we're in a position to decide which TV channels to advertise on, which radio stations, which newspapers and magazines to run print ads in, where we're going to put our billboards, what the messaging is going to be, and what talent we're going to use. What does that mean, talent? In other words, who is going to be in the commercial. So if our target audience is Asian women, is it going to make sense to have an African-American woman
in the commercial? Because remember, we want our commercial to resonate with the target audience. That means they can connect with that commercial, that it's going to be meaningful to them, that it's going to get their attention. We're going to create interest, desire, and action. What is the action? The action is that they're going to buy the product. So our friend AIDA, AIDA is our friend, and she's responsible for helping us get the attention of the target audience. So remember we said our ads, whether a print ad or a commercial, have got to have stopping power; they've got to be able to get people's attention. Why? Because there's a lot of clutter, a lot of noise. What's clutter? Like other businesses, the other things going on around you, right, absolutely: the other commercials, for example, the other billboards. Take Times Square, for example, or Hong Kong or Las Vegas or Miami; those cities have been transformed by billboards, because there are billboards everywhere. That's clutter. If next to your billboard is another billboard, and then another and another, that's clutter, and very often what happens is we experience sensory overload: we see so much, there's so much stimulus, that we can't perceive everything. We have to find a way for our billboard, for our print ad, to stand out from the clutter; we've got to have that stopping power. How do we get people to pay attention? Because what are they doing when the commercial comes on? They might call a friend, they'll send a text message, they'll post an update on Facebook, they'll go into the kitchen, they'll change the channel. How do we get people to stay engaged, to watch our commercial, and ideally to process the messaging? Because there's certain information we want to communicate: our value proposition, our unique selling point, what makes us unique relative to other brands in the marketplace. So we
encode that; we create this commercial or a print ad. Well, what happens once we've encoded it? The viewer has got to decode that messaging, meaning they have to process the messaging and learn the messaging. Questions about that? And we talked about the different media types, right? So let's talk now about scheduling, because last time we started to talk about dayparts, what time of day we're going to advertise, for example. And we need to decide which magazines we're going to advertise in, because it's usually not going to be just one. So we're going to look at the profile of the readership; magazines have a profile of their readers. What's the profile? The profile tells us what percentage of the readers are 18 to 29, what percentage are 30 to 39, what percentage make more than $20,000, all the demographic things we were talking about. So what we try to do is align the profile of our target audience, those we want to reach with our advertising, with the profile of the media. Every magazine has its own demographic profile. Some magazines are heavily read by women; very few men might read a particular magazine. For example, how many men here read Cosmo? See what I mean? But you should read it; that's one of coach's tips, it'll change your life forever. What would you think, for example, for a magazine like Ebony, what percentage of the readers are female? Black women, right, without even going to their website, because you could go to their website and get this information, but they're targeting African-American females. So we look at the profile of our target audience and try to match that up with different magazines. It's quite a challenging responsibility, because you're not going to find one magazine that's going to reach everybody in our target audience. So very often, in my experience, you advertise in at least 10 different magazines, because in a given
magazine you might only be able to reach 30% of your target audience. What does that mean? That means that 30% of the people who read that magazine are 18 to 29, make at least $20,000, and are Asian, but the other 70% are, let's say, older than 29 and not Asian. You see the problem? That's our challenge. So then, as part of our integrated marketing communications plan, we try to find magazines that are heavily read by female Asians. We don't want to reach everybody; we're targeting a specific group of people with our advertising, and ideally we're going to customize our advertising. So if we're targeting Asian women who are 18 to 29, then I shouldn't be in the commercial, an African-American woman shouldn't be in the commercial; you need to have somebody who is Asian and in that age group. Not Asian and 75 years old, but Asian and 18 to 29. Do you agree? Does that make sense? So what about you guys, all young college men? If my company was selling, let's say, cologne; we talked about $100 bottles of perfume, now we're talking about a $100 bottle of cologne for men. If I'm in the commercial and I show you the bottle of cologne, and you see me there, you're like, I'm not really feeling this. Do you agree? Is that reasonable? But if you see Demitri in the commercial, or Josh, or Edward, then you're like, oh cool, somebody you can relate to. Now we have to decide on a schedule; we have to decide how often we're going to advertise. Part of our integrated marketing communications plan is to advertise, and we have to decide whether or not our schedule is going to be what we call continuous, where we're advertising all the time. Edward, help me out. Continuous? That's pretty close. What do you think, guys? What's going on here, what's wrong with this marker? It's not writing properly.
Let's see: continuous. Yeah, I think this is right. All right, so let's say these are months: month one, month two, month three, month four, month five, six. So this is January, February, March, April, May, June, etc. This is an example of our schedule, and in this case we're saying that we're going to be advertising every month. But are there any other choices? What about if we decide we're going to advertise every other month? How about seasonal? Yes, we could advertise on a seasonal basis. If we're advertising every other month, what do we call that type of scheduling? Flighting, yes, this is an example of flighting. So we could advertise every month, or we might advertise, let's say, every other month. Flighting means that there are periods during which we advertise and then there are periods in which we don't advertise; it could be because of seasonality. Do you think this makes sense? Do you think flighting makes sense for every business, or what do you see as maybe a weakness of this approach? If it's a standard SKU item on shelves, it might not make sense to do flighting, but if it's something like costumes or Christmas-related items, then flighting or a seasonal schedule might make more sense; general day-to-day items might not want to just do flighting. And what would be the risk of doing that? Losing audience. You're losing audience, and sales, and we're investing money to create awareness, so we're losing awareness. Just as we're creating some forward momentum, what happens? We stop advertising, and we have clutter, so it's out of sight, out of mind. We stop advertising, for whatever reason, for a month, and while Coke is not advertising, who do you think is advertising? Right. So for a whole month, just as a hypothetical example, Coke doesn't advertise and all you see is ads for Pepsi. See, that's a problem,
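The two scheduling patterns from the board, continuous versus flighting, can be sketched in a few lines of code. The six months are from the example above; the monthly budget figure is an assumption for illustration only.

```python
# Sketch of the two ad-scheduling patterns discussed above.
# The $100,000 monthly budget is an assumed figure, not from the lecture.

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]

def continuous_schedule(monthly_budget):
    """Advertise every month at the same level."""
    return {month: monthly_budget for month in MONTHS}

def flighting_schedule(monthly_budget):
    """Advertise in alternating months; spend nothing in the gaps."""
    return {month: (monthly_budget if i % 2 == 0 else 0)
            for i, month in enumerate(MONTHS)}

continuous = continuous_schedule(100_000)
flighted = flighting_schedule(100_000)

# Continuous covers all six months; flighting covers only three.
print(sum(continuous.values()))  # 600000
print(sum(flighted.values()))    # 300000
```

The trade-off the discussion raises shows up directly: flighting halves the spend, but the zero-spend months are exactly where a competitor's ads face no response.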
that's a concern. But like Edward is saying, there may be some situations where that might make sense, and there are some periods when we increase the amount that we're spending on promotions. Basically everything we're talking about here is about the promotional elements, whether it's advertising or public relations or direct mail or sales promotions; all of those are promotional elements of one of the four Ps. Remember I told you, one of the four Ps is promotion, which includes advertising; advertising doesn't start with a P, but it's certainly an important promotional element. So our schedule: if we're not advertising during a given month, do we have any sales promotions? What would be some examples of sales promotions? Coupons, yeah, a coupon. Is a coupon effective? You have a coupon that you can redeem at the retailer for a dollar off. Do people use coupons? The redemption rate is relatively low, but certainly consumers use coupons, and the effect of the coupon is what? What does a coupon do? It saves you money, which means it gets you in the store. It might get you in the store, certainly more sales, more sales by lowering the price. So the impact of the coupon is that it lowers the price, and if it lowers the price then absolutely it's going to increase sales, it's going to get you in the store; all of those things are going to happen. And what about deals? Deals are, for example, buy one get one free; a BOGO is a deal. Premiums: an example of a premium would be that if you buy the shampoo it comes with a small bottle of conditioner, or maybe not even a small bottle; maybe if you buy the shampoo you get a same-size bottle of conditioner. Rebates, absolutely. And in addition to what we mentioned already, what is something that we hope is going to happen as a result of these sales promotions? More sales than the
cost cut, absolutely. So we're definitely hoping that the total revenue is going to compensate for the price reduction, so that overall our total dollar sales have increased, the number of units has increased, and our total margin has also increased; so that if we lower the price 10%, sales increase by maybe 30%. And what else? Think about what we know; remember we talked about behavioral segmentation and usage rate, heavy users, moderate users, and light users. Do you think any of these promotions will have an impact on those who are light users, or even nonusers? That's our expectation: we're trying to get trial. One of the reasons people might not be buying our product is the price, so there's a risk, a financial risk; sometimes there's a social risk. What's a social risk from purchasing a product? That people just generally dislike the product, yeah, that people might think you're not cool. So that might be a concern; you want to buy a product that's going to make you cool, that's going to make you acceptable, a product that your friends approve of. There's a risk for certain products that if you buy them it can impact, let's say, your popularity. But sometimes we're worried about the price, and if you get a coupon that's a dollar off, or $20 off, or a $100 rebate, that reduces the financial risk associated with making the purchase, and we're anticipating that that's going to result in trial: people will buy the product, they'll try it even though maybe they never used it before, and then there will be repeat purchases. So one of the objectives of a promotion is to get people to try the product who have never used it before, and ultimately to get them to buy it again. In some cases we'll even go so far as to give free samples. So forget about the coupon; I'm going to give it to you, that's how much confidence we have in our product.
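The promotion arithmetic mentioned above, a 10% price cut offset by a 30% lift in unit sales, can be checked quickly. The base price and unit volume below are assumed numbers purely for illustration; only the percentages come from the lecture.

```python
# Hypothetical figures illustrating that a price cut can still raise
# total revenue if unit sales grow enough (assumed base price and units).
base_price = 10.00    # dollars per unit (assumption)
base_units = 1_000    # units sold before the promotion (assumption)

promo_price = base_price * 0.90   # 10% price reduction via the coupon
promo_units = base_units * 1.30   # 30% lift in unit sales

base_revenue = base_price * base_units     # 10000.0
promo_revenue = promo_price * promo_units  # 11700.0

# Revenue rises about 17% despite the lower price:
print(base_revenue, promo_revenue)
```

In general the revenue multiplier is (1 - cut) x (1 + lift); here 0.9 x 1.3 = 1.17, so the lift more than compensates for the reduction, which is exactly the point being made.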
We're going to give you a bottle of Tide, we're going to give you a package of Oreos, because we are so convinced that once you taste this cookie you're going to love it, and when you open it up and the cookies are gone, there's a coupon in there for a dollar off. So free samples, for some companies, are a major part of their promotional strategy. In fact, more and more sampling is being done in-store; they want you to try the product at the point of purchase. In fact there are point-of-purchase displays: in addition to the shelving, there are corrugated displays of products. We're trying to get trial but also stimulate impulse purchases, trying to get people's attention by having these displays in the store. What's the difference between a contest and a sweepstakes? Those are both types of sales promotions, and you might think they're the same, but they're really not. What's the difference? You could win something. Both of them allow you to potentially win something, but a contest is based on skill: you have to do something other than enter into the drawing. You have to design a logo, for example, or develop a print ad, or write a jingle for the commercial; you have to do something that demonstrates some skill. In a sweepstakes, all you need to do is basically enter your name into a raffle, and you might win a trip to Hawaii. No? You don't like Hawaii? Where do we want to go? Costa Rica. So both of those are sales promotion tools that we can use to ultimately increase sales but also build brand awareness. Do you agree that if we have a contest it creates a lot of buzz, some word of mouth, people talking about the contest, the competition about developing a new logo, or deciding the packaging for a particular company, or what the new website is going to look like, or the color for their brand, in other words the trade dress? You guys remember, it wasn't that long ago
that UPS had a campaign around its trade dress. What is the trade dress of UPS? Brown, brown. So there was a lot of discussion around what should be the color of UPS. There was a good contest this year, the shoe contest on Instagram; it was, like, viral all over the world, and they would pick a winner each day, and whoever won got a 500,000 gift card. Nice. So what, you would upload a photo of your shoes? Right, your favorite pair of shoes. Oh, okay, and whoever won. Good, that's excellent. So we're trying to engage our customers. Would you agree that a contest like that achieves a high level of engagement with your customers or your potential customers? And you see what buyers want, so it's like, oh okay, maybe we have those shoes, maybe we should get them; that's also a good idea, interacting with customers, right, absolutely. So to the extent that there's a level of direct response, that's very helpful, because that allows us to measure the effectiveness of our marketing communications. For example, in our print ad, we said we have some key components: we have a headline, we have the image, we have the body copy, and then we have the logo, and also the symbol, and very often the packaging. Why does the company show the packaging? I think we're all in violent agreement that you need to include the logo in the print ad, because we're trying to create brand awareness; but why show the packaging? So people know what they're getting. So people know what they're getting; tell us more about that, Brandon. Why is it important that they see it in the print ad? So in the magazine they look at it and they see this picture, and they know what the product looks like. Okay, so that's interesting. And then we have some headline that's going to try to get people's attention, and we could also have a sub-headline. Remember, the purpose of the
headline is to get attention. And when does that become significant, Brandon? Yeah, so in other words, the packaging: what we want to happen is that people are going to recognize the packaging when they see it in the store. The first time they see it is not in the store; they already saw it in the TV commercial or in the print ad, so when they go into the store they're going to recognize it. You see why that's important when you're shopping: so that you'll recognize it at the point of purchase. And some companies have trade dress that's strongly associated with their brand, whether it's Kodak yellow or red for Coke; some companies definitely have a strong association with a particular color, a very strong trade dress, and we want people to be able to recognize it at the point of purchase. Questions? Have you heard about the carpets in casinos? The carpet in the casino is designed to encourage you to spend more money playing the machines or something. Oh, the way the carpet is designed; that's interesting, I'll have to keep that in mind. Just don't look at the carpet, folks; that's the key takeaway, right? That's interesting. So how do we know how many people saw this print ad? We're going to talk in a moment about the level of circulation for a given magazine, but how are we going to be able to measure it? Go ahead, Brandon. So one of the things that has enhanced the popularity of outdoor advertising is a device that Nielsen Media developed called the Npod. The Npod is a way for them to track how many times, for example, a person will pass by a given billboard or a bus shelter or a bench where there's advertising, which previously was very difficult to measure; we could only estimate how many people see a particular poster or billboard, whether it's in an airport or a subway or on a bus. But that
device is a way that we can get a sample for a given individual and measure how many times they pass a certain type of outdoor advertising, even a particular ad in a particular area, which was not possible before. For print ads, in terms of direct response, we could use, for example, a 1-800 number. The reason why this works is, what do we want people to do? We want them to call. One of the things we're definitely eager to understand is the level of exposure; we want to know our reach, how many people saw this print ad. Now, our print ad is going to be in 10 different magazines. So what does that mean? We're going to need 10 different 800 numbers. You see why? Because when they call in to get a free sample, or to ask that we send them a coupon, or a catalog, or a brochure, we're able to track that. We know how many calls came in, and if we have a different 1-800 number for each magazine, then we know one 800 number is the people who saw the ad in Better Homes and Gardens, another 800 number is those who saw the ad in Ebony, another is those who saw the ad in Vogue, and another is those who saw the ad in Cosmo. That's very helpful, because we need to know if our advertising, our promotional campaign, is effective. Sometimes our commercial may not be a success, even though we do testing before we launch a particular campaign. So one of the metrics is that we do brand awareness research on an ongoing basis, because for our integrated marketing communications program we're going to want to increase the level of awareness, and we're going to look over time at whether the level of awareness has been increasing. Now, you can't expect that a month later you're going to see a big change; we're looking over time. So the first time you do the research, it's not so important whether your brand
awareness is at 20% or 50%; certainly if your level of brand awareness is at 20%, then you have a bit more of a challenge ahead of you. But what we're looking at is change, if we're measuring the effectiveness of our campaign. We have all these promotional elements in play, and we're going to look and see, a year later, is it still 20%, or is it now 28%, or 34%, or 41%? We continue to measure that over time and look for changes; that's an indication of our success. But there's got to be reach, there needs to be exposure, and that's why this type of direct response, where we encourage people to call or to visit our website, matters: those actions, remember, attention, interest, desire, and action, that action is memorable, well, actually measurable; it will be memorable, but importantly it's something that we can measure. And we need to measure, because some companies are spending $500 million a year on advertising, some are spending 100 million, 50 million; it's a big investment. That's what we tell the finance department, isn't it? Do you think the finance department in an organization, and the accountants, want to spend $200 million on advertising? They think it's an expense, but we keep saying, no, it's not an expense, it's an investment, and over time we're going to see a return on our investment. We're going to reposition our brand in the marketplace as an innovative, contemporary, user-friendly brand, and as a result we're going to sell more products, and if we sell more products, I like to think we're more profitable. So we need to be able to compare the different magazines. Let's say we have two magazines, Better Homes and Gardens and Ladies' Home Journal, and let's say that for Better Homes and Gardens the cost of a full-page color ad inserted one time is $400,000. That's actually a real number; it's not exactly $400,000, but it's approximately $400,000. That means to run a full-page ad in
Better Homes and Gardens one time is about $400,000. Now, in Ladies' Home Journal it's about $200,000, approximately. Which one should we advertise in; which is the better deal? One magazine is going to charge us $400,000 and another is going to charge us $200,000. Come on, aren't you guys business students? $200,000, $400,000, which is the better deal? What about the impressions, what about the reach? You're just giving us the price, right? Go ahead. Does it also depend on the demographic as well? It depends on, right, we want to look at how much coverage there's going to be and the level of waste. You're right: there may not be a good match between the profile of our target audience and the profile of that magazine's readership. Brandon, right, so we need to know the readership, the demographics of the readership, and importantly, in terms of reach, for magazines we need to know the level of circulation. That's why I didn't tell you; it's not enough to know, okay, this one is cheaper. It is cheaper, but how many people are we going to be able to reach if we advertise in this magazine? In the world of magazines we refer to that as circulation. So we need to look at the cost per thousand; CPM is cost per thousand. And students always ask me, they say, I don't get it, why is it not CPT, why are you trying to confuse us? M is the Roman numeral for a thousand (the Roman numeral for 100 is C); that's where they came up with CPM, cost per thousand. We need to look at the cost per thousand so that we can make a comparison, because we can't make a comparison here; we know that this one is twice as expensive, but like Edward is saying, that's not enough information. So what we do is take the cost and divide it by the level of circulation, and then multiply it by a thousand. Why do we multiply by a thousand? Why don't we divide it by 10 and then multiply by 17.6, an easier number to deal
with? What do you think? That was my idea of sarcasm, okay, but there's a reason why we multiply by a thousand, why it's per thousand. If we take the cost and divide by the circulation, that gives us the cost to reach one person, but we're looking at the cost to reach a thousand people; an industry norm is that we look at the cost to reach a thousand people. It doesn't mean that you can't look at the cost to reach one person, but in terms of the way we buy and sell media, we're very often looking at groups of a thousand. All right, so let's see if we can do this calculation. Now, as a rule of thumb, when you calculate the cost per thousand in the United States, you're looking at a cost per thousand between approximately $20 and, let's say, about $120. Why do I share that with you? I give you that insight because if you do this calculation and you get something like 12,622, something is wrong, or if you get something like 1,397, something is wrong. That's generally the range we're looking at. It depends on the medium, whether it's a commercial on NBC, or during the Super Bowl, or in a magazine, or in a newspaper, but that's a range in the United States that's helpful to keep in mind. So let's see who can do the calculation: what's the cost per thousand for Better Homes and Gardens? 50. So the cost per thousand, the cost to reach a thousand people if we advertise in Better Homes and Gardens, is $50: they have a circulation of 8 million, and the cost to reach a thousand is $50. Now, the reason we're doing this is because we want to be able to compare these two options, and we're going to do this for all the magazines; we could do this for 20 magazines. We're trying to have an apples-to-apples comparison, because we can't compare this and this directly, because the
circulations are different. And so, what is the cost per thousand here? Now what did we find out; what does this tell us? The cost per thousand is the same. Better Homes and Gardens is double the price, but we reach twice as many people. And this is the way advertising is priced: buying advertising is based on the level of reach and frequency. This is one insertion, so the frequency here is one, and the reach is based on that circulation. Ladies' Home Journal understands that they reach half as many people as Better Homes and Gardens, so that means they have to charge less; the cost per thousand in this case is the same, but the cost of a single ad is going to be less, in this case approximately half as much. So with the way these ads are priced, we find that the cost per thousand is the same, but that's not always the case. What if we said the circulation, instead of 4 million, was 2 million? Then what would be the cost per thousand? What's the cost to reach a thousand if the circulation is 2 million? So now what do we do? If the cost per thousand were the same, then we would look at, like Michael and some others were saying, the brand, and we would look at the demographic profile of the readership and see which one matches best; it's never going to be a perfect match, but which one is going to provide the least amount of waste and the most coverage. But what about here? What does this tell us, if the cost per thousand for Ladies' Home Journal is $100? Yeah, it doesn't sound like such a good deal now. Remember when I told you before it was $200,000 versus $400,000, and Edward said, well, we don't really have enough information to decide which is better? It's true it's less expensive; the cost of the ad here is half the cost of the ad there, but in terms of cost per thousand it's much more expensive. So
you see, even though the ad itself is half the price in this modified example, you say, coach, I've got it, look, that's half the price, this is a deal; but the cost per thousand is doubled, because the circulation is much lower. Here we reduced the circulation, for example purposes, to only two million. Questions about that? No? You guys know how to calculate cost per thousand; silence means agreement. All right, we've got a couple more minutes; let's see what else might be some of our objectives for our marketing communications plan. Remember, we said our promotional elements include advertising and publicity. Last time we said publicity is an unpaid form of advertising; the problem is we have no control over what the reporter or news editor is actually going to say. Some people say there's no such thing as bad publicity, because they think it just creates hype, it creates word of mouth. It depends on the category; certainly if they say your product isn't any good, I don't know how you could spin that into a positive message. But with all these different promotional elements, whether it's advertising or publicity or sales promotions or direct marketing or public relations, in addition to creating brand awareness, which includes recognition and recall, who remembers the difference? What's the difference between brand recognition and brand recall? Recall is unaided, yes, recall is unaided, absolutely. So brand recall is unaided, and brand recognition is a type of aided awareness. Who could explain brand recognition? Go ahead, Brandon. In that sense, you remember the brand because you see it: okay, this is Pepsi, this is a Pepsi commercial. But when it comes to brand recall, going to a restaurant, you have to kind of go back searching in your dome, trying to remember, okay, I think it's Pepsi that I like. So yeah, can I paraphrase what you said? What Brandon is saying is that we're going to recognize that logo or
symbol when we see it. Based on our advertising and our other promotional elements, we're going to be able to recognize the logo, the symbol, the packaging when we see it, and usually we're most concerned about seeing it at the point of purchase, which means in the store. And then Brandon said that recall is when we've got to search our mind, our dome, our memory, for the name of the brand. For example, in a restaurant they're not holding up flash cards and saying, do you want this, how about this or this? You have to retrieve it from your memory. They say, what do you want to drink, and then we have to remember Pepsi, Sunkist, Mountain Dew, whatever is in our consideration set that we prefer. Remember, we said that the consideration set and the evoked set are related, but the evoked set is all the brands that come to mind in a particular category, and the consideration set is those that we would seriously consider purchasing, or maybe already do purchase. Questions, comments?
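The CPM comparison walked through above can be checked in a few lines. The figures (a $400,000 ad with 8 million circulation, a $200,000 ad with 4 million, and the modified 2 million case) are the approximate ones used in the lecture.

```python
# CPM (cost per thousand) check for the magazine example above,
# using the lecture's approximate figures.

def cpm(ad_cost, circulation):
    """Cost to reach one thousand readers: cost / circulation * 1000."""
    return ad_cost / circulation * 1000

# Better Homes and Gardens: $400,000 full-page ad, circulation 8 million
print(cpm(400_000, 8_000_000))   # 50.0

# Ladies' Home Journal: $200,000 ad, circulation 4 million
print(cpm(200_000, 4_000_000))   # 50.0 -- same CPM at half the ad price

# Modified example: same $200,000 ad, circulation only 2 million
print(cpm(200_000, 2_000_000))   # 100.0 -- twice as expensive per thousand
```

This makes the professor's point concrete: the sticker price of the ad alone is not comparable across magazines; only CPM puts them on an apples-to-apples basis.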
Marketing_Basics_Prof_Myles_Bassell | Marketing_3.txt | [Music] Our SBUs now — yes, go ahead — strategic business unit, SBU, a strategic business unit. Now let's talk about the market product strategies, the four generic market product strategies. Okay, so we talked about an overview of marketing, the marketing mix, which we said is those controllable factors, the four Ps, market segments; we talked about the target market, different levels of plans, the mission, the vision, the values, marketing metrics, the BCG model. Now we're talking about the market product strategies. What are they? Ready? Write this down, here we go. Market product strategies — these are the four what we call generic market product strategies. The first one is market penetration; so market penetration is a market product strategy. The second one is market development. The third one is product development. And the fourth one is diversification. So we have market penetration, market development, product development, and diversification — those are the four market product strategies. So let's look at market penetration. What is market penetration? Market penetration means we're selling more of our existing product in our existing markets. So this could be a strategy for every company, basically; we need to support this with tactics that are going to allow us to achieve this market penetration strategy. But our market penetration strategy might be to increase the sale of ice cream in our existing markets — our existing products. So increase the sales of our existing products in our existing markets. How are we going to increase the sales of our existing products in our existing markets? How are we
going to sell more ice cream, for example? Anybody like ice cream? You like ice cream? What kind of flavor, what's your favorite flavor? How many people like chocolate as their favorite flavor — raise your hand if chocolate is your favorite flavor. Sorry, banana. Raise your hand if vanilla is your favorite flavor. Raise your hand if strawberry is your favorite flavor. Oh, well, that's a significant drop-off from chocolate to vanilla to strawberry. So we need to think about which flavors we're going to offer; that's part of our product line. Are we going to offer one flavor, or two flavors, three flavors, ten flavors? So market penetration is increasing the sale of our existing products in our existing market. So for our ice cream company, our goal could be to sell more ice cream — or in this case, let's say our super premium ice cream — to Americans, like Haagen-Dazs, for example, or Ben & Jerry's. Anybody like chocolate chocolate chip ice cream? Mmm, delish, right? Vanilla Swiss almond, or how about Ben & Jerry's Chunky Monkey, or New York Super Fudge? Those are considered to be super premium. So in terms of our pricing strategy, a company can have a good, better, best, premium, super premium pricing strategy. So we could segment the market. When we segment the market, what we do is divide the market into segments. We could do that geographically, demographically, based on the usage rate, based on psychographics. Price is a way you could segment the market — divide the market into sub-markets. So not everybody wants to buy a premium or super premium product; some need to buy a product that we could classify as good, and then another category: better, best, premium, super premium. So think about Toyota, for example. They have Scion, which is made by Toyota, then Toyota, then Lexus. So isn't that a good, better, best pricing strategy? So how do you make that real? Those products each have a different master brand; each of those products is wrapped in a different brand, because it's not enough to just say, well, we have products at
three different price points, because when we position our products in the market there has to be a crystal-clear positioning. You can't be all things to all people, so you can't have the same brand. Let's say, for example, in dishwashing soap, Dawn is considered to be at the better end of the category, and then we have — what else — Palmolive is a direct competitor of theirs, and what about Ajax, another competitor. So if you look at that category there's good, better, best, but importantly there are different master brands for each; the company markets them using different master brands. Market development is to increase the sales of your existing products in new markets. So selling more ice cream — not electronics, we're an ice cream company now — selling more ice cream to new markets. Or if you want, we could go back to the electronics example: selling more electronics, not just to existing customers — yes, to existing customers too — but in different markets. So new markets, current products. So market penetration is to increase the sales of existing products in existing markets, and market development is to sell existing products — the same ice cream, we haven't talked about customizing the product yet — in new markets. So that would suggest that we will reach new customers as well. But market penetration is selling the existing products to the existing customers, and we said we're going to do that by modifying the marketing mix: we might change the price, we're going to advertise, we're going to offer promotions. So how do we sell more of the product? Well, Chanel told us a while ago that we could advertise. Yeah, we can advertise; we could have print campaigns and radio spots and TV commercials, billboards. One of my questionnaires is going to be about outdoor advertising and billboards; another one is going to be about TV commercials. Product development — what is product development? It is to develop new products to sell to the current market. So now that we sell
to that market, we want to sell a new product. So, for example, for the ice cream company, maybe now we're going to sell a Haagen-Dazs brand of, say, chapstick, or clothing, or soda. Now we have to consider the brand elasticity — another thing that we study in research is brand elasticity: how far can we stretch our brand? Now, the awareness of Haagen-Dazs or Ben & Jerry's can be very high, the brand awareness, but that doesn't mean that it can stretch into the clothing category, or into the vitamin category, or into the electronics category. How many of you would buy a Haagen-Dazs smartphone? So you can't be all things to all people. Now, brands expand, they have brand extensions, but they extend into categories that are closely related to the products that they already sell — anything that tastes like ice cream. Huh, you got me, yeah, now you got me. So diversification is the other market product strategy: selling a new product in a new market. Product development, we said, is selling a new product to our current markets; diversification is selling a new product in a new market. So instead of — we're not talking about selling ice cream to Americans anymore, we're talking about selling clothing to customers in Brazil, for example. That's diversification. So we have market penetration, market development, product development, and diversification. So who can tell us the difference? Market penetration is increasing the sales of existing products in existing markets — that means selling more ice cream to Americans. Market development is selling more ice cream but selling it to customers in Brazil; so that's market development. So you see, they're different, these strategies. Market penetration: sell more ice cream in our existing market, which is America. So sell more ice cream, the product we already sell, to our existing customers, Americans. Sell more ice cream to Americans: current product, current market. Market development, another strategy, is to sell
our existing product, but in new markets. So not just in America — now we're trying to go more global. You want to sell ice cream in Brazil, or we might even say, now we want to sell in all of Europe. So before, we might have only been selling in the US; now, with market development, we might want to sell in North America, which includes the United States, Canada, and Mexico. Or we might say that as part of our market development we want to sell our existing product, our ice cream, in not just the United States and not just North America, but North America and South America and also Asia. So same product, but now we're going global: market development, existing product in new markets. You're getting it — by selling into more supermarkets, and also by increasing demand for the product with the consumer. How would you get consumers to buy more? Well, what, you don't think that a coupon would work? What about that? So if you give a coupon for a dollar off Haagen-Dazs, will consumers buy more? Yeah, a certain percentage of them definitely will — it will get consumers to buy more of our Haagen-Dazs ice cream, absolutely. The suggested retail price, the MSRP, manufacturer's suggested retail price — the MSRP is $3.29 for that product, but they have promotions and they sell the product sometimes two for five dollars. This is something I definitely know a lot about: two for five dollars you can get Pepperidge Farm cookies, right? So, Teresa, say it over — how do they make money? Yeah, especially in the grocery channel, they don't make the kind of margins that are made in other categories, like clothing, for example. It's very common for retailers to make 100% markup and 200% markup; so if they buy the jeans for $50, they sell them for a hundred, 150, or 200 dollars. In grocery, on most of the products they sell, they're making like 10%, 12%, but they sell a lot of those products, they sell a lot of units; the percentage markup is low. So you're thinking, Bo, how could they lower the price of
$3.29 to $2.50? Yeah, right, that's only, what, 79 cents — but were they really making 25 points on cookies? I seriously doubt it. So the question is, Teresa, how do they do that? Well, the manufacturer funds that. So that means that instead of them buying it from the manufacturer at $2.87, now the manufacturer sells it to them for a limited time at, let's say, $2.25; so now they could sell it at $2.50 and still make money. But sometimes a promotion is funded a little bit differently; sometimes they might actually have to buy the cookies for $2.40, because they realize that that's going to generate a lot of foot traffic. Any time you have that type of promotion — Pepperidge Farm cookies, like you say, everybody knows it sells, but $3.29 is now 2 for $5, or $2.50 each — that's going to bring a lot of people into the store. The reason why we would even consider doing that, even without the support of the manufacturer, is because they're going to buy orange juice for $4.49 for half a gallon, and that's not on sale. All right, so product development: we're going to sell a new product to our current market. So now, in the United States, in addition to selling ice cream, we're also going to sell Haagen-Dazs t-shirts. Now, you probably think, yeah, coach, I do think there's an issue with brand elasticity there; I don't think that that's going to go over well. Or Haagen-Dazs baby food — probably not, right? But that would be an example of product development, where we're going to sell a new product in our current market, in the United States — not in Canada, not in Mexico, not in all of North America and South America and Europe, no, just in the United States, our current market — but we're going to sell a new product. And diversification, we said, is to sell a new product in a new market. So now we're going to try to sell those t-shirts and those cameras and that baby food in Canada, Mexico, South America, and Europe. So those are the four generic strategies that we talked about in chapter 2. So let's keep moving — we still got
three more hours, but let's keep moving. So — SWOT analysis. SWOT is an acronym for strengths, weaknesses, opportunities, and threats. When we want to do a situation analysis, one of the things that we can do is what's called a SWOT analysis: we look at internal strengths and weaknesses, and we look at external opportunities and threats. So in terms of our strengths, what would be a strength for, let's say, Haagen-Dazs or Ben & Jerry's? What would be one of their strengths? Teresa: large variety of flavors. What else? Quality. So one of their strengths is their diversity of flavors. Kasia — gosh, I'll get it, I'll get it — Kasia says that they have high quality; that's one of their strengths. Good. The price can be a strength, yes. They're a mature brand — which suggests what? Consumer loyalty. Yes, it's a mature brand, absolutely; consumer loyalty, yes. Their marketing tactics is one of their strengths, yes. Good. BP, yeah — their logo. What about their logo? You can see that's a strength. Yeah, absolutely, so they have a favorable brand image. And how do we communicate the brand? Well, a company has a logo; the logo is what's called a word mark. The logo is a graphic representation of the brand name. So Haagen-Dazs, those stylized fonts that they use as their logo — what BP is saying is that that's something that's recognizable; it has a high level of brand awareness and a favorable brand attitude. People think favorably about that brand; they have positive imagery about that brand. What about — what do you think is one of their weaknesses? The price. So they are a premium product; one pint is five dollars, and for the Pathmark brand you could buy two half-gallons for that price, right? What else? What would you consider to be another weakness for them? The competition. So there are competitors even in the premium category. We could talk about whether or not the
Pathmark ice cream or the Breyers ice cream are direct or indirect competitors, right? We'll talk about that another time, but the competition — there's definitely competition, even in the premium category. Haagen-Dazs, Ben & Jerry's, those are premium brands and those are competitors. What else? Yes — what's your name? Christoph. Christoph, go ahead. Nicole, sure, tell us more about that. Oh, okay, yes — so it tastes delicious, but it's very high in calories and fat. So as much as we think that they're going to sell a lot of ice cream, there are a lot of people that are trying to lose weight, that are dieting, that are going to the gym every day. So Crystal makes a good point; that's a weakness for the company. So what Theresa's saying is that they recognize that those segments exist, that there are different segments in the market for ice cream: those who are health conscious and those who are presumably not health conscious. So Theresa says they're addressing that — they have alternatives, alternatives that give people the same type of ice cream experience but as a yogurt, or as a low-fat or low-calorie version of the product. What about opportunities? Do you feel they have any opportunities? What would be some of their opportunities? Right — developing new products, absolutely; so developing new flavors is an opportunity for them, yes. Yes, okay, yeah, absolutely, good point. Go ahead — making certain flavors available for a limited time only, limited-time products. Yep, that's going to create interest and excitement about their products, absolutely. The US only represents about 5% of the world population, so although the market is strategic for most companies, there's a whole world that has yet to experience Haagen-Dazs ice cream, right, or Ben & Jerry's. What would you consider to be some of their threats? What would be a threat, an external threat, to the company? Yeah, absolutely — so while we're thinking, well, we want to sell our product in Europe and
the Middle East, presumably there are companies there that already sell ice cream, right? Going to tell us your name? Nisab — absolutely, so you have to consider that. Those people are basically like the same company, they're just as popular, definitely — so direct competitors, not just indirect competitors, absolutely. Accessibility — so not just selling in grocery stores, for example, but having their own company stores, yes. Teresa? No? Theresa? No — Chanel, Chanel; I hadn't put back on my glasses, I'm good now, okay. Ah, not to be confused with Theresa, right. Definitely, that's a good point, you know. Definitely, okay, good — definitely, that's a threat: you have your Haagen-Dazs employees going to Ben & Jerry's. Yes, Victoria — and then unsatisfied customers. So absolutely, that's a problem, something that we need to manage and try to minimize: buyer's remorse, also known as post-purchase cognitive dissonance; for now we'll just call that buyer's remorse, is that okay? Yes, okay. Absolutely — so regulation. So we talked about the controllable factors; we didn't talk about the uncontrollable factors. Some of them would be, for example, political and regulatory, social factors; those are things that are beyond our control. We said the marketing mix is those controllable factors: we can control what product we sell, the price we sell it at, where we want to sell it, our promotions, our advertising. But — tell us your name — Kai is saying that there are also some uncontrollable factors, things that the marketing managers, the marketing executives, have no control over, like government regulation, political instability, social unrest, culture, technology, accessibility; those are some uncontrollable factors. Anything else? Tell us your name — Christoph, again, okay, I'm going to remember. Yes, absolutely, recalls — that's definitely a problem, a potential threat. Lactose — oh, lactose, yes, lactose intolerance, absolutely. You guys know a lot about ice cream; you're my kind of students. Wow, all
right, good job, you guys are amazing, give yourselves a round of applause — my students rock, no joke. All right, so when we started I told you what we're going to talk about, right? And we talked about that, right? So we've still got 20 more minutes. All right, we're almost done. So I told you what we're going to talk about, I told you, and now I'm going to tell you what I told you, real quick. Briefly: we said marketing is about creating, communicating, delivering, and exchanging value — does that sound familiar? Yes, good. The marketing mix: product, price, place, and promotion; that's the marketing mix, the four Ps. All right, the four Ps — listen, these are the controllable factors and they're integrated: the product, the place, the promotion, and the price. That's what we see here; see how those boxes are all connected — those four Ps, they work together. These are some examples. We said that the product is what's wrapped in the brand, and we talked about that, and these are different pricing elements. Promotions: remember we said advertising, even though it doesn't start with a P, right, is part of promotion, along with sales promotion and public relations. That's what part of the first questionnaire is about — ethics — and what is it about ethics that we're concerned about? The public relations component. So when looking at ethics as it relates to apologies, for example: what is the obligation of the company when they make a mistake? Do they need to fix the mistake? Do they need to tell the public that they made a mistake? Those are all public relations issues. And then the place: we talked about different channels of distribution — grocery stores, drug stores, convenience stores (an example of a convenience store is what? 7-Eleven), and wholesale clubs. Okay, now we talked about the three different levels of strategy: the business unit is between the corporate and the functional. So for the functional-level strategies, what we talked about today was the marketing plan. There is a sample
marketing plan at the end of chapter 2. I also have, under course documents and lectures, samples of marketing plans — there are sample marketing plans there; one of them is about Paradise Kitchen. All right, so take a look at that. Do you remember this? We said the mission statement — what is the mission statement for Star Trek? To explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before. That's an example of a mission. Our example of a mission was a little bit different. Who could tell us one of the mission statements that we talked about? Who wrote it down? Tell us your name — Danielle. North America — yes, we said that was one of our mission statements. Marketing metrics — does this look familiar? Profit, sales. We don't need to go over this one again, right? We did it three times. Those are marketing metrics, some of the key marketing metrics. And look at this, the BCG matrix. What does BCG stand for? Boston Consulting Group, that's right. And that's the dog; this is the cash cow; the dog, not to be confused with the dinosaur; and over here are the question marks. And you see here, on the BCG matrix, the market growth rate is on this axis and market share is on this axis. So quickly: the stars have a high market growth rate and a high market share — the market share for the stars is high and they're in a category that's growing, so the growth rate is high. For the cash cows, the market share is high, but the growth rate is low. So when we said, for example, the beverage category — it's a large category but it's not growing. Two to three percent is not what we consider to be market growth; that's a mature category, a mature market. When we're talking about market growth we're thinking of 20, 30 percent growth per year, 40 percent growth per year; that's what we mean by a significant amount of growth.
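The BCG quadrant logic just described — market growth rate crossed with market share — can be written down directly. A sketch with assumed cutoffs: the function name and the 10% growth / 1.0 relative-share thresholds are ours, since the lecture only contrasts mature categories (2-3% growth) with high-growth ones (20-40%).

```python
# BCG matrix classification: a sketch. Cutoffs are illustrative assumptions;
# the lecture contrasts 2-3% growth (mature) with 20-40% (high growth).

def bcg_quadrant(growth_rate_pct, relative_share,
                 growth_cutoff=10.0, share_cutoff=1.0):
    """Classify a strategic business unit into a BCG matrix quadrant."""
    high_growth = growth_rate_pct >= growth_cutoff
    high_share = relative_share >= share_cutoff
    if high_growth and high_share:
        return "star"
    if high_share:
        return "cash cow"      # high share, low growth
    if high_growth:
        return "question mark"  # high growth, low share
    return "dog"

print(bcg_quadrant(30, 1.5))   # star
print(bcg_quadrant(2.5, 1.8))  # cash cow: big share in a mature category
print(bcg_quadrant(25, 0.4))   # question mark
print(bcg_quadrant(2, 0.3))    # dog
```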
We say high growth. Market product strategies: market penetration, market development, product development, and diversification — and there is an example, a Ben & Jerry's example, okay. So we talked about this. Questions about that? No? You're sure? SWOT analysis — we talked about SWOT analysis: strengths, weaknesses, opportunities, threats. And here's an example for our ice cream business. What appears here as some of the strengths: well-known brand names; it complements the other ice cream brands — Ben & Jerry's is owned by Unilever, and they sell other brands of ice cream as well. And another strength is that they are recognized as being a company that is socially responsible, that does have values that are relevant to consumers — values that are real, not just on their website, but values that they take seriously, and that's obvious based on their actions, like the amount of money they give to charity, for example. In terms of weaknesses — interesting, right? Their level of social responsibility has an impact on their profit; you could argue that they are less profitable because of the money that they give to charity. But then again, you could also make the argument that, well, if they weren't giving that money to charity, maybe their sales wouldn't be as high. So do you think that people buy Ben & Jerry's because they know that they're a socially responsible company, that they give money to charity, that they believe in fair labor practices? I think you said yes. All right, I'm going to make that available for you on Blackboard. So this is what we said we were going to talk about, that's what I talked about, and now I've told you what it is that we talked about, right? Three times. All right, so let's see. So everybody's going to get the book, right? Everybody has the book? That's good. So you need to get the book — that book you can get for $15 — but you definitely need to get the book, because for some of the homework assignments you definitely need to have the book.
Specifically, what it says in the book is what you need to write on Blackboard; it's not a research assignment. We have some research assignments, but the questions that relate to the cases in the book — we have cases that we're going to cover about Amazon, Mall of America, British Petroleum, Prince Sports — those are not research assignments. The answers are, for the most part, in the book; you have to do some critical thinking and some analysis and evaluate the information you have, but don't look for the information somewhere else other than the textbook. So you really need to have the textbook. You're saving $200, so really, I can't think of any reason why you wouldn't get the textbook. If it was $250, like some textbooks are that some professors have you buy, I could understand; but this one — I'm deliberately using the older edition so you can get it very cheap, so that you can be successful, because your success is my number one priority. But sometimes students don't listen to coach, and what can I do? All right, so log into Blackboard and post the introductions, but by tomorrow make sure you complete the survey about ethics and public relations, okay? All right, let's see: Katherine yes Katherine a can that's you calm icon and let's see Alban is here Michelle our gang over okay let's see like oh is here let's move along okay Solomon Benny need up I see Loredana IOT cow soon Hussein bigger mom Danielle Burton well Shannon Brown Shannon Shannon next week she's gonna say I was there Shannon no turgid Denise Camacho pinging Alisa challan Jing Chang Chen Oh two people reason which one Ling young young when Caroline Rick miss Clark Victoria yes Victoria Veronica Aaron James D'Allesandro you came in any luck Alexis - Alexis which one no Goods another one yes this one yes gaga yes that one Kristen Gordon crispin hallo let's see simply disentis Diablo he's back yes yes yes right Brandon Bianca and Teresa Paul Chanel
Jonathan Fernandes Lisette fortune Ella Rashad Rashad bling I didn't solace Sanjay Sanjay |
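Going back to the retailer economics from earlier in this session — 100-200% markups on clothing versus thin grocery margins, and the manufacturer-funded cookie promotion — the arithmetic can be sketched directly. The dollar figures are the ones quoted in the lecture ($50 jeans, $3.29 MSRP, a 2-for-$5 deal, manufacturer cost dropping from $2.87 to $2.25); the function names are ours.

```python
# Markup and promotion math from the lecture's examples.

def markup_pct(cost, price):
    """Percentage markup on cost: buy at $50, sell at $100 -> 100%."""
    return (price - cost) / cost * 100

def margin(cost, price):
    """Dollar margin per unit."""
    return price - cost

# Clothing: 100%-200% markups are common.
print(markup_pct(50, 100))   # 100.0
print(markup_pct(50, 150))   # 200.0

# Grocery: cookies bought from the manufacturer at $2.87, MSRP $3.29.
print(round(margin(2.87, 3.29), 2))   # 0.42 per unit at regular price

# During the 2-for-$5 promotion the manufacturer funds the deal,
# selling to the retailer at $2.25 so it can retail at $2.50:
print(round(margin(2.25, 2.50), 2))   # 0.25 -- still profitable per unit
```

This is why the retailer can cut the shelf price by 79 cents without eating the whole reduction: the manufacturer absorbs most of it, and the promotion pulls foot traffic that buys full-margin items like the $4.49 orange juice.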
Marketing_Basics_Prof_Myles_Bassell | 14_of_20_Marketing_Basics_Prof_Myles_Bassell_92712.txt | Last time we talked about segmentation, and we talked about positioning. Remember we said that positioning is the space that our brand occupies in the mind of the customer. We can't be all things to all people, so we need to decide where we want to be positioned in the marketplace based on certain key dimensions that are important in our category. Positioning is a very important concept in marketing, and what we do to understand how we're positioned is research — we do market research and we ask customers, or potential customers, their attitudes and perceptions about our brand. So we're doing branding research; we're trying to understand how people in the marketplace perceive our brand, and importantly, relative to other brands. And that's why we're going to look at a perceptual map, because what a perceptual map does is show us a graphic representation of our positioning relative to other brands in the marketplace, which is critical. That's a critical aspect of the perceptual map: to understand where our brand is positioned relative to the competition. And we should keep in mind that the reason why we're doing this research and creating these perceptual maps is because, if we're not positioned on the map — or in the minds of our customers — where we want to be positioned, we can reposition ourselves. So if the perception in the marketplace is that our brand is of low quality, we could change that perception. One of the things that we could do is advertise, for example, and communicate why it is that our product, that's wrapped in that particular brand, is of a high quality, or a higher quality than people perceive in the marketplace. So again — remember, I'm always telling you — it's not just like, oh, it's interesting, we do this map, and fun times, right? Well, once we prepare the perceptual map, we have to do
something with that information, because information is only potential power; information is only power if put to use. So once we prepare the perceptual map and we have that insight, then we have to decide what our action is going to be. Maybe there's actually going to be some corrective action; we're going to try to change the perception of, and attitudes toward, our brand in the marketplace. Remember, the brand is what's wrapped around the product — so, for example, this is a product; the brand is what's wrapped around the product. All products in a given category have the same generic functionality, which means they all do the same thing. Cars, regardless of the brand — whether it's Ford, Toyota, Mercedes, Lamborghini, Porsche — all provide the same generic functionality, which is transportation. What makes one product unique from the other is the brand; the brand is what distinguishes one product from another. That's what communicates the value. And brands have personalities, brands have identities, and we try to create strong, unique, and favorable associations with our brand. So let's take an example: the positioning of cars in the marketplace. We're going to look at two dimensions, price and quality. Questions so far, any questions about perceptual maps? Would those two dimensions be static, or would they change? It depends on our category. We could plot whatever dimensions we want. For this example we're going to look at price and quality, but we could look at the level of innovation, or the level of, let's say, fashionability — some products may be very fashionable, others less fashionable — the level of reliability, the level of durability. And we might want to look at multiple dimensions, so we might create multiple perceptual maps. And remember, we need to look at both the points of parity and the points of difference. Who remembers what those are? Points of parity are similar features that you can offer, the same as the
competition, and the points of difference are the opposite — features that they don't have but you have. Absolutely. So points of parity are those features and benefits that we share with the competition, and the points of difference are those features and benefits that we have that the competitors don't, whether they're direct or indirect competitors. Remember, last time we made a distinction: which is more important, points of parity or points of difference? What about points of parity, do we care about that? Why? Who says yes? Yeah, we do care about that, but not as much as the points of difference, because that's what makes us stand out. Right — the point of difference is what differentiates us from our competition, and usually when we differentiate ourselves in the market we try to get a premium for our product; we charge more relative to other competitors. So think about, for example — remember last time we talked about Sony. Sony charges more in the marketplace because of these points of difference that NM is telling us about, one of which is quality. They position themselves in the minds of consumers as being high quality; there's a perception that their product is of a high quality, and so they're able to charge more. Isn't that the reason why we pay more for Sony products than we do for Panasonic? Panasonic products are much less expensive than Sony, because Sony positioned themselves as a better-quality product, and because it's a better quality, they're able to charge more. But what about points of parity? That's going to be the basis of our unique selling proposition — so Yan is saying, well, we can't even talk about the points of difference until we know what our points of parity are, until we know what we have in common. And importantly, the points of parity are the minimum requirements in that given category, and that's why it is important; we shouldn't take that for granted. To say, well, our product is safe — but that's just a point of parity. Well, hello, if your product is
not safe then you're not even you're not even in the running now in some um categories safety is not an issue like jeans what is that something that that's people are concerned about that your jeans are safe I mean there must be a safety Factor because I I guess they have to be very durable although I think that most je are not very durable from what I see um on campus and in the Hood um people jeans are like all torn up and everything and so maybe durability maybe that's what happens you wash them too many times but no that we buy them like that and actually interestingly we pay a premium for those some of these um torn jeans are $300 some some people started cutting them out of themselves questions so points of parity points of difference both are important Denise what syllabus wow she's got the syllabus y'all look at that that's what up that's amazing what about the rest of you the rest of you have the syllabus what you too Shantel you're on all right so now we're going to take an example right so Edward we can look at different dimensions and it's likely that we are because price and quality those are pretty common in most categories that's sort of like that's where you start that's the minimum when we we do um perceptual mapping and also we do branding is we're going to ask people about their perception about the quality of our product now keep in mind quality means different things in different categories do you agree like what things of good quality in let's say um computers is not the same thing is in cars or in orange juice what does it mean to be for orange used to be a good quality that's very different than saying that your car is a good quality do you agree so we need to understand that that that's going to mean different things in different categories but for now we're talking in general terms quality so we may ask some very specific questions about that when we do the research so our our conceptual might map might be a little bit more specific so 
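A minimal way to see the parity/difference idea is to treat each product's feature list as a set: the intersection gives the points of parity, and the set difference gives the points of difference. The feature lists below are made up purely for illustration:

```python
# Hypothetical feature lists -- illustrative only, not real product data.
ours = {"safe", "durable", "high quality", "5-year warranty"}
competitor = {"safe", "durable", "low price"}

points_of_parity = ours & competitor          # features both products share
points_of_difference = ours - competitor      # features only we offer

print(sorted(points_of_parity))       # the minimum requirements in the category
print(sorted(points_of_difference))   # the basis for the unique selling proposition
```

The parity set is the table stakes ("safe" just gets you into the running); the difference set is what could justify a price premium.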
So we're going to look at the automotive industry, at different brands in the automotive industry, and what I need your help in doing is plotting those brands on the perceptual map. All right, we're going to plot them based on quality and price, so let's see if we can identify some different brands of cars and then plot them on this map. Who's going to start, who's going to tell me a brand name of a car? Marina, what do you think? Lexus. All right, so Lexus: where are we going to put that on the map? Is it high quality, and what about price? How high quality is it — very high quality, or somewhere in the middle? And what about price, how expensive is it, is it the most expensive car, do you think? I mean in terms of your perception, because what we're going to do afterwards, when we do this research, is get 1,500 people to give us their perceptions. What Crystal thinks and what Malus thinks and what Amula thinks — I said it right? Not bad, not bad, and you're not even practicing — is going to vary. So Erica might think that in terms of quality it's a nine, but Jennifer might think it's an eight, and Jesse might think it's a three, and Steven might think it's a seven. We're going to aggregate that data together and then plot it. What we're doing now is agreeing as a group and putting a point on our perceptual map, but everybody here might have a slightly different perception, which is okay; when we're doing the research, what we would actually do is aggregate all the data from the 1,500 respondents and then determine what the average perception is.

So we're talking about Lexus, and we said basically it's not the highest quality and not the highest price, so we'll put it there. What else? Porsche. So where would you put Porsche in terms of price, is it more expensive than Lexus? Okay, and what about quality, better quality than Lexus? Anything else, what other brands? Rolls-Royce — is that higher in price than Porsche? Rolls-Royce is more expensive, yeah, and what about the quality, is it the same as the Porsche, or less, or better? What else? Toyota. So where would Toyota be in terms of price — high, moderate, low? And what about the quality, somewhere in the middle? There for Toyota. Toyota, the car that moves forward, right? Moving forward, whether you want to or not. Lower in terms of price, yeah, much less expensive than Lexus. And what's less expensive than Toyota? Which one, the Smart car? Kia. Kia is less expensive than Toyota, all right. BMW — where would you put BMW? Alex? Below Porsche, all right. Where would you put Ford? Found On Road Dead? Okay. Lincoln — where would we put Lincoln, somewhere in the middle, below Lexus, above Toyota? Yeah, Lincoln, I like the Lincoln Continental. Below Toyota? All right.

So let's say that we're Mercedes. What we need to understand is where we're positioned relative to other cars in the industry, so we want to understand who our direct competitors are and who our indirect competitors are. For example, here we would argue that BMW is at a similar price — no, maybe a little bit lower — and similar quality. Now, all of these we would say are at least indirect competitors, because they're all brands of cars; we're all competing as brands for the transportation market, we're all trying to sell people transportation. So for Mercedes you might say that some of these are our direct competitors, and the rest are indirect. We shouldn't forget about Ford, because that's an indirect competitor; they still sell transportation. And also keep in mind that their parent company, Ford Motor Company, acquired other higher-end brands. Ford
owned Jaguar, for example, and Volvo. So you see, that's a good example of where you might say, "Oh, Ford, we're not going to worry about Ford" — until they purchase Jaguar. Now you start to worry a little bit, because we thought they were an indirect competitor we didn't have to worry about. But remember I said no, you still need to be aware of them and be concerned, because your indirect competitor could become your direct competitor overnight.

So what this shows is where Mercedes is positioned in the marketplace relative to the competition. Now, they did something that was interesting. Mercedes typically is known for selling cars at $100,000, like the S550. But then they got this idea: they looked at the market and found that what we might call the moderately priced part of the automotive market was quite large. This is not really to scale, but you have the lower-end automotive market, the moderately priced segment, the premium — or we could say luxury — market, and then ultra-luxury; so here we have one, two, three, four dollar signs, and Mercedes was up here. Remember we talked about doing a market sizing? That's what we're showing here, the size of these different segments. And they decided they wanted to sell cars here. Millions of cars are sold at $30,000, so they're looking at Toyota and Honda and saying, wow, this is incredible, we need to figure out a way to grow our business, and so they said, we want to start selling cars at $30,000.

And in effect, what they did was reposition themselves — maybe unintentionally, but they did reposition themselves — because all their commercials and all their print ads showed this. What is this? This is the symbol for Mercedes-Benz, and on every print ad they would show this symbol and then this price. Well, what's going to happen when people see that enough times? They're going to associate that symbol with that price.
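The aggregation step described in the mapping exercise — each respondent rates each brand, and a brand's position on the map is its mean perceived price and quality — can be sketched in a few lines. The brands, the 1-to-10 scale, and the handful of ratings below are hypothetical stand-ins for the 1,500-respondent study:

```python
# Hypothetical ratings from a few respondents (1..10 scale); a real study
# would aggregate responses from ~1,500 people per brand.
quality_ratings = {
    "Lexus":   [9, 8, 3, 7],
    "Porsche": [9, 9, 8, 10],
    "Toyota":  [6, 7, 5, 6],
}
price_ratings = {
    "Lexus":   [8, 7, 8, 7],
    "Porsche": [10, 9, 9, 10],
    "Toyota":  [4, 5, 4, 5],
}

def mean(xs):
    return sum(xs) / len(xs)

# Each brand's map position is its average perceived (price, quality).
positions = {brand: (mean(price_ratings[brand]), mean(quality_ratings[brand]))
             for brand in quality_ratings}

for brand, (p, q) in positions.items():
    print(f"{brand}: price={p:.2f}, quality={q:.2f}")
```

Individual perceptions vary widely (a nine from one respondent, a three from another), but the plotted point is the average across the sample.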
Remember, before, I said we want to create strong, unique, and favorable brand associations — we need to create associations with our brand. Well, this is an example of creating an association: every time you show this symbol, you're showing this price. How long do you think it takes for people to process that and start to believe that that symbol means $30,000? What about the competitors — isn't that like the Jaguar situation? But the difference is that Jaguar continued to use its own master brand. Ford acquired it — the corporate brand is Ford Motor Company — and they acquired a master brand, and that brand continued to sell under the Jaguar name. But what Mercedes did was say, we're going to introduce $30,000 cars and we're going to call them Mercedes. So we're going to sell a Mercedes at $30,000, a Mercedes at $60,000, a Mercedes at $90,000. That's the problem: you're blurring your own position in the market, and you can't be all things to all people.

Now, don't get me wrong, it's okay to sell at multiple price points. For example, Toyota sells the Echo, the Corolla, the Camry, the Solara, the Avalon, and that ranges from about $17,000 to, let's say, $27,000. All those sub-brands are at different price points, but look at the range: from $17,000 to $27,000, or even $30,000. We have to look at how far we can stretch our brand. The reason they stopped selling Toyotas in the thirty-something-thousand range is that consumers said in research that they were not going to pay $55,000 for a Toyota-branded sedan. So they introduced a new master brand, Lexus; Lexus is one of the master brands in the Toyota portfolio. And where are we going with that? Right. Again, it's not that you can't sell at multiple price points; the question is, what is the range? So you see what the problem is: the problem is, are people still going to buy a Mercedes
at $90,000? Because when you're driving down the block looking at people's driveways, all you see is that symbol, and so you're not going to have the prestige anymore. Now, they have product codes and different model numbers, which are not sub-brands; the C230 and the S550 are not sub-brands, those are just model numbers. How many people know the difference between a C230 and an S550? Most people just see the symbol and recognize Mercedes. Only gold diggers know the difference, right? So fellas, be for real when you're on campus: they're going to look and see, "No, that's a C-Class, don't get in," and they know if it's an S-Class. But most people don't, and that's the problem, and that's why we need a very well-developed branding strategy, so that we can differentiate our products in the marketplace.

In the United States, you said, but in Japan it's different? Right, so they have to decide what's going to work in each market. They don't need to sell the same product, or the same model, or use the same brand in every market; that's something they need to decide on strategically. In fact, when we talk about Toyota, in the US the master brands are Scion, Toyota, and Lexus, but they have other master brands as well that they sell outside of the US. So part of branding is developing a brand that's going to be relevant to your target market. Some brand names, for example, might translate into something offensive in a particular market; all of those things we need to understand. Remember we talked about the Chevy Nova? They said, well, we sell the Nova in the United States, why not sell it in Mexico? Sounds like a great idea, except "no va" means "doesn't go." You're not going to sell a lot of cars that don't go, right? Or is it just me, what do you think? "No go" — that wouldn't be my first choice for the name of a car.
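One rough way to formalize the direct-versus-indirect competitor distinction from the mapping exercise: brands that sit closest to ours on the perceptual map compete with us most directly. The coordinates below are hypothetical positions on a 1-to-10 price/quality map, not survey results:

```python
import math

# Hypothetical (price, quality) positions on a 1..10 perceptual map.
positions = {
    "Mercedes": (9, 9),
    "BMW":      (8, 9),
    "Porsche":  (10, 10),
    "Toyota":   (5, 6),
    "Kia":      (3, 4),
}

def distance(a, b):
    """Straight-line distance between two brands on the map."""
    return math.dist(positions[a], positions[b])

# Rank every other brand by distance from us: nearest first = most direct.
rivals = sorted((b for b in positions if b != "Mercedes"),
                key=lambda b: distance("Mercedes", b))
print(rivals)
```

On these made-up coordinates BMW and Porsche land nearest to Mercedes (direct competitors), while Toyota and Kia sit farther away (indirect — still selling transportation, still worth watching).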
Questions? So that's an example of a perceptual map, and we could do that for a lot of different dimensions: we might want to look at the level of innovation, or reliability, or durability — all of these different dimensions. We have to determine what's relevant for our category. And again, we can reposition ourselves: if we're here, the good news is that we can work to move ourselves someplace else on the map. That's known as repositioning.

Now we need to develop new products. New products are the lifeblood of a company; companies need to continue to reinvent themselves. The first stage of the new product development process is to identify an opportunity in the market — we have to identify an unmet need. Now, importantly, in chapter one the marketing concept says that we need to identify an unmet need and — Suzanna, you ready? — achieve the organizational goals. Some of you were a little unclear about that. Identify an unmet need and meet that need? Well, yes, but the marketing concept as described in chapter one says that we also need to meet the organizational goals, which might be to increase market share, increase sales, improve profitability, maximize customer satisfaction.

So the first stage of the new product development process is to identify an opportunity, and we can do qualitative or quantitative research, and that research can be either primary or secondary. Secondary research is research that somebody else has already conducted; primary research is research that we conduct or that we initiate. Because remember, very often we don't actually conduct the research ourselves — I mean, we're there, but most research is outsourced: you rent a facility, you hire a moderator to facilitate the focus groups, for example. That's something we initiated, so it's considered primary research. Versus, remember we talked about secondary research: market research reports that you can buy online, for example. Somebody might do research on the automotive industry, and then the
reason why those reports are so inexpensive is that they sell them to all of these car companies. So we identify the opportunity, and then we're going to develop a concept. The concept is going to include a visual — this is actually a concept board, which has a visual of the product — and then the concept statement: "Now, for the first time ever, a nonstick pot with glass lids and ergonomically designed handles." That's the concept statement. What we're going to do is show these concepts to consumers in research. We'll go into research with, say, 10 or 15 concepts like this, and we'll get input and reactions from the consumers about each concept; they'll tell us how they would change it, how they would improve upon it. We're then going to take those concept boards, revise them, and get further input during research.

That's inexpensive to do. Well, four sets of focus groups is $50,000, but what's inexpensive is the boards themselves, because you'll have somebody in the company who can sketch these concepts, so you can do that fairly inexpensively and fairly quickly. What starts to get expensive is when you have to create prototypes. That would be the next step: take the concept and create a physical prototype that you can bring to research, so the participants can touch and feel it. Then, ideally, after three rounds of qualitative research, we'll be in a position to do quantitative research on the product. Because remember, when we do focus groups we basically have 12 people in a room, so after four sets of focus groups we've spoken to about four dozen people. We can't draw a statistical inference about the population, about the target market, based on what 48 people told us, so we need to do quantitative research to be able to draw conclusions about our target market. Once we do that, we're going to develop the product, manage the launch, and then track results.

So those are the five key stages of new product development. Now, each of these stages has like 50 steps, okay, but I want you to get your mind around the big picture. The book articulates it slightly differently — they talk about idea generation and use slightly different terms — but it's basically the same. This is the new product development process that I used in corporate America, and they're all basically the same. And they should be very rigorous, because each one of these stages is a go/no-go decision point. Who knows what that means, a go/no-go decision point? Whether you're going to continue or you're going to terminate the project, yeah. So you have to decide whether or not you're going to go forward, and the problem is that you might have already spent $100,000 here. After you've spent $100,000, we need to decide whether to go forward before we go and develop concepts, because we might have a lot of different opportunities. Remember, when we talked about targeting, I told you we have to decide: are we going to do all of these things? You want to sell clothing, but are we going to sell jeans, slacks, shirts — white shirts, blue shirts, green shirts, yellow shirts, orange shirts, plaid shirts, short-sleeve shirts, long-sleeve shirts, polo shirts — skirts, A-line skirts, dresses, blouses? We have to decide what we're going to sell. So we're going to have a variety of opportunities that we're considering.

For example, in electronics: Apple sold computers, and then one day they went through a new product development process and said, we want to sell MP3 players. People started to laugh, and Steve Jobs said, no, I'm not kidding, I have an idea, we're going to sell MP3 players. And they did. And then they decided they were going to sell phones. You guys know what this is? Is it a 5? It is, the iPhone 5. OMG, wow, that is so cool. Look at
this. It's Brandon — where's Brandon? "Hey Professor, I won't be able to make it, okay, but I just wanted to let you know." All right. So it's much lighter and faster, an A6 processor, twice as fast as the A5. But okay, I digress. That's an example of new product development: they identified these opportunities, and over time they introduced an MP3 player, they introduced a phone, they introduced a tablet. I'm sure they had other products they were considering as well, and they went through this process, and some dropped out and some they went forward with. So then here is another go/no-go, and another go/no-go, until you actually launch the product.

Surprisingly, a lot of products fail. Why is that, why do you think a lot of products fail? You say, wow, but you described this really rigorous process, you said this is the process you used to develop new products in corporate America, and now you're telling me that companies introduce products and they're not successful? They fail, as in bomb, as in loser? How is that possible? Why do products fail? They screwed up the marketing mix — usually a big problem. Sometimes it's not enough to have a great plan; you have to have flawless execution. Interestingly, some companies have only an average plan, but the plan is flawlessly executed and they're very successful. Very important, so keep that in mind throughout your career: make sure you have a good team, because you could have the most amazing, brilliant plan, but then have people working on your team who didn't graduate from Brooklyn College, never took this class, and you're relying on them for your $250,000 bonus. Poor execution is definitely a reason. What else?

Oh yeah, you might not have been able to effectively reach the target audience. Sometimes people think that creating ads is the same thing as advertising. Creating ads is an important part of the process, but if you don't have the reach — if you're just doing what I call prototype advertising, where you create these print ads or postcards but nobody sees them, maybe you just create 50 so you can show them to your bosses and your colleagues — that's not advertising. For example, when we promote the Halloween party on campus, we print and distribute over 5,000 cards; one year we printed and distributed 10,000. You can't just create 50 cards, show them to some people on campus, and think that that's advertising. You could create concepts and ads for dozens of events on campus, but, like Janna says, if you're not reaching your target audience and there's not enough frequency, then people aren't going to be aware that the events are occurring.

What else? Melus? Absolutely. It might just be a me-too product. So we said okay, there needs to be a point of parity, but what about the point of difference? Is there any difference, why should you buy the product, what's your unique selling proposition, the USP? So yes, products fail, and definitely another reason is that there's no point of difference. What else? Ed? Timing — yeah, timing is everything. So don't be discouraged if you think you've identified an opportunity and people say, "Yeah, we tried that before, but it failed." Yeah, I know, but why did it fail? Was it poor execution, or, like you said, was it bad timing? You introduced a very expensive product during a recession and sold very little, and then you're saying, "Well, I don't understand." Right, I know you don't understand. So you're absolutely right: timing, execution, advertising not reaching your target market, no point of difference. The product may sound too good to be true — tell us more about that. The brand promise lacks credibility? Yeah, I think that's right. "We bring your ancestors back from the dead" — so what do you think, are people going to buy that product? Maybe it doesn't lack
credibility; or maybe the claim is so fantastic that people don't believe it, though they're okay with it — like, "I got it, you're exaggerating, obviously it's not going to bring your ancestors back from the dead" — and it still could be compelling. But you're right. What about if it wasn't so extreme? What if it was just, say, pain relief: some tablets you take once a day, some twice a day, some three times a day, four times, six times — literally, some pills you have to take every four hours, if you have migraines, for example. What happens if you introduce a pill and say you only need to take it once a day? That's not so crazy, but it may lack credibility; people may be hesitant to purchase the product because they're thinking, "I'm taking six pills a day and now you're telling me I can take one," and they're skeptical. So you're absolutely right: there have to be pillars of support, there have to be proof points in advertising. Whatever we say, whatever we claim, we have to be able to back it up. Also, right, not communicating effectively — absolutely, a lack of communication or ineffective communication with the target market. Anything else?

"Maybe just no economic access to it. Like, for instance, if you were to come out with a new grocery good, there might be 40 or 50,000 other grocery goods, and because there are so many, people won't have enough time or money to go through them all." Right, absolutely. What Edward is saying is that getting on the shelf at retail is a challenge. It's not enough to sell the customer, to convince the consumer; you have to be able to convince the retailer to carry the product. So we always talk about selling to three groups when we're launching a product. We talk about selling the sales force, because if your sales force doesn't buy in, you're going to have problems, so you have to convince your own salespeople; you have to convince the retailers; and you have to convince the purchasers — in this case, let's say, the consumers.

It's challenging to get on the Walmart planogram. You want to sell at Walmart, the world's largest retailer? Well, they have a planogram; they have products on their shelves already. If they take our product, that means something else has to come off, and there has to be a good reason for them to take ours. So we have to talk about shelf-space productivity: that if they put our product in that space, they're going to sell more units, more dollars, a higher margin percent, and the total amount of margin dollars is going to be greater. They're not going to just say, "Okay, I'll take your product and put it in all our stores." In fact, a lot of products don't get picked up by Walmart at all; a lot of people go down to Bentonville, Arkansas, and meet with the buyers there, and Walmart doesn't take their products. But some do get in, and sometimes what Walmart does is give you limited distribution, like a test: they'll say, we'll put you in 500 stores or a thousand stores, not chain-wide.

All right, any questions about that? So we're continuing our conversation about products from last time. We said that products are wrapped in brands, and we need to understand how the brands are positioned in the marketplace, so we looked at the perceptual map; we talked about the stages of new product development and why products fail; and now we need to understand the life cycle of a product. All right, so these are the stages of the product life cycle. The first stage in the product life cycle is introduction. What we're looking at here is sales on this axis, and this is time. So at this point, basically
time zero, you have no sales; then we move forward, year one, year two, year three, four, five, and sales increase. What is it that we're doing as an organization to increase sales? We introduce the product — that's the first stage of the product life cycle, introduction — and sales grow over a period of time. Why are your sales going to grow, what is it that we're doing as marketers, how are we going to market the product? We're going to advertise. When we first introduce a product, what is our focus going to be on: primary demand or selective demand? Primary — Eric said primary. So in the first stage of the product life cycle, for a new product, we're focusing on primary demand, creating category need for the product type. But as we move out of the introduction stage — and this depends on the category, because some categories are already well established and you don't need to spend time creating category need or primary demand, because those products already exist. For example, with the iPhone: wireless communication already existed, mobile phones already existed. What Apple did, effectively, was focus on selective demand and communicate how their brand provided a point of difference — actually quite a few points of difference — relative to the competition.

So we introduce the product, then we move from introduction to growth, and then at some point sales stop growing. Now, that's not necessarily a bad thing, to be at the maturity stage of the product life cycle, because at that point you might have sales of $250 million. If your sales this year are $250 million, and your sales next year are $250 million, and the year after that $250 million — well, that's a business I wouldn't mind having. What about you, are you okay with that? Yeah, why not, that's a nice business to have. So that's not necessarily a bad thing, although companies are continually looking for ways to grow their business. Maybe we're going to introduce a line extension or a brand extension. Who knows the difference between a brand extension and a line extension? Right, exactly: a brand extension is extending your brand into other categories — Apple extended their brand into MP3 players, phones, tablets, et cetera, absolutely. A line extension is just extending the line, like having multiple flavors of a beverage: you already have a beverage, but now you're going to offer it in cherry or pineapple or vanilla. Those are examples of line extensions.

Now, when we say maturity: remember when we looked at the Boston Consulting Group model, we talked about how some SBUs are growing very rapidly. When we talk about maturity, if the category is growing 2 to 3% per year, that's basically mature growth; when we talk about growth, we're usually talking about 50%, 100%, 200%, 300% — rapid growth. Like the beverage market: remember, we said in the US it's $200 billion per year at retail, and it grows about 2 to 3% per year. Yes, it's growing a little bit, but it's basically in the maturity stage of its product life cycle. Now, what you need to worry about is how long we stay in that period of growth; what we want to do is keep stimulating growth, to keep the business growing for as long a period of time as possible.

The reason this is so insightful — you might ask, does this happen for every product, do we all go through these stages? We introduce a product, it experiences growth, then it inevitably reaches a point where sales flatten out, the category becomes mature, sales for our product flatten out, and then ultimately decline, and the product might become obsolete; at that point sales are zero, or very close to zero. Like the VCR. Do you guys know what VCRs are? You do? Wow. So VCRs are basically obsolete. You would never guess how much I paid for my first VCR. I bought my first VCR at Macy's — you guys have heard of
Macy's, right? Yeah, I bought my first VCR at Macy's and paid over $1,100 for it. Now you can buy them for almost nothing — well, they sometimes bundle them now with DVD players — and pretty soon they're not even going to be selling those. You see: obsolete, the product is obsolete. This is so insightful because it tells us what will happen, basically, if we do nothing. It's telling us that if we don't manage the product life cycle, our product is going to become obsolete. "Well, that depends on the product. If you're talking about technology, it's always going to be advancing, new things are always going to come out. But if you're selling things like mattresses, or cribs, nothing new is coming out; it's always going to be the same line." I'm talking about for our product line — for us, for our company, for our business. Right now, two mattress companies, Serta and Sealy, are merging; other companies might still sell mattresses, but some companies might go out of business. So we need to understand that if we don't manage this product life cycle — that's one of the key takeaways — but hope is not lost: we can manage the product life cycle. You could take a whole course in product life cycle management. We can continue to manage the marketing mix so that sales continue to grow. This is a foreshadowing of what's to come: if we don't effectively advertise the product, promote the product, make modifications and enhancements to the product, sell it at the right price, and have a compelling brand hierarchy and pricing strategy, then we're not going to be successful.

So, for example, one of the important decisions that needs to be made here is price. We could introduce the product at a high price or a low price. If we introduce the product at a high price and then lower it at a planned rate over time, that strategy is called what? Skimming, right. Skimming is when we introduce the product at a high price and then lower it over time. Who do we expect to buy those products first? Innovators. According to this model, the diffusion of innovation model, innovators are going to be the first to buy the product. Now, the alternative is to introduce the product at a low price. What is that pricing strategy called? Penetration — yeah, penetration pricing, right, absolutely. So if we introduce a product at a low price it's called penetration pricing, and if we introduce it at a high price it's called skimming.

In this model we have the innovators, the early adopters, the early majority, the late majority, and the nonadopters, which are sometimes called laggards — in the book they use the term laggards. What's a laggard? Laggards are nonadopters, the customers who are not going to purchase the product. So we have our innovators, our early adopters, our early majority, which is 34%, and the late majority, which is 34%. Now, is it always going to be exactly these percentages? No. But what's important is that we're going to be able to influence the rate of adoption. That's why this is so helpful to us as marketers: this axis is time, and, just like the product life cycle, this is the number of units being sold. What we want to do is accelerate the rate of adoption. Do you agree, is that a worthwhile objective, to accelerate the rate of adoption? That means you want to sell as many units as quickly as possible. Why would we want to do that, what would be our concern? "We want to let the product life cycle go as high as possible, and also consider technology; people use a lot of technologies, so they might be worried about that." Yeah, absolutely. So think about it: while we might think we're going to implement skimming and just keep milking the
product life cycle and introduce it at a high price and then lower it a little bit and then lower it a little bit more, what are our competitors doing? Right, what are these guys doing? You think they're sitting around twiddling their thumbs? No. So we're not operating in a vacuum; there's a competitive set, there's a competitive market. So within that market we need to be sensitive to the fact that our competitors might leapfrog us while we're sitting there thinking, oh, we're just going to keep following our strategy and we're going to maximize our sales and our profits over ten years. And what happens six months after we introduce our product? Like Gana says, a competitor introduces a new product that's better than ours, that has a more advanced technology. So there's some risk associated with skimming. Sure, why not sell the product for as much as you can; sounds like a good idea. But maybe it's better to sell it at a lower price in a price-sensitive market, because some markets are elastic and some are inelastic. Elastic markets are price sensitive; that means if you lower the price you're going to sell more. Not all markets are like that; not all markets are perfectly elastic. In some, you lower the price 10% and you might see a little bit of an increase in either consumption or usage. But what happens if Con Ed says we're lowering the electric rate 10%? So what, we all run home and turn on the AC? No. Some might, and that's why I said it's not perfectly elastic. What do we got, another two hours? Let me check my iPhone 5, let me see. Oh, we have what, 5 minutes? Yeah, 5 minutes. All right, here we go, 5 minutes. All right, multiproduct branding. So a good example of multiproduct branding is that idea of having a corporate brand or a master brand; sometimes we call it family branding, which is what we talked about before. Sony would be a good example of multiproduct branding. Why do I say that? Because Sony sells gaming consoles, they sell TVs, they sell DVD players; all of those
are different product types, but they all have the same Sony master brand on it. So multiproduct: we're selling multiple products, DVD players, VCRs, gaming consoles, TVs, but despite the fact that we sell so many different types of products, they all have the same brand. Multiple products but with the same brand; all of them are branded Sony. That's their branding strategy, this multiproduct branding approach. So something like Bo and Jaguar would not be multiproduct branding? No, that would be an example of a multibrand branding strategy. A good example of that is Procter and Gamble. Johnson and Johnson? Yeah, Johnson and Johnson. So for example, Procter and Gamble has dozens of master brands in their portfolio. Procter and Gamble is the corporate brand; they have dozens of brands in their portfolio such as Scope, Tide, Crest, Charmin, Downy. All of those brands and more are part of their portfolio; they're all part of Procter and Gamble, but each product, whether it's detergent or dishwashing soap or toothpaste or mouthwash, has a different brand name associated with it. So multibrand means that the company uses a different brand name for each product type. They don't call everything Procter and Gamble, they don't call everything Crest, they don't call everything Scope. They sell mouthwash; their mouthwash is branded Scope. Toothpaste is branded Crest. They have laundry detergent they brand Tide, but they also have laundry detergent that's branded Gain. They have dishwashing soap; Dawn is one of their brands of dishwashing soap, but they also have Ivory, which is another one of their dishwashing brands. So same company, but they use different brands to market their products. The same with Nabisco, which is a part of Kraft. Nabisco, for example, no, I take that back; the way they market themselves now is, I would say, more multiproduct branding. You know why? Because on all of the Nabisco products,
what do they have in the corner? Yeah, they have that logo there. So I would say that's more like everything is branded Nabisco, but they also have, of course, some master brands, right. So I would say that's an example of a multibrand branding strategy, even though they have some products that are called Unilever, right; that doesn't mean that their branding strategy is not to use multiple brands. You're right, they also compete in the same category with Procter and Gamble, and they have their own brands of detergent and soap and so forth. Private branding, for example, would be Sears, like their Kenmore brand. And what is Sears? Sears Roebuck, who, believe it or not, used to be the nation's largest retailer. They have private label products that they sell in their store. For example, they sell dishwashers, washing machines, dryers; they're branded Kenmore; they're Sears products. Those are products that are developed specifically for Sears. And some companies do both: some sell their products at Sears and they also have their own brand, like Michelin for example, the tire company; they sell Michelin tires, but they also make tires for Sears. We need to decide which branding strategy is going to be the best for us, within this context and what we've been talking about previously. We need to decide how it comes to life, how we make it actionable. All right, so do good things, have a good weekend, I'll see you on Thursday, next Thursday. |
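The adopter-category split in the diffusion-of-innovation discussion above can be worked out numerically. Here is a minimal Python sketch, assuming Rogers' textbook percentages (2.5 / 13.5 / 34 / 34 / 16); the lecture only confirms the two 34% majorities and notes the split is not always exactly these numbers, and the function name is ours:

```python
# Sketch of the diffusion-of-innovation split from the lecture.
# Rogers' classic adopter percentages are assumed here; the lecture
# states only the early- and late-majority figures (34% each).
ADOPTER_SHARES = {
    "innovators": 0.025,
    "early adopters": 0.135,
    "early majority": 0.34,
    "late majority": 0.34,
    "laggards": 0.16,  # the nonadopters, in the lecture's terms
}

def units_by_category(total_units):
    """Rough estimate of units bought by each adopter category."""
    return {name: round(total_units * share)
            for name, share in ADOPTER_SHARES.items()}

print(units_by_category(100_000))
# e.g. the early majority accounts for 34,000 units of a 100,000-unit market
```

Accelerating the rate of adoption, as the lecture puts it, means compressing the time over which these categories buy, not changing the split itself.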
Marketing_Basics_Prof_Myles_Bassell | Marketing_9.txt | [Music] you let me down show me when I needed you [Music] the [Music] I thought that mr. Bies when you let me [Music] [Applause] [Music] [Applause] [Music] [Music] [Music] let's learn I know I'm worthy get the wrong yeah yes bizarre I'm gonna take it down [Applause] [Music] [Applause] [Music] [Music] [Applause] [Music] [Music] you let me down [Music] [Music] [Music] [Applause] [Music] [Music] [Music] Yes, I want you to know that you're still the best student ever, ever. Give yourself a round of applause. Awesome. So we're going to be in again next week, just to keep us on schedule here. And have I told you you've got skills? I haven't told you you've got skills. You see, there's still a couple of things I have to share with you: you guys have skills. So one of the important takeaways from this course, besides understanding what is marketing, and the product life cycle, and branding, logos, the adoption curve model, targeting, positioning, segmentation, all those great takeaways: BMW, Prince Sports, Activeion, and while I mention BMW, Washburn Guitars. It's kind of like a random group of companies; why would I mention that? Wait, somebody, either you, Shannon, Shannon Brown. What about Homer? [Music] Those were some of the case studies that we covered this semester, some of the case studies to help bring to light important marketing concepts. So some of the definitions you may not remember five years from now; some of them you may not remember five weeks from now. But more likely you're going to remember that BMW case and their branding strategy, or how Washburn Guitars addressed the issue of pricing, how Prince Sports segmented the market. So you're probably not going to remember that segmentation is about dividing the market into submarkets, or sometimes we say that it's about aggregating markets, but Prince Sports is a good example of a company that segmented the market in order to identify opportunities. But in
addition to those key takeaways and other marketing-related concepts, another important key takeaway for everyone here is: you've got skills. So remember from the beginning of the semester, I told you that it's important that as a result of courses that you take, and certainly in this course, you improve your critical thinking skills, that your ability to analyze information and use information that's available to you is enhanced, your ethical awareness increases, your communication skills increase and improve. So definitely our objectives, our goals for the semester, are multifaceted. It's not just the four Ps of marketing, also known as price, place, promotion, product, the marketing mix. Those are important takeaways, but we're using marketing as a platform to develop your critical thinking skills and your communication skills. Every week you have a written assignment; that's a type of communication, and communication is very important. And most of the posts that I've read, in most cases you left out LOL and ROFL; most of them were actually very well written, very coherent, so I didn't see a lot of those OMG comments in your case analyses. So communication skills are important, and what we call information literacy. So in those cases you're provided with a certain amount of information; what we've done is demonstrated that you could use the information that's available, the information that's given to you, as well as the information in the textbook, the concepts in the textbook, to conduct an analysis, to make recommendations. Those are all important skills. The ethical reasoning skills: so when you go on interviews, it's important that you emphasize that it's not just that, of course, you took a course in marketing; everybody takes a course in marketing in college. But what are some of the key takeaways? Yes, you could tell them about the marketing mix, you could tell them that you did a case study analysis, and you know what, on the interview you're probably
going to be able to remember one of those cases. That's going to be impressive, because it shows that you could apply the concepts. Now you're not just memorizing definitions; you could apply the concepts to a situation, to a scenario. That's what they're hiring you to do: to have an impact on their business, to be able to make a difference, to be able to increase sales, improve profitability, maximize customer satisfaction. Are you guys with me? Okay, is there anybody here with me today? All right, so, questions about that? So it's very important to emphasize that you've got skills. That's important when you go on interviews, to communicate that you've got skills, that you're a person who has a high level of ethical awareness. There are so many ethical issues, and there are so many situations that have resulted in thousands, hundreds of thousands of people losing their jobs, companies going bankrupt, because executives, managers, made decisions that were unethical. So when you're going to interview, believe me, everybody's sensitive to that; one of the questions they're going to ask you is about ethics. They're going to ask you about ethics. So again, we're using ethics as a part of this course to integrate marketing concepts and ethical issues. So those scenarios are realistic situations that very likely you will encounter, or maybe you already have encountered. And you know what, I know that some of you communicated with me already that you thought it was challenging, you couldn't decide, and that's why it's a dilemma. One of my friends was reading the packet with the twelve scenarios, and when he was done he was like, oh, I don't see the issue, he said, this is easy. I was like, well, it's not that easy; it's not as easy as it seems. You might be very opinionated about what to do, but these are issues that executives grapple with. So a company sells a defective product; now five people will die as a result of the defect in that product. So
do we recall the product or do we not recall the product? That's one of the scenarios. And then we said, what about if recalling the product means that the company would be bankrupt; does that change your decision whether or not to recall the product? Now for some, it's basic: no, we should just recall it, even if it bankrupts the company. And other students have said, well, you know, come on, this is a business; it's going to be cheaper to just pay the families of those that were killed as a result of the product defect. And has this ever happened before? Yes, a situation like this has happened before, where a company's gas tank was defective, and when the car was rear-ended it exploded. So imagine driving down the Belt Parkway and the car's rear-ended and the car bursts into flames. Well, they decided, having never taken this course or any course in the School of Business, to do a cost-benefit analysis of the recall. And they decided that it was cheaper: they estimated how many people would actually die as a result of the product defect, because, I mean, not every one of their cars is going to be rear-ended and explode, right? I mean, that would really be no bueno. So that's not going to happen to every one of their cars, but they estimated that a certain percentage of their cars would be rear-ended, and of those a certain percentage would actually explode, catch on fire, and the people inside would be burned alive. So what they decided, after doing this rigorous analysis, was that it would be cheaper to pay those families of the people who were killed in the accidents, to pay them, instead of actually recalling all those cars. To recall all those cars, they estimated, would be much more costly than paying what we call the death claims resulting from the accidents. So those scenarios, they require some soul-searching. It's not meant for you to try to figure out what
the right answer is. There's no quote-unquote right answer. I mean, I think in some situations, I like to think that the more socially responsible, ethically responsible choice is a little bit more obvious, but part of the process is to get you to analyze the information and think critically about what should be done. So here you're the executive; you need to decide what should be done. What's going to be your recommendation? What should the company do going forward? So there are two of them that are actually in a different order; you probably noticed that. I think one and eight are in a different order. But when we finish this group of assignments you will have completed all 12. So if you're trying to keep track of, hmm, I think I did that one already, the best way to do that is to make sure that you've answered all 12 of the scenarios. And each scenario has four questions. What are they? What's the ethical dilemma? So what's the issue, what's the issue that the executive is facing? And then, what are the alternatives? So identify at least three alternatives. What are the choices? Not what they should do; we're not at that yet. Just what are the alternatives: they could do A, they could do B, they could do C, they could do D, they could do E. Those are the possible courses of action. And then, what is your recommendation? So you identified three or four alternatives, right; they could do A, they could do B, they could do C, they could do D; now which one of those, or more, do you recommend that the company do? And then explain why it is that they should do that. What's your rationale for that decision? Why is that the best decision? So it might be that it's the least expensive; that could be one of the reasons. It might be practical. The fact that it's legal, that maybe it's the only choice that is legal. So even though one of your recommendations might be legal, you might feel that even though it's legal, it still might be unethical. That's why we're
going through this process: because there are a lot of things that are legal but are unethical. Because remember, the law only describes a relatively small number of things that are considered to be illegal, just really a few things, and that's why we have to raise the bar and challenge ourselves and ask ourselves, even though it's legal, is it ethical? Is what we're recommending ethical, even though it's legal? So it's good to start with: am I recommending something that's legal or illegal? Certainly we need to ask ourselves whether or not our recommendation is ethical, and think about the consequences. Is it practical, what we're recommending? Is it realistic? Is it something that we can implement? Questions? No questions? If you haven't already, then, so you had already done two, so the list of 12 that I posted swaps two of those, so just make sure that you've done all 12. So for each scenario there are more questions; each one requires a minimum of 50 words, but for the scenario as a whole it's a minimum of 250 words, okay? So yes, you should answer each one separately. So you saw that sample response: the quick brown fox jumped over the lazy dogs. Does anybody know what that's about, or are you just thinking, he's drunk again? He's drunk, right, it's got to be. The quick brown fox jumped over the lazy dog, coach. That's enough tequila for you; too much tequila, coach. Come on, come on, go easy. Nobody? Michelle, no? Yes, Joseph? Yes, so Joseph says that that sentence includes every letter in the alphabet. How many letters are there in the alphabet? 26. Are you sure? Are you sure? You don't know anything for sure. Yes, 26. So in typing, even if you don't have a typewriter, and I'm not saying I have a typewriter, don't even try that, don't even try and say, oh, coach has a typewriter, because I'm not that old, so check yourself. You still, whether it's your Mac or your Vaio, your laptop, your desktop, you still need to use a keyboard, a QWERTY keyboard. So if you type that phrase, then you're
using every letter in the alphabet. So if you want to practice your typing, keep typing that phrase, the quick brown fox jumped over the lazy dogs, and then you've typed every letter in the alphabet. So that's a good way to practice when you're not working on ethical scenarios. But you've got skills; remember, you've got skills. There's going to be a very long question coming. Okay, here we go, here we go, here we go, let's do this thing. Now you're ready? Anybody? How about No-Doz? I'm supposed to be the No-Doz, you know that. If you listen to these videos on YouTube, if you watch them, I've been told that they're a very effective sleep aid, so you just get past the first five minutes and then that's it, you're gone. So keep that in mind. Okay, let's talk about the product life cycle. So remember we said that in the product life cycle we have the stages: introduction, growth, maturity, decline, and obsolescence. So some products actually become obsolete, but even after obsolescence it might be possible to experience revitalization. So it might be possible for a product or a category to reach obsolescence, which means that at a particular point in time sales are close to zero. Our challenge, our challenge, is to manage the product life cycle, so we don't have to accept decline as inevitable. What's so compelling about that model is that it's like a foreshadowing of what's to come. Knowing that that's on the horizon, that that's in the future, is like getting a glimpse of your future. So knowing that this might happen, that you might introduce a product and sales start to increase, and then it reaches a point in time where sales are flat, sales stop growing, sales stop increasing or increase very slowly, and then start to decline, that's not like, oh okay, that's just the way it is, we'll just wait for that to happen |
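The cost-benefit calculation in the recall story above can be written out explicitly. A minimal sketch follows, with every number a made-up placeholder (the lecture gives no figures); it shows the arithmetic the executives in the story used, the very arithmetic the scenario asks you to question ethically:

```python
# All figures below are hypothetical, purely for illustration.
def expected_claims_cost(cars_sold, p_rear_end, p_fire_given_rear_end, avg_claim):
    """Expected death-claim payout if the defect is NOT recalled."""
    return cars_sold * p_rear_end * p_fire_given_rear_end * avg_claim

def recall_cost(cars_sold, repair_cost_per_car):
    """Cost of recalling and repairing every car sold."""
    return cars_sold * repair_cost_per_car

claims = expected_claims_cost(1_000_000, 0.01, 0.05, 300_000)
fix = recall_cost(1_000_000, 200)
# The "rigorous analysis" in the story: pay the claims because it's cheaper.
print(claims < fix)
```

The whole point of the scenario, of course, is that cheaper is not the same as ethical, and legal is not the same as ethical either.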
Marketing_Basics_Prof_Myles_Bassell | Marketing_6.txt | [Music] All right, did everybody sign in? At the end of class I'll take attendance. All right, so we've talked a little bit about the simulation, the shoe simulation. That's a great practical experience for you and something that you should put on your resume; basically it's a type of virtual internship. So you're going to be able to get results based on your decisions; you're going to have a sense as to whether you spent too much money on advertising or not enough money on advertising, or not enough money on new product development, because we have a shoe business, so you need to decide how much money you can invest in new product development to come out with new lines of shoes. So far this semester we did the analysis of Prince Sports; so Prince Sports, Activeion, BMW, and this week is Washburn; next week is reptiles. So we're saying that every week, every Tuesday, there's an assignment due, a written assignment, and you need to post that on Blackboard. So every Tuesday. It's helpful to have this in front of you, have the syllabus in front of you, but in general it's every Tuesday, right, every Tuesday there's an assignment due. Sometimes students say, oh, I didn't think there was an assignment due this week. How come? Why would you think that? That's why I tell them, leave the thinking to me, right? I'm your coach; I have a plan for your success. So we're doing some research this semester; we're doing marketing research. Does that seem like something that's appropriate for a class to do, marketing research, considering
it's a course in marketing? You guys think so? So these are the questionnaires that we're completing this semester, most of them. Why are we doing this? Who could tell me why we're doing this, why we're completing these questionnaires? So we're completing these questionnaires so that you could get practical, hands-on experience doing marketing research, so that when you go on interviews you could tell people that you participated in marketing research. And not only that, but importantly, my objective is not to just tell you how to conduct marketing research but to show you how to conduct marketing research. So another approach would have been to have you write the questionnaires. How much fun would that be? Yeah, okay. So what I did was I wrote the questionnaires so that you could get experience seeing what it's like to complete a professional marketing research questionnaire. So these questionnaires were written by me; I like to think that they're probably above average when it comes to marketing research questionnaires. So this is part of the showing, not just telling. It's not enough, I think, to tell you about different scales, nominal scales, interval scales, ordinal scales, when preparing a questionnaire; I think that as your coach I need to show you how to do that. So these are great examples of marketing research, and in this case these questionnaires are what we consider to be quantitative research. So those are the links, and everybody needs to complete those questionnaires, and we've been doing that week by week. Some of the questionnaires focused specifically on one of our critical learning goals, which is building ethical awareness. So hopefully, because I can't teach you to be ethical, but what I could do is get you to think about ethical issues, ethical dilemmas, and these questions, I'm sure you found them intriguing and stimulating, and they got you to think a lot about ethical dilemmas. Because as you move forward in your career, and even in your current
position, the current job that you're in, you're going to face ethical dilemmas; you're going to have to make decisions, and you want to make the right decision, the one that's both ethical and legal. Now for those of you who are watching on YouTube, you can also complete these questionnaires; the links are available, and then at the end of the semester I'm going to share the results with the students on our team. Now, we're on a college campus, and a lot of things happen on a college campus besides students meeting with professors, besides lectures. Some of the things that happen on campus are fun, not as much fun as this, but fun. You don't believe me, right? Yeah, we have Halloween parties, and there are a lot of clubs that are on campus. So you might have said, well, why did I post this on Blackboard? These are links to student clubs on the college campus, some of them, the ones that I'm closely involved in and am the faculty advisor for. So there are student organizations, there are student clubs; basically they're run by students, but every club has to have an advisor, a faculty advisor, and I advise several clubs on campus. So I posted that on Blackboard for you so you could get more information. The elections: we're having elections in March. Hope is not a plan, so I'm trying to demonstrate to you good planning. The elections for the executive team, the president, the vice president, the secretary and the treasurer, are going to be in March. So you could join these clubs; you could also be part of the leadership team if you like. So those are the links; if you want, you could like them, and those of you in cyberspace right now watching this on YouTube, if you want, you could like them too. I'm also going to post the applications, the applications for those clubs as well. Look at this, the Marketing Leadership Academic Association, fun times. So if you're interested in marketing, there's a business student organization; since 2009 I'm the faculty advisor for this student club. This is
one of the prior semesters: I had a marketing plan competition, and the student with the best marketing plan received a $500 scholarship. I coordinated that with the Marketing Leadership Academic Association, but the prize, the scholarship, was funded by myself; I funded the scholarship, so no university, college or student funds were used, just so we're all on the same page. Somebody actually asked me that. Are you serious? No. Look, anybody interested in income tax, there's a club, and guess what, I'm the faculty advisor. So I'm sharing this with you; I want you to get the most out of your college experience. It's not just about going to class; there are things that are happening on campus, parties, and we have guest speakers. And there's a leadership society: in 2006 I worked with two students to form the Business Leadership Society. You could join the Business Leadership Society; we can have an election in March; you might be on the leadership team. That was our five-year anniversary, so we had a big celebration; that's online. And then you saw this too; this is a scholarship that I'm funding; this was a business plan competition. Okay, so those are all about student clubs, right, clubs on campus. So I think it's important to take just five minutes out of our marketing lecture here to talk about student clubs, because that's part of your college experience, not just this five-hour lecture tonight until midnight. These are the things going on on campus; these are clubs, these are student organizations that I'm closely involved in, but there are dozens and dozens of other clubs on campus that you could be associated with, and it's a good way to enhance your resume too, so that you can get leadership experience. So it's good to have internships; it's good to be involved in student clubs, student organizations. Hiring managers look for these things; they want to see this side of you, other than your 3.9 GPA. They want to know that you're part of
the Business Leadership Society; they want to know that you're part of the chess club, that you do things other than study cases about Prince Sports, BMW and Washburn Guitars. They want to see that you're multifaceted in your interests and your affinities. Anybody believe what I'm saying? No, not really? I don't get paid if I get students to sign up, really, I don't. I'm just trying to get you engaged in things that are going on on campus, but no pressure, no pressure, really, I mean that. I'm just trying to make you aware that there are a lot of things that happen on campus that are fun, other than class; these are some of them. Now, do you want to see a picture of my son? The last time I said that, I said, do you want to see a picture of my son, and the entire class in stereophonic says, you're married? It was like, wow, wow. I didn't say I was married; a picture of my son doesn't mean that I'm married, right? Does it? That's my son; his name is Hershey Koko. All right, so today what we're going to talk about is different ways to segment the market. So we're going to talk about demographic segmentation, psychographic segmentation, behavioral segmentation; we're going to talk about the different criteria in selecting segments; we're going to talk about why products fail; we're going to talk about the product life cycle; we're going to talk about ways to influence the product life cycle. So it's important to keep in mind that the product life cycle, which is an extremely important marketing concept, is something that we can manage; we need to be able to influence the product life cycle. We're going to talk about branding; I'm going to show you what a brand hierarchy is; we'll talk about different branding strategies and pricing objectives. First I just want to quickly go through some of the things we talked about last time, quickly. I mean, I know we're here till midnight, but just quickly. All right, so let's quickly talk about what is marketing, the marketing mix, market segments, the three strategic levels and plans,
mission, vision and values, marketing metrics, the BCG model, the market-product strategies and SWOT. All right, so we talked about that last time. Let's see, let's give ourselves 10 minutes to get through this. All right, so we talked about this already, but just to ground us, let's quickly go through this. All right, here we go. So we said that marketing is about creating, communicating, delivering and exchanging value. That's what marketing is, kind of from 30,000 feet, if you will, big picture: that's what marketing is about, creating, communicating, delivering and exchanging value. And value, we said importantly, is a function of price, quality and benefits. So something could be very expensive, it might be the most expensive in the market, but it's a good value because the quality is high, because the features and benefits are significant relative to the competitors'. We said that the marketing mix, the marketing mix are those controllable factors. The marketing mix is known as the four Ps: product, price, place and promotion. Those are the four Ps of marketing; it's critical to understand that. So the marketing mix are those controllable factors. The product we could alter, so the product that we sell can be altered; the price, we could change the price; we can determine the place where we sell the product; and the promotions are something that's within our control. Advertising, for example, is a part of promotions. So promotions, right, in this model the word promotions is used very broadly; it includes sales promotions, trade promotions, advertising, and there are a lot of different forms of advertising. Outdoor advertising, such as billboards, would be a good example of outdoor advertising. We could advertise on TV, the radio, newspapers, magazines; so advertising can be broadcast or it could be print, for example. There are different ways that we could advertise, that we could communicate. But we said that marketing is about creating and communicating, so it's not enough to create a product or service; we need to be able to
communicate that. So we need to have an integrated marketing communications plan, IMC, integrated marketing communications. So this is an illustration of the marketing mix and some of the things that are within our control. So in terms of the product, keep in mind that the brand name is what's wrapped around the product. So what makes one product different from another is that it's wrapped in a different brand; all products in a given category have the same generic functionality. What does that mean? Take for example cars: all cars provide the same generic functionality, which is what? Transportation. So all cars provide transportation, but one car is different from the other because it's wrapped in a different brand. What are some different brand names of cars? Who could tell us? Somebody, if you know the brand name of a car. Yes, in the back, what is it? Honda. Honda is a brand name; the Honda brand name is wrapped around that product. That's what distinguishes that car from a different brand name, which is what? What's another brand name? Go ahead. A Porsche. So a Honda and a Porsche both provide the same generic functionality, transportation, but those products are wrapped in different brand names. Who can give us another example? A Subaru, a Mercedes. So all of those individual brand names are wrapped around their respective products; that's what differentiates one product from another. So one product in a given category is unique because it's wrapped in a different brand. Now, the brand name needs to be communicated. How do we communicate the brand name? What we do is we develop a logo; a logo is a graphic representation of the brand name. So for example, look at this packaging: the brand name is Lay's, so the Lay's brand is communicated to us in that graphic, which is unique from the Cheetos logo. So you want your logo to be unique, you want it to be memorable, you want it to be
transferable and adaptable over time. Now, companies also develop symbols to communicate and represent their brand, but a brand symbol does not include any words. A symbol is a graphic representation of the brand without any words, while the logo is a graphic representation of the brand name, and we refer to that as a wordmark. Some other examples: Rold Gold is another brand name, communicated to us in a logo, and Fritos is another brand name with its own unique logo. We said that there are different levels in the planning process: we develop a strategy at the corporate level, at the strategic business unit level, so here we're talking about an SBU, a strategic business unit, and at the functional level. So we're going to develop three plans. There's going to be a corporate plan, which communicates the overall direction, strategy and objectives of the organization, the mission, values and vision for the entire organization. Then each strategic business unit is going to develop a business plan to support the corporate plan. How is the corporate plan going to come to life? How are we going to achieve the goals and objectives mapped out in the corporate plan? Each strategic business unit will have its own business plan that enables it to contribute to the goals and objectives, mission and vision of the entire organization. And each functional area, each department, whether it's marketing or finance, manufacturing, HR, each of those is going to have its own plan as well. But remember, there need to be shared objectives, shared goals: everybody in the organization needs to be working towards the same goals and objectives. Now, how they're going to achieve those goals and objectives, how they can influence them, is going to be described in their plan,
so that plan is going to articulate their strategies and tactics. We said this is a good example of a mission statement: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before. The mission statement basically communicates what business you're in, and last time we looked at some other examples. Marketing metrics: how do we measure performance? There are a number of ways we can measure performance. The level of profit. We could look at sales: how much was generated in terms of dollars and in terms of the number of units, so how many units did we sell and how many dollars did we generate in sales. What is our market share? Remember we said, if we sold 50,000 bottles of shampoo, we don't really know if that's a good thing or not until we're able to compare it to the category as a whole. We said 50,000 units sounds like a lot of shampoo to sell, and I know you're thinking, what does this guy know about shampoo, but I know a lot about shampoo and I'll tell you about it sometime. So we need to understand our market share: what percentage of the market is our product, in terms of the number of units, for example. What percentage of the units being sold is our product, which of course is wrapped in our brand? So 50,000 units could be a pretty large number, but how many units were sold in the entire category? Say that in the entire category five hundred thousand bottles were sold, and we sold 50,000 of those units. Then what is our market share, business students? Ten percent. Fifty thousand is ten percent of five hundred thousand, is that right? Yes. So our market share is ten percent: five hundred thousand bottles of shampoo were sold, and ten percent of them were ours. That doesn't mean that ten percent is not a nice business to have, it doesn't mean it's not significant, but it puts it into perspective.
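That market-share arithmetic can be sketched in a couple of lines of Python; the shampoo figures are the hypothetical numbers from this example, not real data:

```python
# Market share (in units) = our unit sales / total category unit sales.
# Hypothetical shampoo figures from the lecture example.
our_units = 50_000        # bottles of our brand sold
category_units = 500_000  # bottles sold in the whole category

market_share = our_units / category_units
print(f"Market share: {market_share:.0%}")  # prints "Market share: 10%"
```

The same ratio could be computed on dollar sales instead of units; the two can differ when brands sell at different price points.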
So relative to our competition we sold 10 percent; ten percent of the shampoo bottles that were sold were our product, wrapped in our brand. We don't have fifty percent market share, we don't have sixty or seventy percent market share. It doesn't mean that ten percent is not a very profitable business, but it puts it into perspective for us. Now we can decide whether we want our goal to be to increase our market share, but in some categories that are mature, and we're going to talk about the product life cycle, it may not be possible to grow sales because the market isn't growing. So the only way you're going to be able to increase your market share in a category that isn't growing is by stealing share from your competitors, which means their customers start using our product, they start buying our product. So these are some metrics that are used to evaluate our performance as executives. Customer satisfaction is an important one. How do you know if your product is a success? Not just by the number of units that you sell: what are the customers saying, would they buy your product again, do they find your product easy to use, is your product helpful? So a lot of market research is done around customer satisfaction and around brand awareness. A lot of research is done around branding: the level of brand awareness, whether it's brand recognition or brand recall, brand image, brand perceptions. In fact, what we do is develop a perceptual map based on our marketing research to understand where we're positioned in the marketplace relative to our competitors. Market share, we said, is relative to our competitors, we're looking at how many units we sold relative to the competition, and we said that's very insightful. And in terms of positioning, it's also very helpful to develop a perceptual map to understand where we're positioned in the market relative to our competition. So,
a Honda and a Mercedes and a Porsche, are they all positioned the same in the marketplace? Their positioning is not the same. So we look at different dimensions: we look at quality, we look at price, we look at any number of dimensions, and develop a perceptual map. Very often we develop five or more perceptual maps so we can understand where we're positioned relative to competitors. And importantly, keep in mind that we can reposition ourselves in the marketplace. The reason we do that research, the reason we try to understand where we're positioned relative to the competition, is so that we can determine whether or not we want to reposition ourselves. If we're seen as being low quality and high price, that's a problem, and we need to fix it. The good news is we can, with a marketing communications plan: if we successfully implement a marketing communications plan, we can change the perception that the target audience has of our brand and our products. We looked at the BCG model. We said that with all those strategic business units, we might have 10 or 12 of them, one is TVs, one is laptops, one is tablets, another is cell phones, another gaming consoles, we need to be able to determine which ones are performing the best. So we need to do portfolio analysis, and the BCG model allows us to do that: we classify our strategic business units as either stars, question marks, cash cows or dogs. Stars, in the left corner here, have a high market share, so more than 10 percent; high could be, let's say, 50 percent market share. And in terms of market growth, the market is growing significantly, not 3 percent, not 5 percent, but significantly, maybe 50 percent or more. So if we have a strategic business unit that has a high market share and is in a category that's growing rapidly, that has a significant amount of growth, then we would classify that
strategic business unit as a star. This is important because it's going to help us decide how to allocate our resources. Let's say we have a hundred million dollar advertising budget and we have ten strategic business units. Do we just give every strategic business unit ten million dollars? Well, maybe some should get nine million and others twenty-five million dollars to spend on advertising, depending on whether they're classified as a star or a dog. This one kind of looks like a dinosaur, at least the way I draw it; if you look at the YouTube lectures you'll see that students commented on one of my sketches that it looks more like a dinosaur than a dog. I'm just doing a quick review. We said that there are different market-product strategies: market penetration, market development, product development and diversification. Market penetration means we're going to increase the sales of our existing products in our existing market. How are you going to do that? How are you going to achieve a market penetration strategy, what do we need to do to increase the sales of our existing products in our existing markets? Yes, sell some more, but how? Make it more accessible, so have it distributed in more retailers. So instead of just having distribution at Kmart, maybe now we can get distribution at Walmart. If we're able to get distribution at Walmart, isn't that going to enable us to sell more of our existing product? Yeah, we're probably going to sell a lot more if we weren't selling at Walmart before. What else? What else could we be doing? Coupons. So in a price-sensitive market, where there's a high level of price sensitivity, if we drop a coupon, which in effect reduces the price, then we'll end up selling more. Does that make sense? So coupons increase sales when the product is subject to a high level of price elasticity of demand, so it is price
sensitive: when we lower the price, demand is going to increase. So we could get more distribution, we could drop a coupon. What else are we going to do to increase the sales of our existing products in the existing markets? What else? Advertise. So advertising: if we spend money on advertising, I'd like to think that means we're going to sell more of our current products in our existing market. If we're selling the product in the United States only and we advertise, then I think it's reasonable to assume that we're going to increase the sales of our existing products in the existing market. Market development, though, means that we're going to increase the sales of our existing products in new markets. So now we're going to sell not just in the United States but in all of North America, which means we're going to sell in Canada and also in Mexico. So how are we going to increase sales? We could implement a market development strategy, which means we're going to sell our existing product, not a new product, the existing product, but sell it in new markets: sell it now in Canada, now also in Mexico, maybe we also want to sell it in Europe. Product development means that we're going to develop a new product and sell it to our existing customers. So we're going to introduce an mp3 player, then we're going to introduce a tablet, and then a phone. That's an example of a product development market-product strategy. And which company is a good example of that product development strategy? Apple. They introduced the iPod, then the iPad, then the iPhone, the iWatch, the Apple TV, the iPod touch. Those are all examples of a product development strategy: that's an illustration of a company that increased its sales by selling new products in the existing market. But another strategy is
diversification, which is to sell a new product in a new market. But we need to be sensitive to the level of brand elasticity, how far we can stretch our brand. Because, what do you think, can we sell Apple ice cream? How is Apple positioned in the marketplace? It's positioned as an innovative electronics company. So when you think about brand extensions you have to think about the level of brand elasticity, how far you can stretch your brand. We looked at Ben & Jerry's, and this is a good example. What did we say? Market penetration: the current product in the current market. How are we going to sell more Ben & Jerry's ice cream, for example? We said we're going to advertise, we're going to increase the level of distribution, we're going to drop a coupon. Right, Katherine? Katherine says we're going to drop a coupon. But in terms of other strategies, we could also implement a market development strategy, which means we're going to try to sell more of our current product by selling it in new markets. As we suggested, now we're going to sell not just in the United States but in Canada, in Mexico, in Europe, in the Middle East. And then product development: we said we're going to sell a new product. It could be something that's new to the world or new to the company, and it could be a brand extension, like we were discussing with the iPod, the iPad, the iPhone, the iWatch, or it could be a completely different product. What do you think would make sense? Shampoo? What about shampoo? No? But it would have to be within our current market, so it would have to be in the United States. Now, we might introduce a completely different product, the Apple shampoo, and sell it in markets outside of the United States, which is our current market; that would be an example of diversification, selling a completely different product in a completely new market. SWOT: strengths, weaknesses, opportunities and threats. We need to
understand, when we're developing our plan, both our business plan and our functional plan and of course the corporate plan, what our strengths, weaknesses, opportunities and threats are. We talked about that last time, and here's an example of some strengths, weaknesses, opportunities and threats for Ben & Jerry's, some of the things we talked about. We said, for example, that one of the strengths of Ben & Jerry's is that the brand is well known, it has a high level of brand awareness; that's a strength. A weakness is that sales growth has slowed in recent years. So then we need to decide what we're going to do about that. When you develop a SWOT analysis it's not just meant to be interesting, it has to be actionable. When we identify these weaknesses we need to do something about them; we need to do something to increase our sales. And then we talked about different opportunities in the market and also threats. All right, so now let's talk about segmentation. Okay, good, we've still got three more hours. Segmentation, you're right, Marissa, segmentation. There are different ways that we can segment the market, and one of them is what? Demographic. So when we segment the market based on demographics, what does that mean? What is demographic segmentation? There are different ways we can segment the market, and demographic is one of them; what does it mean to segment the market based on demographics? Good, James. James says by age, for example. That's a demographic criterion, so we could segment the market based on age. What does that mean? We're going to develop groups. What we're doing is dividing a market into sub-markets based on, in this case, age. We're going to try to identify different age groups that have similar needs and wants: a particular age group, let's say 19 to 29, and an age group from 30 to 39, and from 40 to 49,
and from 50 to 59, for example. We can identify those as unique segments. The reason that's significant is that our expectation is that each one of those age groups is unique relative to the other age groups: their needs and wants are going to be different, and they're going to respond to the marketing mix in a different way. What does that mean? For example, based on the age group, they may or may not purchase a given product at a particular price. Look, for example, at products in the market that have a good-better-best pricing strategy: you can buy essentially the same product at $4.99, $5.99, $6.99, $7.99. Isn't that, for example, the strategy that Apple has implemented with the iPhone? Aren't they at different price points? If you get the iPhone that only has 16 gigabytes instead of 32 or 64 or 128 gigabytes, it's at a much lower price. So you can get an iPhone for $299, for $399, for $499, $599, $699, $799. Why is that relevant? Because different segments are willing to purchase that product at a different price. Not everybody is willing to buy the product at $799, which is, by the way, 800 dollars; you might consider $799 to be what we call a magic price point, but it's 800 dollars, and not everybody is willing to spend 800 dollars for an iPhone. So what should they do, just sell one model, that's it, one model at 800 dollars, take it or leave it? It would certainly simplify their manufacturing process, but they realize that you can't do that, that you need to segment the market and identify different opportunities to sell the product, and one of the ways we can do that is based on price.
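To make that price-segmentation idea concrete, here is a small Python sketch; the willingness-to-pay numbers and the price tiers are made-up illustration values, not real iPhone data:

```python
# Each hypothetical customer has a maximum price they're willing to pay.
willingness_to_pay = [120, 250, 310, 420, 480, 550, 640, 700, 790, 820]

# Good-better-best price points, cheapest model to priciest (illustrative).
price_points = [299, 399, 499, 599, 699, 799]

for price in price_points:
    # Count how many customers a single model at this price could capture.
    buyers = sum(1 for wtp in willingness_to_pay if wtp >= price)
    print(f"${price}: {buyers} of {len(willingness_to_pay)} potential buyers")
```

In this toy data a single $799 model would capture only one of the ten customers, while the tiered lineup reaches most of them, which is the payoff of segmenting by price.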
Marketing_Basics_Prof_Myles_Bassell | 4_of_20_Marketing_Basics_Myles_Bassell_28.txt | All right, so we're going to pick up where we left off last time, which was a discussion about segmentation. We were talking about different types of segmentation, and we're going to continue; we're just going to review a couple of key points first. Today we're talking about chapter 9, we'll talk a little bit about chapter 10 and touch a bit on chapter 11, but don't worry, next time we're going to get into chapters 10 and 11 in more detail. I just want you to see the big picture of where we're going and how significant segmentation is: segmentation and positioning, then how that ties to products, and then how the products are related to brands. One of the important takeaways is that the brand is what's wrapped around the product; that's what this visual here suggests. We say that all products in a given category have the same functionality, so for example cars all provide transportation, but what makes one car unique from another is the fact that they're wrapped in different brands, and the brand is what differentiates one product from another and communicates the value. A brand is a very complex entity: brands have personalities and identities, and importantly, brands can accumulate equity. We're going to spend a lot of time talking about brand equity, certainly in a lot more detail in chapter 11, because, for example, the Coca-Cola brand has an estimated value of about 68 billion dollars, which is quite significant, almost seventy billion dollars. And you might think, well, that's a lot, 68 billion dollars, there aren't many companies that are even that big. But when I say 68 billion dollars, that's not the assets of the entire company; that's just the value of the brand. That's why it's so compelling, and why from day one we started to talk a bit about branding
and why it's important. So if you look at companies that are successful in the marketplace, they have accumulated a portfolio of power brands, but we'll talk more about that. Let's try to continue where we left off regarding segmentation. I want to just briefly recap: who can tell me some of the key criteria for segmenting a market? Remember, we said there are several things that we look at when we segment a market, and we said there are also some criteria we use when we're selecting particular segments, because we're not going to try to penetrate all segments; some are more preferable than others. But first let's talk about some of the criteria that we use in segmenting the market. Okay, so the segments that we identify, we want the potential customers in them to have similar needs and wants, is what you're saying. Absolutely. When we divide a market into sub-markets, or aggregate potential customers into these groups or segments, certainly, as was just said, we want them to have similar needs and wants. Go ahead. Large, absolutely, large. Now remember, I said last time it doesn't mean that we can't be successful focusing on a small segment, which we refer to as a niche, but more often than not it is important to identify segments that are large. Reachable, right, reachable, and we talked a bit about what that means: in other words, that we're able to access them through our marketing communications plan, which is very important. Age? Okay, age is a type of segmentation, a type of demographic segmentation; it's not one of the requirements, but I see what you're saying, we could certainly segment a market by age. And people who respond in a similar way, right, so they respond to the marketing mix in a similar way. So we have large, reachable, similar needs and wants, and responding to the marketing mix in a
similar way. Now, who can explain that? What does it mean to respond to the marketing mix in a similar way? What does that actually mean? Their type of behavior when it comes to consuming the product: part of it could be that they'll pay the same prices, or they buy online, or they go to the store. Yeah, so at a certain price a significant percentage of those in the target market would purchase the product; price is certainly one of the elements of the marketing mix, and they're going to respond to it in a similar way. And you also suggested place, which means that they shop for the product in a similar channel of distribution. Last time we talked about the fact that a particular segment that we've identified, and this is very strategic, this is something where we have to leverage our critical thinking skills, the people in this segment, our potential customers, might all shop online. That's important to us, that we've identified a segment that has that type of purchase behavior, as you were suggesting, that they all shop online. Why do we care? Why don't we just look at the entire market? Say we want to sell our product to all men. Why is that so crazy? Why does it matter that they all have similar needs and wants, or that they respond to the marketing mix in a similar way? Chances are, from age 18 to 100, you're not going to have the same interests. Absolutely. If you market an Apple computer to a 98-year-old, he's not going to buy it; how can you market to that? You're right, as much as we're all fond of Apple-branded products, it's unlikely that we're going to close that deal. Yes, go ahead, John, no pressure. Like the quote from last time in class, that if we only target the people we know will buy, we know we're missing
out on part of the market, meaning we want to target people that we know are going to buy our product, to be as specific as possible and reach those people specifically. Yeah, so we know that there might be some waste, but we're trying to be as efficient as possible, and if we have segments whose members have similar needs and wants, and they respond to the marketing mix in a similar way, and the segment is large and reachable, that makes marketing efficient for us. Now, the thing is that we're still going to have multiple segments, and we're going to have to customize our marketing mix for each of those segments, and the more specific the better. Remember I said, if we're selling a product and our target market is 18-to-25-year-olds, you don't want me to be in the commercial, because that's not going to be a selling point. "Oh yeah, I'm going to buy the product the professor buys," no. You'd like to think, well, the products that the professor uses are not products that I would use, because I'm young and cool. So you want to have people in the commercial, for example, that the target audience can connect with, that they can relate to. Does that make sense? So we're going to identify multiple segments, and then we're going to have to decide which segments to focus on, which is called targeting. After we segment the market, after we divide the market into sub-markets, we're going to focus on certain segments. Now, why wouldn't we focus on all segments? What would be the challenge? Yes, go ahead. Absolutely, so for certain age groups the product is not relevant, or for certain religions or certain ethnicities. Really good point. All right, let's keep moving forward. We talked about geographic segmentation: that's dividing a market into sub-markets based on a region, for example, or a country, or a city;
those are types of geographic segmentation. We have to ask ourselves whether or not that's compelling or insightful enough, because when we do that, remember, if we segment the market geographically and we say region is one of the segments, now suddenly North America is a large region in terms of the number of people that live there, in terms of the population, hundreds of millions, and the same for South America, Latin America, Europe, etc. What is the assumption that we're making if we take that approach? That the people who live in those regions all have similar needs and wants. That's a pretty big assumption. In some cases maybe that's so, but most of the time it's not, so we need to customize our marketing mix. And the same would apply by country, though once you get down to the country level it's a little more reasonable to generalize than at the region level. Take Asia, for example: which countries comprise Asia? Japan, China, Russia, Korea, and in Central Asia, Uzbekistan, Kazakhstan, Tajikistan. So now think about the countries that you just mentioned, think about the cultural differences. We as marketers think of Asia as, like you said, China and Japan, and we think of the people who live there as Asians, but China and Japan have a history that is very unpleasant, so to say that their needs and wants are similar is a very broad generalization. Korea is also a very different cultural dynamic. Now, that doesn't mean that Asian countries like Japan, Korea and China don't have some cultural similarities, but there are also a lot of differences, and as marketers we need to be sensitive to that. You follow what I'm saying? So in terms of this one-size-fits-all idea, to think that we're just going to sell this product to all Asian countries
and we don't need to customize it in any way: these are very different countries, very diverse and different from each other. Take Japan, for example: Japan has established a very significant presence in heavy manufacturing. For quite a long time Japan has developed an expertise in manufacturing items like cars; that's what we mean by heavy manufacturing. Whereas China tried in the past to become a heavy manufacturer and failed; they are revisiting that now, so they are producing some cars, but really they have demonstrated an expertise in what we call light manufacturing, which is generally labor-intensive: a lot of cut-and-sew operations, which means making all sorts of apparel, handbags, things that require stitching, cutting materials and stitching them together, and other labor-intensive processes. So, very different countries in all aspects. That's what I'm trying to show you here, that they're different in a lot of ways, and that's why it's quite a generalization to say that they're part of the same segment, part of the same geographic segmentation, and that we'll just apply the same marketing mix to those three countries, let's say Korea, Japan and China, not to exclude the others. So you might want to go down from the region to the country level to the city level. Now you're at a level where I think you're more in a position to make some generalizations: you could say, well, for people that live in a certain city, whether it's Guangzhou or Shanghai or Beijing, I think it would be more reasonable to draw some assumptions and make some generalizations about their lifestyle, their needs and their wants, more reasonable to say that there are similarities we could identify. But couldn't someone argue that a product which doesn't need to be specialized or broken down for different segments is probably easier to sell to a large geographic segment? Maybe a
better product, sometimes; for example the iPhone, maybe they market it differently, but it's the same iPhone all around. But even different water companies have to use different styles, different bottle types, different patterns on their bottles to sell into a region. Well, that's all part of the marketing mix. If we're changing the product or the packaging, or the amount of memory that's in the product, whether it's 2 gigabytes versus 4 or 6 or 8 gigabytes, then we're customizing the product. And if we're, for example, selling in a market where the level of disposable income is lower, and we're trying to sell products that provide the same functionality, say a smartphone: in some markets we sell smartphones for 600 dollars, in some 500 dollars, in other markets maybe 100 dollars, but with less storage, maybe without the camera functionality, etc. So once you start to change all those aspects, you change the price, you change the elements of the product, then we're changing the marketing mix to meet the needs of that particular market. And that's ideal, that you've done that, because more often than not the needs are not similar based on region. Even countries in the same region are not going to have similar needs and wants; even within a particular city there are some people that are very affluent, that can afford to buy a model that costs a lot, and others maybe only the 100-dollar one. So those are just some examples: in some cases it's relevant to segment the market geographically, and it can be very insightful, while in other cases it's not going to be the key to successfully marketing our product. Doesn't it also tie in to the concept of social responsibility? I would like to think that it all ties into social responsibility and ethics, but tell me what you're thinking specifically, because you're adjusting your product to the consumers' financial needs. Yeah, I see what
you're saying. In that case, if we stick with the smartphone, if we believe that wireless communication is an inalienable right, that we feel strongly that everybody needs to have wireless communication, or that everybody should have internet access, and we talked about access to prescription medication and so forth, sure, we might position it that way; that would be an interesting way to approach the market. But social responsibility sometimes just seems like the company wants to make the most money and presents that as being responsible. Well, a company could sell a product at multiple price points, and you're right, that doesn't mean they're doing something socially responsible, but I think the way you were suggesting it is that we would present that as being our motivation, not just that we want to sell wireless communication at 100 dollars. You're right, you could have a good-better-best pricing strategy, which is very common, and that doesn't mean you're engaged in social responsibility, but I think what he was suggesting is, couldn't we sort of spin it and say that the reason we're doing it is because we believe that everybody should have access to wireless communication? Obviously that's a bit of a stretch, but it's a way of presenting that as our motive. And imagine the owner of a company genuinely has that feeling of social responsibility and markets the product that way; if the business is big enough, they're not losing anything by it. The point is that you can do that, and companies do that now. What are some of the examples of companies promoting their activities as being something that's socially
responsible? Take Starbucks, for example — this idea of companies supporting fair trade. They also sell bottled water. Are they just selling water? What they say is that they believe everybody in the world should have access to fresh water, because, believe it or not, there are quite a few people around the world who don't have access to fresh water. We take it for granted in the United States — you go to the water fountain, or the tap in your house or apartment — but that's not the case around the world. So aren't they just selling bottled water? They position it as: part of the reason we're selling water is that we believe everybody should have access to fresh water. Isn't that the way they've positioned it? Or some companies say, for every product we sell, we donate a dollar to a certain cause. But aren't you really just selling laptops? What does donating $10 per laptop to breast cancer research have to do with it? What's the real reason you're selling laptops — to raise money for breast cancer, or to sell laptops? Does deciding to give money to a worthwhile cause mean that what you're doing is socially responsible? I don't want to digress too much on that, because I want to talk about segmentation — we could talk about it after class — but you raise an interesting point.

We talked about demographic segmentation, with examples like age, gender, race and ethnicity, income level, occupation, and level of education. Those are all good examples of demographic segmentation, and the reason it's so compelling, the reason we nearly always use it as an example, is because in many cases it is insightful: people in a certain age group, or of a certain gender, or at a certain income level do have similar needs and wants, they do respond to the marketing mix in a similar way, and these segments are large and reachable. And by the way, that doesn't mean everybody in the segment — don't get hung up on that. It doesn't have to be everybody, just a significant percentage of the segment responding in a similar way to the marketing mix. We talked about psychographics, which has to do with lifestyles, interests, hobbies, opinions, attitudes — that's what we mean by psychographics. And last time we talked about life stages: how people in different life stages — single, married, married with kids, empty nesters — have similar needs and wants and respond in a similar way to the marketing mix. We have to decide what's going to be most relevant for our particular product or service, but you can see how that's insightful. Is it plausible that people who are married with kids have some commonality? That seems plausible, but again it depends on our product or service.

Then where we left off was behavioral segmentation, and we started with usage rate. An example of behavioral segmentation is usage rate: how much of the product do we consume? Are we light users, using the product infrequently, moderate users, or heavy users? Why is that insightful? Why might heavy users have something in common, with similar needs and wants — the same being true of the other segments? Because what we're doing is aggregating potential or existing customers into these groups, and we're saying: we know there are customers who don't use our product frequently. Like, let's say
it's peanut butter. Some only buy peanut butter once a month; some buy peanut butter once a week — those would be the moderate users; and then some heavy users buy peanut butter not once a week but three times a week. How is that insightful to us? Why do we care? Or take milk: a light user buys one gallon of milk a month, moderate users buy a gallon a week, and heavy users buy a gallon every other day. How does that help us? What do you think? We would spend more of our marketing budget on the heavy users — advertise to the heavy users. We might do that, but why would we do that? I'm not disagreeing with you; I just want you to talk it through. What's the benefit of advertising to heavy users? Yes, absolutely — you raise a really good point. They're already heavy users of the product, and we need to sustain that. We need to make sure they don't have what's called buyer's remorse — sometimes called post-purchase cognitive dissonance — which means that after they buy the product they second-guess themselves. We need to manage that part of the process, so absolutely we need to reinforce: yes, you made the right decision, you bought milk instead of orange juice. We need to continue to reach out to them through a variety of approaches — certainly advertising is one of them — to get them to continue to buy. Excellent. So what about the others? We're going to spend some money to advertise to those who are already heavy users, but what else? Not only retain them — we can also modify the product across categories, like milk in zero-fat and low-fat versions, because longtime users can get bored, or there can be health concerns that prevent people from using the product. So we make a low-fat version, and for those who are not heavy users, to get them to use more, we can offer different varieties. Yes — we could augment the product, as you're suggesting, and add different features and flavors. Because with the light users, the thing is that we need to understand why their consumption of milk is so low. These are the things you do research on: you need to probe and keep asking questions to try to understand the purchase motivation, or the lack of purchase motivation. And you raise a good point: maybe the reason they're light users of milk is that they perceive milk as being high in fat or cholesterol. So if we come out with another version and market it as low-fat, or as a healthy source of calcium, then we're going to be able to attract those users. Among the different prospective buying groups we have users and non-users, for example. Some are non-users or light users for a reason, and we need to address that — we need to find out why they're light users. The same thing with orange juice: you ask why they don't drink orange juice, and they say, my doctor said I really need to get a lot of calcium in my diet, and vitamins A and D are important to me. So we address that issue. We have to overcome those issues and concerns, those reasons that people aren't buying or using our product.
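The light/moderate/heavy grouping above can be sketched as a small classifier. This is a minimal illustration, not anything from the lecture itself: the purchase-frequency thresholds and the customer data are invented, loosely following the peanut-butter example (a monthly buyer is light, a weekly buyer moderate, a several-times-a-week buyer heavy).

```python
# Hypothetical sketch: bucket customers by usage rate.
# Thresholds are invented for illustration only.

def usage_segment(purchases_per_week: float) -> str:
    """Classify a customer as heavy/moderate/light/non-user."""
    if purchases_per_week >= 2:
        return "heavy"
    if purchases_per_week >= 1:
        return "moderate"
    if purchases_per_week > 0:
        return "light"
    return "non-user"

# Tally a made-up customer base (weekly purchase rates) into segments.
customers = [0, 0.25, 1, 3, 0.25, 2, 0]
counts = {}
for rate in customers:
    seg = usage_segment(rate)
    counts[seg] = counts.get(seg, 0) + 1

print(counts)  # {'non-user': 2, 'light': 2, 'moderate': 1, 'heavy': 2}
```

Once customers are tallied this way, each bucket gets its own question: retention and reassurance for the heavy users, and "why so little?" research for the light users and non-users.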
So this is definitely very insightful. And to your point, we're certainly going to spend money on heavy users because we need to keep them as our customers — it's easier to retain the customers we have than to attract new ones, with easier meaning we have to spend less effort. Even more reason to do it: these people have already used our product and liked it, they've already seen our print ads and our commercials, so we need to stay top of mind; we just need to reinforce. Our advertising objective is to build and grow the level of awareness — whether that's brand awareness, or continuing to support and enhance category need, what we sometimes call primary demand. That's what the "Got Milk?" campaign is all about: creating primary demand not for a specific brand but for a particular product type, which in this case is milk. The same is true of "Beef. It's What's for Dinner." — all of those are campaigns designed to create category need. And what about the light users? We don't know the reason. Maybe it's a lack of awareness; maybe they don't know the features and benefits. Maybe the reason they don't drink orange juice is that they don't know orange juice is high in calcium and vitamins A and D. That's what we need to understand. In some cases that's the light user's situation; in other cases it's because the orange juice is too acidic and it upsets their stomach. We don't know what the reason is — maybe it's too expensive. If it's too expensive or too acidic, we could change the product, and we could use advertising to communicate that orange juice is high in calcium and in vitamins A and D — to get the light users to become moderate users or heavy users. So this is very insightful once you understand that there's some commonality within each of these individual segments, that they have similar needs and wants. But each case is going to be different: we need to understand why they're light users, why they're not purchasing milk or orange juice or peanut butter. So we're focusing on the heavy users and the light users — but what about the moderate users? Shouldn't we try to move them too? Yes, absolutely: for all of these, what we want to do is increase the usage rate. That's our objective. If they're already heavy users buying milk twice a week, how do we get them to buy it three times a week? Four times a week? They might not need it, but we need to challenge ourselves to find out how to increase usage, how to increase consumption of our product or service. What about the light users — a new slogan? It depends, like you're suggesting, on the reason they're not purchasing. If it's really that the juice doesn't agree with their stomach, then no matter how much you advertise, they're just not going to drink it — who's going to drink something that gives them stomach pains? But you need to understand that; in some cases that might be only 10% of the light users, and the others have other issues, other reasons. Maybe a substitute product is less expensive. So if we're marketers of orange juice, why couldn't we have a good-better-best pricing strategy, where we have a premium brand of orange juice and then a less expensive, economy brand that light users will find affordable? So it's interesting, isn't it, to see that there is a different level of consumption by
different customers, and importantly — this is the key takeaway — after identifying and understanding this, as marketers we can influence it. Certainly that's what we're going to try to do, as you're pointing out: they're light users — how do we get them to become moderate users? And the moderate users — we need to understand why they're moderate users and not heavy users, and how we increase their consumption and usage of our product. Wouldn't there be another category called non-users — people who don't consume at all? Absolutely. A non-user would definitely be one of the prospective buying groups. So the way we're looking at it: at the top level we have users and non-users, and within users we have light, moderate, and heavy. And with the non-users we also need to ask why. You really need to know why, and very often — not just sometimes — you'll be surprised what consumers will tell you in research, because it's not what we think, what we use, or what we like or don't like. It only matters what the customer thinks, what they like, and what they would or wouldn't purchase. You just said it only matters what the customer thinks — do you ever try to change the customer's opinion, or would you rather tailor to what they want? Well, once we know what their opinion is, we could try to modify their behavior, but we need to understand their perspective first. In some cases it's something we're not able to change about our offering, and in other cases we have a solution, something that will address their concern. That's not always the case — maybe their concern is not something we can resolve. And how much of an effect does this group of non-users have on your organization? If you have, say, 2,000 people who don't use the product and another 8,000 who use it, even lightly or in moderation, is it even worth pursuing them? Right — so the next step, once we segment the market, is that we need to quantify the size of the market. What you're suggesting is that we need to do market sizing: we need to know, is this segment 5 percent, 40 percent, 55 percent? That's going to impact our decision. If light users were 55 percent, we might really start to think: all right, 55 percent are light users of the product, we just need to increase their usage rate — that's a very large segment that we would want to try to accelerate the rate of adoption of our product or service in. But if light users are 5 percent, it depends on how many people that actually is. Five percent doesn't sound like a lot, but 5 percent of the population of China is pretty significant, because there are 1 billion three hundred million people there — 5 percent is about 65 million people. I wouldn't be so quick to turn a blind eye to 65 million people; maybe we need to do some research and understand their requirements better.

Another type of segmentation I want to talk about is benefit segmentation — segmenting by the benefit sought. A good example is toothpaste. Within the toothpaste category there are different segments: by segmenting the market by benefit sought, we're grouping together customers who want cavity protection, white teeth, fresh breath, plaque control, or tartar control. This is a good example of how you can segment the market based on the benefit sought. Do you think this is insightful?
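The market-sizing arithmetic above — a small share of a very large population is still a large absolute segment — can be checked in a few lines. The population and share figures are the lecture's rough classroom numbers, not precise data.

```python
# Market sizing: convert a segment's percentage share into an
# absolute head count. Figures are the lecture's rough examples.

def segment_size(population: int, share: float) -> int:
    """Absolute size of a segment given total population and its share."""
    return round(population * share)

china_pop = 1_300_000_000        # "1 billion three hundred million"
light_users = segment_size(china_pop, 0.05)
print(light_users)               # 65000000 -- 5% is still 65 million people
```

The same function works with dollars or units instead of people; the point is that "5 percent" alone tells you nothing until you multiply it out.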
In other words, do these segments have similar needs and wants? David — no, you don't think so? Well, this is what Crest and Colgate have done: they segmented the market this way because they believe that people who want a toothpaste that's going to whiten their teeth share a similar need and want, and that segment is significant enough that they developed a specific product type focused on delivering that key benefit, while other items in their product line — we're going to talk a little bit about the difference between a product line and a product mix, and items in a product line — focus on delivering the other key benefits. Now, cavity prevention is something that transcends all those benefits: even when it's not the focus, you would like to think — I guess as the minimum requirement — that the product will prevent cavities. But when you see the commercials, when you see the product on the shelf, they emphasize different benefits. The packaging is the silent salesperson at the point of purchase: some packages focus on the fact that the product will prevent cavities, others promise fresh breath, whitening, et cetera. A question: do you think it's a bad strategy to go all-in-one — like Colgate Total, which says it does everything, tartar control and all the rest? Doesn't that miss the crux of the market in terms of your target? If I want cavity protection, I want to see "cavity protection" in big letters, not cavity protection, whitening, fresh breath, and all these things, when I'm really looking for that one thing. Yeah, I agree — I think it does undermine what we're talking about. Is it bad? A strategy can evolve, and maybe the research suggested that these individual segments have more in common among themselves than independently. Maybe, after segmenting the market this way, they ultimately said it's six of one, half a dozen of the other — maybe the customer has come to expect all five of these benefits in one product, because there is definitely a group of consumers who want multi-functionality in everything, just as we have phones that send text messages, access the internet, and take pictures. Is it bad? It's hard to say without knowing the research. I think this segmentation is very compelling, and then, yes, you scratch your head and try to understand why they would do that — because they still sell the ones that promise white teeth or fresh breath, and then they have one that does it all. Maybe that's the other segment we don't have here: the segment of consumers who want a little bit of everything. So maybe that's their rationale: there's definitely a large group of consumers who want this one benefit — whitening — and groups who want the others, and then there are some who want all of them. We'd have to know what percentage of the category each represents — maybe this one is 10 percent, maybe that one is 35 percent — but maybe they feel that segment is large enough that there should be a product that is all-encompassing, with multiple benefits. Based on what we know about the benefits sought, in other categories this is less relevant, but certainly this example is very compelling. All you need to do is go into the store and look at the shelf for toothpaste, and you can see
where this segmentation comes to life. But when you spread yourself out, maybe in this case a little too thin, claiming to be able to do everything — don't you run the risk of losing credibility with customers? So should we limit the offering? Like Henry Ford said of the Model T: any color you want, as long as it's black. Operationally that's brilliant, but it ignores the needs and wants of the customers, which is that people don't just want the Model T — they want models A, B, C, and D, because, say, they have a large family and need a bigger car, and not everybody likes a particular color. Some people like black, some blue, some green; some want yellow cars, some want orange cars. And maybe by keeping the benefits separate you're claiming one thing, and the customer is able to buy into that and believe it, whereas when you claim to do everything, it's sometimes difficult for the customer to take the product seriously. So you're saying that something like Colgate Total is probably not such a good idea? Yes, absolutely — there might be a credibility issue. Initially people may not believe it; there may be some skepticism. I can agree with that: the product could be too multifunctional, with too many promises — it does this and this and this, and you think, really? You raise a good point; that could definitely be a problem. And doesn't this product-benefit segmentation usually lead to the development of new products — like the benefit of tartar control, or whitening strips, for example? Absolutely. One of the things we try to do in research, as we said, is identify the unmet need, the needs and wants, and that's what's going to fuel product development. Once we find out in research that someone says, if I were going to develop a toothpaste, I would develop one that could whiten teeth — that's an important benefit to me — then it's up to the marketing team and the technicians and scientists to see whether we can come up with a formulation that would actually whiten teeth, or a formula that would actually reduce the level of tartar or plaque. So absolutely, in research we're trying to find out what products we could produce that are going to meet those needs. Very good point.

All right — this is very important, and it will come up again: understand segmentation, the significance of segmentation, and the criteria. And we said that after we segment the market, importantly, we need to quantify the size of the market. It could be a percentage, it could be in dollar terms, it could be in units, it could be the number of people — to understand how large the segment is, because, as we said, one of the criteria is that it's large. So first we segment the market, then we determine how much: is it 50 percent or 5 percent? Is it 1 billion people, or 300 million, or 80 million? Is it a market that sells 200 billion dollars a year or 200 million dollars a year? Do they sell 50 million units, or 50 thousand units of that particular item in a given year? That's called market sizing — there are different ways we can quantify the size of the market, but it's important because generally we want the segment to be large, and the question is how large. So that's market sizing, and then once we size the market, we have to select the markets that we're going to
penetrate. And we said it's going to be very problematic, realistically, to try to penetrate all the segments. For example, suppose we're an apparel manufacturer — we make clothes, we start this company — and we decide we're going to penetrate all segments: one segment would be jeans, so we're going to sell jeans, and we're going to sell sweaters, and t-shirts, and polo shirts. How? Like you said in terms of new product development: how big is our team? How many designers do we have? How could they possibly design all those different product types and launch them simultaneously? It's going to be very challenging. That doesn't mean we can't have a five-year, ten-year, fifteen-year plan where we introduce jeans first and then develop other items of clothing or apparel. So size is something we're going to consider, but what were some of the other criteria when we're selecting? Remember the sequence: segmenting, quantifying, selecting, and positioning. We divide the market into sub-markets, we quantify those markets — we determine the size — and then we select. Besides size, what else did we say matters? Not the criteria for forming the segments, but for selection — for selecting the particular segments we're going to penetrate. We have all those different segments — white teeth, tartar control, plaque control — and we're going to pick not all of them but some of them. Or if it's countries, we're not going to say we'll penetrate 100 countries; we have to decide to focus on, say, Italy, France, and Germany. So how do we decide? One criterion was the size of the market. What else? Growth rate — remember, we said the growth rate of that particular market is an important criterion. How do we decide which to select? We look at size, at the growth rate, and at the overall market attractiveness of the particular segment. How much is it going to cost to penetrate that segment? The level of concentration — remember, we talked about whether the market is highly concentrated or highly fragmented. And I shared with you Porter's five forces model, which is a model we can use for determining market attractiveness. It includes the threat of new entrants: how likely is it that competition will enter the marketplace? In some cases the barriers to entry are very high, and it's unlikely that if we enter the market other competitors will follow behind us. You see why the alternative could be problematic: if we enter the market and then ten other competitors come in behind us, the market dynamics have changed very dramatically, and our ability to be profitable has changed very dramatically too. Then there's the threat of substitutes — other products that provide the same functionality could substitute for ours — plus supplier power, buyer power, and the level of rivalry among competitors. All of those are things we look at to determine the level of market attractiveness. And aren't they all interconnected? If you have a high growth rate, then there's obviously a lot of market attractiveness. Yes — ultimately, when we're selecting a segment or multiple segments to penetrate, we're trying to evaluate market attractiveness, and all of those are components of it: the size of the market, the growth rate, the level of rivalry, the threat of new entrants, the threat of substitutes, buyer power, supplier power. We look at all those metrics to
try and determine how attractive the market is. Is it better to launch our product in France, or Germany, or China, or Israel, or Iraq? That's what we're trying to decide — and then, ultimately, how we're going to position our product and brand in the marketplace. Remember, I said positioning is the space that we occupy in the customer's mind. We're going to talk about that down the road, and specifically we're going to look at a perceptual map. The perceptual map is a graphic visualization of our positioning — importantly, our positioning relative to our competitors. When you're doing this type of work you'll generally do ten or twelve perceptual maps, and the reason is that each perceptual map looks at different dimensions. I'll give you a preview. On a perceptual map we look at how we're positioned relative to the competition: here we might have low price to high price on one axis and low quality to high quality on the other. Is there a market for products of lower quality? Yes, absolutely — so we shouldn't shy away from that. Think about where our brand is positioned relative to other competitors. Let's take cars. On this map — low price, high price, low quality, high quality — where is Ford? Low price, high price, or somewhere in between? In between, you say — so we'll put them here. And what about quality? Up here? No — a bit lower, so we'll put Ford here. Now, importantly, the fact that you don't all agree is itself important, because that's what we want to understand through our research: what is your perception of our brand relative to our competitors? Everybody's not going to agree, and then we can synthesize all that information and determine how the target market, or a certain group of customers, perceives our brand as being positioned in the market.
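A perceptual map is ultimately just data: each brand becomes a point on the two chosen dimensions. The sketch below is a hypothetical illustration — the (price, quality) scores are invented, classroom-style guesses, not survey results — showing how nearby points on the map suggest the direct competitive set.

```python
# A perceptual map reduced to data: each brand is a point on
# (price, quality) axes, both scored 0-10. Scores are invented
# for illustration; real maps come from synthesized survey data.

import math

positions = {            # (price, quality), hypothetical scores
    "Ford":     (5, 5),
    "Toyota":   (5.5, 6),
    "Honda":    (5.5, 6.5),
    "Mercedes": (9, 9),
}

def nearest_competitors(brand: str, k: int = 2) -> list:
    """Rank the other brands by Euclidean distance on the map."""
    origin = positions[brand]
    others = [(math.dist(origin, p), name)
              for name, p in positions.items() if name != brand]
    return [name for _, name in sorted(others)[:k]]

print(nearest_competitors("Ford"))  # Toyota and Honda, not Mercedes
```

With scores like these, Ford's nearest neighbors are Toyota and Honda — the direct competitive set — while Mercedes sits far away on both axes, consistent with calling it an indirect competitor in a different segment.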
And what's so helpful is that it's relative to the competitors, because the next thing we want to look at is, say, Mercedes. Where is Mercedes in terms of price? Highest — right. And quality? Up at the top. What about Toyota? Right above Ford on price, and quality a little higher. So do you start to see how this is helpful? It's not just where we're positioned; it's important to know that we occupy this spot and our competitors are over there, and we want to know who's in our competitive set — who our direct and indirect competitors are. This tells us that Toyota and Honda are in the same competitive set: we could argue they're direct competitors, and that Mercedes is an indirect competitor, since Mercedes also provides a means of transportation, but a luxury one. They're competing against each other, but in different segments and at different price points. What about Jaguar? Yes, we could put Jaguar up there, and BMW too. So now, strategically, if we're going to develop new products as you said, we have to decide where we're going to be positioned. Maybe we want to go here — maybe we decide to try to position ourselves over there — but that means we're going to be competing against Jaguar, Mercedes-Benz, and BMW. Maybe we can't get there from here, so to speak; maybe that competitive set isn't attractive. We need to decide where we're going to be positioned. All right, we have a few minutes left — let's start our discussion about products. Any questions? All right, let's keep rolling; we've got a couple of minutes, let's see what we can cover. There are different types of products, and you'll
see this in Chapter ten when we talk product is a general term use that term very loosely there's goods and services so when we use the term product and I know I realize that this might be a little bit different from the way that you're used to using the term but in marketing we use a term product and that's why I always try to make a distinction I always try to catch myself from using the word consumer right I always try to say customer who's customers are more general term as opposed to same consumer because consumer implies yeah I mean also but I mean implies us us as shoppers and what I'm trying to say justice that doesn't need to be us as shoppers but it could be before the business right so product is a general term refers to goods and services and when we talk about different types of goods we have durable and non durable I'm sorry I know that for marketers would expect something more creative but that's the trigger terminology durable and non durable and often the word non durable is replaced with the word consumable so those words are used interchangeable alright so what's the difference between durable and non durable go Katella into regular well let's keep going whether whether it stands up in the market type of thing where how long yeah how long it will last you Marcus like isn't going to fail after one season or there we go also the prompt itself how many times are you have to use it over and over again you can have to buy it more yes right exactly tell us say a lot that's right if it's if it's one of the other so if you're going to have to pull Bonnie constantly like you have to renew your purchase those are jobs in danger like leather jacket will last a long time whereas if you get a poncho if I won you have to keep getting new ones and they're not the same thing right right so a durable product a durable good is one that's reusable and we could use it many times it doesn't mean that it has an infinite life but we could use it again and again like 
like you're saying a leather jacket we could use it again and again but non durable or which very often referred to is like it's Super Bowl is that it has a limited number of uses right like juice right like orange juice like you buy a half a gallon of orange juice its consumable you're able to get ten glasses out of it and then that's it so orange juice toothpaste milk all those are considered to be consumable products yes bad would like the warranty beats oh I'm thinking weights like these like yeah really would that be considered durable too - like if they break you can always buy the fact that it's that you could use it again that it's not consumed that you could use the product and it doesn't get used up it could wear out sure any durable product could wear out your leather jacket could wear out your car could wear out but in terms are the definition of durable means that it's um there's numerous uses right that you could use it multiple times disposable camera versus right like disposable it for it's a good example right so if you want to say that it's disposable you might say that synonymous with consumable so it's important for us to understand that because that's going to change our marketing plan if our product is turbo versus consumable so consumable means like we said people are going to buy our product every week that's very different from saying people are going to buy our product every decade right so how often do people buy a car for example that's very different from saying somebody's in the store every week and they're buying Tropicana versus I buy a car every 10 years you see how that's going to really shape and define our marketing plan and there are some things that are sort of you know in between maybe a computer like you know five years you couldn't have six years so that's not really consumable but that's not you know also do that their holes like sort of permanent but when we say yeah you could you can make that distinction if you want to 
make a distinction between a product like a car versus a computer, right? I think you're trying to get at the lifespan of the product, which is that in some cases it could be twenty years, right? Even cars with high mileage could probably still stay around for 20 years, 25 years, but that's not so much the case with laptops; usually they just sort of stop working, and that's sort of beyond our control, no matter how many times you change the oil or rotate the tires, right? It just has its, like, built-in obsolescence. So yeah, that's fine, we can make the distinction that there are different levels of durability; that's certainly helpful to us. Understand that just because a product is durable doesn't necessarily mean that it's rugged. You see the difference there? That it's durable means that we can use it multiple times, we can use it over and over again, but that doesn't mean that if you drop it, it won't break. So we need to get comfortable with the terminology and the implications. But I think what you were getting at is right: like, you're thinking, oh yeah, the car, there is a big gray area. Yeah, so I think we should make that distinction between the durability of a product versus whether or not a product is considered to be durable versus consumable. What other questions? And that's also why Apple constantly updates their services and their products: the iPod is not the same iPod it was ten years ago, and that's what keeps people interested in a product, that it has new and different features.
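The repurchase-cycle difference described above (someone buying Tropicana every week versus a car every ten years) can be made concrete with a small sketch. The products and cycle lengths below are illustrative assumptions, not figures from the lecture:

```python
# Hypothetical repurchase cycles, in days. Consumables are repurchased
# constantly; durables only rarely -- which reshapes the whole marketing plan.
repurchase_cycle_days = {
    "orange juice (consumable)": 7,        # roughly weekly
    "toothpaste (consumable)":   45,
    "laptop (durable)":          5 * 365,  # about every five years
    "car (durable)":             10 * 365, # about every decade
}

for product, cycle in repurchase_cycle_days.items():
    purchases_per_year = 365 / cycle
    print(f"{product:26s} ~{purchases_per_year:5.2f} purchases per year")
```

Under these assumptions a weekly consumable generates about 52 purchase occasions a year against 0.1 for a car, which is why advertising frequency, packaging, and distribution decisions differ so sharply between the two.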
Marketing_Basics_Prof_Myles_Bassell | Marketing_4.txt | [Music] Matthew Nicole Nicole angry then you drop Julia Vanessa Roman Lily Tata Shamala Gianna Madeline motion hi Ella Oh Dave huh okay she's ready to roll roll about yummy see all right let's see max when BC and the other Alexis buzz Bonnie Oh Dallas Oh Thomas Perez going once going twice Catherine plan oh that's okay Catherine and jennifer pollock we're Philomena prisca yes Lisa wrap up Stanley Raymond a song let them Rose tendus turns in Sarah hi Sarah by Sir Geoffrey Sabina Catalina abu some money yes I mentioned Ron Johnson Ella sanella Daniel so no one's Gaia Szymanski that's what I said search Elizabeth Spencer can we still owe muhammad Subhan have a good night changes La Mina Sultana Kristoff there we go Justin Marco teapots Jessica that this is forgetting for pinafore graduation that's how they call you mimic graduation Ashley Thomas Ashley Thomas Lisette Lisette Opia Natalya okay Alexander here we go Bonnie people is it midnight yet let's see Edwin docent I am Danielle Williams have a good night Michael Winner Raglan's wrong Vivian wound Matthews along our enzyme and now we've got some people not on the roster. Okay, if you registered today, it's going to take 24 hours. You don't have to access Blackboard to do the introduction; it's okay, you can post it tomorrow, that's fine. Tomorrow you'll have access to Blackboard; it just takes 24 hours. So tomorrow you'll have access to Blackboard, and you'll have access to the link as well. All right, the course is also open on Blackboard already. Yeah, it only takes 24 hours; sometimes it'll actually update after midnight, so it's not even a full 24
hours; I think the system actually reboots, but definitely by the morning you'll have access. It's not exactly at 1 a.m. or midnight, but definitely by the morning. All right, that concludes our lecture of marketing for today. Join us next time, visit us at youtube.com slash Professor Bassell. Have a good night, and remember: your success is my number-one priority.
Marketing_Basics_Prof_Myles_Bassell | Marketing_7.txt | [Music] ...price points, but of course different features. So like I suggested, some will have 16 gigabytes of internal storage, some will have 32 gigabytes, 64 gigabytes, and now you can get 128 gigabytes. But we could segment the market by demographics, and the reason why that's significant is because what we would have found out through research is that there are differences, there are differences amongst those segments based on age. And what are some other demographic characteristics that we could use to segment the market? Marisa? Well, that would be geographic, right, so we could also segment the market based on geographics. But what about demographics? Gender. So gender would be a significant way to segment the market: males and females, men and women, they have different needs and wants. Is that reasonable? So when you go into the store, and I know this, you have shampoo for men and shampoo for women, right? I told you, I know a lot about shampoo. They have shampoo for men and shampoo for women. So what have they done? Obviously that's an example of segmenting the market based on gender. What else? Race? Yeah, absolutely. So the needs and wants of different races and ethnicities vary. It doesn't mean that there isn't any commonality; of course the product is going to be similar, but what the company needs to do is modify the product so it's relevant, so it's relevant to that particular race or ethnicity. And in the United States that's particularly relevant. In other countries the population is very homogeneous, which means that everybody is very similar; they don't have the type of diversity that we have in the United States. And importantly, in the United States the level of diversity is very substantial. So it's not that
you have some level of diversity; it's that, for example, the percentage of African Americans in the United States is pushing 15%: that's a very large segment. And the percentage of Hispanics in the United States is also very substantial, and the percentage of Asian Americans is also growing very rapidly. So those demographics are substantial and relevant to marketers, so we need to target our products; one size does not fit all. We have to customize the product to meet the needs of particular segments. So demographics, geographics: Marisa said that geographics is one way to segment the market, based on where you live, the region, the country. Yes, go ahead. What was that? Behavior. So we could segment the market based on what's called behavior, behavioral segmentation. So demographic segmentation, geographic segmentation, behavioral segmentation. What would be an example of behavioral segmentation? Buying online or buying in the store. And what else? Well, I think you're starting to talk more about what we call psychographic segmentation, which has to do more with lifestyle, hobbies, interests. But take, for example, usage rate. A great example of behavioral segmentation is usage rate, which means what? That the level of consumption varies. Not everybody consumes the same level of orange juice, do they? Don't we have some that are low consumers, some that are moderate, and some that are heavy users of orange juice? Light, moderate, heavy users. So the usage rate is another important way to segment the market. What we need to do is identify the light users, the moderate users, and the heavy users; we need to understand who those people are. And why do we care about segmenting the market based on the usage rate? Say 50% are heavy users. That's interesting, that's interesting, but more importantly, we said it needs to be actionable. What's actionable about that? What's actionable about knowing that 50% of the market is heavy users, 30% are moderate users, or 20%
are light users? So the amount of orange juice that they consume is small relative to other individuals. Why do we care about that? We've done the marketing research and this is what we found out: we found out that 20% of the market is light users, 30% is moderate, and 50% is heavy users. That adds up to a hundred, doesn't it? Just checking. Does it? You're not sure? Fifty thousand is what percentage of five hundred thousand? Yeah, you sound a little bit more confident now. It might be one of the questions on the exam. Yeah, but you guys are good at guessing, right? So what do you think, what are we going to do with that information? Maybe one of the things that we could do is increase the size of the container that the product is sold in, so it's more relevant to the consumer. So we're not going to just sell our juice in 16-ounce containers; maybe now, since we know that there are heavy users, that their consumption level of orange juice is high, maybe we need to start selling it in one-gallon containers: not sixteen ounces, but 128 ounces, right? A delivery service for those that are heavy users? Yeah, maybe so. Different brands of orange juice? So as a retailer, certainly we want to offer different brands: we're going to have Tropicana, we're going to have Minute Maid, Simply Orange; we're going to carry different brands of orange juice. Yeah, absolutely. Yes, so we want to find out, for those that aren't heavy consumers, why they don't consume a lot of orange juice. And yeah, you could spend more money on advertising to reinforce and reassure those that are heavy users, because we don't want them to have buyer's remorse; we don't want them to have post-purchase cognitive dissonance, which is buyer's remorse, right? We want to reassure them. But also, we're going to spend more money on advertising to reach those that are light users, to get them to increase their level of consumption. Maybe they're not
aware that orange juice now has calcium, which is historically a key benefit of what product? Milk. So we could increase the advertising to try and get the light users to increase their level of consumption. That's very important: we need to identify those segments, we need to do behavioral segmentation and understand the usage rates, but it can't just be interesting, it's got to be actionable. We need to be able to communicate, we need to advertise and communicate with the light users, for example, to get them to increase their level of consumption. So it's not just, oh wow, 20% are light users; okay, we have to do something about that. The good news is we can. So we're going to advertise more. Maybe the issue is price: maybe we should do what we've suggested before and drop a coupon. Maybe that's the issue; maybe orange juice is too expensive. What other ways can we segment the market in terms of behavioral segmentation? Another type of behavioral segmentation is known as benefit segmentation. So we could segment the market based on benefits. Take, for example, toothpaste. How have they demonstrated that they've segmented the market based on benefit? What we need to do is go into Target, for example, or Duane Reade or CVS, and we can see on the shelf what they've done to bring to light their benefit segmentation. So they've segmented the market based on benefit; what does that mean? We identify the different benefits that consumers want from toothpaste. What are they? Whitening, sensitivity, what else? Cavity prevention, flavor, plaque control, tartar control, fresh breath. Those are the different benefits that consumers want from toothpaste. So that being said, what did they do? They did market research to find that out: they probably did focus groups, which is qualitative research, and then they did quantitative research, which includes questionnaires. So they did questionnaires; anybody here ever do a questionnaire? Based on their research, they found out that there are
different segments in the toothpaste market, and based on that benefit segmentation, they developed different products to meet the needs of each one of those segments. So Colgate and Crest each have a toothpaste that's positioned in the market as whitening teeth, that the product will whiten teeth. And they also have another toothpaste where on the packaging you can see, clear as day, it says that it fights tartar, and one that fights plaque, and one that freshens breath. So that's an example we can see in the marketplace of benefit segmentation; that's a great illustration. So there are different ways that we can segment the market: demographic segmentation, geographic segmentation, psychographic segmentation, behavioral segmentation. Those are ways that we can segment the market. They're not the only ways, but those are some good places to start. We want to segment the market, and the reason why we do that is because segmenting the market enables us to identify opportunities, and if we act upon that research and bring products to market and brand products accordingly, then we're going to be able to increase sales. Didn't we say that sales was one of our marketing metrics? We want to sell more products, we want to generate more profit. Silence means agreement; I'll take that as a yes. All right, so we want segments to be large: a segment should be large, reachable, with similar needs and wants, and whose members respond to the marketing mix in a similar way. So what we're trying to do, before we talk about the selection criteria, is discuss what we would consider to be an ideal segment. We want segments to be large, we want segments to be large and reachable. What does that mean, reachable? Accessible. So they've got to be accessible, which means we need to be able to communicate with them, for example through advertising. To identify a segment but then have them not be able to see our
commercials or our print ads or billboards is a really big problem. So it really needs to be something that's actionable; it can't just be interesting, because if it's just interesting and we can't reach them, then it's meaningless to identify a segment that's not reachable. You need to be able to communicate with them. So we identify them, then we want to communicate with them. Is the messaging in our commercial, for example, going to be different for heavy users than for light users? Are we going to be saying the same thing in the commercial that's targeted towards heavy users as towards light users? Remember what we're trying to achieve is different: with light users we're trying to get them to increase their consumption. So is the message, what we talk about in the commercial, going to be different than what we tell people who are already heavy users, who are consuming a significant amount of orange juice every week? Yeah. So we need to be able to reach the segments that we've identified. They need to be large, reachable, have similar needs and wants. So within a segment, within a segment, let's say like those age groups that we identified, the needs and wants need to be similar; now outside of those segments, amongst the segments, the needs and wants are going to be different. And they're going to be large, reachable, with similar needs and wants, and they're going to respond to the marketing mix in the same way. So that means that they're going to buy the product, for example, online: most of the consumers in that segment are going to purchase the product, let's say, online. In another segment, a significant percentage of them might purchase the product at wholesale clubs, and others might purchase it in grocery stores or convenience stores or drug stores or specialty stores. Now, would that mean that everybody in that segment, who said homogeneous, but they're not going to be identical; it's not a segment full of clones. But most of the people, we're generalizing, most of the
people are going to respond to the marketing mix in the same way, which means that they would purchase the product at a given price. So some in that segment will purchase it at $500 but not at $800; that's a different segment. And most of them will purchase online, not in the store; they'll purchase it in a virtual store, not in a physical store, for example. Okay, so now that we've created those segments, we need to decide how we're going to select them. We said that we want them to be large, reachable, with similar needs and wants, and to respond to the marketing mix in a similar way. But now that we've identified the segments, we have to decide which segments we're going to focus on, because most likely we're not going to be able to focus on every segment. So let's say we decide to introduce our new line of clothing: we're going to introduce pants and jeans and long-sleeve shirts, and not just any long-sleeve shirt, but long-sleeve shirts that are made of cotton, and some that are made of silk, and some made of polyester, and some that are made of linen. And of course some of them are striped, and some of them are red, and some of them are blue, and some of them are yellow, and some of them are green. So you can see there's a lot of complexity; we're not going to be able to introduce a line that's going to penetrate every segment. So what I'm suggesting is, one of the ways that you could segment the apparel line is by product type, which is pants, shirts; those would be two key segments. Are you going to be able to roll out an entire product line for shirts and pants, and for men and women? Probably not. So we need to decide which segments we're going to focus on. We said that we could segment the market based on gender, so if we segment the market based on gender, now we need to decide whether or not we're going to introduce a line of apparel for both men and women. So what do we need to look at? Well, one of the things is the size of the market. So the criteria we're going to use to help select
whether or not we're going to introduce a line of apparel for men or for women is the size of the market. So how large is the market? Which market is larger: is the market for clothing larger for men or for women? We need to understand that, and we need to look at the growth rate. So independent of the size of the market, we need to understand how fast or how slow the market is growing. It's not enough to understand that the market is five billion dollars; we need to understand, is it growing one percent per year, ten percent per year, 20 percent per year, fifty percent per year? So what do you think, what would we decide: are we going to select a market that's growing one percent per year or thirty percent per year? Which do you think is better? Thirty percent, probably, right? All other things being equal. But then we can also look at the size of the market. So which is better: a small market that's growing rapidly, or a large market that's not growing? Those are things that we need to consider. So one segment might be five billion but growing thirty percent per year; the other segment might not be five billion, it might be fifteen billion, but growing one percent per year. We need to remember, what we're talking about now is not only segmenting the market; it's how we're going to decide which segments to penetrate, because we might have identified two segments, we might have identified ten segments. So we need to look at the size of the market, we need to look at the growth rate, and we need to look at the competitive set. So in each of those segments, who is our competition, who are our competitors? Let's switch examples and go back to our example about beverages. We segment the market by product type and we identify different products: water, what else, orange juice, milk, and soda, sports drinks, energy drinks, and of course those are the non-alcoholic beverages; then we could look at alcoholic
beverages. But just based on the non-alcoholic beverage segments, we need to decide which one of those: energy drinks, sports drinks, milk, orange juice, water, soda. Are we going to try to sell to all of those segments, or are we going to have to select, which is what we call targeting? We're going to have to target specific segments that we're going to penetrate. So we're going to have to target specific segments, and we're going to look at the size of each one of those segments, and we're going to look at their growth rate, and we're going to look at the competitive position, so where we are relative to the competition. So in a given segment, who will we be competing against: are we competing against Coke and Pepsi, or are we competing against XYZ company? Are we going up against a large, successful marketing powerhouse, or are we going up against small competitors? Is the market highly fragmented or is it highly concentrated? So we need to understand our competitive position. And what else? The cost to reach the segment: we need to consider how much it's going to cost us to reach a given segment. How much is it going to cost us to penetrate the orange juice segment, and is it going to cost less to penetrate the segment where we sell milk to consumers? And then we also need to look at the compatibility: how compatible is penetrating the orange juice segment with our organization's resources and capabilities? So in selecting a particular segment, we have to look at the size of the market, we have to look at the growth rate, we have to look at our competitive position, we have to look at, what else, the cost of reaching the segment, for example how much we've got to spend on advertising, and, what else, the compatibility: how compatible penetrating that segment is going to be with the rest of our organization. So does it make sense, is there going to be a good fit, selling orange juice and milk? Or maybe we should sell tequila: so we'll sell orange juice, milk, and tequila. Only top-shelf, right?
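The size-versus-growth trade-off from the discussion above, a five-billion-dollar market growing thirty percent per year against a fifteen-billion-dollar market growing one percent per year, can be checked with simple compound growth. The figures are the hypothetical ones from the lecture:

```python
def project_market(size_billions: float, annual_growth: float, years: int) -> float:
    """Market size after compound growth: size * (1 + g) ** years."""
    return size_billions * (1 + annual_growth) ** years

# The two hypothetical segments from the discussion.
for year in (0, 3, 5, 10):
    fast = project_market(5.0, 0.30, year)   # $5B segment growing 30%/yr
    slow = project_market(15.0, 0.01, year)  # $15B segment growing 1%/yr
    leader = "fast-growth" if fast > slow else "slow-growth"
    print(f"year {year:2d}: fast ${fast:7.1f}B  slow ${slow:7.1f}B  larger: {leader}")
```

Under these assumptions the smaller fast-growing segment overtakes the large slow one in about five years, which is why the lecture says to weigh size and growth rate together, all other things being equal.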
And vodka and rum. So we need to think about, well, how compatible is that going to be with our capabilities, with our manufacturing resources, with our mission, with our vision, with our values as an organization? We need to ask ourselves as an organization: do we feel that it's ethical to sell alcohol and to try to get people to increase their usage rate? So if we segment the alcohol category, is it ethical to sell alcohol, and is it ethical to try and increase the consumption rate? Say we find out that for some the usage rate of alcohol is low, hard to believe, right, but for some the usage rate is low, meaning they don't consume a lot of alcohol; for some it's moderate, and for some it's high. So is it ethical to try to increase the usage rate of alcohol? We need to think about that. Now, some products fail; in fact, most products that are introduced fail. Why do they fail? Why do companies do market research and then introduce new products, only for the product to fail? Because a lot of products fail; again, most products that are introduced fail. Why? Why do products fail? Marissa. Yeah, the timing could be bad: maybe the market is ready in 2008 but you got there too early, or the market was there in 2008 and you got there too late; one of those. You get what I'm trying to say, right? We'll get through this, yes we can. Did I tell you your success is my number one priority? Yes, that's why I'm willing to stay here till midnight to help you guys out. So, Marissa says bad timing: maybe you introduced the product before the demand for the product existed, so you were kind of ahead of yourself, you were ahead of the curve. Like, for example, those companies that introduced mp3 players. You probably think that Apple was the one that introduced mp3 players, that they developed the mp3 player. They didn't; they didn't develop the mp3 player. Now to Marissa's point, all those companies had bad timing: they introduced the mp3 player before the market was developed. What Apple
did was not develop the mp3 player; what they did was a better job of marketing it. They introduced the mp3 player and wrapped it in the iPod brand name, and supported that with significant marketing communications, advertising, and promotions, and they were obviously very successful with that product. But the companies that entered initially were not successful, and that was just bad timing. What about no point of difference? An insignificant point of difference is a reason why products fail. You need to differentiate your product from competitors, so there needs to be a point of difference; if there's no point of difference, very likely your product is going to fail. It needs to be different from the competitors'. A product could also fail because it's poor quality: if the product is of poor quality, that's a reason why a product will fail. Also, it's not enough to have a great plan; you need a plan that's flawlessly executed. You know what, you can have a good plan that's flawlessly executed and be tremendously successful, but you can have a great plan that's poorly executed and your product will fail. So for example, you did a poor job of branding the product, of the packaging, of promoting the product, of advertising the product: poor execution is another reason why products fail. Right, let's talk about the product life cycle. There are several stages in the product life cycle. What's the first stage of the product life cycle? Introduction. So the first stage of the product life cycle is introduction. At the time that the product is introduced, we call that time zero; here sales are zero. But then what happens over time, as a result of our marketing plan being flawlessly executed? What's going to happen? Sales are going to increase. So this is sales on this axis here, and sales are going to increase; that stage of the product life cycle is known as growth. Now, importantly, as marketers what we want to do is continue the period of growth, because if we don't figure out a way
to continue to grow sales, to increase the number of units that we're selling of our product, then what's going to happen is there's going to come a point when sales become flat. Where sales are flat, that's known as maturity: sales are no longer growing. So right now we're talking about the product life cycle. We said that the first stage of the product life cycle is introduction; after the product is introduced and we've had a chance to implement our marketing plan, the marketing communications plan, then sales are going to start to grow. Why? Because we're advertising, and because we have distribution at Walmart and Kmart and Best Buy. So sales are going to start to grow. But this model is a foreshadowing of what could happen if we do nothing: there's a cost of doing nothing. If we stop spending money on advertising, what's going to happen? Sales are going to decrease, and if that goes on for an extended period of time, then what actually could happen is that sales are going to decline. So the product life cycle stages: introduction, growth, maturity, and decline. Remember, this is a foreshadowing of what could happen. Our job as marketing executives is to try our best to keep the product from becoming mature, to keep sales from declining; what we're trying to achieve is to keep sales growing. So we have introduction, growth, maturity, decline. Anything else? Where does it say that in the book? Who found it in the book? Chapter 11, page 275. So introduction, growth, maturity, decline. All right, so now be astonished, be astounded; I just wanted to make sure. All right, watch this. The book says that there are four stages: introduction, growth, maturity, and decline. But I'm here to tell you that there are more stages. That's right, that's right; if you wanted four stages, you wouldn't have taken this class with Coach B. There are other professors that can tell you
about the four stages; I'm here to tell you about the other stages. Well, survival is another aspect, but not a stage in the product life cycle. So after decline, what could happen? Obsolescence: the product could become obsolete. So it's not just that sales decline, but sales basically can go close to zero, or to zero, so no sales at all: obsolescence. So introduction, growth, maturity, decline. Here we're saying sales were growing and reached a point where, because of market conditions, but also, we have to take responsibility as marketing executives: if our sales stop growing, whose fault is that? Sure, maybe the category stopped growing, but what are we doing as marketing executives, what are we doing to keep sales growing? Well, advertising: we could spend more money on advertising, that could make a difference. We could modify the product, add new features and benefits. Look at the iPhone, for example: at one time you could only get about 16 gigabytes. But how do you get people to keep buying? Well, then you could get 32 gigabytes, and then 64 gigabytes, and then 128 gigabytes. They could have given you 128 gigabytes a long time ago, but instead they managed the product life cycle. That's what we need to do as executives; we can't just accept this. This is a foreshadowing; remember, that's why this is so tremendously powerful and insightful: we need to understand that this is what will happen unless we do our job as marketing executives. We need to manage the product life cycle, not just say, yeah, sales are going to plateau. That's not acceptable; you'll be fired. Fired, fired! Some of my students are cheerleaders; they could come in here and do a cheer about it. So you will be fired. So, decline, obsolescence, and then, hold on to your seats, folks, you know what else? Revitalization. Revitalization is another stage of the product life cycle: you can bring a product back to life, you can reposition a product even after sales have declined, you can revitalize that product and make it relevant again to, let's say, consumers. So those are the stages of the product life cycle: introduction, growth, maturity, decline, obsolescence, and revitalization. Can you think of an example of revitalization? Yes, good. Le Tigre, for example: the clothing line that was very popular in the '80s has repositioned itself in the marketplace, and you can buy that product in Macy's even today. Now, that doesn't mean that you could get people to buy VCRs now, right, because VCRs have become obsolete. I bought one of the first VCRs, I shouldn't say this out loud, for $1,200, because I was an innovator; I bought one of the first VCRs for $1,200, and sales continued to grow until the point where sales of VCRs matured and then declined, and then VCRs became pretty much obsolete, because what do people buy instead? DVD players. So it's possible to manage the product life cycle; that's what's important to take away, that it's not a foregone conclusion. The key takeaway is that we can manage the product life cycle; that's what you need to keep in mind, that we're able to manage the product life cycle. Now, the adoption curve model is also very insightful. The adoption curve model suggests that people don't adopt, they don't purchase the product, at the same rate. Early on there's a group of consumers that'll purchase the product, that want to be the first to purchase the product; they're known as innovators. Then the next group is the early adopters, then we have early majority, late majority, and finally laggards. What are laggards? Laggards, l-a-g-g-a-r-d-s: laggards are non-adopters. So we have to realize, when we introduce a product, that there's some percentage of consumers, for example, that won't purchase the product; we need to know that there are going to be some non-adopters, some laggards. So when we introduce the product, we're going to sell to
innovators. Those are the first to purchase the product. Now our challenge, though, is to figure out what we're gonna do: how are we going to modify the marketing mix, the four Ps, the product, the price, the place, and the promotion? How are we going to modify that to get the early adopters to purchase now? The innovators have purchased; how are we going to get the early adopters to purchase the product? What are we going to do? We need to modify the marketing mix, we need to modify the four Ps, so that the next group, the next group in this model, the adoption curve model, will purchase. So this concept goes hand in hand with the product life cycle; we need to think about them simultaneously in managing our product. There's different groups that are going to purchase the product at a given price, with certain features and benefits, for example. We need to be aware of that: when we introduce the product, everybody is not going to purchase. We're not going to sell 50 million units the first year; over the course of the life cycle we might sell 50 million, we might sell 150 million units. So we need to understand that there's different needs and wants in the market, and different aspects of the marketing mix are gonna motivate people to purchase. We need to be aware of that, so we shouldn't think that we're gonna sell to the whole market in the first year. What we need to do is try to control and influence and manage the rate of adoption. We don't want the rate of adoption to be too slow. Why? Because competitors could preempt us. Competitors can copy what we're doing; if we don't have a sustainable competitive advantage, they might copy what we're doing. So we could modify the product, we could add new features and new benefits, change the appearance, change the packaging, change the color, change the storage capacity, change the number of megapixels in the camera, who knows. Those are all ways that we
could influence the rate of adoption and manage the product life cycle. So modifying the product, changing the features and the benefits, and repositioning the product are ways that we could influence and manage the product life cycle. We could also modify the market, which we talked about; new product development, product development, and diversification would be good examples. Let's talk about branding and the branding hierarchy. We said that the brand is what's wrapped around the product. So after we do our marketing research, what we need to do is make it real. It needs to be actionable, and you have to develop strategies and tactics that are going to make it a reality. One of the things we're going to have to do is develop a product, and we also have to develop a brand, a brand strategy. There's three levels in a brand hierarchy. The first level is the corporate brand; sometimes the term that's used is the parent brand. An example of a corporate brand would be Toyota Motor Sales USA. That's the company, that's the corporation, so the corporate brand is Toyota Motor Sales USA. Now they also have master brands. So in a brand hierarchy there's a corporate brand, there's master brands, and there could be sub-brands. We're talking about a brand hierarchy. Now, we said that the brand is what's wrapped around the product. A successful company has multiple brands in its portfolio, and for a successful company, the single most valuable asset is an intangible asset: its brand. What do you think the value of the Coke brand is? Now when I say the value of the Coke brand, I don't mean the value of their manufacturing facilities or their manufacturing equipment. I don't mean the value of their headquarters in Atlanta, Georgia. I don't mean the value of all the soda that they have in the warehouses. I don't mean the office equipment. Just the brand: what is the value? What do you think? How much? 100 billion? Anybody else? How much? Priceless? Anybody else? Nobody else? 70? Yes, you might
actually do well on the exam; that's a really good guess. So the value of the Coke brand, just the Coke brand, is about 70 billion dollars, which is really amazing. Isn't that kind of mind-boggling? Because remember, I said that doesn't include the value of the soda that they have in the warehouses, and it's not the value of their manufacturing plants, which certainly is billions and billions of dollars. It's just the value of that intangible asset, which, by the way, they have a trademark on. They have a trademark, so that's something that can be a sustainable competitive advantage. As an organization we want to have a competitive advantage that's sustainable, right? A trademark is ideally something that's sustainable. So their brand is estimated to be worth about seventy billion dollars. Now, companies have several brands, a portfolio of brands. Toyota, in their portfolio, is going to have master brands. What are their master brands? Their master brands include Scion, Toyota, and Lexus. So somebody there segmented the market, and probably took Coach B's class, and they said, oh, I get it, I get it: good, better, best. We need to have something at this price point, and we need to have something at this price point, and we need to have something at this price point. Now remember, you can't be all things to all people, so there's only so far you could stretch the Toyota brand. When they did research, when they did the brand elasticity research, they tried to determine whether or not they could introduce a Toyota at $75,000. Could they sell a Toyota at $75,000? And people said, well, you know something, if I'm gonna drop $75,000 I might as well just buy a BMW; I wouldn't pay that much for a Toyota. Now a Toyota is a great car, it's very reliable, it's very fuel-efficient, but in terms of where it's positioned, it's not positioned as being a luxury car, and its competitive set is not Mercedes and BMW. Do you agree? Toyota is a very
nice car, and it's not cheap; Toyota's cars are not inexpensive. But when consumers were asked in research, they said, if I'm gonna spend that much money, I wouldn't see Toyota as being a part of my decision set. There's other choices at seventy-five thousand dollars; if I was gonna spend that much, I'd just buy a BMW or a Mercedes. So basically their conclusion was that the Toyota brand has limitations in terms of its elasticity, its brand elasticity. What they did was introduce a new master brand. Toyota is a master brand; they introduced a new master brand, which is Lexus. Those are three master brands. So in the brand hierarchy there's a corporate brand, there's master brands, and there can be sub-brands. Examples of sub-brands for Toyota would be Corolla, as in Toyota Corolla, Camry, Avalon, and what else? Highlander. These are examples of sub-brands. Importantly, the way they're marketed is in conjunction with the master brand, because based on what I said, brands have a tremendous amount of value. So of course they wanted to leverage the Toyota brand and not have to introduce Lexus, because in the short run you're talking about investing over five hundred million dollars to create a power brand: a brand that's gonna have a high level of brand awareness and a high level of purchase intent, with favorable brand imagery and brand attributes. But it wasn't to be, so they had to introduce, based on the research, a new master brand, and they have a series of sub-brands. Now, importantly, these are marketed together, like the Toyota Highlander, because what you want to do is introduce a product that's going to leverage your master brand. If we started a company and we introduced our own Camry, how many do you think we would sell? Not many. The reason why the Camry sells so well, the reason why the Avalon sells so well, the reason why the Corolla sells so well, is because it's marketed as the
Toyota Corolla and the Toyota Avalon. So those are sub-brands, and there's three levels: the corporate brand, the master brand, and the sub-brands. Sometimes the sub-brands become so powerful and have such a high level of brand awareness that they can become a master brand. Camry has actually reached a point in its life where its brand awareness is so high and its brand imagery is so positive that basically you could describe it as a master brand, because when you look at the vehicles, on the trunk of the cars you don't see the Toyota name, you see the Toyota symbol, and Camry can stand on its own. It has enough recognition and enough awareness and enough positive attributes associated with it, because we want to have sub-brands that are strong and unique, with favorable brand associations. So they still have the symbol there, but they don't have the Toyota brand name. The symbol communicates the brand; it just doesn't include the name itself. So, questions about that? This is an example of a brand hierarchy: corporate brand, master brand, and sub-brands. And this is an example of where they're bringing to life their strategy, the strategy that they developed based on their research. Okay, so briefly we'll talk about multi-product branding. Multi-product branding: what would be a good example? What about Sony? Is Sony a good example of multi-product branding? Yeah, Sony is a good example of multi-product branding, because they sell a lot of different products: TVs and gaming consoles and what else? DVD players. And what do all those products have in common? They're all branded Sony. That's an example of multi-product branding: all their products are branded Sony. Now that's their master brand; they also have sub-brands, like what? What are some of the Sony sub-brands? What else? PlayStation. Isn't PlayStation a sub-brand of Sony? Isn't it the Sony PlayStation? Come on,
gamers, where are the gamers here? No gamers up in here? No? No gamers? Okay, identify yourself as a gamer. So some of the Sony sub-brands include PlayStation, Walkman, what else? Any other sub-brands that you remember? Bravia. Which one? Xperia. What else? The Sony VAIO. Yeah, they're not selling that anymore, but for a long time the VAIO was one of their sub-brands, a very nice laptop with an i7 quad-core processor, a 1-terabyte hard drive, and sixteen gigabytes of RAM. Very nice. But yeah, they're not selling that anymore; you're right, though, that was certainly one of their sub-brands. So that's a multi-product branding strategy. What about multi-branding? What is that? What would be a good example? Yes, go ahead, James. Okay, so let's take Procter & Gamble. Procter & Gamble Corporation is, let's say, the corporate brand. Procter & Gamble, that's P&G, Procter & Gamble Corporation. What they did is different from what Sony did. Sony has a multi-product branding strategy: Sony Corporation, their master brand is Sony, and what do they do? They brand all their products Sony. Everything is Sony: Sony gaming consoles, Sony TVs, all their products. They sell a lot of different electronics, and all their products are branded Sony. That's one strategy. Procter & Gamble has a different strategy. Procter & Gamble has a large portfolio of master brands. Unlike Sony, whose main master brand is Sony, Procter & Gamble has a multi-branding strategy, and for every product type they have a different brand. They sell toilet tissue; it's branded Charmin. They sell mouthwash; is that branded Charmin? No, that's branded Scope. They sell toothpaste; is that branded Procter & Gamble, is that branded Charmin? No, that's branded Crest. They sell laundry detergent; that's branded Tide. None of their products are branded Procter & Gamble. Now, of course, Procter & Gamble is their corporate brand,
the parent brand, and on the back of the package |
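The three-level hierarchy and the Sony-versus-P&G contrast in this lecture can be written down as a small data structure. The sketch below is purely illustrative: the nested-dict layout and the `marketed_name` helper are my own assumptions, not an industry schema; only the brand names come from the lecture.

```python
# Hypothetical sketch of the three-level brand hierarchy from the lecture:
# corporate brand -> master brands -> sub-brands. The brands listed are the
# examples named in class; the dict layout itself is just an illustration.
portfolios = {
    "Toyota Motor Sales USA": {                 # corporate (parent) brand
        "Scion":  [],
        "Toyota": ["Corolla", "Camry", "Avalon", "Highlander"],
        "Lexus":  [],
    },
    "Sony": {                                   # multi-product branding:
        "Sony": ["PlayStation", "Walkman",      # one master brand wrapped
                 "Bravia", "Xperia", "VAIO"],   # around every product
    },
    "Procter & Gamble": {                       # multi-branding: a different
        "Charmin": [], "Scope": [],             # master brand per product
        "Crest": [], "Tide": [],                # type, none branded "P&G"
    },
}

def marketed_name(master: str, sub: str) -> str:
    """Sub-brands are marketed together with the master brand
    (e.g. 'Toyota Corolla') to leverage the master brand's equity."""
    return f"{master} {sub}"

print(marketed_name("Toyota", "Corolla"))  # Toyota Corolla
```

The shape of each entry makes the two strategies visible at a glance: Sony's corporate name and single master brand coincide, while P&G's master brands never mention the corporate name.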
Marketing_Basics_Prof_Myles_Bassell | Marketing_10.txt | [Music] That's not, like, you know, people say, okay, wait for it, wait for it, wait for it. No, you can't wait for that to happen. We need to work hard to manage the marketing mix so that sales continue to grow, so that the number of units that we sell increases. He's in the biology class; that's why he thought this was the biology lab, but it's not. Now, can we do this? Can we manage the product life cycle? This is extremely important. The product life cycle, again, not because this is what is inevitable; it's because this is so profound that it gives us a foreshadowing of what can happen if we don't manage the marketing mix effectively. Product, price, place, promotion: those are four controllable factors. We can control them. If the country is in a recession, that's out of our control, for example, but the features and benefits of our product are within our control. That's something that we could change; we could continue to add more features and benefits to our products. Is that true? We can increase the storage capacity on our product from 16 gigabytes to 32 gigabytes to 64 gigabytes to 128 gigabytes. Is that going to help us to sell more of our product? Is that gonna help us to grow our business? Will our sales increase by modifying the product? So is modifying the product a good way to keep sales growing? We could modify the product, add features. That's one feature that a technology company like Apple, with their iPhone or their iPad, could change to get people to keep buying the product. That's one feature. What would be some other features of that product that they could change? We said you could change the storage capacity. What else? A longer battery life. So instead of a two-hour battery it could be
four hours, it could be five hours, it could be six hours, it could be eight hours. That's an important feature; that's something that is of interest to consumers, the length of the battery life. How long is the battery life? So 10 hours is even better, 15 hours. Now, this is a durable product, not a consumable product. You have to make a distinction between two types of products: durable and consumable. An iPhone is an example of a durable product. What's a great example of a consumable product? Who said that? Who said it? You? What is your name? Kenny. Kenny said orange juice. Can you believe it? What would give him that crazy idea? Yes, orange juice is a consumable product. Soda is a consumable product. Toothpaste is a consumable product, and shampoo. Those are all examples of consumables, and there's lots more. But this product we're talking about is durable. So the battery life: they could extend the battery life. What else? Change the design. Absolutely, change the design. That's very common in the automotive industry, isn't it? That's also a durable product, and how do they keep sales growing? One way is that they modify the product and they change the design. Some modifications to the design are minimal, and some of them are very substantial; they completely redesign the car, and it's a big investment. It's a big investment in the tooling to be able to make that change, and it's a big investment in design as well, and in engineering. As we've seen, it's not as easy as it looks to design and manufacture a car, believe me, and that's why there's so many recalls: because they can't get it right the first time. So it's not so easy, and you can imagine there's a little bit of apprehension on the part of the car manufacturers when you talk about redesigning the car. So they do it anyway. Sometimes what they try to do is keep the same chassis but modify the exterior cosmetic components,
so we can modify the product. What else? Right, add new features, like what? What kind of features? GPS, right. That's pretty much standard now, GPS, do you guys agree? But think about GPS before it was first introduced. Now this is an example of the adoption curve model, which is also very important, and it's also a bell curve. The GPS was originally introduced into higher-end model cars, more expensive cars. Then what happens? In the adoption curve model you have your innovators, your early adopters, your early majority, your late majority, and your non-adopters. So write this down. The stages of the adoption curve model include innovators; the first stage is innovators. The second stage is early adopters. The third stage is the early majority, then the late majority, and then there's non-adopters. Sometimes the term that's used for non-adopters is laggards, L-A-G-G-A-R-D-S, laggards. What does that mean? What that means is non-adopters. So based on this model, it suggests that two and a half percent of the market... so what we're looking at is, we're trying to understand the rate of adoption. We identified that in a product life cycle there are several stages: introduction, growth, maturity, decline, obsolescence, and in some cases revitalization. So you think the VCR is obsolete? What about cassette tapes? Are cassette tapes obsolete? You say, well, products become obsolete. Yes, products become obsolete. So cassette tapes, and what about 8-track tapes? Do you even know what an 8-track tape is? No? You don't? So 8-track tapes, cassette tapes, those products are absolutely obsolete. Now, a product could be in maturity for a long time. For example, a car: would you consider a car to be in the maturity stage? Certainly the number of cars being sold is increasing, but overall we could say that the category in the United States is mature. That doesn't mean they're not
selling more cars, but they're not experiencing 20, 30, 40, 50 percent growth like they once had. Or what about the beverage industry in the United States? The beverage industry is over 200 billion dollars at retail. 60% of that is alcohol, I kid you not, and the rest is non-alcoholic beverages like water, orange juice, milk. The beverage industry is only growing about 3 percent per year; that's an example of a mature category. So in terms of market attractiveness, we think about the level of rivalry, the threat of substitutes, buyer power, supplier power. If the industry, if the category, is mature, then the level of rivalry is very intense. Why is that? Because if you can't sell to more customers, if there aren't more customers coming into the market, then the way that you're gonna increase your sales is by stealing your competitors' customers. So if sales aren't dramatically growing overall, if you don't see a significant increase in the number of people drinking Minute Maid orange juice, or actually orange juice overall, then how are we going to increase the sales of Minute Maid orange juice? Well, we've got to get people who drink Tropicana orange juice to drink Minute Maid instead. So they're very aware of the challenges of managing the product life cycle; they understand that they need to manage the product life cycle, and that's why you'll see that orange juice is marketed as having calcium. On the packaging it says with calcium, and with vitamin A, and with vitamin D. Why would they mention that? Why is that important? Why all of a sudden is it so important to have that on the packaging? What's significant about that? Why not say vitamin E or whatever? Why is it important: A and D and calcium? |
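The adoption curve this lecture keeps returning to is usually drawn as bands of a bell curve, with the group cutoffs at whole standard deviations from the mean adoption time. The lecture only quotes the innovators' "two and a half percent"; the other shares in the sketch below are the conventional diffusion-of-innovations roundings, computed here from the normal distribution rather than taken from the class.

```python
from statistics import NormalDist

# Sketch of the adoption ("bell") curve: each adopter group is a band of a
# standard normal distribution, measured in standard deviations from the
# mean adoption time. Band names follow the lecture; the cutoffs are the
# usual textbook convention, an assumption on my part.
nd = NormalDist()  # standard normal: mean 0, sigma 1
inf = float("inf")
bands = [
    ("innovators",     -inf, -2.0),  # first ~2.5% to buy
    ("early adopters", -2.0, -1.0),  # next ~13.5%
    ("early majority", -1.0,  0.0),  # ~34%
    ("late majority",   0.0,  1.0),  # ~34%
    ("laggards",        1.0,  inf),  # last ~16%, incl. non-adopters
]
shares = {name: nd.cdf(hi) - nd.cdf(lo) for name, lo, hi in bands}
for name, share in shares.items():
    print(f"{name:14s} {share:5.1%}")
```

Running this gives roughly 2.3%, 13.6%, 34.1%, 34.1%, and 15.9%, which the marketing literature rounds to the familiar 2.5 / 13.5 / 34 / 34 / 16 split; the five shares necessarily sum to 100% of the market.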
Marketing_Basics_Prof_Myles_Bassell | Marketing_1.txt | [Music] Because your success is my number one priority. Tell your neighbor, our success is his number one priority. Look at your neighbor and say, our success is his number one priority. That's good, good job. So I'm hoping everybody gets an A in this class. You can do it. Yes, you can. You don't believe me? Watch me. Watch me. So, marketing. Let's talk about marketing some more. Marketing. I told you what we're going to talk about; now I'm going to tell you, and then I'm going to tell you what I told you. Is that good? Is that a good approach? This way you don't need to ask me three times; I'm just gonna tell you three times. It's better that way, right? Hope is not a plan; I have a plan for your success. All right, hang in there. Marketing. So write this down: marketing. You guys got it? Okay, come on, stay with me here. No sleeping, no Facebooking. Who's Facebooking now? Anybody? No text messaging, all right? No Periscoping. Marketing is about creating... so marketing is about four things: creating, communicating, delivering, and exchanging value. Marketing is about creating, communicating, delivering, and exchanging value. That's in chapter one. So who can tell me what marketing is? Somebody? Good, what's your name? Chingus? Go ahead, tell us, what is marketing? I don't know, but I know it's about creating. Right, excellent, good job. So marketing is about creating, communicating, delivering, and exchanging value. Value: what is value? Value is a function of price, quality, and benefits. So value is a function of the price, the quality, and the benefits. Let's make sure that we don't think that value means low price. For something to be a good value, it doesn't need to be a low price. It could be a high price, but it's a very high quality and it has a lot of benefits. Do you agree? Who could explain that further? Good, tell us your name. Albert? Alvin? Alvin. I'm getting bad coaching here. Alvin, go ahead, tell us.
Luxury items? Yes, luxury items. So if you purchase a 55-inch Samsung 4K high-definition LED monitor that has smart capability and 3D for $2,500, you might very well consider that to be a good value. Now you're saying, wait, Coach, hold on a second. I know that I could go to Walmart and get a large flat-panel LED monitor for only $599. So I could get a 50-inch or 55-inch flat-panel monitor at Walmart for $600. Why would I pay $2,500? Isn't that a better value? What Alvin is suggesting is that it is a good value at twenty-five hundred dollars because it's 4K, high-definition 4K, not 1080p. That's so 2008; you're so 2000-late. That's the old technology, 1080p. We're rocking 4K now. 4K, that's the new high-definition monitor that's on the market. So you're going to get 4K technology and 3D capability and smart capability, so it's going to be a smart TV, and so it's a good value at $2,500. But again, it's not the lowest price. Value does not mean low price. Questions about that? So value does not mean low price; that's a good example. Anybody have another example of where something might be very expensive but not the lowest price? Yes, go ahead, tell us your name. Teresa. Go ahead. Would you consider that clothing? Yes, go ahead. So Teresa is saying, people say buying a pair of jeans at Zara is expensive, because they're like $65, but they last longer than cheaper jeans. So you could buy a pair of jeans at Old Navy for $20, or you could buy, and actually I've done this, you could buy for $300 a pair of torn jeans, a pair of torn jeans at Diesel. So you could buy a pair of jeans for twenty dollars at Old Navy, or you could buy a pair of three-hundred-dollar jeans that are already torn. You gotta love that, guys: already torn, for $300, at the Diesel store. So why is $300 for a pair of Diesel jeans considered to be a good value? Quality. Now, the way that we communicate quality, because we said that marketing is about creating, communicating, delivering, and
exchanging value, the way that we communicate the value is through the brand. Now the brand is what's wrapped around the product. All products in a given category have the same generic functionality. So for example, cars: all cars provide the same generic functionality, which is transportation. Am I right? How many people agree with that? Raise your hand. Yes, so cars provide the same generic functionality, transportation. What makes one car different from another is the brand. Every car is wrapped in a brand; every car is wrapped in a different brand. Give me some names of cars. What are some car brands? Let's see, go ahead, raise your hand. Teresa said Toyota. Moving forward, Toyota, that's their tagline. The problem a few years ago is their cars went forward whether you wanted them to or not, so I think they kind of exceeded their brand promise; that's not what people had expected. Go ahead, what else? So we have Toyota, Infiniti, Mercedes-Benz, Audi, BMW, Chevy, Lamborghini, Jaguar, Range Rover, Tesla, G, Porsche. All of those brands are what makes each of those products unique, each of those cars unique. What makes one car unique from the other is the brand. And we're going to talk more about perceptual maps, where our brand is positioned in the marketplace relative to our competitors. That's one of the things that companies spend a lot of money studying in research: the perceptions that consumers have of their brand. So you could look at, let's say, the level of quality versus the price. I want to understand whether or not customers, or potential customers, or I'll use the word target market, which is discussed in chapter one... Target market is different from target audience. The target market is who we want to buy our product or service. Write that down: the target market is who we want to buy our product or service. That's the target market, who we want to buy our product or service. But that's not the same as the target audience. The target audience is who
we want to reach with our advertising campaign. The target audience is usually a subset of the target market. The target market could be, for example, all men: that's our target market, who we want to buy our product or service. But our target audience might be grouped by age, so we're going to have a different commercial, for example, for men that are between 18 and 29, and those that are from 30 to 39, and those that are 40 to 49. Does that make sense? You want to have a commercial that's going to resonate with your target audience. Usually you're not going to be able to have one commercial that's going to resonate with everybody in your target audience, or more specifically with your target market. He said all men. So when you're showing a commercial, do you think that those men seeing the commercial, between the ages of 18 and 29, want to see somebody in the commercial who's 60 or 65? That's probably not something that's going to resonate with them, right? Unless maybe if it's Coach... no. If it's Bruce Willis, let's say, maybe. If it's Bruce Willis or Sylvester Stallone or Hulk Hogan, maybe they might still find the commercial of interest, and we will be able to get their attention, their interest, create desire, and ultimately action. So you guys know who Hulk Hogan is? You do? What about John Cena? Yeah, you know what they say about John Cena, right? He's no good. Cena sucks, right? Yes? No? You guys don't think so? Okay. What about "out of nowhere"? Who's that? You don't know who "out of nowhere" is? "Out of nowhere," that's his famous move. What, you guys are not wrestling fans? You don't know Randy Orton? That's his thing, the RKO, yes, out of nowhere. What about... hello? You don't know who that is? Oh, you guys haven't been to Madison Square Garden. If you were there I would have seen you. It's like better than a Broadway show; they have fireworks on the stage and everything. It's amazing. So the brand is what distinguishes one product
from another. We could look at that through market research to understand the perceptions that consumers have of our brand, importantly, relative to other brands. So not just where we are on the perceptual map. The value of perceptual mapping, of doing that type of market research, is that we could see where we're positioned in the market based on certain dimensions, quality, price, innovation, relative to our competitors. Remember, it's the perception. It's not a question of whether or not our product is expensive or whether it's a high quality. The issue is, what is the perception of the target market, the people that we want to buy our product? What is the perception? Do they perceive our product as being high quality? Do they perceive our product as being innovative? It's important to know that, because for us in headquarters just to be sitting there talking amongst ourselves saying, yes, our product is high quality... well, that's interesting, but if our target market doesn't think it's high quality, then we have a huge problem. A huge problem. However, we could solve that problem. How? How can we solve that problem, if we find out that the target market actually believes that our product is a low quality versus a high quality? What do you think? Go ahead, tell us your name. Chanel. Go ahead. Get someone famous? So, advertise. Chanel is saying advertise. When we talk about creating and communicating, communicating is really code for advertising. We're gonna advertise. We could change the perceptions that consumers have through marketing communications; Chanel said a good example would be advertising. So the key message in our advertising is going to talk about quality, now that we found out that although we worked so hard to make our product a high quality, the target market perceives our product as being a low quality. Instead of going home crying, what we do is what Chanel said: develop a compelling advertising campaign to communicate that our product is of a high
quality, and include pillars of support. When we advertise, we need to have pillars of support. That means that we need to have proof points. It's not enough just to say our product is a high quality; we have to support that, we have to provide proof, in our commercials, in our print ads, on our website. So think of marketing another way. Here's another way to think about marketing, because right now, this is what we're going to talk about the entire semester, so I'm trying to give you an overview of what marketing is. We said that marketing is about creating, then communicating. Does that make sense? The order is important. So when we say marketing, marketing is about creating, communicating, delivering, and exchanging value. From 30,000 feet, if you will, that's a broad overview of what marketing is. Another thing that we could say about marketing is that marketing includes five activities. What's the first activity? The first marketing activity, write this down, the first marketing activity is to identify, to identify an unmet need. So we can describe marketing as being comprised of several activities, and the first thing is to identify an unmet need that consumers have. When we talk about marketing, the first activity is to identify, to determine, a need that is not being met in the market. So it's a need that's not being met. What would be a good example? How about shampoo. I know you're thinking, what could this guy possibly know about shampoo? I know, right? But an unmet need is a shampoo that is safe for hair that's dry, or that's curly, oily, or permed. So how do we identify the need? Guess, guess. Oh, you're not... they're not good at guessing. That's not good for exam day; you've got to be good guessers. Supply and demand? Right, the way we're going to determine the need is through marketing research. We're going to do marketing research; that's how we're going to identify what the
needs are the first step is type is to conduct marketing research we can do qualitative research like focus groups where we have 12 people in a room and with a moderator we ask them a series of questions like what are some of the problems that they have when cooking or baking and what do you think they're going to say food sticks to the pot food sticks to the pan for example but remember when we do focus groups there's only 10 or 12 people in each focus group when we do focus groups we do four focus groups at a time so we do two focus groups in two different cities but we still only have 48 people that participated that's not a lot that's qualitative research we don't have anything statistically significant all we have is the input of 48 people that cost fifty thousand dollars that's a real number fifty thousand dollars to have a lot of experience in marketing fifty thousand dollars you wanna do focus groups see me after class okay so and with your checkbook all right so it's fifty thousand dollars that's tremendously valuable to do focus groups because we're going to get a tremendous amount of insight from those for example who bake those who cook those who um who use tablets or smart tvs whatever it is that we're researching it's very helpful to do that because what that's going to do is inform our quantitative research a form of quantitative research is what we're doing this semester which is a questionnaire so a questionnaire our goal in industry is to get a representative random sample of how many a million people so in the united states if we're going to research the needs and wants of people that bake and cook how many people do we need to have complete the questionnaire what do you think million now there's about 350 million people that live in the united states how many of them do we need to complete that questionnaire now if 350 million people complete the questionnaire that's called the what census a census is when a hundred percent of the population 
participates in the research only the government does that they're really the only one that could afford to do that what we're trying to do is get a representative random sample from that population so it doesn't need to be 350 million how many does it need to be 349 million 300 million 200 million what do you think a quarter of what's the number how many is that so you want me to do the math so what we're saying is 85 million i think it has to be a balance sample like from every type of person that we take somebody up out of this group and from different states maybe and let me do this research with these people absolutely so we have to have males and females people of different age groups peoples in their 20s 30s 40s 50s 60s people of different ethnicities people of different religions so how many people do we need to participate so there's no definite answer but i'll tell you from my experience about a thousand people so if you have a representative random sample that's going to include people of different genders people of different age groups people of different ethnicities people of different religions a thousand can be statistically significant a thousand to fifteen hundred but in most categories in the united states it doesn't need to be more than that so it definitely doesn't need to be a million people all right so for marketing research we need something that's statistically significant in the united states in most categories it really doesn't need to be more than 1500 and how much is that going to cost a hundred and fifty thousand dollars to do more intercept in multiple cities and get approximately 1500 respondents now when we do focus groups we also do multiple rounds of focus groups because each round is iterative so we learn from the first round of focus groups and then we incorporate that in the next round before we do the quantitative research so we could do certainly qualitative research i recommend that do the focus groups then we're going to do 
quantitative research before that now both of those focus groups and questionnaires qualitative and quantitative research as i describe that |
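The instructor's point above, that a representative random sample of roughly 1,000 to 1,500 respondents can be statistically significant regardless of a 350-million population, can be checked with the standard survey margin-of-error formula. A minimal sketch in Python, assuming the usual 95% confidence z-value of 1.96 and the worst-case proportion p = 0.5 (these are textbook defaults, not numbers from the lecture):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    Note the formula does not involve the population size at all, which is
    why ~1,000 respondents can represent the whole United States.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 1500):
    print(f"n={n}: about ±{margin_of_error(n):.1%}")
```

For n = 1,000 this comes out to roughly ±3.1 percentage points, and for n = 1,500 roughly ±2.5; the shrinking returns past 1,500 are the reason larger samples rarely justify their cost.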
Marketing_Basics_Prof_Myles_Bassell | 6_of_20_Marketing_Basics.txt | camera, action, all right, here we go. Who's gonna read the first question, about the strategic business unit level? All right, friends. You were just trying to see if I was paying attention, right? Yeah, all right, go ahead. "This is the business unit level, at which managers set a more specific strategic direction for their businesses." Right, so the answer is B, on page 28, and remember the acronym is SBU. We talked a lot about SBUs, strategic business units, and remember we said we start with the corporate level, then we go down to the business unit level, and then we focus on the functional level. The strategic business unit level is very important because we're gonna have shared objectives in an organization. The corporate plan is going to outline the vision, the mission, the values, some key strategies for the organization, and the strategic business units, and we could have three, five, we could have twenty strategic business units, are each going to develop a plan that brings the strategies and goals and objectives addressed in the corporate plan to life. We have to have shared objectives within the organization, because the objectives and goals and values that the senior executives talk about in the corporate plan, in and of itself, are not something that they operationalize; it's something that the strategic business units take a major role in operationalizing. So it's not enough to say that you're going to be a leader. Take as an example what we talked about before, being the market share leader in electronic high school educational devices: all right, if that's what you have as a goal, that's fine, but how does that become a reality? It's up to the individual strategic business units to make that a reality. All right, next question, number two. Jason: "Marketing and finance are generally..." Yes, so organizations, we said, have three plans, and one of them is the functional plans, such as marketing and finance; generally we refer to those as departments. You have a marketing department, a finance department, a quality department, a human resource department. Those departments also have to work to help achieve the shared objectives of the organization. So remember we said that you have the corporate plan, the business plan, and the functional plan operating simultaneously; it's not one or the other, the company is going to have all three. Questions about that? Does that make sense? All right, the third question talks about stakeholders, and you remember we talked about the difference between stakeholders and shareholders. Alec, tell us. Yes, so stakeholders include... shareholders are an example of stakeholders, right, and suppliers, vendors, customers, for example, those are all stakeholders. So don't mix up the terms stakeholder and stockholder or shareholder. Yes, stockholders and shareholders, they are stakeholders; they have a vested interest, like Alan is saying, in the performance of the company. And Johnson & Johnson, for example, this is something that they take very seriously and talk about in their credo. They talk about the obligations that they have to their stakeholders, and as a company that focuses on delivering health-care-related products and services, they identify some of their stakeholders as being doctors, nurses, the community, their customers and their patients, as well as their suppliers, etc. All of those are examples of stakeholders, including the shareholders, and sometimes we use the word stockholders. Okay, the first time we've seen Gui. Gui, what do you want to tell us? "People who own stock in the company." Yeah, so sometimes we use the word stock, sometimes we use the word shares in a company. If it's a corporation, then in order to raise capital you sell shares, you sell stock in the company to get capital, and then those individuals have shares in the organization. That's different from if our capital structure includes debt, which means that we borrowed; say we borrow 50 million dollars from Jason. We pay interest on the 50 million dollars and we have to pay back the 50 million dollars in a specified time period. Shareholders, you don't have to pay them back, we don't pay interest, and in fact you don't even have to pay dividends. So it is definitely something that we have to consider when we're trying to raise capital for the organization: whether we're gonna issue stock, or we're going to issue debt, or we're gonna be financed 50% stock, which is referred to as equity, and 50% debt. Why is that an issue? What's the difference? What happens if you're 80% debt, meaning 80% of your financing is debt; why would that be an issue? Right, you have to pay it back, and your investors might be concerned that you won't be able to pay it back, and we should be concerned about that as well when we issue debt: are we going to be able to service the debt, which means are we going to be able to make the interest payments, and ultimately are we going to be able to pay back the principal, the money that we borrowed? So very often companies have a capital structure that's partially debt and partially equity, and in fact in accounting we look at the debt-to-equity ratio; that's how important it is as a metric in evaluating the performance of a company: what percentage is equity and what percentage is debt. Jacob, number four. Right, remember we talked about that on page 29. I said the book suggests that vision and mission are terms used interchangeably, suggesting that they're the same thing, but what I cautioned us is to keep in mind that the original intent of the vision was to indicate where we want or expect the company to be in the future, whereas the mission defines where we operate now; it's meant to define our business as it is today. The vision has somewhat of an inspirational or aspirational component, in that it's forward-looking. The vision talks about, for example, where we want to be in five years. Today, for example, our mission is: a distributor of educational devices to high schools in the United States. But our vision is to be the market share leader worldwide, so not just the United States anymore, but the market share leader worldwide of educational devices at all educational levels, within the next five years. So you see, we certainly could argue that there is a difference between vision and mission; although sometimes people use them interchangeably, really the intent was that the vision is forward-looking, where we want to be in the future, and that's talked about on page 29. What about number 5? Right, try saying that three times fast. Yes, so the best answer is E: a mission refers to a statement of the organization's function in society, often identifying its customers, its markets, its products and technologies. So the example that I gave, do you think that's a good one, a good example of a mission? We identified for this educational device company the product, and the market we said is high school students, and the customer, the market, we specifically said is in the United States. So that's one example of a mission. Certainly you could go to pretty much any company and they'll have a mission statement that defines their organization, and importantly, the mission is not five pages. It doesn't mean that you can't have a five-page document that talks about your business, but the mission is deliberately meant to be short, so that everybody in the organization can internalize it. So if anybody calls, for example, whoever picks up the phone, whether it's somebody who's a senior executive, or a manager or supervisor, or somebody who works in the mailroom, whatever their role is within the organization, they should be able to articulate the mission of the company, in just one sentence, two sentences: basically, what is it that the company does. That's what the mission is. What is it that you guys do here? Everybody should be able to explain that. Now, some of you circled C, but really the best answer is E. Why? Well, what C talks about, in terms of dictating behavior, sounds more like a code of ethics or the values of the company, some sort of code of conduct. "It dictates the behavior of all its employees": that's not a mission. That might be one of your goals, to have all employees behave in an ethical manner, but something that very specifically dictates the behavior of employees is some sort of code of conduct or code of ethics; it says you can't do this, you can't accept gifts from suppliers, and so on, and gives examples of specific behavior. Number six talks about marketing metrics. All right, so what about number six, who's gonna tell us? Metrics, what's a metric? Yeah, it's a measure. So we're always looking at marketing metrics; sometimes we use the term indicators. We're trying to determine how we're doing as an organization, trying to evaluate our performance. What do you guys think? Wait a minute, something easy, right? Hey, I'll tell you this much: now I'm gonna give you all the answers to the exam. These are all the answers to the exam: A, B, C, D, E. You have my word, those are all the answers. Most of it is gonna be multiple guess anyhow, and it's different
from the exam last semester; hopefully this one is easier than last semester. So the best answer is... do we want to vote on this? What do you think? D? Really, D? Let me see. D, that's interesting: "for product performance based upon input of five members on a cross-functional team." Yeah, I see what you're thinking there, but that's not the best answer. The best answer is A: a measure of the quantitative value or trend of a marketing activity or result. Now what does that mean? A marketing metric, that's talked about on page 33, is a measure of the quantitative value or trend of a marketing activity or result. Okay, Joseph, tell us. "Basically how well their strategy is working." Yeah. And so what would be a good example of a marketing metric? Data? But what is the data gonna tell us, what is the metric? Aaron said measure; what is the measure? "How much product was sold, or people's awareness." Okay, so how much product, that would be how many units of the product you sold. So did you sell 50,000 or did you sell 150,000? And did it go up or down, like you're saying, compared to last year or last period? Can you measure awareness for certain products, to see if your marketing is working well? Absolutely. You want to do that every quarter, and it's expensive to do, but it's important to understand the level of brand awareness and to do a branding study, because if you're spending 50 million dollars a year on advertising, and by the way that's not even a lot, there are some companies that, for their entire organization and their portfolio of brands, spend five hundred million dollars a year; but let's say for a given brand you spend fifty million dollars, how do we know... and that's what we're talking about when we say the measure of the quantitative value as it relates to a marketing activity. So if we spend fifty million dollars on advertising, how do we know if it's effective? One of the things that we could do, Johnson is saying, is a branding study, to try to measure the level of awareness. Has the level of awareness gone up as a result? I would like to think so, because at a minimum, our goal for every advertising campaign... now we could have several objectives for an advertising campaign, several goals we want to achieve, but at a minimum we want to increase the level of brand awareness. You don't even need to put that in your advertising brief if you're working with an advertising agency; that is unspoken, you don't even need to say it. I would still put it in, but that's the minimum requirement, to achieve a higher level of brand awareness. So absolutely, that would be worthwhile, and to see the change, like Jason is suggesting, over time, whether it's the number of units sold, the dollar sales, the level of brand awareness. Of course it's meaningful to know if your level of brand awareness is 43 percent or 83 percent, that's important to know, but it's also important to know, well, we were at 43% and now we're at 55%; to monitor the change, and presumably that's a result of the advertising. So we have dollar sales, unit sales, level of brand awareness; what else? What about market share? Dollar sales and unit sales just tell us if we're achieving the goals that we set for sales, but they don't say anything about how we're doing relative to the competition. Now that gets interesting, because we might be excited that we sold 50,000 units of the product; like, we've done it, yes, fifty thousand containers of orange juice, you think we're the best team ever, but then you find out that the competition sold a hundred and fifty thousand containers of orange juice. Now that kind of puts it into perspective, you see what I'm saying, and that's what market share is doing: it's saying, for a given category, what percentage of the sales, and we could look at it in terms of dollars and units, were carrying our brand name, and what percentage of the sales were carrying the brand names of other companies. So we might have twenty percent market share, let's say, and another company might have thirty-two percent, and another company twelve percent, and another company twelve percent, and another company ten percent. How much does that add up to? That's 86, somewhere around there, and then that gives us a sense of perspective. So that's also an important marketing metric, a way to measure how we're doing. Now, when you have access to, when you purchase, syndicated point-of-sale data, then you could look at it by category, you could look at it by channel of distribution. So you could look at, well, how many units did we sell in grocery stores, what percentage of units sold in grocery stores were ours versus the competition's. For a given category in a given channel, it's going to cost approximately seventy-five thousand dollars to get that data. And if you want to know what that data is that we're buying: you know when you go into a store and you purchase a group of items and they scan them, it goes beep, beep, beep when you check out. I don't know how somebody could be a cashier, I mean, do that. Allen? Yeah, I would lose my mind, that's just impossible, I mean that would just drive me crazy. What about you guys, could you do that, anybody? Mattie? "I worked in a store, but we didn't have that; that was, you know, the Stone Age, we didn't have scanners, so you just rang them up, $2.99." Right, they may actually have earplugs; people might start to use that, because I really don't think I could do that, I think I would genuinely go insane hearing that. But that's where they're getting the data from. So what happens is the market research firm contracts with these retailers to get all their data. They want to know how many Snickers bars a particular retailer sold in all their stores, and then they combine that data with how many Snickers bars were sold at other retailers; they combine all the data on how many Snickers bars were sold at all drug stores. So, Zach, we could look at that and say, well, what percentage of candy bars sold in drug stores were Snickers; but what we can't find out from that data is by retailer, because, if you were the retailer, would you want to sell that information, would you want somebody to know how many Snickers bars you were selling, if you were the executive at CVS or Walgreens? So what the market research firm, which is basically an independent third party, does is combine the data. The company, that's presumably in the candy industry or wants to be in the candy industry, buys that data; they don't know it by specific retailer. They know who participates in the panel, so they know which retailers, in this case drug stores, are in the panel, but you can't see retailer-specific data. But definitely you could look at it by channel, you could look at share across several channels combined. And remember we talked about Oreo, that's what the issue was: they were looking at the data and they said that Oreo is America's favorite cookie, which means they're the market share leader based on that data. But then their competitors went back and said, no, wait a minute, that's not really true; what about in this channel, and what about in this channel? You're not the market share leader; we sell more cookies, all brands, in that channel than you do. So actually everything happens for a reason; I think it really worked out better for them, because now they have as their tagline "milk's favorite cookie," for which you
don't really need pillars of support, you don't need to have proof points, which is what they were lacking with their prior tagline. Their competitors came back and said, where are your proof points, what are your pillars of support, how do you justify saying you're America's favorite cookie? And so they had to transition from that tagline to "milk's favorite cookie." Questions about marketing metrics? Does that make sense? So there's a variety of different metrics that we could use, very important in determining our performance as an organization: how do we know that we're doing well, not just relative to our own goals and objectives but relative to the market, relative to our competitors? That's why I gave that example: we're excited about selling fifty thousand cartons of orange juice, but that's somewhat less impressive when we realize that our competitor sold a hundred and fifty thousand cartons of orange juice. Questions? That's on page 33. Number seven, who's going to take number seven? All right, Jason, go ahead. "Step one in the planning phase of the strategic marketing process is the situation analysis." Right, absolutely, so A is the best answer, the situation analysis. Step two is the market-product focus, which is B, but that's not the first step. The way it's listed here is A, B, C: the first step is the situation analysis, the second step is the market-product focus, and the third step is the marketing program. So A is step one, the situation analysis. And what is the situation analysis, why is that step one, why is that so important, what are we going to learn from it? Joseph? "Analyze the need." Yeah, you want to identify that unmet need, that's certainly part of it, absolutely. All right, the next question talks about SWOT, which is an aspect of portfolio analysis, right. Absolutely, any questions about that? SWOT is an acronym for strengths, weaknesses, opportunities and threats. The first two components, strengths and weaknesses, are an internal focus, and opportunities and threats are an external focus. We want to understand our strengths as an organization, but importantly we want to understand our weaknesses, and the reason we want to understand our weaknesses is... why? Yeah, so we could turn our weaknesses into strengths. So it's not just interesting; if it's interesting, I'm glad, but it's got to be more than interesting, you want it to be actionable. We want to know what our weaknesses are not just because it's interesting, but to be able to turn those weaknesses into strengths, to close that gap; once we know what our weaknesses are, we can fix them. Very often when you go on an interview they'll ask you, what are your strengths, what are your weaknesses? So you want to say something about your weaknesses that is correctable, and say, well, I'm aware that one of my weaknesses is time management, and I'm taking a class in time management. But don't say, one of my weaknesses is, you know, I drink a bottle of vodka every week; that's probably, yeah, I don't think that's gonna be received very well, unless you follow it quickly by saying, however, being aware of this problem, I am in rehab and I believe that I will discontinue alcohol use in the very near future. But certainly it's very common for organizations to ask you what your strengths and weaknesses are. So as an organization we want to understand our strengths and weaknesses: our strengths we want to leverage, our weaknesses we want to turn into strengths or correct, and we want to understand where the opportunities and the threats are in the marketplace. And we'll do a SWOT analysis for ourselves as an organization, but we'll also do a SWOT analysis of our competitors. How is that helpful, Martin, to do a SWOT analysis of others? Yeah, we want to know what their weaknesses are. Now it may not be that easy to determine what their weaknesses are, but we certainly want to try to identify and understand where they're strong, where they're weak, what the opportunities are for them, and what the threats in the marketplace are for their organization. So it's a very helpful model, and we could present it basically on one page, as a matrix. In one line, usually a sentence, list each strength; if we have five strengths we can list those in five lines, and so our entire analysis could be on one page. Now we could supplement that with supporting documents, but what's very compelling when you submit it to your manager or an executive is that on one page they can view the matrix and digest the information, which is something that you should try to do when you're sharing information with your manager or executives in an organization: try to be concise. You always need to have an executive summary; it's okay if you submit a document that's 150 pages, but provide an executive summary of two or three pages covering what the entire report is about and your conclusions or recommendations. So SWOT is really very compelling. In some ways you might think it's sort of simplistic, strengths, weaknesses, opportunities, threats, but it's actually very insightful and can be very profound depending on the level of analysis that we do, and that's talked about on page forty, which is part of the portfolio analysis. All right, next question, number nine. In the 1980s, a lapse in production quality and an increase in Japanese imports drove the Harley-Davidson motorcycle company to the brink of bankruptcy. The company's share of the U.S.
super-heavyweight market, that is, motorcycles with engine capacity of 800 cubic centimeters or more, collapsed from more than 40% in the mid-1970s to only 23% in 1983. So they went from having market share of more than 40 percent to only 23 percent in about 10 years; they lost about half their market share. However, by 1989 Harley-Davidson controlled some 65% of the U.S. market, and both in the United States and in overseas markets the company wasn't able to meet demand for years; demand significantly exceeded supply in this case. From a marketing perspective, what was the most likely first step in Harley-Davidson's resurgence? So when they realized that their market share went from 40 percent to 20-something percent, and then within five to six years went back up very dramatically to about 65 percent, what do you think enabled them, in part, to achieve that type of improvement? Good, Ali, right: they did a SWOT analysis to understand, what are our strengths, and we're going to build on those strengths; where are we weak, where are the gaps, where are our development opportunities? And what they did was fix the weaknesses, and where there were opportunities, they capitalized on them. So this is a good example of a company that used SWOT analysis; it's a very powerful tool, like you're suggesting. Remember, information is only potential power. Very often people say information is power; well, no, it's really only potential power. It's only power if it's put to use, do you see what I'm saying? It's not enough that you have all this data, all this information; you have to do something with it, it's got to be actionable. So you do a SWOT analysis, great, but we don't just get credit for doing a SWOT analysis; we have to take the learning from it and act upon it, which it sounds like is what Harley-Davidson did in this case. Questions about that? Number 10, who's gonna read number 10? Max? [Max reads question ten; response largely inaudible: points of difference are unique strengths relative to competitors in the marketplace.] Correct. We need to be familiar with these two terms, points of parity and points of difference. Max tells us that points of difference are those areas that are strengths relative to our competitors; the points of difference are where we are unique relative to the competition. The points of parity are those aspects that we have in common with our competition. Now why do you think it's important to have points of parity? Why is it important, and can we come up with an example of why it would be important for us to have points of parity? Because you might think, we've talked a lot about differentiation and points of difference and the unique selling proposition and how that's a competitive advantage. What I'm suggesting here is that while that's important, don't get me wrong, while we need to differentiate ourselves, and certainly branding is a big part of that, because our brand is what's wrapped around the product and a very compelling way to differentiate one product from another, points of parity are also important. Good, Jacob. "I think that by identifying your points of parity, it can lead you to find where your points of difference would be. Meaning that if we sell the same thing, then we can decide how we're going to create a point of difference; maybe we give it certain packaging, or we offer it for cheaper, and that'll give us a competitive edge in the marketplace. So that leads to a point of difference, to make you interesting." So you got what Jacob is saying: before you can determine what the points of difference are, first you need to find out what you have in common. Once you know what your points of parity are, then you can decide how you're going to distinguish yourself, how you're going to differentiate yourself from your competitors. What else, anybody want to add to that? Ted? Joseph? Matt? "I think it's important not to be too extreme in terms of your difference. Even Apple, which is probably the most popular laptop or phone; if the iPhone looked so different from other phones, people probably wouldn't buy it. A new design that looks all different and funky might be an awesome system, but if it's so different from everything else there's no consistency with anything." So what we're trying to understand is why in some categories having points of parity is more important than in others. Joseph is saying sometimes being different is not good, and what I'm saying is, yes, having points of parity is important; in what situation would that be? "If that's your weakness, try to, not copy it exactly, but kind of take the form they use, so it becomes one of your strengths as well, and it's harder for the consumer to separate the two. Like, say a competitor markets a better-tasting orange juice; you copy the taste a little bit, and then maybe you win because your brand's marketing is better." Interesting, yeah, it could be. So can we come up with another example? "For me, I can see Samsung and iPhone; both have smartphones, but let's say a week from now the iPhone 5 just came out. Then the iPhone wants to have some kind of parity with other smartphones, so the user of a Samsung can find some similarity with the iPhone, or vice versa." And why would that be important? "So the customer has some sense of comfort; there will be new stuff in the iPhone that they want to get comfortable with, but as long as they have the basics that they had with their old phone, there will still be some sense of familiarity." So it's good to have parity, just so the customer feels good. "What about identifying a competitor's points of difference? You could probably go to those to make them your points of parity." Well, once you know what their points of difference are, I see, you look at it from another angle: you turn what's working for them into your points of parity. Then you're taking a me-too approach, and some companies do that: they look at the market share leader, and whatever the market share leader does, they try to copy. For example, if you notice, when they open up a McDonald's, you might think, oh, they just opened up a McDonald's on a certain street just by chance, just at random, but actually they've done a lot of research to select a high-traffic location. And then once they open up there, what happens? You notice, and I'm sure the others didn't do the research, but once they see that McDonald's is there, they know McDonald's has spent a lot of money determining that that's an ideal location, and then you see Burger King open up, Wendy's, White Castle. That phenomenon is actually known as clustering, where you'll see, in this case, certain fast-food restaurants together in a given location, which has become very common. "Kind of like orange juice and milk; both are beverages you have in the morning. So orange juice brands, to get the edge, used as their point of difference that it has a little bit more calcium than milk, because they looked at both beverages and realized people might prefer milk as their morning beverage. So what they did is add a little bit more calcium; that way they can get the edge. So they looked at their point of parity, saw their point of difference through that, and then made their point of difference in order to get the edge." Right. Actually, I see what you're saying: the point of parity is that they want to have at least the same amount of calcium as milk, if not more. Yeah, so in that category, that's a good example: sort of the minimum requirement of the customer is that they want to drink a beverage in the morning that's high in calcium and high in vitamins A and D, and so if you're going to be a morning breakfast beverage, then yeah, we want to achieve that level of parity with the substitute beverage. I think that's a good example. "But does it also work that sometimes points of parity are important and too much difference is bad? I'm not sure if she was saying the same thing, but in the case of the Nissan Cube, the car that looks very boxy..." Yeah, right, that's Jason, this is Jacob, right. So, Jason: "If you make things too different, people are gonna want similarity, so if it's too different people are gonna stay away from that kind of thing." They might; I mean, it depends on the category. Sometimes having something that different and unique is a reason why people purchase a product; they want to have something that nobody else has, the sports car; they want a car that is yellow, for example. Not that that's so outlandish, but blue is America's favorite color, so you see a lot of people own blue cars, black cars, white cars, red cars; but yellow, not so much. That doesn't mean we can't have a yellow car; maybe that point of difference is a reason why somebody would purchase it, because of that color. Is there a case where there's too much difference, that people just stay away from? Oh, it
could be and in fact that could be the reason why somebody purchases a particular product in fact in luggage they've even noticed this in the mall where they have suitcases in the window and they have suitcases for example that might be like some sort of like hot pink for example like a really hot pink color and that gets people's attention but when people in a store right they saw the hot pink suitcases like oh wow that's so cool and they go on the store but and at the end of the visit they end up leaving with black luggage or brown luggage but the reason that they purchased the black luggage is in comparison to the hot pink right that was like you're saying is like that was like - maybe for them like too far-fetched or too flamboyant so they ended up buying taupe luggage or some other color that maybe it was more neutral less flamboyant but understand how the retailer uses that drawer people into the store and to create foot traffic because when you see that you're like wow that's you want to go in and see and then afterwards very well then people will not purchase that but that's what gets people to to come into the store and you're right so I think maybe that's too different relative to standard luggage in the industry anybody else what about in terms of point of parity let's say the point of parity is safety what do you think is that do you think let's say and remember last time we're talking about pain relief what do you think is safety or something being safe to use an important point of parity in the pain relief category so do you see how that's like the minimum requirement the point of parity what is it that all brands of pain relief have is that they're safe to use that's the minimum requirement then you have points of difference then you have well some you have to take one pill a day some you have to take two a day three a day for a day those are points of difference now if your medication only requires that you take one a day that's certainly a point of 
difference that you to emphasize because really I mean who wants to take four pills a day if you can would you rather take just one pill a day what do you think no that doesn't you guys who had to take four pills a day than one what about you take a vitamin in the morning they said no instead of just taking one vitamin in the morning and I had to take four vitamins a day I think most people that they had a choice they would want to take less pills so being able to take one pill a day instead of four is a compelling point of difference in that category or how fast the medication takes effect so in some cases you take the pill it works in 30 minutes you know the cases it works in 60 minutes sometimes 90 minutes those are points of difference now again if yours works in 30 minutes that's appointed power a point of difference that you want to emphasize but the point of parity certainly is that the minimum requirement is that it's safe to use so you see how that would be an example of a point apparent at all competitors in that category have in common is that the product is safe to use does that make sense oh that was question 10 |
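The parity/difference framework the discussion keeps circling can be sketched as a simple set comparison: attributes every competitor in the category shares are points of parity, and attributes unique to one brand are candidate points of difference. A minimal sketch; the brand names and attribute strings are hypothetical, not from the lecture:

```python
# Hypothetical pain-relief brands and their claimed attributes.
brands = {
    "BrandA": {"safe to use", "1 pill/day", "works in 30 min"},
    "BrandB": {"safe to use", "4 pills/day", "works in 90 min"},
    "BrandC": {"safe to use", "2 pills/day", "works in 60 min"},
}

# Points of parity: attributes every brand in the category shares.
parity = set.intersection(*brands.values())

def points_of_difference(name):
    """Attributes this brand claims that no competitor claims."""
    others = set.union(*(v for k, v in brands.items() if k != name))
    return brands[name] - others

print(parity)                          # {'safe to use'}
print(points_of_difference("BrandA"))  # the one-pill, 30-minute claims
```

Here "safe to use" falls out as the category's minimum requirement, exactly as described above, while the one-pill-a-day and 30-minute claims surface as the differences worth emphasizing.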
Marketing_Basics_Prof_Myles_Bassell | 7_of_20_Marketing_Basics_Myles_Bassell_31212.txt | All right, so today we're going to continue our conversation about price. We talked last class about price; who could give us a quick recap of what we talked about, some of the key takeaways? We talked about pricing objectives. What did we say the pricing objectives were? Profit is one, sales is another, market share is another pricing objective, unit share, the volume. Anything else? Social responsibility is also a potential pricing objective. And we talked about price elasticity of demand: some markets are elastic, some are inelastic. Today we're going to talk about different approaches to pricing, demand-oriented approaches to pricing. One example of a demand-oriented pricing approach is skimming. What does skimming mean? It means that we start at a high price and then lower the price of our product or service in a planned way over time. It's not a reaction to the introduction of a product by competitors or to competitors increasing their advertising; it's a deliberate strategy, a demand-oriented strategy referred to as skimming, which is very common, for example, in technology markets. In electronics, skimming is very often used by companies, and a good example would be the VCR. Do you guys know what VCRs are? You do? Right, it played tapes. Electronics manufacturers, whether it's Magnavox or Sony, try to manage the adoption curve model, the adoption curve process, and that's why skimming interests us as a demand-oriented pricing approach: it's going to impact the rate at which demand for a particular product or service occurs, and even more so in elastic markets, markets that are price sensitive. What that means is that when you
lower the price of the VCR over time, whether it's every three months, every six months, whatever schedule we set, our expectation is that as the price declines, more people are going to purchase the product. The first VCR I purchased, I bought at Macy's, and I paid over $1,100 for it. Later you could buy a VCR for 39 bucks in CVS, Walgreens, Kmart, Target; you could buy a VCR for very little money. The manufacturers lowered the price over time because they wanted to increase the number of people who purchased the VCR. Those who were willing to pay $1,100 or more for the VCR we identify as innovators, which is approximately 3% of the market; the model says 2.5%, but it could be 3%, 4%, 5%. The idea is that a very small percentage of the target market are going to be the first to buy and are willing to pay a high price. But then comes our challenge. Remember when we talked about the adoption curve model, which is also referred to as the diffusion of innovation model, created by Everett Rogers: it's a bell-shaped curve that says that when the product is first introduced, in the introduction stage, there's a small percentage willing to buy at a very high price, the innovators. But then the question is, now that they've purchased the product, how do we get them to purchase another product, or how do we get the other 98% of the market to purchase? And we said that in elastic markets, one of the levers, one of the things we could do to increase the rate of adoption, to get more people to purchase, is lower the price. That's not the only thing: you could advertise more, which also helps increase the rate of adoption; you could add more features; there are other things you could do to increase the rate of adoption. But for our discussion today we're focusing on price. So they lowered the price, and then the early adopters purchased the product, and then they lowered the price again, and then again, and then again. Now, in an elastic market this is going to be very effective, in a market that's price sensitive. But not all markets are price sensitive, which means that if you lower the price or increase the price, the demand is not going to change. Think about, for example, medication, let's say heart medication. If the price goes down, so what do you do, instead of taking one pill a day you take two pills a day? Your demand is not going to increase, is it? Because they lowered the price 20%, now you take two? No, that market is inelastic: it's not price sensitive. Or what if instead of lowering the price 20% they increased the price 20%? "That's it, I'm not taking heart medication, that's ridiculous"? No, you're still going to take your medication even though the price has gone up. Now, there may be some substitutes; you might ask the doctor to write a different prescription, a different brand that provides the same functionality but might be less expensive. But in general I think it's reasonable to say that what I've described is an inelastic market, a market that's not price sensitive. Now, in some cases some people just can't afford it; there may be some people who say, "I just can't afford to pay that much for medication," and that's unfortunate. We'd like to think that somehow people have the money and will be able to afford their medication so their consumption wouldn't decrease, and companies also have patient assistance programs: if you can't afford the medication, they'll give it to you at a reduced price, in some cases even free. So, skimming: we start at a high price and
then lower the price over time. So remember, we're thinking about two models and overlapping them: the diffusion of innovation and the product life cycle. At the time of introduction, what we're saying is that in electronics it's common to use a skimming pricing strategy: when we introduce the product, we introduce it at a high price. Then how do we get from introduction to growth, how do we sell more units? Well, we lower the price. And how do we continue to grow? In this case, we lower the price again, and then again and again. So these two models are related; they're two really insightful models, the product life cycle and the diffusion of innovation. And importantly, what's the key takeaway? That we can influence those models. The product life cycle includes several stages: introduction; growth; maturity, which is the point at which sales are flat, so growth is no longer occurring; then decline; and in some cases obsolescence, which means the product has become obsolete and sales are zero, or maybe when sales are close to zero we might already consider the product obsolete. Why is it important for us to understand that? Because the model suggests that if we do nothing, that's what's going to happen; there's a consequence to doing nothing. And the length of time, the period of growth, is going to vary from product to product. You can't say in advance how long we're going to be in the growth stage of the product life cycle, a month or a year or five years or ten years; it depends on the product and the category. Some products can be in maturity for a very long time, and that's not necessarily a bad thing. When you think about market attractiveness, when we go through our portfolio analysis, if the category is large, then whether we have a small share of the market or a large share of the market is going to influence our future decisions in terms of how we manage the organization. Does that make sense? So maturity is not necessarily a negative; it just means the category isn't growing. But if we have 30% of the market, we have a significant business; the business is large, and hopefully it's profitable. The BCG model: what are the two dimensions we look at in the Boston Consulting Group model? The growth of the industry and the market share. That's why I'm mentioning maturity, because maturity would mean that growth is low or virtually nonexistent. It doesn't mean, when we say a category is mature, that there isn't any growth at all. For example, the beverage industry in the United States is about $200 billion per year at retail and grows approximately 2 to 3% per year. That's not what we mean by growth. When we talk about growth, we're talking about growth that's significant: 30%, 50%; some technology markets or categories are growing at 100%, 200%, 300% per year. If a category is only growing 2 or 3% per year, that's an indication the category is mature, and certainly if it's less than that, there shouldn't be any doubt. So what we need to understand is how we use price to manage the product life cycle, how we use price to influence the rate of adoption. That's critical for us to understand. Remember, when you look at that model, we don't just accept it; what we're trying to do as marketers and businesspeople is extend the period of time that the product experiences growth and accelerate the rate of adoption. Isn't that why they introduce it at a high level, to make up for possible losses later on, as part of the strategy, planned from the beginning? Yes, so that you should be able to lower the price and still be profitable, because over
time you're going to achieve economies of scale, and that high initial price will cover whatever losses you might incur, cover salaries and machinery, and offset startup costs. Absolutely. So we introduce a product at a price that's high and then lower it over time; that's known as skimming. What would be another good example? We talked about the VCR, which is certainly a great example of skimming: they introduced the VCR at a high price and then lowered it over time. Video game systems? All right, tell us about video gaming. "I remember the PS3 started at like $500 or $600; now it's maybe $300. They always come out at a higher price, especially because there's new technology in it, and then over time, once the early buyers have it, they lower the price to try to get new users." Right, they want to get new users, they want to increase the rate of adoption. What about clothing, which uses that a lot? So, some retailers use what we call a high-low strategy, which means they're on sale like 80% of the time; they're constantly running sales. Remember we talked about channels and distribution: that's very common in the department-store channel, where the product is set, like you're saying with clothing, at a high price but is always on promotion, always having sales, 20% off, 30% off, 50% off. What you're describing is a retail strategy, something retailers use; what I'm talking about here is a skimming strategy used by the manufacturer of the product. Some retailers use a strategy known as EDLP, everyday low price. Walmart's strategy is EDLP: they don't run sales, usually; they don't heavily promote their products in terms of selling at a reduced price. Their positioning, if you will, their pricing strategy, is such that they want us to believe their products are, so to speak, on sale all the time, that their prices are always low, so they don't run sales. What about software? It's very expensive in the beginning, but once something newer comes out, it loses its price completely. Well, we have to understand who is implementing the strategy. For example, the Xbox: when the Xbox 360 was first introduced, students were telling me that stores down the street were selling it for $750. So high. But the question is, is that a Microsoft strategy? Who's getting the margin there, Microsoft or the retailer? It's like clothing in that respect; software is not like that, is it? So give us an example, like Illustrator, the whole package. Right, Adobe Creative Suite: they have Creative Suite 2, 3, 4. Does Adobe sell the prior version at a lower price, or is that something the retailer does? "With time, you can buy the earlier version from Adobe itself for cheaper once the new versions come out. So is it skimming? Because the latest version right now costs almost $2,000, and the previous one also cost about that much when it was current, but now it's like $400. So it seems like skimming." Right, so what do you think about that? What's happening there: is that an example of skimming, or is it a different product; are we just selling different products at different price points? Is it, basically, a line extension? We had a product at $2,000, now we replace it and we're selling a different product, of course still the Adobe Creative Suite but with different features, at $2,000: the CS5, Creative Suite 5. And then, like you're saying, Creative Suite 4 and Creative Suite 3 are at different price points. So are we actually lowering the price, or are we actually increasing the price? What do you think? "If it's a line, then yes, we're increasing the price, if we're looking at it as a line extension. But in terms of the service it provides, the VCR, for example, was outdated by DVDs, two different technologies from two different owners; here the old software is outdated by newer software from the same owner. So by that definition, this one would be the line extension and the VCR would be the skimming." Yeah, so it depends on how you're going to define the product. If we define it in terms of the service, the features it provides, then think about what happens when a product becomes obsolete and you replace it with a new one: is that an example of incremental sales or cannibalization? Remember we talked about incrementality versus cannibalization. What Alexia is sharing with us: are we getting incremental sales when we introduce Creative Suite 2, Creative Suite 3, Creative Suite 4? What do you think? Probably not; we're cannibalizing the sales of the prior version. Yeah, exactly. And I think it's a good thing: it's an example of continuous innovation, and we'd like to think it results in some incremental sales, but for most of the sales, we're replacing, we're cannibalizing, our own sales. And if you make the old one cheaper, then you're cannibalizing from your new sales, right? People are going to say, "I may as well buy the old one at that price." And that's also what we saw with the iPhone. What happened when they introduced
the iPhone? So very shortly after it was introduced, within a month or so: when the iPhone came out, it was $600, and then only several weeks later, maybe six weeks later, they lowered the price to $400. And what do you think about that situation; is that an example of skimming? For sure, right, because the product is the same. That's what I was trying to clarify with the earlier point: is it the same product, or a different product selling in the marketplace? In that case it was basically the same product, but they understood that, as it approached the holiday season, they weren't reaching their target. Also, there are things that happen behind the scenes at Apple that we're not privy to, so we have to make some assumptions. One is that they weren't going to achieve their unit volume sales, and so they lowered the price. The other thing that could have happened is that, because there was a delay in the launch, they had already decided, like we were suggesting, to lower the price at that particular point for the holiday season; but because of the delayed launch, it turned out the price change came only four to six weeks after they had introduced the product. That's not what they had expected: had they introduced the product when originally planned, there would have been enough of a cushion, if you will, between the time it was introduced and the time they lowered the price. But certainly I think that's a good example of skimming: they introduced the product at a high price and then lowered the price of the iPhone. And how much is the iPhone selling for now? With a contract, the first model is probably $200 for the 16 gig. "That's because the phone company takes on some of the cost; they build it into your bill, so it's not really that price." Absolutely, that's an excellent point, Zach. You see, what's happening is that the service provider wants to get us to purchase their wireless communication service, so what they're willing to do is subsidize the cost of the product. In some cases you can get any number of phones for free, but this phone is so expensive that they incur part of the cost and the customer pays part of the cost, and they do that as an incentive to get you to take the service. So the price at the Apple Store is different from the price at, say, AT&T, because if you go into their store to buy a phone, what they want you to do is sign a two-year service contract; if you sign a two-year contract, they'll give you the phone at a very reduced price. But there are other strategies. A company has a decision to make as to whether they're going to use a skimming strategy, or maybe a penetration pricing demand-oriented approach, which is that you introduce the product at a low price. So right out of the gate you're trying to sell as many units as possible, as quickly as possible. But sometimes students are a little unclear here: they try to draw an analogy between skimming and penetration, and they write on exams that skimming is introducing at a high price and then lowering it, and then they say penetration pricing is just the opposite, you introduce at a low price and then increase it over time. No. In fact, in many cases it's almost impossible to do that, to introduce at a low price and then tell customers, "Just kidding, now it's double." It's very difficult to raise the price after you've already established a price point in the market. That's why, when we promote products, promotions have to be handled carefully. Let's say orange juice: it's very common that orange
juice is promoted at two half gallons for $5 so two for $5 but a half a gallon of Tropicana orange juice is not $2.50 but people expect because it's promoted so so heavily at that price that really that becomes the price in people's mind and so very often people are reluctant to buy it at the regular price which is basically $4 for a half gallon and what what do they do when it's on sale so they buy more so let's say if they buy if they usually consume half a gallon of orang juice a week when it's on sale they buy 3 half gallons and then what does that mean next week they don't buy any and the week after that they don't buy any so we need to keep that in mind the impact that promotions are going to have on our sales in the short term so what that's described as is overstocking the customer who are overstocking the trade because retailers they also buy AE if the manufacturer offers a promotion sure they're going to they're going to order 10 extra container loads not that they're going to sell it that week but that they're going to sell in the next week or the week after that ideally perishability is not an issue it could be then they they have to adjust their approach but they might also buy ahead so penetration pricing is we introduce the product at a low price just in general right period there's no like then you increase the price I don't know that's very challenging to do that you see why but what would the example of somebody doing that like anyone ever like in what case would you ever are you saying raise it after just like like why would you do that I was thinking has anybody ever released a new product and said like the price of this product is going to be 500 bucks but if you buy it in the first two weeks it's only 300 bucks yeah you could do that what we do is very often um to manage channels what we do is we introduce a product um yeah it's it's it would it's unusual to to do that um because people are not going to want to pay a high price once they've 
already purchased a product at a low price they're not going to want to pay a high price anymore we've already established the perceived value in the market so the thing is we need to keep in mind though is that the number of units that we're going to sell if we Implement a penetration PR pricing strategy is going to be very significant so we're trying to sell a lot of units in a very short period of time at a very low price so that creates a challenge for us the challenge is we have to have the production capability so if we sell the product at $1,100 then let's say we have a demand for 10 million but if instead of at $1,100 we're selling the product for $39 now we have an annual demand of 250 million so we have to ask ourselves and this is also something that we consider when we deploy a skimming approach to pricing are we going to be able to meet demand so if we introduce the product at $39 are we going to be able to produce 250 million units to meet demand now in very um that's a problem because the assumption is that we are going to have that demand and they're making assumption that people are going to buy it at $39 so we're going to build capacity to produce 250 million units but is that are we managing our risk effectively because what happens if they don't come what happens if we built capacity to produce 250 million of this product and the competitor introduces a product that LeapFrogs us at the same time so in words they have an advanced technology the technology that's more sophisticated than our product now we have the abil you got a big problem so we have to think about that because it's we have to be able to produce the amount of units that are going to be demanded and that means that we have to create the production capability which means we might be investing billions of dollars in manufacturing equipment and a manufacturing facility obviously a manufacturing facility that's going to be quite huge to produce produce so many units but if we have a 
skimming strategy then we know well yeah there's some advantages and disadvantages to that strategy but maybe we're only at that price we're only going to sell like 3% and so it gives us a way to test the market right we could there's other ways to um Implement a test Market as part of our research approach but we get a sensus to the demand and we could create category need over time because even with a penetration pricing strategy that doesn't mean that we have immediate category need does that mean that everybody is still going to see a need for that product to remember part of the diffusion of um Innovation deals with not just price it's not just price price we said is certainly one way for us to impact the diffusion of innovation model the adoption Curve Model but remember those different segments in the market the innovators the early adopters the Early majority the late majority those are also Lifestyles so just because you lower the price that doesn't mean or you start at a low price that doesn't mean that people are automatically going to buy MP3 players there needs to be category need we need to create that primary demand so it's complex this is a very complex situation low ing the price or starting at a low price doesn't just solve our problem we still need to create primary demand we still need to create category need so we've got to integrate these approaches and take into account how they're going to influence the demand for the product because people may not see a need for MP3 players like what do you mean digital media I don't what I don't get it what I need a digital media storage device or digital media player I don't have any digital media Med media what am I going to do with this thing so we need to understand that what about Prestige pricing so we think about demand oriented pricing approaches what is prestige pricing how does that influence the price that we're going to charge for our product or service what does that mean yeah Alexi just SP 
The name, I guess, the prestige, whatever image it has. For example, celebrities often set the prestige level for certain things, like clothing and cars. So the car has a certain, not physical value, but more of a conceptual value, a value of just being popular, and that conceptual value is worth the price; the more of that value there is, the higher the price you can set. I guess it's also kind of like skimming, in that there are intangible aspects of what we're selling. It's something above the market price, meaning that a Rolex you're not going to put in the same category as other watches based on price. There are normal business watches, but you want to place your brand above those, as a prestigious brand. So if every business watch is up to $5,000, you place your brand as the $15,000 watch, the $20,000 watch. So that's part of it. All right, you want to add something to that? Just to give the example of Rolls-Royce: you're not really paying for how the car functions, which is similar to most other cars. It's how other people perceive it, and how you perceive it; that's what most of the money is going for. Right, so based on the price there's an expectation, an expectation of quality, of performance quality, and that's going to impact the perceived quality. So what is the implication there in terms of how we price our products? What does it mean, then, when we talk about prestige pricing? Setting a high price to make your customer believe the product is so great that it's worth the money. Yes, you're going to charge more relative to what competitors are charging for a product that has comparable functionality. Like the watches Jacob is talking about: don't all watches basically serve the same purpose? They all tell time. Some watches are a dollar, some are $1,000, some are $50,000, but the basic functionality is the same. If it's a Rolex, though, the product is going to be priced accordingly, because the price is what's going to help us create an element of prestige. Very often, for example, around Father's Day, retailers are sensitive to this. You'd think they're going to sell clothing very cheaply, sweaters and ties, buy your father a sweater for $10. But they understand this idea of prestige pricing, so very often what some retailers will do is try to sell sweaters for $50 instead. Why is that, based on what we just discussed? Why is that an example of prestige pricing? Because at that point the sweater gains much more prestige; that's the kind of thing you would give your father. Right, so the perceived value is going to be higher. You want to buy your father a nice gift for Father's Day, and at $10 you might say, why wouldn't they buy the sweater? You'd think at $10 they'd sell a lot of sweaters for Father's Day, and maybe for those with very low income that's all they can afford, but very often people feel, that's too cheap, I want to get my father something nicer. So maybe even the sweater that used to sell for $10 is now priced at $50, and it becomes more desirable because of that. If you came in and the sign said $10, you wouldn't buy it, but if it was the same sweater and it said $50, you might, because of this element of prestige. Like we were saying, the price helps deliver this intangible, conceptual element, which is prestige, so if it's at a higher price, like Jason is saying, the perceived value is higher, and the perceived level of quality is also higher. Good question: isn't it also up to us as marketers to create the prestige, to think about that before we even set the price? Absolutely. As marketers we have to determine what associations we're going to make with our brand and our product to deliver prestige: how do we position ourselves as a high-end, luxurious, prestigious product in the given category? Part of that is impacted by price. Right, but also, if we can have a commercial with, I don't know, Bill Gates or somebody with this kind of clothing or watch or car, whatever we're trying to sell, it would increase it right away. Absolutely. There are other ways; I'm just saying that today we're focusing on price, but sure, there are other ways we could create those strong, unique, and favorable brand associations to establish ourselves as prestigious, as luxurious. But that's kind of what Apple did, too. With which product? Apple in general; it seems like they've positioned themselves as a very prestigious company. So what is it that helped them do that, besides price? Because certainly their products are not inexpensive: MacBook Pros are over $2,000, and you could buy a netbook for 200 bucks. I'm not saying it has the same features and benefits, but there are alternatives, substitute products, if you want desktop word processing and so on. So absolutely, price is definitely a factor, but what else has enabled them to achieve that positioning, if you're saying it's prestigious and high quality? The design and the innovation. The innovation, yes. The innovation supports the price point, and that's what enables them, in some cases, to use this prestige pricing strategy.
Very often, though, Apple has used a skimming strategy for their products, but the element of prestige, in terms of how they create it in the marketplace, is in large part driven by their innovation. If it weren't for that high level of innovation, in other words the newness of their products, the research that they do, they wouldn't be able to charge those prices in the marketplace, because their products are state-of-the-art. Right, but it's also state of the art in terms of the design, the way they made it: you can always tell which computer is a Mac, you can never confuse it with anything else. They add their brand personality; they're like a whole separate race of computers, you could say, in a certain sense. Well, the brand, we said, is what makes one product unique from another. All watches tell time, and basically all laptops provide the same basic functionality; that doesn't mean some don't have more advanced features, but they have the same basic function. What makes them unique is the brand; they're wrapped in a different brand. Now I want to switch gears, because we've talked about a few examples of demand-oriented approaches, and I want to talk about cost-oriented approaches. What's the difference? Why do we classify these approaches that way? Skimming, penetration pricing, prestige pricing: those are demand-oriented approaches to pricing, and cost-oriented approaches are, for example, cost-plus pricing. So why do we make a distinction between cost-plus pricing and penetration pricing? How is the focus different? Based on how much it costs you? Right. When we talk about cost-oriented, that means we're going to determine the price based on our cost, whereas when we talk about demand-oriented, of course we still need to know what it costs us, but we're focused more on what the demand will be at a particular price, and that's going to determine whether we have a penetration pricing strategy, where we introduce the product at $39 and try to sell 250 million units, or we introduce the product at $1,100. Cost-plus pricing, for example, is very common in retail pharmacy. Cost-plus pricing can be a fixed dollar amount; let me share with you how retail pharmacies implement it. The store purchases the medication at a certain price: let's say they purchase a month's supply of an allergy medication wholesale for $150, and then they add a fixed amount to that to determine the price. This is an industry norm; the insight I'm giving you about retail pharmacies is based on my experience. Usually they'll add $5, $10, sometimes $15 to the cost of the medication. So if that month's supply costs them $150, they'll sell it to us, the customers, for $155. Some charge more: $5 is what some add, some are able to add $10, some add $15 to their cost. And we said in some cases they might have a different strategy, a pricing strategy based on social responsibility, where they sell the medication at a very low price. Then the pricing is not based on profits; it's not always about profit, sometimes it's about doing the right thing, about helping people who are in need. Is that just like a markup rate, where they mark up each item by a percentage? Well, a percentage is what we call standard markup, which is another cost-oriented approach, different from cost-plus. With standard markup, our price is based on a markup percentage of our cost. An example: say we purchase a product as a retailer at $10 and we have a 50% standard markup; the markup is based on cost, so we would sell the product for $14.99. Depending on the cost, the dollar amount of the markup is going to vary: if we purchase the product as a retailer for $100 and we still have a 50% standard markup, then we would sell the product at retail for $149 instead of $14.99. So those are different cost-oriented approaches, one being standard markup, a percentage of cost, and the other cost-plus pricing; you see the difference. Who can tell us what cost-plus pricing is, based on what we just discussed? Just adding a fixed amount to the cost, as opposed to a percentage. Yes, adding a fixed amount to your cost, and an excellent example really is retail pharmaceutical sales. Very often pharmacies use cost-plus pricing for their prescription medications, and for their non-prescription medications and other products, over-the-counter medication, orange juice, shampoo, that's usually standard markup. But it's very common in retail pharmacies to use cost-plus pricing for prescriptions. Okay, how do both of these play out when a manufacturer wants to sell a product at a certain price across a bunch of stores? Say for a game system, the Xbox: if they say it should be sold at $400, how does that work with standard markup and cost-plus pricing? As a manufacturer, we need to understand the margin requirements. There are margin requirements for any category; in some cases it's 10%, in some cases 30%, sometimes 50%, in some cases 100%, so we need to price the product at a level where the retailer can purchase it at that price and still achieve their standard markup.
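A minimal sketch of the two cost-oriented approaches, using the lecture's numbers: the $155 prescription under cost-plus, and the $10 and $100 items under a 50% standard markup (which land at $15 and $150 before the odd-even rounding to $14.99 and $149).

```python
# Two cost-oriented pricing approaches from the lecture.

def cost_plus(cost, fixed_amount):
    """Retail-pharmacy style: add a fixed dollar amount to cost."""
    return cost + fixed_amount

def standard_markup(cost, markup_pct):
    """Add a percentage of cost to get the retail price."""
    return cost * (1 + markup_pct)

print(cost_plus(150, 5))          # 155 -- the month-of-medication example
print(standard_markup(10, 0.50))  # 15.0 -- shelved as $14.99 with odd-even pricing
print(standard_markup(100, 0.50)) # 150.0 -- shelved as $149
```

Note how the dollar markup under the percentage approach scales with cost ($5 on the $10 item, $50 on the $100 item), whereas cost-plus adds the same fixed amount regardless of cost.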
Now, manufacturers have an MSRP, the manufacturer's suggested retail price, which I think is what you're alluding to: we think you should sell it at $500, or we think you should sell it at $50, whatever it is, the manufacturer makes a suggestion. Manufacturers will also have what's called MAP pricing, which is minimum advertised price. If you have a product that's very popular, very often retailers want to promote that product to generate a significant amount of foot traffic in their store, which is another pricing strategy known as loss-leader pricing. Loss leader means you're selling the product, in some cases, below your cost, less than what you even purchased it for as a retailer. Why? Because you know that's going to get people into the store. If you're able to do that with the iPad, for example, or the iPhone, or the iPod, which has been very successful, products with very high demand, then selling at a significant discount or even below your cost is an example of what we also talk about in this chapter: loss-leader pricing. Is that not allowed? Say for an iPad with a certain amount of memory: if the retailer does that, the manufacturer has a choice, and the penalty is they'll stop shipping your product. Right, that's what MAP pricing is: MAP pricing says you can't advertise the product for less than this amount, and if you do, we're just not going to ship you any product. So if some company runs an ad for $299 when we've said the MSRP is $500, that's it, we're not going to ship them any product. Is that why they bundle it with, say, accessories or gift cards, because they can't cut the advertised price itself? Right, that's an approach some companies take. Some companies even put the MSRP on the product, or on the packaging rather, as a way to try to enforce, if you will, or to try to achieve, that suggested retail price, so that you also couldn't charge more. That's another problem: retailers might charge more than the suggested retail price, which is what I was suggesting with the Xbox 360. You might want it so badly that, well, somebody said, that's usually only like 300 bucks, and I said no, somebody was selling it in the Heights for like $750, and you think, what, are you crazy? But they can sell it for whatever they want; they might do that. And remember, that's not Microsoft's pricing strategy; that's the retailer's pricing strategy. We have to remember that when we talk about pricing and we look at the value chain, there are different players in the value chain, different organizations, and we need to determine: wait a minute, are we talking about the manufacturer's pricing strategy or the retailer's pricing strategy? Those are different situations. What's the point of pricing it at $750? How would you make a sale at $750? Because of scarcity, which means there's no product available. Very often products like that, products that are very popular when they first come out, sell out right away, and somehow you were able to get a hundred of them, or maybe only ten. So if you're the only guy in New York City with ten Xbox 360s, you can charge whatever you want; maybe you get somebody to pay $1,000. But remember, that's different from saying Microsoft introduced the Xbox 360 at $1,000; the implications are very different. No, we're saying a retailer charged more than the manufacturer's suggested retail price. And that's one of the things we talk about in this chapter: besides the demand-oriented approaches, penetration pricing, skimming, and what else did we say, prestige pricing, and others, there are also cost-oriented pricing strategies, we talked about cost-plus and standard markup, and then there's also MAP pricing, which is the manufacturer's attempt to maintain a certain perceived value in the marketplace, a certain minimum perceived price or value for their particular product. They're concerned that once it's promoted at $299, people won't be willing to buy it for $500 anymore. That's the challenge with promotion; that's the problem with selling orange juice at two for $5: after that, people don't want to buy orange juice at $3.99 for a half gallon anymore, because to them we've now reduced the perceived value from $3.99 to $2.50. That's why I said, when we talk about penetration pricing, how do you start off at a low price and then raise the price? Once you lower the price, it's very difficult: one week you tell them it's $2.50, and the next week you say it's $3.99, and you explain that it was on sale, that was a TPR, a temporary price reduction. We're thinking about it in rational terms, but are our customers rational? Is their behavior rational? Very often it's not rational, it's emotional: well, last week I paid $2.50, this week it's $3.99, something's just not right about that. So it's a challenge. And we also talked about competition-oriented approaches, which include the loss-leader pricing strategies, selling below the market price, and also the idea of selling above the market price; there are situations where you might actually sell above. And then sometimes we have profit-oriented approaches: in other words, there's a certain amount of margin that we want to make, or a certain return on investment.
So we look at the gross margin, or the return on sales, for a particular product, and that's going to influence the price we charge. It's not the number of units; the profit target is our pricing objective. Sometimes pricing isn't based on demand, it isn't based on cost, it isn't based on competition; it's based on a target we set for profit. Our senior management tells us we need to have a 50% gross margin, but maybe at that level of profit we're going to sell fewer units. In my experience, I worked with a variety of different retailers, working with clients to sell in their products, and some of them say: we realize that at the lower price we'll sell 50,000 units per week, but that means we'll have to accept only a 20% margin. So it's 20% margin on 50,000 units, or 50% margin on 20,000 units. Now, we can all decide for ourselves what we think is better, but what I'm telling you is that some retailers say, you might think we're crazy, but we'd rather sell 20,000 units and make 50% margin than sell 50,000 units and make 20% margin. So sometimes the pricing strategy is based on a certain level of profit, a certain percentage, that they want to achieve. Now, how would we counter that if we're trying to sell into their store? How would we encourage them to take only a 20% markup on the product instead of 50%, to sell the product at, say, $120 instead of $150? What would be our logic, our rationale? More people buy, so they have a bigger customer base, higher market share. Right, they'd certainly sell more units, more foot traffic in their store. But sometimes, for retailers, the objective is not to maximize the number of units; that's what I'm sharing with you in terms of pricing approaches. Sometimes the approach is focused on profit, and they're willing to say, I'd rather sell 20,000 units instead of 50,000 units and be more profitable. But that's just looking at the margin percent (we've got three more hours, right?). What we would argue, what we would suggest, is: yes, you'd be at 50% margin, but look at your total profit, because you can't take percentages to the bank, only dollars. Who has a calculator? Let's see if these numbers work. How much is 50,000 times 120? Six million. And 20,000 times 150? Three million. So the difference in sales is very substantial: $6 million versus $3 million. But now let's look at how much we make per unit, the unit contribution margin. If our cost is $100, then at $150 we make $50 per unit, and $50 per unit times 20,000 units is a million. And at $120 we make $20 per unit, and $20 per unit times 50,000 units, you can do that math in your head, is also a million. So what does that tell us? It tells us, based on our assumptions, because they might say, well, how do I know it's going to be 50,000, that the number of units is different but the total amount of profit is the same: in this case, a million dollars. So whether we sell 20,000 units at $150 or 50,000 units at $120, we would argue you're still making a million; your gross margin on that product at retail is still a million dollars. So it depends how you want to make it.
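The million-dollar comparison can be checked in a few lines. The $100 unit cost below is implied by the $20 and $50 per-unit margins rather than stated outright in the lecture.

```python
# Sketch of the lecture's margin argument: the margin percentage differs,
# but total gross-margin dollars can be identical.
# Unit cost of $100 is inferred from the $20 and $50 per-unit margins.

def gross_margin(price, unit_cost, units):
    per_unit = price - unit_cost
    return {"revenue": price * units,
            "margin_per_unit": per_unit,
            "total_margin": per_unit * units}

high_price = gross_margin(price=150, unit_cost=100, units=20_000)
low_price = gross_margin(price=120, unit_cost=100, units=50_000)

print(high_price["revenue"], low_price["revenue"])            # 3,000,000 vs 6,000,000
print(high_price["total_margin"], low_price["total_margin"])  # 1,000,000 either way
```

The revenue gap is large, but the gross-margin dollars are the same, which is exactly the "you can't take percentages to the bank" argument.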
And that's certainly part of our case for selling it at a lower price and taking less margin percent. We'd want to present it in a way that's even more compelling, by showing that the total amount of profit would be even more than a million dollars. Again, this is just based on our assumptions, which they might challenge; they might say, well, you said it would be 50,000 units, and we think at that price it would only be 30,000 units. All right, good job, see you guys next time. |
Marketing_Basics_Prof_Myles_Bassell | 19_of_20_Marketing_Basics_Professor_Myles_Bassell.txt | All right, so today mostly we're going to talk about integrated marketing communications, and importantly I want to emphasize that when we talk about marketing communications, we're not just talking about advertising. Very often the focus is on advertising, and certainly we're going to talk quite a bit about advertising, but in order for us to effectively communicate with our target audience we need to have an integrated approach. So certainly we're going to consider advertising, and we're going to consider sales promotions, publicity, public relations, personal selling, direct mail; very often companies use all of those approaches to communicate with their target audience. The target audience, remember, is those they want to reach with their messages, with their communications; the target market is who they want to buy their product or service. Any questions about that? So again, it's not just about advertising. Yes, advertising is important, and we're going to talk about it; you could take an entire course in it, I worked in advertising for a long time. It's important, but it's not the only aspect of marketing communication, and even within advertising there are a lot of different mediums, which we'll talk about. But before we do, I just want to close the loop, if you will, on where we left off last time as it relates to distribution; we talked about place. As it relates to place, remember we talked about different channels of distribution. Does that sound familiar? We talked about mass merchants, department stores, grocery stores, drug stores, convenience stores, wholesale clubs, specialty stores, and the different retailers that operate in those channels. And then we said we need to decide which channels we're going to sell our product in, which retailers we're going to sell our product to. We have to make a decision: as executives, as marketers, as business people, it's our responsibility to decide, because in order for us to get distribution we're going to have to instruct our sales force. It's not going to just happen on its own; we're not going to just get distribution on the Walmart planogram by wishing and hoping. We have to have salespeople who are going to travel to Bentonville, Arkansas to meet with the buyer there for the department where our product would be sold, and convince them to take our product into their stores. Now, that's very challenging, because the buyers are interested in managing shelf-space productivity. They're looking at how many units they're selling for a given space, so for a given four-foot section: most shelves are four feet, an entire aisle could be 20 feet or more, but in terms of how it's constructed, most shelving actually comes in four-foot sections. Buyers are looking at that for their section, whether it's a four-foot section with five shelves, or an eight-foot section, or a 12-foot section, or maybe this particular buyer, for this category, has an entire aisle. Looking at shelf-space productivity, they want to see how many units they sold last week, this week, and over the last 52 weeks; how many dollars that section, and also that item, generated; and how many margin dollars it generated. All of those are metrics they're going to use to evaluate the performance of an entire section, a shelf, and even the individual items. They want to see: are we selling fewer units over a given time period, and is there a reason for that? Is it because the item is seasonal? It may seem like the number of units we're selling is declining, but maybe that's for a good reason, maybe because the product is seasonal. But regardless of the reason, there's an emphasis on category management, on this shelf-space productivity, and that's what our product is up against in order to get on the Walmart planogram. And we talked about a planogram: the planogram is the fixed layout for a given section.
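The shelf-space productivity metrics the buyers use (units, sales dollars, and margin dollars for a given section) can be sketched roughly like this; the item data and the four-foot section size below are invented for illustration.

```python
# Sketch of buyer-style shelf-space productivity metrics:
# units, sales dollars, and margin dollars per linear foot of shelf.
# The items and their 52-week figures are invented for illustration.

def shelf_productivity(items, section_feet):
    units = sum(i["units_52wk"] for i in items)
    sales = sum(i["units_52wk"] * i["price"] for i in items)
    margin = sum(i["units_52wk"] * (i["price"] - i["cost"]) for i in items)
    return {"units_per_ft": units / section_feet,
            "sales_per_ft": sales / section_feet,
            "margin_per_ft": margin / section_feet}

section = [
    {"units_52wk": 5_200, "price": 2.97, "cost": 2.10},
    {"units_52wk": 1_800, "price": 7.47, "cost": 4.90},
]

result = shelf_productivity(section, section_feet=4)
print(result)
```

A buyer comparing two candidate items for the same four-foot section would run numbers like these for each and keep the one with the better dollars and margin per foot.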
Part of the secret of success at retail has to do with the fact that they have a standardized layout, with some exceptions; sometimes they have what are called regional planograms, but the layouts in many cases are standardized. And remember we talked about the visual theme at retail: what is the look and feel of the store? Do they play music? Is the paint on the wall orange? What type of lighting, what type of flooring? And remember we talked about the flow, the flow at Ikea: remember we said we go round and round and round, and the flow is something very deliberate, because retailers want to sell complementary products and accessories. What does that mean? What would be a good example of that? We have this flow, which means, say, we come in here, then we walk this way, then they take us through here and through there; what is the retailer expecting to happen? Go ahead. It has to make sense: if you're in the bed department, the next one might be bed furniture, bed something, so it makes you buy something that complements what you already bought. Right, exactly: something for the home. One section might be bedroom furniture; the next section might be what? Pillows. Then you walk down the other aisle, because you can't go straight, it's a dead end, so you have to turn left, and what do they have? Sheets. Then you keep going this way, and there's a wall, so you walk up that way. Or in clothing: you come in, one section is where they have the pants, then you walk down the next aisle, you can't go straight, so you turn right, and what do you find there? Sweaters, shirts. You keep going to the end of that aisle, you have to turn right, and what do you find there? Shoes, socks. That's very deliberate; that's an important part of retail management, the flow and also the visual theme.
So that being said, we recognize that we need to decide the market coverage for our product: what is going to be our distribution strategy? What are the options when we talk about market coverage, if we look at it on a continuum? In terms of market coverage, it could be intensive. Now, if we decide that our market coverage strategy, our distribution strategy, is intensive, what does that mean? What's the implication? Everywhere. What was that? It's everywhere. Josh, yes: Josh is saying that if our strategy is intensive, our intent is to sell our product everywhere; that's our goal. Now the salespeople are going to be crying, because that's going to be a very big challenge for them. If our strategy is intensive, if we're trying to achieve intensive market coverage, that means we're trying to sell our product in grocery stores, drug stores, convenience stores, specialty stores, department stores, mass merchants, and we should ask ourselves whether or not that's realistic. Does that make sense? So what do you think: let's say, for example, our business is selling jeans, a line of torn jeans, of course they have to be torn, right, and some are more torn than others, which is sort of interesting, but I'm not going to go there. Would it make sense to sell torn jeans in all channels of distribution? Would we sell jeans in grocery stores? What do you think, does that make sense? No? Well, they do sell things other than groceries in grocery stores; we call that scrambled merchandising. Scrambled merchandising is when you sell products that are not typical for your channel. Sometimes when retailers do that, it becomes typical for that channel: look at all the products you can buy at drug stores, am I right? We sort of take that for granted. You can buy laundry detergent there, and you might actually be able to buy a pair of torn jeans there, because they have so much diversity in terms of their assortment. I know I've seen them sell t-shirts and a wide variety of products other than what you might normally expect, beyond health and beauty aid products and health care products, beyond aspirin and Tylenol and shampoo and all of those, they sell t-shirts, and I really wouldn't be surprised if some of them did carry a small assortment of apparel, jeans and so forth. But that's very non-traditional; that's not what you would expect to find there. So we need to decide what's going to be consistent with the product we're selling: what channel is going to be a logical match? For jeans, no, I don't think we're going to sell jeans in grocery stores. That just doesn't make sense; we'd just be wasting our time trying to get distribution there, and they're going to laugh at us, it's going to be embarrassing. Because remember, I told you it's important that we know our customer, that we demonstrate we know our customer; that's an important part of sales. We need to know, for example, what the strategy of Walmart is: we need to know that Walmart is an EDLP retailer. What is that? Everyday low price. So what does that suggest, and how is that different from a high-low retailer? Who would be an example of a high-low retailer? The department stores, right. Department stores have sales all the time. Why? Because they carry the product at a high price and then lower it for sales; that's how we come up with the term high-low. Whereas Walmart really doesn't run sales. Their positioning in the marketplace, their unique value proposition, is that their prices are low every day; you don't need to wait for a sale. The price that you see, $7.47, or $9.48, and what type of pricing do we call that?
pricing is that odd even pricing or 948 that's the lowest price every day low price no sales it doesn't mean that they're barred from having any type of promotion yeah they could they could have promotions they're known for rooll backs so as a salesperson importantly we need to design an offering for them that incorporates their roll back strategy which is what that prices are going to continue to get lower that not only is that the everyday low price but that their promise is that they're continually striving to lower the prices even further you ever see their commercials watch for falling prices so that's known as a roll back when you go into stores you'll see let me do that again you'll see that they have these um danglers shelf danglers that'll say roll back so when we want to try and get distribution in their store we need to give them an offering that says hey we we we understand your strategy we're going to sell it to you at this price the first year and in the second year we're going to lower the price that you buy it for and in the third year we're going to lower the price even further why because this way you could Implement a roll back so in the first year that the product is sold in their store they're going to sell it for $297 the second year we've priced it so that they could sell it for 278 and then in the third year we've lowered price so that they could sell it at 254 that's sales that shows you know your customer and you design a a program that's relevant to their strategy now it's very different from a retailer who says well I want to take in this product now and I'm going to sell it for let's say $50 and then I'm going to have a sale and I'm going to mark it down to $40 then 3 months later I'm going to uh mark it down to $35 some retailers have sales all the time why do they do that what's the logic behind having sales what is it that they understand what is it that um whether it's a sale or it's a roll back what is it that the retailers 
understand about the market why would they do that Stephen right no oh my bad okay tell me your name oh Demitri okay go ahead in an elastic Market if you lower the price then more people buy yeah so in a price sensitive Market when you lower the price people are going to buy more so they're expecting when they lower the price that the number of units that they sell is going to go up and that's why if you tell them that you want to raise the price of the product that you sell them they're very reluctant in fact they're not going to accept a price increase because of what Dimitri just explained they understand that oh wait a minute so now you're going to tell me I'm going to raise the price from 248 to 27 for hell no because of what Demitri just said what's going to happen I'm looking at showspace productivity and now you're telling me that I'm going to sell less units obviously my sales for that shelf for that item are going to be less my margin dollars are going to be less but what would be a product that you think would make sense um to have an intensive Market coverage gum so yeah maybe maybe gum you could sell in department stores yeah right that's not their um their main assortment but they have um food that they sell in department stores they have um you could buy C there you could buy candy that's not um the main emphasis of their assortment but you might you might be able to sell gum or um orange juice maybe right you go to Blooming Dales right so you go to bling D but you expect you go there you have a glass of orange juice and uh maybe a a glass of milk but those type of Commodities um because uh they have food service there you might be able to sell um those products there they might be able to sell it in pretty much every channel Pepsi those all those products again even though Home Depot sells products to for the home that's their main product assortment right snow blowers sheedra Nails locks all these kind of things right c yeah but yeah you could also 
buy candy there, you could buy Pepsi, you could buy lemonade. I don't think I saw orange juice there. Huh? Yeah, I think in Lowe's they have small bottles of apple juice or lemonade. So maybe those products would make sense for an intensive market coverage. What is selective? What do we mean when we say the market coverage is selective? Say it again? Marina is saying we sell in specific, selective places. And who wants to elaborate on that? Go ahead. Like certain perfumes, I'd say they're selective in the sense that they'll only sell them to certain kinds of channels, but not to every channel, and not to just one channel. Absolutely. So certain products are more expensive than others, like some perfumes are more expensive than others: some are $100 and some are $5. So what Gana is saying is that if our perfume is a $100 perfume, we're not going to sell it at Kmart. There are other things that we might want to buy at Kmart, Kmart is a nice store, there are things that we would buy there, but we wouldn't buy a $100 bottle of perfume there, right? We wouldn't even look for a $100 bottle of perfume there. Which is important, because it's not just about where we want to sell our product. So if our perfume is $100 or more at retail, we're going to try to get distribution at Bloomingdale's, Macy's, Nordstrom, Neiman Marcus, Saks. Does that make sense? We're not going to want to sell our product at Kmart. Why? Why wouldn't we want to? You might say, but what if they want to carry it? What if they said, we want to carry it? Well, it's about where the audience actually goes to find it. Yeah, so there are a couple of challenges. One of them is what you're suggesting: that people are not going to go to Kmart to buy a product at that price, in other words in that category. So it has to be where consumers, in this case, expect to find the product. And also, what about our positioning in the marketplace? It will change the positioning. Yeah, absolutely, it's going to change the positioning. You're going to erode your brand equity as, what, a high-end luxury perfume. What do you think about Vera Wang selling wedding dresses at Kohl's? Right, Vera Wang is a high-end couture dressmaker. So you could have a wedding dress, fellas, you guys pay attention because you're going to have to pay for this, you're going to buy a wedding dress for your fiancee, soon to be, that costs $50,000, for a Vera Wang couture custom wedding dress. Now what happens? They also sell Vera Wang products at Kohl's, because they're thinking, you know what, maybe we need to transition from exclusive to selective or intensive distribution. So what happens? It's your wedding day, and somebody comes up to you and says, I love your dress, is that crystal? That's amazing, wow. And your husband is standing there, right, and it's 50 grand later, and you say very proudly, because you know that your fellow paid $50,000 for the dress, well thank you, it's a Vera Wang. And your friend says, I shop at Kohl's too. You see, that's the problem. You can't be all things to all people. You have to be clearly positioned in the marketplace. Either you're a high-end luxury brand or you're not; it's one or the other. Go ahead. It's kind of similar to when Missoni made a design for Target, but they designed special things for Target, like you're not going to find the same thing at Saks that's Missoni at Target. Right, as part of channel management they try to do that. So just like whatever Ralph Lauren sells at Saks, they don't usually have at Macy's. Companies are aware of what we were just discussing, and so they try to manage that. But overall, is it possible you could still erode your positioning if you're a high-end luxury brand? Yes. I mean, is it smoke and mirrors? Consumers are not naive; they know. What about Mercedes? Remember we talked about the Mercedes symbol and the tremendous amount of equity in that. Well, there's a big difference between buying a Mercedes for $130,000 and a Mercedes for $30,000. But what did both customers get? And he said that the only one who really knows the difference is the person who knows that this is an S550, right, and this is a C-Class. So there are reasons why them doing that makes sense. From a manufacturing standpoint it makes a lot of sense; from a marketing standpoint it creates problems. So they sold, of course. Why did they sell so many? You're saying, well, wait a minute, coach, you're saying it's a dumb thing they did, but if it was so dumb, how come they sold so many cars at $30,000? They still had the name of Mercedes, but their other cars sell for probably like $100,000, right? So if you bought a car for $130,000 and then you come to campus and see all your students are driving Mercedes, then you're like, oh no, you've got to get a Bentley, because now even the students in college are driving Mercedes. So what Demitri is saying is that historically this meant luxury automotive, a $100,000-plus vehicle. And from the first day we talked about value, perceived value. So what Demitri is saying is that the perceived value is still very high. Of course people realize, well, I know the car is smaller, I know the engine is smaller, I know it doesn't have leather seats, I know it doesn't have cherry mahogany wood paneling, but it's still a Benz, right? They're not being fooled, but there's still this association between that symbol, that brand, and high-end, high-price luxury cars, and so the perceived value is very high. Remember we said value is a function of price, quality, benefits, and of course it could be other things, but those are three key aspects of value. So at that price, the perceived value is very high.
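The value discussion above lends itself to a tiny numerical sketch. This is purely a toy model: the lecture says value is a function of price, quality, and benefits, but the scoring formula and every number below are made-up assumptions, not anything from the lecture.

```python
# Toy perceived-value score: more quality and benefits raise it, a higher
# price lowers it. All inputs here are invented for illustration only.

def perceived_value(quality, benefits, price):
    """Crude value score = (quality + benefits) / price."""
    return (quality + benefits) / price

# Prices in thousands of dollars; the perception scores are assumptions.
s_class = perceived_value(quality=10, benefits=10, price=130)  # S550 at $130k
c_class = perceived_value(quality=8, benefits=8, price=30)     # C-Class at $30k
print(round(s_class, 2), round(c_class, 2))
```

Under these invented numbers the cheaper car can score higher on value per dollar, which is one way to read the point that the C-Class buyer still perceives very high value.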
So selective is going to be distribution at fewer stores, in fewer channels. Intensive is basically we're going to try and sell our product in every channel, every retailer, and that means that we're going to have to allocate our sales force accordingly. Selective is fewer channels, fewer retailers. And exclusive is going to be a limited distribution. Last time somebody had mentioned as an example the distribution strategy for the iPhone, which was what? That they had an exclusive with AT&T. Questions? Sometimes we'll give an exclusive for a certain period of time. So for example, sometimes we'll introduce a product in the department store channel, which we know sells products at a higher price, and sell it there for, let's say, a year, and then open up distribution to other retailers, to mass merchants and specialty stores. So the exclusive might be for a limited period of time. What's the logic behind doing that? Yeah, but they're going to have to lower the price, right? But the new product is coming. Yeah, so they understand the adoption curve model. They know that the innovators are going to buy the product at a high price at department stores. Once they sell to that group of consumers, then the question is, how do we then sell to the early adopters? How do we then sell to the early majority? Well, then we expand the channels of distribution. Then we start to sell it in specialty stores, then we start to sell it in mass merchants, then we give it to Walmart, Kmart, Target, Kohl's. So there's got to be an alignment between our strategy and the channels in which we're trying to sell. Does that make sense? So you see why we would do that. That ties back to what we talked about, the rate of adoption. How do we accelerate the rate of adoption? Well, we said one of the things we could do is lower the price. Department stores can lower their price, but if our pricing strategy is a skimming strategy, and our intention is to deliberately lower the price over time in a planned way, then that means that we need to expand distribution into other channels that sell products at a lower price. It doesn't mean it's the identical product. Maybe it's a modified version of the product; very often it would be, because you don't want to embarrass your department store retailers by having the same product selling at Walmart for half the price. What do you think about selling a $50,000 Rolex at Target? No? You think that's a good idea? No. How about a $115,000 Breitling? Breitling is a nice watch. At Target? I'm just checking, checking in. Questions? All right, so let's shift gears, if you will, and talk about integrated marketing communication, and yes, we're going to talk about advertising. All of these are part of marketing communications. When we talk about advertising, we have to decide what medium we are going to use to communicate, what vehicle. So what are the choices for us? Advertising, we're all so enamored with advertising. Where are we going to advertise? What are the choices? TV, radio, newspapers. TV and radio, those two are known as broadcast, those are broadcast mediums. So we have TV, radio, and then newspapers. Okay, wait, I'm still on newspapers. Now what? The internet, yes. What else? Billboards. The internet, let's see. Billboards are a part of outdoor advertising. The bus? So billboards and transit advertising, yeah, definitely on the side of buses, on trains, all of those are considered to be outdoor advertising. Yeah, word of mouth. We're going to try to create some word of mouth. The question is, if that's our objective, how are we going to achieve that? Is it going to be through advertising? Is it going to be through sales promotion? Publicity? So sure, we're going to try to create some word of mouth. Maybe it's part of promotions, like for example, maybe we'll give free samples to try to create some buzz, to try to create some word of mouth. Let's see, what about street furniture? Do you know what street furniture is? This is also a type of outdoor advertising. Like when you see the billboards at the bus station? Right, so billboards could be on the top of buildings, they could be on the side of buildings, train stations, yeah, we could have posters there, definitely. What else? All right, so street furniture is, for example, bus shelters, benches. You're so used to seeing it that you probably block it out. Remember we talked about perceptual vigilance, perceptual defense? But think about it: there's a bench, you're walking on the street, there's a bench. Well, that's furniture. And the newspaper stands which proliferate in many cities are a type of street furniture. Why is this important? Why are we talking about street furniture, benches? Because on those benches, well, that's kind of interesting, oh, the city put a bench here, okay, but there's a sign on it that says call 1-800-LAWYER. That's advertising. So what we're talking about here is that all of these things are options for us. When we're going to advertise, we have to select the vehicle for communicating our messaging. How are we going to reach our target audience? Remember, our target audience is who we want to try to reach with our advertising, and the target market is who we want to buy our product. So when we advertise, very often the target audience is a subset of the target market. So if we're selling this $100 bottle of perfume to women, let's say it's a self-purchase, a purchase can be either self or gift, then our target market is women, but our target audience for this advertising campaign might be women that are between 18 and 29. See, our target market is all women from 18 to 90. That's our target market, that's who we want to buy our perfume: all women in that age range. So it's a
very broad range, but our target audience is going to be narrow. Why would we do that? Why does that make sense? How is that going to impact our advertisement? It's not going to be the same commercial. What do you mean? We make a commercial, and the production might be a million dollars to produce the commercial. Why don't we just show it all day, all night, every channel? Why? Well, that's true, you don't have an unlimited budget. What's the expectation? Let's say you're selling a product to women, and I know this is not really sexist, but let's say it's a cleaning product. You're probably going to show it between certain hours, because maybe during soap operas, chances are you're going to see commercials for cleaning products more than during sports, right? It's more common, because you would know that most likely your commercial would be more effective, your ad would be more effective. So what we're talking about here is reach. We're going to select the medium, and if we decide it's going to be TV, then we're going to select specific dayparts, that's what Gana is talking about, dayparts, and specific programming. So it might be in the afternoon during General Hospital, because you expect that that's when you can reach your target audience. So it's got to be very deliberate. Now, mind you, I want to emphasize again that as part of marketing communications it's not just advertising. We're going to continue to talk about advertising, but it's only one component of our communications plan. Still, it can be very important, and when we advertise, very often we use multiple vehicles, multiple mediums. So we're going to advertise on TV, we're going to advertise on radio, we're going to advertise in magazines, we're going to advertise in newspapers. And by the way, if we advertise in a magazine, if we decide to use that medium, that type of media, we don't just advertise in one magazine. Very often, to achieve a high level of reach, we advertise in ten or more magazines to get the highest level of coverage, because maybe everybody who reads that one magazine, although they're women, maybe they're not between the ages that we're trying to reach. But in terms of developing the commercial, why would it matter? Why wouldn't we just run the same commercial all the time? Change. Yeah, you're going to change the visual, you're going to change the messaging, so that it's relevant to the target audience. So if you want to sell to college students, or students that recently graduated from college, then who do you expect to see in the commercial? What's going to resonate with your target audience? Is it going to be somebody who's 65 years old? So for you ladies here taking this college course, if you see a commercial for a high-end perfume for $100, and the person in the commercial is a woman that's at least 65, maybe 70 years old, is that going to make you want to buy that perfume or not? I mean, it would be even worse if it was me trying to sell it, right? But let's just say it was actually a woman, but the woman is like 70 years old. Milan, what do you think? So we need to customize our commercials, we need to customize the print ads, for example. So reach is an important part of advertising, and frequency. Frequency is how many times, how often on average, will our target audience be exposed to the ad. In terms of how media is priced, the greater the reach, the more expensive, and the greater the frequency, the more expensive. So for example, it's much more expensive to run a commercial during the Super Bowl, when you could usually reach 200 million people, than it is at any other time, because the reach is so high. And with magazines, we talk about circulation. What's the level of circulation? For example, some magazines have a circulation of 1 million, some magazines have a circulation of 5 million, some magazines have a circulation of 8 million, and that's a realistic number in terms of circulation. So for magazines, when we think about the level of reach, you never talk about reaching 200 million people with a given magazine, with one magazine. There's no magazine that has that kind of circulation. So a magazine that has a circulation of 8 million, like Better Homes and Gardens for example, is considered to have a very high level of reach. 8 million in the world of magazines is considered to be a very high level of circulation, a very high level of reach. We need to be able to compare these different alternatives. So for example, remember I said that as part of integrated marketing communication we're going to advertise, we're going to have sales promotions, we're going to have publicity. What's the difference between publicity and advertising, or are they the same? What do you think? Are advertising and publicity the same thing? No, it's not the same. Okay, so what's the difference? And bad publicity, it's not done by you. Just because it's publicity doesn't mean that it's good, and sometimes it's not your opinion or what you feel about the product; it's someone else's opinion, what they feel about the product. Absolutely. So advertising is a paid form of communication, from you, from us. We determine the messages, we determine what's being said about our product. Publicity is free, and it's somebody's opinion, somebody who wrote, let's say, an article, or is doing a story about the category or about our brand or our product. Now, although it's free, and it sounds too good to be true, what's the risk, what's the concern? It may not always be good. What was that? It may not always be good, yeah. You have no control over what they say. They may say that your product is no good. Like you said, that's just their opinion. They're writing a story, they're writing an article.
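Looping back to the circulation figures a moment ago: one standard way media buyers compare magazine alternatives, although the lecture doesn't name it here, is cost per thousand (CPM), the cost to reach a thousand readers. The circulations below echo the lecture's examples; the page costs are invented assumptions for illustration.

```python
# CPM lets you compare a small-circulation magazine against a large one on
# an apples-to-apples basis. Page costs here are made-up numbers.

def cpm(page_cost, circulation):
    """Cost per thousand readers for one ad insertion."""
    return page_cost * 1000 / circulation

small_mag = cpm(page_cost=40_000, circulation=1_000_000)   # 40.0
large_mag = cpm(page_cost=240_000, circulation=8_000_000)  # 30.0
print(small_mag, large_mag)
```

Even though the 8-million-circulation book costs far more per page, its cost per thousand readers can be lower, which is the kind of comparison "we need to be able to compare these different alternatives" is pointing at.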
So publicity is an unpaid form of communication. The concern is that we have no control over what's being said. Now, publicity can be very effective, but as we said, it may not always be positive. We need to decide, let's say for an ad, between two publications. First of all, we have to decide, what are the key elements of a print ad? First is the headline. So how do we design a print ad? Well, we need to have a headline: lose 20 pounds in two weeks. What's the purpose of the headline? Yeah, to get people's attention. You've got to get people's attention. We want to create interest, desire, and action. What's the action? They're going to buy our product, or in direct response marketing, we get people to call and request a free sample or a catalog. Why is that effective? Why is that important? So we have a headline, and then we have an image, and then we have the body copy, and importantly, what else? What's the most important thing in advertising, the single most important thing when we advertise? The logo. Yeah, we've got to have our logo. Our logo has got to be there, and if we have a symbol, we should include that as well. Let's see, I'll make it easier on myself, I'll do the new Pepsi logo, and there are symbols. Absolutely. I don't care what the headline is, I don't care what the image is, whether it's a photo or a drawing. You could use stock photography, so you don't have to take the photo yourself; as part of production, sometimes we go into a studio and take pictures of whatever it is, the shoes, the jeans. We must show the logo, because nothing else matters. It doesn't matter what your headline is, it doesn't matter if your image creates stopping power. When we advertise, for example, in a magazine, how do people read magazines? You keep turning the pages, and then you see something that gets you to stop and read the ad. It could be the headline, it could be the image. We need to create stopping power. Once we do that, we need to communicate our brand. That's essential. Otherwise, what are we doing? If it wasn't for branding, well, there would be no advertising, because what would you talk about? Remember we said that the brand is what differentiates one product from another. All products in the category, whether it's soda or cars, provide the same functionality: they either provide transportation or they quench your thirst. Whatever the functionality is, whatever the benefit is, what makes one unique from the other is the brand. So it's critical. Any type of marketing communications that doesn't have the company's logo is a huge mistake. We need to have a brand champion in the organization, because if they remember the headline but they don't remember our brand, then we're wasting our money. It's essential that we create an association between that headline, the image, and our logo. Because remember, this is not something that's one-off. We're thinking about the entire touch point map, the entire experience, from the visual theme to the layout, every point of contact that our customer has with our organization, whether it's online or in the magazine or on the TV. All of those things have got to be communicating the same identity, the same brand identity, the same look and feel, the same personality for our brand, the same brand image. So branding is extremely important. It shouldn't be just off in the corner, like, okay, this is our logo over here. Why bother? That's ridiculous. Look at any ad: very often one of the most prominent things on the page is going to be the logo. Remember what we said the value of the Coke brand is? About $70 billion. That's why we're advertising. What's the basic objective? You don't even need to really write this in your advertising brief or your advertising plan or your marketing communications plan, because it's so basic: it is to create brand awareness. Now, you could have other objectives, like to reposition your brand, to position your brand as fun or contemporary or innovative. Those could be objectives for your communications plan, that could be a reason why you're advertising, but the most basic thing is brand awareness. You have to create brand awareness, brand recognition, brand recall. That's got to be a key takeaway from all of our marketing communications. Whether it's part of sales promotion or direct mail, the emphasis is on the brand. So if you develop a print ad and don't include the logo, or the logo is not prominent or not visible, then we wasted our money. That's got to be the key takeaway. After they process the messaging, remember, we have a certain message that we're trying to communicate, and that message gets encoded, basically. Our messaging is encoded in that ad. The person that sees the ad, our target audience, is going to decode that message, and when they decode the message, what's got to be the takeaway as part of the communication process? They learn the message and they associate that message, or that image, with our brand: they recall the brand, they recognize the brand. And our brand name, how do we communicate our brand name? Well, we have a logo. That's the graphic representation of our brand name. Questions?
Marketing Basics, Professor Myles Bassell (Lecture 17 of 20)

All right, so today we're going to continue our conversation about price, and let's briefly recap some of the things we talked about last time as it relates to price. We said that in order for us to set a price, to determine a price for our product or service, we have to understand our costs. Now remember, we said that our pricing objective is not always going to be profit, but in general we talked about the profit equation, and we said that our total revenue minus our total cost is our profit. As business people and as marketers, it's reasonable to assume that we want to make a profit. But we said there are some other pricing objectives that we might have besides profit. What might be some of our other pricing objectives? Market share might be a pricing objective. Social responsibility, remember, we said social responsibility could be our pricing objective, which means that our pricing objective might be to make our product available to low-income individuals, whether we're going to make a profit or not. We gave that example about prescription medication. So our pricing objective for a particular, let's say, heart medication might be social responsibility, and instead of charging $150 for one month's supply of medication, we might charge only $15 a month. And he said, but wait a minute, coach, we're not going to be able to make any money at $15, right? But it's our responsibility to set the price, because we could set the price at either $15 or $115 or $215, and in that case we decided that we would set the price at $15, because we're not trying to make a profit. That's not our objective. Our objective in this case is to do something socially responsible, which is to provide heart medication for low-income families or low-income individuals. We said our pricing objective might be sales.
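The profit equation from the recap can be written out and applied to the heart-medication example. The $150 and $15 prices come from the lecture; the unit count and total cost below are made-up assumptions so the arithmetic has something to work with.

```python
# Total revenue minus total cost is profit; same equation as the recap.
# Units and total cost are illustrative assumptions, not lecture figures.

def profit(price, units, total_cost):
    """Profit = total revenue (price * units) minus total cost."""
    return price * units - total_cost

print(profit(150, 1000, 20_000))  # profit objective: 130000
print(profit(15, 1000, 20_000))   # social-responsibility price: -5000 (a loss)
```

The point of the example: at $15 the product runs at a loss under these assumed costs, which is exactly why the pricing objective there is social responsibility rather than profit.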
could be dollar sales it could be unit sales what else anything else well we talked about competition in the context of market share so market share what market share tells us when we look at market share numbers whether it's our share of dollar sales in the category or unit sales so we talk about unit sales we're talking about the number of units of the product that are sold in that category it's relative to competitors that's why that was so insightful when we talked about about how much orange juice our company was selling and then compared that with how much orange juice was being sold in the entire category because remember we said in our example we were selling 50,000 gallons of orange juice and our competitors were selling 450,000 gallons and so that means the total category is 500,000 gallons of orange juice in our example that means that our market share is 10% % relative to the competition so yeah that's a good point to clarify that one of the reasons that market share is so relevant and so helpful to us in managing a business is because it tells us how we're performing relative to other companies in the category so those are some of the pricing objectives that we talked about and we said that we need to understand our course we talked about course classifications and course Behavior you remember some of the course classifications we said that course can be manufacturing course which we refer to as product course or they could be non-manufacturing course which we refer to as period course that's known as course classification so we're classifying our CS we have a variety of Cs then we said we need to understand course Behavior what is course Behavior well we said for example the course that we already classified those manufacturing course or non-manufacturing course can be either fixed or variable so if we say that the cost is fixed we're talking about the behavior behavior of that cost the behavior of the cost means that if it's fixed that the course is 
not going to vary with the production volume so in other words when we produce more units our course our fixed course are not going to change within a relevant range so costs that are fixed don't don't change with the production volume does that make sense we still need to capture those costs we still need to measure them because they are going to impact the profitability of the organization very often fixed costs are very substantial could be billions of dollars some of our fixed costs but we also want to understand our VAR course those course that are going to vary with the production volume so the more units that we produce the greater our total variable costs so we have total cost but we also have and total revenue we said from our total revenue we subtract our total cost and that's our profit but sometimes we talk about unit costs right the cost per unit so we need to understand the quantity the quantity because if we have the unit cost just want to make sure that everybody everybody gets some if we have if we know what the unit costs are right so that's most likely what we're starting with is the unit course and we're trying to find out what the total cost is so we have to multiply the unit cost by the quantity so remember last time we talked about for example if we said to make the product it might require let's say plastic right we said and the plastic is going to cost 50 cents per unit so we need to multiply the 50 Cents by the quantity to find what the total cost is so the more units that we produce the more plastic that we're going to use understanding our cost is going to help us determine our price so you can see why that's so important to understand our course and a lot of um accounting a lot of time in accounting is spent trying to understand what our course are in our operations whether they're fixed or they're variable we need to know what our course are and you know something it's not that easy when you're actually running for example a 
It's not so easy to understand what your costs are. Why? Because some of our costs are direct and some are indirect. Direct costs are costs that we can easily trace back to the product; indirect costs are costs that are difficult to trace back. You might say, what? Well, think about a direct cost: we know how much plastic we're using, we know how much plastic we're buying, and we can easily trace that back to the manufacturing of 50,000 of these items made of plastic. But what about, for example, the plant manager? Let's say Shantel is the plant manager and we pay Shantel $250,000 a year to manage the plant. Is that enough? No, it's not enough. But okay, Josh told me he would do it for $200,000, so now I have options. The question, though, is how do we keep track of that cost? Because in a given manufacturing facility we have items, we have product lines. What's a product line? A product line is a group of items. And we have a product mix, which is what? A group of product lines. In other words, we're making more than one product in our manufacturing facility. So the question is, how do we allocate the cost of the plant manager? Don't we need to take that into account if we're going to determine our cost? It's not just the cost of the plastic to make the product, and the little computer chips or screws or whatever we use to make the product; we have other expenses, other costs, like the salary we pay the plant manager. That would be an example of an indirect cost. Why is it indirect? Because it's difficult to trace that cost back to a specific product, to a specific item. So say we have a manufacturing facility that makes shampoo, and I know you guys always laugh because you say, what does this guy know about shampoo? But you have shampoo that's for oily hair, for dry hair, for colored hair, right?
If you want to change the color of your hair from brown to blonde, you use a different shampoo, because you don't want the shampoo to wash out the color, is that right? So there are different types of shampoo; shampoo for people with dandruff. Well, if we're making all those different items, and that's part of our product line, then we're going to need to assign costs to each one of those items in the product line. So what's the key takeaway? We need to classify our costs. Our costs are either going to be manufacturing or non-manufacturing costs, so they're going to be either product costs or period costs. Those are some key classifications; I'm not even going to get into conversion costs and prime costs, which are also types of classifications. And we talked about cost behavior: we need to understand how the cost is going to change or not change, so we need to understand whether it's fixed or variable. Why? Because we need to make a decision about the price, and how are we going to decide on the price if we don't know what impact that price level is going to have relative to our costs? One of the good things about fixed costs, although they're often very substantial, is that no matter how many units you make, within a relevant range the cost doesn't change. Isn't that helpful, to know that your property tax, which might be $500,000, is not going to change whether you make one unit or 1 million units? Because in order for us to determine the price, we need to understand how our costs are going to change if we produce more units, and we need to understand at what point we're going to make a profit. What is the break-even point? We need to understand the break-even volume: that's the point where total revenue equals total cost. Why do you think that's helpful? Break-even volume analysis is very important in our decision regarding price. So we need to determine the break-even point.
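One simple way to handle the plant-manager question above is to allocate the indirect cost across the items in proportion to units produced. This is only one possible allocation base, and the per-variant unit figures below are invented for illustration:

```python
# Allocate Shantel's $250,000 salary (an indirect cost) across shampoo variants
# in proportion to the units each variant produces. Unit counts are assumed.
salary = 250_000
units_by_item = {"oily": 40_000, "dry": 30_000, "colored": 20_000, "dandruff": 10_000}

total_units = sum(units_by_item.values())
allocated = {item: salary * units / total_units
             for item, units in units_by_item.items()}

print(allocated["oily"])  # 100000.0 -- 40% of the units carry 40% of the salary
```

Managerial accounting offers other bases (machine hours, labor hours, activity-based costing); units is just the easiest to see.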
At what point does total revenue equal total cost? Remember, we haven't said we've made a profit yet; the break-even point is just the point at which, at a certain quantity and a certain price, total revenue equals total cost. Before we even go into production, we can do a break-even volume analysis, and it's going to help us make a decision about how much we should charge. We'd know beforehand whether or not it's going to be possible for us to achieve, say, a 10% market share. Because let's say it's a brand extension; we want to extend our brand into a new category. Well, we have to ask ourselves, how likely is it that we're going to be successful? Would we be able to get even 1% market share, never mind 10%, in that category? What about this? And everybody gets this; I just want to make sure that nobody misses out this time. Last time I got some complaints that everybody didn't get Snickers and Milky Way, so I just want to make sure. Information is power, but only when it's put to use. So it's critical, as part of our discussion of price, to understand what our costs are. And we said that some markets are elastic and some markets are inelastic, and some are what we refer to as unitary, when we're talking about price elasticity of demand. That means some markets are price sensitive. Remember, we're looking at our profit equation: profit is total revenue minus total cost. So we need to know what's going to happen to our total revenue if we reduce the price, and that means we need to know whether or not the market in which we operate is price sensitive. A lot of consumer markets are price sensitive. Think about the sales that go on in Macy's. The reason they have a sale, a one-day sale, literally almost every day, is that they believe that at a lower price, demand is going to go up. So we looked at the demand curve. What the demand curve says is that at this price
this is the quantity, and when we lower the price, the quantity is going to go up. That's the demand curve. What are some of the pricing constraints? We talked about the pricing objectives, we talked about costs, the different types of costs, and how costs behave. What are some pricing constraints, some of the things that are going to impact the price we charge? Consumer income, yes, the income of consumers. If we're in a recession, that's going to impact our price; that's a pricing constraint. Are we in a recession or not? That's going to affect the price we charge for our product or service. What about the price that competitors charge for comparable products? That could have an impact, and in some cases that's part of our pricing approach: we charge a price that's either equal to our competitor's price, or below, or above what they charge. What about the price of other items in our product line? The price of other items in our product line is going to affect the price we can charge for this product. Do you see why? Because if you've already established a certain price for one item, with certain features and certain benefits, and now you come out with an item that has fewer features, fewer benefits, lower quality, then the two products side by side have got to have an appropriate perceived value. That means you can't charge more for it than for an item you're currently selling as part of your product line. You already have a product that's at $100; now you come out with a version that has fewer features, fewer benefits, lower quality; you can't sell that for $125. We've painted ourselves into a corner, so to speak. That's a cliche, I don't know if you guys are familiar with it. Does that make sense? In other words, when we have a product line, the items in the product line are priced relative to each other. So we have a
product in a product line that's $9.99, and then the next item is $14.99, and then the next item is $19.99, and then the next item is $24.99, based on the value that's being offered. So why is each item $5 more? Well, maybe because each of those items has five more gigabytes of storage. You want five more gigabytes of storage? Okay, that's $5 more. You can buy one that has 10 GB for $9.99; you want one with 15 GB, that's $14.99; you want one with 20 GB, that's $19.99. So that's going to be a constraint when we determine the price: the prices of the items in the product line have to be in relationship to each other. And what I just described to you is actually a pricing approach. That idea of $9.99, $14.99, $19.99 is known as odd-even pricing. The belief is that consumers are going to perceive that $19.99 is less than $20, that $24.99 is less than $25. Walmart uses both odd and even pricing; they have products that are 47 cents and products that are $9.48, $8.78, $14.97. It's that off-dollar pricing, off the full dollar amount. We also refer to those as magic price points. Even in your own shopping experience you've probably seen quite a bit of pricing at those price points; they're considered magic price points at retail, and it's also known as the pricing approach described as odd-even pricing. What do you think, does it impact your decision? If it's $19.99, do you subconsciously think it's less than $20? If it's $20, does it seem like a lot more than $19.99? What do you think? And then there's tax, too, right? Some people said no, some people said yes, and I think when we talk about it in this setting we're being rational, but when we're shopping maybe we're a little bit irrational, and even if it's not conscious, subconsciously it seems a little bit less expensive. And it is a little bit less expensive, by one penny, but research has shown that it impacts consumers' shopping.
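The storage-tier ladder above is regular enough to express as a tiny pricing function. The base price and $5-per-5-GB step come straight from the lecture's example; the function name is just illustrative:

```python
# Product-line ladder from the lecture: each extra 5 GB adds $5,
# and every tier lands on an odd ".99" magic price point.
def tier_price(gigabytes, base_gb=10, base_price=9.99, step_price=5.00):
    steps = (gigabytes - base_gb) // 5
    return round(base_price + steps * step_price, 2)

print(tier_price(10))  # 9.99
print(tier_price(15))  # 14.99
print(tier_price(20))  # 19.99
```

Because every tier ends in .99, the ladder simultaneously enforces the relative-pricing constraint within the line and the odd-even psychology at each price point.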
Another thing that impacts consumer shopping is rebates. Even though you're going to pay the full price when you go to the register, the availability of, let's say, a $200 rebate impacts the consumer's purchase, because we all intend to send in the rebate, but very often customers don't send it in. So it's an effective strategy, because in people's minds they're purchasing based on that $200-lower price point, but very often, even for $200, it's common that people don't submit their rebate. What about the cost of marketing and marketing communications? Isn't that going to impact our price? That's one of the non-manufacturing costs, one of the period costs we talk about in managerial accounting. People are so obsessed, they keep talking about how much it costs; what is the FOB price of, let's say, a hoodie that P Diddy is selling? You guys know who P Diddy is? Yeah. People making those remarks don't understand what we've been talking about here today, never mind what a certified public accountant deals with, because that's just the price of the item. That's not the cost; that's just the price without shipping. We need to know the landed cost, the total cost. Because if you ship a product from China, you ship it in a 40-foot steel container. All these hoodies are boxed up in cartons, and the cartons are loaded into those 40-foot steel containers. You have to pay to rent the containers, and the price ranges, depending on availability, from $2,500 to approximately $5,000. That cost has to be absorbed by the items. Now, $5,000 may not sound like a lot, but how many cartons can you fit in there, and how many hoodies are we actually shipping for that $5,000? And what if we're shipping 50 containers? So it's misleading when people say, I
don't understand how they're selling hoodies for $100 when it only costs $5 to make in China. Well, that's the price, the item cost, but you have to pay to ship it. You have to pay Shantel in headquarters, because now that we decided to outsource, instead of firing her, like Donald Trump says, you're fired, we gave her a job in headquarters; we still have to pay her. We have to pay our accountants, we have to pay our marketing team, we have to pay for advertising; we're running ads during the Super Bowl. So we have all these other costs, all these other expenses, that need to be considered, and that's going to be a constraint that impacts the price we charge the customer. It's not just about the cost to produce the product. Yes, we need to know the product costs, the manufacturing costs, but we also need to know the non-manufacturing costs, the period costs (we use those terms interchangeably): how much are we paying our advertising agency, how much are we paying the public relations firm, how much is it costing us to develop and implement promotions. All of those things we need to consider, because they're going to impact our price. And what about the stage of the product life cycle? That's another constraint that's going to impact the price we charge. Is the product new? Based on that, we can talk about some different pricing approaches: demand-oriented approaches, cost-oriented, profit-oriented, and competition-oriented. We've mentioned some of the demand-oriented pricing approaches a number of times. As we think about the price elasticity of demand, you might ask, what does that have to do with anything? Okay, it's interesting that there's some price sensitivity in the market, or that there isn't price sensitivity in the market.
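Before moving on, the landed-cost point from the hoodie example can be made concrete. The container cost is within the lecture's $2,500–$5,000 range; the hoodies-per-container count is an assumption:

```python
# Landed cost sketch: FOB item cost plus this item's share of the freight.
fob_cost = 5.00          # "it only costs $5 to make in China" -- the FOB price
container_cost = 5_000   # renting one 40-foot container (lecture: $2,500-$5,000)
hoodies_per_container = 10_000  # assumed carton math

freight_per_hoodie = container_cost / hoodies_per_container
landed_cost = fob_cost + freight_per_hoodie

print(landed_cost)  # 5.5
```

Freight adds only 50 cents per hoodie here, which is the lecturer's point: the gap between $5.50 landed and a $100 retail price is what has to absorb headquarters salaries, accountants, marketing, and profit.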
That's meaningful to us because it's going to help us decide what our pricing approach is going to be. Is it going to be a demand-oriented pricing approach? We have our demand curve, we understand the price elasticity of demand, and that being said, we're going to have to decide whether or not our pricing approach is going to be demand-oriented. Is this information going to determine our pricing approach, and specifically, is it going to result in us selecting a demand-oriented pricing approach? One demand-oriented approach that we've mentioned a couple of times this semester is penetration pricing. Why is penetration pricing a demand-oriented pricing approach? Because we said that if we believe the market is price sensitive, then maybe we should implement a penetration pricing strategy, which means that we introduce a product at a low price so that we can penetrate the category, so that we can accelerate the rate of adoption, the rate at which we move through the adoption curve model. But there are some challenges associated with penetration pricing. What do you think one of the challenges is? If we introduce it at a low price, that means we're going to have a very high level of demand, assuming we did our market research and we know that our product is going to meet an unmet need. At a low price, the demand might be greater than our production capacity. That means that if our pricing approach is going to be a penetration pricing approach, we need to make sure that we can meet demand, and that means we're going to have to invest a lot more in manufacturing equipment and production capability. Because if indeed we do see demand for 10 million units within the first two weeks of our launch, then either we have the product or we don't.
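That capacity check can be sketched numerically. All the figures here are assumed for illustration except the 10-million-unit, two-week launch window from the lecture:

```python
# Penetration-pricing risk check: can production keep up with launch demand?
forecast_demand = 10_000_000   # units expected in the launch window (lecture's figure)
weekly_capacity = 400_000      # units we can actually build per week (assumed)
launch_window_weeks = 2

can_supply = weekly_capacity * launch_window_weeks
print(can_supply >= forecast_demand)  # False -- expect stockouts unless we pre-build
```

A `False` here is exactly the planning signal the lecture describes: either invest in more capacity, start production months ahead, or rethink the low launch price.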
But to make that kind of quantity means we would have had to start making the product months before; depending on the product, it could be a year before. So we have to do a lot of strategic planning. It's not that we just wake up one day and say, all right, this weekend we're going to make the rollout quantity. These things have ideally been very carefully thought out beforehand as part of our marketing plan, and we set a price at which we expect to see a certain level of demand that, ideally, we can satisfy. It's bad for us to introduce a product and then have stockouts at retail, which means there's no product on the shelf. In some categories that's considered somewhat acceptable, in that it creates some level of hype around the product, but in most categories we spend millions of dollars to get consumers to leave their homes, even walk out of marketing classes, to go to Target, Walmart, Best Buy to get a product, and then find out it's not there. So what are they going to do, keep checking back every day? Well, if it's a high-involvement product, maybe they might, but most of the time, probably not. That means we're going to have to keep advertising and keep spending more money. So now there's another aspect we need to consider: if we do make an investment in a significant amount of production capacity, we're taking a big risk. You might think, well, we're going to sell a lot of units. You might sell a lot of units, but what happens if you don't? What happens if, the day before our product is available, our competitors leapfrog us and come out with something that's more innovative, that has more features and more benefits, and of course at a lower price than us? So there's a risk to building this massive production capacity to support a penetration pricing strategy. That's the challenge the American auto industry faced in the 1970s. None of you were born then; in the 1970s I
wasn't born then either. The Big Three, General Motors, Chrysler, Ford, were the market share leaders; Toyota didn't have a significant foothold in the US car industry. They were the market share leaders, and they had this huge capacity to produce millions and millions of cars. And what happened? Competitors came to the market, and each year the Big Three produced fewer and fewer cars, and that fixed cost had to be absorbed by fewer and fewer units. That means the cost per car was actually going up, and that's what put a stranglehold on the automotive industry in the United States: their fixed costs. Here's an example where fixed costs are a bad thing. You have these huge fixed costs. Yes, I understand the costs are fixed, they don't vary with the production quantity, and that's great when your production quantity is increasing. But what about when your production quantity is decreasing and your fixed costs don't change? You produce a million fewer cars, two million fewer, three million, five million fewer, and your fixed costs have not gone down. How could you be profitable? So penetration pricing is an option, but for the reasons we just discussed, it may not be the best approach. Now, what would be an alternative? In today's economy there is an alternative that could help us mitigate the risk associated with penetration pricing. Pre-orders? Yeah, actually pre-orders are a good way, if you do it far enough in advance, like for example in real estate. Wait, did something I said... oh no, all right, I don't have feelings anyway, so don't worry. Professors, they don't have blood. Well, they do, it's just that it's not red, it's green, but I guess you could still consider it blood. You guys watch Vampire Diaries? That's what's up, yeah. Are they really coming out with a new Twilight? What is it, like Twilight 9 now? Twilight, you know, vampires, right? Yeah, Supernatural, okay. Yeah, Twilight is interesting.
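Setting the vampire digression aside, the fixed-cost squeeze in the auto example is easy to see numerically. The dollar figure and volumes below are purely illustrative, not actual industry data:

```python
# Fixed costs stay fixed while volume falls, so the fixed cost
# absorbed by each car rises. All figures are assumed.
fixed_costs = 2_000_000_000  # assumed annual fixed costs

for cars in (5_000_000, 3_000_000, 1_000_000):
    print(f"{cars:,} cars -> ${fixed_costs / cars:,.0f} of fixed cost per car")
```

As output volume drops from five million to one million cars, the fixed cost buried in each car jumps from $400 to $2,000, which is the "stranglehold" the lecture describes.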
You've got to get into it; the first one is really slow, but some people are really fans of that movie. Pre-sales are common; in real estate, for example, it's done very effectively. Before you spend $100 million to put up condos, why don't we try and sell a few on spec? Sell one to Crystal, sell one to Niala, sell one to Jesse, to Stephen, to Brandon, to Liza. Then once we get the money from these suckers, I mean, these investors, then we can start to build, then we can start ordering cement and bricks and things like that. So definitely that's a good approach that's used effectively in some categories. What we've seen recently in electronics isn't quite real, because when they say pre-order now, for delivery in two weeks, there's no way they can make the product that's being demanded in that time. It's just a way for them to allocate the product they have, because in that short period it's not possible to create the millions of items they would need. But what about outsourcing? Say you decide, you know something, you're right, I'm not going to tie up $50 billion in a manufacturing facility. Why don't I contract-manufacture with another company? Another company already has manufacturing facilities; why don't I make my product in their facility and then wrap my brand around the product? That's very common in a variety of categories today, not every category, but a variety of them. And in some factories they make products for competitors, and the only difference is that at the end of the assembly line they put a different logo on it. Sometimes the quality is different, sometimes it's the same; it depends on the factory and on the requirements of the marketing organization. So that could be a way to reduce the risk associated with creating this enormous capacity. But I'll give you a little insight: be cautious, because you know what some of these
manufacturing organizations do? They oversell their capacity. They understand: oh, I get it, you guys are entrepreneurs, you want to sell pots and pans, and Evan wants to sell pots and pans, and Mulan wants to sell pots and pans. So yeah, we'll take your order, we'll make pots and pans for you, and for you, and for you. The only problem is they don't have enough capacity to make pots and pans for all of those customers. So you need to have multiple sources. If you're going to use contract manufacturers, make sure you have multiple suppliers, because it's common that they will oversell, and your sales organization made a tremendous effort to get your product into Kmart, for example, and then what happens? You're all excited, you think the deal went through, and then the product doesn't show up from China, which normally takes 30 to 45 days. That's something else to consider when you're sourcing product: whether we make the product ourselves or outsource it, it's going to take at least 30 days on a ship from China. What about skimming? With skimming, we introduce the product at a high price and then lower it in a planned way over time. That's not simply the mirror image of penetration pricing. Sometimes I have students who say, well, penetration pricing is you start at a low price and then raise the price over time, and skimming is you start at a high price and then lower the price over time. Nope, that's not true. With skimming, you start at a high price to sell to those innovators, who might be only 3% of the category, and then you lower the price in a planned way over time, because we believe the market is elastic, it is price sensitive, and as we lower the price, demand is going to increase. That's different from penetration pricing. Penetration pricing is you introduce it at a low price, and that's it. You can't raise the price after that; it would be very unusual to introduce the product at a low price and then raise it. How
many of you want to buy a product today, and then the next time you go to buy the product, let's say orange juice: Mulan, you go today to buy orange juice for $2.50 and then next week it's $3.50? And they say, well, that's penetration pricing. No, that's not penetration pricing. You introduce the product at a low price, and then it's going to be very difficult to raise the price after that. Why? Because the perceived value is $2.50. That's one of the challenges in implementing promotions. We have these 50%-off sales; we lower the price, even though we call them TPRs, temporary price reductions. The fact of the matter is, we lower the price, and then next week you expect customers to pay a dollar more, or $2 more? We've again painted ourselves into that proverbial corner. How could you expect customers to pay more? So when we lower the price with a promotion, we may have outwitted ourselves, because we lowered the price to increase demand, maybe to get trial, to get those light users to use more of our product, or maybe even to get some of the non-adopters: maybe at a lower price they'll buy the product, they'll try it, and then we're hoping there will be repeat purchase. But price-oriented promotions are not always the best. It would be better, instead of lowering the price, to increase the perceived value. So give five more ounces of shampoo at the same price. Now the perceived value is: yes, the price is still $7.99, but I get a 5-ounce bonus for free. The perceived value has increased and we're going to sell more, but we didn't lower the price; the price is still $7.99. Or even buy one, get one free: we didn't lower the price, we didn't say it's $4 now, even though basically that's what it is, but we're increasing the perceived value to the customer. So often it's going to be in our best interest to give more of the product for free. What about giving less of the product for the
same price? Less of the product at the same price means that we've effectively raised the price; that's a price increase. Isn't that very common in food? Well, you ask me, do I recommend it, or does it happen? Yeah, it happens. Instead of raising the price, some organizations realize that the market is price sensitive, that if you raise it from $7.99 to $8.25, people might buy less. So what Josh is saying is, we'll keep the price the same, but we'll give, like, one less cookie in the Lunchables, one less Oreo, and nobody will notice. I'll notice, but yeah, that happens, absolutely. Let's see what we're missing. We talked about odd-even pricing. We talked a little bit about prestige pricing, which is that a product is desirable because at a high price we've established a level of prestige; it's priced at a higher level because we recognize that one of the motivations for buying the product is that it's going to provide a certain level of prestige, or it's going to boost our self-esteem. Very common with luxury items. All right, cost-oriented pricing. We actually didn't talk about this yet: markup, standard markup. What it says is that based on what it costs us, let's say if we're a retailer, we have a standard markup, which varies from category to category. In some categories the standard markup is low, like in groceries; in groceries the markup is very low, sometimes only 8% or 9%. In other categories, like home furnishings, you know, these pots and pans and so forth, the markup is 30% and 50%, which isn't even a lot, because in clothing, in apparel, the markup is very often 100% and 200%. So in terms of determining our price, we're looking at the cost and then we're marking it up a certain fixed percentage; that's called standard markup. In some cases we're focusing on profit.
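Standard markup is just price = cost times (1 + markup). A quick sketch using the lecture's ballpark category markups (the $10 cost is an assumed input):

```python
# Standard markup pricing: mark cost up by a fixed category percentage.
def marked_up_price(cost, markup):
    return round(cost * (1 + markup), 2)

print(marked_up_price(10.00, 0.09))  # 10.9  -- groceries, ~8-9% markup
print(marked_up_price(10.00, 0.50))  # 15.0  -- home furnishings, 30-50%
print(marked_up_price(10.00, 1.00))  # 20.0  -- apparel, often 100-200%
```

Note this is markup on cost; retailers sometimes quote margin on selling price instead, which is a different percentage for the same dollar amount.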
That means we set the price based on a certain level of profit we're trying to achieve. And in other cases it's based on what the competition is charging: our pricing approach is defined by the price in the marketplace for comparable products, and we can set our price equal to, above, or below what the market is currently charging. One other thing about competition, besides pricing above or below, is a concept in competition-oriented pricing known as loss leader pricing. A loss leader is when we sell a product at a very low price so we can drive foot traffic, for example, into our store. Let's see how much time we have. Oh, we've still got like two more hours, okay. So that's a loss leader: forget about making a profit; it's not about making a profit. For example, orange juice is very often a loss leader. Orange juice is not $2.50 a half gallon, but very often it's promoted at that price, because people know that a half gallon of Tropicana orange juice is $4, and so when they see it promoted at Pathmark or Publix or Key Food or Waldbaum's for $2.50, it's a loss leader. People know the price is $4, and when they see it at $2.50, they go there to buy it. Our expectation as a retailer is that they're going to buy other items as well; we're trying to drive foot traffic to our store, and a loss leader is a way to do that. By selling it at a very low price, we're getting people into our store with the expectation that they're going to buy other things that aren't on sale. All right, let's talk about break-even, because we need to know at what level we're going to break even. Remember, at break-even, total revenue equals total cost. We're going to set our price at a level, presumably, that enables us to achieve a volume that is above the break-even point, because beyond that point we're making a profit. So
the break-even point, the way we calculate it, is we take the fixed cost and divide it by the price per unit after we subtract the unit variable cost. It's not complicated. What we're trying to find out here is the break-even point: how many units do we have to sell to break even? This is extremely important to us as marketers in determining the price; we need to know whether or not we can achieve that break-even point. Again, before we even go into manufacturing, before we're sourcing and spending $50 billion to build manufacturing facilities and distribution centers, we need to know: can we sell, each year, at least this volume, the break-even volume? There's actually an example in the book, I think it was for photo frames, that says our fixed costs are $32,000. Those are the costs that are not going to vary with our production volume: our real estate tax, our premium for insurance; those are examples of fixed costs. Then it says that these picture frames sell for $120; it's a very nice picture frame, $120. That's the price we sell the picture frame for, but we have some variable costs; it tells us our variable costs are $40. Each picture frame we make has variable costs: for the crystal, the felt, the cardboard, whatever we need to make the product. So what happens? We have $120 minus $40, which is $80. Our fixed costs of $32,000 divided by $80 is what? 400 picture frames. That's our break-even point, the point where total revenue equals total cost. And if that's the case, then our profit is zero, right? We said that total revenue minus total cost is our profit; well, if those two are the same, our profit is zero. Very important: we need to know, is 400 realistic?
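The picture-frame numbers from the textbook example drop straight into the break-even formula:

```python
# Break-even volume = fixed cost / (price per unit - unit variable cost).
def break_even_units(fixed_cost, price, unit_variable_cost):
    return fixed_cost / (price - unit_variable_cost)

# The textbook picture-frame example: $32,000 fixed, $120 price, $40 variable.
print(break_even_units(32_000, 120, 40))  # 400.0
```

The $80 in the denominator is the contribution margin per frame: each sale contributes $80 toward covering the $32,000 of fixed costs, so 400 sales cover them exactly and profit is zero.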
Can we at least break even? Now we know that when we're setting our price, it has to be at a price that, at a minimum, will allow us to sell 400 units. This is going to help us decide what price we should charge. Do you see why? This tells us we're going to break even, that our total revenue and total cost will be the same and our profit will be zero, if we achieve sales of 400 units, if we sell 400 picture frames. And we can go through this analysis and keep changing the price. Remember, we do this early on in the process; it's not that we wait until we're ready to launch to do a break-even volume analysis. We could do this as early as the opportunity identification stage of new product development. Even in stage one we have to be thinking: how many coffee makers do we need to sell to break even? Now, in the United States, approximately 20 million coffee makers are sold each year. So if we do this break-even analysis and find out that our break-even volume is 25 million coffee makers, what does that tell us? Find a different brand extension opportunity. Because if we need to sell 25 million coffee makers to break even and the whole category in the United States is only 20 million, that tells us we're barking up the wrong tree. How is that going to work, unless we're assuming the category is going to continue to grow? So this is very important, and it's going to be very helpful in deciding what price we're going to charge, because we're going to go through this analysis and try to anticipate: what happens if the price was $110? What if the price was $100? What would our total revenue be then? Remember, last time we said that at a low price we're going to sell more units, but if it's an inelastic market and we lower the price 10%, demand might only increase 2%. Then the lower price is not going to result in a
demand high enough that will increase total revenue. So in elastic markets we lower the price because we believe that at a lower price, if we lower the price 10%, demand is going to increase by 20%, and therefore even at a lower price we're going to sell more units, and our total revenue is going to be higher. Are we good? Are we great? Wow, you guys are awesome. You ready to bounce? All right, so see you next time. Do good things. |
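The break-even and feasibility arithmetic from this lecture can be sketched in a few lines of code; the picture-frame numbers ($32,000 fixed cost, $120 price, $40 unit variable cost) and the coffee-maker figures (a 25-million-unit break-even volume against a roughly 20-million-unit US market) come from the lecture, while the function names are my own.

```python
def break_even_units(fixed_costs, price, unit_variable_cost):
    """Break-even volume: fixed costs divided by the contribution per unit."""
    return fixed_costs / (price - unit_variable_cost)

# Picture frames: $32,000 fixed, sold at $120, $40 variable cost per frame.
frames = break_even_units(32_000, 120, 40)
print(frames)  # 400.0 -- at 400 frames, total revenue equals total cost

def is_feasible(break_even_volume, annual_market_size):
    """Can the market even absorb the break-even volume?"""
    return break_even_volume <= annual_market_size

# Coffee makers: a 25-million-unit break-even in a 20-million-unit market.
print(is_feasible(25_000_000, 20_000_000))  # False -- barking up the wrong tree
```

At break-even, profit is exactly zero, so any candidate price can be re-checked by recomputing the contribution per unit and rerunning the same function.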
Marketing_Basics_Prof_Myles_Bassell | 16_of_20_Marketing_Basics_Prof_Myles_Bassell.txt | So today we're going to talk about price. We said that the marketing mix contains the four Ps, and one of the four Ps is price, and of course there's also place, promotion and product; those are the other three Ps. But today we're going to focus our attention on price and how that's going to impact our ability to be successful as marketers. Remember the first day we talked about value, and we said, what is value? You remember why we talked about that? Because we said that marketing, in essence, is about creating, communicating and delivering value. So from 30,000 feet, that's what marketing is, and then there's tons of information about marketing that disseminates from that, but in essence marketing is about creating, communicating and delivering value. So then we say, well, what is value? And we said that value is a function of several things: value is a function of price, quality, features, benefits; those are all aspects of value. So today we're going to talk about the pricing element. Now as business people and as marketers we're trying to make a profit, so we want our organization to be profitable. Do you agree? What do you think, is being profitable a worthwhile objective? So we need to understand — in most cases we're trying to figure out — what price we should charge in order for us to be profitable. Now that's not always the objective; we're going to talk about some of the different pricing objectives. One of them is profit; sometimes it's sales, sometimes it's market share, sometimes our objective is social responsibility. But we need to first start with understanding how to determine what our profit is. So before we could get to profit we need to determine what our total revenue is going to be at a given price, because we have a choice: we could price our product at $10, we could price our product at $20, $30, $40, $50, $60, or $29.99, $39.99. So whether we use that odd-even
pricing or a whole dollar amount, we still need to determine what price we're going to charge. So the first thing we need to do is figure out, at a given price, what our total revenue is going to be: how much sales are we going to generate at a given price? How are we going to determine that? Well, in order to calculate our total revenue, what we do is we multiply price by quantity. So does that make sense? So at a given price, let's say $20: if at $20 we sell 1,000 units, then our total revenue is $20,000. Does that make sense? Just like if you go to the store and you purchase something — for us as consumers that's our cost, right? For us it's our cost. What we're talking about here now, as a business, is the price, the price that we're going to sell our product at. So at a given price we want to know what our total revenue is going to be, what our total sales are going to be. So in order to figure that out, we know one item we sell for $20, and if we sell 1,000, then 20 times 1,000 means that we're going to have total revenue of $20,000. Is that a good way to figure out what our total sales are going to be, our total revenue? Does that make sense, or is there another way to do it? Right, we're just trying to figure out how much our sales are going to be. But it's not enough to have sales; we want to have profit. So the next thing that we need to understand is what our costs are. So this big minus sign means that from our revenue we have to subtract our costs, and we have variable costs and we have fixed costs. So remember, part of this price that we're charging, right, that's not all profit. So if we sell it at $20, part of that $20 we have to use to cover our costs. Is that right? So we need to figure out what our costs are. Now, just like we have total revenue, we have total costs. Variable costs are costs that, as the name tells us, right, are going to vary with the level of activity. So variable costs are costs
that are of course variable, but importantly, they vary with the production volume. So the more units that we produce, the more units that we sell, the greater our variable costs. So an example of variable costs would be the materials that we use to make the product; whether we make the product out of plastic or gold, whatever it is that we make the product out of, the more units that we make, the greater our variable costs, our total variable costs. Because we could also look at the variable cost per unit, which is the unit variable cost, but right now we're talking about the total cost. So in other words, the cost for the plastic might be, let's say, 50 cents per unit. So the more units that we make, the greater the variable cost. So in this case, if we make and sell 1,000 units, then how much is the total variable cost going to be? The total variable cost is going to be $500, but the unit variable cost is 50 cents. What we're focusing on here is the total cost, the total variable cost and the total fixed cost, because what we're trying to understand is how much our profit is. So if that's our total revenue, we need to subtract our total cost. So we have variable costs, right; we're looking at cost behavior. So total cost is comprised of variable cost and fixed cost. Fixed costs are those costs that are not going to change with the production volume, within a given range: if we make and sell a thousand units or 10,000 units, our fixed costs, our total fixed costs, are going to be the same. What would be an example of a fixed cost that we might have as an organization? So: rent; insurance; office space; your personnel in the back office, perhaps. We got a number of different inputs here; first up, let's talk about the first couple. So for example the rent, the rent that we're paying for our manufacturing facility, or the mortgage, or whatever it is that we're paying: that amount that we pay every month doesn't change based on the number of units that we make
and sell. That's going to be the same within a given range. Why do I say that? Because maybe when we reach 500,000 units we have to have a bigger factory, but within a given range those costs are fixed. So we could make 20,000 units; our rent is still going to be the same, our insurance is still going to be the same, our real estate tax is still going to be the same. Although we might reach a point where our insurance is going to go up. Why? Because the insurance company might feel that we've reached or exceeded a given range where they feel they have more risk, so the amount that we pay for insurance, the premium that we pay, is going to increase. But that doesn't mean that the cost is not fixed; yes, it's fixed up until a given point. So once we start making over a million units, they might say, we didn't sign up for that; you told us you were going to make 200,000 units per year, now you're making a million units per year; it's more likely that there's going to be a claim against the policy, so we're going to increase your premium, you're going to have to pay more. But other than that exception, right, other than that scenario, our costs are fixed in those examples. Now, we have some costs that are considered to be manufacturing costs, and we refer to those as product costs, and non-manufacturing costs that we refer to as period costs. So we have costs that are manufacturing costs and non-manufacturing costs; those can also be fixed or variable. Are we good? So we could have costs that are either the result of our manufacturing operations or not a direct result of our manufacturing operations. Why do I mention that? Because some of the examples that we started to get were what we call period costs, like advertising or the office space for the salespeople; those are non-manufacturing costs, those are period costs. Now you might say, well, what about electricity? Well, it depends. Why does it depend? Because if it's electricity that we use in the manufacturing facility to make the product, then
it's a product cost; but if it's the electricity in the headquarters of the company or in the sales office, that's a non-manufacturing cost. So in accounting we make that distinction, and it's an important one. So those costs can be both variable and fixed. So we're trying to understand our total cost. Why? Because we want to make a profit as an organization, so we need to know what our total revenue is and what our total costs are; when we subtract our total cost from our total revenue, that's going to tell us our profit. Questions about that? So remember, when we talk about total cost, you have to make sure to capture both the variable cost and the fixed cost. What we have here is total fixed cost and total variable cost; we're not talking about the unit variable cost. If we didn't know what the total variable cost is, the way that we would find that out is by multiplying the quantity times the unit variable cost. Questions? No. Now, there was a question on the quiz about marginal revenue and marginal cost. Who knows what that is? Why do we talk about marginal cost? When we look at cost, we want to understand how much it's going to cost us to make and sell one more unit. You see why that's important? And we need to do marginal analysis: we need to understand what it costs to make one more unit, and also what the marginal revenue is — what is the revenue that we're going to generate from selling one more unit? That's what that question on the quiz is addressing: if we sell one more unit, what is the additional cost going to be? And in order to find out the additional cost, we need to know what the variable costs are and what the fixed costs are, and in that example they tell us what the variable costs are and what the fixed costs are. The reason why we place such an emphasis on identifying the variable costs and the fixed costs is because we want to make sure that we're capturing all of our costs, whether they're manufacturing costs or non-manufacturing costs,
because ultimately we want to make a profit; generally that's one of the major pricing objectives. So we need to understand which costs are going to vary with the production volume — that's the variable costs — and which costs don't vary with the production volume — those are the fixed costs. So it's common for an organization to have profit as a pricing objective, very common, but it's not quite that simple; there's other pricing objectives, and those objectives are subject to certain constraints that we're going to talk about in a moment. And we also need to understand price elasticity of demand; we need to understand how price sensitive the market is. So a market that's elastic is going to have a price elasticity of demand that's greater than one; inelastic is less than one; and unitary is going to be equal to one. Now, so what does that mean? So an elastic market is a market that's price sensitive; it means that if we lower the price, demand is going to go up. Does that seem reasonable? For us as consumers that's very commonplace; that's why, from a marketing perspective, sales and promotions are so successful, because in those markets, whether it's clothing or food, when the price goes down, the demand increases. And we could look at that on this demand curve; that's what this demand curve shows. This downward-sloping demand curve shows that at this price, this is going to be the quantity demanded; when we lower the price from P1 to P2, then this quantity is going to be demanded. What we're assuming there is that the only thing that's changing is the price. So in this case our price is the only thing that's changing: not the price of similar products, not the income of consumers; those things are not changing; consumer tastes and preferences are not changing. So the only thing that's changing is our price; that's what the demand curve is assuming. So we refer to that as movement along the demand curve; we just changed our price. Competitors: we're assuming
our assumption, an important assumption that we need to make, is that the price of similar products is not changing. So we're assuming that the competition is not changing their price; we're assuming that income for the customers has not changed, because if income changed — let's say if income increased, then they might buy more, or if income decreased they might buy less. But here we're saying, for the sake of this model, which we refer to as the demand curve, that their income didn't change, and their preferences haven't changed, and competitors haven't changed their prices. So we're just trying to understand what happens if we change our price in an elastic market. In an elastic market, one that's price sensitive, a 1% decrease in price will result in a greater than 1% increase in demand. So if we lower price by 1%, demand will increase by greater than 1%. So isn't that what happens? I know it seems a little theoretical, but isn't this what we do in marketing all the time? So why do we lower the price 20%? Why does Macy's have these sales? Why do they lower the price of sweaters 20%? Because the expectation is that if you lower the price 20%, demand is going to increase by more than 20% and we're going to be more profitable. Isn't that what we see all the time? Look at any magazine, newspapers; we see sales all the time. Say it again? [inaudible student question about raising the price and total revenue] Well, if we raise the price, our total revenue will increase, right? But if we raise the price and our total revenue increases, in an elastic market demand is going to decline. So — you're right, we have a choice: we're going to sell the product either at $20 or at $120. Well, if we sell it at $20 — we're a little bit ahead of ourselves, but that would be what we might consider, in this scenario, a penetration pricing strategy, where we introduce the product at a low price. But we might introduce the product at a high price; we might introduce the product, instead of at $20, at $120, and
you're saying that the total revenue would be greater, which is true, but in an elastic market the demand at $20 is going to be much greater than the demand at $120. Do you agree? So what happens with the iPhone 5, right, that's their latest release? So without a contract the iPhone 5 is selling for what, $700 for the 64 GB version, and if you have a contract with AT&T it's still $400. And they expected that they were going to sell, in the first two weeks, 10 million at that price. So let's say even with a contract — in other words, if you sign a two-year contract, you can get the phone from AT&T for $400 for the one with the 64 GB memory. What about if it was $100? Do you think we would have a greater demand in this market? Yeah, I think we would agree that's the case. But the point that you're raising is at the heart of our challenge: do we charge $20 or do we charge $120? In an elastic market there's a tradeoff: total revenue, we would think, is going to be greater, but the number of units that we're going to sell is going to be less. That's what we need to decide; that's our challenge. Do we introduce the product at $20 as part of a penetration pricing approach, or do we introduce the product at $120 and then lower the price over time? We said last time that's known as skimming: we introduce the product at a high price with the expectation that there's some innovators out there, maybe it's only 3% of the category, that are going to purchase the product at that price. And we made some other key points. What? That at that high price we might attract competitors. And what else? Because remember, we said that if we're selling it at $120 — right now we said we have a choice, we could sell it at $120 or $20 — now at $120 we're trying to recoup some of our investment; we might have spent five years researching and developing this new product. But what if at $120 we attract a lot
of competitors? You don't remember? Last time we talked about the fact that, well, sure, other companies are going to be looking and saying, $120? We could make that product for $8 and sell it for 50 bucks. So that's one of the things that we need to consider if we're going to adopt a skimming strategy, a skimming approach to pricing. There has got to be demand at that price; remember, the adoption curve model suggests that the innovators could represent about 3% of the category. And also we need to consider the level of newness of the product: how new is the product, what stage in the product life cycle is it? That's going to determine our pricing approach. Is it introduction, is it growth? Skimming is very common in some categories, especially electronics, in the introduction stage of the product life cycle. Questions? [Student asks about stores running 20% off every month or two] Well, they could decide to have a promotion for a given period of time. When we talk about elastic and inelastic, it has to do with how the customer is going to respond to our pricing approach. So we said that an elastic market is one that's price sensitive: if we lower the price, demand is going to go up. But inelastic is one that's not price sensitive, which suggests that if we lower the price — let's say we lower the price 1% — then demand will increase by an amount less than 1%. Can you think of some examples of some products that you think are not price sensitive? Now keep in mind that from person to person this is going to vary; there's some products that for me, I'm not price sensitive, and maybe for you they are. But we're making a generalization, we're making an assumption. Go ahead, what is it? Water. Water. So tell us, you think that water is an inelastic market? Yeah, so if you lower the price, then you think that the demand won't increase, but if you increase the price it's still going to be the same. Oh, I see what you're
saying. So you're saying that if we increase the price of water, consumption won't decrease. What do you guys think? Okay, so I'm making a distinction now between if it's from your faucet or if it's the price of Aquafina. So what you're saying is, if we increase the price of Aquafina, you think that demand is not going to decrease. Okay, what else, any other examples? Yeah, Stephen? Gas. Gasoline, yes, this is an interesting one. So explain that to us: you think that gasoline is inelastic, the market for gasoline? Yes. All right, so tell us why you say that. Because people got to go places, need gas to do it, so, you know, commute to work, and gas is $4, and you'd be like, well, that sucks, but you buy it just like that. All right, so there's a quote from Stephen. So what Stephen is suggesting is that if you drive to work, if you have a car and you need to drive to work, then when the price of gasoline goes from $3 to $3.25 to $3.50 to $3.75 to $4 to $4.50, demand is not going to decrease, which I think is plausible in some cases. Now that's not everybody; we know definitely when gasoline hits a certain price, let's say $4.50, some people are going to say, I'm going to take the train, or I'm going to take the bus, or I'm going to carpool. But there's those that are more dependent on their car; maybe for them that's not an option. So you could argue that when the price of gasoline increases, the demand for gasoline is not going to significantly decline; in some cases that definitely could be true. Cigarettes? Yeah, so the excise tax: a common argument is they keep raising the price with the thought that at a higher price the consumption of cigarettes will decline. But it's $10 a pack now. I remember — and maybe I shouldn't admit this openly — when cigarettes were $1.50 a pack; now it's $10, like that's crazy. Have people stopped
smoking because of that? So that's one of the counterarguments against that tax. And, you know, also with the soda tax: if there's a soda tax, are kids going to stop drinking soda? No, they're just going to not get the chips instead. So if you used to get chips and the soda, now what are you going to do? Well, you're just going to get the soda and not the chips, or maybe your parents are going to have to increase your allowance. Medicine? Medicine, absolutely. So if you're taking, let's say, heart medication, or medicine for allergies, whatever it is, diabetes: if they increase the price of the medication 10% or 20%, the demand is not going to decrease. You still need your heart medication; you're not going to stop taking your medication and say, wait a minute, it's an elastic market. Well, no, I don't think so. Now, your doctor might prescribe a generic, but maybe there is no generic. Edward? On that same note, healthcare costs, other stuff, like going to the doctor, for instance surgeries. Oh, going to doctors. Yeah, I think healthcare costs would be something more elastic, because as we see there's about 50 million Americans that don't have healthcare. Why? Because the price is too high; their employer is not providing it. In fact it is so high that a lot of employers are not providing health care, and so fewer people are getting health insurance. But yeah, going to the doctor for a surgery: if the doctor says, you know, I told you it was going to be $15,000 for the surgery but now it's $20,000, what are you going to say — that's it, the deal is off? So yeah, there's certain things that we need, and we're not going to be price sensitive, so the demand is not going to increase when the price goes down. What about, for example — think about some products that we just have more than enough of, like, let's say, salt. How much salt do you need? So when they have a sale in the store and they say 50% off on
salt, you're like, I have 10 boxes in my pantry already. So different products have different levels of sensitivity to changes in price, and again it's going to vary from person to person. Joy might say, I need salt now, I'm going to buy it, 50% off, I'm going to buy; Molina says, but for me, I have enough. Yeah, and they know that too, right? So if they raise the fare, then is the number of people that are riding the trains going to decrease? Doesn't seem like it. So that would be an example of an inelastic market. Like toilet paper: everybody uses it, so if the price increases, people are still going to buy. Yeah, for some people, if the price increases, they'll continue to buy that particular product, or they might look for another item or another brand that's less expensive. Premium goods, like expensive cars and others? Yeah, so if you're going to get a Rolls-Royce and they raise the price 10%, well, how much is that going to impact demand? So before it was $300,000, now they tell you it's $325,000; so what do you do, cancel the order? You already were going to spend $300,000; what's another $25,000 for Edward, right? Because Edward's a shot caller. But again, keep in mind it's going to vary somewhat from person to person, but we could still make some generalizations about the price elasticity of demand. So we said that some markets are more price sensitive than others; that means that when we lower the price we may not see an increase, or a significant increase, in demand. In some cases the percentage decrease in price will result in an equal percentage increase in demand. Yeah? The examples that we have heard are about certain products and brands, the market, right? So does that apply to the market or to the brands? Well, we're assuming, based on the examples that we heard, that that applies to that particular market. So that's the way I interpret those suggestions: that we're talking about the market
for bottled water, or we're talking about the market for prescription medications. Would it also be for both? Because if you have a top brand in a specific market, either way, no matter how much the price increased, people would still think it's the top. Well, that's why we care. We're going from general to specific: we want to understand the price elasticity of demand for a given market. Why? Because we have a particular branded product, and so we're trying to understand and determine what happens if we charge a particular price. So our product, our branded product in that particular category, is something that we want to promote, so we want to understand: if we reduce the price 20%, is there going to be an increase in sales? Because of what you suggested, right: what if I lower the price 20% and demand doesn't increase? Oh boy, now we got a problem. Why? Absolutely, that could happen. But you had told us before about the impact on total revenue. So you said that at a higher price total revenue would be higher, but I said that demand might be lower, and at a lower demand total revenue more than likely is not going to be higher. But in this situation, what we described in an inelastic market: if we lower the price 20% and demand does not increase by at least 20%, then our total revenue is going to drop dramatically. Do you see why that is? The reason why we lower the price 20% in an elastic market is because we feel that the increase in demand is going to compensate for that, so that we're going to have revenue that's greater even at a lower price; the total revenue is actually going to increase, and our total profit is also going to increase, because even at a lower price we're going to sell that many more units. So fine, we lower the price 20%, but the number of units that we sold increased by 30%, so our total revenue is going to increase. But we need to understand: if it's an inelastic market and we just lowered the price 20%, what did we do? Why would we do that?
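The tradeoff this passage keeps returning to — a price cut raises revenue in an elastic market but sinks it in an inelastic one — can be made concrete with a small sketch. The 20%-and-2% demand responses echo the lecture's examples; the $100 baseline price and 1,000-unit baseline demand are made-up numbers for illustration.

```python
def elasticity(pct_change_demand, pct_change_price):
    """Price elasticity of demand (magnitude): % change in quantity / % change in price."""
    return abs(pct_change_demand / pct_change_price)

def total_revenue(price, quantity):
    """Total revenue is simply price times quantity."""
    return price * quantity

base = total_revenue(100, 1_000)  # assumed baseline: $100 price, 1,000 units

# Elastic market: a 10% price cut lifts demand 20% -> elasticity 2.0, revenue rises.
print(elasticity(20, -10), total_revenue(90, 1_200) > base)  # 2.0 True

# Inelastic market: the same 10% cut lifts demand only 2% -> elasticity 0.2, revenue falls.
print(elasticity(2, -10), total_revenue(90, 1_020) > base)   # 0.2 False
```

Elasticity greater than one marks the elastic case, where the extra units more than compensate for the lower price; less than one marks the inelastic case, where the same price cut only shrinks total revenue.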
Now we're selling the same product but at a lower price, and demand is the same, so we're going to be less profitable, maybe not even profitable at all. So that's why we need to understand the price elasticity of demand: how is the demand for a particular product going to change when we change the price? That's what this demand curve shows. This is just movement along the demand curve, not a shift, just movement, because we're assuming income has not changed, prices of competitors' products have not changed, and consumer preferences have not changed. Questions? What do you think, what would make sense in an inelastic market? That means we're not going to sell more units, consumption is not going to increase; would we lower the price? So competitors could try to introduce a comparable product at a lower price, and so one of the things — I don't know if we're going to get to it today, but when we talk about pricing approaches we could talk about competition, loss leaders, customary pricing, pricing above or below competitors. So in an inelastic market, even if lowering the price is not going to increase demand, we might need to do so because our competitors lowered their price. But if we could differentiate ourselves, maybe we could still charge a premium; if we still have features and benefits in our product that differentiate us from competitors, we could still charge more. It doesn't mean that every product in a given category is the same price; remember, we talked about good, better, best, and even premium pricing. So not everybody's going to be the economy brand; there might be some products in the category that sell at $20 and ours might still be able to sell at $120, because even though the generic functionality is the same, we have more features and more benefits. But you're absolutely right: if we're positioned in the same place on the perceptual map and we're going head-to-head with the competitors, and we're at $20 and they're at $20 and they lower the
price to $15, then we're going to have to decide whether or not we're going to match their price. Why do you think they would lower the price to $15? Maybe they can afford it more than you. Yeah, their cost structure might be lower than ours, so they might be able to lower the price; maybe in their production they've achieved economies of scale, and they're at a point where their costs are actually declining, or significantly decreased from where they were previously. Also, another reason that they might lower the price is they might be trying to steal our customers. So the total demand for the category may not change, but remember I said in the mature category, if we're going to see an increase, we're either going to grow the total category or we're going to steal our competitors' customers. So they might lower the price to $15 even though the demand for the category is not going to increase, but they expect that they'll be able to steal our customers. So remember, we're talking first in general about the entire category, and now we're getting to specific, right: from the category as a whole to specific products and specific brands. Flashlights: if the price of the bulbs goes down, like, okay, you know, this is the time to buy a flashlight — that can change the demand for flashlights? Oh, well, they're complementary products, products that are used together, and in business-to-business marketing we talk a lot about derived demand. So the demand for flashlights could be impacted by a decrease in the cost of the bulbs: if the price of the bulbs decreases, then the demand is going to increase, and I think what you're suggesting is that the demand for flashlights would also increase. Yeah, so those complementary products — for example, you could say tea bags and teapots. So what happens if the
price for tea decreases dramatically? Then you might see a significant increase in the demand for teapots. Now look, everybody's buying teapots. Why? Because the price of tea might have declined 50%, so people are going to drink more tea. All right, so, are we good? I didn't blow your mind? You're good, right? So you know how to calculate profit, and we understand the impact of the price elasticity of demand in a given category and for specific products and brands. Ultimately this is helpful to us because we need to decide what price we're going to charge. If we don't understand the price elasticity of demand, then how are we going to decide what price to charge? And if we don't understand our costs, then how are we going to be able to make a profit? So one of the pricing objectives, we said, is profit. Well, how could we achieve our objective if we don't know what our costs are? And if we don't know what our costs are, then we don't know what to charge, because remember, we need to set a price that's going to cover our costs. So profit is one pricing objective; now there's some others. What are some other pricing objectives? So when we're thinking about what price we're going to charge, there's an objective that we have in mind. Certainly one of the most common is that we want to set a price that's going to cover our costs so that we could be profitable. All right, that makes sense; we're just talking about profit. What else would be a pricing objective? So, dollar sales. Now, you see, then it gets a little bit more complicated, because if we want to maximize the total dollar sales, one of two things really has to happen: either we're going to have to charge a higher price, or we're going to charge a lower price but sell a lot more units. So here we're talking about the dollar sales; we can also look at the unit sales. Now you see why these two can be somewhat conflicting, the dollar sales and the unit sales. Why is that? If
our objective is to sell as many units as possible in an elastic market what does that mean so M says we're going to lower the price so if our objective is to sell a million units well we know how to do that in an elastic market so we lower the price and when we lower the price then we'd like to think that our total revenue is going to increase but maybe not as much as we would have liked and maybe we're not going to be as profitable so if we just look at unit sales we say our objective is to sell 1 million units so that's simple right just set a low price and in an elastic market a lot of people are going to buy but that doesn't mean we're going to be profitable sure at $50 how many iPhone 5S do you think we're going to sell 500 million so that being said companies do set um targets in terms of the number of units they want to sell but what I'm trying to show you is it's not so simple when we think about setting the price look at all the things that we need to consider we don't just say oh we're going to charge $20 well how do you know that $20 is the right price are we going to cover our fixed and variable costs and at $20 how many units are we going to sell why are the units important think about fixed costs why is the number of units that we sell important go ahead if you have an amount you want to sell then you have to have an idea of um so there's a total fixed cost now what happens what do we do with our total fixed cost in accounting we talk about fixed cost absorption so what that means is that the more units that we make and sell the smaller the amount each unit is going to have to absorb so what do we do with that $100,000 in fixed cost well that $100,000 each unit is going to absorb a certain amount so the more units that you produce and sell the smaller that amount per unit so instead of selling 20,000 units we sell 100,000 units then each unit is going to absorb a smaller amount of the fixed cost that's why some companies are looking for this
incremental volume even though they may not even be covering their marginal costs so some companies understand that the more units that they're producing the smaller the amount of fixed costs each unit is going to absorb so they're spreading those fixed costs over a greater number of units and in fact it's so prevalent that um countries around the world have banned companies from doing that it's known as dumping so what do they do they produce a million units and they sell it below cost in a particular market so governments they don't want that you're trying to optimize your manufacturing facility but you're going to destroy a domestic um business so dumping is illegal what else what about market share remember a few weeks ago we talked about market share so market share says what percentage of the total category is branded with our brand so what percentage of the products in a given category carry our brand name and we could look at market share in terms of dollars and also in terms of units so what percentage of the units in a given category are our brand versus competitors so remember we had talked about soda and I said or was it probably orange juice right I said you might feel happy that you sold 50,000 gallons of orange juice you say we did a good job right we did a good job we sold 50,000 gallons of orange juice but then we find out that our competitor sold 450,000 gallons of orange juice so if our competitors sold 450,000 gallons of orange juice and we sold 50,000 that means our market share is only 10% so the total market is 500,000 gallons we sold 50,000 that means our market share is 10% so one of our pricing objectives could be to achieve a market share of 10% or 20% or 25% or 28% that could be what drives our decision to set a certain price so we need to have pricing objectives that's going to help define the price that we choose so if it's just the number of units now remember when we look at market share we look at both we could
calculate the market share in terms of dollars and in terms of units so if you're looking at products in a given category some that are $20 and some that are $120 the market share in terms of dollars we could expect to be much higher for those that sell their product at $120 but in terms of units you're probably going to be surprised to see that their unit share is going to be much lower so take for example vodka so if you look at market share of vodka well of course wouldn't you think that in terms of dollars Grey Goose would have a greater percentage of the dollar market share than Smirnoff why because a bottle of Grey Goose is $35 a bottle of Smirnoff is $5 so I don't hear any objections so that means that the pricing is correct right but in terms of units at $5 they sell a lot more vodka they sell a lot more um bottles of vodka so our market share could be in terms of dollars a market share objective could be in terms of units and another objective might be as I mentioned before social responsibility so for example before you guys mentioned um prescription medication so our objective our pricing objective may not be about profit it may not be about the number of units the dollar sales the market share it might be that our pricing objective is what to make sure that all low-income users of heart medication have access to their monthly supply so what does that mean that means in other words that it's affordable so we're not talking about making a profit we want to make sure that people that have low incomes can get their medication not for $150 a month but for $15 a month so our pricing objectives how do you come up with $15 for a month supply of this heart medication are we going to be profitable are we going to um increase our market share are we going to increase the number of units sold no we set the price at an amount that is affordable for low-income households so that might be a pricing objective so think about that when
we say how do we decide what price to charge well it could be to make the most sales the most total revenue or to be profitable to maximize our profits so we're going to set it at such a price not $20 but maybe $19.50 so that we could make as much profit our total revenue as high as possible that could be what we try to do or it could be that none of that matters to us our pricing objective we're going to set the price so that everybody who needs heart medication is going to be able to afford it how about like the environment you would make a price higher locally grown and you know environment as far as social responsibility yes so that's going to increase our cost um that might be part of our um core values and if anything we're usually able to charge a premium for that by saying that our product is environmentally safe or adheres to fair trade practices ah prestige pricing which we'll talk about next time is one of the pricing approaches prestige pricing means that we charge a higher price because we believe that that enhances the perceived value so why is the product desirable because it's expensive right in part so we charge more we charge a higher price because at a higher price the product garners prestige right you really think how much more does it cost to make a Rolls-Royce the problem is that their manufacturing process has to be different than for cars that are $10,000 because of the quantity that's demanded all you're saying is like organic foods the type of pricing yeah I wouldn't say that I wouldn't describe it that way you're charging a premium you differentiate your product and you charge more for it prestige pricing has to do with the fact that like with luxury items those generally luxury items would be considered um as using a type of prestige pricing because the fact that um your Ferragamo bag or your um Prada bag is that expensive it makes it desirable all righty |
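The profit and elasticity arithmetic the lecturer walks through in this passage can be sketched in a few lines of Python. This is a toy illustration: the tea quantities and the cost figures are invented for the example; only the "price drops 50%, demand jumps" shape comes from the lecture.

```python
def price_elasticity(q_old, q_new, p_old, p_new):
    """Arc (midpoint) price elasticity of demand."""
    pct_change_q = (q_new - q_old) / ((q_new + q_old) / 2)
    pct_change_p = (p_new - p_old) / ((p_new + p_old) / 2)
    return pct_change_q / pct_change_p

def profit(price, units, variable_cost_per_unit, fixed_cost):
    """Profit = total revenue - total variable cost - total fixed cost."""
    return price * units - variable_cost_per_unit * units - fixed_cost

# Hypothetical tea example: price falls 50%, quantity demanded jumps.
e = price_elasticity(q_old=1000, q_new=2500, p_old=10, p_new=5)
print(round(e, 2))  # magnitude above 1 -> demand is elastic

# Did the lower price still cover our costs? (figures invented)
print(profit(price=5, units=2500, variable_cost_per_unit=2, fixed_cost=4000))
```

An elasticity whose magnitude exceeds 1 is what the lecturer means by an "elastic market": cutting price can grow revenue, but profit still depends on covering fixed and variable costs, which is why the two objectives can conflict.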
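The fixed cost absorption idea, with the lecture's $100,000 fixed-cost figure, reduces to one division; the 20,000- and 100,000-unit volumes are the lecturer's own example numbers.

```python
def fixed_cost_per_unit(total_fixed_cost, units_produced_and_sold):
    """Each unit 'absorbs' an equal share of the total fixed cost."""
    return total_fixed_cost / units_produced_and_sold

# Spreading $100,000 of fixed cost over more units shrinks the
# share each unit must absorb.
for units in (20_000, 100_000):
    print(units, "units ->", fixed_cost_per_unit(100_000, units), "per unit")
```

This falling per-unit figure is why incremental volume can look attractive even when marginal costs are not fully covered, and it is the mechanism behind the below-cost selling (dumping) that the lecture says governments ban.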
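The market share calculations can also be checked directly. The orange juice gallons are the lecture's numbers; for the vodka comparison only the two bottle prices ($35 and $5) come from the transcript, and the unit volumes are invented to illustrate the dollar-share versus unit-share split.

```python
def market_share_pct(our_sales, total_market_sales):
    """Market share as a percentage of the total category."""
    return our_sales / total_market_sales * 100

# Orange juice example: 50,000 gallons for us, 450,000 for the
# competitor -> a 500,000-gallon total market.
print(market_share_pct(50_000, 50_000 + 450_000))  # 10.0

# Dollar share vs. unit share (unit volumes are hypothetical).
brands = {
    "Grey Goose": {"price": 35, "units": 200_000},
    "Smirnoff":   {"price": 5,  "units": 1_000_000},
}
total_units = sum(b["units"] for b in brands.values())
total_dollars = sum(b["price"] * b["units"] for b in brands.values())
for name, b in brands.items():
    unit_share = market_share_pct(b["units"], total_units)
    dollar_share = market_share_pct(b["price"] * b["units"], total_dollars)
    print(f"{name}: {unit_share:.1f}% of units, {dollar_share:.1f}% of dollars")
```

With these assumed volumes the premium brand ends up with the larger dollar share but the much smaller unit share, which is exactly the contrast the lecturer draws.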
Marketing_Basics_Prof_Myles_Bassell | 3_of_20_Marketing_Basics_Myles_Bassell.txt | all right so marketing team how's everybody doing today good good good so today we're going to talk about a very important aspect of marketing which is segmentation and some of the related concepts so we're going to talk about segmentation we're going to talk about um market sizing targeting positioning very important concepts in marketing so I want to start our discussion by defining what is segmentation so when we say segmentation what we're talking about is dividing a market into submarkets or into segments this is um chapter nine everybody follow what I'm talking about we talk about dividing a market into submarkets we're going to take a large market and we're going to divide it into smaller segments so any given market is going to be made up of a group of segments so segmentation is dividing a market into smaller segments and then once we do that what we're going to do is quantify the size of those segments we refer to that as market sizing so right now I'm just giving you an overview we're going to get into the details but I want to give you the big picture as it relates to segmentation so we segment the market into smaller segments quantify the size of the segments and then once we've quantified the segments then what we need to do is select segments so we have to target specific segments that we want to penetrate so we want to identify segments to sell our product or service and we need to do a market analysis to understand what segments are going to be um more ideal versus other segments in the marketplace and then we need to decide on how we're going to position our brand in the market because remember we said all the products in a given category have the same generic functionality do you remember that we said for example all cars provide the same generic functionality which is transportation right transportation and
what makes one car unique from another is that each car is wrapped in a brand so we said the product is wrapped in a brand and what's compelling about creating a perceptual map is that we're able to look at where our brand is positioned on two dimensions relative to our competition so where we are positioned relative to our competition so that's an overview of what we're going to talk about today so let's um talk more specifically about segmentation when we're dividing a market into submarkets so there's certain criteria that we have when we're segmenting a market so when we're dividing a market into submarkets or sometimes we phrase it another way we talk about aggregating potential customers into groups right so that's another way to look at it um but however you could um wrap um your head around it is fine basically it means the same thing either we're dividing the market into submarkets or we're grouping potential customers um together so what we want to do ultimately is identify segments that are large now it doesn't mean that a niche cannot be something that's desirable for us as an organization but most often what we want to do is identify segments that are large and reachable so when we identify a certain segment of let's say um people who play golf well that segment can be pretty large in the United States but importantly we could also reach golfers we're able to reach them but to say for example that our segment is people with purple hair well that could be interesting and um something that we're fascinated by but is that really a segment that's reachable so golfers we know um what programs they watch we know what time they watch they all read Golf Digest for example so when we talk about reachable it means that we're able to communicate with them we're able to communicate through advertising for example that's what we mean when we say they're reachable so in other words our marketing communications plan is something that they're able
to view now if we run an ad in Golf Digest or we run an ad um a commercial during the time that um there's golf being played then um aspiring golfers as well as um maybe some um professional golfers would have the opportunity to view either our TV commercial or our billboard so that's what we mean by reach is everybody clear when we say that the segments need to be large and also reachable so you might say what does that mean reachable so we could reach them for example with advertising they have the opportunity to be exposed to our print ads to our outdoor advertising to our commercials to our radio spots if we can't reach them that's a problem you agree you see why that um creates a problem for us even though the market or the segment could be very large if we don't have a way of communicating with them then we don't have a way to create a favorable brand image to build a level of brand awareness customer relationship right develop a relationship with them so the market needs to be large reachable and also the group that we're um forming the segment from must have similar needs and wants so when we're aggregating these groups of potential customers when we're grouping them together it's got to be a group of customers or potential customers that are going to have similar needs and wants remember we talked about we said one of the key marketing activities is identifying an unmet need so we need to find out what their needs and wants are so those that have similar needs and wants so for example those that have a need or a want for a high quality golf club we group those together so they have similar needs and wants large reachable with similar needs and wants and importantly there's a fourth component um in terms of the criteria that we use in um forming segments you guys ready the fourth criterion is that they will respond in a similar way to the marketing mix what does that mean what does that mean that we say that now that we've
aggregated these groups of potential customers that one of the important criteria is that they're going to respond to the marketing mix in a similar way first of all who could tell us what is the marketing mix the four Ps the four Ps and what are they what are the four Ps well that was interesting your hands were doing like this and his mouth was moving right that's good you guys worked that out beforehand that's amazing they rehearsed that you were doing that outside so product price place promotion so in other words when we set the price at a certain level that means that potential customers in that segment are going to buy or when we develop and run a particular commercial that the people who see it are going to have a similar reaction that it's going to get their attention that it's going to create interest that's going to stimulate desire and get them to take action would that be one of the concerns before you aggregate before you um break it up into groups right that would be one of the um concerns because remember we talked a lot about that we want to customize the marketing mix we want to tailor the marketing mix to meet the needs of a particular segment so when we have let's say for example all men okay but are all men going to buy golf clubs at that price see that's what we're trying to determine um are all men going to react the same way to a particular advertising commercial now we know in the US for example um the US is very diverse so there's people of different um ethnic backgrounds so you have in a given market um even if we take New York you have African-Americans Caribbean Americans Asian Americans Hispanic Americans and so on and so on so are they all going to respond to our commercial the same way are they all going to have the same reaction no so um we need to anticipate that so when we form a segment we want it to be large reachable the members of the segment have similar needs and
wants and more often than not they're going to respond to the marketing mix in a similar way not always right it's not perfect we're not going to say every single person in that segment is going to respond the same but ideally why because that's going to be cost efficient for us questions so one of the things we need to think about is well that being said so we know what the criteria are who can tell us what are the four criteria go ahead okay so the group must have similar needs and wants and uh they also have to be large and reachable and they will um respond to the marketing mix in a similar way right so now that we know what the criteria are the next thing is well how do we segment the market then we know what we're trying to achieve so we have the criteria that's smart we identify that first what are the criteria but then the question is well how do we go about segmenting a given market so there's a number of ways that we could do that so let me tell you what um some of the key ways are first demographic segmentation geographic segmentation psychographic and behavioral all right so I'm going to tell you what each of those um are and then we're going to look at some examples so what I've just shared with you is that some of the ways that we could segment a market we said that means dividing a market into um submarkets or segments is demographic geographic psychographic and behavioral so demographic segmentation means that what we do is we divide the market into segments based on for example gender race religion education level income did I say age does that make sense so those are types of demographic segmentation so what we do is we group together we aggregate potential customers based on their gender let's say so what that means is that we group together in a particular market all women and all men and we see those as two distinct market segments and the assumption is that each of those
segments are large they're reachable they have similar needs and wants and they're going to respond to the marketing mix in a similar way that's one example what about age so we could segment the market by age so what this suggests is that we believe that in each of these segments right based on each of these age groups there are similar needs and wants that they're going to respond to the marketing mix in a similar way that these age groups are reachable in a given market now it doesn't need to be 18 to 25 maybe our research says 18 to 35 and 36 to 55 remember what we're trying to do is group together potential customers into segments that are large reachable with similar needs and wants and respond to the marketing mix in a similar way so let me give you another example of what we mean when we say responds to the marketing mix in a similar way so another example would be when we talk about place we talk about distribution so in other words if we say they respond to the marketing mix in a similar way as it relates to place it might be that they do all their shopping online now that's a key takeaway so when we think about whether or not this is a compelling segment and we say it responds to the marketing mix in a similar way that would be a really good example so I don't want you to think oh what does that mean responds to the marketing mix in a similar way well that would be a good example that means that people in this age group let's just say they shop online now that's very important because that means that we need to have a virtual store this age group an older age group maybe they shop very little online maybe they shop only in department stores we need to know that beforehand to make sure that we have distribution in department stores in that particular market so it's not just conceptual it has a very practical application when we talk about responding to the marketing mix in a similar way so that's an example of place of
distribution and so these particular customers potential customers that's how we're going to um distribute the product online I have an odd question well I like odd questions which um segmentation variable would you use for a group like pregnant women I would say that's um like lifestyle psychographic so it's a life stage for example what if you have like um sort of overlapping segments that happen like that yeah but remember we're the ones defining the segments it's based on our analysis so we define the segments based on our research through our qualitative research through our quantitative research through our secondary research and primary research that's how we're able to segment the market we've already done research once we have that learning then we're in a position to segment the market to divide the market into these segments and to group potential customers and then we name the segments we decide what the names of the um segments are going to be so for example we could name each one of these um age groups whatever we want that's up to us so that's why it's so important for us to understand this because for you to add value in an organization you need to be able to think critically like this you need to be able to do this type of critical analysis because what's going to come out as a result is going to be a significant opportunity for the company and the way that you're going to segment the market very often is going to be different from the way somebody else is going to segment the market and that's why we say one of um the greatest competitive advantages that a company has is its people so you're unique they could hire other people but there's only one of you right so your creative genius your analytic skills your critical thinking ability is what's going to be unique in an organization and that's what's going to um help the company to be successful and profitable somebody had a question here go ahead yeah can
you explain again what is psychographic yes we're going to get to that but I want to try and so I gave you an overview of the um ways that we could segment the market so we're going back now to talk about demographic segmentation geographic segmentation and so forth so we segment the market um this is an example of a demographic segmentation we could segment the market by religion now why is this significant now these types of segmentation that I'm sharing with you we know as marketers have relevance have significance that's why I'm sharing this with you because there's a lot of ways that you could segment a market a lot of different ways here are some traditional ways to segment a market that could be very insightful and very compelling but they're not the only ways why does this make sense what do you think based on what we said the criteria that we have for segmenting a market why would it make sense to segment a market based on religion they'll have similar needs and wants if they're all the same religion yeah they're going to have similar needs and wants a lot of these segments are quite large and I think I put it in the right order um it's Christian Muslim um Buddhist the last time I checked um was about 750 million which is very substantial and then um there's a few others um that actually um in terms of the Jewish population it's only about 14 million so relative to um these other segments that segment is um quite small but yeah your point is certainly very well taken they're going to have similar needs and wants like let's say for example Christmas trees well once we do the market analysis it's going to be very important to know if 88% of the market is Christian then you know you might have a good chance of selling Christmas trees in that market now there might be other manufacturers of Christmas trees but if it's 88% Muslim that's a problem right Muslims are not going to buy
Christmas trees and I know it sounds like a blinding glimpse of the obvious but we have to do our research right we can't think oh well yeah I think there's a lot of Christians that live there well we need to know how many is it half the population is it 10% now if it's 10% it might still be worthwhile for us to pursue that opportunity but we have to go through the analysis how uh general or specific would we want to get in the research for example if I was selling a golf product right would I advertise in a golfing magazine or would I rather just advertise in a general sports magazine something like that um I would do both um because remember our challenge is to reach the target market so Golf Digest for example I would like to think that would be um one of our first choices to run a print ad but I think you raised a good point that even in a magazine that's uh what you consider to be a general sports magazine I still think you might find maybe um let's say 25% of the readership that would buy golf products I'm just saying it could be 15% depending on the particular magazine and that's what media planners spend a lot of time doing that's why media planners work 90 hours a week trying to determine which group of magazines for example is going to provide the highest level of reach and at an um efficient rate so for some magazines the uh profile of a given magazine only 50% of the readership might be a match with our target market but in some categories in some markets that's actually a lot so you have to determine which magazines and that's why for example I could tell you some magazines like let's say um Better Homes and Gardens Better Homes and Gardens is not a sports magazine but just for example has a circulation of about 7 and a half million which is a lot it's really a lot it sounds like a small number because with television we're always thinking about reaching 200 million people during the Super Bowl but for print
actually a circulation of 7 and a half million is an indication that that magazine has one of the highest levels of um circulation not the highest but certainly one of the highest and a full page color ad for um one month right so one insert is almost $400,000 so you think $400,000 I always hear them talking about spending $50 million on an advertising campaign yes 400,000 times 12 months is what almost $5 million and then one magazine is definitely not enough I can tell you from my own professional experience generally we advertise in 10 to 12 magazines so now you went from what $400,000 a month to now you're talking about spending if you were just to um spend um in print right you could easily spend 30 40 $50 million now mind you other magazines that have less circulation are going to charge less for a full page ad so some of them might be 300,000 some might be 200,000 some might be 100,000 50,000 yes go ahead um but for that magazine yes there's 7 million subscribers and viewers of it but isn't that a very wide base like how do they know how to corner that market who are they advertising to in that case because there's going to be so many different types of people reading that magazine right so absolutely so one of the challenges in advertising is that um there's waste that we're reaching people who are not in our target market or are not part of the target audience what media planners do is try to minimize the waste but um for example one of the former um executives at Procter & Gamble which you know is a very successful um marketer of consumer products one of their former executives said um this is like maybe 20 years ago 25 years ago but it's so relevant to your point he said I know that 50% of my advertising budget is wasted the problem is what you just said I don't know which 50% now that's just a realization of the market right you're right it's not um perfect efficiency we know that um we're reaching
some of the target audience but we also know that there are people reading the magazine who are not part of our target audience so we're going to try to pick the magazines that have the best CPM cost per thousand and those that are going to reach a greater percentage of our target audience but in some cases we have to use a publication that is going to reach people that are not in our target audience that's um certainly one of the disadvantages of advertising on television yes of course you're going to reach a lot of people advertising on television but you're going to reach a lot of people that are not in your target audience so if you sell soda for example then television would be a good way to advertise whether it's um during the Super Bowl or any other time because pretty much you would like to think that everyone is your target market that certainly would be um the aspirational goal of Pepsi or Coke although there's quite a few people who don't drink soda but those non-users maybe they um would try the product so those are examples of demographic segmentation we could also segment the market by geography go ahead um do any of these ever overlap so that it gets that specific um in trying to target like a very specific um section of the market overlapping which way so in other words they're Christian and between the ages of 25 to yeah so what we want to do when we say our target market that's a good point when we talk about our target market and defining our target market well that's what that means so in other words if somebody says who is our target market you should say our target market is men between the ages of 18 and 45 who have at least a high school education and live in the United States and are of any race or religion so that's um all inclusive so it says that yes they are um in that age group and they could be um 28 to 4 but they could also be Hispanic American or Asian American and Caribbean American and
they um have a high school education right so there's that overlap is that what you're trying to say that they're both right that they're in that age group and they're also Jewish and they have a high school education so that's fine um that's what we need to do when we define our target market but then what happens is our target audience which is who we want to reach with our advertising is very often a subset of our target market do you see why that is so in other words our target market let's say is all men 18 to 45 but then our target audience and we're going to have several target audiences right that's who we want to reach with our advertising we're going to have an advertising campaign that's trying to reach Asian Americans an advertising campaign that's going to reach Hispanic Americans an advertising campaign to reach African-Americans so each advertising campaign is going to capture this idea of multicultural marketing that you want your advertising to resonate with the target audience it's something that people have got to connect with now let's say you want to sell a product to 18-year-olds you're not going to have me in the ad they're not going to want to buy a product that I use they want to see you guys they want to see you say yeah look at him he's cool and oh he looks like a college kid just like me and he wears $300 sneakers and $250 jeans so you want to be able to connect with the target audience so that's why we customize our ad campaigns and especially in the United States it's certainly very relevant because the market is very diverse and the segments are also large the Hispanic American population in the United States is increasing very rapidly the Asian American population is increasing very rapidly African-Americans in the United States are approximately 12% which is what that's more than 35 million people that's a pretty big segment it makes sense to customize an advertising campaign that African-Americans can connect to is that
right you guys agree does that make sense and then for example let's say for um Hispanic Americans you're going to advertise in magazines that are read by Spanish-speaking Americans and you're going to advertise on TV stations like Telemundo for example that run programming in Spanish so that's what they want to see that's what the customers want to see is that what you guys want to see when you're purchasing a product you want to see an ad that's representing your lifestyle or maybe it's aspirational so it's um representing the lifestyle that you desire how important is it to market to groups within different types of demographics for example like all the different groups that come from different backgrounds yes I think the more specific the more compelling so the better that you could customize the ad so that a specific sub-segment will connect and relate to the ad better and ultimately purchase the product I think that's ideal and that's why I drew a distinction between what is very often referred to as African-American but then you also heard me use the term Caribbean American but those are two very um different cultural groups right although generally in terms of skin color often they're referred to as blacks but their culture is very different it's very different from somebody who's grown up in Mississippi down south and somebody whose family moved here from Jamaica 15 years ago so it's a very different um culture and it means that their needs and wants are going to vary in a variety of ways if there's similarities then that's okay there might be some similarities for certain products and other products there could be differences so for example um the food there's definitely very different um food and delicacies that are preferred by Caribbean Americans and not so much African-Americans but there could be other products where um the needs and wants are similar like for example hair care and I know you like to think what does
this guy know about hair care? But I know a couple of things. I know what shampoo is. You follow what I'm saying? Does that make sense? Oh, you got it now, he just got it, okay, good. Go ahead. How would you appeal to a variety of people? Say you want to sell a product that anyone can use, and it's applicable to anyone's life over the age of 18 and below the age of retirement. How would you appeal to anyone like that? Because you're talking about every single sub-segment or subdivision, everything. So what you need to do is communicate to each group with a different marketing communications plan. This idea of one size fits all, I don't recommend that. So I know what you're trying to say: how do we sell to all religions, all age groups, all ethnicities? It's challenging to do that, because whatever it is that you do, there are going to be some groups that connect better with the commercial and our product and service than others. Even if you use animation. Look at what Geico has done. They said, you know what, we're not going to show a Hispanic person, we're not going to show an Asian person, we're going to show a gecko, right? Or a caveman. And we're not going to tell anybody his religious beliefs or associations; we'll leave that to your imagination. We're not saying he's an atheist, we're not saying that the gecko is Jewish or Christian; that's TMI, too much information, we're not going to share that. But then you say, oh well, coach, that sounds like a good idea, we'll use the gecko, and wouldn't everybody relate to that? But what about humor? What everybody considers to be humorous is going to vary from culture to culture. Maybe in some cultures they find that very amusing and in other cultures not; maybe in some cultures they find that offensive. Maybe they think they're mocking the gecko, and that's maybe somebody's pet
and they're offended by that. So you have to think about that carefully, but I think that's a good example. They're trying to sell car insurance, but, and importantly, think about this, not to everybody. Why would I say not to everybody? Those who have cars. Exactly. So now, what about when you run this ad like they do? And please don't tell me that because a big company did it, that makes it right, because big companies make big mistakes. Certainly they advertise on television, but like Alexi is saying, everybody doesn't own a car. So what about all those people being exposed to that television commercial who don't own a car and don't have a need for car insurance? That's waste. That's what the president of Procter & Gamble was saying. But what could I do? Alexi owns a car and somebody else doesn't; they're both watching the show at the same time, the same day of the week. So it happens. What we want to do is try to minimize the waste. So we talked about demographic segmentation: age, gender, race, religion. Let's talk about geographic segmentation. Geographic segmentation could be based on region. The idea is that we believe people who live in a certain region have similar needs and wants, are going to respond to the marketing mix in a similar way, are reachable, and the segments are large. But don't we also get into culture here? Because it's geographic, different parts of the world and countries, it's also different cultures. So how do the cultural differences fit in, in terms of regional? Yeah, geographic segmentation would also be cultural segmentation in a certain sense. So absolutely, maybe this is not relevant for the particular product or service that we want to sell. You guys got what Alexi is saying? He's saying, well, in North America we have the United States, Canada and Mexico
What was that? Does that make sense for the product or service that we want to sell? Maybe what Alexi is saying is that in Mexico the culture is very different than, let's say, in Canada or in parts of the United States. Although there are a lot of Spanish-speaking people in the United States, the language you speak does not always indicate a common culture, because people speak Spanish all over the world and the cultures are very different, and also the dialects of Spanish are very different. So Alexi brings up a good point: maybe this is not appropriate for our product, North America, South America, Latin America, etc. Or if we look at, let's say, Asia, we have Korea, Japan, China, just for example. Yes, they're Asian, but certainly there are vast differences in the culture in each of those countries, so maybe this is not the best segmentation. Maybe instead of looking at the region level, if we're focusing on that region of the world, we take it to the next level and focus on specific countries: China, which has 1.3 billion people, India, which also has about a billion people, Japan, Korea. And by the way, you see what I just did here? By quantifying the population, that's referred to as market sizing. What I just did is quantify the size of the market by saying that 1.3 billion people live there. It could be in dollars, it could be per capita income, it could be the number of people, but once we've segmented the market we want to know the size of each segment. Do we prioritize because of that? Yeah, so one of the things that we're going to look at after we've identified these segments is which ones are the largest, which have the greatest level of expected growth. We're going to look at the concentration of the market. So the size, the growth rate. Remember we talked about the Boston Consulting Group model? Remember we talked about portfolio analysis and we talked about the
stars, the cash cows, the dogs (not to be confused with dinosaurs) and the question marks, right? So the size of the market is important, the rate of growth, the concentration of the market. In other words, what percentage of the market is controlled by, let's say, five competitors? Is the market highly concentrated or is it highly fragmented? A market that's highly concentrated, for example, is wireless communication in the United States. Basically, in the United States, what do we have, like four companies that control literally about 90% of wireless communication? The largest is AT&T, then Verizon, then Sprint and T-Mobile. Aren't those the four largest competitors? That's very different from a market in which 100 competitors make up 90% of the market. If 100 competitors make up 90% of the market, then that's highly fragmented versus highly concentrated. That's going to have an impact on how we view the level of market attractiveness, so we need to take that into consideration. Also, Michael Porter has a market attractiveness model known as the five forces, and the five forces model looks at some other aspects, such as the level of rivalry. The level of rivalry is an indication of how attractive the market is: if the level of rivalry is very high, then the market is less attractive. Threat of substitutes: if the threat of substitutes is high, then the market attractiveness is low. So, for example, if we sell milk in a particular market, what would we be concerned about? Other milk sellers, and orange juice. Orange juice, right, that's an example of a threat of a substitute. People might drink milk produced by other farmers, other dairies, other branded milk products, but a substitute would also be juice or maybe soft drinks or maybe water. It depends; that's something we need to understand from a consumer behavior perspective in a given market. There's no right or wrong answer; it's only
what consumers say: if there was no milk I would drink orange juice, or I would drink soda. Is it the other way around, like what PepsiCo does, in that they own a milk company and an orange juice company? So a company like Pepsi or Coke operates in multiple segments in the beverage category, absolutely. Pepsi owns a variety of soft drinks; they have a portfolio of soft drink brands including Pepsi and Sierra Mist. What else is theirs? They own bottled Starbucks drinks. Pepsi is the cola brand, Sierra Mist is the lemon-lime, and Crush, so they have an orange-flavored soft drink. But to your point, they also own Aquafina, which is a brand of water. And what about juice, do they own a juice company? Minute Maid, well, Minute Maid I think is Coca-Cola. I think they finally did acquire Tropicana, didn't they, PepsiCo? Yeah, PepsiCo owns all those brands: Frito-Lay, Tropicana, Quaker and Gatorade; Frito-Lay is all those chips, Tropicana is the juice, Quaker, right. But I think Coke had taken the lead with its Minute Maid brand for a long time, and then Pepsi emulated them and realized, in terms of the way they segmented the beverage category, that owning an orange juice company made strategic sense to them. But both of them are very adamant that they don't want to sell alcohol. Now, in the US, 60% of the dollar sales in the beverage category are alcohol. In the US the beverage category is about $200 billion at retail each year: $120 billion is sold as alcohol and the other $80 billion is soft drinks, water, juice, teas. It's quite interesting, though, that with Coke or Pepsi diversifying, including the milk and the orange juice companies, they're not competing with themselves, they're competing with each other; since everyone is now diversified, they're all just competing with each other rather than within the different
categories. Yes, it's very interesting to think about who your direct competitors are and who your indirect competitors are, and they might be competing, within Coke the organization, with themselves. Now why would you do that? Because if you now own an orange juice company, and you are known for selling soft drinks, and that could be what people perceive as a substitute, then maybe your Pepsi sales are going to go down. But here's the logic. Remember, any time we introduce a new product we want to achieve incremental sales, we want incremental revenue; we don't want to just replace sales. In this case we're not talking about incremental revenue, we're talking about just the opposite, which is cannibalizing our sales. That means, for example, we might sell less Pepsi and sell more orange juice. And the reason is that if we don't cannibalize our own sales, somebody else will. There is a cost of doing nothing. Don't think that doing nothing is the safe decision; it's not. Just because you say, you know what, I'm not going to acquire an orange juice company because that's going to cannibalize the sales of my soft drink business, well, that doesn't prevent orange juice from cannibalizing the sales of your soft drink business. But they're also reaching a whole other market: people who just drink orange juice and not soft drinks, so it could be profitable without taking away from the other sales. Absolutely, I think it's a good idea in terms of expanding their business; it makes a lot of sense. Like they say, if you can't beat them, join them. If you know that one of the substitutes is orange juice, then why not also sell orange juice? Remember the first day we talked about the difference between a marketing orientation and a production orientation. A production orientation means that we try to sell what we can make, whereas the marketing orientation is we make what we can sell.
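The cannibalization logic just described can be sketched with simple arithmetic. The dollar figures below are hypothetical, made up purely to illustrate the point, not numbers from the lecture.

```python
# Hypothetical illustration of cannibalization vs. incremental revenue.
# All figures are invented for the example.

pepsi_before = 100.0   # soft drink revenue before launching juice ($M)
juice_sales = 30.0     # revenue of the newly acquired juice brand ($M)
cannibalized = 10.0    # soft drink revenue lost to our own juice ($M)

pepsi_after = pepsi_before - cannibalized   # 90.0
total_after = pepsi_after + juice_sales     # 120.0

# Incremental revenue: what the company gains overall despite eating
# into its own soft drink sales.
incremental = total_after - pepsi_before
print(incremental)  # 20.0

# The "cost of doing nothing": a competitor's juice takes those sales
# instead, and there is no juice revenue to offset the loss.
do_nothing_total = pepsi_before - cannibalized
print(do_nothing_total)  # 90.0, worse than 120.0
```

The comparison shows why cannibalizing your own sales can still be the better decision: the company ends up with more total revenue than if it had stood still.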
Remember we talked about that distinction. We said the marketing orientation is focused on making what we can sell. So it's not just because we have soft drink bottling capability that we're going to produce soft drinks and try to sell as much of that as we can; that's a production orientation. The marketing orientation says we're going to find out what customers want, and without even doing in-depth analysis, just walk into any grocery store and you'll see that there's a need for different types of beverages: soft drinks, juice, water, tea, etc. You said Pepsi and Coke are adamant they're not going into the alcohol industry; why is that? Focus. At one time the emphasis was diversification. Now, there are two types of diversification: related diversification and unrelated diversification. If you're a soft drink company and you acquire a bottled tea business or an orange juice business, that's considered related diversification. Now, what companies did in the 70s, which was considered very common. Were any of you born, were any of you alive in the 70s? I don't even think so. All right, maybe that was a bad example, but anyway, in the 70s, which was like the dawn of time basically, companies were focused on unrelated diversification. So you would have, for example, cigarette companies buying food manufacturers. Philip Morris, exactly, so RJ Reynolds and Philip Morris. That's pretty bizarre when you think about it. You have retailers, remember we talked about Sears: they acquired an insurance company, Allstate; they acquired a brokerage firm, Dean Witter, and Discover Financial Services. I mean, you're a retailer, and at that time, or just prior to that, they were the nation's largest retailer. What business do you have owning an insurance company? Your stock in trade is retail. But that was very common: news companies owning theme parks, and alcohol companies, and so forth. There are some advantages to being diversified that way, and there are also some
disadvantages, and one of the biggest disadvantages is lack of focus. It's this idea that you can't be a jack of all trades. If you're a retailer, be the best retailer; but it's very challenging to be an effective retailer, an effective merchant, and also run an insurance company and a brokerage firm and a credit card business. There are a lot of examples of these conglomerates that were formed. A company like General Electric, for example, is still today a very large conglomerate with very diverse holdings, and they've been very successful. It doesn't mean that some companies can't be successful with diverse holdings, but the reward on Wall Street, if you will, is for companies that are focused, and the belief is that the more focused, the more profitable the company is going to be. So the paradigm shifts, but that's the way the market is today. All right, so we talked about demographic, we talked about geographic, and what else did I mention? Psychographic. Psychographic is about lifestyle, lifestyle and personality. When we segment the market by lifestyle, that means we believe that a certain lifestyle has similar needs and wants and is going to respond to the marketing mix in a similar way. What would be an example of a lifestyle? Somebody mentioned before, they said, what about if you're pregnant? Didn't you guys ask that, do I look like I'm pregnant? People always ask me when the baby is coming. And also they said, well, if things don't work out for you, you know Christmas is coming, they're always looking for a Santa Claus on 34th Street, so keep your options open. But I told him, I could never do that: Santa Claus had hair, lots of it. So in terms of lifestyle, there's golf, certainly different types of sports, but also your life stage. So for example, married
with kids. Life stage would be single, married, married with kids, and then we have what's called empty nester. What does that mean? Empty nester, right, when you finally get the kids out of the house. So what this says is that people who are single, and we're talking about lifestyle, have similar needs and wants and are going to respond to the marketing mix in a similar way. Is that everybody who's single? No, it's not, but remember, we're looking for ways to segment the market that are going to help us operate efficiently, be profitable, and maximize our sales. So maybe this is not the best way to segment the market for our product and service. Married, same assumption; married with kids; empty nesters. Those are different life stages. Go ahead. So, like Gerber, they sell life insurance, they sell baby life insurance. So is that them segmenting the market away from another baby food company, where they're just selling baby food? With Gerber you are getting your baby food and, you know, your kid needs insurance. Well, I think what I'm hearing you say is that they identified the market as life insurance. I know Gerber is the one that sells the insurance, but what I'm saying is that they segmented the life insurance market, and they said there are different segments: there are babies who need insurance, there are teenagers who need insurance, and then adults within different age groups who are going to need insurance. So I think the way they're looking at the market is smart, because they took this huge market, life insurance, and said, this is the way we're going to break down the market, and we're going to target. When we're targeting, what we're doing is selecting a segment or multiple segments; we're going to focus on this segment, the segment for life insurance for babies. Yeah, I think that's compelling. Now, whether or not they decide to target these
other segments is, you know, a different business model, but I think them focusing on this segment is also relevant to their brand. In other words, when we brand a product or service, we have to think about whether or not it's logical to brand that product or service with that particular brand. We have to look at the brand elasticity: how far could we stretch our brand? Now Gerber, as you were suggesting, is a very well-known marketer of baby food, so Gerber for most people means baby. You could extend the Gerber brand into a lot of different categories that relate to babies: baby food, baby insurance, I think a lot of other categories. But maybe Gerber jet skis? There's not a logical connection there. So I think this is really smart, because they realized that their brand can be extended into life insurance, but it's very relevant specifically to life insurance for babies. Yeah, I think it's very smart what they did. I think Gerber also has a college fund type of thing that they set up from when they're babies, so that by the time they're teenagers, like us I guess, or 21 or whatever... Oh, so you were just saying, if you're teenagers and I'm 21? No, I'm 21, but in general, the average teenager; I mean, college life is 18 to 22, I'd say. So from day one they're basically targeting each group, meaning babies for baby food, teenagers for college, and adults to pay for, I guess, the college and the baby food. And their marketing pitch would most probably be towards those adults at the current moment, just based on who's paying for the product and who's raising their loved one, I guess. So tell me more about the tuition, the program that they created; they're contributing towards the scholarship? Yeah, from what I know, all I really know is from the commercials. They say, like, they had this whole
family discussion, saying how, like, I started a college fund, and they just put diapers on the baby, basically. So that's the type of thing. From what I assume, Gerber puts a percentage of whatever that person buys into the college fund, or actually probably a percentage of whatever their profits are in the end goes to whoever signs up for their college fund program. And what did we say that would classify as? Last class we talked about this. Why are they doing that, what are they trying to... Social responsibility, right, corporate social responsibility. That's a good example: basically they're giving money to a scholarship or some sort of charity. That's a good example of corporate social responsibility; that's the reason why they do it. A monthly payment? What is it? It's a monthly payment. Tell us, you found it, what is it? It's a monthly payment that fits your budget, whatever it is. So does the company put the money in? No, you the customer do. You decide: between 10 and 20 years you receive a guaranteed payment of $10,000 to $150,000 when your policy reaches maturity. Wow, so they really are getting into financial services. That's interesting, so they're basically selling annuities. If they had to stop making baby food, they'd have no more source of income, and they have this life insurance plan, and the life insurance plan needs to be backed by some capital. So the only way to guarantee that they'll have that capital to pay that plan is the babies that stay alive for 18 years, get to college, and they swallow all the life insurance money that they don't need to pay out, and that goes to the college fund, no pun intended, right? Yeah, okay. All right, so it's a good example; maybe we'll have a chance to revisit that in another class. It sounds like an interesting company to study. Before we go, I
just want to touch on this. I don't want to rush it, but I want to give you some insight, and we'll talk about this again next class, about behavioral segmentation, which has to do with usage rate; another example is product benefit. Let me tell you this quickly and then we'll start with it next time. In terms of usage rate, we have heavy users, moderate users and light users. We'll also talk about product benefit and how that's a significant way to segment a market. A good example would be toothpaste: they segment the market by the benefit that the customer wants. For example, some customers buy toothpaste because it fights cavities, others because it whitens teeth, others for fresh breath, others because it fights plaque, etc. All of those are compelling ways to segment the market. All right, before you go, what I want to do is give you this sheet, which is a review of chapter one. |
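The behavioral segmentation by usage rate described above can be sketched as a simple classifier. The thresholds and the customer records here are hypothetical; any real segmentation would pick cut-offs from actual purchase data.

```python
# Classify customers into heavy / moderate / light users by purchase
# frequency. Thresholds and data are hypothetical, for illustration only.

def usage_segment(purchases_per_year: int) -> str:
    """Map an annual purchase count to a usage-rate segment."""
    if purchases_per_year >= 12:
        return "heavy"
    if purchases_per_year >= 4:
        return "moderate"
    return "light"

# Hypothetical customer purchase counts.
customers = {"Alexi": 15, "Zack": 6, "Daniel": 2}

segments = {name: usage_segment(n) for name, n in customers.items()}
print(segments)  # {'Alexi': 'heavy', 'Zack': 'moderate', 'Daniel': 'light'}
```

The same shape of lookup works for benefit segmentation: replace the frequency thresholds with the benefit each customer buys for (cavity fighting, whitening, fresh breath, plaque).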
Marketing_Basics_Prof_Myles_Bassell | Marketing_8.txt | [commercial jingle plays] ...distributed by Procter & Gamble, Cincinnati, Ohio. That's where their corporate headquarters are, but they have a different master brand for each product type. So Tide is one of their master brands, Charmin is one of their master brands, Scope is one of their master brands, and all of those are market share leaders in their respective categories. Questions about that? And then private branding: some companies manufacture products that are sold by other organizations. For example, Whirlpool manufactures dishwashers and dryers and washing machines and refrigerators, and they sell them at Sears, except branded Kenmore. Kenmore is a brand in the Sears brand portfolio. So that's an example of private branding. All right, and did I tell you that you're the best students ever? I didn't tell you that? Are you sure? You are, you're the best students ever, and your success is my number one priority. All right, so that's it for tonight; I'm going to take attendance. all right let's see Katharina happen Michelle let's see I want to see this I gotta see for myself hold on let me a second oh well we're all right hi every day oh come on our practice come soon let's see Huseyin summer summer summer no today Danielle Danielle Burton shannond surgeon is here Denise Camacho pinion Alisa salon is he a genius chai Ling Hwa Yong well Caroline oh that's you Caroline okay Caroline and Connie rick is here cannot Sully clock can Ashley Clark no Victoria Veronica Aaron James is the dark Glen Crispin Dawkins sending the sentence yellow khadijah right this year this event I know are you Daniella Maura Dimitri drop a couponer Loretta Julia Vanessa Roman Lily Stephanie Lopez Stephanie Lopez Claud
okay Natalia Natalia chamalla Marcos Kiana Mason Kiana Mason Madeline Mohammed Boston yemassee baby Alexis Bugs Bunny is here a Dallas Perez Katherine plans Jennifer palette Philomena is here what about Prisca Briscoe ramzani no Lisa rapper save her son no let him Rose Sabina Caterina know Catarina softchalk habu he met you Brian Senora runs on cinema Daniel which one what's your listening yes search a little bit Spencer Kelly Mohamed Salah chicken pie Minnesota just stop Joseph kellytown Marco yes Jessica Ashley Thomas Ashley and Tran Natalia which one Alexander Doris Velez every time waters Daniel Wiggins Michael Winner Raymond Wong Vivian moon Matthew salon Erin Zach youngling miles the sun-times I all right posted on blackboard again but it I can make sure to song and thank you |
Marketing_Basics_Prof_Myles_Bassell | 5_of_20_Marketing_Basics_Myles_Bassell.txt | So we saw the video segment about Prince, and it talked about how they segment their market. First let's talk about some of the different ways we said we could segment a market. We said segmenting is about dividing a market into sub-markets, or aggregating a group of potential customers together that have similar needs and wants, that respond to the marketing mix in a similar way, and that are reachable. So what are some of the ways that we could segment the market? Good, so let's do it: demographically, age, gender, yeah. So what about psychographic, what is that? That's by lifestyle; we're going to talk about that. What else? So we have demographic, psychographic, behavioral, which would include usage rate (a light user, moderate user, heavy user, which is important for us to know), and geographic. And there are a lot of different ways that you could segment a market or a category. Let's think about the way that Prince segmented the market. They named those segments. What did they call them? Because once you segment the market, you name the segments, and remember, this is usually something internal, although sometimes our segmentation translates into our branding or product strategy; we have internal names for our segments. What are the segments that they've identified, and how did they name them? Good. The first one I think was performance, and this is specifically for tennis. For the tennis market, they segmented the market based on these classifications. What do these mean, what's the difference between those segments? Are there other ways
that we could segment the market? We're just trying to identify the way they segmented the market, so definitely we could come up with different approaches, absolutely. But for us, in terms of a takeaway, what we want to do is understand the way they segmented it; I agree there are other ways we could look at it. Recreational, if you want to casually play; and like they showed in the video segment, junior is for younger players. So it has to do with skills: they segmented the market based on the level of skill and the frequency of use. That's another interesting component: the level of skill and also how often you play. So now they've segmented the market this way; how does that translate into their product? In other words, we said that some have a very high level of skill and some have a very low level of skill. How did they modify their product? Because remember, we're saying that each segment is large, has similar needs and wants, is going to respond to the marketing mix in a similar way, and is reachable. What do we do about that? Are we going to sell the same racket to each of these segments, or is the benefit of segmenting the market this way that we could tailor and customize a racket for each of these segments? So the level of skill is significant in terms of product. And what Zack is saying is that this racket is going to be very expensive, and the rackets that they're going to sell, for example, to the junior segment are going to be basically inexpensive, certainly inexpensive relative to what they charge for performance. And Zack also took another step, which has to do with where you distribute the product. He said some products are distributed at Walmart, and Walmart is known as an everyday low price retailer, EDLP, but then Zack pointed out that these rackets you're not going to be able to buy there. You guys agree? Yeah. So the way they segmented the market has an impact on
the price of the product and where we distribute the product. So what is it about the product, how does the performance racket differ? There are features that the racket has that somebody with more skill is going to be able to utilize, but the rackets that they're trying to sell to the junior segment don't have those features, or they do but it's easier to use. Yes, it's simpler; maybe a junior wouldn't be able to use the performance racket because it's more specific: if you know how to use it, it's a better tool. So they've definitely modified the product in that way. Anything else? What else do they modify? The product size, so that's important. So they've identified these segments, and now we're talking about the different ways they tailored the product. What else? The size of the product, we said. What else is different about the product? So it's not one size fits all, is that right? They're not trying to sell a standardized product to each one of these segments. No, it's not standardized. So the segmentation of the market was not an academic exercise for them. It's not just that it's interesting to segment the market and identify these segments that have similar needs and wants; they then developed products to specifically meet the needs of each of those segments, is that right? And it means that they changed the size of the product, the price of the product, the design of the product, the features of the product. So it has implications. It's not just that you segment the market and that's it; you segment the market for a reason, so that you can identify segments and maximize sales for the company, because for each one of those segments, we said, we tailor a product, and each one of them is large and reachable, and that means that we're
going to be able to sell more rackets, because we also understand, as Zack was saying, that we're not going to just try and sell all our rackets at Walmart. Walmart is the world's largest retailer, but we have to know where the best place to sell our product is, and if we have multiple product lines, then very often we're going to sell in different channels of distribution. So we might sell in discount stores, we're also going to sell in sporting goods stores, in some cases we might sell in department stores; you might even find the junior rackets in convenience stores in some cases, or in wholesale clubs. But certainly there's got to be an alignment between our price and the channel in which we sell the product. So this is a good example of what in pricing we call good-better-best pricing. You see why it's not just academic that they did that; it has significant implications. So they have an inexpensive racket, a moderately priced racket, and also an expensive racket which they're trying to sell to professional athletes. Now, how does that impact our advertising? Are we going to be able to use the same commercials or print ads for all of these segments, or is that something we have to change? Go ahead. It's very helpful because it helps us identify where the market shops: people would probably go to a store that caters to sports, a sporting goods store, or, as they showed in the video, a place that's specifically a tennis shop, where people go to get tennis products. So it's helpful because it helps us tailor. Absolutely, that's a very good point. Then take it the next step: in terms of advertising, how is that going to impact our approach, the different media that we might use and the messaging? Do you think that maybe for the recreational one you can use, like, social networking, stuff like that, and for the performance one, the tennis courts where more professional and frequent
players play, and for the general ones maybe toy stores or something like that. So yes, for outdoor advertising, as you're suggesting, you might have a billboard at tennis events where you could reach tennis players, but also people who are tennis enthusiasts or aspiring professional tennis players. Do you think that's significant, or are the only people who buy the performance rackets professional athletes? What do you think? Anyone can buy that racquet just as easily; it's a matter of promoting it in the right way. And so what is part of the expectation when you buy a product like this? Later on we're going to look at a golf club; are you familiar with the golf glove called Reptile? What is it about the Reptile glove, or this performance racket, or Air Jordan sneakers; what is the expectation? Good, tell us. It kind of seems like there's worst, middle, and best, so even though it's not positioned exactly that way for different people, the expectation is that the top tier would be the best one. So people might say, oh, this is the best; it has a high perceived value. But when we talk about quality, there's got to be perceived quality and performance quality. Do you see the difference? Who could tell us the difference? And then we're going to come back to that. Good: the cheaper ones are replicas; they're made more cheaply, but they're still from Nike, still branded Michael Jordan. The sales from those, I think, allow the company to finance the making of the more expensive shoe, where the performance advancements they've put into it, and any research that has gone into making a better Jordan basketball sneaker, are displayed in that model. Whereas in the cheaper model they've tried to emulate the look of the more expensive models, so that people feel like they're getting it even though
they're not able to afford the real thing. But there are more sales of the cheaper products than of the higher-priced performance products in terms of the number of units. Yeah, anybody want to add to that? So there are two separate points we need to address here. One has to do with the expectations when you use this product, and that's related to performance, and performance is a component of quality. What I was suggesting is that when we talk about quality we have to look at performance and also perception; both are very important, and it suggests that there needs to be a way for us to substantiate our claims. Now as it relates to these types of products, generally there's an expectation of performance: that using these products is going to enable you to be a better athlete, that it's going to give you some type of edge. And there's even a suggestion, whether it's subliminal or something we think subconsciously, that if we're wearing a pair of Air Jordans we're going to be able to jump higher. What do you think; do you think people expect that, and what does it mean for a product to be a performance athletic product? With Nike products especially, the way the product is marketed is that these are the sneakers, the footwear, that athletes use, and historically, over the last several decades, they have used celebrity endorsement as a way of building their empire. The suggestion is that these athletes wear our footwear and that that's the reason why they can excel in the sport. Do you get that sense from the advertising and the marketing, just as consumers, in terms of the expectation? They're not coming out directly and saying it, but the idea is, once you wear these sneakers, that's it, you're going to be able to do the alley-oop, you're going to hit three-point shots all the way, thirty points a game. Can you believe it, the Knicks
won on Friday? Unbelievable. Yeah, it's impressive. When I first started watching them, Patrick Ewing was on that team. You guys remember Patrick Ewing? Really? Most of the time the shots wouldn't go in, but that's what they would always tell him; that was part of their strategy. But anyway. Well, it could be unethical, but you have to ask yourself if it's even effective. Subliminal messaging: is that something that we believe is really having an impact on people, and what is the nature of the messaging? In other words, if you're in a movie theater and before the show begins they have some previews and so forth, and every so often they flash up the Pepsi logo, and it happens so quickly that it's not something you're aware of, you might consider that to be subliminal, right? It happens so quickly, and then it happens again, but it's not up there long enough for you to be conscious of the message or of seeing the logo. So your point is a good one: is that okay, or is that something unethical? It depends what the messaging is; what if it's something that could be harmful to others? What do you think about product placement; is that something that you consider to be subliminal? What's product placement? You mean like marketing, like ads on Facebook? Well, that's one way; I can see what you're saying, and sometimes we use the term that way, and we're going to talk about product placement in that sense too, but there's a specific strategy that marketers use. And sometimes it's also about placement of brand names on the shelf. Yes, placement on the shelf is definitely important: is it at eye level, for example, or is it at the bottom, where children will influence the decision-making process about whether or not to buy that particular cereal? And the location in the store is also significant. If I'm able to get an end cap, which is at the end of an aisle where you have a big display, that's considered to be prime real estate in a store, and brands compete over that space because you have a lot of visibility and it stimulates a significant amount of impulse purchasing. But what about when you have, let's say, a TV show or a movie, and the star, the key actor or actress in the movie or the show, reaches for something to drink and picks up a bottle of Pepsi? Now everybody's watching and you see that Pepsi logo. That's what we also refer to as product placement, and the companies have to pay for that, because the actor could have reached for and picked up a bottle of Coke or some other branded product. Yeah, or orange juice, right, absolutely. What's that, orange juice? Is somebody drinking orange juice? Let's see, bring it up, check it out. You see, that was a subliminal message, did you see this? Excuse me: grapefruit juice. All right, trying to trick us; it looks like an orange on there. So now I have to start changing my mantra to grapefruit juice. No, it doesn't work; what do you think, does orange juice sound better, or is it just me? But a while back, remember, we were talking a little bit about the fact that a company had changed its packaging and the customers were very upset by that. Packaging, when I talk about it, is an important brand identity element, and it is part of what we call trade dress: something that's recognizable and something that will show in every commercial. For consumer products you'll notice that almost always in a television commercial they'll show the packaging at least once, sometimes twice, because they want us to be able to recognize the packaging at the point of purchase. So it's very important to have brand recognition but also to be able to recognize the packaging. So consumers were very upset when they
changed the packaging, because it stripped away the equity that they had in that design, that look and feel. I remember myself the first time I saw it: I was in the store, and I was looking and looking, and I said, what, they don't have Tropicana? And I bought the one I thought was the store brand. I had just run in there to get orange juice, and I figured, oh, whatever, I'm just going to get this; how much time could I spend here, and I was double-parked. And when I got home I looked at it: what is this? It had such a different look from what the customers had become accustomed to and comfortable with, which is important. Just like when they changed the logo for Gap, do you remember that? Historically the Gap logo looked something like this, right, and then they changed it. I remember students debating with me whether this logo was better or the new one was better, but the thing is that customers were unhappy with the fact that the company changed the logo, because this was something that was familiar to them. And importantly, with this particular logo there were strong, unique, and favorable brand associations; they made connections between this logo and the brand name, and for them it was something very favorable. So it doesn't mean that you can't ever change your logo; yes, you can, but you have to understand the expectations of your customers. Because remember I told you, the easy part, so to speak, is determining a brand name and creating a logo; creating associations with your brand name takes a long time and usually takes millions and even billions of dollars to achieve. People already had a positive association with it and wouldn't want to see it change. A company will reposition itself when it wants to stay relevant to its target market, so sometimes you have positive associations, but then sometimes you
might have other associations with your brand, and it might be, for example, that your brand is perceived as outdated, or no longer relevant to the target market, or not contemporary or state-of-the-art. So they want to change the perception that customers or potential customers have, and one of the things they might do is change the logo, maybe make it look a little more contemporary, something that a younger generation can connect with. But it's not just changing the logo; you're going to change your entire marketing campaign as well. But if your logo is working, you've got to ask yourself why you would do that; that's a good question. Is the level of brand awareness declining? Is the level of brand attitude declining? Are there metrics showing we're losing market share? There's got to be some reason. I'm not sure we would say "if it ain't broke, don't fix it," because I'd like to think that we're committed to continuous improvement, but you have to have a reason for doing it, right. So remember we talked about brands and said that when we look to create a brand identity, it needs to be memorable, protectable, adaptable, and transferable? No? OK, we will. Those are four criteria: when we develop the logo, when we come up with a brand name, when we develop a tagline, slogan, and packaging, those are four criteria that we need to use to evaluate the branding elements. Yes, Joseph: before they do any of that with a logo, they can't just spring it on people, they have to test it, right, like the pharmaceutical industry does? Yes, of course we'd run good marketing research on the new logo, absolutely. We want to test; when we set out to identify the unmet need, we're going to test concepts. Absolutely. We do copy testing for advertising, or at least we should. I mean, some of the things that you see out there, you kind of wonder: really, you showed this to your target audience and they said this
resonates with them? Like, okay. Exactly. But sometimes what we consider to be an annoying commercial is not really, within the industry, what we would consider to be bad, because sometimes an annoying commercial has a jingle or something that you can't get out of your head, something that you talk about. It's so annoying that you tell everybody about it, and if you're able to create that buzz, then that's very compelling. So a commercial doesn't need to be liked by everybody; some people even try to be annoying to get attention. And when we think about the approach of our advertising, how we're going to execute it, it doesn't need to be funny either; humor is only one approach that we could use. But even if it sticks in your head like an annoying commercial, that can be a good thing. Does that make an annoying commercial a good commercial? The trade-off is that because it's annoying, you talk about it with other people, and you're talking about the brand, so you're creating brand awareness by doing that. Well, you're talking about the fact that the commercial is annoying, but that doesn't mean that the product is bad, right? It's just, oh, you know that commercial, it's so annoying, and every time I hear it... but every time you hear it, what happens? Do you change the channel, or do you watch it and then talk about it with other people? Now if you were saying that the product was bad, that would be a bad association: if people made the leap and said, well, the commercial is annoying, that means the product must be of low performance and low quality, that would be concerning. Some people even argue that bad publicity is good publicity, because, well, yes, it's bad publicity, but
it's publicity in a good way, in the sense that everyone's talking about it. People are going to say, oh, her album's out, let's go see what she has to say. Right, absolutely, so it depends on what the focus of the publicity is. Now, do you remember the difference between advertising and publicity? What's the main difference that we should be concerned about? One is, let's say, PR versus advertising, right; one is actually engaging people but could have the same impact. Good, let's see if you can enhance that. With advertising, you're putting advertisements out there, you're marketing a product to a certain category; publicity could be, like, paparazzi, it just happens, it comes about. Yes, all of what you're saying makes good points. So to recap: advertising is a message that we create and we have control over; publicity is a message that we don't create and we don't have control over. With an ad, a TV commercial for example, we have control over what's said; with publicity, the thing that concerns us is that we have no control over what's going to be said. Even if they interview you, and they're going to write an article or have a spot or a segment in their newscast, you have no control over what they're going to say. Publicity is considered to be free, and advertising is something that we have to pay for. Now, we can try to create publicity, and very often that's what you're suggesting when you do things that draw attention to yourself. That's why there's a lot of discussion about some of the things that go on with celebrities: was that something that was fabricated, was it real, was it some kind of stunt to get publicity? So we have to be sensitive to that, because sometimes it can
work to our advantage and sometimes not, especially if we're working with celebrities. What's one of the issues, one of the concerns, in working with a celebrity? They can be very polarizing: some people might not like that celebrity, and brands try to use celebrities as part of their approach. What else? They can do something stupid off the field. Yeah, right, exactly. So if you have a celebrity spokesperson, if you're using athletes for example, it's great that they're performance athletes, but what if they're arrested for driving while intoxicated, what if they beat up their wife, etc.? Then our concern is that that's going to have a negative impact on our brand. Yeah, there could definitely be a disconnect, absolutely, so we want to pick somebody that's going to fit; you can't just use any celebrity. It could be confusing, and each celebrity appeals to different market segments. It's the difference between saying we're advertising this product and saying we're using the publicity of this person. Yes, some companies think that publicity is all they need, to create buzz and engage in viral marketing, but what we're trying to do is build, long term, a relationship with our target market and target audience, to be able to engage them, and that's only something that can happen over time. It's difficult to sustain publicity for a given company over an extended period, because if you're creating publicity, you're also creating the events that lead to the publicity. And is that less expensive than advertising? In some ways it could be, depending on the situation and where we would be advertising, but it becomes challenging to execute over a long period of time. So it's something that we need to consider carefully, and remember, critically,
we have no control over what the publicity is going to say. We try to create publicity, we try to do things that are newsworthy to get this so-called free advertising, and then we're at the mercy of the reporters or newscasters as to what they're going to say. They might give it a positive spin, they might give it a negative spin, and it may not even be something that's relevant to our brand and to our product line. Absolutely, and we're not the only ones out there trying to get publicity; the media understand that, and when they do an editorial or a segment for a particular product or brand, they know what publicity is. All right, good discussion. Let's pick up now where we left off. We'll briefly talk about the difference between durable products and non-durable products. What did we say is another term for non-durable products? Consumable. So what's the difference between consumable and durable products? I think I heard it: an example of a durable product would be a car, which has multiple uses, right, something that we can use multiple times, over and over again, that we don't use up. Now, it could wear out; we can wear out a car or our computers, but it's not something that we consume. Food would be a good example of a consumable product, because it's something that we use up and have to buy more of; we have to replenish. Like orange juice, right: you buy orange juice and you drink it, and when it's done, it's done; it didn't wear out, we consumed it, so it's a consumable. We used it up and then we buy more. It's important to understand that, because it's going to have an impact on our marketing strategy: it's different when we're marketing a product that people buy every week versus a product that people buy every year or every five years. What do you think about sneakers, are they something that's durable or consumable? Because either way, that's our
concern: you have two different products with different life spans. Your car might last ten years, your sneakers might last ten months, but that doesn't mean the sneakers aren't durable; time is not one of the key determinants. The key distinguishing factor between durable and non-durable is that non-durable is something that you actually consume, that you use up, which is different from wearing it out or outgrowing it. The fact that your child's feet have grown doesn't change the fact that the product is durable; it's still a product that you could use again and again without using it up. You may wear it out or outgrow it, but you're not using it up. Ultimately, you see, the issue with durable products is that what we want to do as marketers is shorten the time between the initial purchase and the repeat purchase, right. With consumable goods, the period of time for a repeat purchase is usually going to be short; you might be buying orange juice every week, versus a durable product, where it could be every year or every ten years. Although that's something we need to take into account, the key distinction is that one you are consuming, using up, and then needing to replenish, and the other you can use again and again without using it up. But you're right, you might outgrow those sneakers, or they might eventually wear out. Durable in this context does not mean indestructible; it's durable, but it can still break or wear out over time. OK, so what do you think about services? Because when we talk about products in this context, we're using the term very broadly, so products include durable and non-durable. And what did we say last time about services? I think they're more durable? So if you remember, last time we made a distinction: we said these are goods, and then we have services. So when we talk about products, we're going to use the general classification, the term products,
and products consist of goods, which could be durable or non-durable (sometimes we use the term consumable), and also services. Two different classifications of products. All right, let's see, we still have a little bit of time. Next time we're going to talk about branding and the product life cycle, and we'll talk about introduction, growth, maturity, decline, obsolescence, and revitalization, which are the key stages of the product life cycle, which is very important. But let's touch upon convenience products, shopping products, and specialty products, and why it's important to make this distinction. The reason it's important, just like the distinction between durable and non-durable, is that it's going to influence our marketing strategy and tactics. We need to classify the goods, we need to understand that, because that's going to define our strategy. So what would be an example of a convenience product? Food, orange juice, right. Convenience products are easily accessible products that we buy frequently: it could be juice, it could be types of food. That's a very good point about retail; you started to address the issue of product placement, whether it's in the back of the store or the front of the store. In retail we also look at what are called adjacencies. Adjacencies are what's on the shelf next to our product, what's on the shelf below our product, what products are on the other side of the aisle: are they complementary products or substitute products? Do you put the tea kettles next to the tea bags? Do you have dual placement: tea bags in the aisle with coffee, but then a second section with cookware-type items, pots, pans, and tea kettles, where you put tea bags too, and you also put honey next to that? Very important in retail. And what about shopping products? What's the
difference? Pretty much it's almost the opposite of a convenience product: something that we buy much less frequently, and something that we generally spend a lot of time researching before we make a purchase. Another way that we could look at this is to say that convenience products are usually low-involvement products and shopping products are generally high-involvement. Two different models, if you will, two different ways to look at the purchase dynamic, but I think it's applicable here. Something else we need to be aware of is what's called overstocking the trade, and also overstocking the customer or the consumer. What happens is, when we run a promotion, say buy-one-get-one-free, people stock up, right, and so we're going to see a spike in sales for that period. And then what happens the next month? Well, everybody's got, like, a year's worth of honey or tea bags or cereal or whatever it is already, so that month they're not going to buy. And would that be some sort of hybrid: it's a convenience product, it's low involvement, but we don't buy it very often? I think that's not going to change how we classify the product, but how we classify you as the shopper. I would still say that in general the product would still be convenience or shopping; your behavior is a different focus, specifically consumer behavior, and that's what changes. I wouldn't say that changes the classification of the product. What do you guys think: are paper towels still a convenience product even though you might buy them in bulk? I mean, it's still a product that you usually buy regularly, and it's a low-involvement purchase. Yes, I would say these are two different issues: one is how you classify the product, and the other is how we classify your behavior, so whether or not
you're buying, whether it's a planned purchase or an impulse purchase, or you're buying in bulk like you're suggesting. All right, so you guys ready to go? Fabulous. All right, have a good night; we'll do this again soon. |
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 7A_Protein_1_3D_Structural_Genomics_Homology_Catalytic_and_Regulatory_Dynamics_Fun.txt | The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: OK, ready. OK. So welcome to the third phase of our omics discussion. Someone pointed out that I should point out [? this shirt ?] says omics not comics. So we've covered genomics and last time transcriptomics, and today we introduce a very important all-inclusive subject of proteomics. We'll connect it to last week's through the vehicle of focusing on motifs that are involved in protein interactions with the two nucleic acid macromolecules. So we're going to be covering, just as we introduced RNA omics with RNA structure, we're going to spend this entire class talking about protein three dimensional structure, how you get at it experimentally and computationally and its implications for the binding of small molecules such as drugs. We will in short order get to the scary pumpkin-like molecule. So the connection to last week was this diagram showing palindromicity in three cases and a direct repeat in the fourth case. And I offered that this might reflect the symmetry of the proteins-- of the three dimensional structure of the proteins and the three dimensional structure of the nucleic acid and these symmetry elements would align. Now in order to introduce these symmetry elements and the possibility of having codes that you can at least program, even if they may have been tinkered about during an evolution, the question is to what extent can we get our hands on these kind of protein and nucleic acid motifs that interact. 
In order to get at this issue of where there is a code-- and I just take this as one of the ways of dealing with the incredible complexity of proteins is to give this a theme that connects it to the last class and connects, I think, to many of the sentiments of people in this sort of audience interested in computational biology is ways of having simple codes. And of course, the way of one way of breaking up proteins and thinking about them is these ABCs-- the alpha helix, the beta sheet, and the coil. Each of these can be characterized by the hydrogen bonds that hold it together. The weak bonds between the hydrogens, the nitrogens, and the oxygens. And the alpha helix, these are all kept within the helix with a repeat of 3.6 residues per turn. In the beta sheet, they tend to have a longer, straighter chains where there are unpaired hydrogen bonds inevitably until you form enough chains to form a cyclic structure while the alpha helix is immediately helical. There are many different types of coils. It's a catch-all phrase that includes everything except alpha and beta, but a particularly well-formed type of coil has its own nomenclature and parameters is the turn, and the turn is illustrated here at the end of a beta sheet. That beta sheet basically is going in on the lower right hand corner in the direction of the arrow. The arrows typically point from the n terminus to the c terminus just as in nucleic acids is from five prime to three prime. And here it goes in the arrow, it turns around very tightly, and goes back out again. OK, now how can we use these basic motifs? These are the smallest meaningful units of protein three dimensional structure. How can we use these to recognize other macromolecules, other proteins and nucleic acids? So let's connect this to the motifs of last class. We have these motifs that we could find, weight matrices for them by aligning lots of sequences. 
Now instead of aligning sequences, let's see what we can do by mutating both the protein part and the nucleic acid part. And in order to do this, just as an illustration, let's say we have three zinc fingers. This is a real human and mouse DNA binding protein with three zinc fingers in a row. So this is an example of the direct repeat or tandem repeat type of symmetry. Remember, there's the direct repeat and the inverted repeat. And in this tandem repeat, let's anchor the two ends and change the middle. Make every possible peptide sequence in the middle or randomly sample the vast space that might occur in changing a few, say, six or more key amino acids. And then we know from the three dimensional structure that it interacts mainly with the middle three nucleotides, so let's change those middle three nucleotides to every possible trinucleotide sequence and see quantitatively how much that the different sequences in the protein affect the different sequences in the nucleic acid. So this is not going to be by staring at long sequence alignments where we're going to get the weight matrices. We can get them by actual experimental measures of the binding in vitro. And so what happens when you do this exercise, the wild type, now this-- the wild type sequence is something that may want to recognize a family of sequences. We don't know exactly what the wild type sequence is for this particular DNA binding protein, this zinc finger from human and mouse. But the subsequence of the business end, the amino acid subsequence of that recognition alpha helix that's binding into the major groove DNA is shown in the upper left, [? RSVHLTT. ?] And the sequence that it mainly binds to-- remember this is a weight matrix. It's not a consensus sequence, this TGG. It obviously recognizes a variety of other sequences. So remember, there's about three nucleotides on either side of it that the other two zinc fingers bind to. 
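The two combinatorial spaces described above are easy to put numbers on; here is a minimal Python sketch (the six-residue figure comes from the lecture's "six or more key amino acids"; everything else is illustrative):

```python
from itertools import product

# All 64 possible trinucleotide targets for the middle zinc finger.
BASES = "ACGT"
triplets = ["".join(t) for t in product(BASES, repeat=3)]
print(len(triplets))  # 4**3 = 64, coincidentally the same count as codons

# The peptide side is far too large to enumerate exhaustively: randomizing
# six key amino acid positions gives 20**6 possible variants, which is why
# the experiment samples this space by selection rather than listing it.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
peptide_space = len(AMINO_ACIDS) ** 6
print(peptide_space)  # 64,000,000 sequences
```

The asymmetry between the two spaces (64 versus 64 million) is what makes it practical to test every nucleotide target exhaustively while only sampling the peptide side.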
When you try all 64 possible trinucleotides-- remember from the genetic code, this is pure coincidence that these are triplets the same as the genetic code. This just happens to be the amount of the chunk of DNA that a zinc finger will cover. But it's not coincidence that 4 to the third is still 64, just like the genetic code. And if you run through all possible nucleotide sequences for this wild type, you find the winner is TGG, and it has that particular binding constant. The binding constant is measured in the molarity, roughly where you get half maximal or equilibrium binding. That's 10 to the minus 9 moles per liter. You can now mutagenize the peptide and select for peptides that bind to GCC. Remember, the flanks are kept constant. You get two peptides, both bind to the GCC. They both give matrices very similar to this with very-- these are very high affinity binding constants just like wild type. You've essentially engineered by selection a new specificity and two different ways of getting it. If you now go for something radically different, now no GC-- the first one was high GC, the second was pure GC, and then the third one is pure AT. And you get another peptide sequence that binds to that, and you get another weight matrix. Now remember, this weight matrix is not sequence alignment. This is binding constants where the weighting of the 64 different sequences is based on how much binding you get for each of the 64. And then for some sequences, all these different trinucleotides all result in a rather poor selection for any kind of peptide out of all the vast number of peptides you have. None of them do particularly well, and the result is a weight matrix here which has very little information content. So this is a way. 
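One way to see concretely what a weight matrix built from binding constants (rather than from a sequence alignment) means is the following sketch; the affinity numbers below are made-up placeholders, not the study's data:

```python
# Hypothetical association constants (per molar; higher = tighter binding).
# Real data would cover all 64 trinucleotide targets.
affinity = {"TGG": 1e9, "TGA": 2e8, "AGG": 1e8, "CCC": 1e6}

def weight_matrix(affinity):
    """Position weight matrix in which each base at each position is
    weighted by the summed relative affinity of the target trinucleotides
    containing it, instead of by counts in an alignment."""
    total = sum(affinity.values())
    wm = [{b: 0.0 for b in "ACGT"} for _ in range(3)]
    for seq, k in affinity.items():
        for pos, base in enumerate(seq):
            wm[pos][base] += k / total
    return wm

wm = weight_matrix(affinity)
# Weights at each position sum to 1; with these placeholder numbers the
# third position is dominated by G, reflecting the strong TGG/AGG binders.
```

A selection that recovers no strong binders for any peptide would yield nearly uniform weights at every position, i.e. the low-information-content matrix the lecture describes.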
It's not the only way, but this is a way of getting a really good empirical data set which, in principle, you can combine with similar functions on the flanking fingers, and you can dial up any sequence for a nucleic acid-protein interaction, at least with this class of proteins. Others are a little more problematic. But you can see how this can generate a code even if the actual detailed amino acid-nucleotide interaction is not so simple. So those are the results of the study. Now I'll show you a snapshot of a schematic of how the actual experiments are done, and then finally a little math behind how we got those apparent binding constants. Remember, a lower binding constant means you get binding at a lower concentration, which means a stronger binding. The way you do the binding here is you take a nucleic acid array, similar to the ones we talked about in the last class. Instead of being single stranded, ready to bind fluorescent nucleic acid, it's double stranded, ready to bind a fluorescent protein complex. The protein complex in this case is a bacteriophage, which is displaying the three zinc fingers in red. The middle one is the one that was mutagenized in the past slide. And similarly, the array is combinatoric-- every possible DNA sequence that you're interested in is present. What you do is quantitate the fluorescence of the zinc finger protein indirectly, by the binding of the covalently attached phage to antibodies which are fluorescently labeled by [INAUDIBLE] fluorescence. The more binding, the more fluorescence. But how you relate that to the binding constant we had in the previous slide is the subject of this slide number eight. We call this the apparent equilibrium association constant because these experiments, just like much of the binding in living cells, are not at equilibrium. It's a dynamic process, in the cell and in vitro.
There are ways that you can measure true equilibrium constants, but this one is apparent in the sense that you need to wash off the excess fluorescence in order to detect the fairly low signal that you get from the specific binding-- you bathe it with a 10-to-the-sixth-fold excess of fluorescence. And as you're doing that wash, you're obviously not at equilibrium. In the end, you take a snapshot before you wash off everything. So what you're basically attempting to measure is the equilibrium constant between protein P, in the upper left here, and double-stranded DNA D, going to the product, which is this P·D bimolecular complex. This is basic physical chemistry and algebra: you rearrange that to get the association constant, which is what you want. And the fraction of DNA molecules with protein bound can be found from this. Just by definition, the fraction of DNA molecules with protein bound has the protein-bound DNA as the numerator-- the brackets mean concentration in moles per liter, just like we stated-- and you divide the complex by the total DNA, which is the DNA present in the complex plus the DNA that is free. That's D plus P·D. Then you just substitute in the definition of the association constant-- product over reactants-- and you get this intermediate term. Now you cancel out the concentrations, and you end up with the last term on the far right, where, if the protein concentration is known, the whole thing is constant except for the association constant. So one over the association constant is directly related to the fraction of DNA molecules with protein bound, which is directly proportional to the signal intensity in fluorescence. So this is how you get the numbers that were on that slide.
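The algebra described on this slide can be written out compactly. With P the protein, D the double-stranded DNA, and P·D the complex, the association constant and the fraction of DNA bound are:

```latex
K_a = \frac{[P{\cdot}D]}{[P]\,[D]}, \qquad
f \;=\; \frac{[P{\cdot}D]}{[D] + [P{\cdot}D]}
  \;=\; \frac{K_a\,[P]\,[D]}{[D] + K_a\,[P]\,[D]}
  \;=\; \frac{K_a\,[P]}{1 + K_a\,[P]}
```

So at a known, fixed protein concentration [P], the fraction bound f-- which is proportional to the measured fluorescence intensity-- determines Ka directly.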
Now let's return to the question that connected this talk with the last one, which was the symmetry of DNA-protein interactions. We already illustrated one of these three-zinc-finger complexes, on the left-hand side here. The double-stranded DNA is in blue, and the three zinc fingers follow along the major groove, the large groove of the DNA. The textbook is wrong here in a couple of ways: first of all, it emphasizes the non-helical part of the zinc finger-- you can barely see the helix against the background there. And also, in the way it loops through the DNA-- if you look at this carefully in your textbook, this is actually the [? Mount ?] book-- it actually interdigitates with a phosphodiester bond, basically going through the base pairs, which is not at all what happens. Similarly, the leucine zipper: this is, again, recognition by a helix in the major groove of DNA. Here you can see the helix that causes the dimerization of the proteins. You can think of this as your most elementary protein-protein interaction code, a very fundamental one that comes up again and again, the so-called coiled-coil: two alpha helices interacting. As a direct extension of that, almost coaxial with the coiled-coil protein-protein interaction, the helices go down and touch the DNA. In contrast, in the textbook, the helix does a sharp right-hand turn and, in a way poorly schematized there, goes coaxial to the DNA. That's not what happens. The helices are more or less direct extensions down from the dimerization region of the protein, remaining almost perpendicular to the DNA axis. So again: on the left are the three tandem repeats, and on the right is a dyad axis, where the twofold, 180-degree symmetry of the DNA on itself-- think of it as rotating exactly 180 degrees; this is not a mirror, this is a rotation-- coincides perfectly with the twofold symmetry of the protein association with itself.
These are the two major symmetry classes, and it's amazing how many nucleic acid-protein interactions fall into one of these two classes-- direct repeat or inverted repeat. That's one reason why, when you find direct and inverted repeats in nucleic acid sequences, you get a little excited. The other reason is the hairpin structures that you found in RNA, which we talked about in the last classes; those also are indicated by inverted repeats. So we now have a semi-empirical way of computing-- in a certain sense, predicting-- new regulatory protein-DNA interactions with double-stranded DNA. Can we extend this to RNA? This is a much more complicated situation, because with RNA you don't have these long perfect double helices anymore; you have the very short RNA helices that I showed in the last couple of classes. This is transfer RNA, one of our favorite molecules here, with the anticodon at the bottom of each of the pink structures and the amino acid acceptor three prime end of that 70-some-nucleotide-- 70 to 80 nucleotide-- nucleic acid. The pinks are all the tRNAs, and there are at least 20 different types of amino acids, at least 20 types of transfer RNAs, and 20 types of proteins that add the amino acid onto the three prime ends of the transfer RNAs. These proteins break up into two major classes, which can be recognized at the structural level. Class 1-- in the single-letter amino acid code, [? CEL, ?] and so forth: cysteine, glutamate, and so on-- is one class, which are structurally similar to each other. Class 2 is structurally dissimilar to Class 1, but its members are similar within the class. Anyway, the point is that they recognize all different parts of the nucleic acid: not just the anticodon, which is the code itself, the trinucleotide code, and not just the amino acid end, where you need to recognize the amino acid, but various points along the transfer RNA.
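Since the lecture says you should get a little excited when you see inverted repeats, here is a minimal sketch of how you would scan for them. The EcoRI-style site GAATTC and the toy sequence are just illustrations, and the fixed window size of 6 is an arbitrary choice.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def has_dyad_symmetry(site):
    """An inverted repeat (dyad) reads the same as its own reverse
    complement -- the twofold symmetry many protein dimers bind."""
    return site == revcomp(site)

def find_inverted_repeats(seq, size=6):
    """Report start positions of windows with perfect dyad symmetry."""
    return [i for i in range(len(seq) - size + 1)
            if has_dyad_symmetry(seq[i:i + size])]

print(has_dyad_symmetry("GAATTC"))          # True
print(find_inverted_repeats("TTGAATTCAA"))  # [2]
```

Real binding-site searches allow mismatches and a spacer between the two half-sites, but the twofold symmetry test is the same idea.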
If you wanted to create a new code, as these authors have, or to create hybrids between these various things, you'd have to find homology among the proteins, or graft domains of recognition between each one, or mutagenize particular regions that are known to interact with the nucleotides you want to contact, and that's been done. You can arrange to incorporate a new amino acid by carving out the pocket that recognizes the amino acid and grafting on the appropriate nucleotides so that it recognizes, say, a stop codon. OK, you've had some programming experience that hopefully will prepare you for the real world of interacting with input and output from various devices. The topic today is proteins, and this really is the main contact between the environment and the exquisite regulatory mechanisms-- which we've already touched upon, but which will really be the topic of some of our network analysis in the last three lectures. Here, we need sensors to sense the environment. We need actuators to deliver back into the environment what the cell wants to do, or to interact with other cells. You have to have feedback, synchrony, and so on, so that you can basically program the almost-digital nucleic acid world inside the cell, but via clearly analog inputs and outputs. Since this is the Halloween lecture and I'm masquerading as the Wolfman, I've also listed some of the scariest proteins that I could think of, and we're going to talk about three of them. One of them, in this slide, is the set of proteins that are actually involved in causing the symptoms of anthrax. Then we'll talk about HIV yet again-- this time, polymerase mutants that cause drug resistance. And then ApoE yet again, as we have in the past, this time talking specifically about how protein structure tells us about the haplotype. So with anthrax, you start out with this simple two-component structure-- two protein domains here.
They bind to something on your cell surface-- hopefully not yours, but a human cell surface. Then one of the domains disappears, and the remaining one self-assembles into a seven-mer, with seven-fold symmetry. Remember, we were talking about two-fold rotational symmetry for the DNA-protein interaction; this is now seven-fold rotational symmetry. That now allows lethal factor, LF, to bind-- still not inside the cell. But the whole complex gets internalized. Topologically, it's still as if it were outside the cell when it's inside this little vesicle; it has to get through that membrane. But now the pH change that happens when this vesicle goes into the cell, part of the natural cell biological processes, causes an unfortunate act where the seven-mer complex of proteins does yet another conformational change and turns into this hairy beast that allows the lethal factor to get into your cell and kill it. So you can see that when we're talking about protein three-dimensional structure, whether we're predicting it or solving it, a protein is not a static object. Here, it associates with one factor, it associates with seven of itself, it interacts with lethal factor, it opens up a whole new channel in the membrane, et cetera. You need to think of these as dynamic systems with many different states. We also need to think about time scales. For the molecular mechanics we'll be talking about, the timescale of relevance is the femtosecond up to, well, a couple of nanoseconds-- so 10 to the minus 15th to 10 to the minus 9th seconds. That's atomic motion. The turnover of an enzyme-- that is, the time it takes for a small molecule, say, to find and bind the enzyme, possibly go through a catalytic step, and dissociate as a product-- is on the order of microseconds to milliseconds.
And the second range is the time that it takes a molecule-- the drug or small molecule-- to touch the surface of the cell, maybe diffuse across the cell, and find its target. For transcription, which we talked about along with all of its regulatory mechanisms last time, the rate constant for that process is around 50 nucleotides per second. Not entirely coincidentally, that's about the rate at which it is translated into protein. These are important numbers, because a typical gene-sized piece-- say, after RNA splicing in higher organisms, or naturally-- might be a kilobase. So that's about half a minute to transcribe and translate. That could be used as a timer in a circuit operating on these longer time frames: cell cycle, circadian rhythm, very long time frames in ecological systems-- with bamboo and various pests, these can be even longer than yearly-- and then development and aging, which can be on the order of hundreds of years, at least for humans, turtles, and whales. So what we think protein structures are good for depends on the accuracy, and the accuracy depends on the method. At the very bottom right, we have a very appealing approach, which is de novo, a priori, or ab initio prediction of protein three-dimensional structure from the sequence alone, which we're getting in bucketloads from the genome projects. But unfortunately, the accuracy so far-- and we'll delve into this in more detail in a moment-- is on the order of six angstroms of difference between the predicted structure and what it actually is by the more precise methods higher up on this plot. The y-axis here, the vertical axis, is basically sequence identity, which matters as you start doing, say, threading or comparative modeling-- the ab initio or de novo prediction doesn't require any sequence similarity. If you want to build based on previously solved structures, you need at least 30% identity, but you're still fairly far-- say, 3 to 4 angstroms-- from the native structure.
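The half-a-minute figure follows directly from the rates quoted above. A quick back-of-the-envelope check, using the lecture's round numbers:

```python
RATE_NT_PER_S = 50.0  # approximate elongation rate quoted in the lecture
GENE_NT = 1000        # a typical ~1 kb gene-sized piece

# Transcription and translation proceed at roughly the same rate
# (in nucleotide equivalents), so each pass over the kilobase takes
# about 20 seconds.
transcription_s = GENE_NT / RATE_NT_PER_S
translation_s = GENE_NT / RATE_NT_PER_S
total_s = transcription_s + translation_s
print(total_s)  # 40.0 -> on the order of half a minute
```

That order-of-magnitude delay is what makes a gene-sized transcript usable as a timer element in circuits operating on the longer time frames listed above.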
As you get to 1 angstrom or better in your accuracy, as you can get from NMR and X-ray crystallography, you are now in a position to study catalytic mechanism and to design and improve ligands, such as drugs. This is really where we want to be. There may be a day when we can do this all from ab initio prediction, or from modeling at very great distances. But for now, modeling at very short distances-- say, 80% to 90% amino acid similarity-- is important. This is just one example of a vast literature where you can use some of the methods we'll be discussing in this class, doing molecular mechanics on proteins and predicting their three-dimensional structure in complex with various drugs. We will contrast this with, or show the interplay with, computational biology that can be aided by actual measurements of drug binding, just as we had actual measurements of zinc finger binding to double-stranded DNA, and ways that you can discover the small molecules by a clever use of the parts that you know bind and the parts that you know might be variable in a chemical sense. Now, just as there is this dynamic competition between pathogens and their hosts, there's a similar lethal game played between pathogens and the pharmaceutical industry. And here is HIV, for which there are many drugs now, aimed either at the protease or at the polymerase-- some of the first ones were aimed at the polymerase, and so we have a big collection. This is one of the most sequenced molecules on Earth: the HIV gene encoding the reverse transcriptase polymerase. It's been sequenced many times because as a patient takes the drugs, their population of the AIDS virus changes.
And each of these little diamond-shaped substitution sites is clustered around the binding site in the protein-- the binding site is indicated here, with the substrate in space filling: the triphosphate on the upper left, and the template kind of curving around on the right-hand side of that space-filling bright green structure. The protein is in red. These little diamonds indicate substitutions, where the nomenclature is: single-letter code for the wild type, then the position in amino acids from the N terminus as the number, and then the last, far-right letter is the new amino acid. So, for example, D67N means an aspartate at position 67 in the wild type changes to an asparagine. And that causes drug resistance in the HIV, with unfortunate consequences for the patient. Now, making mutations in polymerases does not have entirely negative consequences, and I'm going to show you a really beautiful example with a DNA polymerase-- a very similar kind of dynamic, where you want to change the DNA polymerase so that it can now handle what would normally be an inhibitor of DNA polymerase: a class of nucleotides that is capable of being incorporated into the growing, replicating genome, but is not capable of extending. That makes it a powerful inhibitor, and it's also a powerful sequencing reagent-- these are the dideoxys. And one of the things that was noticed, as the sequences of some of these polymerases and some of the resistance mutants were being studied, concerns the complex with the nucleotide-- whether it's a deoxynucleotide, shown here with a 3 prime hydroxyl, which could then be extended by bringing in the next 5 prime phosphate. This hydroxyl is near in space to the position of a phenylalanine or a tyrosine, position 762 of this polymerase: if it's a tyrosine, it has an OH there, and if it's a phenylalanine, you lose the O and you just have a hydrogen there.
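The D67N-style nomenclature is regular enough to parse mechanically. A small sketch, assuming the standard 20 one-letter amino acid codes:

```python
import re

AA = "ACDEFGHIKLMNPQRSTVWY"  # standard one-letter amino acid codes

def parse_substitution(code):
    """Parse nomenclature like 'D67N': wild-type residue,
    position counted from the N terminus, new residue."""
    m = re.fullmatch(rf"([{AA}])(\d+)([{AA}])", code)
    if m is None:
        raise ValueError(f"not a substitution code: {code!r}")
    return m.group(1), int(m.group(2)), m.group(3)

print(parse_substitution("D67N"))   # ('D', 67, 'N')
print(parse_substitution("F762Y"))  # the polymerase mutation discussed next
```

A parser like this is how resistance-mutation tables from many sequenced patient isolates get turned into the positions plotted on the structure.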
And when you have the phenylalanine there, there's an appropriate space that accommodates the 3 prime hydroxyl quite well. But if you now put in a dideoxy inhibitor, you have too much space in there, and you start trying to fill that space with other molecules, like water. Basically, the binding becomes much less favorable when you're lacking both oxygens. So this presented an opportunity to engineer some polymerases which had a phenylalanine there to become more accepting of the dideoxys, and hence better at using dideoxys in DNA sequencing chemistry. And this was done simply by engineering in that oxygen: by changing the phenylalanine to a tyrosine, you now make a better fit. You can think of it as being able to have either oxygen-- by removing this oxygen on the nucleotide side, you replace it with an oxygen on the protein side. I'm trying to emphasize, with a few examples here, the idea of complementary surfaces and how you can engineer them. This is a beautiful case. Now we're talking about a single atom, rather than the complementary surfaces of the nucleic acids we were talking about before. This has an 8,000-fold effect on the specificity of this polymerase, and it had a big impact on the Genome Project. Now, that's how we program a particular atom to achieve an important goal. And of course, the virus has its own mechanisms for programming, typically by random mutagenesis and selection, as we talked about in the population genetics. But the ways we program proteins in general are either transgenics, where we might overproduce the protein, or homologous recombination, which is the ultimate: if the gene encoding the protein is already present, we can go in and change that particular nucleotide in situ, in the correct place, so it's properly regulated and everything. That's a great way to do it. Point mutants are not the only way to generate conditional mutants.
Many of them historically were. But there are other ways you can program conditionality-- conditional meaning that you can regulate under what conditions the protein is expressed or not, or active or not-- with an entire domain, or with single-nucleotide polymorphisms. So this is one way, the nucleic acid way. Another way is by modulating the activity of the proteins from the outside with drugs or drug-like molecules-- chemical genetics. And under the subheadings for that: you can make these by combinatorial synthesis, and we'll show an example of that. Combinatorial synthesis can be based on design principles, not just completely random-- though usually they are random. The design principles can take into account what you know about the nature of the interaction of similar proteins. And you can mine whatever biochemical data you can collect for so-called quantitative structure-activity relationships. This is a slightly different discipline from the detailed crystallographic and quantitative studies that we've talked about so far; here, you're trying to basically mine through the structures of the ligands themselves for the parts of the ligand that might be responsible for the activity-- the binding activity, or the full biological activity that you see. So let's look at some examples like the single-nucleotide polymorphisms we've been talking about in previous classes-- actually, this is a class of examples that we didn't discuss before, but it's related to what we've been talking about. In the case of the zinc finger, we made an altered specificity: we made new zinc fingers which bind to completely new trinucleotides. With the DNA polymerase, by changing one amino acid, we could make it accept, almost four logs better, an inhibitor-- which is very useful. And here, many of these are enzymes, where you can not just knock out the enzyme, but actually make it recognize a new substrate, or radically change the binding constant and catalytic rate for new substrates.
And I just have this long fine-print slide to impress upon you-- this is actually less than half of the list-- just how many examples there are. These are not that unusual, and they can be designed or naturally occurring. Now we're going to take the three-dimensional structure of proteins and connect it with our discussion of haplotypes and single-nucleotide polymorphisms. You may recall that one of the commonly occurring polymorphisms is the ApoE4 allele. It's present in about 20% of the human population, even though it has unfortunate consequences, we think, mainly for Alzheimer's-- it increases the risk of Alzheimer's, and probably of cardiovascular disease, too. ApoE refers to its involvement in cholesterol metabolism and transport. The ApoE3 allele is present in about 80% and is far more common in human populations, but both of these would be considered very common alleles. We also mentioned that the ancestral form of this, found, for example, in chimpanzees at nearly 100%, is arginine 112, instead of what's now common in human populations, cysteine 112. One explanation for that might be physiological: our nutritional standards have changed-- we now eat a lot more fatty things, and we live long enough to get Alzheimer's-- so maybe this bad allele, E4, was good in chimpanzees, which have different diets and lifespans. But the other possibility-- and I can't really distinguish between these right now, but it's one to seriously consider, not just in this case, but in cases in general-- is that you no longer just think about single nucleotides; you think about haplotypes. Everything in cis on that DNA strand has a chance of affecting either the expression level of the protein or, in cis on the protein strand, of folding back and interacting. And you can see one of the nearest amino acids to this arginine 112, which is the main difference between ApoE4 and ApoE3.
Arginine 61 is the same in the two human alleles. But you think of this as one haplotype, and in chimpanzees the haplotype now has threonine 61. And you can think that [? 3R ?] in chimps, or ancestrally, is not too different from [? Rcys, ?] just in a different order. So it's like a compensating, complementary mutation, just like we had with the oxygens in the polymerase a couple of slides ago. And you can think of compensating mutations the way we had mutual information for doing the RNA structure: think of complementary surfaces. When you think of single nucleotides, don't think of them alone; think of them as haplotypes and possibly complementing constellations. Now, this brings us to the possible impact of three-dimensional structure on predicting deleterious human alleles. If we suddenly had the sequence of everyone in this room and we wanted our computer program to prioritize-- which ones should I pay attention to? Which deviations from the most common allele should I look at first?-- well, you might think of these things in terms of proteins. We've now gotten to the point in the course where we're talking about proteins, so you need to think about the three-dimensional structure. Who's near whom in the structure? You can think about binding sites; these might be indicated if you know the three-dimensional structure, or if you know the conservation pattern in this family of proteins. You can ask things about charge: in that last slide, we had the charge of the arginines being near a compatible partial negative charge on the cysteines or threonines. A disulfide is a very important thing to lose; they tend to be highly conserved. And if you introduce a proline into what would normally be an alpha helix, this is something where knowledge of the three-dimensional structure would say, a priori, without any knowledge of conservation: oh, that proline could be a huge change in the three-dimensional structure.
And then these multi-sequence profiles are a good way of looking at the conservation. That's a way of prioritizing single-nucleotide polymorphisms that might have an impact on pharmacogenomics or disease in general. Now, as we integrate that with the chemical diversity that we can create-- that's the topic for the next few slides: how do we create chemical diversity? And I'm going to introduce the idea of chemical diversity in a way that I hope nicely connects to where we've been with nucleic acid arrays. These arrays, like the double-stranded DNA array that we used earlier in class today, can be generated in a combinatorial sense: you can make an exhaustive set. Now, typically, those were made so that, spatially, they were isolated-- each different nucleic acid, each oligonucleotide, is present in a different place, identifiable to the computer by its coordinates on the array. But you can also make them as a big mixture and use them as a mixture and do selection on them, as we did with the phage display. Or you can make them as a mixture of solid-phase particles and then separate the [INAUDIBLE] phase particles out in some manner. Solid phase comes up again and again in arrays. It's very obvious why you have a solid phase in an array: you want to be able to address each species by its position in x and y. But the other, technical reason is that it's a fantastic way of getting purification of your products simply by washing, rather than doing complicated purification procedures. And in the case of beads, you can think of it as the ultimate flexible array: you can move the beads around, put them in new arrays, and identify them later. Anyway, we're going to introduce the general way of making complex chemicals, whether they're linear polymers like proteins or nucleic acids, or much more compact small molecules. They involve similar concepts. There's the solid phase that I already talked about.
There's the idea of protecting groups, and the protecting groups are protecting against a reactive group. The highly reactive group here is the phosphoramidite, which has this phosphorus-nitrogen bond. It's capable of reacting with just about any nitrogen or oxygen, such as this oxygen at the 5 prime position once you deprotect it-- these two oxygens are the 5 prime and 3 prime, just like the ones we've been talking about all along. This is the chemical synthesis version of the polymerase that we've been talking about. So you have these reactive groups and the protecting groups, and those are the major concepts. Now let's go through it. The topic here is proteins, and we'll talk more about protein synthesis as part of quantitation next time, and as part of networks in the last three lectures. But here's a completely synthetic way of getting short peptides: either by directly synthesizing the peptides, or by synthesizing a nucleic acid that encodes that peptide or interferes with the production of that peptide. And you can think of these as drug-like molecules. They can be analogs of nucleic acids and proteins, not just the straight ones, and we'll talk about opportunities for making these analogs. By making analogs of known proteins or nucleic acids, you have a more immediate connection between the thing that your computer instructed the synthesizer to make and your targets; if you make a random chemical, you don't necessarily know what your target is. But we'll talk about ways of making slightly less random chemicals. The process is cyclic, in the sense that each cycle you return, and the polymer gets a little bit longer. You start with one monomer on a solid phase, shown by the little hexagons on the far right side of the slide. And you remove one protecting group on the immobilized polymer.
Then you bring in this reactive group, otherwise protected, and there's really one major product that you expect. You wash off all of the excess; you now have a polymer one unit longer. You deprotect-- this DMT group is removed-- and you go back up and cycle again. There may be additional steps, such as an oxidation, which will stabilize the new bond that you've made, or a capping step that can soak up any excess that was left over. But in general, after you're done with all of these cycles, there'll be a step where you remove the protecting groups altogether and remove the polymer from the solid phase, if you so choose. Or you leave it there, if you have an array. These are other examples of protecting groups; these are now on the bases. Some bases don't need protection, like thymines. If you have an exocyclic amine, then you typically need a benzoyl or an isobutyryl group-- in blue here are the protecting groups. I said there's an opportunity here for modifying the nucleotides or oligopeptides or other chemicals to make them related to, but not identical in every property to, the normal constituents of your body or of a bacterial cell that you're aiming at. And why would you want to make derivatives? Why not make the exact thing? The reason you want to make derivatives would be, for example, to increase or decrease their stability, or to make them bind faster or more irreversibly. Examples: in the previous slide, you can make modified bases, and in slide 29 here, you can change the backbone itself. You can change the riboses so that they have bulky groups that prevent the nucleases from getting in, or you can exchange oxygens for sulfurs or hydrogens in order to make the phosphodiester backbone itself, which is where nucleases cleave, a less attractive, less energetically favorable substrate. So those are chemical processes that, in a certain sense, mimic the normal polymerases and ribosomes in the cell.
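One consequence of this cycle is that the yield of full-length product decays geometrically with the number of couplings, which is one reason capping and high stepwise efficiency matter so much. A sketch, assuming a hypothetical 99% per-cycle coupling efficiency (a common rule of thumb, not a measured value for any particular synthesizer):

```python
def full_length_yield(n_couplings, stepwise=0.99):
    """Fraction of immobilized chains that succeed at every coupling,
    assuming each cycle independently succeeds with the same
    (hypothetical) stepwise efficiency."""
    return stepwise ** n_couplings

# A 20-mer needs 19 couplings after the first immobilized monomer.
print(round(full_length_yield(19), 3))  # 0.826
print(round(full_length_yield(99), 2))  # a 100-mer: only about 0.37
```

The capped failure sequences are what keep the truncated minority from growing further and contaminating the full-length product.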
And there are analogous processes to generate chemical diversity on smaller molecules, and, analogous to that, biological mechanisms by which you can make small-molecule diversity that are less cyclic than the processes we just talked about. These are more a set of ordered reactions with a conceptual repeat, but in a sense you can think of it as a linear program that goes from the beginning to the end. And that's what makes these polyketides, which are shown on the right-hand side of the slide. A large class of pharmaceuticals, including most of the antibiotics, are made by a fairly small set of organisms, such as Streptomyces and certain plants. And the process by which they're made, which will be illustrated in more detail in the next slide, is very akin to fatty acid synthesis. Fatty acids are long hydrocarbon chains, and adding each pair of carbons onto that fatty acid is a process akin to the one for making these polyketide drug-like molecules. Another way of biologically making a very compact structure actually uses ribosomes, but it uses them to make very tight, short peptides-- a precursor that then folds, with lots of disulfides, to make something small and highly cross-linked. Now, the fact that these cone snails have gone to the trouble of making hundreds of these different very small peptides with these properties tells you something about what drug-like molecules have in common. They're small, so they diffuse quickly and get to their site, you can have large amounts of them in a small space, and you can manufacture lots of them. And then they have to be highly cross-linked in order to maintain rigidity: because they're small, they have less surface area to bind to their binding pocket, so to compensate for that, they have to have a lot of rigidity. And the thing that you lock in rigidly has to be the correct structure.
It does you no good to have a rigid structure that isn't really perfectly complementary to that surface. And the third source of biological diversity is one you're probably more familiar with, which is the immune receptors, the B and T cell receptors, the antibodies, and cell-mediated immunity. And these use recombination machinery to program various combinations of nucleic acid motifs that encode protein motifs. And as they do that recombination, they have further diversification that occurs due to a template-independent polymerase, terminal transferase, which will extend a few nucleotides of completely random nucleic acid sequence, incredibly accelerating the rate of mutagenesis-- basically generating sequence de novo. This is one of the examples in biology where you generate sequence de novo. And I think that's very apropos of this combinatorial topic. Now, this is a beautiful example from the previous slide. Those polyketides on the far right are now the star of this slide. And here, in a certain sense, nature has-- and now scientists have-- engineered protein modules to make this linear sequence of events. You can think of it as using a linear set of protein domains to program a very complicated series of chemical reactions, the same way the linear sequence of messenger RNA tells the ribosome to make a series of additions in the protein. The ribosome catalytic cycle is a cycle, while this is more a linear tape, a linear series of events. The proteins themselves, these little arrow-shaped things with boxes in them along the top, labeled Module 1 through 6, those proteins are, of course, made on ribosomes. But then they act kind of like the solid phase synthesis, where the acyl carrier protein, ACP in the box, binds to the first monomer, and it starts transferring it from protein to protein along this multi-domain huge protein. And there are actually three proteins in a row here.
And each of the steps is taken in order along the protein, and they involve things like a synthase step, where you bring in another monomer; a ketoreductase step, KR, where you'll reduce one of the double bonds; or an acyltransferase step, AT. But you can see each of these has a substrate specificity. And by changing the order of substrate specificities, you can build up a huge combinatorial collection, here in microbial communities and also in the laboratory. Now, protein interactions. We're just beginning to talk seriously about protein interaction assays. I think many of you either are or will be more and more familiar with protein interaction assays. In the next class, we'll talk about ways of getting direct information on proteins through cross-linking and mass spectrometry. Another way is indirectly setting up these reporter assays, where you take advantage of the binding properties of known proteins to analyze two unknown proteins. And so a known protein might be LexA, which binds to DNA, and B42, which activates transcription of something for which you have a good visual assay, like URA3-- life and death. And by taking these two knowns, LexA and B42, which have known properties-- but they only exert their properties when they're brought together, and they're only brought together if the unknown or partially known proteins to which they're bound interact with each other. And so this is a so-called two-hybrid assay, and there are variations on it. And I think we mentioned one where you can characterize nucleic acid-protein interactions with a one-hybrid assay. And here, you can inhibit this interaction between the two knowns-- sorry, unknowns or partially known molecules in blue here. Here's TGF beta, a growth factor, and a binding protein.
You can inhibit that with a particular small molecule, or a collection of small molecules, which can be introduced from one of these combinatorial syntheses into an array of these cellular assays. And you get this information about-- you can either collect a big data set of proteins that interact, from a proteomic-scale experiment, or of molecules that inhibit one or more of those interactions. This is a source of information which is intrinsically computational, in the sense that there's a large amount of it. You can model the three-dimensional structure of this interaction if you have sufficient data to do that. And you can model the impact of the small molecules in a structure-activity sense. Now, if you look at the top right-hand part of slide 34 here, you can see this huge diversity of all of these different colored shapes. And if you wanted to use these in a combinatorial assay, you'd connect them in every pairwise combination and try them against your target by some bioassay. However, if that library is too large either to make or to screen-- typically, to screen-- then what you can do is study a part of the molecule, take the subset of the diversity that can bind, characterize that subset, and now make only the pairwise combinations of the two half-molecules. And generally speaking, if the geometry is fairly rigid, then the binding constant will be roughly the product of the two binding constants. So if each one has a very low binding constant, then the combination will be roughly the square of that. And you get to some point of diminishing returns, eventually. So this is an example of a strategy where you use a little bit of prior knowledge-- which can be empirical or purely computational-- about how to limit your library and make interesting combinations.
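The screening-economy argument above can be sketched with illustrative numbers (the library sizes and hit rate are assumptions, not figures from the lecture): screening the two half-libraries first, then combining only the hits, costs far fewer assays than screening all pairwise combinations.

```python
# Hypothetical numbers: two half-libraries of 1000 fragments each,
# with 1% of each half binding detectably on its own.
n_left, n_right = 1000, 1000
hit_rate = 0.01

# Brute force: synthesize and screen every pairwise combination.
full_library = n_left * n_right

# Staged: screen each half alone, then combine only the hits.
staged = n_left + n_right + int(n_left * hit_rate) * int(n_right * hit_rate)

# If the linker is rigid, the combined association constant is roughly
# the product of the halves' constants, so two weak hits can pair into
# a strong binder.
print(full_library, staged)  # 1000000 vs 2100 assays
```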
Now, we've been talking mainly about the kind of chemical diversity we can get that's aimed at the ligands that bind to target proteins. But now we want to talk about the source of information about those target proteins themselves, which is another genomics project: structural genomics. And typically, we want to select targets for binding drugs, or select targets for solving the structures of proteins in order to look at their ligands in more detail. And how can we do this? How can we decide which targets are high on our list to go for next? We have hundreds of proteins for which we have three-dimensional structures, and for some of them, we have information on what ligands they bind. But these are other criteria that are sometimes used in the field for target selection. If they are homologous to previous interesting targets, then that puts them high on the short list. If they are conserved-- and we've talked about how important conservation is from time to time-- then that might be an approach. If a gene is conserved and you knock it out, then you might expect that to be lethal, and that might make it a good target for an antibacterial. If you want to limit the action of your therapy to the surface-- in order to, say, reduce cross-reaction with internal molecules-- you can sometimes restrict yourself to the surface-accessible proteins. And in fact, a very large class of drugs is aimed at surface-accessible membrane proteins, so very often those are prioritized high. The surface-accessible proteins are also important if you're talking about vaccines, which are increasingly important, or diagnostics that are non-invasive, or at least not going to the interior of the cell. And there are ways, say with microarrays, that you can ask which genes are differentially expressed in the disease state. And that causes high prioritization.
Now, once you have that prioritization, which comes from, say, genome sequencing and some of those other facts, then-- you've got your target, you've got your gene sequence for that target. How, then, do you get the three-dimensional structure that helps you design drugs or improve the drugs that you have? Well, one very attractive approach, given a protein sequence, which you might get from the deluge of genome sequences-- the practical approach might be to start with this gene sequence, which is 99.99% accurate, and try to predict the three-dimensional structure of the protein and its ligand specificity. And if you walk through these, these are kind of ballpark estimates, some of them better than others. Getting from the sequence to exons-- we talked about this before-- might be 80% accurate. Remember, these numbers really should be broken down into false positives, false negatives, and so on. But in this ballpark, it's 80% getting to exons. Then exons to genes: if you aren't privileged enough to have the cDNA, this is an error-prone process with maybe a 30% success rate. Once you have genes, knowing their regulation-- whether they're on or off in the particular cell types you're interested in-- is very difficult right now. Knowing the motifs is barely a start on getting the full regulation, so I would say 10% or less. Once you have a regulated gene, getting the protein sequence is easy; that's the genetic code. Getting from the sequence to secondary structure is easy in the context of some of these other things, but still, the accuracy is only around 77%. I have next to this the reference CASP, which is a competition for critical assessment of three-dimensional structure prediction for proteins that's been held since, I think, '94, and the next one is coming up in a couple of months. And this is kind of the big race, or bake-off, between the different methods. Very exciting.
But unfortunately, over decades, it's still hovering around 77% for secondary structure and about 25% for ab initio three-dimensional structure. Then even if you have the three-dimensional structure at adequate accuracy, getting the ligand specificity is problematic, and it depends on the ligand. If it's DNA, it's a good case. If it's a small molecule, it can be as low as 10% or worse. Now, since each of these estimates is fairly independent, you can get an approximate overall accuracy which is the product of all of them, which is dismally small at about 0.0005. And so it behooves one to use as much extra experimental data as one can, or improve the algorithms that are weakest in this journey from genome sequence to ligand specificity. We'll pick up this thread right after a break and carry on to how we actually get the three-dimensional structure, whether it's predicted or experimental, and the computational tasks there. Thank you. Take a break.
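Multiplying the rough per-stage accuracies quoted above reproduces the "dismally small" end-to-end estimate of about 0.0005 (the stage labels below paraphrase the lecture's ballpark figures):

```python
# Ballpark per-stage accuracies from the lecture, treated as independent.
stages = {
    'sequence -> exons': 0.80,
    'exons -> genes (no cDNA)': 0.30,
    'gene -> regulation': 0.10,
    'gene -> protein sequence': 1.00,   # the genetic code: essentially exact
    'sequence -> secondary structure': 0.77,
    'secondary -> 3D structure (ab initio)': 0.25,
    'structure -> ligand specificity': 0.10,
}

# Independent stages multiply, so the pipeline accuracy collapses.
overall = 1.0
for accuracy in stages.values():
    overall *= accuracy

print(round(overall, 6))  # 0.000462, i.e. roughly the quoted 0.0005
```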
MIT HST508 Genomics and Computational Biology, Fall 2002 -- Lecture 6A: RNA 2: Clustering by Gene or Condition and Other Regulon Data Sources; Nucleic Acid

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. PROFESSOR: OK, welcome to RNA 2, which of course begins with an RNA 1 overview, where we talked about the secondary and tertiary structure of RNA, and how one integrates dynamic programming into those algorithms. That is important in the way we go about measurements, in certain technical senses, and at an interpretation level it affects how we think about the quantitation of mRNA, which was the main topic last time. And then today, after we have the data analyzed-- so that we have RNA quantitation, the random and systematic errors established, some idea of what the interpretation consequences are, and maybe time series data-- the question is, what do we do next? What we do next is basically two things, at least for today's topic. We cluster: we ask which gene expression products, whether they're RNA or protein, go up and down together. And if they go up and down together under a variety of conditions or time points in various conditions, then we want to know why. What is the mechanism by which they go up and down? And to what common goal are these gene products directed? In other words, two different whys: why mechanistically, and why in terms of the way that they can help the entire system. So in order to deal with clustering, we'll go into quite some detail about the options that we have for doing clustering. And you'll see there are quite a number of combinations. We'll go through distance and similarity measures, hierarchical and non-hierarchical clustering, and classification. Now this is kind of the roadmap-- the overview of all the different decisions that we need to make in order to establish gene expression clustering.
Going from left to right, we've got data normalization choices, we've got distance metrics to choose from, linkage methods-- when we link two clusters together or two RNA types together, what method do we use-- and finally, the clustering method itself on the far right-hand side of slide three. And then working backwards from the clustering method, you've got two basic goals, you can think. Typically, when we think of clustering, we're mainly talking about unsupervised methods. That is to say, where we're really letting the data tell us what it has to say-- what gene expression products go together. Possibly an alternative or a [INAUDIBLE] to that would be to ask, can we use those discoveries, in a sense, to supervise classification? So rather than discovering what gene products go up and down together, use those that go up and down together to allow us to classify the different conditions from which the gene expression has been ascertained-- so classify the pathological states, infectious states, cancer states and so on. So now we're going to kind of work backwards from the unsupervised clustering methods and then move into distance metrics and linkage. So we're basically working from right to left on this chart. First, an overview of what the goals of such quantitation and classification methods should be; this has been in a previous lecture. But basically, we can start with the RNA data, which we reduced to a table in the previous lecture. You can think of it as a table of RNA expression levels along the vertical axis and different conditions along the horizontal axis, where we can have fold change, say ratios, or absolute levels. And we can do either clustering or classification. And when we do get to clustering and discovery, one of the things we can do is use motifs to get at direct causality. These are just some buzzwords that you will find coming up in this lecture and problem set and outside.
Just examples of the two types of goals of analyzing gene expression, or even more general collections of quantitative data. Four examples of unsupervised clustering: k-means clustering, self-organizing maps, singular value decomposition, cluster [INAUDIBLE] analysis. You may have heard of these in various contexts. I'm lumping them all together here under this category. And we'll particularly delve into k-means as one example. We could delve into any of them, but we need to get some depth. And then just for your reference, here are some examples of supervised learning if you were going to go into classification. Here are some examples of early attempts at clustering. These are particularly interesting to look at because they were early, with very little prior literature. They tended to take a fresher look at it than you might get in the most recent papers-- fewer assumptions and, therefore, more exposition about where they feel clustering comes in from other fields and is applicable to this field. The main dichotomy that I'm pointing out here is that you can cluster by gene-- that is to say, by RNA or gene product, RNA or protein. Or you can cluster by condition, cell type, or even time course. So you can think of by gene as the vertical axis, at least in the formats that most articles and this lecture will have it in, and by condition as the horizontal axis. Or you can do biclustering, which is clustering by both. And then down here is an example of one of many sources of free software that you can look at, both for microarray analysis and for clustering. The general purpose of this is to divide samples into fairly homogeneous groups. Clearly, because of biological variation that can be meaningful or random, these will not be perfectly homogeneous.
Once we find the coregulated genes by some of the methods we talked about in previous classes, we'll want to know what the protein complexes are that are mechanistically regulating them, and the downstream functions of these. Again, the major dichotomy among the unsupervised learning methods is whether you're doing hierarchical or non-hierarchical clustering. We'll show an example of each. Typically, hierarchical is represented by a tree, very similar to the trees that we have for sequence similarity and for pedigrees, phylogeny and so on. The terminal branches of the tree, or the leaves of the tree, are the individual RNA species, each representing a vector of different RNA quantitations. With the non-hierarchical methods-- and these are visual representations as well as underlying algorithms-- clusters will be represented more as a multidimensional envelope, say a sphere or ellipse, that tries to encompass a set of related gene expression values. Now we'll use diagrams like this-- mainly the two on the far left-hand side of slide nine-- where you'll have fairly tight circular or spherical clusters, where it's pretty evident how they're connected. Or you can have more elongated or more interpenetrating clusters. And how do we deal with these? The key terms that we'll try to define are actually very similar to the ones we talked about before. We have either distance or similarity; these are flip sides of the same coin. The greater the distance, the less the similarity. The dendrograms are the same kind of trees that we've been seeing before. Now the most general way of discussing distance measures is the Minkowski metric. This is actually a set of metrics. And what we're going to be talking about here are two objects, which are really-- for the purpose of discussion-- two RNAs. Call them RNA x and RNA y, gene X and gene Y. They have P features, meaning you have P different conditions or P time points. You will call them dimensions sometimes.
And so this means that the gene expression of x under conditions one through P is compared to the gene expression of y under those various conditions too. You can think of these as vectors with P entries in them. And so the distance is going to be some Rth root of a sum of differences to the R power. And we're going to go through three different examples of this. Hopefully, by the time we've gone through it, you'll see the advantages of this general form and the specific forms. So the three examples will have R equals 2, 1, and infinity on slide 12. These are the most common metrics, and you should see them as fairly familiar. When R equals 2 in that formula, you have the square root of the sum of squares. And this should remind you of your simple Cartesian plotting of the distance between two points on graph paper, where you can take the diagonal, the shortest path. On the other hand, if you are navigating the streets of Manhattan, you will tend not to take diagonals through stone walls. You'll tend to obey the blocks, and you may have to go three blocks this way and four blocks that way, rather than taking the square-root diagonal. And then finally, the last one is the maximum distance you might have to go in any particular direction. So you can think that if you take the Rth root of the sum of the differences in these two measures, x and y-- measures of the two RNAs at the same condition-- then as R goes to infinity, you're going to weight the biggest difference among all the different axes most heavily. And then you'll take the Rth root of that, and it'll basically be the absolute value of that biggest difference. So those are the three measures. But let's see some specific examples. Here we have two points-- the simplest possible case, two RNAs under two different conditions. And let's say, on this arbitrary scale, the distance between y and x along the horizontal dimension, say condition one, is four, and along the vertical, condition two, is three.
That's the difference between them. And where they are relative to the origin doesn't matter in any of these three metrics. The diagonal-- the direct distance or Euclidean distance-- is going to be the square root of 4 squared plus 3 squared, which is going to be 5. For the Manhattan distance, you can't go as the crow flies. You have to go four blocks to the left and three blocks up, and that's seven. And then the maximum of the measures-- if you think of these as many different measures, the biggest distance in any particular direction-- would be four. Now here's an example where the Manhattan distance is called the Hamming distance, when all the features are binary. And why is this interesting? I mentioned, I think in the first lecture, that many biologists, and scientists in general, when they have the opportunity, will classify things as on and off even when there is some underlying quantitative nature. A transistor can be on or off for all intents and purposes. And a gene circuit, or a particular gene's expression, can be considered off or on, 0 or 1. And so now, if you have say 17 different gene expression levels, this can be considered a 17-digit binary string. And the two genes, A and B, here can be compared. If you talk about distance rather than similarity, every time there's a mismatch of 0 and 1, or 1 and 0, you add that to the sum, and here you have a total of five of these cases where there's a difference. So the Hamming distance is five in this case. And you can see that this has some intuitive appeal if you're going to be doing this Boolean systems biology. Here's another one-- a fourth measure of similarity or distance, and we've brought it up before: the correlation coefficient. This is a way of comparing this vector of RNA expression levels x sub i with y sub i. So now, instead of taking the difference between x and y sub i, which is what we were doing with the Minkowski metrics, we're taking the product of those two.
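The worked Minkowski examples and the Hamming distance above can be checked in a few lines of standard-library Python (the binary profiles are made up, chosen to have five mismatches like the slide's example):

```python
# General Minkowski metric over two equal-length profiles.
def minkowski(x, y, r):
    # r=2 gives Euclidean, r=1 gives Manhattan.
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

def chebyshev(x, y):
    # The r -> infinity limit: the largest per-dimension difference.
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (0, 0), (4, 3)         # per-dimension differences of 4 and 3
print(minkowski(x, y, 2))     # 5.0: Euclidean, as the crow flies
print(minkowski(x, y, 1))     # 7.0: Manhattan, obeying the blocks
print(chebyshev(x, y))        # 4: the max metric

# Hamming distance: the Manhattan distance when features are binary.
# These 17-digit profiles are hypothetical, differing in 5 positions.
a = '10110100111010011'
b = '10010110101010101'
hamming = sum(c1 != c2 for c1, c2 in zip(a, b))
print(hamming)                # 5
```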
But if x and y are on some arbitrary scale, then we won't really have a way of comparing one experiment to another. This is an example of normalization. We're going to use normalization a couple of different ways in this class, but they're all related in that you want to put things on a scale that's universally recognizable-- typically 0 to 1, or minus 1 to 1. In this case, minus 1 to 1. And so what you do, in order to get them to the same center, is subtract the means from both x and y. So now their centroid is at 0 instead of at x bar, which is just defined as the mean, as usual. And then to get the scale the same, or on some commonly referenced scale, you divide by the square root of the product of their sums of squares. The result, as we previously discussed, is that the correlation coefficient varies between minus 1 and 1. If it is 1 on slide 16, it means that they are perfectly correlated-- which is, of course, rare, but bear with us. If the gene products go up and down perfectly together under all the conditions and all the time points that you look at, then they're going to get a linear correlation coefficient of 1. If they're perfectly negatively correlated, then they will go up and down exactly out of phase-- exactly when one is at its maximum, the other one will be at its minimum. And if there's no linear correlation, then there will be a linear correlation coefficient of zero. Now, there can be all kinds of complicated nonlinear relationships. They could be very, very codependent-- say, quadratic-- and still have a zero for their linear correlation coefficient. So, exercise for the reader: which of these is 1, minus 1, and 0? We'll start with the upper left-hand one. Is that 1? Minus 1. Good. And this one? 1, right. And zero. Great. And you will see that those have not been normalized, because the correlation coefficient will do the normalization for us. In a moment, we'll go back to Euclidean distances, but we will do a normalization first.
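A minimal sketch of the correlation coefficient as just defined: center each profile at its mean, then divide by the square root of the product of the summed squares. It also demonstrates the point that a strong quadratic dependence can still score zero linear correlation.

```python
from math import sqrt

def pearson(x, y):
    # Center both profiles at their means, then normalize by the
    # square root of the product of their summed squared deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

up = [1.0, 2.0, 3.0, 4.0]
print(pearson(up, [2.0, 4.0, 6.0, 8.0]))   # 1.0: perfectly correlated
print(pearson(up, [8.0, 6.0, 4.0, 2.0]))   # -1.0: perfectly out of phase

# Codependent but nonlinear: a symmetric quadratic relationship.
sym = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(pearson(sym, [a * a for a in sym]))  # 0.0: no *linear* correlation
```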
Now here's an example of hierarchical clustering dendrograms-- these just happen to be done for tumors and normal tissues. And you can see the tumors, designated by T, tend to cluster together, and the normal tissues, on the right-hand side of slide 18, tend to cluster together. But it's not perfect; there's some interpenetration. You can see this would be a challenging classification problem. The way that hierarchical tree was derived is you basically start by saying: each object-- each gene, where you're going to be measuring gene expression, which typically means RNA or protein-- each individual RNA is called a cluster. It's a cluster of one, a trivial cluster. And then at each step in the hierarchical clustering, very similar to some of the greedy algorithms we used for sequence alignment, you take the two closest clusters, even if each is a cluster of one, and you merge them. And now you call that the new cluster. Now it's a cluster of two, and so on and so forth, until finally everything is in one cluster and you've kept track of who's closest to whom all the way along. And that produces a tree. Now, in order to generate that tree, beyond the choice of the distance metric, you've got four linkage methods-- ways of putting together the distances that you've measured. The distance measure can be some Minkowski metric or the correlation coefficient. But you can put the distances together by either focusing on the nearest neighbor of the cluster or the furthest neighbor. That's the single link versus the complete link, and we'll talk about that. And then the other methods, which we won't talk about, are the centroid, where you think of the center of mass of the cluster as it emerges, and the average, which is just to say the mean of all the cross-cluster pairs-- if you've got two clusters, you do all pairwise comparisons. So let's do the single link versus the complete link. First, the single link in slide 21.
And we're going to use exactly the same distance matrix for both of these examples, so you don't have to shift gears too much. The only thing we're going to shift is between single and complete. And we're using Euclidean distance here, which is the square root of the sum of squares. And here you can see A and B are the two closest, and A and D are the furthest apart. So the Euclidean distance for AB is 2 and for AD is 6, for example. And in the single link method, this kicks in once you start collapsing the first link. You make the link between A and B-- that's obvious, because it's the shortest distance. But how you collapse it-- how you compare the new cluster to other points-- is what the single link method is about. So now AB is going to be treated as one unit, one cluster. And you're going to ask, how far is AB from C? Well, since this is single link, you're interested in the closest distance, and that's BC. And BC, from the very first, leftmost matrix, was three. So you fill in three for AB to C. And similarly for D: the closest point from AB to D is five-- it's the diagonal from B to D-- and so on. And that's how you get the new top row, and it's three and five. And now when you compare these, the next link you're going to make is going to be the smallest one in the whole table, which is three. And that happens to be the AB cluster being closest to C, so that's the next link you make. And then the rest of the game is over; it's just that the ABC cluster is near D. So you can already imagine in your mind what that tree is going to look like. A and B are closest, then you bring in C, and then you finally bring in D. And you might think at this point that that's the only way to do this. But the complete link version of this uses exactly the same matrix. You start at the same place-- AB is still the closest pair, so that's the one you're going to link together first. But how you score it as you do this linking is a little different now.
Because now you're concerned about all of the distances from the AB cluster to, say, C. Now B is close, but A is far away, and we're interested in that greater distance as well. And so the whole cluster AB gets the distance from A to C, the longest distance, five. And so five goes in that position. And six goes in for the distance from AB to D, which again is A to D. And so now you have a completely different matrix-- just toggle back and forth between slides 22 and 21, and you can see it went from three, five, four to five, six, four. So now when you make the next link-- the first link is obvious in both cases, AB. The next link is now CD, because the smallest one in that two-by-two matrix is four, and that happens between C and D. So C and D are the next link. And now the game's over: you connect CD and AB, and that's the final link. So you can see you're going to get two very different trees. From the single link method, on the left-hand side of slide 23, it's AB bringing in C and finally D, while with the complete link, you have AB and CD as two separate pairs, and then they come together. Now, this is the simplest possible example I could have come up with, but I think, combined with the next couple of slides, it will drive home the importance of the clustering method that you're using-- here, the linkage method part of it. Again, focus in on the far left-hand side, where you have more compact, spherical, circular clusters or more elongated ones. We're going to take three examples here: spherical, elongated, and something in between. Single link is in the middle of slide 25, and complete link is on the far right-hand side. Now you can kind of see why they're called single link and complete link; this is a different way of visualizing them. Here the single link does a great job for the top and bottom clusters-- the circular and the linear forms.
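The four-point example above can be replayed in a short script. The pairwise distances are the ones from the slides (AB=2, AC=5, AD=6, BC=3, BD=5, CD=4), and the only difference between the two runs is the linkage rule:

```python
# Pairwise Euclidean distances for the lecture's four-point example.
D = {frozenset('AB'): 2, frozenset('AC'): 5, frozenset('AD'): 6,
     frozenset('BC'): 3, frozenset('BD'): 5, frozenset('CD'): 4}

def dist(p, q):
    return D[frozenset([p, q])]

def cluster_dist(c1, c2, link):
    # Single link: nearest cross-cluster pair; complete link: farthest.
    pairs = [dist(p, q) for p in c1 for q in c2]
    return min(pairs) if link == 'single' else max(pairs)

def agglomerate(link):
    clusters = [frozenset(p) for p in 'ABCD']
    merges = []
    while len(clusters) > 1:
        # Find the closest pair of clusters under the chosen linkage.
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: cluster_dist(clusters[ij[0]],
                                               clusters[ij[1]], link))
        merged = clusters[i] | clusters[j]
        merges.append(''.join(sorted(merged)))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

print(agglomerate('single'))    # ['AB', 'ABC', 'ABCD']
print(agglomerate('complete'))  # ['AB', 'CD', 'ABCD']
```

The merge orders reproduce the two different trees on slide 23: single link chains C onto AB, while complete link pairs CD before the final join.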
But when you get something that's somewhere in between, you get this weird single-link result that, at least to my eye, connects up the two clusters along the bottom here and then leaves this little cluster as the second cluster. The complete link, on the other hand-- where you measure all the distances between previous clusters and the new clusters you're going to be adding-- does well on the top one and the middle one, but does this weird thing with the elongated clusters, where it takes a small cluster that seems, to my eye, to include things that are not that related. So the single link does well on the top and bottom, and the complete link does well on the top and middle. And so you can see that depending on what you think your data are going to look like-- whether compact clusters or more elongated ones-- you might want a single link or a complete link accordingly. So now, where are we in this overall road map in slide 26? We've been moving from the right, where we've gone from clustering methods-- supervised, unsupervised, hierarchical, non-hierarchical-- through distance metrics and linkage methods. Now let's see how it plays out with one particular non-hierarchical method. We've been focusing on hierarchical; now we're going to go non-hierarchical with k-means and bring in issues of data normalization-- in this case, gene normalization, where we're trying to put genes that are wildly different in their absolute value of expression on the same scale. So one might be a very small fluctuation at a kind of medium level; another one could be a very large fluctuation from baseline up to a very high level. And you want to account for this difference in baseline and this difference in scale. And that's what all these three little normalized expression plots are.
They represent this table, as I've mentioned, of genes whose expression levels we're going to measure along the vertical axis, and the points or the conditions along the horizontal axis. And so we have two representations. One is this kind of dot-cluster envelope representation in the middle, where you have, in this case, three dimensions-- but in cases that are a little harder to visualize, many dimensions, 15 or 17. In that representation, the origin is essentially the mean-- when you normalize, the mean becomes zero-- and then the distance from that origin can be either positive or negative, and it's the number of standard deviations from the mean. That's the way we're going to normalize. So each of these individual plots shows the average behavior in each of these clusters. And we'll take a look at that-- the average and the deviation from the average. But the units here on the vertical axis of these little plots will be normalized expression: the number of standard deviations, within the cluster, from the mean of the cluster. Now, when we're going to be measuring distances between clusters, we have the same normalized expression data table-- and this is the three-dimensional, or in general multi-dimensional, representation, where the origin is the zero mean for each of the axes and the distance from that zero mean is in numbers of standard deviations. And when we measure distance, we'll measure the Euclidean distance: the square root of the sum of the squares over all the dimensions.
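A standard-library sketch of that per-gene normalization: shift each gene's profile to mean 0 and rescale to standard-deviation units, so genes with wildly different baselines and amplitudes become directly comparable. The two expression rows are hypothetical.

```python
from statistics import mean, pstdev

def normalize(profile):
    # Center at the gene's mean, scale to standard-deviation units.
    m, s = mean(profile), pstdev(profile)
    return [(v - m) / s for v in profile]

# Hypothetical rows: a low-level gene with small swings and a
# high-level gene with large swings, fluctuating in phase.
gene_a = [10.0, 12.0, 10.0, 12.0]
gene_b = [100.0, 500.0, 100.0, 500.0]

za, zb = normalize(gene_a), normalize(gene_b)
print(za == zb)  # True: identical shapes once on the SD scale
```

After this step, a Euclidean distance between rows compares the shapes of the profiles rather than their absolute levels.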
And I want to emphasize that each of these clusters is not a point. If gene expression were regulated by transcription factors which bound to every site with exactly the same binding constant, and if there were selection pressures forcing everything in a cluster to be precisely regulated, then these clusters would be really tight. They'd be almost points, and there'd be no overlap between them. But in reality, there are no such selective pressures. And the transcription factor binding sites, as a result, are possibly purposefully diverse. And you get these spread clusters. And so these little blue bars on each of the points on these time series plots of normalized expression -- the three-by-three plots -- those little blue bars don't necessarily represent experimental error. They represent the diversity of gene expression within a cluster. Now, if you've accidentally assigned fewer clusters than, say, the natural number of clusters, then you'll get more dispersion in that measure than you might want. And that might be a tip-off that you actually need to divide it up into more clusters and bring this down. Obviously, if you break it up into too many clusters, that will have a different set of pathologies: some of the clusters will be abnormally close, almost as if they were touching each other. And that's the tip-off that you have too many clusters. And the number of clusters is something that you can either determine in advance or you can discover as you go. But those are examples of criteria you might use. Too much dispersion in those little blue error bars means that you've tried to lump too many things into one cluster. And too short a distance between adjacent clusters means that you probably divided them too finely.
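Those two diagnostics -- too much within-cluster dispersion versus too little between-cluster separation -- can be made concrete with a small sketch. The data and function names below are hypothetical, purely to illustrate the criteria.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(cluster):
    return [sum(p[d] for p in cluster) / len(cluster)
            for d in range(len(cluster[0]))]

def dispersion(cluster):
    # within-cluster spread: mean member-to-centroid distance,
    # the size of the "blue error bars" in the profile plots
    c = centroid(cluster)
    return sum(dist(p, c) for p in cluster) / len(cluster)

def separation(c1, c2):
    # between-cluster distance, centroid to centroid
    return dist(centroid(c1), centroid(c2))

a = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0)]
b = [(5.0, 5.0), (5.1, 5.1)]
# Healthy clustering: dispersion small relative to separation.
print(dispersion(a) < separation(a, b))  # True
```

High dispersion would suggest too few clusters (things lumped together); a separation comparable to the dispersion would suggest the data were divided too finely.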
Now, how do we begin to assess whether the clustering methods that we're using are optimal? We've talked about all the different kinds of clustering methods that you can use. One of the ways to assess whether they're optimal -- we'll talk about many -- is to look way outside the box to some resource where the biological community has curated functions. Now, these may be defined in very vague and frustrating ways, but we believe that they have done a good job, and certainly a job independent of the experiment that's being done. The experiment that's being done is a fresh, comprehensive gene expression analysis. And so you might find a cluster from a gene expression analysis that coincides with this completely independently curated database of functional categories. It doesn't matter what [INAUDIBLE] means -- this is some abbreviation for an institute. Nor does it really matter what the gene names here mean. But what you will find is that a particular set of genes, once you look it up in the database, will set off a flag that says ribosome. And you know what ribosome means. And others will be unknowns. But the point is that these will be an orderly set, a set that's perhaps unexpectedly enriched. And you want to have some way of quantitating your surprise at finding this many of one type of function in your RNA cluster. In a way, this is what you hope to find. It's a pleasant surprise. You want your clusters to have some coherence in their function. You also want to find some surprises, either unknowns or new combinations of functions that you didn't expect. Now, this is an example of a clustering experiment. It's a popular way of representing it. Here are the trees we've been talking about, where the tips, the leaves, are individual genes. You can barely see them at this scale. This is a small subset of the human genes. This is RNA expression that has been measured over a time course of serum stimulation.
And considering the previous slide of different functional categorizations, what you want, as you hierarchically arrange these things: you've got time as the natural axis horizontally. And then you've tried to sort them so they're close together in the hierarchical tree. And you've represented whether they're greatly induced or greatly suppressed during this serum stimulation. You take the zero time point as the reference point, and then greatly increased or decreased is represented by red and green respectively. And then within each of these clusters, you have little zones where they all have the same kind of pattern of black, gray, green, and red. And so, for example, the E at the bottom in the red zone here is wound healing and tissue remodeling. And these are the genes that you might expect to be enriched in a growth stimulation paradigm, such as the one here where you're stimulating fibroblasts with serum. This is a particular example of how you might do it, but you might want to quantitate this rather than just kind of showing it here. And we're going to walk through exactly how you do that quantitation in a moment. This is just a quick snapshot of how far this clustering goes. Actually, it goes well beyond biology. But here's something on slide 32 that's outside the range of RNA expression. Here we have compounds on the vertical axis and targets, meaning proteins, on the horizontal axis. And you can see all these connections between different cancer therapeutics, different cancer cell lines, and potential targets. But now back to the RNA. And we want to ask, how do we assess the RNA array data collection and the clustering methods? And how do we go beyond that in various directions, both as validation of the technical aspects and as showing that we're actually doing discovery and getting at mechanism?
So one of the various methods we've used -- we already mentioned looking for functional categories, but another one is looking for motifs. If we find a consistent set of motifs, this is part of the validation process as well. And these are some of the examples of algorithms. The first one that leaps to mind when mathematicians and physicists enter the field, and one that we've used to great advantage in the sequence searching part of this course, is oligonucleotide frequency. So you can use short oligonucleotides as convenient hashing keys, or as ways of doing a very rapid lookup for sequences and finding matches. And this is even more appropriate here for the motifs involved in transcriptional regulation, because we know from a variety of biological, chemical, and crystallographic studies that the motifs are often in the range of 7 to 10 nucleotides -- base pairs in double-stranded DNA. And so you can use oligonucleotide frequencies. However, they're limited in that they're not as rich as the weight matrices that we get from a multi-sequence alignment. And when we were talking about multi-sequence alignments, we pointed out that it was hard to get the algorithms to scale beyond the pairwise, because pairwise was n squared, where n is the sequence length, and then as you go to multi-sequence alignment, it goes up exponentially with the number of sequences. You want the number of sequences to be large, though, because the larger it is, the more you learn about the characteristics of that family of sequences. So anyway, Gibbs sampling was one of the methods that we said we would put off to a later class. This is the later class. We'll talk about Gibbs sampling as a way of sampling this very large space of alignments of the multiple sequences you're comparing; the idea is that you don't want to get trapped in a local minimum.
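The oligonucleotide-frequency idea -- short words as hashing keys for fast lookup -- reduces to counting every overlapping k-mer. A minimal sketch (sequence invented for illustration; TGACTCA happens to be the GCN4 consensus that stars later in the lecture):

```python
def kmer_counts(seq, k=7):
    """Count every overlapping k-mer: the cheap hashing-key
    approach to motif lookup described above."""
    counts = {}
    for i in range(len(seq) - k + 1):
        word = seq[i:i + k]
        counts[word] = counts.get(word, 0) + 1
    return counts

counts = kmer_counts("GATGACTCATGACTCA", k=7)
print(counts["TGACTCA"])  # 2
```

This runs in a single linear pass, which is why oligonucleotide frequencies scale so well; the limitation, as noted, is that a plain word count carries none of the positional weighting that a weight matrix from a multi-sequence alignment provides.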
You can have these really greedy steepest descent algorithms, but you'll get to the bottom of that pit and you won't necessarily find the global minimum. If the sampling space is too large, even sampling won't save you, because you'll sample a lot of little local minima. But, anyway, Gibbs is an example where you use randomization to find it. MEME is an example of expectation maximization, and [INAUDIBLE] and so forth are other ways of doing it. We're going to really focus in on one of these. Can't cover everything. We've talked about Gibbs sampling, and we want to put it in context. And here's the thing that might be appealing: if the program for transcription factor regulation is inherent in the genome, why can't we just look at the genome sequence and be able to see patterns of motifs in front of genes, and then find clusters of genes that are expressed, and so on? The problem with that picture, even for one of the best-case scenarios, yeast, which is about 12 megabases: as I said, these transcriptional control sites are about seven bases, let's say, of information. Here's one that will be a star for a few slides today, after the break. This is GCN4. You can see it has five really full-scale, two-bit conserved bases. And then the rest of the bases in this motif -- the other five bases -- might add up to another two bases' worth of information, or 14 bits altogether. Now 14 bits, you can think of that as 4 to the seventh power, about 1 match every 16,000 bases. Now, if you have a 12-megabase genome, and since it's not symmetric, you have to look at both strands -- you have to think of the transcription factor scanning the DNA in both directions -- then you have 24 megabases of sites, 24 million sites. And at random, you expect 1 match per 16,000. So you have a mean of about 1,500. Now here we can bring in our old friend the Poisson distribution. And we will remember that the mean and the variance of a Poisson distribution are the same.
And so the standard deviation is going to be the square root of the variance, as it is for all standard deviations. And so the standard deviation is going to be about 40. So if you expect to convince yourself that you have something interesting, then you want to be about two or three standard deviations above the mean. So the noise that you're fighting: you want to get 2 and 1/2 times 40, or about 100 sites, above the mean. Well, many biological phenomena do not have 100 sites. There may not be 100 GCN4 sites in the genome, for example. And so what you need is a way of winnowing down the genome, so we're not looking through the whole genome but we're enriching in various ways. What are the various ways that we can enrich? Well, the first three we'll lump together as ways that we can biologically cluster. Basically, that was the theme of the first few minutes of this lecture: ways that we can put together genes whose expression goes up and down together. And that would be, for example, whole genome [INAUDIBLE] data. That's the top line of slide 36. Or -- and we had a little slide on this earlier of different ways that genes could show that they should go together -- they could have a shared phenotype. You could do knockouts, and they have similar biochemical or morphological characteristics, and so you put them in the same functional category. That might be the source of some of the functional categories we've been talking about today. They can be conserved among different species; species will tend to inherit them as a group. So these are examples of why genes should go together. And then you'll reduce the sequence space to be the regulatory elements that go with those genes and not the rest of the genome. Those are the ways of selecting the genes. But then there's selecting the sequence itself near those genes or in those genes.
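The arithmetic in the last two paragraphs checks out directly: 14 bits of information means one random match per 2^14 ≈ 16,000 positions, 24 million positions (both strands of 12 Mb) give a mean near 1,500, and the Poisson standard deviation is the square root of that mean.

```python
import math

bits = 14                      # information content of the GCN4-like motif
positions = 2 * 12_000_000     # both strands of a 12-megabase genome

mean = positions / 2 ** bits   # expected random matches ("about 1,500")
sd = math.sqrt(mean)           # Poisson: variance equals the mean ("about 40")
excess = 2.5 * sd              # sites above the mean needed for 2.5 sigma

print(round(mean), round(sd), round(excess))  # 1465 38 96
```

So to stand out at 2.5 standard deviations you'd need roughly 100 real sites on top of the random background, which is exactly why the genome has to be winnowed down before motif searching.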
You might want to eliminate protein coding regions, repetitive sequences, or any other sequences not likely to be control sites. This helps you by reducing your sequence space. That's kind of a trivial help -- actually, an important help. But in addition to that, they help you by removing traps where you're going to find motifs that are unlikely a priori to be relevant to transcriptional control, which is what you're really trying to get at here, to validate and to extend the discoveries you find from the unsupervised clustering. And why do I say that? Why would protein coding regions and repetitive elements be a bias? Well, protein coding regions for genes that cluster together, for some reason or other, are likely a priori to encode proteins that have similar functions. They're clustering together because they have similar functions. They might share protein domains in common. So you will find nucleic acid motifs that are similar to one another, not because they're involved in regulation, but because the genetic code turns similar protein motifs into similar nucleotide sequences, so they can accomplish a similar function. And repetitive regions are definitely destined to give motifs in common because of their selfish replication properties. The entire repetitive sequence from edge to edge will jump around the genome, and so there won't just be these little seven-base-pair motifs; there'll be a 10-kilobase motif. And that won't tell you much about transcription. Now, having said that, we're in the business of sequence space reduction. Both the top three methods and this bottom method will exclude certain kinds of discoveries. But once you find the motif by severely restricting the sequence, you can then search for that motif and pick up the examples that you might have eliminated in the first pass, in a much less noisy manner. You've got this bona fide motif; now you want to find all the other examples.
In a way, you're testing the specificity of the motif. So for example, there could be RNA regulatory elements in protein coding regions. There could be some in repetitive regions. In the lecture that we gave on single nucleotide polymorphisms, I perversely chose a very interesting one that occurs in one of the most common dispersed repeats in the human genome, which is the Alu repeat. That one has regulatory significance, but we will exclude it from our search space initially so that we can get plenty of good examples in a small box. So these are the main ways of reducing search space. And we're going to illuminate this with a particular algorithm, a modification of Gibbs motif sampling, where you sample the multi-sequence alignment states randomly so you don't get trapped in a local minimum. And this is called AlignACE -- aligns nucleic acid conserved elements -- the emphasis on nucleic acid. And what are the advantages of Gibbs sampling? Why are we focusing in on it? Well, Gibbs sampling, as I said, keeps you out of local minima. There's a variable number of sites per input sequence: it could be that among the genes you've found in your cluster, some of them may have three of these motifs in front of them. Others will have one, or even zero, because it could be that a particular gene co-clusters because of some other set of motifs that happen to have the same properties as the motif you're looking at at any given moment. So you can have zero to a large number of motifs. And that's important. This algorithm handles it. Other algorithms assume there's exactly one site per sequence, and that introduces noise. You can distribute the information content in various ways -- you'll see, we can fine-tune the shape of a motif, in a way. Some of these algorithms were based on proteins. Proteins have only one strand. They don't have a Watson and a Crick strand that are reverse complements of one another.
And so you need to make a conscious effort to adapt that algorithm so that it, in a certain sense, recognizes the duality and the reverse complements of DNA strands. And there are multiple distinct motifs; that's different from the variable number of sites per sequence. Once you find motif number one, it may be the dominant motif that you find again and again in a multi-sequence alignment. You have to go back and find number two, because it could be that number one isn't the only, or isn't the major, biologically significant motif. It could be any two or three motifs acting in concert. So you can't just rest on your laurels when you find the first motif. And for each motif, there can be multiple examples per sequence, anywhere from zero on up. So let's make this much more concrete and really drill down to a specific example. This example -- the real example -- is taken from the amino acid biosynthetic genes in the yeast Saccharomyces. So here we've applied the two major classes of sequence reduction. The first is by biological function. These are all amino acid biosynthetic genes: histidine, aromatic amino acids, [INAUDIBLE]. These are all on the right-hand side of slide 39. But in addition to the biological reduction to just maybe 116 genes that are involved in this process, we've also done the sequence space reduction near the gene, to exclude the protein coding regions and only look at 300 to 600 bases upstream. Why 300 to 600? If the genes are really close together, you don't want to go much beyond 300, because you can enter the protein coding region of an adjacent gene. If the genes are very far apart in this particular part of the genome, you don't want to go much beyond 600, or else you'll end up in repetitive sequences or other regulatory elements unrelated to your particular protein. Or you might end up in an RNA encoding gene. So 300 to 600 is good for this particular organism.
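The 300-to-600-base window can be expressed as a tiny helper. The coordinates and function names below are entirely hypothetical; real annotation pipelines are messier, but the two rules are the same: cap the window, and stop early if you'd run into the adjacent gene.

```python
def upstream_region(genome, tss, max_len=600, prev_gene_end=0):
    """Bases immediately upstream of a transcription start site,
    capped at max_len and clipped so we never enter the coding
    region of the adjacent upstream gene."""
    start = max(tss - max_len, prev_gene_end, 0)
    return genome[start:tss]

genome = "A" * 1000
print(len(upstream_region(genome, tss=700)))                     # 600
print(len(upstream_region(genome, tss=700, prev_gene_end=500)))  # 200
```

For a compact genome like yeast this simple clipping is usually enough; for human, as the next paragraph notes, you'd also need introns and much more distant upstream sequence.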
But you might need a different one for, say, human. You're going to have to look in introns and much further upstream, which makes it a much more difficult problem. Anyway, this is the sequence reduction phase. And now let's say, well, do you see the motifs in here? I mean, those of you who are good at computing should be able to do this algorithm in your head. But here's the answer, and then we're going to go through and say how we got to that answer with the Gibbs sampling alignment algorithm. The answer here is GCN4. This is the one we used to illustrate; we have about seven bits of information here in this sequence logo format. And on the lower right, it has a MAP score that we'll define soon enough. Basically, the higher the MAP score, the better. It has to be greater than 0 to be non-random. And here on the left-hand side of slide 40 is the multi-sequence alignment, just like the multi-sequence alignments we talked about two lectures ago. And here in red are all these arrows. They point either left to right or right to left, depending on which strand they're on, since they're not exact reverse complements -- although this does have a little bit of symmetry in it. But you can see that you have anywhere from one to two of these in front of each of these genes. OK, so now how do we get there? Let's go step by step. And some of you may find this algorithm counterintuitive at first, so don't be surprised if it is. The first step is we randomly seed. We plop down, say, 10-mers -- 10 nucleotides long; we arbitrarily picked that as our length -- and plop them down randomly on these sequences here. So we have represented seven of the 116 amino acid biosynthetic genes' upstream regions here. And we've just arbitrarily highlighted two red 10-mers on the top one, and then none on the second one, and then one on the third one, and so on.
And then, since those are given, and which is the first position is given, it's a trivial matter to line them up. You just take all the first positions and you take a sum, and that's the weight matrix. Now, since these were all randomly chosen from real sequences, you wouldn't expect this to be an astoundingly non-random weight matrix. And it's not. It has a MAP score that's negative, and as I said, that's basically random. A few bases tend to stick their heads up a little bit above the random noise of 0.25, if this were a random genome, or whatever the base composition is. And none of them are full scale at 2 bits; I'd say none of them are perfectly represented. So now, what's the next step? That's the initial seeding, and it gives you a flavor for what's going to happen next. But there are some interesting things that you can do to increase the chances of getting a good motif. So the next thing you do is you either add another site -- you add another 10-mer. So in the top row of slide 42, the top sequence already has two, but you add a third one. Sequence number four still doesn't have any. But you added a third one randomly at the top, and now you've got two multi-sequence alignments. You really haven't been able to do anything up to this point. You've got two multi-sequence alignments, and you ask, which one is better? Well, let's say the one on the right -- the one you added the sequence to -- is a little bit better. Now, the program doesn't just blindly accept this as the better multi-sequence alignment. There's a probability that you will accept it. And that's, again, to keep you from going through a completely greedy algorithm. Every improvement is going to be probabilistic. But you will very greatly tend to accept each improvement. So this was adding a sequence. That's how you might improve it. Or you can remove one.
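The seeding step -- line up the randomly chosen 10-mers by their first positions and sum the columns into a weight matrix -- looks roughly like this sketch. The sites are toy examples, and the real MAP score involves priors I'm omitting, so I score columns by information content instead.

```python
import math

def weight_matrix(sites):
    """Column-wise base frequencies of the aligned sites."""
    L = len(sites[0])
    return [{b: sum(s[i] == b for s in sites) / len(sites)
             for b in "ACGT"} for i in range(L)]

def information(matrix, background=0.25):
    """Bits relative to a uniform background; full conservation
    of one base in a column scores the full 2 bits."""
    total = 0.0
    for col in matrix:
        total += sum(f * math.log2(f / background)
                     for f in col.values() if f > 0)
    return total

random_sites = ["ACGTACGTAC", "TTGCAGGTCA", "GGATCCGTAA"]  # random seeding
gcn4_like    = ["TGACTCATTT", "TGACTCAGGA", "TGACTCACCC"]  # a real shared motif

# Random seeds give little information; a true motif scores far higher.
print(information(weight_matrix(gcn4_like)) >
      information(weight_matrix(random_sites)))  # True
```

This is exactly why the initial, randomly seeded alignment scores as "basically random", and why each accepted add/remove move that pulls real motif instances into the alignment drives the score upward.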
You can add and remove another two from the top sequence here: add one, remove one. And you ask, as before, whether the multi-sequence alignment on the right is a little bit better. If it is, then you have a high probability of accepting those two changes, the add and the remove. These are adding or removing entire sites. Keep going, adding and removing. Another thing you can do is say, well, maybe the important bases aren't all smack in a row, 10 in a row. Maybe the motif should be a little bit longer. Maybe some of the columns in the middle aren't important, so we'll toggle one of them off and move the columns over. So now the motif is a little bit wider, but it still has the same number of columns. And if that improves things -- if that gives you a better MAP score, a greater surprise, in a sense, at the probability that you would have this number of sites shared to this degree in this number of sequences -- then you have a high probability of accepting that change. Now you're not just changing the collection of sequences you think belong to that motif family, but you're actually changing the structure of the elements that you're going to call the weight matrix. You're changing the column structure. And that's also probabilistic. And out of all this randomness, given many cycles, you eventually get the best motif. This might be the best motif for this particular learning set. But now you want to get the second-best motif, because this isn't necessarily the biologically best motif, and this one may not act alone. It may have another one that's also enriched, and it could be that their co-occurrence is even more significant than either one of them occurring singly. So what do we do? I think what we're going to do is take a little break. And then when we come back, your incredible curiosity will be satisfied as to how we get the second motif. So take a little break.
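That "high probability of accepting each improvement" can be sketched as a Metropolis-style rule. This is my framing, not the lecture's exact acceptance function: the lecture's sampler keeps even improving moves probabilistic, whereas this sketch always keeps them; what matters is that worsening moves are occasionally accepted, which is what keeps the sampler out of local optima.

```python
import math, random

def accept(new_score, old_score, temperature=1.0, rng=random.random):
    """Probabilistically accept a proposed change to the alignment.
    Improvements pass; worsening moves pass only with probability
    exp(delta / temperature), so big losses are almost always rejected."""
    if new_score >= old_score:
        return True
    return rng() < math.exp((new_score - old_score) / temperature)

random.seed(1)
print(accept(10.0, 1.0))                                  # True
print(sum(accept(1.0, 10.0) for _ in range(2000)) < 20)   # True: rarely accepted
```

Run over many cycles of add/remove/column-toggle proposals, this kind of acceptance rule is what lets the randomized search eventually settle on the best-scoring motif.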
MIT HST508 Genomics and Computational Biology, Fall 2002. Lecture 10A: Networks 2: Molecular Computing, Self-assembly, Genetic Algorithms, Neural Networks.

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. PROFESSOR: OK. Welcome. Welcome to the second discussion of networks and the second-to-last lecture of this course. There's a very important set of lectures after that, from the students, that I'm looking forward to. So last time we talked about the use of computers to model various complicated systems, mainly cellular systems, ranging from highly cooperative, even bistable bifurcations, where a cell makes a decision, to a discussion of chromosome copy number and its implications, and a large chunk of time on flux balance optimization, where we can look at many of these things from the standpoint of optimization. Now, today, to paraphrase a famous local politician, we will ask not what computers can do for biology, but what biology can do for computers. And of course we'll go back and forth and try to look at this interesting dynamic between the experimental and the computational side. And so how can biology aid algorithm development, to return the favor and aid biology? But not only algorithm development -- not only inspiring algorithm development, but actually implementing hardware and software in biological systems. We'll talk about that, ranging from molecular computing to cellular computing and then back to inspiring algorithms again. So slide 3: what is it that we're really talking about that computation needs in terms of aid? What are the real issues here? We've mentioned a couple of times in the course, probably initially when we were talking about dynamic programming and related topics, how a problem in computer science scales.
Typically, you'll have one key input, size n, and then the running time, or sometimes the memory, has an upper bound, which we've been referring to as the order of some function of that n, where n is the length of a string or the size of the problem. There are other symbols that are slightly more rarely used. In addition to the upper bound, there's a lower bound, and sometimes you can get an exact, or equal, bound. And how does this play out when you have specific instances of n? Now of course, when you say that something is on the order of, or upper-bounded by, a function of n, you don't always necessarily describe the constants. But to give you a ballpark, consider n ranging from 1 to 1,000 and functions ranging from linear to quadratic to the 10th power, and then exponential or factorial. And you can see that for very small n, you can get cases where a polynomial computation time can be longer than an exponential computation time. So exponential isn't necessarily bad news, depending on the size of your problem. But it gets to be bad news very quickly. As n increases, you quickly get to very large numbers on the far right-hand side that are computationally intractable with any known computer. So we mentioned computational complexity as one of the various definitions of complexity at the beginning of the course. This is not the one we chose as closest to the definition of living complexity, but it's one that's very frequently used in the computer science field. I just want to introduce a few of the terms here, just so you've heard them in this context. It basically refers to the issues brought up in the previous slide: whether it scales by a polynomial, which is generally desirable, or whether it scales exponentially or worse. And so we have P, the problems that we can solve in deterministic polynomial time.
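The crossover described above -- an exponential beating a high-degree polynomial for small n, then losing catastrophically -- is easy to tabulate. The particular classes below are arbitrary examples:

```python
def steps(n):
    # rough step counts for a few complexity classes
    return {"linear": n, "quadratic": n ** 2,
            "n^10": n ** 10, "2^n": 2 ** n}

for n in (10, 30, 100):
    s = steps(n)
    print(n, "exponential cheaper?", s["2^n"] < s["n^10"])
# 10 True, 30 True, 100 False: exponential wins only while n stays small
```

At n = 10, 2^n is about 1,000 steps while n^10 is ten billion; by n = 100 the exponential has overtaken every polynomial and never looks back, which is the "bad news very quickly" in the text.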
Deterministic just describes the fact that the algorithm will do the same thing time after time, which is the typical computer that we feel comfortable using. And an example of problems in the P class is dynamic programming, which we've used numerous times. It scales polynomially, and the degree of the polynomial just depends on the problem, from squared to the sixth power, as we've seen. NP has a number of subsets, but overall, it means nondeterministic polynomial time. The solutions are checkable in polynomial time, but are generally felt to not be currently feasible in polynomial time for the determination, as opposed to the checking. An example of exploiting this: the inventors of various encryption schemes such as RSA -- the R and the S and the A referring to the last names of the authors -- are, in a certain sense, banking on the difficulty of cracking these codes in polynomial time. You can use the codes if you know the key -- you can check it, that is to say -- but you can't crack it, unless someone comes through with a breakthrough on the NP problems. Because if you solve one of them, you can solve all of them. And then there are subsets of this. This is a little less critical for today's discussion, but there's NP-complete, an example of which is the traveling salesman: can you get through all the vertices on the trip with a mileage below some threshold? An NP-hard version of that same thing is: what's the minimum mileage that you can get -- not just, is it less than x, but how much less than x? And the worst-case scenario is undecidable, where, even given an unlimited amount of time and space, you can't decide it. And the classic one is the program halting problem, where you don't know whether your program is going to halt -- and probably all of you have run into that problem. I'm just being funny. But I mean, it is real -- the program halting problem is a serious mathematical construct. OK. How do we deal with this?
How do we start thinking about ways that we've dealt with it before, and ways that biology could change the landscape a little bit? Usually, what we do when we're faced with an NP-hard problem is cheat in some way or another. You redefine the problem so it's in class P, sometimes sacrificing something. So if you're interested in tertiary structure, you may redefine it as secondary structure. And we showed that secondary structure for RNAs can be solved with a dynamic programming algorithm, with an N squared or, at worst, N to the sixth algorithm. Whether that is as precise as the most precise tertiary structure that one could get, given infinite exponential time, is an open question. Probably not. If N is small enough, we showed that exponential times can be reasonable, and so you just do an exhaustive search. Or if you can't do that, then you use some clever heuristic way of pruning things. And in a certain sense, that's what most of the approximations are. So what can biology do? We'll talk about three examples today. One is DNA computing. The others are genetic algorithms and neural networks. None of these really, actually solve the problem. The first one is a way of just obtaining a lot more raw computing power. I'll show you a quote where they say they've solved an NP-complete problem, but it's in the same sense that you can solve any exponential problem by brute force. That's not really finessing it out of NP and into P. Genetic algorithms and neural networks are definitely heuristics. They're beautifully inspired by two of the greatest algorithms in the history of life on Earth: evolution and complex brain networks. Genetic algorithms are based on the adaptation that occurs during evolution, by recombination and mutagenesis. And neural networks are also about adaptation, but on the time scale of learning.
So we'll first dedicate ourselves to molecular computing, and just kind of put it in the context of nanocomputing in general. And you can see all the issues in computing, not just the math module. So the steps are: assembly of the requisite hardware -- this is some kind of factory operation, typically. Then there's some input module, some hardware and software that's required for getting the data in. Then there's some sort of memory, and then there's a central processor, which might have math components, and output. This is from our first lecture: assembly, input, memory, processing, and output. And what we want to do with biology is harvest things from genomics, and from biological research in general, and use them to design better computers, either in silico or in a biological, biochemical sense. And then harness evolution, either to make devices or as part of algorithm development. Different people have different opinions about how much longer Moore's law will hold -- the scaling for large-scale integrated circuits, or Kurzweil's version that goes back to 1900, about doubling every two years the ability to calculate, in calculations per second per $1,000. There might be another decade left in silicon large-scale integrated circuits. That's what some people say -- or maybe more, maybe less. So there are three real options here for that next step: electronic nanocomputing, optical nanocomputing, and molecular nanocomputing. And you could add to this quantum computing, so maybe four different options for beating Moore's law, or extending Moore's law, depending on how you look at it. So let's just walk through them quickly, one at a time. Optical computing you can think of as already here, in a sense, in that our optical fiber networks have very fast switches that are required for a good deal of our fastest internet. And there are various demonstrations where you can do optical computing for many of the tasks, not just the data transfer.
And just like many other things, there's a desire to shrink this down for cost of manufacturing and quality, and so forth. The advantage of optical over typical electronic computers is that, for a given set of operations, there's the general sentiment that there might be lower heat generation. It goes at the speed of light, rather than the somewhat lower-than-light speed that typically comes into actual implementations of electronic circuits. And here are two examples taken from the literature of getting natural self-assembly. Just as we've seen in many biological systems -- self-assembly of membranes, self-assembly of multi-protein complexes -- here, if you want to make optical particles of a particular size that have the right refractive index and spacing and shape and so forth, you can use self-assembly for that. These are examples of where we're getting into the nanometer range; this is a 5-micron-scale object here. I've chosen this particular example from a number of examples of electronic nanocomputing where the electronics is getting down to the size of molecules. Here the molecule chosen is a polymer of carbon -- not a hydrocarbon, but carbon. Like the buckyballs, which are carbon-60, these are graphite rolled up into tubes, and these nanotubes can be used as transistor-like elements in very tiny circuits. This is just a schematic; it isn't actually a micrograph, right. And here are four circuits that kind of reflect what might be your first four projects in an introductory electronics course, ignoring the fact that you might not use nanotubes. But you would have voltage in and voltage out. In other words, this is a transistor-like circuit, or an inverter-like circuit, in the upper left-hand corner of slide 10. And the nanotube here is in the middle, in series with a resistor going from high voltage to the ground at the lower part, and the voltage in essentially modulates the voltage out in a nonlinear curve.
And you can see this highly cooperative curve, just like the ones that we've been talking about in a number of other biological and physical systems. In the upper right, you have a NOR gate. Almost every circuit can be made by combinations of NOT-ORs; this is an inverter plus an OR. And you can see that there are now two inputs on the left and right, in 1 and 2. And they can have states 1-1, 1-0, 0-1, 0-0, going from left to right. And of those four input combinations, the first three give the low output, and only when they're both 0 does the output of the whole circuit change state -- that's a NOR. And this is all done with these kinds of molecular-scale nanotubes. Here's a RAM cell, which required two nanotubes in order to store a bit, where you flip it from low to high. And the last example, in the lower right, now needs 3 nanotubes. You see, we worked our way up in complexity from 1 to 2 to 3. And you need 3 in order to get a ring oscillator. The way you would most easily think about it is that the first one's output affects the second one, the second one affects the third, and the third one loops back to the first. And the result of this is you get a series of peaks and troughs here, which you use for synchronizing circuits or generating other useful sinusoidal processes. Now, those are the optical and electronic examples. Molecular includes a number of different possibilities, including DNA, which is what we'll focus on: DNA computing. And this was started by a physicist famous for thinking out of the box quite a bit. Feynman in 1959, when he was still fairly young, gave a talk entitled There's Plenty of Room at the Bottom.
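The claim that almost every circuit can be built from NOT-ORs is just the functional completeness of the NOR gate. A quick sketch in plain truth-table logic, nothing nanotube-specific:

```python
def NOR(a, b):
    """The single primitive: output is 1 only when both inputs are 0."""
    return int(not (a or b))

# Every other gate derived from NOR alone
def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))

# Print the truth tables for the derived gates
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NOR:", NOR(a, b), "OR:", OR(a, b), "AND:", AND(a, b))
```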
And by that he meant that, just as we can machine and manufacture objects in fairly automated fashion, we should be able to scale that down to the point where we're manipulating individual atoms. And he couldn't think of any physical reason, such as the uncertainty principle or anything like that, that would prevent one from doing that -- manipulating individual atoms. And many years later, he's been proven right, in that we are doing that, albeit not in any really high-production method. Drexler, in his thesis and subsequently, has championed -- that's probably the right word -- this notion, given it the names nanotechnology and nanosystems, and really fleshed out some of the things that one might be able to do if you had a much higher throughput way of dealing with manufacturing at atomic scale. However, even he did not really connect all the dots between where we are now and where and how we get to the very first nano-assemblers and nanotechnology. Since then, there's been kind of a renaissance of interest in this, with a recognition that biological systems actually are naturally doing nanotech-scale, atomic manipulations. And you've seen a few examples of that in this course. But the particular instance that we'll use as a jumping-off point for discussion of all the steps where biology and molecular mining could give us new tools -- whether it's assembly, input, memory, computation, or output -- is Len Adleman's pace-changing paper in 1994. This is the A of the RSA that we talked about a few minutes ago. He was obviously a hardcore algorithmics expert, and decided in 1994 to actually do a paper that required not only algorithmics, but a huge change in the way you implemented the algorithms. And then to actually go into the lab, into a biochemical laboratory in which he was not previously trained, and author -- it's a single-author paper -- work that included such things as PCR. That was in 1994.
There was no literature on the subject of DNA computers before that. A mere six years later, there were 520 references on the subject. So he obviously hit some kind of nerve. The first few years after that were mainly theoretical. But I'll show you some examples. His paper had an experimental component, and so do some others that I'll show you. Since this course is really about that interface, constantly checking the theory with reality, those are the ones that I'll emphasize here. So the question that he asked in 1994, and which is still fresh today, is: is there a Hamiltonian path through all the nodes in a network? We've been talking about interesting biological networks, but here it's just any network where the black nodes are connected by directed edges in this directed graph. You want to go from the start, S, to the terminus, T, from 1 to 6, obeying the arrows and going through every point once. How do you do this? And how do you do it in DNA? So an example of one here is going from S to 3 to 5 to 2 to 4 to T. The way you do it, first in broad strokes: you encode the graph -- both the nodes, the black spots, and the edges -- into single-stranded DNA sequences. Then you create all possible paths by using overlapping sequences to indicate which node is connected to which other node by an edge, and in which direction. So you can actually have directionality, just because DNA has directionality. And you use DNA hybridization to do that step. Now, the first step is linear. Encoding the graph is linear with the number of points in the graph, the number of places in the Hamiltonian path. And that you would do by having your computer program a DNA synthesizer, which is an automated machine that we've described a couple of times. But the second step is out of your hands. This is something that happens automatically when you put DNA in the solution.
If you design these sequences carefully so they don't cross-hybridize very much, then the only way they can assemble is the way you want. Then, you finally determine whether a solution exists, and this is something which is almost constant in complexity. So the entire thing scales very gracefully, instead of scaling exponentially as the Hamiltonian path problem normally would. This gives the appearance of scaling linearly in time, which is really one of the best-case scenarios for polynomial time and certainly better than exponential. So how do we actually do that? That was broad strokes; this is a more detailed view. And you can see how this really seems like it's going to work. You have each of the nodes encoded by sequences -- let's say, red and tan here on the top left of slide 14. And if you want to connect node 3 with node 4, you take the right-hand end, the 3-prime end, which is sort of greenish tan, and you connect it to the other end -- that is to say, the 5-prime end -- of node 4 just below it, which is blue. So now you have this hybrid, which is ordered. So it's an arrow going from 3 to 4, and that edge has this particular sequence going from 5 prime to 3 prime. Now, in practice, you want the edges to be complementary, not identical, to the nodes. And so all the nodes are actually represented as reverse complements in the lower left-hand part of slide 14. And so you represent all the edges that connect nodes in this directional manner. And then an example of how you would connect three nodes, 3 to 4 to 5, by two connecting directional edges -- 3-4 and 4-5 -- is shown here. All the nodes are in reverse complement and all the edges are in the forward direction, as arbitrarily defined here. And you can see how they stitch these together, and you make firm connections that are unambiguous, non-cross-reacting, and have a definite directionality. So now you're starting to get the idea that we can encode this in DNA.
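In code, the splicing rule just described might look like this. The 20-mer node sequences here are invented for illustration; only the rule itself -- an edge is the 3' half of the source node followed by the 5' half of the target node, with node oligos supplied as reverse complements -- follows the scheme:

```python
def revcomp(s):
    """Reverse complement of a DNA string."""
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

# Hypothetical 20-mer node sequences (not Adleman's actual oligos)
node = {3: "TATCGGATCGGTATATCCGA",
        4: "GCTATTCGAGCTTAAAGCTA"}

def edge(i, j):
    """Directed edge i -> j: 3' half of node i plus 5' half of node j."""
    return node[i][10:] + node[j][:10]

# The node oligo actually put in the mixture is the reverse complement,
# so it can splint two consecutive edge oligos together by hybridization.
splint_4 = revcomp(node[4])
print(edge(3, 4))
print(splint_4)
```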
But how are we going to actually do the computation, and how are we going to find out who the winner is? So remember, we want to create all the paths, and then ask whether any of them go through all the points, and then whether any of them go through all the points exactly once. So the first thing is to create all the paths from start to terminus. And by just throwing in this mixture of all the edges and all the nodes, you will create, in principle, all the paths. The prefix of one will go to the suffix of the other, in the same way illustrated on the previous slide. And here are some examples of some of the paths. Some are very short: this one only goes through 1, 2, 4, 6 -- only 4 of the nodes, not all 6 of them. The bottom one goes through too many nodes, and some of them it goes through repetitively. But you get the idea: you can define the path in terms of all these edges and the reverse complements, which represent the nodes. But here's the actual algorithm, as encoded in DNA and implemented by practical methods that a computer scientist can do without too much help -- at least not enough help required for co-authorship. So we've already encoded the graph in the DNA sequences. And this is done by automated oligonucleotide synthesis. You create all the paths from S to T by PCR-amplifying from the S end of the oligo to the T end. So the mere fact that they PCR-amplify means they must contain nodes 1 and 6, the start and terminus. So that's good. Now, you want to get the ones that visit every node. By serial hybridization, you can have nodes 2, 3, 4, and 5 immobilized. And you bind the PCR products to them: you elute from 2, and then in series you bind to 3, elute from 3, bind to 4, and so on for 4 and 5. So now you know it has 1 and 6, because those are the PCR primers. It has 2, 3, 4, and 5, because it bound to them in series by hybridization.
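The whole selection cascade -- PCR for the right ends, serial hybridization for node content, sizing for length -- can be mimicked in silico. This is a sketch with an assumed edge set (the lecture only names the path 1-3-5-2-4-6; Adleman's real graph was different), with random walks standing in for random ligation products:

```python
import random

# Assumed directed edge set containing the example path 1->3->5->2->4->6
edges = {(1, 3), (3, 5), (5, 2), (2, 4), (4, 6), (1, 2), (3, 4), (5, 6)}
S, T, N = 1, 6, 6
sources = [u for u, _ in edges]

def random_path(max_len=10):
    """Random walk over the edges -- a stand-in for random ligation."""
    path = [random.choice(sources)]
    while len(path) < max_len:
        nxt = [v for u, v in edges if u == path[-1]]
        if not nxt or random.random() < 0.2:   # chains terminate at random
            break
        path.append(random.choice(nxt))
    return path

random.seed(0)
pool = [random_path() for _ in range(20000)]              # ligation mixture
pool = [p for p in pool if p[0] == S and p[-1] == T]      # "PCR": right ends
pool = [p for p in pool if set(p) == set(range(1, N + 1))]  # all nodes bound
pool = [p for p in pool if len(p) == N]                   # "gel": right length
print(sorted(set(map(tuple, pool))))                      # surviving path(s)
```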
But beyond that, you could still get some of those long paths that went through multiple nodes multiple times. If you want it to have exactly N nodes, then what you do is a sort of gel electrophoretic sizing. And here, if you have a calibration curve as shown on the bottom of slide 16, you have known DNA size markers and known PCR products showing you have one of these DNA nodes, 2, 3, all the way up to 6. And if you find a solution that has all these properties -- it PCR-amplifies, it serially hybridizes to all the nodes, and it is the right length, such as the one in column 6 -- then you know you've got a solution. And that was Len Adleman's argument that he had DNA computing working. Six years later, or as of six years later, there were now over 500 examples of this. I'll show you this example both as an introduction to the satisfiability problem and also as showing that you can do RNA computing and that you can encode two-dimensional objects; it illustrates a number of things. The problem here is a test problem, a very simple test board. It's not 8 by 8, but 3 by 3. And you've got an artificial number of knights here. And those of you who know chess know that these knights can attack in a curious combination of straight and diagonal moves. And it doesn't really matter. The point is, there are a variety of arrangements of some number of knights such that none of them can attack each other and they're all kind of at peace here. And the object of this algorithm is to find those combinations. And you do it by cloning. So that was something that was not in the previous example. By cloning, you can find each of the solutions. And you then determine what is present along that clone. It's kind of like haplotyping or splice-form analysis. You can really only analyze these things by looking at the product of a single molecule.
And that's what cloning is about: amplifying that single molecule up to the point where you can analyze it. So that's one thing that's new. The other thing that's new is you start with an RNA, in order that you can use this powerful method, this enzyme called RNase H. It has the property that when you bind a DNA oligonucleotide to an RNA and they're complementary, RNase H will destroy the RNA at that point, at the point of hybridization. So it's a way of eliminating an entire molecule if it happens to have a particular sequence in it. And so one of the ways that you can ask logical questions about each molecule in a large, complex mixture of molecules is using this RNase elimination. And in a way, it's a way of designing an infinite number of restriction enzymes. The RNase plus the DNA oligonucleotide provides, in a certain sense, a custom restriction enzyme. In any case, the other thing that's unusual here is the idea of using split-and-pool oligonucleotide synthesis. We introduced this in the lecture where we were introducing drug-protein interactions and ways of synthesizing drugs and other molecules by pool synthesis. And the idea behind split-pool synthesis in this case is that each of the squares in this 3-by-3 matrix can have two states: either it has a knight or it doesn't. Each of those two states you can consider a 0 or 1. You can represent them as two different sequences, sequence A or sequence A prime, representing presence or absence of something. And so you basically have 9 squares, and so you have 2 to the 9th different possibilities. And so down below, they synthesized a set of polymers where you have every possible binary state for this 3-by-3 grid. And that's done by -- you're synthesizing along, and you come to where you're going to synthesize either A or A prime. You split it. Half of this pool gets sequence A; half of it gets sequence A prime. Pool it back; half gets B, half gets B prime.
Pool them and split them, C and C prime, and so forth, all the way out. You really only need nine of these. They did 10 for some reason or other. But the point is, to get 2 to the ninth power, you need nine of these. And then you can read them out electrophoretically. Just as the electrophoretic readout gave a sizing in the first DNA computing example, you can use it to get the sizes of these. Here are two solutions: BEFH, referring to the squares on a 3-by-3 grid, and EFC -- these are both solutions. The way of reading BEFH off this combinatorial synthesis from the bottom is: you've got A through I as the possible binary signatures. And then you have two columns, either the 0 column or the 1 -- two states, the two sequences. And as you PCR from the end tag out to each of these tags, A or A prime, B or B prime, and so forth, then you get this graduated series. It will tell you: A is in the 0 state, B is in the 1 state, C and D are in the 0 state, E is in the 1 state, and so forth. And so I've circled B being in the 1 state, showing that there's a knight in position B. And you can go through the same thing for the other solutions. Each of these is developed as a clone. And the neat thing about all these problems is they have multiple representations. You represent them in DNA. You represent them in data. You can represent them as a Boolean logical set of operations, where these things represent ANDs and ORs at the bottom of the slide, and so on. And you will see that in the last examples, where I've just kind of breezed through, that set of logical operations is probably one of the favorite points of attack for DNA computing still today. So what are the problems and the advantages? The problems are that, yes, it is polynomial time. In fact, it's close to linear time in the number of inputs N.
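The knight problem itself is small enough to brute-force in software, which makes a handy check on what the RNA computation should return. A sketch, with squares labeled A through I row by row and the ordinary knight-move attack rule:

```python
from itertools import product

squares = "ABCDEFGHI"                       # 3x3 board, row-major: ABC/DEF/GHI
pos = {s: divmod(i, 3) for i, s in enumerate(squares)}

def attacks(a, b):
    """Standard knight move: one step along one axis, two along the other."""
    (r1, c1), (r2, c2) = pos[a], pos[b]
    return {abs(r1 - r2), abs(c1 - c2)} == {1, 2}

peaceful = []
for bits in product((0, 1), repeat=9):      # all 2^9 binary board states
    knights = [s for s, b in zip(squares, bits) if b]
    if all(not attacks(a, b) for a in knights for b in knights if a < b):
        peaceful.append(frozenset(knights))

print(len(peaceful))                        # count includes the empty board
```

Both solutions read off the gel in the lecture, BEFH and EFC, show up in this enumeration.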
In terms of synthesis needs, your computer tells a synthesizer what to make. And then enzymes or hybridization will do this highly parallel reaction independent of N -- so basically constant time. So it's linear time for synthesis, constant time for the computing, and constant time for getting the answer out. But you have exponential volumes. For example, for a hundred-node graph of the kind we've been talking about, the various solutions might be 10 to the 30th molecules. And if any of you have ever tried to synthesize a mole, or 10 to the 24th molecules, you realize it would bankrupt our planet to make 10 to the 30th molecules. In addition, the elementary steps are slow. It's highly parallel -- you can imagine having trillions of molecules computing in parallel -- but the elementary steps of hybridization and DNA polymerase or RNase H and so forth typically are in the millihertz range. That is to say, a step might take 1,000 seconds, rather than gigahertz, which would be a billionth of a second. So there might be a 10 to the 12th gap in the rate of executing the commands, but the hope is that there's more than a 10 to the 12th advantage in parallelism. In addition, experimental errors mustn't be swept under the rug. You've got issues with mismatches. There's a limit to just how cleverly you can design all these sequences. As the graph gets bigger, you need to have more and more sequences involved. And that means that you're going to get more and more cross-hybridization, incomplete cleavage, and so forth. When this slide 18 says non-reusable -- there are reusable forms, and we'll get to those in just a minute. So those are the disadvantages. What are the promises, or the possible advantages? High parallelism, which could be much more than the 10 to the 12th-fold loss in speed.
When computer hardware people dream of the next generation of computers, they hope to get away from the current record, which is around 10 to the 9th operations per joule for conventional computers -- maybe that's not a record, but it's typical of conventional computers -- closer to the 34 times 10 to the 19th operations per joule that you should be able to squeak out near the thermodynamic limit. And as it turns out, many DNA enzymes, such as DNA polymerase, are already within a factor of 10 of that goal, while conventional computers are off by 10 factors of 10 or more. If one can, quote, "solve" one NP-complete problem, you can get many. The improvements that we'll briefly talk about, that keep people excited about this, are that this is a natural way of talking to biological problems -- if there are biological problems, it may be a smaller step to get to DNA computing -- and there are faster readout methods, just as there are faster and faster computational methods. And natural selection, evolution, is something that you can use on DNA computers, which so far has not been extremely powerful in conventional computers, although we'll talk about genetic algorithms shortly. So one way of getting reuse is to have a so-called sticker-based model, or something where you're basically just using the hybridization properties without being destructive. And here's an example. I'm not going to walk through it too much. I should point out the author list here includes Adleman again. And I'll have one more example of his work in just a moment. But there are examples now of work trying to consider seriously the amounts, the volumes, of DNA that are needed, and ways of dealing with fault tolerance or error-reduction algorithms, where you actually go through and consciously say: OK, if we had an error here, how would we compensate for it? How would we dedicate a few more bits to take the next step?
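The "thermodynamic limit" figure quoted can be checked directly: for irreversible bit operations it is the Landauer bound of one operation per kT ln 2 of energy, assuming room temperature here:

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # assume room temperature, K

# Landauer bound: maximum irreversible bit operations per joule at temperature T
ops_per_joule = 1.0 / (k * T * math.log(2))
print(f"{ops_per_joule:.2e} ops/J")   # about 3.5e20, i.e. roughly the 34e19 quoted
print(f"gap vs 1e9 ops/J: 10^{math.log10(ops_per_joule / 1e9):.1f}")
```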
So just like in the knight test problem we had a little bit before, the idea is using ANDs and ORs of Boolean variables -- these X's in slide 21 -- which can have two states, 0 or 1, similar to the Boolean variables that we run across from time to time in generative models. And you can have clauses, which are basically logical operations on the set of Boolean variables. We have NOTs and ORs and ANDs. And this kind of problem, satisfiability, is a very general problem, a very interesting one. And it has been tackled in ways very analogous to the ones we've been talking about previously, where you encode the graph in DNA sequences, thereby creating all the paths by hybridization or something like that. And then you read it out with PCR and solid phase. And you can see here's a quote where they say, here we solve an NP-complete problem. They have indeed solved one, but they haven't really turned it into a polynomial-time problem. They simply brute-forced it. And this is the most recent one -- it just came out in Science, again with Len Adleman on it -- now up to 20 variables in a 3-SAT problem. So that's all about computing. Since DNA is relatively slow at computing per step, though highly parallel, it may not be the best at the various steps in computing: assembly, input, memory, computation, and output. So we'll explore some of the other ones, in particular assembly. To a certain extent, assembling computers is slow, and so it's something where molecular assembly might have some advantages. Now, in this particular example, I'm going to emphasize the assembly aspect of it, but you can think of this as a way of mapping -- these authors have mapped assembly of an actual two-dimensional tiling onto an abstract but important and powerful computer science concept, which is the general Turing machine: a machine, kind of a tape-like structure, that can do any kind of general, deterministic computing.
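What "brute-forcing" a satisfiability instance means, in miniature (the clauses here are made up; signed integers encode literals, a common convention):

```python
from itertools import product

# Hypothetical 3-SAT instance: literal i means x_i, -i means NOT x_i
clauses = [(1, -2, 3), (-1, 2, 3), (1, 2, -3), (-1, -2, -3)]
n = 3

def satisfied(assign, clause):
    """A clause (an OR of literals) holds if any one literal is true."""
    return any(assign[abs(lit)] == (lit > 0) for lit in clause)

# Exhaustive search over 2^n assignments -- exactly what makes SAT hard
models = []
for bits in product((False, True), repeat=n):
    assign = {i + 1: b for i, b in enumerate(bits)}
    if all(satisfied(assign, c) for c in clauses):
        models.append(assign)

print(len(models), "satisfying assignments out of", 2 ** n)
```

Adleman's 20-variable experiment searched the analogous 2^20 space, but with every candidate assignment present simultaneously as a distinct molecule.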
And so they mapped a kind of physical tiling, as you might have a periodic or aperiodic mosaic of tiles, onto this computing machine and onto this kind of logical operation, XOR, on a string of binary bits, just as we had in the previous couple of slides on the 3-SAT problem. I want to emphasize the geometric, physical version of it, because we're trying to make a transition now in this paper, which combines DNA computing with DNA assembly. Now, here we have three more or less equivalent ways of representing the same sequences. These are called triple crossovers because -- let's take a look at, say, Y2 here, right in the middle of the slide -- you can see that there are these multiple crossovers, where you have a kind of recombination event in which you've got two double-stranded DNA molecules that are exchanging a strand. And this is not a natural homologous recombination event, in the sense that they're non-homologous, and this thing is kind of trapped in this crossover. And when you have multiple crossovers, you can make a piece of DNA that now has more than just two ends. You can have multiple sticky ends. In this case, in the upper part, you can see four different ends with 1, 2, 3, 4 crossover strands here. And on the far right, each of these has a 5-base 3-prime overhang and a 7-base 5-prime overhang, and some flush ends on the far left. And so each of these things has the ability to stick to other tiling elements. And you can see that you can put together a two-dimensional structure, which can be as intricate as a mosaic. It does not have to repeat itself, or it can repeat itself, if you want to use visual tools such as Fourier transforms to look at a repeating structure. And you can see you can engineer in restriction sites to help you analyze the structure in a gel-based assay on the far left.
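The logical operation the tiles are mapped onto -- XOR over a string of binary bits -- is simple to state in software; what the assembly buys you is doing it by hybridization. A sketch of cumulative XOR, which is my reading of the operation computed here:

```python
def cumulative_xor(bits):
    """y[i] = x[i] XOR y[i-1]: the running parity of the input string,
    one output bit per assembled tile layer."""
    out, acc = [], 0
    for x in bits:
        acc ^= x
        out.append(acc)
    return out

print(cumulative_xor([1, 0, 1, 1]))   # -> [1, 1, 0, 1]
```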
We've seen enough gel-based assays tonight already; I won't belabor it, and we're not going to walk through it. But you can see that even though this is not straight DNA, as the previous two or three examples were -- this is a much more complicated branched structure -- nevertheless, you can turn it into a linear readout with electrophoretic sizes. But you can also look at it as a truly two- or three-dimensional object. Here they use atomic force microscopy, where a probe with a single atom at its tip is responding to the force. As you touch an object, the feedback in the system -- with a scanning tunneling microscope usually as part of the feedback -- tells you that you just touched the surface and you've got to back off a little bit. And then you just scan along, profiling the surface. And so here, with two different tiling methods -- this is a more repetitive rather than aperiodic tiling -- you can see these little pink protrusions. You can engineer into the tile not just the two-dimensional stickiness, the sticky tags, but a third dimension, which is a bump, which might be a stem loop, as we've seen in other secondary structures. That's engineered into this DNA, so that these bumps will stick up and be easy targets for the atomic force microscope. And here you have a bump every second tile -- that's on the left. And on the right of slide 27, you have a more complicated tiling, where you have four different types of tiles and a bump every fourth one. And so, from the Watson-Crick model for DNA, or more advanced models of DNA -- even though this is a lattice of branched structures -- you can calculate that it should be about 33 nanometers between the bumps. And that's what's observed. And 65 nanometers is calculated and observed. You can see the bigger lattice spacing in these admittedly somewhat fuzzy atomic force micrographs.
But you can see, you get experimental confirmation of the two-dimensional structures here. Now, this is self-assembly nanofabrication. And to some extent, it is inspired by and can be combined with microfabrication. This is something where you basically use optics. You go to the limit of current optical manufacturing as microfabrication, where it's typically hard to get below, say, 100 nanometers or so in feature size. This is the microfabrication used to make your computer chips. But in this case, it's used to actually make moving parts, parts that move relative to one another. And that motion actually has useful applications. The first such useful application that I'm aware of is putting these things into airbag sensors. And the idea is that when you're driving your large automobile and run into an even larger object, you suddenly decelerate, either by brakes or some other method. And when you do, this little bitty device, a sensor somewhere in your car, will shift one of its parts by at least 0.2 angstroms, meaning a tenth of an atomic diameter. That doesn't seem like very much. And it isn't. It only causes a 100-femtofarad -- that means 10 to the -13 farad -- capacitance change. But that's quite enough to signal that a collision has occurred or will occur, and with a very low false positive rate; those of you who have driven large automobiles know how infrequently the airbag opens up accidentally. But when it does deflect by 0.2 angstroms, then it does open up the airbag. OK, so this is a payoff of microfabrication. But now we want to combine the nanosystems that we saw in the DNA computers and in the tiling with microfabrication. And I picked this as just one of the very few examples where microfab meets nanofabrication. And we'll call it a nano-electromechanical system.
And here you've got the microfabrication of both the posts upon which these things stand -- these nickel columns of 80 nanometers -- and the little metal bars, which can be in the micron range. And you can see them visualized in the left-hand photograph, where you have these bars at regular spacings. And if you look at this publication, or the websites that go along with it, these little bars will spin around. And what they're spinning on is not a microfabricated motor, but a nano-biotech motor -- actually a protein that most people didn't think of as a motor when it was first discovered. It is the protein, present in almost all organisms, that is responsible for ATP generation. Usually you think of this as making ATP for motors to use, but since it is capable of rotary motion, it actually moves this 1-micron bar around at the rate you would expect for the nano-machine to be generating torque. So we've now talked about assembly, and that's an example of an output device. What sort of input devices do we have that will work at the single-molecule level? There are many examples. We've mentioned a few of them in the sequencing and genotyping lecture, where we were talking about single fluorophores. Here, you can use another aspect of biology, which is the self-assembly of membranes, to make a very, very tight, low-conductance seal, which is only on the order of 2 nanometers thick, but enough to make a gigaohm seal, a multi-gigaohm seal. And then you poke a little hole in it with a single molecule of a protein. And there are growing ways to do this on inorganic substrates as well. So you have a single protein pore, which itself might be a 1-nanometer opening, and in the presence of an electric field, indicated by these negative and positive charges here, it will allow yellow negative and positive ions to go through, like sodium chloride. And they will go through at up to about a million ions per second, easily.
And when you have a larger molecule -- say, a polyanion -- in the electric field, it will slowly migrate through this pore. And while it does so, it blocks the rapid movement of the smaller ions like sodium. It doesn't necessarily completely block the channel. You can record the rate at which sodium will go through and how it's influenced by the composition of the polymer going through. And so here's an example of a bacterial protein -- this is Meller et al., referenced down below -- a bacterial toxin, actually from Staphylococcus, whose goal in life is to kill other organisms, not to provide a handy conduit for RNA to go through a cell. But nevertheless, in this experiment, when a nucleic acid does go through, the little red water molecules and similar-sized ions are blocked. And they're blocked in a way which is sensitive not only to the molecule, but to parts of the molecule. So you can see here you've got a molecule with an oligo(A) part and an oligo(C) part. And you can actually discriminate between these, both in terms of the rate at which each of the parts goes through, and the conductance. So if you look down in the lower right-hand part, you'll see 5-picoampere -- that's 10 to the -12 amperes -- 20-picoampere, and 120-picoampere levels for these individual molecules. And each of these spikes is first the A(30) half, and then the C(70) half, going through. And you can see the two different conductance levels that you get, reproducibly: it's going through typically in one direction, first the A, then the C, with different conductance levels and different rates. And the 120 picoamperes is in between molecules, where you're getting the full conductance capability -- lots of sodium is going through. And just like with other methods that we've seen before, the use of two dimensions helps get you better statistical resolution.
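In software terms, calling the A segment versus the C segment of each translocation event is just thresholding the current trace. A toy sketch -- the thresholds and the assignment of the deeper blockade to oligo(C) are assumptions made for illustration, loosely matching the 5, 20, and 120 picoampere levels quoted:

```python
def call_segments(trace_pA, open_pore=120.0):
    """Label each current sample as open pore, A-segment, or C-segment."""
    calls = []
    for i in trace_pA:
        if i > 0.8 * open_pore:
            calls.append("open")      # no molecule in the pore
        elif i > 12.0:                # assumed: oligo(A) blocks less current
            calls.append("A")
        else:                         # assumed: oligo(C) blocks more current
            calls.append("C")
    return calls

# One simulated translocation event: open pore, A-level, C-level, open again
trace = [120, 119, 20, 21, 19, 5, 6, 5, 118, 120]
print(call_segments(trace))
```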
Here the two dimensions are the conductance and the time: time on the vertical axis and conductance on the horizontal axis. And you can see how you can discriminate different types of polymers by this method, one molecule at a time; each of these dots represents a single molecular event. And so we'll take a short break. And then when we come back, we'll talk about not molecular computing, but designing cellular computers, and revisit some of these same themes. Thanks.
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 6B_RNA_2_Clustering_by_Gene_or_Condition_and_Other_Regulon_Data_Sources_Nucleic_Acid.txt | The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare, in general, is available at ocw.mit.edu. GEORGE CHURCH: OK, welcome back. I'm sure you're all dying to know the answer, so y'all came back faster than usual. [LAUGHTER] So how do we get from the first top motif score to the second one? Just to spice this up, I'll show you two different algorithms as they were actually, historically, done, and the 100-fold improvement in speed and also in accuracy that comes from the change. So the first way-- and you can already see I'm negatively predisposed to this-- if you look at the motif, the winner, if you will, the first time around, it has certain base positions which are particularly high in information content. That is to say, they really dominate, and they probably are critical to finding the motif. If you had a way of, say, knocking out one of those from the sequence, from all the sequences which contributed to this motif, then it would greatly reduce your chances that you would find it again. And so that's what we are going to do in slide 47: we're just going to go through and pick on one of those bases and turn it into an X. An X doesn't really match any of the weight matrices. And so whenever you have a motif that overlaps it, it won't have a good score. And so you won't build up-- you won't have this transition. The Gibbs sampler won't go in that direction. Now, this has a couple of disadvantages from an accuracy standpoint, in that there may be some motifs that you really like that overlap the original motif slightly. And you'll miss those during the sampling.
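A minimal sketch of the masking idea just described, with all names and sequences hypothetical: after the first motif is found, knock out one high-information position of each contributing site by overwriting it with an 'X', so no later weight-matrix match across that position can score well.

```python
def mask_motif_sites(sequences, sites, offset):
    """Replace one high-information base of each found motif site with 'X'.

    sequences : list of DNA strings
    sites     : list of (seq_index, start) positions of the found motif instances
    offset    : the column within the motif with the highest information content
    """
    masked = [list(s) for s in sequences]
    for seq_idx, start in sites:
        masked[seq_idx][start + offset] = "X"  # 'X' matches no weight-matrix base
    return ["".join(s) for s in masked]

# Toy example: two sequences, each with one site; mask motif column 2.
seqs = ["ACGTGACGTT", "TTGACGTCAA"]
masked = mask_motif_sites(seqs, [(0, 4), (1, 1)], 2)
```

After this, a second Gibbs-sampling run over `masked` can no longer rebuild the first motif, at the cost (as the lecture notes) of also losing slightly overlapping motifs.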
So an alternative way of looking for these sorts of things, rather than taking the best continuous sampling in this Xed-out version, is instead to maintain a list of all the motifs you've found up to this point. In this case, we have just one. We were using AlignACE to do this multi-sequence alignment by sampling-- but now, as you're going along, building up, initially, a random motif, you compare it to the first winner. And you say, does this motif that's emerging out of this random process look at all like the previous one? If it starts to look convincingly like the winner before, you know where it's going to go. It's just going to get more and more like that. So you might as well quit early. And so what you do now is you haven't Xed out any particular base. All the information is there. You can take any kind of motif that expands and changes the columns and so forth. And if you're building up towards a motif you've seen before, you can reject it. This process has the dual advantage of now allowing you to get overlapping motifs that might have a different enough column structure or different enough weight matrices, or are slightly offset, so that they really are different motifs. You can find those. The algorithm that you use to compare these, we'll use a couple of times today. We'll call it CompareACE, for comparing these consensus elements, or these weight matrices. And so this not only improves your ability to discriminate related but statistically separable consensus elements, it also gives about a 100-fold increase in speed. Because you can stop 100 times earlier in this motif sampling, which-- once you lock into a motif, you go and you go until you really get the best possible score. But now you can reject these weaker ones earlier on.
Now, you may have been wondering all along-- you probably have an intuitive feeling for what the MAP score is. And at the end of this, you're not going to be able to rederive it from first principles, because I'm not going to go into it in that depth. But I want to expose you to some of the terms that are involved in this maximum a posteriori score. Of course, the hero of any kind of scoring function is the weight matrix, the actual number of counts of As, Cs, Gs, and Ts you have at every position, in every column, in this matrix. Now, remember, we've critiqued this already: in the typical weight matrices, there is no codependence of columns the way there was in RNA secondary structure or in CpG islands or other Markov chains. These are independent columns. So the key player, the hero here, is f sub jb. This is the weight matrix. And this is not a frequency; this is the actual number of occurrences of each base-- A, C, G, or T, which is b-- at position j in the matrix. These can be the active columns, and the number of occurrences is just the sum over those columns. We've already been talking about how the width of the motif can include some columns that are basically on and off-- columns that you believe are significant and those that aren't. So the number of columns, c, is less than or equal to the width. So for example, when we were doing the GCN4 example, we had a width of 10, or it might at one point expand to a width of 11, with 10 active columns, c equals 10. You'll recognize-- here's the star, the f sub jb here. And you're adding these pseudocounts here, these betas. Remember that every time you have a danger that-- because you have a limited database that you're looking through, a limited number of actually observed sites-- you could get a number of sites equal to 0. And you don't want to have zeros in there.
Because you're basically acknowledging that, because you did limited sampling, it could have been 1. If you had sampled one more, it could have been 1. And so an estimate might be that you add another pseudocount. And this can be represented here. These gammas you can think of as kind of like factorials. And you're taking products; the capital pi means a product. And then, I may have mentioned that you might want to take into account the background levels of the bases. If you get, say, a motif that's just a string of As, and you're doing it in a genome that's very AT-rich, then you want to account for that. It's less surprising to find a string of As there than in a genome that's GC-rich. And that's what this g sub b is: the background genome frequency for base b. Now, in double-stranded DNA, the frequency of As is going to be equal to the frequency of Ts. But in a single-stranded RNA virus, for example, there's really going to be an independent set of backgrounds for Gs, As, Ts, and Cs, or Us. So this gives you some flavor of what's in the MAP score. A greatly oversimplified version of the detailed slide 49 is in slide 50-- laughably oversimplified. It's basically the overrepresentation of these sites that we're talking about. You're giving a higher score if there's an overrepresentation in that learning set. It tells you nothing about the rest of the genome. It could be that they're overrepresented in the rest of the genome, too. And that's what we're going to go into next. But you get a bonus for the number of aligned sites and the overrepresentation of those sites. That's what the MAP score is. STUDENT: Does the background [INAUDIBLE]?? GEORGE CHURCH: Hmm? STUDENT: Does the background [INAUDIBLE]?? GEORGE CHURCH: The background is ignored in this oversimplification, but it's explicit in slide 49. You really should ignore this one and think more about the previous one.
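The ingredients just named — the f sub jb counts, the beta pseudocounts that keep log(0) out, and the g sub b background frequencies — can be sketched like this. This is not the actual MAP formula from the slides, just a minimal log-odds weight matrix built from those same pieces, with illustrative numbers:

```python
import math

BASES = "ACGT"

def log_odds_matrix(counts, beta=0.5, background=None):
    """Turn raw per-column base counts (f sub jb) into a log-odds weight matrix.

    counts     : list of dicts, one per active column, e.g. {"A": 7, "C": 0, ...}
    beta       : pseudocount added to every cell so a count of 0 never becomes
                 log(0) -- one more sampled site could have made it 1
    background : genome frequency g_b for each base (default: uniform 0.25)
    """
    if background is None:
        background = {b: 0.25 for b in BASES}
    matrix = []
    for col in counts:
        total = sum(col[b] + beta for b in BASES)
        # log of (smoothed frequency / background frequency)
        matrix.append({b: math.log((col[b] + beta) / total / background[b])
                       for b in BASES})
    return matrix

# Two active columns from a toy alignment of 8 sites.
wm = log_odds_matrix([{"A": 7, "C": 0, "G": 1, "T": 0},
                      {"A": 0, "C": 0, "G": 8, "T": 0}])
```

Note how the zero-count cells stay finite because of beta, and how an AT-rich background would automatically shrink the bonus for A-rich columns, exactly as described above.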
But the main point that I'm making is that the overrepresentation is just half of the story. The other half is the specificity. In other words, if it's present in your learning set or your enriched set, you want to next ask, is it present in the rest of the genome? Because if it is-- and maybe that's what you mean by background-- if it's present in the rest of the genome, then that's not great. And so what we're going to do-- this is an example of running through, after you get the first motif, lots of other motifs. So here's the best motif of all for a larger set than the one we were looking at. We were looking at seven, but there are 116 of these in [INAUDIBLE]. When you run through the whole thing, the top MAP score, the one that's most overrepresented, is this A-rich one. Now you want to ask, is that specific to the amino acid biosynthetic genes? Or is it found all over the place? And we're going to get to how you measure that in just a moment. The one we were kind of highlighting here, GCN4, is kind of modest-- this is not a rank-ordered list; this is kind of in random order. But we'll show how you can do the rank ordering, as well, how you can order this. But you see all kinds of motifs, some that are stretched out, different compositions. So to evaluate motif significance, we have these five examples that we'll go through. There's the specificity that I've been talking about, which will be the subject of the next slide. That little arrow means the slide that's coming up next. Group specificity-- is this specific to the group that you found by clustering, or is it all over the place? Functional enrichment-- we talked about this a couple of times. Are the genes that you're finding in the cluster, the genes that you're finding this motif in front of, enriched by some fairly objective criteria? Positional bias-- are the motifs you're finding in a particular position in the upstream elements? Because they have a position in the promoters or enhancers.
Does the motif that you found have interesting symmetry properties, as you might expect from proteins which bind as multimers? They might have inverted or tandem repeats, where the elements either point towards each other or lie in tandem. And are the motifs that you're finding related in any way to motifs that were known before, from more complicated biochemical and genetic assays? So for the first one, the group specificity: in order to ask whether the motif you found in a small subset of the genome is present in the rest of the genome, we need a way of scanning. Now, when we introduced weight matrices in the multi-sequence alignment lecture, and we said we would put off the motif Gibbs sampling until today, we already introduced one really trivial way of scanning the genome, where you basically take the weight matrix, move it to each position, and do a simple sum. That's basically what this is: a simple sum. But we're taking log ratios of these counts. Again, the hero is the counts, just like before-- it was f sub jb before, now it's n sub lb; slightly different nomenclature taken from different articles, but it's the same idea. This is the number of occurrences of base b at position l. This is the weight matrix as counts, not as frequencies. And in the denominator is the number of occurrences of the most common base. Now, this could be b, or it could be some other base. But this is the most common one, so it's going to tend to be larger than or equal to the n sub lb in the numerator. And you're going to sum over l, over the length of the binding site-- this was w in the previous nomenclature. And you're going to just scan this along the entire genome, stepping it over one base at a time and coming back on the opposite strand. And that's going to be ScanACE. So you've got AlignACE, and that compares to ScanACE. Now we're going to scan to check for specificity. Again, you've got these 0.5's. You can think of these as pseudocounts. They keep the zeros out.
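The scanning score just described — a sum over columns l of log((n sub lb + 0.5) / (n sub l,max + 0.5)), slid one base at a time along both strands — might look like this sketch. The helper names are mine, not ScanACE's actual code:

```python
import math

COMP = str.maketrans("ACGT", "TGCA")

def site_score(counts, site):
    """Sum over columns l of log((n_lb + 0.5) / (n_l,max + 0.5)).

    The 0.5 pseudocounts keep log(0) out; a perfect match to the most
    common base in every column scores exactly 0, everything else less.
    """
    score = 0.0
    for col, base in zip(counts, site):
        n_max = max(col.values())
        score += math.log((col[base] + 0.5) / (n_max + 0.5))
    return score

def scan(counts, genome):
    """Slide the count matrix one base at a time along both strands,
    yielding (position, strand, score) for every window."""
    w = len(counts)
    rc = genome.translate(COMP)[::-1]  # come back on the opposite strand
    for strand, seq in (("+", genome), ("-", rc)):
        for i in range(len(seq) - w + 1):
            yield i, strand, site_score(counts, seq[i:i + w])

# Toy 3-column matrix whose consensus is ACG, scanned over a short sequence.
counts = [{"A": 9, "C": 0, "G": 1, "T": 0},
          {"A": 0, "C": 10, "G": 0, "T": 0},
          {"A": 0, "C": 0, "G": 10, "T": 0}]
best = max(scan(counts, "TTACGTT"), key=lambda hit: hit[2])
```

In the real use, the top-scoring windows genome-wide become the "best hits" set whose overlap with the cluster is tested for group specificity below.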
You don't want to have a logarithm of 0. Now, let's walk through a particular biological data set. This is a cell cycle data set going through two cell division cycles. There are 15 significant time points along the horizontal axis. And this is a particular cluster. Out of 30 different clusters, this particular one has a peak just before the S phase-- the phase in the cell cycle where you get replication, where you actually synthesize a new set of DNA molecules. You duplicate it, just as we talked about in the first lecture. And since you've recorded a time series through two cell division cycles, you expect there to be periodicity. Genes that are required in the first S phase, you expect to be required in the second S phase. That's the underlying thought behind the design of this experiment: you synchronize all the cells. Normally, cells are all over the place. Some of them are in S. Some of them are in M, which is where the metaphase chromosomes separate. But here, you synchronize them all up by a method we'll talk about in just a moment. And then this is that diagram where we have the number of standard deviations from the mean-- this is a normalized signal of the RNA expression-- on the vertical axis, and the horizontal axis is this time series, categorically described as G1 (gap 1), S (synthesis), G2 (gap 2), M (mitosis), and so on. Now, what do we learn from this particular cluster? This cluster has 186 genes in it. That means the RNAs for those 186 genes were in a nice envelope. It doesn't mean that it couldn't strictly be 185 or 187; there may be some outliers on the edges. But that's the number that we're going to be doing these calculations for. The way we're going to evaluate it is, first, whether the functional categories make sense. Is there an enrichment for a particular functional category?
Those biologists among you may have already had a hypothesis of what functional categories should be enriched. If these are the RNAs that are going to peak just before S phase, just before you need them for DNA synthesis, maybe they encode genes that are involved-- genes [? possibly ?] involved-- in DNA synthesis. Sure enough, that's the most striking observation: in this database, this MIPS database of functional categories, you have 82 genes that are described as involved in DNA synthesis. And this cluster of co-expressed genes that peak at S has an overlap of 23 with that. And that may not sound like a huge overlap-- a few-- but it's very statistically significant. The probability of that occurring at random, out of the 6,000 or so [INAUDIBLE] genes, is 10 to the minus 16; having this overlap of 23 is very significant. So that's the first test. And we'll show in just a moment how we did that calculation. But that's your first test: there is a functional category enrichment. Next, you find the motifs. You use AlignACE. You go through. You find the top motif. It's MCB. You go and you find the very close second-highest motif. It's SCB. These are not chosen by hand. This is all done algorithmically. The only input-- there's no literature input except for checking these functional categories-- for finding the motifs, it's just the microarray data and the sequence upstream from the genes that come out of this cluster. That's how these were found. Now, they have to have names. MCB and SCB-- we could have just called them x and y. But these names do mean that the CompareACE score to something that was in the literature is good.
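That 10 to the minus 16 figure is the hypergeometric tail that the lecture derives later: with roughly 6,000 genes total, an 82-gene functional category, a 186-gene cluster, and an observed overlap of 23, you sum P(overlap = i) from 23 on up. The gene totals are the lecture's round numbers; the code itself is my sketch:

```python
from math import comb

def hypergeom_tail(N, s1, s2, x):
    """P(overlap >= x) when drawing s2 genes without replacement from N total,
    of which s1 are in the category.

    Each term is C(s1, i) * C(N - s1, s2 - i) / C(N, s2); the sum runs from
    the observed overlap x up to the largest possible overlap.
    """
    return sum(comb(s1, i) * comb(N - s1, s2 - i)
               for i in range(x, min(s1, s2) + 1)) / comb(N, s2)

# 82 DNA-synthesis genes, 186-gene cluster, overlap of 23, ~6,000 genes total.
p = hypergeom_tail(N=6000, s1=82, s2=186, x=23)
```

With these round inputs the tail lands far below any reasonable significance threshold, in the same ballpark as the 10 to the minus 16 quoted above, even though the exact gene totals in the original study differ slightly.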
But a very profound accomplishment of this is that now-- unlike the literature, where it's rather challenging to find the connection from a conclusion, like this motif is likely to regulate this gene is likely to be enriched in this class of genes-- here, it's directly traceable. You can see the logic that connects this motif MCB to this cluster via Gibbs algorithm. And this cluster is traceable back to the RNA profiles on the microarray. This is all a very simple, comprehensive study. But now you want to ask, is this motif-- we know that it's got a high MAP score. It's highly enriched. And it's highly unlikely that a motif this strong would have occurred in this size cluster of genes. But what you want to ask, is it specific? This is the thing we've been putting off for a little while. Is it specific? And if you look at the 30 clusters when you cluster this whole set of genes that vary during the cell cycle into 30 envelopes, this particular one, envelope number 2, cluster number 2, which is displayed in the upper left, down in the lower left, you can see that all the MCB motifs, almost all of them, are found in cluster 2 when you use ScanACE, and very few of them are found in the rest of the genome-- similarly for the second-most impressive motif by AlignACE MAP scores, SCB. And it's also specific to cluster 2. The fact that you're seeing this non-random enrichment for functional category, this non-random enrichment for a motif, this non-random specificity of that motif and a second motif, kind of tells you that everything is working, that your RNA data collection is working-- which, you may spend a lot of money to get to this point, you should be gratified-- and the clustering is working, and the motif finding and specificity scores, all this is working. It doesn't mean that it's absolutely perfectly tuned and everything, but it's giving you feedback that you're taking a step in the right direction. 
Similarly, the position of this motif in the promoter is non-random. You see this little spike that's coming up just before the-- it could be the transcription or translation start. In this case, the ATG is the translation start. And it's non-random. How do we measure each of these things? Well, before we get to that, I'll give you two more examples in the same format, just to show you that you get different motifs when you go to different clusters-- two more clusters. The next one is also periodic. It has a peak now slightly shifted to the right from the previous one, and it repeats at exactly the same periodicity, as if they're part of the same periodic function-- which is exactly how the experiment was designed. The difference between this and the previous one is that now the two top motifs are not previously known motifs. That doesn't mean they're any worse. But they're new ones. And the way you evaluate whether they're specific is the same way we got the specificity before. Now they're in cluster 14, which is this cluster up in the upper left. And both of them are about as specific as the ones in the previous slide. The functional category is not as impressive. It's 10 to the minus 6-- sorry, 10 to the minus 4, no, 10 to the minus 6-- instead of the previous one's 10 to the minus 16. Now, this is still statistically significant. But it could mean that this particular way of functional categorization which the curators use may not be ideal for this particular regulatory mechanism, this regulon. So this may be a discovery both of a new regulatory set and of two new motifs. But in order to establish that, you'd need some experiments to really, say, knock out these motifs and see what the consequences are. Now, the third cluster illustrates yet another set of ideas.
Here, even though the experiment was designed specifically to enrich for the most abundant-- or sorry, for the most periodic-- gene expression, there were inevitably features of the experimental design which were not periodic. In particular, when you synchronize the cells-- you force them all to be in synchrony for the cell division cycle-- you did that by taking a temperature-sensitive mutant in the cell division cycle, say CDC15 or 28. A temperature-sensitive mutant requires that you raise the temperature to shut down the function of that gene by unfolding the protein. And then you have a temperature shift to allow them to go back into the cell cycle. So you're going from high temperature to low. That's one thing. And that temperature perturbation essentially decays rapidly, and then you have the residual of it going out in time. In addition, there are all the physiological effects. You had all these cells waiting in this funny physiological state, and then that decays with time. So that's not cyclic. That perturbation is a linear trend or a decay. And sure enough, you find examples of clusters which are not periodic. This one peaks in the second cell cycle but not at the corresponding point of the first cell cycle. And in fact, most of the 30 clusters, when you divide the entire expression space up into 30, are like this. They're not periodic. But that's OK. Because what you're looking for is clustering, as if these were different conditions or different time points. It doesn't matter what it is. They are coexpressed, going up and down together, possibly due to serendipitous factors. But you can still apply the same criteria for asking whether you're impressed with this cluster or not. Does it have enrichment for a functional category, in the upper left-hand part of 556? And wow, it really does. This is the most impressive one of all 30 clusters.
It has a probability of 10 to the minus 54 that you would find this degree of overlap between the functional category-- think of this as a Venn diagram of overlapping circles-- the overlap between the class of ribosomal proteins and the class of this particular cluster, which is not periodic, is amazingly significant. In addition, you find two motifs. The top two motifs are highly enriched-- that's what the Schneider information-content logo means-- and highly specific-- that's what the bottom line means. It's present in cluster 1 and very little in any of the other clusters, by ScanACE using the motif matrix. So these are three clusters, each with a different story. The first one was two known motifs. The second was two unknown motifs and possibly a new functional category. The third one is a whopping match to a functional category, one known and one unknown motif, and the whole thing non-periodic, even though the experimental design was periodic. So now we've shown that you can quantitate all these things that are often casually treated in the discussion section of biological papers. Here, they've all been treated quantitatively. But how do we do that? What is the algorithm behind each of these things? We won't talk about how we measure periodicity, but you can imagine that you can measure it. And we have. How did we measure the specificity and the functional assignments? It turns out that's almost the same statistical function for those two things-- functional assignments and group specificity. Positional bias is a different one. And CompareACE we can use not only for looking for previously known motifs, as we did in the AlignACE algorithm itself and as we do when we want to look through databases of motifs; we can also use it to ask how the motif looks in symmetry onto itself. So this is how we do each of these. We have a choice.
When we ask whether the intersection of two subsets of all the possible genes-- let's say our cluster and a functional category, or a cluster and all the best hits with ScanACE-- overlaps in a significant way, we can think of that as sampling from a population. The question is, are we sampling with replacement or without replacement? It's an easy thing to get confused about. And I urge you to just look back at the definitions of these offline. A mistake was actually made in the literature by an author who should have known better, because he got it right the first time and wrong the second time. But the correct choice-- and in fact, the one in widespread use-- is the hypergeometric. Because we are actually, here, sampling without replacement. When you do that, with two subsets of the big set-- the big set is n, and the two subsets are s1 and s2-- you have this simple combinatoric, where you have s1 choose x, where x is the intersection between the two sets. And this will be much clearer in the next slide, where we have a diagram to go with it. But this is the chance of getting exactly x. In the next slide, we're going to show why we need to consider the possibility that it could be x or larger. Now, this is the diagram. n is the total number of genes in [? yeast, ?] somewhere upwards of 6,000. And then subset 1 might be the number of genes in the cluster that you got out of your microarray experiment. And s2 is the number of genes found in the functional category-- this is the MIPS database. How surprised are we that we found x as the intersection between those two sets? Well, let's say x were 1, and the two sets were about 100 each. That's not too surprising, right? But the hypergeometric formula, if you plug in exactly 1, says it's very surprising that you got exactly one. The reason is that it could have been bigger-- what we're really saying is that it's significant that they overlap at least that much.
Well, if it's 1, we have to consider 1 or greater. Because we're basically saying it could be 1 or greater. And so what you have to do is a sum from 1 up. And what you'll find is that that's very likely, not surprising. On the other hand, if we had a very significant overlap-- in other words, you've got an enrichment, your cluster is enriched for this functional category, because you've got, say, 100 in s1 and 100 in s2 and the overlap is 99-- then you would have been surprised by 99, and you would have been surprised by 100. And both 99 and 100 together are still very rare. And so you're surprised. So the sum has to go from whatever you've got, on up. And that's what this is. And that's easy to forget, too. People might just say, oh, this particular intersection is surprising. So you have to have that sum. Now, I'm going to go from slide 59 to 60. And graphically, there's going to be relatively little difference. But it's a radically different thing that we're doing. Now we're doing the group specificity score. This is the motif you found in the cluster s1. You looked through s1; AlignACE found your motif. Now you want to ask whether that's specific or not. So you search through the entire genome, and you pick the top 100 matches. And those are upstream of the genes in s2, subset 2. If there's a huge overlap of s1 and s2, which we'll call x, then you're going to be surprised. And so again, you take the sum over this hypergeometric distribution. And if that's a small probability, then that's a measure of how surprised you are. So if that's 10 to the minus 6, then you're very surprised. Now, those were hypergeometric. But positional bias, now, is binomial. And you should remember the binomial: it's this combinatoric term, where t is the total number of sites, and i is the count you're summing over. And m is the number of sites that are in the most enriched window.
Now, you can take a window any size you want. If you make it too small, you're going to get poor sampling statistics. If you make it too big, it'll include the entire 600 base pair non-coding region. So you can try different windows. But basically, what you're looking for is how surprised you are that you have m or more sites in that window. If you're surprised by 10 sites, then you'd be even more surprised by more than 10. So you have to take the sum. So it's a sum just like the previous hypergeometric ones, but now it's over a binomial. And remember, the binomial is this combinatorial term, times a probability to the i power, times 1 minus that probability to the (total minus i) power. So this should be very recognizable. This is the chance that you have enrichment in a particular part of the promoter. Now, comparing motifs-- we've mentioned this already-- we use it in the AlignACE algorithm itself to cut our losses when we start refinding the same motif again. We use it to find whether there were similar motifs. And through experience and training sets, you find that the CompareACE score, which is just kind of like a correlation coefficient, is more and more believable as it gets closer to 1. And about 0.7 is where you get statistically significant matches to other motifs. And here's an example where you can actually treat these as similarities to other motifs, where 1 is perfectly similar. Along the diagonal, any motif is similar to itself, by definition. And you can build up a little matrix of similarities of motifs. And then you can do hierarchical clustering of motifs. And if a and b are sufficiently close together, then you might consider that they're the same motif, or that the transcription factors that bind those DNA motifs may be related at the protein sequence level.
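That positional-bias tail — C(t, i) times p to the i times (1 - p) to the (t - i), summed from the observed m on up — in a quick sketch. The window size and site counts here are made-up illustrations, not the lecture's numbers:

```python
from math import comb

def positional_bias(t, m, window, region):
    """P(m or more of t sites fall in the best window of a promoter region).

    Under the null, each site lands uniformly in the region, so a given
    window catches a site with probability p = window / region; the tail
    sums the binomial from the observed count m up to all t sites.
    """
    p = window / region
    return sum(comb(t, i) * p**i * (1 - p)**(t - i) for i in range(m, t + 1))

# Illustrative: 30 sites genome-wide, 10 of them piled into one 50 bp
# window of a 600 bp upstream region.
p_val = positional_bias(t=30, m=10, window=50, region=600)
```

As the lecture warns, too small a window gives noisy counts and too big a window (the full 600 bp) makes p equal 1, so in practice you try several window sizes.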
These are predictions that you might make from this kind of clustering based on comparing weight matrices. Now, if you compare a motif to itself, what does that mean? If you just compare it to itself over its entirety in the same orientation-- that's what the previous one did-- you'll get a comparison score of 1. However, if you flip it-- remember, DNA being double-stranded, unlike proteins-- when you flip it and compare the weight matrices, now you're asking whether it has twofold symmetry. And this is another very profound connection, I think. The weight matrix is kind of a summary of an alignment of many sequences of evolutionary significance or, in this case, regulatory significance. And that is actually conceptually related to a very different thought, which is that the three-dimensional structure of the protein-nucleic acid interaction has some symmetry in it. If you have a protein dimer or a duplicated protein domain, and the motif is similar to its own reverse complement, that means that, in three-dimensional structure, the two protein motifs are related by a dyad symmetry-- a twofold axis, where you rotate 180 degrees in three dimensions. On the other hand, if the halves of the element, or thirds of the element, are related by a direct translation in motif space, in the multisequence alignment, then that means you have a direct repeat of DNA-protein interactions, where the helical translation and rotation of the axis is reflected in the protein-DNA structure. So anyway, there's a connection between motif matrices and three-dimensional structures. And here's how it plays out when you do a CompareACE, where you actually go and compare, column by column, the weight matrices of motif 1 with itself in reverse complement.
And you can see these three PRRs, in [INAUDIBLE] taken from bacterial genomes, are very significant when you compare them to their reverse complements. That means that, very likely, there's a protein dimer-- or maybe a closely sequence-related heterodimer-- which binds with 180-degree symmetry. On the other hand, here, when you compare CPXR to itself reverse complemented, you get a very poor CompareACE score. It means it doesn't have this dyad symmetry. However, if you took the two halves and compared them-- I don't show it-- no doubt you would get a very strong CompareACE score between the two halves, indicating a direct repeat in sequence space, and sort of a helical repeat in three-dimensional structure space. I think this is a very powerful connection between these two. And that, of course, can be quantitated. Now, behind the scenes, all along, you've had to have some confidence in what the AlignACE scores meant. And you get this by running a test set. The test set has to be composed of negative controls, positive controls, and a very large set of functional categories, from which we've shown a few examples. So negative controls can be randomly selected genes. And you want to try different cluster sizes to see the effect of cluster size on the whole algorithm. You might be able to predict this completely theoretically. But it's very gratifying, whether you can or can't, to run it through exactly the same algorithm, the same software, with randomly selected sets. Now, this is very expensive. Because you need to generate a lot more randomly selected sets than the actual test sets. And then for positive controls, there are actually relatively few of these. These are cases where you have really well-defined transcription factors, which additionally have to have five or more known sites.
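The dyad-symmetry check just described can be sketched as a correlation between a motif's count matrix and its own reverse complement. CompareACE itself works on aligned weight-matrix columns; this simplified stand-in, with a made-up matrix, just computes a Pearson correlation over all cells:

```python
import math

BASES = "ACGT"
COMP = {"A": "T", "C": "G", "G": "C", "T": "A"}

def reverse_complement(matrix):
    """Reverse the column order and swap each base's count for its complement's."""
    return [{b: col[COMP[b]] for b in BASES} for col in reversed(matrix)]

def pearson(m1, m2):
    """Pearson correlation over all 4 x width cells of two count matrices."""
    xs = [col[b] for col in m1 for b in BASES]
    ys = [col[b] for col in m2 for b in BASES]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# A perfectly palindromic motif (GAATTC, the EcoRI site) equals its own
# reverse complement, so comparing it to itself flipped scores 1;
# a direct repeat would score poorly on this flipped comparison.
palindrome = [{b: (9 if b == base else 1) for b in BASES} for base in "GAATTC"]
symmetry = pearson(palindrome, reverse_complement(palindrome))
```

A high flipped-comparison score suggests a dimer binding with dyad symmetry, while a high unflipped score between the two halves would suggest a direct repeat, mirroring the CPXR discussion above.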
Because you need to have five or more sites in order for AlignACE to get a grip on the problem and produce a nice multisequence alignment. OK, so let's go through, first, the results of the functional categories-- 248 functional categories-- from these different databases and then go through the negative and positive controls. So here are some of the friends that we happen to find-- now, this is all done from functional categories. This was not done from microarrays. But here are some of the friends that we had found earlier in that cell cycle microarray data. RAP1 was the ribosomal one. GCN4 we've seen before. And MCB was the one that was in the S phase. And you can see these have been ranked. And remember, you could rank them by three different methods. There's the MAP score, which is the unlikeliness of finding this good an information content motif in the learning set. It doesn't tell you about specificity. That's the next column to the right. Next to MAP, there's the specificity score, which means that it's present in that functional category and not in lots of other parts of the genome-- and remember, that was done by the hypergeometric on the intersecting Venn diagram. And then there's the positional bias-- that was the binomial-- which is, how non-randomly positioned is it in the promoters? And so this is ranked by the specificity. And you can see RAP1 is very specific to that particular functional category. Now let's rank them by positional bias. And you get a very different story. The ones that were on the top of the previous one are off the chart here. MCB just barely makes it as number 14. And this A-rich sequence logo, which you might think is something that is all over the place-- and in fact, it is. It has a pretty poor specificity score. It has a high MAP score. Its positional bias is astronomical. It is found in a particular place in many promoters throughout the genome.
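The specificity score just mentioned, the hypergeometric on the intersecting Venn diagram, can be sketched directly: how surprising is it that a motif found in K promoters genome-wide hits k of the n genes in a cluster? The function below is a generic hypergeometric tail probability; the genome, motif, and cluster counts at the bottom are hypothetical, purely for illustration.

```python
from math import comb

def hypergeom_tail(N, K, n, k):
    # P(X >= k) when drawing a cluster of n genes from a genome of N genes,
    # of which K carry the motif in their promoter.
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Hypothetical numbers: a genome of 6,000 genes, a motif present in 60
# promoters genome-wide, and a 20-gene cluster in which 5 genes carry it.
p = hypergeom_tail(6000, 60, 20, 5)
```

The smaller this tail probability, the more specific the motif is to the cluster, which is exactly how the specificity ranking above works.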
So this is a way that you quantitate each of these three things, the non-randomness in a learning set, the specificity for that set, and the positional bias within promoters, in general. So what are the negative controls here? Clusters of size 20, 40, 60, 80, 100 open reading frames, meaning genes for which you might have functional categories. And this allows you to calibrate the false positive rates. And what you do is you're looking for-- we could use any criteria here. We said that a MAP score might, on average, be 0 if it's random; here we require it to go up above 10, and we require an enrichment specificity score of 10 to the minus 5 or lower. And then we apply these two criteria to the functional categories and to the random controls. And the rate for the functional categories is gratifyingly higher than for the random controls. And so we can say that about half of the functional category runs are likely to be real motifs. Of these, about half of those are known. And so the rest are probably newly discovered motifs and newly discovered regulons-- regulatorily connected genes. Now, the positive controls, as I said, are harder to come by. There are 29 transcription factors. These are incompletely curated. One of the boons that will come from this systematic analysis of microarray data and functional categories will be a lot of new positive controls. But until we get them, we can't use them. So this is what one can use right now. And in 21 out of 29 cases, an appropriate motif was found-- meaning you have to basically rerun AlignACE, because you can't really use the weight matrices from the literature; they were derived by slightly different methods. But you can use those to prime AlignACE-- it's a trivial thing for AlignACE to now derive a weight matrix-- and then you compare it to weight matrices that come out of the tests. And 21 out of 29 work. And of the eight-- the difference between these two is eight-- five were actually in an appropriate functional category.
So depending on how you interpret these two facts, you can say the false negative rate is 10% to 30%-- not great, but neither the positive control set nor the algorithm are perfect here. Now, where do we go from here? We need to both generalize and to reduce the assumptions so that we can discover new things. So for example, one of the assumptions we've been making is that motifs act in isolation. We've been discovering motifs one at a time. We'll find the best one. We'll cross it off our list, or we'll filter out subsequent ones. We'll find all the rest. But what may really be statistically significant, and we may be missing by looking at it one at a time, is motif interactions. And [INAUDIBLE] and coworkers have pursued this with a vengeance. And I think a very exciting direction this can go is how two or three or more motifs can interact to produce coregulation. Then we have these DNA motifs that come out of this microarray data. But what's binding to them? How do we find that connection? Well, one way of many is in vivo crosslinking. There are also so-called one-hybrid assays and so forth. But just think of this conceptually. As you're catching it in the act, you grab it. And then you do proteomics to find which proteins are connected to which nucleic acids. And the final direction this might be going is we've said that the different columns in the weight matrices are independent. And we've already seen multiple examples in the past-- in fact, I emphasized them on purpose-- where the columns are not independent: in RNA secondary structure, in CpGs and so forth. And there's some evidence from this paper that the interdependence between columns might be something that you can question. So in summary, we've talked mainly about clustering and then where you go to check that your clusters are biologically significant, whether you've made discoveries and know the limits of your discoveries.
What are the false positive and false negative rates? How do you measure the specificity of your motifs? How do you measure the functional enrichment-- things that are treated casually in the classic literature? So I look forward to seeing you next week. Thank you. |
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 3B_DNA_1_Genome_Sequencing_Polymorphisms_Populations_Statistics_Pharmacogenomics.txt | The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: OK. Welcome back. We're going to go through a very specific example of an association study, illustrating this extremely important statistic, the chi-square statistic, in the simplest case. That simple case will be a single allele combination, two alleles and two possible phenotypic outcomes, HIV resistance and HIV non-resistance. So to set up this association study, which would be computational, let's talk a little bit about the biology here. All viruses need to get into your cells somehow. HIV has a number of proteins to which it binds on the surface of your cell. These proteins are not-- they're not designed to bind HIV. They do something else. This is in a family of chemokine receptors. This is involved in intracellular signaling, and so it's a receptor normally for these chemokines. But it is also a receptor for the virus. And in the human population, as we alluded to earlier, there are at least two alleles, two common alleles. One of them, the top one, capital CCR5, has this long, open reading frame. And this you could translate by the Perl script that you wrote for the problem set. The DNA sequence is in the middle here, and the protein sequence derived from it is on the top strand. And that's the capital CCR5. And the little delta or deletion ccr5 is below it. You've knocked out 32 base pairs, which is not an integral multiple of three. And as you know from last lecture, that means it's going to be read in a different frame. At every point downstream, this is going to be frame-shifted, and so you get a whole new set of amino acids for the entirety of the rest of the carboxy terminus.
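The frameshift logic above fits in one line: a deletion whose length is not a multiple of three shifts the reading frame for everything downstream.

```python
def shifts_reading_frame(deletion_bp):
    # A deletion shifts the downstream reading frame unless its length
    # is an integral multiple of three (one or more whole codons).
    return deletion_bp % 3 != 0

# The CCR5 deletion removes 32 base pairs: 32 % 3 == 2, so every codon
# downstream of the deletion is read in a different frame.
```

By contrast, a hypothetical 33-base-pair deletion would remove exactly eleven codons and leave the downstream protein sequence in frame.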
So this in black here-- so we're showing here in the single-letter code a somewhat realistic folding of the-- schematic folding of the protein, showing here disulfide between two cysteines. Transmembrane region, we have these hydrophobic alpha helices. And finally, in the black amino acids, the C terminus, the end of the protein are all substituted in this deletion mutant. Now that is sufficient to cause resistance to the virus. It probably has some effect on its ability to bind chemokines or react to them in some other way. And presumably, this is not-- there is some deleterious effect we do not know about. But in any case, we can assess this. We can make a genetic assay for this. And we can look in human populations for their resistance to HIV and for the presence of either two alleles of the resistance or two alleles of the susceptible. Now I'm kind of biasing you here in this. We haven't done the association study. I really shouldn't be referring to it as resistant or not, but I think it helps you visualize it. We'll be rigorous enough in the next slide. So here is, if you will, the big allele, right? It's the original. It's the non-deleted allele. So when you do a PCR assay where you're amplifying with two primers across the region that could be deleted or not, you'll tend to get a large amplification product. You prime synthesis until you get something which has 403 base pairs. And there's enough of it that you can display it on electrophoretic gel, and it migrates slowly because it's large. If you have a homozygote for the deletion allele-- we'll not call it the HIV resistance allele just yet-- then you'll get a 371-base-pair PCR product. And if you have the heterozygote, you'll get both, the large and the small allele represented on this electrophoretic assay. A very simple, very robust DNA PCR-based assay. So now let's ask whether one allele or the other is more abundant in people which are observed to be seropositive or seronegative. 
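The gel readout described above can be captured in a tiny sketch: the undeleted allele gives a 403-base-pair PCR product, the 32-base-pair deletion gives a 371-base-pair product, and a heterozygote shows both bands. The genotype labels here are just illustrative strings, not standard nomenclature.

```python
def genotype_from_bands(bands):
    # Classify a genotype from the PCR band sizes seen on the gel.
    bands = set(bands)
    if bands == {403}:
        return "CCR5/CCR5 homozygote"
    if bands == {371}:
        return "ccr5-del32/ccr5-del32 homozygote"
    if bands == {403, 371}:
        return "CCR5/ccr5-del32 heterozygote"
    raise ValueError("unexpected band pattern: %r" % sorted(bands))
```

Note that 403 - 371 = 32, the size of the deletion, which is why the two alleles separate cleanly on the gel.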
Seropositive means that they have circulating antibodies in their serum which react positively to the HIV virus. And so that's not necessarily, but is commonly, associated with being heavily exposed to the HIV virus, and therefore basically infected. Seronegative means that you are-- it's an indication that you're resistant. Now we could do this as a two-by-two matrix here of alleles versus outcomes, or we could do it as a three-by-two where we have genotypes versus outcomes. The three genotypes are big-big, little-little, and big-little heterozygote. But let's just do it-- just keep it simple in terms of alleles. If there is a selective advan-- or there's a perceivable association between the seropositive, seronegative, and the allele, you should be able to see it in both the genotype or just the allele. The allele is simpler. It's a two-by-two matrix. And so these are the data, just these four boxes here. The big allele is 1,278 observed negative and 1,368 observed positive for a total of 2,646. You can see that there are fewer total in the population surveyed of the deletion mutant. This is consistent with the claim that I made all along: it's about 9% in the general human population, and it's 9% of this population. That's good. But now you have to correct for that. You need to-- you can't just ask, is it more frequent here or not? You have to correct for the actual frequencies in the population. And the way that's done is you calculate another table, which is the expected number of each of these combinations of big with seronegative, big with seropositive, and so forth, under the assumption that it's completely random. You know the allele frequencies in the population, but say that they just randomly associated with whether it's seropositive or seronegative. You use these totals here and the frequencies in the population to generate this expected number under the random assumption. And then you look for deviation of the observed versus the expected.
So any deviation between the expected and the observed is what's going to count here. So what you want to do is take the difference between the observed and expected. We're working on a statistic here, a type of measure that will determine, how far from expectation are the observations on the left-hand side in this two-by-two? And so for every square in the two-by-two, you find a corresponding expected number and you subtract it. That's the starting point. But you don't care whether it's negative or positive. You want to make it positive. So the trick we'll often use is to take the square of that. You could take the absolute value; you take the square for the chi-square, oddly enough. And then you want to put it on some kind of standard scale so that when you do a chi-square for any kind of phenomenon, you'll be able to compare. And the way you do that is you divide by the expected number. That puts it on a standard scale: if these numbers were really huge, this would bring them down, and if they were very small, it makes the same kind of correction. And so you take the sum over all the squares in here, and that's 15.6. And that's saying that it deviates from the expectation by this amount, normalized as a fraction of the expected. Now, just a little sidelight on the chi-square: in order to determine the probability of 15.6 being significant-- what does it mean, looking at this 15.6? You want to turn this into a probability, because probabilities are the common language in which we can all share our surprise at this being different from expected, the null hypothesis that this is the same as expected. And so in order to evaluate that, you have to ask how much freedom there is-- degrees of freedom. This is jargon for, how many different ways can these two numbers vary? Well, since we know the total number, when we observe the number of CCR5, that fixes in the columns the number of deletions, right? Because it's just going to be the number of big alleles subtracted from the total.
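The computation just described-- expected counts from the row and column totals, then the sum of squared deviations over the expected-- can be sketched generically for a two-by-two table. The full four-cell observed table isn't reproduced above (only the big-allele row is quoted), so no study values are hard-coded here; plugging in the slide's four observed counts is what yields the 15.6.

```python
def chi_square_2x2(table):
    # table is a list of rows of observed counts, e.g. [[a, b], [c, d]].
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under the random (null) assumption:
            exp = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - exp) ** 2 / exp
    # (rows - 1) * (cols - 1) degrees of freedom; 1 for a two-by-two.
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df
```

For instance, the made-up table [[10, 20], [20, 10]] has every expected count equal to 15 and gives a chi-square of 20/3 at one degree of freedom.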
So in a way, the degrees of freedom is just one. You fixed this one number, and the other one is now known. And the same thing's true for rows and columns. So the rows minus one is just one, and the columns minus one is one. And so the whole degrees of freedom is the product of those for rows and columns, which is one. So you plug this in. If you look in your standard statistical-- or your favorite statistical book or software package, Excel or whatever, you plug in this chi-square value and this degrees of freedom, and you get this probability, which is very significant. Generally, better than 5% means that you will only be wrong one time in 20. Some people would prefer to-- this means you'll be wrong eight times in 100,000 or something like that. Very, very infrequently. OK. Now this is great. This is a two-by-two matrix. We found an association. We have a plausible molecular mechanism in the previous slide. But how was it that we pulled CCR5 out of-- this rabbit out of the hat? I mean, why CCR5? There are 40,000-some genes in the human genome. Yes? AUDIENCE: What is it? I was curious as to, what population is it? I mean, in which there's slightly more seropositive or seronegative people. This must be a particular-- GEORGE CHURCH: Oh, yeah. This is a case control study where you try to get a roughly equal number of negatives and positives that are matched for socioeconomic group and race and gender and things like that. So that's the setup. But then if there's not a huge risk ratio for the two alleles, then you'll expect them to be close to the population frequency, which means that you can't just look at it on inspection and say, oh, yeah, all the seropositive are the big allele. We wouldn't need a chi-square at that point. But this is pretty close to the population frequency, and so that's why we had to do a chi-square. OK. But the question that we're taking now is, how did we pick CCR5?
And I introduced this as being a putative receptor for the virus, but one of the first pieces of evidence that this was a putative receptor for the virus was this association study. So how was it that it hit the chemokine receptor? Well, you could say, well, because some kind of hunch about chemokines being involved in immunology, and immunology being important in fighting viruses, but that wouldn't suggest that it's an actual receptor. I think these are-- there's all kinds of inspired guesswork. There's biochemistry happening behind the scene, and so on. But let's just put that aside for the moment and take the more general case that you wanted to test not just this one hypothesis, that CCR5 is involved, but you've implicitly then tested or explicitly tested every gene. You've gone through every gene and you've taken either the most common alleles or you've sequenced your own genome and you've found the alleles that you have, whether they're common or rare. You don't care. And you want to ask, what's the association? Let's consider the upper right-hand panel here where we have some risk ratio, a fairly subtle one, 1.5. Remember, we were talking about risk ratios of 75 in the case of autism. This is just 1.5. Very subtle risk ratio, just like I think in the last one could have been a subtle risk ratio. And the x-axis is going to be the number of alleles that you're hypothetically testing. This could be related to the number of genes that you're testing, or it could be more than the number of genes that you're testing, because you can have more than one allele per gene that you might want to test. Is this allele of this gene important in this disease, or is this other one? For example, sickle cell is important, but various other hemoglobin mutations are not. This chemokine deletion is important and maybe another one isn't. Many chemokine mutations will be neutral. OK. 
So as you increase the number of genes and alleles from-- here's 10 to the fourth on up-- then the number of sib pairs-- so this is a simulation covering all kinds of experiments that you could do where you can use computational methods to help guide the design of experiments. If you did an expensive experiment and you just happened to use too few patients and too many alleles, then you may have misused your resources. You should have maybe done fewer alleles and more patients or something like that. And this provides some guidance here. But you can see that in order to get to a very large number of alleles, you need a fairly modest increase in the number of patients. And that's due to the exponential term that you have in these probability distributions. So you actually-- but nevertheless, it's a big deal cost-wise going up from, say, 400 patients to 1,600. Nevertheless, it's only linear in patients while it's exponential in the number of alleles you can test, so that's the good news. And you can see some of these other panels show the effect of varying other parameters. For example, here on the left you've got varying the population frequency of the allele. As you get to very, very rare alleles on the right hand of the horizontal axis here, then the number of sib pairs-- brothers-sisters, brother-brothers that you need-- starts to go through the roof. And now it's exponential on both axes, or that is to say that it's a direct relationship. Question? AUDIENCE: What's that z value over there? GEORGE CHURCH: Hmm? AUDIENCE: What's that z value of over there? GEORGE CHURCH: Oh, that's the population frequency that you're-- AUDIENCE: On the bottom? GEORGE CHURCH: Hmm? Well, on-- on the one we were just talking about on the left, the population frequency is the horizontal axis, and it varies from near unity to 10 to the minus nine. It's a very rare allele frequency. 
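The point that patients scale roughly linearly while testable alleles scale exponentially can be sketched with a simple stand-in: under a Bonferroni correction of the significance threshold for m tests, the required sample size grows like the square of (z at alpha/m plus z at the power level), and that critical value z grows only like the square root of log m. This is not the sib-pair calculation from the referenced paper, just a back-of-the-envelope version of the same effect under a normal approximation.

```python
from statistics import NormalDist

def z(p):
    # Upper-tail critical value of the standard normal for tail area p.
    return NormalDist().inv_cdf(1 - p)

def relative_sample_size(m, alpha=0.05, power=0.8):
    # Sample size scales like (z_{alpha/m} + z_{1-power})^2 when the
    # significance threshold is Bonferroni-corrected for m tests.
    beta = 1 - power
    return (z(alpha / m) + z(beta)) ** 2

# Testing a million alleles instead of one multiplies the required sample
# size by only a modest factor, because z grows like sqrt(log m):
factor = relative_sample_size(10**6) / relative_sample_size(1)
```

That modest factor is the good news in the figure: going from one hypothesis to a million costs far less than a million-fold more patients.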
On the right, the one we started with, the top right, we just picked 0.5, which would be very-- be around here on the left-hand end of the left-hand quadrant. So you pick one of these allele frequencies-- here, equal frequency of the two alleles in the population-- and that's where you do the rest of the simulation. You can think of all these panels as one multi-dimensional display, but I wanted to take them out one at a time. And actually, I did all these in Excel using the equations that are present in this reference that's given in the slides here. A good sign of a well-written paper is for a sort of average individual to be able to reproduce it. So we have how many-- so we've been talking about new polymorphisms. In fact, the question that came up during the break was: if selection and drift cause allele frequencies to fix fairly rapidly-- drift in small populations, even normal-sized populations, will cause fixation in fairly short evolutionary times, and selection will too if you have a high selection coefficient-- why do we have any allele diversity at all? Why isn't everything fixed at 100% for one particular allele? And the answer is mutation. And that's where all these new mutations or polymorphisms come from-- actually, these should probably be called mutations, because most of them have frequencies less than 1%. Well, here's a specific model which doesn't follow all the assumptions we had before, but we'll use a different-- we'll use a back-of-the-envelope calculation to give you a feeling for what is true for the-- what is likely to be the case for the human population. So for the human population, there's a little bit of unknown in some of these parameters, so you should take these all with a grain of salt. Actually, everything I say you should take with a grain of salt. But the number of generations that we've had since a bottleneck in the human population-- maybe there were as few as 10 to the fourth humans at some point or another. Maybe less.
But since that time, there have been about 5,000 generations. And during that time, our population has now grown to six billion people. That's the N. And the mutation rate, as mentioned earlier in response to a question, is around 10 to the minus eighth per base pair per generation. And the genome size is about six billion-- coincidentally, the same as the population size. Total coincidence. Then 10 to the minus eight times that number of base pairs means you have about 60 mutations or so occurring in a generation. So you've got a steady flow of new mutations. Now if you take that-- just roughly speaking, if you take that over 5,000 generations, you've got 60 times 5,000. You've got a very large number-- on the order of 3 times 10 to the fifth mutations that have accumulated. Assuming relatively little drift because of this exponentially growing population and relatively little selection, then this is the number that might have accumulated. You can do subtle corrections for this exponential growth. The mutations you got at the beginning will have higher frequency, but there'll be fewer of them, because the population was smaller. But anyway, you get the picture: the total number of mutations in any one of us that are new since 5,000 generations ago will be on the order of a few hundred thousand in your body, not all of them doing good for you. And for each of those rare mutations-- they might have a frequency of about 10 to the minus five-- there will be about 10 to the fourth people on Earth that will share that very rare mutation with you. 10 to the minus 5 sounds very rare, but when you multiply it by six billion people, there are a lot of people who share that with you. It's a new mutation. OK. So the take-home is-- and this is from this reference here-- a high genomic deleterious mutation rate: mutations accumulate over these 4,000-- some large number of-- generations.
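The back-of-the-envelope numbers above, spelled out. All of these are the lecture's rough assumptions: a six-billion-base-pair genome, a mutation rate of 10⁻⁸ per base pair per generation, about 5,000 generations since a bottleneck, and about six billion people today.

```python
genome_bp = 6e9        # base pairs in the (diploid) human genome
mu = 1e-8              # mutations per base pair per generation
generations = 5000     # rough count since a population bottleneck
population = 6e9       # people on Earth today

# New mutations arising in each individual per generation (~60):
new_mutations_per_generation = genome_bp * mu

# Accumulated along a lineage over all those generations:
accumulated = new_mutations_per_generation * generations

# People expected to share an allele at frequency 1e-5 with you:
sharers = 1e-5 * population
```

Even an allele at a "very rare" frequency of 10⁻⁵ is shared by tens of thousands of people once you multiply by six billion.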
And they would confound linkage methods, and they would confound assumptions that say that the common alleles-- common alleles are causative. OK. So let's say we've done an association study. Either we've done it on one gene, picked one out of the hat like CCR5, or we've done it on a full genome survey, 40,000 genes, all the alleles we know of. In order to do the latter, we had to have a big patient population. But let's say we've done that. Now we want to prove that that association is-- that great statistic that we got out, if we have a large enough patient population, is still just a statistic. What will constitute proof is, after we find the association, we make a copy of that mutation, isolated away from the three million other polymorphisms that are floating around in your body, and we do some kind of test. Ideally, we would make an isogenic pair of humans that only differ by this one mutation. That is not generally considered medically feasible or ethical. But somehow or another, you can do it on human cells or you can do it in a mammalian model system. But you have to make something that's close to isogenic, a copy of this mutation, and show that it has a phenotype that makes sense. And then, just to make sure you haven't introduced other mutations, the really careful scientists will then revert that one polymorphism and show that now you no longer have that phenotype. OK. So let's walk through an example. Here's the third example in this lecture of a very specific allele where we've shown the molecular basis of it, and it's the second example where it's not coding, and the second example where it involves a repeat. So this is just to-- I'm not giving you a random sample here. I'm specifically biasing this towards very interesting genes, very interesting traits that are associated with non-coding repetitive alleles. In this case, the trait that got the attention of these researchers-- it isn't necessarily proven that this is the relationship.
But the association that was studied was between anxiety-related traits, anxiety, and a polymorphism in the serotonin transporter. This is a Science paper. Now the next step in this is to-- OK, that was found associated. There were lots of other alleles randomly being moved around, because these are humans. We have no control over who mates with whom, or very little. So you get what you're given. You can do the survey as best you can. But now, to move it towards a mechanistic basis and a proof, you introduce just this one mutation. Now what is the mutation found in this case of the anxiety relationship in this serotonin transporter? It's upstream of the first RNA-encoding exon, exon one. It is a 44 base pair deletion in a repetitive region, in a region which might be responsible for promoting transcription. You have the short and the long alleles, just like in the CCR5 case. In this case, it's in a putative promoter element. And when you make this construct and you hook it up to a luciferase enzymatic activity in vitro, in cultured cells-- so you don't actually have to construct a mutant human-- you can now see the long allele always produces higher levels of expression than the short allele. And these little error bars on top of each of the measures show your statistical measure of standard deviation, as we did in the first lecture. And if the black mean, which is the height of the bar, is different from the white mean by more than a couple of these standard deviations-- which is the root mean square of the standard deviations of each of the measures-- then you call it statistically significant. And that's what these triple stars mean. It's the statistical shorthand for: this is statistically significant at the cutoff that we're using. Say, 5%. Now in this case, it's kind of showing off, because every one of them is statistically significant by two different-- completely different assays here.
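The rule of thumb just described can be sketched directly: call the difference between the long- and short-allele reporter means significant when it exceeds a couple of combined standard deviations, where the combined value is the root mean square of the two standard deviations. This is illustrative only; a real analysis would use a t-test that also accounts for the sample sizes.

```python
from math import sqrt

def roughly_significant(mean1, sd1, mean2, sd2, n_sds=2.0):
    # Combined spread as the root mean square of the two SDs, per the
    # rule of thumb in the lecture (not a formal hypothesis test).
    combined_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return abs(mean1 - mean2) > n_sds * combined_sd
```

So bars whose means differ by ten units with error bars of one unit each clear the threshold easily, while bars differing by one unit with error bars of three units do not.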
But the point is this does not prove that anxiety is meaningfully associated with this. But it does prove that this repetitive polymorphism is causally capable of differential transcription levels for a reporter gene. So by introducing it into a clean cellular system, you can test at least part of the mechanism that might be involved in the association that originally got your attention. Now as you can see, we're getting into a fairly mature and fairly new phase of human genetics. Some of the concepts that are used here will be useful in a variety of other systems where we have relatively little control over the genetics. But certainly in humans, there's a very large need. And where this happened historically is with Mendelian linkage, which would involve very large families with a complete pedigree, multi-generation, great-grandfathers and mothers, all the way down, hundreds of people in a family. And that's the problem, because there aren't that many diseases for which you have large families with simple Mendelian inheritance. Common diseases tend to be much more complicated, involving multiple genes simultaneously and small families. Then there's linkage disequilibrium, which depends on these common alleles where the population has gone through a particularly small bottleneck and where your common allele is a pretty good distance away from the causative one. But it allows you to map it into a ballpark, and then you hunker down and do the more expensive testing of hypotheses, sequencing and looking for potentially causative alleles. Once you find a potentially causative allele, something that looks suspicious, maybe something that's in a coding region that's conserved-- that's the first priority, despite all the counterexamples I've given. And then you go ahead.
But the problem is that you're at the mercy of the recombination that might have occurred in that population. And you get to very interesting populations, like people of African American descent, where the population is very old, hasn't passed through a population bottleneck. The useful linkage disequilibrium extends over only a couple of kilobases rather than hundreds of kilobases, so it actually is hard to find things that are linked-- that is to say, in cis but not causative, in cis to the causative allele. Instead, you have to look directly for causative alleles. And that's where-- now you look for common causative alleles. This has the problem that we've been talking about, which is that maybe common alleles aren't often causative. They've been selected out. So then you get into the scenario, what if we wanted to look at all alleles? Well, theoretically, there's nothing wrong with that. It'd be great. We could do the association studies. We'd have to have fairly large populations. But you saw it was linear in patients with an exponential increase in the number of hypotheses, and you could rank those hypotheses by a priori likelihood and so forth, but it's expensive. And so the discussion now has to include a little bit of discussion of the new technologies that might make this less expensive, and a discussion of how we got the first genome and how the new technologies might be a little different. But we're going to do this in the context of computation, in the sense of, how do we deal with random and systematic errors? As we plan our strategy for getting each of our genomes, personal genomes, sequenced for $1,000 or so, how do we choose which technology to pursue? Which ones have the lowest intrinsic random and systematic errors? So we needed to study the first genome to see, what would we mean by random and systematic errors? How many people here know what the difference is already? A few. OK.
But anyway, whether you do or not, it's good for the soul to go through some real examples of this. A random error is something where every time you do the experiment, you get a different error. And a systematic one is something where, if you do it over and over the same way, you'll get the same error-- a simple class of errors occurring more than once, over and over. So for sequencing, the process involves picking something that you want to sequence-- we'll call it a clone or a template-- then actually doing the sequencing, and then assembling this into a meaningful interpreted sequence. And this is an example where we might have chosen these big clones. In other words, you might fragment the genome up randomly into large clones of 100 kilobases, called bacterial artificial chromosomes, for example. And then that's all random, and so there's a shotgun of that scale. And then we break it up randomly into smaller pieces, which provide us with little sequences. And then in the computer, we assemble these by methods that we'll discuss in the next lecture, which can take even fairly different sequences and assemble them. In this case, we're talking about very similar sequences. And then you assemble the little sequences into big sequences, and you assemble the big sequences into even bigger sequences, and then you have the whole thing. But you can see there's a lot of opportunity for error here. We make it look simple in this slide, but we're going to talk about random and systematic errors in a moment. Where did we get those sequences, those little sequences that we want to assemble? We're going to go through a few methods in a moment, but the most common one by far, I think, one that generates over 90% of the human genome sequence in the last couple of years, is capillary electrophoresis of fluorescently labeled polymerase terminated products.
When you electrophorese things, you're separating DNA fragments which are n nucleotides long from DNA fragments which are n plus one nucleotides long. That's a very subtle difference as n gets big. As n gets big, it gets harder and harder to separate n from n plus one. As it does so, you start getting all kinds of errors. The total number of errors here in the lower left-hand corner is the number of insertions plus the number of deletions plus the number of substitutions plus N, which is an abbreviation for no call. It means the software felt that it was so close that it couldn't call it at all. It just said it's N. It doesn't know whether it's an A, C, G, or T. It calls it a no call. And so if you look through this table, for each of these six bar charts, as you go up on the vertical axis, you go from very short reads, where it's easy to separate n from n plus one electrophoretically, to very long ones, where you start accumulating insertion, deletion, and substitution errors, and no-calls. They all go up with length. And so you can think of that as a random error superimposed on a systematic error: if you always do the experiment the same way, the same base pair is always at the end of your run, and it's always going to have a higher random error rate. So this is kind of a combination of random and systematic error. So let's go through some examples here. Just in isolating the template to prepare it for sequencing, there are systematic errors. If you have certain kinds of repeats, long inverted repeats, or certain kinds of restriction elements that the bacterium doesn't like or likes to chew up, then you won't get the clone. And that plagued the early part of the Genome Project: you'd keep trying again and again the same way, and you just wouldn't get certain clones. It's as if there were a hole there. You know that there's something there, but you can't clone it.
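The error tally described here, insertions plus deletions plus substitutions plus no-calls, can be written as a tiny function. This is just a sketch of the bookkeeping; the numbers in the usage line are illustrative, not taken from the slide.

```python
def total_errors(insertions, deletions, substitutions, no_calls):
    """Total base-calling errors for one read: I + D + S + N (no-calls)."""
    return insertions + deletions + substitutions + no_calls

def error_rate(read_length, insertions, deletions, substitutions, no_calls):
    """Errors per base called; this rises with read length as the
    n vs. n+1 electrophoretic separation degrades."""
    return total_errors(insertions, deletions, substitutions, no_calls) / read_length

# Illustrative numbers only: a 600-base read with a handful of each error class.
rate = error_rate(600, insertions=2, deletions=3, substitutions=4, no_calls=3)
```

The same tally could be computed per length bin to reproduce the six bar charts on the slide.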
Sequencing-- hairpins can form in the single-stranded nucleic acids where you're trying to separate n from n plus one. And those hairpins compact the molecule in gel electrophoresis and make it seem like it's much smaller than it actually is. Tandem repeats cause a problem in all three of these stages. In sequencing, you get some polymerase stuttering and you get little artifacts. In assembly, repeats are a problem because you're assembling by sequence alignment: a repeat within the genome is going to align just as well as a repeat due to experimental redundancy, and so the alignment can be off. The errors that you got from these earlier steps make it hard to assemble. Polymorphisms look like errors. Chimeric clones mean you've got things that were misjoined early on here, and so on. OK. When we do this random selection of big clones and random selection of little clones for sequencing, we want to know when to quit. Now you could say, well, we're going to quit once we get it all assembled, but you need to accumulate a certain amount of data in advance before you try to assemble it. And so there are various calculations as to when to quit. And this is related to the Poisson distribution, but not exactly. And in fact, one of the studies done in 1988 made some poor assumptions-- remember, we mentioned in the first lecture the assumptions of the Poisson distribution that make it an approximation to a more formal distribution like the binomial. Anyway, with that approximation, as you get higher and higher coverage, meaning more and more experimental redundancy, you eventually fill all the gaps. That means you get toward 100% complete coverage, and it should asymptotically approach that. Well, if you use the Poisson incorrectly, as authors did in 1988, you basically go off to infinity. You get 200% coverage, which is physically impossible.
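The "when to quit" calculation being gestured at here is commonly done with the Lander-Waterman model. A minimal sketch, assuming Poisson-distributed read start positions, where coverage c = (number of reads × read length) / genome length; note how the covered fraction asymptotically approaches 100% and can never exceed it, unlike the flawed 1988 calculation:

```python
import math

def expected_fraction_covered(coverage):
    """Lander-Waterman: with Poisson-distributed read starts at coverage c,
    the expected fraction of the genome covered at least once is 1 - e^(-c).
    This asymptotes to 1.0 (100%), never 200%."""
    return 1.0 - math.exp(-coverage)

def expected_gaps(num_reads, coverage):
    """Expected number of gaps is roughly N * e^(-c) for N reads."""
    return num_reads * math.exp(-coverage)

# At 8x coverage the model predicts ~99.97% of bases covered:
frac = expected_fraction_covered(8.0)
```

The formulas differ slightly between the early studies mentioned in the lecture, but they share this asymptotic behavior.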
But both an earlier study, which they ignored, and a more recent study got it right. And these are slightly different measures, but they both converge on 100%. And I urge you to look at that if you're designing an experiment with a simple formula. On the other hand, if you want to design the experiment more explicitly, you eventually get to a point where a simple analytic formula won't do, and you have to do Monte Carlo. We treated this in the first class, analytic versus numerical solutions of differential equations, and in many other cases the simulation you want to do is too hard to do analytically. So Gene Myers just listed all the things he thought could affect the ability to assemble a genome sequence. The read length, the types of repeats, and all this stuff was simulated. And he cranked the simulation on real human-genome-size projects and came to the conclusion that you could do a shotgun assembly of a mammalian genome. A company called Celera was formed. Gene Myers was hired as the computer guru. They put together a large stable of computers and they started doing this on the human and the mouse genome. The human genome was in the end not done by this method, but the mouse genome was, and it was a pretty good assembly. The Drosophila genome, minus the Drosophila repeats, was also done by this method. So this has scaled very gracefully from the first shotgun sequence, which was on a four-kilobase plasmid, to mammalian-size genomes. No mammalian genome is completely sequenced, so we can't really declare victory. But the sort of simulation that he did here has played out very nicely. Right here in Boston, as we start thinking of the future of sequencing technology-- this is almost uncontroversial, how much we desire that the genome not cost $3 billion, but $1,000-- there are now a number of people who are taking very definite steps to get us to that point.
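When the analytic formula won't do, the Monte Carlo alternative is to simulate the shotgun process directly, as Myers did at genome scale. A toy sketch of the idea, on a deliberately tiny "genome" with hypothetical parameters (real simulations also model repeats, read errors, and assembly, not just coverage):

```python
import random

def simulate_coverage(genome_len, read_len, num_reads, seed=0):
    """Monte Carlo shotgun coverage: drop reads at uniform random start
    positions and measure the fraction of bases covered at least once."""
    rng = random.Random(seed)
    covered = [False] * genome_len
    for _ in range(num_reads):
        start = rng.randrange(genome_len - read_len + 1)
        for i in range(start, start + read_len):
            covered[i] = True
    return sum(covered) / genome_len

# ~8x nominal coverage: 160 reads of 500 bp on a 10 kb toy genome.
frac = simulate_coverage(10_000, 500, 160)
```

Running this repeatedly with different seeds gives the empirical distribution of coverage, which can then be checked against the analytic prediction.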
To understand systematic and random errors that can occur in these steps, let's take this dideoxy gel electrophoresis we've been talking about here. You have four different colored terminators. This is where the polymerase can't go any further, because there's a blocking group here, which is fluorescently labeled. And if the template on the far left here is ready to accept an A, you'll get an A. And this n and n minus one will separate on this electrophoresis, and the four colors will give you this four-colored pattern, where the intensity reflects good termination at that position, and you can basically go along reading it. G, C, G, G, A, T. Now this is in the well-behaved part. An example of a systematic error that you get in this extension occurs here in the upper right-hand corner of slide 40. Now, because of one of these hairpins that I mentioned before, you've got a pile-up of seven nucleotides all at the same position. A completely alternative method that doesn't involve gel electrophoresis is called pyrosequencing, and there is an equivalent that can be done with fluorescent addition. Instead of separating them in time by their size in electrophoresis, you ask serially, do you want an A here? If yes, then you get a little peak representing the release of pyrophosphate, or the incorporation of a fluorescent A. And then you ask, do you want a T, and so forth, and you go along. And each signal means yes, it's ready for that particular base. And you can see this doesn't have any problems with this hairpin region. So when you have a systematic error, you have to change the method that you're using fairly radically: the opposite strand, or a completely different method. Early on, we had huge differences in intensity of the fluorophores, and new enzymes and new fluorophores were developed by Tabor, and by Mathies and Glazer, and so forth.
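The serial "do you want an A here?" logic of pyrosequencing can be sketched in a few lines. This is a simplified toy model, not any vendor's algorithm: bases are offered in a fixed flow order, and each peak's height equals the homopolymer run incorporated (zero means no incorporation).

```python
def pyrogram(template, flow_order="ACGT", num_flows=12):
    """Toy pyrosequencing readout: offer nucleotides serially; each time the
    offered base matches the next template base(s), emit a peak whose height
    is the homopolymer run length incorporated (0 = no incorporation)."""
    peaks = []
    pos = 0
    flows = (flow_order * (num_flows // len(flow_order) + 1))[:num_flows]
    for base in flows:
        run = 0
        while pos < len(template) and template[pos] == base:
            run += 1
            pos += 1
        peaks.append((base, run))
    return peaks

# Reading "AATGC": the first A flow gives a double-height peak for the AA run.
signal = pyrogram("AATGC", num_flows=8)
```

Note how, unlike electrophoresis, nothing here depends on separating n from n plus one, which is why the hairpin artifact disappears; homopolymer run length becomes the weak point instead.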
And this was a huge advance in making everything uniform and eliminating systematic errors. So the sorts of things that are on the horizon, that hopefully will be discussed tomorrow in Boston-- we've talked about the short runs of base extensions, like pyrosequencing, and the longer extensions in capillary arrays, which are changing from banks of capillaries into microfabricated chips. I'll show an example of mass spec in just a moment. Sequencing by hybridization on arrays I'll illustrate, as a prelude, with the Affymetrix technology in this slide. OK. So the idea here-- this is mainly for re-sequencing. Some of the ways of reducing the cost of sequencing the human genome will not apply to sequencing brand-new genomes. But still, we need to pursue them, because this may be the way that we get the $1,000 human genome, even if we don't get other ones. And here, you know the sequence, except there's a possibility that there could be a polymorphism at any base-- any single-base substitution at any base. And so you don't necessarily know in advance what the common ones are or the rare ones, but you do know the canonical sequence. And so at every position, you'll make a 25-mer oligonucleotide which will bind to a fluorescently labeled version of your genome, or a piece of your genome. And this was actually developed for HIV re-sequencing by Affymetrix. And at this middle position, you'll put in all four possible substitutions, T, G, C, or A. And you'll consider each possible template. So if you have the template-- let's say these are the two alleles that occur in a human population or in your sample. You can have this sequence and then all the variations on it, stepping along, changing T, G, C, A for the first base, the second base, the third base, and so forth, till you hit this one. And this is where the real polymorphism occurs. And you could have either this as the context or this as the context.
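The probe design being described, four probes per interrogated position, identical except at the middle base, can be sketched directly. Short probes are used here for readability; the Affymetrix arrays described in the lecture use 25-mers.

```python
def tiling_probes(reference, probe_len=7):
    """For each window of the known reference sequence, emit four probes that
    are identical except at the middle base (A, C, G, T). Hybridization is
    most sensitive to a mismatch at that middle position, so the brightest
    of the four rows reports the base actually present in the sample.
    probe_len=7 for readability; the real arrays use 25-mers."""
    assert probe_len % 2 == 1, "probe length must be odd to have a middle base"
    half = probe_len // 2
    probes = []
    for i in range(half, len(reference) - half):
        window = reference[i - half:i + half + 1]
        for base in "ACGT":
            probes.append(window[:half] + base + window[half + 1:])
    return probes

# 11-base reference -> 5 interrogated positions x 4 middle bases = 20 probes.
probes = tiling_probes("GATTACAGATT", probe_len=7)
```

Stepping this design along both allelic contexts is what produces the A-context and C-context probe sets discussed next.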
And this is the schematic, and this is the real data. These are real data down here, where you have the context of the A allele or the context of the C allele. You can have the homozygotes or the heterozygotes. And in the homozygous A, you can see it lights up the best-- the best hybridization-- when you have the A in the middle position. The middle position is most sensitive to hybridization changes. And for the C, you have it in the C row here. And remember, you're changing all the bases in every position. One, two, three, four. This is the one where the middle base of the middle position is in the right context. So this is in the A context and the C context. And in the heterozygote, you get both the A and the C. And so this has been done on HIV. It's been done on BRCA1, on mitochondria. And now they've applied it, with whole wafers, to the entire human genome. And this probably costs on the order of $3 million or so. Mass spectrometry is another way that's used, probably not for sequencing the whole genome, but for single nucleotide polymorphisms. This costs on the order of $0.50 per polymorphism. If there are three million of them in your genome, that's a lot of $0.50. [LAUGHTER] And here's what it looks like when you read it out. It's really just like electrophoresis. You can now separate the addition of an A from the addition of a G. And the difference in mass between an A and a G-- this is even more subtle than the difference between an n and an n plus one. This is just the difference between an A and a G, and it's detectable with this. In fact, it's detectable and quantitative enough that you can pool samples. This is a bit of a stunt, but it's an important stunt to show that this is really a very precise method, albeit still fairly expensive. Now just in closing, I want to give you the simplest possible example of how we can search through sequences.
Next week, we'll give you a much more rigorous way to look through sequences with very extensive differences between them. But here, the theme of today's talk is the subtle polymorphism differences that occur between you and me. So generally, you're looking for exact matches. And good ways to look for exact matches are hashing, suffix arrays, and suffix trees, where basically, in each of these, you're looking up a word-- either a word that's built up and stored from the end, the suffix, one letter at a time, or a chunk that you might have as a hash. And you make up a lookup table. The size of that lookup table is a trade-off between speed of searching and the size of the table. If the word is n nucleotides long, the size is going to be four to the n. That's the storage space you have to put on disk or RAM-- RAM if you want it to be a fast search. And so 16 is the magic number, sort of in the ballpark for a human genome, because four to the 16th is about four billion sequences you can represent. But it's a huge table. You have to have a table of four billion times however many bytes you need to store the positions, typically about four bytes of storage. If you cut back on this a little bit, you'll end up with collisions, where you'll have two things that have the same hash or suffix-- that's if you make it smaller; it'll take less space. If you make it bigger, it'll take a ridiculous amount of RAM. And then here's a kind of whimsical example of Perl, where you not only want to find all the mutations here at a ridiculously high density-- not one particular base here and there, but one every few base pairs. You not only want to find them, but you want to correct them. And here, the Perl does the substitution. And after gene therapy, everybody walks out happy. OK. [LAUGHTER]
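The lookup-table idea can be sketched in a few lines. Here k = 4 so the table stays tiny; with the 2-bit-per-base encoding shown, a k of 16 would need exactly the 4^16 ≈ 4.3 billion slots the lecture describes. A dict stands in for the flat array, but the encoding and trade-off are the same.

```python
def encode(kmer):
    """Pack a DNA k-mer into an integer, 2 bits per base: this integer would
    index a flat table of size 4^k (the disk/RAM cost discussed above)."""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    value = 0
    for base in kmer:
        value = value * 4 + code[base]
    return value

def build_index(genome, k):
    """Lookup table mapping each k-mer's code to its start positions.
    Smaller k -> smaller table but more collisions; k = 16 -> ~4.3e9 slots."""
    index = {}
    for i in range(len(genome) - k + 1):
        index.setdefault(encode(genome[i:i + k]), []).append(i)
    return index

genome = "ACGTACGTTT"
idx = build_index(genome, 4)
hits = idx[encode("ACGT")]   # exact-match query -> all start positions
```

Suffix arrays and suffix trees solve the same exact-match problem with different space/time trade-offs, as the lecture notes.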
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 11A_Networks_3_The_Future_of_Computational_Biology_Cellular_Developmental_Social_E.txt

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare, in general, is available at OCW.MIT.edu. GEORGE CHURCH: OK, so welcome tonight, a very special lecture for me tonight in more ways than I can express. This is the last of my lectures, but I look forward to your lectures coming up. I really do. I've heard from the TFs, and from what I've seen from talking to many of you, this is going to be a really amazing set of projects this year, more than any other year. Anyway, last week, we asked what biology can do for computing. And this week is going to be a little less nuts and bolts than usual. The entire course-- as you all understand, you're all self-selected to like survey courses, at least this one, because you're still here. And tonight's going to be more of a survey than ever before, and it's going to be talking about topics for which we don't have answers, for the most part. There'll be some interesting details along the way. But anyway, instead of what biology can do for computing, this is more about what biology and computing can do for the world. That's really what we want to do, and also, the higher-level network models that we can build, since this is networks number three. So we'll start out-- we've been talking mainly about cellular models as our highest level of networking, and so we'll talk about multicellular models, in particular, the sort of multicellular models that involve sensory integration and integration at higher levels, and then all the way up to multi-organ systems, where we'll take an example from the atomic scale all the way up to organ system failure. And then we'll go from there. So let's start with multicellular models.
And this is something that I-- I'll show you a few slides that will remind you of the first lecture. They either were taken from the first lecture, or-- in this case, this is a wildly different author, and approach, and so forth, but it's the same lesson as from the first lecture, which is that we not only have these exponential curves going through the 1900s, but they're super-exponential. They go up, and the trend lines you get depend on which set of points you use, the steepest one being the most recent one, 1995. And this basically bears on when it is that we will have computing power that might be on the order of an integrated human neural net. We talked about neural nets last time as a metaphor, or as an algorithm, but when will we actually have a computer that has the capability of the human cerebral cortex, or the entire integrated nervous system? And the basis of those calculations by Moravec-- and, in lecture one, Kurzweil-- was the super-exponential curve that I showed. Moravec made them based on his studies of how the retina works. And I think those of you who have done a lot of computer programming will agree that the retina is probably one of the more intuitive of our various human senses to understand, in the sense that you can program an algorithm that might do, in the lower right of slide four here, edge or motion detection. When a frog sees a fly-- black and white, high contrast-- zoom across its visual field, something goes off. And you could write an algorithm that could do that, given a few frames in any kind of format. So anyway, there's been literature on the subject. And when you go through even the best algorithms right now for doing these kinds of calculations, and you figure out how long they take-- the retina is doing about 10 million detections per second. And then you scale up from that 0.02 gram chunk of retina to the 1,500 gram human brain, and you get the extrapolation that was in the previous slide.
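The Moravec-style extrapolation described here is, at its core, a scaling calculation. A back-of-the-envelope sketch using only the figures quoted in the lecture; the final conversion to computer instructions (not shown) would require an assumed ops-per-detection figure that Moravec derived from the vision algorithms themselves.

```python
# Figures quoted in the lecture:
retina_mass_g = 0.02        # the chunk of retina Moravec analyzed
brain_mass_g = 1500.0       # whole human brain
detections_per_sec = 1e7    # ~10 million edge/motion detections per second

# Naive mass scaling from retina to whole brain:
scale = brain_mass_g / retina_mass_g          # 75,000x more tissue
brain_equiv = scale * detections_per_sec      # detection-equivalents per second
```

This is purely illustrative arithmetic; the assumption that neural tissue scales linearly by mass is exactly the kind of simplification such extrapolations rest on.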
I think this is interesting both as an introduction to the kind of algorithms we'd like to think about when we think about integrating a multicellular system, and also in terms of where the compute power, both biological and computational, is going. Do we want to engineer biological systems to keep up with these silicon, or whatever other, computational systems we're using a decade or so from now? Now, I said that the visual system is a relatively intuitive computational system to program. In contrast, I would say the olfactory system is less intuitive, at least for me. We know quite a bit now about the molecular biology of the system. There are 1,000 receptors. It's probably one of the largest classes of genomically repeated proteins. And these receptors are in the same family as the major class of drug receptors, the G-protein-coupled receptors. And there is basically one receptor per cell, which is quite a trick, right? Because you've got one copy from mom and one from dad, and you've got a thousand receptor genes, but there's basically one expressed per cell. And they detect odorant molecule concentrations over about seven logs, where they can detect a concentration threshold with a particular standard deviation. Now this is the beginning of the model, just stating the facts that you know about it. When you look at the neuroanatomy, you've got the odorant molecules down at the very bottom of this figure, and the cilia of the olfactory neurons, which do the primary signal transduction. The signal goes up and is integrated in the glomeruli. And you can think of this like the neural nets we were talking about last time, where you had these interneurons-- you had the extra layer that we showed was so important.
And the thing that's amazing about this system is lodged in these four basic olfactory facts, which I take from Hopfield's work-- this is the same Hopfield who introduced, or really championed, the concept of neural nets as a computational, algorithmic metaphor. And here, the idea is: you have odor and memory recognition. You have background elimination-- just as your eyes adapt to a mostly red room, your nose, or your whole olfactory system, is great at adapting to background odorants. So you have one known and one unknown thoroughly mixed. Number three, you can have component separation: you can have a few different odors at once. And you have odor separation where you have unknowns. The combination of these is quite remarkable. So the basis of the model that Hopfield has proposed is this. You have a coverage for each of the i receptors, where i goes from 1 to 1,000. This will vary depending on the organism; mice have twice as many as we do, probably, and so on. The coverage of each of those receptors is equal to the concentration of the target-- let's say the target is the target odorant that you're looking for-- over the threshold concentration for firing that neuron for that target, the threshold sub t. And you multiply that times either one or some fraction. It's 1 if you really hit the target, and it's this fraction for that particular receptor and that particular target: you've got some crosstalk. And the amount of crosstalk-- you can think of this as a field of receptors, which all have variable binding. And that binding is spread over the six or seven logs here. And this just reflects that same thing we were talking about. So the total input going to the next layer in the neural net, the coverage here, is the sum of this target signal plus the background signal. The first term is the target, and the right-hand term is the background.
And so you can see they have the same form, the same threshold, and concentration; c sub b is the concentration of background. Now let's see how this plays out when you actually try it on an olfactory processing problem. Here you have an odor space. And what you have here is 80 different neurons on the y-axis at the top of slide seven, and across the x-axis is time in milliseconds, going from 0 to 800. And this is all modeled on fairly realistic parameters based on experiments. So what you have here is 80 adapting neurons, and what you're doing is two sniffs of a mixed odor. That's what this is modeling, OK? First, from 100 to 500 milliseconds-- so you can see that something happens at 100 milliseconds-- up to 500 you have a mixed odor, which is 50 parts of x plus 1,000 parts of y. And then at 500 milliseconds you change this ratio very slightly, you see? You just up x to 75 and y from 1,000 to 1,100. OK, so that's the paradigm: 50 and 1,000, to 75 and 1,100. And the sniff at 100 milliseconds, that first one, the mixed odorant, activates more than half the neurons. So this is not like your retina, where if you have a single point source it will activate that one cell, or maybe a center-surround effect. It's really activating half of the neurons, OK? And then the changed sniff at 500 milliseconds-- remember, just those two small changes-- is almost invisible, right? So that's what you're seeing right here, OK, this huge, swamping amount. But then in B-- so that's what happened in A. In B, you're now plotting, instead of individual neurons, the instantaneous rate in hertz, OK? And now you can see the second sniff here at 500 milliseconds, even though you really couldn't see it in the corresponding plot where you're looking at all the individual neurons.
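The verbal description of the coverage equation can be written out explicitly. This is a reconstruction from the description in the lecture, with assumed symbols rather than Hopfield's exact notation: \(c_t\) and \(c_b\) are target and background concentrations, \(T_{i,t}\) and \(T_{i,b}\) are that receptor's firing thresholds for target and background, and \(\lambda\) is the multiplier, 1 if you really hit the target, some smaller fraction for crosstalk.

```latex
% Input ("coverage") to receptor i: target term plus background term
s_i \;=\; \lambda_{i,t}\,\frac{c_t}{T_{i,t}}
      \;+\; \lambda_{i,b}\,\frac{c_b}{T_{i,b}},
\qquad i = 1, \dots, 1000
```

Both terms have the same form, as the lecture notes: a concentration over a threshold, weighted by a binding fraction that is spread over six or seven logs across the receptor field.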
And so you can see that even a 20% spread in one of these parameters is enough to get this kind of easily detectable signal. Now, that's an example of a simulation. It's not a simulation of a particular experimental dataset, but it's based on experimental datasets. And the code-- this is not Mathematica, this is Matlab, which we all wish we could have taught interchangeably in this course. And you can see here in green are the comments, and you can see some of the things that we've been talking about. The number of receptor types here is actually 2,000 instead of 1,000. You see the target-- remember, I said it could be some random number that ranges over six logarithms. Well, the way that's done here is that a random number is generated as the log of the target, and then you get the target by taking the exponential. It's very straightforward, and the code goes on from there. OK, now this brings us to a very interesting point, since these three lectures on networks, and in a certain sense the whole course, are building towards systems biology network models. And for these models to be useful, they need to be shared, in the same sense that the models we have for X-ray crystallography, and the models we've had for genome interpretation in terms of homology, and sequence folding, and so forth, needed to be shared. The models, even more than the data, need to be shareable, because in a certain sense, even modest manipulation of the data usually involves some good model behind it. And so these are some of the work groups that are working on different modeling schemes. In particular, I should point out that BioSpice is now DARPA BioSpice, and SBML, Systems Biology Markup Language, a way of sharing this data, is kind of a growing part of that.
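The "generate the log, then exponentiate" trick described in the Matlab code is ordinary log-uniform sampling, and it is how you get concentrations spread evenly over six or seven logs rather than bunched at the high end. A sketch in Python rather than the Matlab on the slide; the bounds are illustrative.

```python
import math
import random

def log_uniform(low, high, rng):
    """Sample uniformly in log space over [low, high]: generate a random
    number as the log of the target, then take the exponential,
    mirroring the trick in the Matlab code on the slide."""
    return math.exp(rng.uniform(math.log(low), math.log(high)))

rng = random.Random(0)
# 2,000 receptor-type targets spanning ~7 logs, like the slide's parameters:
targets = [log_uniform(1e-3, 1e4, rng) for _ in range(2000)]
```

A plain `rng.uniform(1e-3, 1e4)` would put almost all samples above 100; sampling in log space gives each decade equal weight, which is what the receptor model needs.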
And as yet, there's not the kind of convention that we have in crystallography and DNA sequencing, where at the time you publish a paper, you submit your data and model, data/model, with an accession number, to the database, or else you don't get it published. We're nowhere near that now for the rest of biology. But these kinds of efforts are probably moving in that direction. And you can see that the features of each of these are some of the things we've been talking about before: stochastic modeling, kinetic modeling, enzyme-receptor cell geometry, neural nets, and so on. This is by no means comprehensive, and there are hypertext links here if you really want to dig into them. And some of the platforms-- they've been thinking about different ways of making it slightly less system-dependent. Everyone has their attempts; obviously, Windows is not system-independent. So those are two very brief examples of multicellular models, both of them neural models, building from what we were thinking about last time. But now we're going to a completely different kind of multicellular model, going all the way up from the effect that a single nucleotide can have on a whole organ system, which in this case will be cardiovascular. And there are a number of physiome and cardiome projects that are of active interest. Just like the previous slides I showed, with all the ways of sharing system models, many of these efforts are loose consortium alliances, where people have put together different models, either to deal with the anatomy or the physiology, or sometimes an integration from molecular to cellular, cellular to, say, neural or muscular, and all the way up to fluid dynamics and so on. So let's start with a single base-pair change. And I start with this one because you should feel comfortable with it at this point. We mentioned it briefly before.
We'll talk about it in more detail now. This is the single nucleotide polymorphism which causes the beta subunit of hemoglobin, which is a tetramer, to go from a glutamate-- normally, in most people-- at position six of the beta chain, to a valine. That's the purple set of tetramers. Or to tryptophan; that's the cyan set on the far right. And this comes from combinations of X-ray crystallography and three-dimensional modeling-- this is not speculation based just on primary sequence data. Even though the primary sequence here is very, very close, just a single nucleotide and a single amino acid change, nevertheless these authors have been building this up from the crystallographic data, where not only do you care about the tetramer itself, but about how the tetramers interact with one another to make these long, fibrous chains, which are considerably more stable in the sickle cell. And what you see here is that the valine substitution is locked in one kind of conformation, and the tryptophan in another. And you can see how these authors have worked out how a potential fiber can form in the different cases. Now those fibers, in ways that are not totally mapped out, affect the efficiency of the hemoglobin slightly, and the shape of the cell much more radically. And this combination results in the cell, especially under any kind of stress, either oxidative or other metabolic stress, becoming sickled. So you can have a combination of cells which have varying degrees of sickling. Here in this microscope slide on the left, you have both the sickle cell and the normal cell side by side. I think when we built up the red blood cell metabolic model, we talked about some of the non-metabolic considerations, which were that its function is to transport oxygen and so forth. And what we're talking about now is more a cell membrane issue.
This is on slide 13, where the internal environment, which is the hemoglobin, is greatly affecting the external environment, which is the hemodynamic flow in the capillaries. So this falls under the heading of how we can go from that single nucleotide change to a very dramatic morphological change. In this case, it's pathological, but you could imagine that in the hands of evolutionary adaptation, an organism could take this and run with it. Maybe not the sickle cell mutation, but some other one that causes some other morphological change in some other cell or complex aggregate of cells. Because here you can see a whole variety of different shapes of red blood cells. And remember, these red blood cells are very simple. They have no macromolecular [INAUDIBLE],, no DNA, no RNA, so we're really just talking about a bag of proteins which are greatly affected by these different conditions or enzyme deficiencies. So the system models that were built up, where we had the kinetic parameters for the enzymes, here can, in principle, be extended to model the impact on the osmolarity, or the other membrane properties that were listed in the previous slide, or the sickling that was two slides back. So from a single nucleotide polymorphism, we've gone from this three-dimensional fiber of hemoglobin to a change in the three-dimensional structure of this biconcave disk to a sickle disk. But you can also go, sort of in parallel-- the same single nucleotide polymorphism, or one like it, can take you up to pathogen resistance. We've already mentioned that sickle cell hemoglobin can take you to malarial resistance. But in addition, you can get components of the enzymatic metabolic pathway, such as the one that we modeled a few lectures back, such as glutathione peroxidase. This is part of the redox components.
And here, erythrocytes that are heterozygous for this particular allele should be more efficient in sheltering the cell membrane from the irreversible oxidation and hemoglobin binding caused by the oxidative stress exerted by the malarial parasite. And what they observe here is actually an interaction between these two possible haplotypes. So you can think of it as a pair of haplotypes: one is the hemoglobin AS, the one we've been talking about for sickling, and then this glutathione peroxidase. And you can imagine, since both of these things interact with the malaria parasite, that your phenotype with respect to malaria will depend on the alleles that you have in both cases. So here, you can have an A over S heterozygote and a two over one heterozygote. And that's something to take into account when you're trying to do any kind of predictive modeling, or modeling to explain the functional genomics that you have in a patient who's a compound heterozygote like that. So now, the third pathway. The same set of single nucleotide polymorphisms that affect either hemoglobin or one of the major enzymes in the red blood cell-- now on to cell morphology, on to pathogen interaction, and finally, to interaction with drugs. We talked about pharmacogenomics. Here's another example, where you have the drug-induced oxidative hemolysis that occurs with certain enzymopathies, like glucose-6-phosphate dehydrogenase deficiency. This enzyme, glucose-6-phosphate dehydrogenase, interacts with drugs such as primaquine, and that disrupts mitochondrial function, heme biosynthesis, and so forth, and so on. This is a very significant consideration, with a long list of 20-some drugs that has to be taken into account when you have any of a variety of red blood cell enzyme changes. So we've got these three effects of a limited number of single nucleotide polymorphisms.
How do you make this transition from-- and they're all somewhat interconnected. Part of the reason it's resistant to malaria is because it's less effective in its cell shape, and its ability to transport oxygen, and do its metabolism. And same thing with drug sensitivity. How are these changes in the hemoglobin-- might they be reflected in the three-dimensional shape of the erythrocyte, which is a kind of a membrane-bound compartment? And here's a model that struck me as interesting. It's not been that extensively tested since 1998. But the idea is that Band 3-- this is one of those names that just comes out of the molecular biology literature: they ran a gel, counted the bands, and this was number three. And it turns out it's about 10% of the red blood cell membrane. It's a very abundant protein. And it's responsible for the equilibration of anions such as bicarbonate and chloride across the red blood cell membrane. It's not a pump; it just kind of allows these anions to go across. The pumps here are the ones that are involved in proton transport, and there's also sodium-potassium. And that's what your ATP in the red blood cell is going for. And these are just kind of following along. The idea here is that if you change the degree-- let's see, I think this is covered in more detail. No, sorry. The mechanism of action here is greatly affected by its disulfide effects. And if you change the redox components in the red blood cell, you get just a slight change in conformation of this molecule. And then that can result in a net difference between the cross-sectional area of this protein on the outside and the inside, which translates into a net change-- since the phospholipids don't exchange, or the rate constant for the phospholipid exchange is slow and known, this results in a conformational change. Anyway, that's the model. That connects the single nucleotide to-- could connect it to the three-dimensional structure of the cell.
Then you want to connect the three-dimensional structure of the cell to its ability to carry out its function in the capillaries. And its function there is to allow the diffusion of oxygen and carbon dioxide. So here, each of these cells is shaped by a mechanical process. A mechanical process here-- you can take the known or the measurable mechanical properties of a red blood cell and subject them to a method called finite element analysis, where you're solving these partial differential equations. And you can calculate the exchange of the oxygen-- alveolar just refers to-- this would be in the lungs. This would be a small capillary in the lungs. And you can see the capillary now, the endothelial walls are close enough so that the red blood cells are basically deformed in their shape as they go through there. And the exact shapes that are compatible with this will determine the rate that these little arrows that go from the oxygen on the outside of the lung epithelium get through to the blood cell. So that's a reference that deals with that kind of problem. Now, as we start thinking about building up a larger system model, the other parts of the system, each of them-- so we've got a red blood cell model, it's the metabolic model. In principle, it could also be a shape model and a diffusion model. But the other parts of the system typically involve muscle cells. They have the smooth muscles throughout the entire arterial and venous system, and most significantly, the heart muscles. And so here you have an example of an action potential in one of the ventricular cells of the dog heart. And it includes all the components-- at least the ones we'll touch on, since this is not an entire course on ventricular heart modeling. These are the sorts of parameters: all the I currents-- I sub to, and each of the different potassium channels, the I sub K's; each of the ones labeled I sub K is a potassium current. And then there are some for sodium, sodium-calcium, calcium-potassium, and sodium alone.
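[Editor's illustration: the full finite-element treatment cited here is beyond a transcript, but the underlying idea-- discretize space and march a diffusion equation forward in time-- can be sketched in a few lines of Python. The grid size, diffusivity, and boundary values below are invented for illustration, not measured lung parameters.]

```python
# Minimal 1D diffusion sketch (explicit finite differences), standing in for
# the far more elaborate finite-element analysis described in the lecture.
# One boundary is held at high O2 (the alveolar side), the other at low O2
# (the red-cell side); the interior relaxes toward a steady gradient.

def diffuse_1d(c, d_coef, dx, dt, steps):
    """Advance concentrations c on a 1D grid with fixed-value boundaries."""
    r = d_coef * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    for _ in range(steps):
        # Jacobi-style update: the whole new list is built from the old one.
        c = [c[0]] + [
            c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
            for i in range(1, len(c) - 1)
        ] + [c[-1]]
    return c

# Alveolar side fixed at 1.0, red-cell side at 0.0 (arbitrary units).
profile = diffuse_1d([1.0] + [0.0] * 9, d_coef=1.0, dx=1.0, dt=0.25, steps=200)
```

After enough steps the profile approaches the linear steady-state gradient between the two boundaries; the flux across it is the sketch-level analog of the oxygen exchange rate being computed.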
And so each of these things-- that internal storage of calcium [INAUDIBLE] from one subcellular compartment to another and so on. You get the idea. Each of these things has to have the parameters measured. And if any of them are absent, you have to have reasonable ways of getting surrogates. And so this is just the sort of data that would go into the other major type of cell that comes into this cardiovascular modeling. And finally, you can integrate this up to a fairly complete system. It's obviously not complete until you get to the whole organism, but at a higher level now we're talking about whole-body recirculation. This NSR group-- this is a hypertext link for this particular model, which you can download. And it has a four-chambered heart. We were just talking about one ventricular cell on the previous slide. But that would be-- if we look at the second box from the top here, this is where the heart and lung model would fit in there. It's got seven different organs, which include kidney, liver, lower limbs, and so forth down at the bottom part of this diagram. And you get the picture. Each of these things, you're modeling the volumes and flows. Now, this is systems biology. This is really kind of a renaissance of interest in physiology, which is what it would have been called before. And there are some really interesting mysteries out there to be solved that are really only solvable, or possibly only solvable, at the system level. You never know, somebody could come up and say, oh, yeah, this is really well-understood immunology and that answers everything. But even if it's some kind of immunology, you still have to say, how does it play out? And what happens is-- we've been talking about sickle cell disease, and you can have fairly mild pain as one of the major symptoms.
But then all of a sudden, out of nowhere, you'll get this multi-organ system failure where almost every major organ in that previous slide, lungs and so forth, fills up with fluid and you have a very high chance of dying. And this could happen to every one of us in this room because it's not restricted to sickle cell. If any of you have a severe burn, or a car crash, major bone injuries, and so forth, you have a very good chance of getting into multi-organ system failure. And it really isn't known how that plays out. I mean, not even that-- not just a quantitative model, but even qualitatively. So I think this is something-- there are projects now to get genomic data collection, both genetic and expression data, and time will tell whether that is actually the best route to solving that mystery. So that was multi-organ. Now we're talking about multi-organism. After you've built up an entire organism, then you want to know how it acts at the next level of network analysis, which is how does it fit in with other organisms? None of us-- almost no organism-- really belongs in an ecological niche all by itself. And some of the modeling we are taught in this course could be called simulation. And probably one of the longest-lasting computer games in the world, and one of the most successful, I've heard, is The Sims. It started with SimCity in '87. And then this particular one illustrated here is SimLife, which is all about ecological modeling. And it's not entirely different from what a serious ecological-modeling program would do. It certainly has a lot of the interesting parameters, such as the lifespan here. You're doing demographics here, so you have the lifespan, the amount of food needed, the size, kind of vision, roaming, and so forth, of all these different kinds of animals. You can have plants, you have herbivores, carnivores, and so on.
And then you can set up the population sizes, and then it will do simulations based on that, where each individual animal is tracked in terms of its position and quantity. And as you would expect, as the carnivores build up, then they knock down the herbivores, and the plants go up, and so forth. And it goes through these cycles. And so this is basically a stochastic model, just like the stochastic models that we had for molecules, but here, at the organism level. And what's happening here in the lower right-hand section is you've got little green plants getting eaten by herbivores, and the herbivores getting eaten by carnivores. Now, hopefully, this course has already or will inspire you to really think globally. To think not only globally in terms of how little pieces of molecular tools fit together into systems and networks, but how the systems and networks we model fit into the big picture of what are really important problems. Maybe a slightly improved Viagra is not in the same category as a new tuberculosis drug-- or maybe it is. I mean, you decide for yourself what is thinking globally. But you have to act locally, and that's what we're doing in this class. When we're thinking globally, and we're thinking about ecological systems, we're thinking about the lithosphere, for the most part, and its interaction with the hydrosphere. The lithosphere is mostly silicon dioxide, a tiny bit of carbon, and it gets very hot, very quickly. So only the top 0.1%-- that's the top four kilometers of it-- is survivable by any type of organism we know of. About 110 degrees centigrade is where organisms start having a great deal of trouble. You and I would have trouble before that. The biosphere here is about 3 times 10 to the 15th grams, my estimate for marine organisms. And maybe 10 to the 18th grams for all land plants and animals and microorganisms. The microbial hydrosphere here is about 10 to the 21st milliliters, which works out to about 10 to the 27th cells.
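[Editor's illustration: the individual-tracking, cycle-producing simulation described above can be caricatured in a few lines. This is not SimLife's actual algorithm; every per-step probability below is invented, and the crowding term simply caps prey growth.]

```python
import random

# Toy individual-level stochastic predator-prey simulation, in the spirit
# of the SimLife-style ecological models mentioned in the lecture.

def simulate(prey=400, predators=60, steps=300, seed=1):
    rng = random.Random(seed)          # seeded, so runs are reproducible
    history = [(prey, predators)]
    for _ in range(steps):
        # Each prey reproduces with a crowding-limited probability.
        births = sum(rng.random() < 0.05 * (1 - prey / 2000) for _ in range(prey))
        # Each predator eats a prey with probability proportional to prey density.
        eaten = min(prey, sum(rng.random() < min(1.0, 0.0003 * prey)
                              for _ in range(predators)))
        # Predator reproduction tracks food density; death rate is constant.
        pred_births = sum(rng.random() < min(1.0, 0.0001 * prey)
                          for _ in range(predators))
        pred_deaths = sum(rng.random() < 0.04 for _ in range(predators))
        prey = max(0, prey + births - eaten)
        predators = max(0, predators + pred_births - pred_deaths)
        history.append((prey, predators))
    return history

h = simulate()
```

Because every event is a coin flip per individual, the trajectories fluctuate from run to run-- the same kind of stochasticity the lecture notes for single molecules, but at the organism level.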
A phenomenal number of these cells-- about 10 to the 26th of those cells-- is a single species, which is Prochlorococcus, which is responsible for maybe about 50% of the Earth's photosynthesis. A lot of that photosynthesis does not end up in fixed carbon because it is immediately consumed by one of its predators, basically. But there are really quite a lot of cells out there which, to the extent that they're well-mixed-- and of course, it's not perfectly mixed, all the Pacific and Atlantic and so forth. But it's considerably more well-mixed, say, than the lithosphere organisms, where you have organisms down a kilometer into the ground. They don't move around very rapidly. So when you have a population of that size-- as you remember from one of our first lectures on population size-- the effective population size determines the rate of drift and the optimality of the organism. So one of the things that we do, when we've been talking about mining the biosphere, one of the things that we're looking for are new tools that we can use for nanoengineering. And kind of one of my pet ones that I'm becoming interested in-- we don't work on the same thing, but it is kind of interesting-- is a set of-- we mentioned this briefly in the drug-protein interaction lecture, but I don't think I mentioned this particular one-- where you have polyketides that go together. It's another polymer that has certain similarities to the basic polymers we talked about, but each step in it has to have a protein enzyme-- an enzymatic domain-- to accomplish it. And so one of these is tetracycline. And this is one of the more aromatic, kind of coupled-aromatic compounds that's made, because you have these [INAUDIBLE] and aromatases in addition to the polymerization steps. And so you make this thing that looks kind of like a polyaromatic hydrocarbon.
And also in nature-- in soot, in forest fires, in all kinds of natural phenomena-- you will find polycyclic aromatic hydrocarbons, which some of you may know about as potent carcinogens, but they're also just natural components. But you can also start to see this looks a little bit like the buckyballs and buckytubes that we talked about when we were talking about molecular-type transistors. So the possibility of mining the biosphere for enzymes that act to synthesize or degrade this class of compounds would be just one example. We could list many others. But part of it is just to dream, to imagine what it is you would like to find out there. And if there's an abundant source of it-- an abundant source of, say, the compound, in this case-- you should be able to find a microorganism and an enzyme that goes with it, because there's a truly phenomenal amount of diversity. Another very important global consideration, rather than just mining it for new tools, is thinking about ways that we could either engineer or accidentally mess up our entire planet, as we could be doing with global warming, or could do. And perhaps it's naive to think that we actually are having such a big effect, because we know that the global climate changes periodically over millennia. But it is very clear from the record just exactly how much carbon dioxide we are releasing, and it makes sense that it is consistent with the kind of temperature changes that have been observed since the Industrial Revolution. In any case, when you look at, in particular, the Southern Ocean-- up at the top of slide 27-- you can see there are a few places-- in particular, the Southern Ocean seems like a prime candidate-- where you have high nutrients but low chlorophyll. And you say, well, why would you have high nutrients but very little of the chlorophyll around, which is a tipoff that you have the photosynthetic bacteria?
And the reason is that you have a limitation of some micronutrient. Micronutrient means it's not needed in the vast quantities that you need nitrogen, phosphorus, carbon, oxygen, and so on. So iron, typically, is the limiting micronutrient in the Southern Ocean. And so there actually have been little pilot experiments to drop iron off the backs of huge tankers. And there are now at least seven patents that have been filed on doing this in order to balance out the carbon credits for different nations. This is potentially a very big bit of terrestrial engineering that might happen. It involves trying to change phytoplankton. And as this line here begins to point out, even though phytoplankton are only 1% of the total global biomass, it's about 50% of the carbon fixation. What is the source of this 50-fold anomaly? Why is it that even though it's doing-- how is it doing 50% of the carbon fixation? And what's happening is a lot of the carbon is going right back out after being fixed. Instead of settling to the bottom of the ocean and never bothering us again, it gets returned. And the exact modeling of that requires a much deeper knowledge of exactly what organisms are present in the ocean. One of the problems with this however, is a genomic problem, in a certain sense, which is that very few of these organisms can be cultivated in the laboratory. On the order of 99.9% of the organisms that you sample from the ocean, or from the soil, a number of different environments, do not grow well in the laboratory. So if you were to study them, some people are feeling that the best route to studying them is going directly for-- looking at their genomes, without necessarily being able to grow them in the laboratory. Now, when we talk about this problem of-- even if we fertilize the oceans, and we can model this whole procedure, I won't go into details. 
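[Editor's illustration of why "high nutrients, low chlorophyll" points to a missing micronutrient: growth is set by the scarcest resource (Liebig's law of the minimum), here with Monod saturation terms. All numbers are invented for illustration, not measured ocean values.]

```python
# Growth limited by the single scarcest nutrient, each with Monod kinetics.

def growth_rate(mu_max, nutrients, half_sats):
    """mu_max scaled by the most-limiting Monod term n / (k + n)."""
    return mu_max * min(n / (k + n) for n, k in zip(nutrients, half_sats))

# Macronutrients (say N, P) plentiful, iron scarce -> growth is iron-limited.
before = growth_rate(1.0, nutrients=[10.0, 10.0, 0.01], half_sats=[1.0, 1.0, 0.1])
# "Fertilize" with iron, as in the tanker pilot experiments, and the
# limitation is relieved even though N and P never changed.
after = growth_rate(1.0, nutrients=[10.0, 10.0, 1.0], half_sats=[1.0, 1.0, 0.1])
```

In this cartoon the macronutrients sit unused (high nutrients) while chlorophyll-bearing growth stays low, until the one scarce micronutrient is supplied.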
But if there are predators there that take the fixed carbon and turn it back into carbon dioxide, you want to be able to monitor that. Now, of course, the ocean is a large, complex set of prokaryotic autotrophs that are, say, photosynthetic bacteria, and eukaryotic photosynthetic bacteria, and prokaryotic and eukaryotic predators of various sorts. And this is, I think, one of the more interesting predator-prey differential equation models. It's almost the same as many of the other differential equations that we did from the first lecture on growth. We did the logistic equation, some of the ones we did on repressor function, and so on. But the thing that's kind of interesting about this one is it's not simply a single-cell bacteria being eaten by a single-celled heterotroph, let's say, a blue-green algae being eaten by a single-cell heterotroph. Here, they are both multicellular. They're sort of the minimal multicellular. But because they are-- so Chlorella is one of the smallest plants, multicellular plants. And Brachionus is a small rotifer, a multicellular animal. Because they're multi-cellular, though, now you have the demographic mortality and fecundity of each of these things that has to be modeled in. That's the m and the lambda for Brachionus. And so this plays into these equations where you have here, in the upper-right of slide 28, the rate of change of nitrogen with respect to time, and concentration of Chlorella. So nitrogen is N, Chlorella is C. The R is the Brachionus that are reproducing and B are the Brachionus which are the total. And you can see each of these equations and how it plays out. Now this is how-- and the way they play it out here is you do it by dilution rate, delta, here. And the dilution rate is shown along the horizontal x-axis for each of these three plots here. And what they're modeling is on the far left, the nitrogen concentration as a function of dilution rate. And you can see you get these different sectors of behavior. 
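[Editor's illustration: the nutrient-Chlorella-Brachionus chemostat equations described above can be sketched with forward-Euler integration. This reduced version uses three variables rather than the slide's four (the reproducing-vs-total Brachionus split, the m and lambda demographics, is collapsed into a single mortality term), and every parameter value is invented for illustration.]

```python
# Minimal chemostat sketch: nitrogen N -> Chlorella C -> Brachionus B,
# with Monod uptake terms and dilution rate delta, integrated by Euler steps.

def chemostat(delta, n_in=80.0, t_end=50.0, dt=0.01):
    n, c, b = 5.0, 5.0, 1.0
    bc, kc = 3.3, 4.3      # Chlorella max uptake rate, half-saturation
    bb, kb = 2.25, 15.0    # Brachionus max grazing rate, half-saturation
    eps, m = 0.25, 0.055   # assimilation efficiency, rotifer mortality
    for _ in range(int(t_end / dt)):
        f_c = bc * n / (kc + n)    # nutrient uptake per unit algae
        f_b = bb * c / (kb + c)    # grazing per unit rotifer
        dn = delta * (n_in - n) - f_c * c
        dc = f_c * c - f_b * b - delta * c
        db = eps * f_b * b - (delta + m) * b
        n, c, b = n + dt * dn, n and c + dt * dc, b + dt * db
        c = max(c, 0.0)
    return n, c, b
```

Sweeping `delta`, as in the slide's horizontal axes, moves the system between coexistence and washout: at high dilution the predator's growth can no longer keep up with the outflow and Brachionus disappears.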
When you look at the Chlorella-- this is a green, photosynthetic, multicellular organism, in units of millions of cells per milliliter-- it can take different pathways here, where, as a function of dilution rate, you get this bifurcation: you get a huge split in the possible concentrations, where Chlorella is the one curve and Brachionus, the predator, is the other curve. This kind of bifurcation is the sort of thing we talked about earlier, both in the logistic equation and in the self-exclusion modeling. And in the upper right-hand plot, here's our friend the coefficient of variation in percent, ranging from 0 to 80 or so on the vertical axis, against the dilution rate. And again, you can see that you get a peak at a dilution rate of around 0.75. The bottom two are models, and the upper right is the data. You can actually see, when you run real Chlorella and Brachionus in here, that you do get a peak just as you would predict in this kind of bifurcation analysis in kinetic modeling. That's where you get interesting behavior in this complicated-- or this fairly simple-- model ecosystem where you just have two species. In the ocean, of course, you have a lot more. Now, that's out in the wide world, where you have 10 to the 26 cells. But inside every one of us in this room-- no offense-- there are about 10 times as many non-human cells in each of us as there are human cells. And we have our own ecology. And there is a literature where certain organisms we know cause diseases, and they're put into one category, infectious diseases, but then there's another set that takes some time to sort out. And you can see here are candidate diseases and candidate organisms. And all these little bars that are going in here are references that you can look up, where maybe it's not an airtight case yet, or maybe it's controversial or discredited [INAUDIBLE], but these are links where, eventually, this will be called an infectious disease.
Helicobacter pylori causes stomach ulcers. I think that fact is basically already accepted. An Australian doctor named Marshall decided to drink a bunch of Helicobacter to prove it to his colleagues, who didn't believe any stomach ulcers were caused by Helicobacter, much less all of them. And he drank it, he caused stomach ulcers in himself, and then he cured himself. And hopefully, none of you will volunteer to do similar things with some of these other nasty guys here, because I think some of the things that they cause are a little more serious than stomach ulcers, and the cures are a little less obvious than the cures for a gram-negative bacterium like Helicobacter. So what's the connection with genomics and computational biology? There have been various efforts started to actually mine the transcriptome for evidence of these. You can look for bacteria very easily because they have ribosomal RNA components which you can PCR. Viruses are more complicated because they're not conserved, or they don't have any universally conserved element like ribosomal RNA. But there have been efforts to look for what's present in the human transcriptome that's not present in the human genome. The human genome is something highly purified and cloned and so forth, while the transcriptome is whatever cells they ground up that day. And indeed, lurking in there are many of these hepatitis, papilloma, Epstein-Barr virus, retroviral-like elements which are not present in the human genome as yet. The human genome is not completely finished, so that's still an escape clause. But basically, these are smoking guns for at least commensals-- microorganisms and viruses that are living in tissues. Some of them may be tissue-specific. Some of them, a subset, may actually cause disease. And that's the whole problem then: sorting out cause and effect. We know how it's been done, heroically, with Helicobacter pylori, but how do we do it with the other ones?
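[Editor's illustration: the "present in the transcriptome but not in the genome" idea amounts to digital subtraction. The actual efforts referenced would have used sequence alignment against genome assemblies; this k-mer cartoon, with made-up function names and toy sequences, just shows the logic.]

```python
# Flag transcripts that share almost no k-mers with the host genome --
# candidates for viral or microbial sequences lurking in the sample.

def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def candidate_foreign(transcripts, genome, k=8, max_shared=0.2):
    """Return names of transcripts with <= max_shared k-mer overlap."""
    genome_index = kmers(genome, k)
    flagged = []
    for name, seq in transcripts.items():
        km = kmers(seq, k)
        shared = len(km & genome_index) / max(len(km), 1)
        if shared <= max_shared:
            flagged.append(name)
    return flagged
```

The escape clause from the lecture shows up here too: an unfinished genome index means some host transcripts will be wrongly flagged, so a hit is a smoking gun for follow-up, not a verdict.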
Some of them will have tissue-culture models. This slide, at the time it was done, is just illustrating kind of the flow of nucleic acid information that was coming in and was capable of being mined for new microorganisms, new viruses that might be present in a tissue-specific fashion, might be pathogens. These are some of the most popular sequences from which we're mining. Obviously, human was popular at times [INAUDIBLE] the human genome was in place. Here's Brachionus, our friend, which was pretty high up on this list, considering most of you probably hadn't heard about Brachionus before this lecture. Now you've heard of it twice. There just was a spurt in sequencing interest. And of course, HIV is the winner year after year because of the importance of resequencing it for new mutations. And it really has a record number of new mutations. And we've talked about HIV from a variety of different standpoints. One of them is polymerase and protease as drug targets, and as a source of drug resistance. And you can follow up that model-- that sort of atomic-scale protein modeling, where you're looking for new drugs such that the mutant polymerase will not be resistant to the new drug. But you can also monitor at the population scale. As HIV goes through a particular patient, or through a population of patients, how does the drug resistance change as a function of time? Here, the horizontal axis is in days, up to 30 days, and the vertical axis is the titers of the viruses [INAUDIBLE] as a function of time. And what you have here is rates of exponential increase. These are all on logarithmic scales. And you're modeling such things as the clearance by the immune system and so forth. Originally, this virus was thought to be a very slowly replicating virus, almost cryptic. And then later it was found out that this is actually a very rapidly replicating virus, but the immune system is very rapidly responding.
Just note that each of these models we've gone through has a set of parameters that goes along with it. In a certain sense, there's quite a bit you can learn about a model just by seeing what parameters are present or absent, whether they're experimentally determined, how accurately, and so on. And the model parameters here are the mortality rate of uninfected CD4+ T-cells, the same thing for infected cells-- and they can have different [INAUDIBLE]; you can see on the far right that there's a huge difference between a death rate of 1/4 per day versus [INAUDIBLE] and so on. There are rates for getting infected, rates of virus loss, production, a threshold value for remission, and so forth. And these are important parameters for how the virus population changes within a person or the population. How it spreads through the population depends, in many cases, on the herd immunity and so forth. And we get into issues of public health. It's almost a truism that most of the additional quality years of life-- quality-adjusted life years here, QALY-- that we have in the world, the fact that our life expectancy is so much longer and so forth, are mainly due to public health decisions that have been made, not so much pharmaceutical ones. Such things as clean water have made a huge impact. And so we need to think-- even when you think about the pharmaceutical aspects, they need to be thought of in a public health sense. And a surprising number of public health officials, even in the most developed nations-- maybe most of them-- do not have formal education in public health. And so I urge you all to get at least some education in this. Because here, they're trying to build, and have built, quantitative models not only for the kinetics with which a disease will go through a population, or even through an individual, but for the way that you make decisions. And this is having an impact on how these different projects are prioritized.
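[Editor's illustration: the within-host dynamics just described-- target cells getting infected, virus produced and cleared-- are commonly written as three coupled ODEs. This is the generic form from that modeling literature, not the specific model on the slide, and every parameter value here is invented; real models fit them to patient titer data like the 30-day curves shown.]

```python
# Minimal within-host viral dynamics: uninfected target cells T, infected
# cells I, free virus V, integrated with forward-Euler steps.

def hiv_dynamics(days=60.0, dt=0.01):
    T, I, V = 1000.0, 0.0, 1e-3    # start: healthy cells, tiny inoculum
    s, d_T = 10.0, 0.01            # T-cell supply and death rate (per day)
    beta = 2e-5                    # infection rate constant
    delta = 0.5                    # infected-cell death rate (per day)
    p, c = 500.0, 5.0              # virion production and clearance rates
    hist = []
    for _ in range(int(days / dt)):
        dT = s - d_T * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        hist.append((T, I, V))
    return hist

h = hiv_dynamics()
```

With these numbers the virus grows exponentially from a tiny inoculum, peaks as target cells are depleted, and settles toward a set point-- the "rapidly replicating, rapidly cleared" picture the lecture describes, where the large clearance rate c was the surprise.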
At any given time, you might have hundreds of different candidate vaccines-- vaccination being one of the major, very effective public health strategies. And this is the way they prioritize it. So level one saves money and improves life. That's almost a no-brainer. Then you have different levels, where it might cost $10,000, or $100,000, per quality-adjusted life year saved. And so the level one candidates are cytomegalovirus, and therapeutic vaccines. These are not aimed at an infectious disease; they're for diabetes, rheumatoid arthritis, multiple sclerosis, various bacterial ones. And of course, HIV was not even in this study because it was such a high priority already within the NIH. So is there a role for genomics and computational biology, the title of this course, in this vaccine research and development? I think the answer is yes, but it requires a great deal of creativity and resourcefulness on all of your parts. There are new opportunities with DNA vaccines, where you can have one or more DNAs shot intramuscularly or delivered in a variety of other ways. What is delivered by the DNA vaccine can be various so-called intracellular vaccines, which can either act through cell-mediated immunity or in some other way intracellularly. RNAi is a rapidly emerging way of delivering things that may not be classically considered therapeutics or vaccines. Then there's the concept of multiplexing-- many of us have received multiple vaccines at a time. Typically, every year, your flu vaccine will have two or three strains in it. But as you can see from this middle article, the diversity of certain diseases, like HIV and influenza, almost demands attention from genomics-- the diversity that can occur, either from year to year or throughout the entire global set of viruses. Another opportunity is when we have vectors-- say, arthropod vectors, insect vectors like the mosquito. And we have opportunities to hit malaria at all the different life stages.
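[Editor's illustration: the prioritization scheme described-- level one saves money outright, then tiers at $10,000 and $100,000 per QALY-- is just a cost-effectiveness cutoff, sketched here. The two dollar thresholds come from the lecture; the function name and the exact tier boundaries are illustrative.]

```python
# Assign a vaccine candidate to a priority level from its net cost (over
# the program's lifetime, negative if it saves money) and QALYs gained.

def tier(net_cost, qalys_gained):
    """Priority level: 1 is best. Assumes qalys_gained > 0."""
    if net_cost <= 0:
        return 1          # saves money AND improves life: the no-brainer
    cost_per_qaly = net_cost / qalys_gained
    if cost_per_qaly < 10_000:
        return 2
    if cost_per_qaly < 100_000:
        return 3
    return 4
```

So a candidate costing $5 million that gains 1,000 QALYs lands at $5,000 per QALY, a strong second tier, while the same gain at $500 million falls off the priority list.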
There are multiple life stages within humans and multiple stages within the insect vector. And now, the genomics of these two just came out recently, in the same week, in Nature and Science. This provides a whole new set of inspirations for work that can be done on these. I think we should take a little break now, and we'll wrap up the talk after that break.
MIT HST.508 Genomics and Computational Biology, Fall 2002 -- Lecture 1A, Intro 1: The Computational Side of Computational Biology (Statistics, Perl, Mathematica)

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: OK. We're getting ready. And let's go. OK. So welcome to the first class of BIOE101 or Biophysics 101 or HST.508 or Genetics 224. You can see that this is a truly interdisciplinary class, and I hope that's going to be one of its strengths. It has been in the past. I hope that you get to know some of your colleagues in this room or maybe in the other half of this class, which is taught on the medical school campus, because they could be some of your greatest assets in years to come. This course-- basically, if you cannot, for some reason, on a particular day, make this meeting at 5:30, you have the option of going at noon over in the Canon room, or you have the option of tuning in to the video, which is intended mainly for the distance education students but also serves as a backup in case one of you gets sick or is out of town or something like that. So that's what's going on right there-- we put this on our internet site. It's synchronized to the PowerPoint slides. You should have PowerPoint handouts right now. This course, as stated on the website and on this first slide, is based essentially entirely on six problem sets and a course project. In the past, the students and I have had a great time with the course projects. We see, really, the cutting edge of computational biology. And the six problem sets are really intended to get you up to speed so that you are as close as possible to a publication-grade computational biology project, which, generally speaking, is collaborative, involving two or more people. But you can do it solo as well. There'll be more details on the website and also in your sections.
You're also free to work with other people and to use any resources you want for the problem sets. I would just ask that, however you answer the question-- whether it comes purely from your head, or from collaboration, or from some website-- you just say for each problem, briefly, where you got the answer. This is just good academic discipline for acknowledging your sources. This course is intended to be an introductory course, introductory to almost every subject that it combines. It has minor prerequisites. And for those who feel that they are a little weak in either molecular biology, statistics, or computing, there will be classes in addition to the regular sections which will address these intensively, especially at the beginning of the course. Then the sections themselves will be crafted so that they are directed at people who feel strong in biology and slightly weaker in computing. So the focus will be on getting the computing up, more or less from scratch. And for those that are stronger, say, in computing, there'll be sections on advanced topics which will fit in with the course. By the middle of the course, almost all the sections will be very close to identical. But there still will be that slight difference. We will try to provide mechanisms by which you can interact with colleagues that have different strengths but similar interests, so that you can get together on problem sets and the projects. You shouldn't feel it's essential. It's an opportunity, not an obligation. It's important you hand in your questionnaire immediately after class so that we can assign you to sections. The sections will be the way that you get information and the way that you interact with the teaching fellow who will be responsible for grading your problem sets. So that teaching fellow will interact with you on all six problem sets and your project, and so you should get to know that person very well.
And I'm very indebted to this crew, by the way, which is up here at the top. Suzanne [? Camilli ?] is the head teaching fellow this year-- she was a teaching fellow last year. And many of these-- almost all of these-- teaching fellows took this course last year. That's most of the bureaucratic aspects of it, unless there are some questions. Please feel free to interrupt at any time. This can be interactive if you want it to be. I can set aside enough time for that. You're certainly also welcome to come to me before class, after class, and during the break, and we can even set up additional times. Questions? Yes. AUDIENCE: The extra sections, those are more filling in the background? GEORGE CHURCH: That's right. There will be this-- AUDIENCE: There seems to be only one, which is Thursday evening. GEORGE CHURCH: So the question is about extra sessions. There will be a schedule of all the sections evolving on the website within the next day or two. Depending on what your questionnaires show, there will probably be three or four extra sections within the first couple of weeks. AUDIENCE: And how do we [INAUDIBLE]? GEORGE CHURCH: Through your section head. If you put down your email address, it will allow the section's head teaching fellow to contact you, or you can contact any of them if no one contacts you. OK. So the overview for the course-- this is constrained to fit into the calendars of both Harvard and MIT. It also tries to achieve the goal of keeping the Division of Continuing Education students so that they only have to come in for one day a week, which is a considerable plus. Some of them choose to have section the same day. And some of them have considerable commutes. So that determines the timing of this. The topics-- we'll have two introductory lectures.
These introductory lectures will actually cover pretty interesting topics, one thematically on computing followed by one on biology, although both of them, and the whole course, are about computational biology and systems biology. Then as another way of focusing, we'll have a series of six lectures, two on DNA, two on RNA, and two on proteins or proteomics, which reflects the central dogma of molecular biology, where DNA encodes RNA, which encodes proteins. But much more so, it's telling us about the flow of information that we need to establish experimentally and in terms of modeling, which allows us to do functional genomics, and hence systems biology, and to mine the datasets which have everyone's attention right now. Then finally, three topics which really focus on the overarching theme of the whole course, which is networks and systems biology. And these really cover the gamut of network analysis from the cellular scale all the way through ecological-scale modeling, in order to see how we integrate various data types. Then comes the very interesting section of the course, three of these two-hour sessions where you get to present to me the topics that you're excited about, how you have taken the course material, problem sets, and so forth and incorporated them into your view of what's exciting, and then present that information to all of us as team or individual presentations. OK. That's the overview for the whole course. Today's story-- I try to make this somewhat of a narrative, with themes. So we're going to toggle back and forth throughout the lecture between living systems and computational systems-- similarities, differences, how we can use one to tell us something about the other. In particular, we'll focus in on an aspect of living systems which is fairly unique, which is self-assembly and replication. These are put in the context of another theme for today, which is discrete versus continuous data, discrete versus continuous modeling of data.
Since this is an introductory lecture and we want to get you warmed up, I'm going to try to illustrate as much as possible with minimal examples, small examples, so you can see it all in one page or even all in one line: examples of minimal life-- things that illustrate the minimal aspects of life, replication mainly-- and minimal programs, which allow us to analyze some key aspects of modeling living systems. Some of the key aspects are catalysis and replication. And here we'll use differential equations. This will be, I think, quite an interesting approach to it, painless, and exciting in the way it connects to biology. And then after talking about replication in the context of differential equations, we'll introduce [? directed ?] graphs, which indicate how growth occurs in a pedigree, not just growth in exponential mode. And then finally, we'll connect this to the issues that surround single molecules, which are actually very significant in biological systems and involve a different type of analysis than the continuous functions that we will use in ordinary differential equations. When we analyze errors, either in data or variance in biological populations, we use statistics that, broadly speaking, fall under bell curve statistics-- actually, more than that. And overall, the theme that unites all of biology and most of the course, and this lecture in particular, is the idea that many of these functions in biological systems can be under selection, experimental selection in the laboratory, in order to obtain an optimal growth rate or some other aspect of optimality. You generally will do well treating the biological system as either optimal or able to be made optimal for a particular task. OK. Now, I'm just poking fun at the number for this course.
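As a warm-up for the differential-equations treatment of replication, here is a minimal sketch, in Python rather than the course's Perl or Mathematica: Euler integration of exponential growth, dN/dt = kN, checked against the exact solution N(t) = N0 * e^(kt). The function name, step size, and parameter values are illustrative choices, not anything from the course.

```python
import math

def simulate_growth(n0, k, t_end, dt=0.001):
    """Euler-integrate dN/dt = k*N from N(0) = n0 out to time t_end."""
    n, t = n0, 0.0
    while t < t_end:
        n += k * n * dt  # discrete step approximating continuous growth
        t += dt
    return n

# Analytic solution for comparison: N(t) = n0 * exp(k*t)
n_euler = simulate_growth(1.0, 1.0, 1.0)
n_exact = math.exp(1.0)
```

Shrinking dt makes the discrete approximation converge on the continuous answer, which is the discrete-versus-continuous theme in one line of arithmetic.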
It is also an intrinsically important symbol for the 0's and 1's that occur in all of our computers and underlie the way we either deal with discrete data or make continuous data appear to be discrete. We'll start with the biological entities most similar to the 0's and 1's that make your computers hum. And these are the nucleotides of DNA and RNA: A, C, G, and T, or A, C, G, and U in the case of RNA. And of course, all of these are symbols. The A stands for adenine. And adenine, of course, doesn't look like an A. It looks like some electron density. And we often represent it with a chemical formula, as you'll see in subsequent slides. But here in slide 5, we're talking about, schematically, A being represented by the two-digit binary number 00, C by 01, G by 10, and T by 11. So you can see that two binary digits is just enough to encode the four nucleotides, which are strung together where you might have 3 billion such nucleotides making up one of your two human genomes. Now, things get a little bit more complicated once these digital 3 billion nucleotides in your genome start being turned into the molecules that actually do the work of the cell, the work of your body, which are RNA and proteins. This is an example of one of the primary transcripts common to all living organisms that we know of. This is a transfer RNA. I actually participated in solving this complicated structure in the '70s. And this is color coded, the DNA sequence that encodes this particular RNA. And when the RNA is made, it folds up into this three-dimensional structure which you see rotating here. This is actually a stereo image, which, if you cross your eyes in the right way, will appear to be even more three-dimensional than it is there. You will learn more about this later in the course if you don't know already. But the point is that this goes from a 5-prime end in blue all the way out to the 3-prime end in red. And it has to fold in this manner.
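The two-bits-per-nucleotide idea on slide 5 can be made concrete. This sketch (Python rather than the course's Perl; the encoding table is just the one from the slide, and the function names are our own) packs a DNA string into an integer and recovers it.

```python
# Two-bit encoding from the slide: A=00, C=01, G=10, T=11
ENCODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
DECODE = {v: k for k, v in ENCODE.items()}

def pack(seq):
    """Pack a DNA string into one integer, two bits per nucleotide."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | ENCODE[base]
    return bits

def unpack(bits, length):
    """Recover the DNA string from the packed integer."""
    bases = []
    for _ in range(length):
        bases.append(DECODE[bits & 0b11])  # read off the low two bits
        bits >>= 2
    return ''.join(reversed(bases))
```

At two bits per base, the 3 billion nucleotides of one human genome fit in about 750 megabytes, which is why this encoding is the natural bridge between the genome and the machine.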
It reproducibly folds in this manner, and it has to stay folded in order to perform its function, which is actually translating from DNA-- sorry-- from RNA sequences into protein. But what we're illustrating here is the relationship between the discrete binary digital code in DNA and the more continuous code that you have, which is the x, y, and z-coordinates of the atoms-- the continuous nature of the probability distribution of electrons around each of those atoms and the continuous nature of its position in space and its various binding constants, affinities for other molecules in the cell. Let me just take those two examples, the DNA sequence and the three-dimensional structure for the transfer RNA that it encodes, and expand upon them a little bit or give other examples. We have a sequence. In the previous slide, I showed a sequence of 76 nucleotides. On the left-hand side are examples of discrete concepts, concepts which are very naturally encoded in a discrete digital way, and then as close as we can get to the continuous version of that. So a continuous version of a sequence might be a probability of a sequence. In other words, at the first position, if you look at a population of sequences-- the number of different tRNAs, the number of different people in this room-- it may not be that there's always an A at position 1. It could have a different probability in different people. So you represent that as a probability of A, C, G, and T. It's a vector with four numbers in it. That's more continuous. Similarly, we have analog-to-digital devices in the instruments that collect the data that you'll be working with in this course. Integration can be represented, practically speaking, as a sum of small steps. When we're talking about, say, a neural network that's responsible for some of your thoughts or a regulatory network that causes homeostasis in your body, biologists often casually describe these as being on and off. It's a good approximation.
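That "probability of a sequence," a four-number vector at each position, can be sketched in a few lines (Python; the toy aligned sequences are invented purely for illustration).

```python
from collections import Counter

def position_probabilities(sequences):
    """Per-position probabilities of A, C, G, T across aligned sequences
    of equal length: one four-number vector per position."""
    length = len(sequences[0])
    profile = []
    for i in range(length):
        counts = Counter(seq[i] for seq in sequences)
        total = sum(counts.values())
        profile.append({b: counts[b] / total for b in "ACGT"})
    return profile

# A toy "population" of aligned sequences, standing in for many tRNAs
# or many people: position 1 is not always an A.
seqs = ["ACGT", "ACGA", "ACGT", "CCGT"]
profile = position_probabilities(seqs)
```

A single sequence is the discrete extreme of this representation (every vector is all 0's and a single 1); the population view is where the continuity comes in.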
It allows you to make very simple diagrams. But you must remember that many of these are actually composed of gradients or graded responses. Not necessarily all are [? on/off. ?] Some of these gradients or graded responses have the sigmoid shape that I've drawn as an icon down the middle of this slide, separating the discrete from the continuous. And so for all intents and purposes, things tend to hang out either off, here at the bottom, or on, at the top of the concentration limits or the signal limits if the signal is electrical or so forth. Similarly, we'll have examples where we'll have a field of cells which are either on or off, effectively at the very extreme ends of this. But because there's a mixture of them, if you were to mush them up and measure some property of them, it would appear to be somewhere intermediate. So an alarm should go off every time you see mixtures: to model them, you may have to model a population, each individual behaving more extremely than the average, with very few in the middle. Similarly with mutations, you might say that certain mutations are essential for life. Others are neutral. They have no effect. Others, in more of a gray zone, are conditional. OK. Just a point of orientation for how we can describe not only the bits, the discrete or digital components of this course-- many of these prefixes are useful for describing the continuous as well. Many of you will be familiar with these. All of you, I'm sure, have access to computers, and so you've used terms like kilobyte and megabyte and gigabyte. Technically, 2 to the 10th power is 1,024, not 1,000, and so it shouldn't be referred to as kilo. But for most intents and purposes, these are so close to 1,000 or a million or a billion that they're used interchangeably. But this is the official standard here.
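The graded-but-nearly-on/off response and the mixture effect described above can both be shown with a Hill function, a generic textbook sigmoid rather than anything specific from the slides (the rate constant and cooperativity here are invented for the demonstration).

```python
def hill(signal, k=1.0, n=4):
    """Hill-type sigmoid: a graded response that looks nearly
    all-or-none when the cooperativity n is large."""
    return signal**n / (k**n + signal**n)

# Individual cells hang out near 'off' or 'on'; a mushed-up 50/50
# mixture of the two extremes reads as intermediate even though
# no single cell actually is intermediate.
population = [hill(0.1)] * 50 + [hill(10.0)] * 50
bulk_average = sum(population) / len(population)
```

The bulk average lands near 0.5, the "alarm" case: a population of extremes masquerading, in a bulk measurement, as a uniform intermediate state.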
And certainly for the continuous numbers that we'll be talking about, the order we'll have is: 10 to the plus 3 will [? be kilo, ?] 10 to the minus 3 will be milli; mega, micro; giga, nano; and so on, all the way up. [? For the ?] numbers, attomole to zeptomole is getting close to the limit where you're getting close to single molecules. Petabyte is getting close to the limit of where your computers can go right now. Why is it important to have defined quantitative measures? Why can't we just casually say, oh, yeah, it's a long time, rather than giving seconds, or it's a really long distance to Harvard Square, rather than meters? Why do we have to use meters, kilograms, seconds, moles, degrees Kelvin, candelas, amperes? These are the seven basic international system units from which most of the other units that you use in science are derived. In a certain sense, there is some inter-convertibility even within these. We'll be talking about precision a little bit at the end of today's class and throughout the course. In fact, a theme today is that in order to represent things digitally, there are approximations that are made. And you should always question what those assumptions are when you make an approximation. The precision for some of these units of measure-- for example, the time scale measured in seconds-- can be as precise as 14 significant decimal digits. Now, you may wonder from time to time in the course why biology doesn't have 14 significant figures. But I will leave that as an exercise for one of your projects, maybe, to figure out how to get that. But for all practical purposes, most of biology is three or four significant figures. There are quantum limits to time and length, which are so short that we don't need to concern ourselves. But the quantum unit for the mole is actually of great importance to this course. A mole is 6 times 10 to the 23rd entities.
These entities can be photons or molecules, or we even have over 6 times 10 to the 23rd bacteria in an ocean. So what's important here is that the quantum of this is the molecule, and many of the things we'll be dealing with are single molecules. The last topic of today's lecture will be how one deals with single molecules as opposed to large buckets of molecules. Now, we have all those great quantitative definitions for all the standard units used in physics and chemistry, and even in biology, but what about the definition of biology itself? Most biology books start with this. There are entire books dedicated to this question of what is life. I think that rather than, in this session, asking what is alive or not-- rather than make it a dichotomy-- I'd prefer to ask how alive something is. What is the probability that a given entity will replicate? We have, here on slide 10, the probability of replication, and not just how likely it is for the whole thing to replicate or some part of it to replicate, but whether it is doing so using simple parts, simple environmental components, and creating great complexity from that. How faithful is the replication? Now, this probability of replication from simplicity to complexity can be defined for a specific environment. Sometimes, the environment required for replication is, of necessity, very specific. Certain organisms do not do well except in their native environments. Zoos discover this, and we keep killing off species for a variety of reasons, one of them being this specificity. So another aspect of life-- and each of these, in principle, can be quantitated-- is how robust it is. How many environments can it handle? Inevitably, there are environments that cannot be handled. But the more robust and adaptable it is, in a certain sense, the more alive it is, or the more alive its descendants will be, millions of years from now, probably.
Now, some very challenging examples I list here. I'm not going to walk through all of them. But things that have challenged people's definitions of life before are mules-- that is to say, sterile hybrids which will not leave behind progeny but seem quite as alive as their parents were, though they will not replicate, generally. So the probability of replication is low for the entire organism, though perhaps not for individual cells. Fires replicate quite well, but most people would like to exclude them from this definition. Or perhaps if we're in the mode of not excluding things from life, the probability of replication and the complexity-versus-simplicity criterion might impinge upon fires. Crystals-- if you nucleate a supersaturated solution with a crystal, it will make, in a certain sense, copies of itself. How faithful is that? How simple is it? Flowers, viruses, predators-- these require very complex environments. They require environments often more complex than themselves in order to replicate. Does that make them less alive? I'll show an example of molecular ligation as the simplest case just to get us warmed up for this introductory class. Since we're on the topic here, briefly, of general biology, not just the historical terrestrial biology for which all of us feel particular affinity-- what if we had visitors from some other planet, or if we started making our own self-assembling machines? And to some extent, we already are making self-assembling machines in factories. They require a very complex environment which includes humans. But how do we define these things? Now, in order to get at that complexity-versus-simplicity issue and the faithfulness of replication, we need to define both replication and complexity. Replication is not perfectly faithful in many of the things that you would consider alive. A simple bacterium, even though it will make a copy that does many of the same things-- we know it when we see it-- looks like it's the same thing.
If you actually counted the molecules, no two bacteria would be alike. No two humans, certainly, are alike. But even supposedly genetically identical bacteria will have different numbers of proteins and small molecules. Complexity has at least four definitions we will use in this course. There is computational complexity, which is of practical significance in computational biology, as the computer scientists in the room will know, in that this tells us the speed and memory tradeoffs we have in scaling up any problem. Does a problem scale up as a simple linear function of the number of inputs that you give to the computer? Does it scale up as a simple polynomial, or is it worse than that? Is it exponential in behavior? Is it something that you can prove you got the right answer in polynomial time, and so on? We'll get to this later on, but I just want to introduce it as something where the word "complexity" is used. Number two sounds similar but is actually quite a bit different. It's algorithmic complexity or algorithmic randomness. And this is basically any string-- not just a computer program, but a computer program can be represented as a string-- can be reduced down where you get rid of obvious redundancy, and what's left is the randomness. And the number of bits it takes to encode that algorithm is a reflection of the complexity. That doesn't necessarily give you any predictions about how long it will run or how much memory it will require during computation. Entropy and information, number 3, is related to item number 2 in that the more complex the string-- a string is just a series of symbols like this, and that was what we were talking about with the randomness-- the more complex the string, or an image can be turned into a string-- these are three images here-- then the more bits you need to encode it in order to, say, make a file. You might compress the data, but ultimately, after it's fully compressed, that's the amount of information you need. 
Entropy is a chemical term. This information definition was championed by Shannon-- we'll come back to it-- and entropy by Boltzmann and others. And the fusion of these two into a unified theme is extremely important in both chemistry and information theory. Nevertheless, none of these above reflect our intuitive feeling for when we look at these three panels. We have a highly ordered, which would be a very low entropy, array on the left-hand side. We have, on the far right-hand side, something highly disordered, essentially random. This may have been generated by a coin toss where, as you fill up the array, you just say black or white, toss a coin. The one in the middle-- and so the low entropy on the left could be easily represented as 010101 in a simple array-- very little information, very little entropy, algorithmically, not very random. At the other end, though, it has high entropy, high information content. It takes a lot of bits to represent it even though you know you got it just by a coin toss, and it has high algorithmic randomness. But if you allow a coin toss to be part of your description of the physical complexity of this pattern on the far right, say a gas or something like this generated by coin toss, then you can represent it as a particular kind of random description, which this is almost as-- this is about as easy to describe as a highly ordered system. So even though it's high entropy, it has low complexity. And complexity lies somewhere in between where you have lots of different kinds of symmetries, lots of different scales of structures, and it's very hard to represent it either as a random coin toss or as a highly ordered system. How do we quantitate this? We're really trying to move from the vaguer definitions to something that really encompasses what we intuitively feel about complexity. Here's an example I happen to like. I would say that this is not something that's broadly adapted, but I want to expose you to it. 
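The Shannon information just discussed can be computed directly. This Python sketch uses symbol frequencies (zeroth-order entropy), and, tellingly, it assigns the same one bit per symbol to a strictly alternating string as to a fair coin toss: exactly the sense in which entropy alone misses the intuitive complexity of the middle panel.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Zeroth-order Shannon entropy of a string, in bits per symbol,
    computed from symbol frequencies alone (order is ignored)."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform = "0" * 64       # all one symbol: zero bits per symbol
alternating = "01" * 32  # highly ordered, yet one bit per symbol here
```

The alternating string is algorithmically trivial ("print 01 thirty-two times"), so its true description length is tiny; frequency-based entropy cannot see that, which is why the algorithmic and physical-complexity definitions above are needed as well.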
Here we have entropy, or the randomness, the Shannon information, on the horizontal axis of slide 12, where the low entropy that's highly ordered is on the left-hand side. It's 0. And the highly disordered, random, high entropy of 1 on this type of scale is on the right-hand side. And complexity, as we said, we expect does not correlate perfectly with information or entropy, in that this highly disordered structure on the right-hand side actually has very low complexity. And complexity is not even a single-valued function of entropy or information, because you can have multiple different structures that have the same entropy but different complexities. That's what we see here when you take a slice, a line up from the horizontal axis through the complexity. You can have multiple different values in here. OK. That was an example of a model. Yeah. AUDIENCE: [INAUDIBLE] despite how complexity is measured or [INAUDIBLE] definition-- GEORGE CHURCH: You'll really have to refer to this article. It's a fairly complicated one. But the idea here was you take a complicated set of data generated by a logistic map, which we'll come to in just a few slides, and you ask, as you increase the complexity of the map in a way that appeals to the intuitive definition of complexity from the previous slide, what it would take to represent it: you calculate the number of symmetries, the physical symmetries, allowing coin tosses as part of the algorithm for simplifying it. It differs from all the previous definitions. Crutchfield and coworkers have championed this. And I would say it's not widely accepted, but it's certainly not rejected either. And I find it appealing. So why do we model? That was an example of modeling complexity. I pointed out earlier the models that we had for a three-dimensional structure. This course is mainly about measurements. So why model? I would argue that when we measure, we actually must model.
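The logistic map mentioned in that answer is simple enough to state right away (a generic sketch; the parameter values are illustrative, and the course returns to it properly later): one quadratic rule whose behavior ranges from a dull fixed point to chaos as a single knob is turned.

```python
def logistic_map(r, x0, steps):
    """Iterate x -> r*x*(1-x): a one-line rule with a huge
    range of behavior depending on the parameter r."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

settled = logistic_map(2.0, 0.3, 50)  # converges to the fixed point 0.5
chaotic = logistic_map(4.0, 0.3, 50)  # wanders over the interval (0, 1)
```

That spectrum, from ordered through intricate to effectively random, is what makes the map a convenient test bed for complexity measures.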
And we need to do that not merely to understand the biological and chemical data that we are collecting. If we understand, we can have various tests for our understanding. Tests might include, but not be limited to, designing useful modifications of a system. The course is mainly about systems. And by using our models to design a new variation on the system and then obtaining additional data and modeling that new data, we get a better successive approximation to what the underlying data mean. And we prove it, often with useful modifications that can have impact on society or impact on other research agendas. Another reason why we model is in order to share data. Typically, when you look in a database-- those of you who have looked at biological or chemical databases know this-- that is not data in the database, generally speaking. There are models in there. And the things that you download are models. So for example, the sequence that I showed at the beginning-- those 76 nucleotides of A's, C's, G's, and T's-- that's a model that represents our interpretation of some kind of fluorescent patterns that we see when we run our instruments on amplified DNA taken from a variety of organisms. That's the model. Similarly, the three-dimensional rotating transfer RNA that we saw was probably more obviously a model. There, we integrate the chemistry of molecular mechanics and the physics of the diffraction data, where the atoms-- the electron density in a crystal lattice-- diffract and make a pattern. We integrate those models, and that's what's shared in databases. Not only does it allow us to share data, but the way we share it is by searching. Those of you who have searched either biological databases or the internet with Google or something like that know how powerful a search can be. Typically, it is easier, more powerful, and more accurate to search with a model than to search through raw data. When we merge datasets, we will align them.
We will find redundancies. We will integrate different concepts. That requires modeling. And then checking data-- one of the themes of this course will be to embrace your outliers. When you model your raw data and you find data that are very far away from expectation, this is a good thing. It allows you to find errors in your model, or errors in your data, or discoveries. So checking is another reason. Finally, integrating-- as I said, this course is about measures and models. And it's not just a survey of all the things that you can do with computation in biology and computational biology, but most importantly, it's about integration. We need more than one data type to make progress in biology, medicine, and agriculture. And integration is one of the biggest challenges that we have right now. It's relatively easy to collect a single homogeneous dataset, but then to connect that to the rest of the world is the big challenge that all of you will have. A theme that will go throughout the course will be this business about errors, two types of errors, random and systematic. And every time that somebody hands you some data, or you collect your own data, or you're dealing with something from the literature, you should immediately assume there are both random and systematic errors. You should know which is which and how much there is. You should not accept anything as being true without qualification. Random errors mean that if you repeat the experiment again and again, you will get slightly different variations on the error types. And to a certain extent, they will average out over enough determinations. Systematic errors, on the other hand-- you have a high probability of getting almost exactly the same error again and again, or a very small subset of a certain class of errors, which means that just doing it over and over will not improve your statistics or improve your accuracy. You need to change paradigms altogether, collecting data by more than one method.
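The random-versus-systematic distinction can be demonstrated in a few lines (Python; the true value, bias, and noise level are invented for the demonstration): averaging many repeats shrinks the random error toward zero but leaves a systematic offset completely untouched.

```python
import random

random.seed(0)
TRUE_VALUE = 10.0

def measure(systematic_bias=0.0, noise=1.0):
    """One noisy measurement: true value + a fixed bias + Gaussian noise."""
    return TRUE_VALUE + systematic_bias + random.gauss(0, noise)

# Averaging 10,000 repeats beats the random error down by ~100x...
unbiased_mean = sum(measure() for _ in range(10000)) / 10000

# ...but however many repeats you average, a systematic bias survives.
biased_mean = sum(measure(systematic_bias=2.0) for _ in range(10000)) / 10000
```

The biased mean converges confidently to the wrong answer, which is why systematic errors force you to change methods rather than just collect more of the same data.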
So those are the types of reasons that we model. And now we should go through the kinds of models that we'll use. This is a course that's basically on sequence: how sequence leads to three-dimensional, or even four-dimensional, structures with time; how three-dimensional structures lead to function; how function is embedded in complicated systems. So which models will we be searching, merging, and checking, as we said in the previous slide? Which models will we use for sequence? We will be making an assumption in dynamic programming, which is just a fancy way of saying searching and aligning-- dynamic programming will make the assumption that sequences are related to one another. It makes the further assumption that they are related by ancestry. That is to say, you mutated them in the laboratory, or perhaps you were mutated before you came to the laboratory. That's dynamic programming, which can align sequences or three-dimensional structures that are quite different from one another. A few slides back, when we talked about replication: if two things are replicated-- showing common ancestry, say-- the faithfulness of that replication is key, and dynamic programming is key to your accepting that something is actually replicated. If it changes into something completely new in the process of replication, then it's unlikely it will be able to maintain that. Three-dimensional structure-- we will be talking about motifs, catalysis, complementary surfaces. I'll give you a beautiful example of a complementary surface in a few slides. And all of this is dealing with the continuous functions of energy and kinetics, where different complementary surfaces will bind to one another with very specific rate constants covering many orders of magnitude. These energetic and kinetic phenomena are what underlie functional genomics.
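Dynamic programming for alignment can be illustrated with its simplest instance, edit distance (this is the standard textbook recurrence, not the course's own scoring scheme, which comes later): the fewer edits separate two sequences, the more plausible their common ancestry.

```python
def edit_distance(a, b):
    """Classic dynamic-programming alignment: the minimum number of
    single-character insertions, deletions, and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))       # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                       # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # delete from a
                            curr[j - 1] + 1,              # insert into a
                            prev[j - 1] + (ca != cb)))    # (mis)match
        prev = curr
    return prev[-1]
```

The same table-filling idea, with match/mismatch/gap scores in place of unit costs, is what the course's sequence-alignment lectures will build on.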
When we model functional genomic data, one of the ways we do it is we ask whether the phenomena that we're studying inside our bodies or inside of microorganisms and so forth-- the protein levels, the RNA levels, and so forth-- where they go up and down together, form a cluster. They have common properties. Do these common properties reflect yet another aspect of them, like common functions or common mechanisms for being coregulated? In systems biology, we have these qualitative diagrams that tell us about the all-or-none, or Boolean, behavior, which means logical 0's and 1's. Or we can treat systems as continuous differential equations, or as stochastic, where you have some of the power of the differential equations, but you deal with individual molecules or individual organisms in populations. Organisms can be stochastic. Molecules can be stochastic. And another thing-- again, this theme of optimization-- we can ask whether a network or network component is optimal. Is it optimal due to the past history of the organism, or is it something we can set as the goal of a biotech process to optimize for a new function? Linear programming is the mathematical tool that one can use for studying this. It's common in economic algorithms. So we were talking about modeling in this whole realm, going from minimal life sequences through the catalysis that involves interactions of three-dimensional structures, functional genomics, and the optimality that is required, as we will see momentarily, for getting single molecules to work. What's our parts list? We're going to try to start simple, but put the simplicity in the context of the big picture. The big picture for the atoms is the periodic table of the more than 100 elements. We have a very short list here-- sodium, potassium, iron, chlorine, calcium, magnesium, molybdenum, manganese, sulfur, selenium, copper, nickel, cobalt, and silicon-- which are useful in many species.
If you have to pick something that's related to us as a minimal biochemical system that shows some of the replicative properties and evolutionary properties of life, you would say RNA-based life. One of the breakthroughs in experimental science over the last couple of decades is recognizing that RNA, like proteins, has catalytic capabilities. It has the potential to have been one of the early replicating units. Just for the sake of describing a simple system, let's look at something made up of five elements. It's not necessarily the simplest, but it's just something to think about. These five elements can be in the presence of an environment which can be composed of the same five elements. The environment would be very simple-- that complexity-versus-simplicity point. It would be water, ammonium (positively charged ions), nucleotide triphosphates (negatively charged ions), which are precursors to making polymers, and then possibly lipids, the fatty substances that form membranes. An example of catalyzed RNA polymerization is in this article, and then I'll deal with another one that, rather than using nucleotide triphosphates, uses slightly larger RNA precursors. Now let's start toggling back and forth between living systems and computational systems so that, whether you're from a computational background or biological or both, you'll see some interesting relationships. Here we talked about a minimal biological system with five elements. Here we're going to talk about some minimal programs with a very small number of elements. Basically, they're limited to a single line of code each. And they do something which is related to the topic here. Replication is an exponential process, exponentially growing. Autocatalytic is another way of describing it. So we've used that as the theme for our minimal programs. What is this exponential function? We give it a very specific argument. The argument is 1.
So this is e to the 1 power, e being this number here, about 2.718. The four languages that we're demonstrating here are Perl, Excel, Fortran 77, and Mathematica. In this course, we'll mainly use Perl and Mathematica, for reasons that will be evident in the next couple of slides. But I just want to show you these different ones. And in the theme of accuracy of replication-- how faithfully is a particular string handled, either the string of nucleotides in a simple life form or the string of digits in a number-- you can see that the internal representation of the math that goes on inside the computer has to be done digitally, even though these are transcendental numbers which have an arbitrary number of digits. And you can see that some programs are typically limited. And you can even guess the number of bits representing the number internally. Here down at the bottom of slide 17, you can see some of the nitty-gritty detail of how these things are represented as 0's and 1's representing the [INAUDIBLE], the exponent, and so on. But you can see in some of these programs, when they actually print, the program is not aware of, or the programmer was not aware of, exactly what the internal representation was. And so all those trailing 0's are incorrect. The winner in this, of course, is Mathematica. Here, to ask for e to the 1 power, you say, let's give it arbitrary precision-- this N bracket says give the numeric value to 100 digits. And this seems like a stunt in this particular case. But actually, when you start doing a series of calculations where errors can accumulate, the ability to go into arbitrary precision will allow you to prevent a catastrophic imprecision. And what happens is, in a mathematical calculation, if it needs a little more precision than you initially anticipated, it will go out and get it on its own, which is quite remarkable. Try to do that in any of these other languages.
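Python (standing in here for the Perl example) tells the same story: the float carries roughly 16 significant digits, while the standard decimal module can reproduce the 100-digit trick, in this sketch by summing the series e = sum of 1/k! directly. The 200-term cutoff and the guard digits are our own arbitrary safe choices, not anything Mathematica does.

```python
import math
from decimal import Decimal, getcontext

e_float = math.exp(1)      # double precision: about 16 significant digits

getcontext().prec = 105    # 100 digits plus a few guard digits
e_dec = Decimal(1)
term = Decimal(1)
for k in range(1, 200):    # 199! vastly exceeds 10^100: series has converged
    term /= k
    e_dec += term
```

Unlike Mathematica, the precision here is fixed when you set the context rather than extended automatically mid-calculation, so the guard digits are your own responsibility.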
They will typically give you what's called an underflow or an overflow error and either stop or just make mistakes. OK. That's my first advertisement for Mathematica. You can see all of these are very simple programs that hide underneath them the very complicated electronics and algorithms that are built in to accomplish this. I'll give you a feeling for what that is in just a couple of slides. So back to self-replication. We're toggling back and forth between simple biological and computational systems. Here we've represented tri- and hexanucleotides, three nucleotides and six. Again, remember, these letters are crude-- or I should say simple-- representations of electron density involving dozens of atoms in the shape of a cytosine, a C, or a guanine, a G. It's important that you know that there are typically two strands in DNA, or even in many RNAs, as in this particular example, whether this is RNA or DNA. And in order to indicate the orientation of the strand, it has a directionality. We indicate 5-prime, which is the name of an atom in the ribose, but the details are not important from a computational standpoint. It's just a way of indicating this is one end of the RNA. And a CCG is not the same as a GCC. The 5-prime indicates the 5-prime end of that. We take two of these, which are basically identical, and they will ligate together spontaneously, if you set up the chemistry right, to make these two trinucleotides into a hexanucleotide, CCG CCG. In the presence of a complement here, which is CGG CGG, a different sequence-- I've tried to emphasize that by making it capital and green-- these two trinucleotides would bind to it, aligned by Watson-Crick base pairs. The rules for Watson-Crick base pairs, many of you may know: A pairs with T, and C pairs with G. And that will catalyze this process. It will speed up the kinetics by which these trinucleotides turn into hexanucleotides. Now, here it gets interesting.
This hexanucleotide now drops down here and speeds up the process of this different trinucleotide [INAUDIBLE] and forming the original one that catalyzes the first reaction. So this catalyzes, speeds up, the first reaction, which produces a product that speeds up the second, which produces the first catalyst, and so on. This is called a hypercycle, championed by Eigen and Schuster. And this is probably one of the simplest examples of it. And I think it gives you a feeling for how a variety of interdependent biochemical processes can result in autocatalysis, where you get these exponential cycles which we recognize as something very similar to replication. This could have a very high probability of replication. It could be very faithful, but its complexity is low. The input complexity is low, and the output complexity is low. That's how we would qualify it. Toggling back to computing and simple examples, I've given some simple examples already of Perl and Mathematica. But to give a few more here on slide 19: why did we choose these two for this course? It's been my experience, both in our laboratory and research environment, and also in this class, that these are two of the easiest languages to learn. That doesn't mean that they're absolutely easy, but the learning curve is very simple. It's very fast. And you can, by example, change programs that are working into things that do what you want very quickly. They're high-level languages in a sense-- the higher the level of the language, the closer you are to English conversation. The lower the level of the language, the closer you are to bits, 0's and 1's, or the actual electronics inside of computers. So in the hierarchy, Perl and Mathematica are very high up there. Yeah. AUDIENCE: Where do you get the password to download Mathematica? GEORGE CHURCH: You will get that from your section teaching fellow. Yeah.
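The cross-catalysis in the hypercycle above can be sketched as a toy kinetic simulation. This is a deliberately minimal Python model, not the real template chemistry: the rate constant, time step, and starting concentrations are made up for illustration, and each species is simply produced at a rate proportional to the concentration of its partner catalyst.

```python
# Toy hypercycle: hexanucleotide A speeds up formation of B, and B speeds up
# formation of A, so each grows at a rate proportional to the other.
# Integrated with a simple Euler step; all parameter values are hypothetical.
def simulate(steps=2000, dt=0.01, k=1.0, a0=0.01, b0=0.01):
    a, b = a0, b0
    history = []
    for _ in range(steps):
        da = k * b      # A is produced at a rate set by its catalyst B
        db = k * a      # and B at a rate set by A: cross-catalysis
        a += da * dt
        b += db * dt
        history.append((a, b))
    return history

traj = simulate()
# With equal starting amounts, da/dt = k*a, so growth is exponential,
# which is the "very similar to replication" behavior described above.
print(traj[-1])
```

Starting from 0.01 of each, twenty time units of this cross-catalysis amplifies both species by many orders of magnitude, which is the exponential signature that makes a hypercycle look like replication.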
So the question is where do you get the password for Mathematica for this course, and the teaching fellows are responsible for that. And that's been tested. I think that works. So Perl is free-- it's open source. It's interesting in that regard. I think many of you will find the concept of open source interesting: when software breaks, or needs to be expanded, or needs to be understood, it's there for your inspection rather than hidden behind some corporate doors. A very interesting aspect of Perl. Mathematica is not open source as far as I know. It has some interesting redeeming features, though. It's very strong on math-- no surprise, given its name. In particular, it's both symbolic and numeric, and strong in graphics. And we'll have some examples of that in just a moment. It can do things that are hard to do in Perl and other languages. Perl is also very strong for web applications. Many of the most amazing things which are done on the web have Perl behind the scenes. Now, a further comparison-- this is the dark side of computation and biology, showing another analogy, which is parasites. We have parasitic computer viruses and biological viruses. In past years, I had this little bit of a computer virus-- this is not the whole thing, just a little piece of it. It's fairly short code. And this was a very nasty one at the time, quite a while ago. And I had it in the PowerPoint presentation. When people would download it, they would then send me an email: I'm sorry to inform you, you have a virus in your PowerPoint. And so this year, I've upgraded the PowerPoint, so this is actually an image rather than the actual text. And so your viral detectors will not detect this unless they're a lot smarter than I think they are. This, on the other hand, is the cost of these viruses. This is not a laughing matter. This is 4 times the cost of the entire Human Genome Project per year. The Genome Project took us about 20 years to get going.
Every year, we spend four times that amount on computer viruses, which are, as far as I can tell, completely frivolous. Even more serious is this. Now, this is real text, not an image-- it looked like an image to you. It's very serious. And I sincerely hope that some of the people in this class make a contribution to the intellectual avenues which will ultimately lead to the defeat of the AIDS virus and the various other viruses and bacteria like it that cause so much suffering in the world. 20 million have died. This is worse than the Black Plague and the 1918 influenza epidemic. And here's the analogy between the two. This is a little piece of the viral code, in the single-letter symbols of the 20 amino acids. This is a little piece of the computer virus. And I've highlighted here the command copy in the computer virus. This is just part of what it does to get this particular VBS script, the thing you're seeing here, into some other part of your directory. And this, essentially, is part of the copy command in the AIDS virus: the polymerase, the protein responsible for making copies of the code for the virus. And highlighted in red here are some of the mutations that make this particular AIDS virus resistant to the drugs which have been instrumental, in some cases, in taming AIDS temporarily, and at great cost. Clearly, vaccines, and hopefully other public health measures, will be the answer. Here are some other conceptual connections. These are not meant to be dogma or to limit you in any way, but hopefully to expand the way you think about these things. Obviously, as with other analogies, they will break down. But let's just follow for a moment. What we're calling the instruction set is, in computers, a program; in organisms, a genome. We've already said the bits: 0's and 1's versus A, C, G, and T.
The stable memory, the thing that you can depend on but is a little bit slow for access, is disks and tapes in computers, and DNA, or in some cases RNA genomes, in organisms. You take it out of the slow, stable memory and move it into something that's more active, more volatile, which is random-access memory in computers and the RNA in organisms. The environment for computers tends to be complex-- it's the internet sockets and people banging on the keyboard-- while for organisms it can be very simple. I gave some examples earlier where it's complex, but it can be as simple as water and salts, in which they can replicate very complicated structures. The input can be analog and converted to digital in this process, and digital converted to analog for output-- the analog, say, of your screens. And the input-output is governed by the proteins at the end of the central dogma in organisms. When we make these complicated systems from simple things, they go from so-called monomers, which combine into polymers-- which need not be linear polymers, although they often are in biological systems-- and basically, you go from minerals to chips in computers. These are replicated in factories. The factories, or cells, can be as small as a femtoliter. Remember, this prefix, femto, is 10 to the -15. That's about 1 cubic micron. Very small factories, very amazing productivity and complexity. Again, input-output as above. And communication is extremely fast in computers, slower but very rich in organisms. After a very short break-- we can stretch-- we'll come back and talk in more detail about how computers actually are made and how biological systems are made. Thank you.
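As an aside, the claim above that a femtoliter is about 1 cubic micron is quick to check with unit arithmetic; here it is as a tiny Python sanity check.

```python
# 1 liter = 1e-3 m^3, and femto = 1e-15, so 1 fL = 1e-18 m^3.
# 1 micron = 1e-6 m, so 1 um^3 = (1e-6 m)^3 = 1e-18 m^3: the same volume.
femtoliter_m3 = 1e-15 * 1e-3      # 1 fL expressed in cubic meters
cubic_micron_m3 = (1e-6) ** 3     # 1 um^3 expressed in cubic meters
print(femtoliter_m3, cubic_micron_m3)
```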
MIT HST508 Genomics and Computational Biology, Fall 2002 -- Lecture 2B: Intro 2, Biological Side of Computational Biology, Comparative Genomics, Models (A)

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. [SIDE CONVERSATION] GEORGE CHURCH: OK, welcome back to the second half of the second lecture. Here, we're going to take this beautiful example of an algorithm, which the cellular machinery applies to get from DNA to RNA to protein, and look at a simple example where that is incorporated into a program very similar to what you'll be doing on your problem set. So here, we've got it nicely color coded. This is unrelated, of course, to the color coding of the genetic code we've been using so far, but it includes the genetic code. You can see the comments here are preceded by this little number sign. We go from the genome, a DNA sequence, which is a string. Strings are one of the things that Perl does very well. It's not the only string-manipulation programming language, but it's a particularly easy one. And so here is how easy it is to enter a DNA sequence-- one of the many ways of entering a DNA sequence. You can bring it in from a file; here it is as part of the code. You transcribe it into RNA in silico here by the simple command where you say the RNA sequence is equal to the DNA sequence, and then you substitute all the Ts for Us globally. That's what line 12 is on slide 23. Now, off to the side here is a reminder that it's really much more complicated than that. There are all these proteins involved in doing it accurately and with regulation, and so forth. But for the sake of this Perl program, this is quite sufficient to get us to an RNA sequence, which we can then translate. And here, the translation process uses-- it's going to be a cycle. So in a sense, the cycle in RNA synthesis, where you put in one nucleotide at a time, is all compacted here.
It's all just substitute every T for U. Here, we're going to have a more explicit loop, this while loop. You can see it's indented: everything within the loop, that's going to be iterated, is offset a bit. And what you're going to be doing is looking in groups of 3. That's what this 3 on line 17 is: looking at the position you are in the RNA, and taking a chunk of 3 at a time. The chunk here is a substring. And you pull out a codon as that substring. And then you do a translation. So now this is, in a sense, representing the modularity of biology and the modularity of good programming code. You put this whole business of translating in a separate part of the program, so you don't have to embed the code everywhere that it's going to be used. And the translation here is that simple table. So this subroutine, S-U-B, line 22-- again, all the code in the subroutine is indented. It goes off the bottom of the slide and down through the floor with all the different cases. Now, we could list 64 cases for the 64 trinucleotides, or we can use the more compact string manipulation that you can do in Perl. A dot means any kind of character, so GC followed by any character would return alanine: GCA, GCC, GCG, or GCU-- that's four possibilities represented by that dot. And then cysteine has two possibilities: it's either UGC or UGU, where the vertical line means or. And you get the idea. There's a whole, very compact syntax here for doing the translation. And that's how we do one of the cleaner, more simple algorithms in computational biology. And now I'm going to make it complicated again. But first, to set the stage for how the genetic code is not universal, we have to explain what we might mean by it being universal. This is the ultimate pedigree on slide 24 here.
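Before moving on: the transcribe-and-translate program just described can be rendered in Python for readers who don't have the slide's Perl in front of them. The structure is the same -- a global T-to-U substitution, a loop stepping three bases at a time, and a translation table written with the same dot-and-vertical-bar regular-expression shorthand. Only a handful of amino acids are included here for illustration (not all 64 codons), and the demo sequence is hypothetical.

```python
import re

# Codon patterns in the slide's style: a dot = any base, | = "or".
# This is an illustrative slice of the code, not the full 64-codon table.
CODON_PATTERNS = [
    (r"GC.",        "A"),  # alanine: GCA, GCC, GCG, GCU
    (r"UG[CU]",     "C"),  # cysteine: UGC or UGU
    (r"AUG",        "M"),  # methionine (also the usual start)
    (r"UU[CU]",     "F"),  # phenylalanine
    (r"UA[AG]|UGA", "*"),  # the three stop codons
]

def transcribe(dna):
    return dna.replace("T", "U")          # the slide's global T -> U substitution

def translate(rna):
    protein = ""
    for i in range(0, len(rna) - 2, 3):   # step through the RNA 3 bases at a time
        codon = rna[i:i+3]                # Perl's substr, pulling out one codon
        for pattern, aa in CODON_PATTERNS:
            if re.fullmatch(pattern, codon):
                protein += aa
                break
    return protein

rna = transcribe("ATGGCTTGTTTTTAA")       # hypothetical demo sequence
print(translate(rna))                     # MACF*
```

As in the Perl version, the translation lives in its own function, so the codon table is written once and used wherever it's needed.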
In principle, some very simple organism, possibly an RNA-based organism, may have made a proto-ribosome that may have done protein synthesis, or may have done some other chemical reaction at what is now the [INAUDIBLE] transferase site. Something along this line, it's speculated by some biologists, was the common ancestor of all living species, all living cells. And certainly, by the time we started getting branching of the three major branches-- the bacteria, the archaea, and the eukaryotes-- by the time we get to that point, there was probably a set of ribosomal genes that encodes all the proteins and RNAs of the ribosome, and these were shared. And then at each cell division, you split off two cells that were slightly different from one another. As they would mutate and differentiate and be selected, they would generate this huge diversity of organisms, which you see here. Now, this is basically a directed graph, in the sense that you can't have a descendant in this process being an ancestor of one of its parents. So time is an axis, going up in this case, unlike some of the more physics-based diagrams we had before. And as you branch out to existing species, you see that things like plants actually have inheritance not just along this direct tree-like structure; you've got more of a network-like structure, where some of the genomic material came in from one of the bacterial branches long ago. And this has recently been put on very firm footing on a genomic scale, in a recent article this month, on the bacterial genome of cyanobacteria-- these are the blue-green algae that fix carbon in all the oceans-- and on the simplest plant that's been sequenced, Arabidopsis, a weed. The DNA from the bacteria has not only gone into the chloroplast, which is an organelle with a very reduced genome, but has been spread through thousands of genes in the nucleus, which is the major place where all the chromosomes are in plants.
And so this possibly symbiotic relationship has resulted in a complicated inheritance. And this is not unique. Another one, coming in from the purple bacteria, has been incorporated into a separate membrane-bound organelle, which provides the ATP for both plants and non-photosynthetic multicellular organisms, animals. And these two arrows-- there could be thousands of arrows going over long periods of time at deep branches, and we know there are many interconnecting arrows in recent times. Almost all these organisms, or certain representatives from all over this tree, can take up DNA in various ways, can even mate with organisms of various species and exchange DNA and incorporate it. And so it's not this simple tree, but it is certainly going forward in time. It is directed and acyclic in that regard. So how many living species are there? We're building to the point, connecting back to the central dogma, of how many different genetic codes there are. So we need to know how many species there are. If you take a gram of soil, about a thumbnail's worth, from any of a variety of different soils that have been tested, you can find about 5,000 bacterial species. Well, what does it mean to be a species? In animals, that typically means that two different species don't produce fertile offspring when they interbreed. But there are books full of exceptions to this, even in animals. And in bacteria, of course, where they exchange DNA all over the place, as I said, it breaks down even more. So the working definition that many biologists adopt is that if two microbes share 20% of their DNA-- if you take their DNA and align it by algorithms such as the ones we'll be using in this course, and you find that 70% of the base pairs are conserved-- then they're the same species. Otherwise, they're different species. And there are millions of non-microbial species, many of which harbor microbial species.
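The conserved-base-pair criterion above reduces, once an alignment is in hand, to counting matching positions. Here is a minimal Python sketch for two already-aligned, equal-length fragments; the sequences are hypothetical, and real comparisons of course need an alignment algorithm first, which this course covers later.

```python
# Fraction of conserved positions between two aligned, equal-length sequences.
def percent_identity(seq1, seq2):
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

# Hypothetical aligned fragments differing at one of ten positions:
print(percent_identity("ACGTACGTAC", "ACGTTCGTAC"))  # 90.0
```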
That number of species is dropping slightly because of our inability to restrain our growth and other activities that cause species extinction. And the number of whole genomes is getting closer to 100, and the number in the pipeline is probably 600 or so, maybe in the thousands with new technologies. And there are over 80,000 species defined by one or more nucleic acid sequences in NCBI, the National Center for Biotechnology Information, which is one of the three major nucleic-acid databases in the world. Why do we study more than one species? The comparison between species allows subtle and not-so-subtle analyses of which positions are important to stay constant, because they provide some very fundamental biochemical activity, and which are important to vary, because they provide some important variants-- for example, escaping immune surveillance and so on. So there are reasons to be constant, reasons to be variable, and reasons to be neutral. So let's go back now and apply this to the genetic code, this particularly simple and elegant, nearly-universal code. This is one of the ways genetic codes are represented in NCBI. And here are the three bases of the codon, bases 1, 2, and 3. Remember, we said that UUU was phenylalanine, encoded by DNA TTT. So going down from the top in the leftmost column, it's TTT, single-letter code F, for phenylalanine. The amino acids go along the bottom row of this table, and you can see all the amino acids are represented. Stars represent stop codons, which are recognized not by transfer RNAs but by proteins called release factors, which simulate the function of the transfer RNA and cause release of the polypeptide, ending this cyclic incorporation of amino acids. Now, this is the so-called standard code, where you have one methionine here in the middle, encoded by ATG, and three stop codons in all the rest. Here's where it gets complicated. There are over 22 different genetic codes.
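The NCBI-style representation just described -- the amino acids read off against bases 1, 2, and 3, with codons cycling through T, C, A, G at each position -- can be turned directly into a lookup table. A Python sketch of the standard code (again, the course's Perl would work just as well):

```python
# Four parallel strings, in the layout NCBI uses for genetic code tables:
# the amino acids and the three codon positions, T/C/A/G order throughout.
AAS   = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASE1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
BASE2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
BASE3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

# Zipping the columns together yields the 64-codon dictionary.
standard_code = {b1 + b2 + b3: aa
                 for aa, b1, b2, b3 in zip(AAS, BASE1, BASE2, BASE3)}

print(standard_code["TTT"])  # F, phenylalanine, as in the lecture's example
print(standard_code["ATG"])  # M, the methionine in the middle of the table
print(standard_code["TAA"])  # *, one of the three stop codons
```

The compactness is the point: four 64-character strings carry the whole table, and the variant codes differ from it in only a handful of positions.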
Some of the changes from the standard code are indicated here in blue. For example, for the [INAUDIBLE] mitochondrial code-- this is the code actually used in every cell in your body, for the subset of the cell that is the powerhouse that makes ATP, the mitochondria we talked about before, which were part of the horizontal transmission of information from purple bacteria long ago-- the normal stop codon is now tryptophan, abbreviated W. There's an extra methionine. And there are two extra stop codons, which replace what would have been arginines in the standard code. And you can see there's little blue all over the place. There are also changes in the number of places where you can start: 1 to 3 start sites in the standard code. And when you talk about the starts, you start getting into questions like, how much do you favor that particular start? What other signals are required to start at that particular position? It's not as simple as just having an ATG trinucleotide to get a start of protein synthesis. You need other nucleic acid components. Anyway, still, it's only a slightly more complicated algorithm. You have to know exactly what organelle and what organism you're dealing with, but you can apply the same kind of computer code that we had a couple of slides back. But now we get into something even more complicated. And part of the reason I'm showing you this early on-- some of these things would not be in your textbooks for a first biology course, and they would not be in a first computational biology lecture or two-- is so that you'll have a healthy distrust of everything that you read and everything you hear, including everything you hear from me. And this should really make you distrust the genetic code. Because what these ribosomes do, in this particular sequence, which was well documented eight years ago, is that they will hop over 50 nucleotides.
They're not going 3 nucleotides at a time as they should; in fact, it's not even an integral multiple of 3 nucleotides. The ribosome is literally coming to a stop codon, and rather than stopping-- if it has just the right sequence context, including this complicated RNA secondary structure called a pseudoknot-- the messenger folds up. The messenger really should just be a messenger, which the computer-- the biological, biochemical computer that is the ribosome-- should be recognizing three nucleotides at a time; instead it's recognizing a morphology. This thing folds up and is no longer an informational molecule. It is a morphological recognition element. Anyway, when the ribosome finds that, it skips over 50 nucleotides, skips over the stop codon, and makes an otherwise perfectly normal protein. So don't even trust dogma-- especially don't trust dogma, central dogma included. Plenty of counterexamples. Now, we're going to move on from this very, very simple example of an algorithm, where we can model proteins directly from the nucleic acids that come out of DNA sequences wholesale. We now want to ask, how do we get the more quantitative data, which comes out of functional genomics rather than classical sequencing? How do we get that into quantitative models, and then get the quantitative models repopulated with additional quantitative data to make a full model? Now, a question that came up earlier: what is the function of a gene product? Here we dip into qualitative statements made in the literature, which have various ways of representing the evidence for them-- some of them very convoluted arguments, some of them very casual. But when an attempt is made to put these into a database or a data structure, as a representative, gross oversimplification of the literature, this is what often comes out: something like this, where you'll have a hierarchical table. Here, I've blown up one of the levels of the hierarchy.
You can think of it as a list, where the list may not be in a particularly logical order, but the hierarchy is, so that under metabolism would be some covalent change in substrates, which enzymes would catalyze. And then you'd have the information transfer we've been talking about, like DNA to RNA to protein, these biopolymers. Regulation of information transfer or of metabolism would have all these subheadings: type of regulation, trigger, and so on. And then transport and these various other processes. Each of these functions, such as those illustrated by these references here, can be used as a way of connecting all the new information we get to some systematic best-guess encapsulation of the literature. Another example of this, in addition to MIPS for yeast, is gene ontology, which is derived from the word ontology, or the nature of being. And the objective of GO, the abbreviation of Gene Ontology, is to provide a controlled vocabulary. My vocabulary during this lecture has been uncontrolled, as you've probably guessed. But I have pointed out the problems that you get into when you casually refer to gene expression when you really mean RNA expression, and refer to genes as protein-coding entities when you really mean protein- or RNA-encoding entities. That process of being more precise about our use of terms, at least when we're communicating with computers, is very important. When we communicate with each other, you'll give me a little slack-- some of you-- but computers won't. They will misinterpret every chance they get. And so that's what control of vocabulary is all about. And the inventors of gene ontology have a hierarchy including molecular function, biological process, and cellular component, which we'll expand upon in the next slide. A cautionary note: whenever you do modeling, there will be assumptions, as in this case.
Some of the assumptions exclude vast parts of biology, which are listed here as part of their documentation. Things that are not modeled in the gene ontology are domain structure and three-dimensional structure, which, obviously, have played a big role in the two lectures so far; evolution and gene expression-- we've already talked about the phylogenetic tree of evolution, and gene expression will be a big topic in the RNA and proteomics parts of this course; and the small molecules we've illustrated today. Almost everything in this course seems to be excluded from the gene ontology. Nevertheless, here we go with just one slide talking about the functions. We have molecular function: what the gene product can do, without specifying where or when. A broad example of this would be enzyme, something that catalyzes. And then a very specific example of an enzyme would be an adenylate cyclase, something that makes a cycle in the ribose of an adenylate. So both of these fall under molecular function when you're describing the function of a protein in describing a genome. A biological process has to have more than one step-- if it's one step, that's not a process. It typically has a time component, and there's a transformation that occurs. Signal transduction is an example of a broad biological process, and an example of signal transduction is cyclic AMP biosynthesis. The cellular component would somehow reflect this assembly into organelles that we were talking about earlier. And here, as an example, you have a ribosomal protein being part of a ribosome. So that gives you some idea of the three: molecular function, biological process, and cellular component. Now, as I said, this gene ontology is based on facts from the literature. Ideally, there would be a direct logical connection between the facts that are summarized in the hierarchical gene ontology and the raw data that came out of some instrument. That is not the case.
This is all from the literature, and it's done on a low budget, wow. And examples of how they summarize it: it's inferred from a mutant phenotype or a genetic interaction-- so those two are genetic. Or a physical interaction-- this passes for biophysics. Or sequence similarity-- now, as we go down this list, we're starting to get into murkier and murkier evidence. Sequence similarity, as you'll see in a subsequent slide, has problems. A direct assay could be a physical interaction, or it could be some other biochemical assay. An expression pattern might be evidence of some of the associations that are mentioned in the gene ontology. Then we get to electronic annotation. In a certain sense, all of these things are electronic annotation; sequence similarity might be a way that you automatically get electronic annotation. Then you get to a traceable author statement. This means that someone said something is true, without saying how he or she knows it's true-- so we're getting really murky. And the murkiest of all is a non-traceable author statement: you don't even know who said something might be true, OK. Let's go back up to the top of that list-- in fact, go beyond the top of it-- where we now will start tracking the data from the instruments to statements. And hopefully, in this course, you will see how we will, in the present and future, make models in a rigorous way where you can track it all the way back to data. So one class, the most obvious class, of data collection is what I would call direct observation, typically through a microscope. And here's a particularly powerful case. I promised you earlier that we would talk about how you have 959 cells in the non-gonadal cell lineage of the worm. It starts as a single cell, a fertilized egg, drawn as this egg-shaped thing at the top middle. And then it splits off, way off to the left and off to the right.
And that makes two cells, two stem cells, that are capable of differentiating and dividing further. And they each make two more, and it keeps going. But you can see the symmetry breaking starts almost immediately-- in fact, the egg itself is an asymmetric entity. And you start getting lineages that will either die as they terminate, or they will just stop dividing. And eventually, after about 1,000 cell divisions or so, you end up with these 959 non-gonadal cells. And this lineage has been completely mapped out by direct microscopic observation, where with a series of photographs you can show that this single cell turns into these two cells. So you have a time axis, and you have a lineage axis, which is one of these directed acyclic graphs. In addition, and even more amazing to me, anyway, is that you have a complete neural connection map for this multicellular organism. It has a fairly simple brain, if you've ever had a conversation with one of these things. But each neuron can have dozens to hundreds of connections. And these have been mapped by serial sections through the entire worm-- very thin sections and electron microscopy-- and then checking out the whole wiring diagram. This is really a tour de force. And part of the reason it's possible-- this would be hard to do in a variety of organisms, but this is another case where biology cooperates, just like with the genetic code: in this particular organism, that lineage happens the same way every time. In even slightly more complicated organisms, like the Drosophila fruit fly or humans, the lineages are not so strict, and a cell can take on a number of different directions depending on the exact physical environment it finds itself in. But nevertheless, for this one, the neural connections are reproducible, and the cell lineage is reproducible. And so you can map this all out.
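A lineage like the one just described is naturally stored as a tree: each node is a cell, each division contributes at most two daughters, and the leaves are cells that died or stopped dividing. Here is a minimal Python sketch using a dictionary, populated with the first few divisions of the worm lineage for illustration (only a tiny fragment of the real map, which reaches 959 terminal cells).

```python
# A fragment of an early cell lineage as an adjacency dictionary:
# each cell maps to its daughters; an empty list marks a terminal cell.
lineage = {
    "zygote": ["AB", "P1"],       # the first, asymmetric division
    "AB":     ["ABa", "ABp"],
    "P1":     ["EMS", "P2"],
    "ABa": [], "ABp": [], "EMS": [], "P2": [],   # leaves in this fragment
}

def terminal_cells(tree, root):
    """Collect the leaves reachable from root: the final, non-dividing cells."""
    kids = tree[root]
    if not kids:
        return [root]
    result = []
    for child in kids:
        result += terminal_cells(tree, child)
    return result

print(terminal_cells(lineage, "zygote"))  # ['ABa', 'ABp', 'EMS', 'P2']
```

For organisms where the lineage is not deterministic, the same structure would carry probabilities or conditions on each division rather than a single fixed set of daughters, as the lecture notes next.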
For other organisms, it doesn't mean you shouldn't try it; it just means you'll have to represent it in a slightly less-fixed pattern. You'll have to represent it as a probabilistic set of divisions and a probabilistic set of neural connections, and maybe even conditional on various conditions. OK, that's direct observation as a class of sources of data for modeling. Here are three other sources of data. In each case, I've shown pretty raw representations of the data. You can think of these all as representing an intensity readout, with some sort of separation as the horizontal axis, or in some cases both axes. So the intensity here is indicated by a line plot of four different color fluorophores in an electrophoretic separation, which is the basis for the genomic sequencing that we're so proud of here-- the detection of the fluorescence of the terminated chains of DNA. We'll get to that later in the course. Here you have mass spectrometry, where you're measuring differences in masses even more accurately than in sequencing. In sequencing, you're separating nucleic acids by their differences in mass to about 1 part in 1,000. The mass spectrometry is more like 1 part in 10,000, or even better, because here you're separating in a gas phase, based on electrical and magnetic properties, while there you're separating by charge in a liquid and gel phase. For each of these, you can specify the throughput per day or the throughput per unit dollar. This becomes important in planning these studies. The third category here is arrays. These can be arrays of nucleic acids for quantitating RNAs, or arrays of antibodies, proteins, or small-molecule chemicals, with which we can quantitate the binding of one kind of molecule to an array of other molecules. In both the top and the bottom, you can have multiple colors. And these can be used quantitatively, as internal standards, so that you can monitor this process. We're going to go into this in great detail later on.
But I wanted to give you a feeling for where the source of these things are. This array analysis, in a sense, is another example of microscopy. Just like in the previous slide, we used direct observation microscopy to monitor cell lineages. So too we can make-- it's just wonderful, the battery's charged, OK-- we can take the microscopy of artificial patterns, such as arrays. Just as we have separation here by mass, we can also have separation on a variety of other properties, sometimes called multidimensional separation. This gets back to the first slide of the lecture, which was the purification aspect. Now, how do we jump from that kind of raw data to this common way that biologists communicate in journals, where they have circles and arrows, where the circles might be some kind of protein molecule such as a Stat, and an arrow indicates some sort of interaction, or regulation, or quantitative influence that one protein has on another protein? So in alternative diagrams, nodes could be small molecules, and the edges, the links between the nodes, could be an enzymatic reaction catalyzed by a protein. There are about 500 biological databases that we'll talk about in the database talk. How the data and models were entered into these databases is a huge issue. Many of them have been done very casually. For DNA sequencing and crystallography, I think the process by which you go from the raw data to the models is very well understood and very well communicated. For this sort of thing, it will take this whole course for us even to scratch the surface. Here's another example-- that one was protein-protein interactions. This is an example where the nodes now are not proteins but small molecules. And they're connected by an enzymatic pathway. This is another example of an application of ordinary differential equations, just like the one last class, where we had exponential growth.
Here, you have simple fluxes, where a catalytic reaction occurs, not autocatalytic, but catalytic. There's no exponential growth occurring in this cell; it doesn't have any biopolymer synthesis in it. But these catalytic reactions form this network, and you can model the influx of fresh molecules, their utilization within the cell, and the efflux. We'll come back to that. Inside there are a set of kinetic equations. We need to figure out how to get from the raw data types that I've shown you to this kind of equation. This will be one of the goals of the course. Here, you have a velocity on the far left-hand side of the top equation, which is related to a maximum velocity in the numerator, and then a series of linear sums and quotients. Now, some of the terms will be nonlinear. Here's an exponent of 4 that enters in, because you have one of these cooperativities that gives you that kind of sigmoidal curve that we showed for transistors and that will enter into a number of biological consequences, where the steepness of that sigmoidal curve is determined by this exponent, sometimes called a Hill coefficient. But other than that, you'll get these simple, linear sums and quotients. And we'll come back to that. What actually constitutes these networks? I want you to feel less limited than you might get in a simple textbook. In a simple textbook definition of an enzyme-catalyzed process, you might have A as a substrate that turns into B as a product. This is a process where A could go to B spontaneously, but in the presence of enzyme, it goes faster. Or it could be that, for all intents and purposes, A never turns into B. It's so slow that you need this enzyme here to even detect it. The enzyme will form a complex with A. This could be a non-covalent complex or a covalent one. It then produces a covalent change in A. And it becomes an enzyme-bound B. B is released. Enzyme E is regenerated.
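The sigmoidal rate law with a Hill coefficient can be written out directly. A minimal sketch (the parameter values here are illustrative, not taken from the slide):

```python
def hill_velocity(s, vmax=1.0, k=1.0, n=4):
    """Sigmoidal rate law: v = vmax * s**n / (k**n + s**n).
    The exponent n is the Hill coefficient; n=1 recovers the hyperbolic
    Michaelis-Menten form, and larger n steepens the curve."""
    return vmax * s ** n / (k ** n + s ** n)

# Compare the response just below and just above the half-saturation point:
for n in (1, 4):
    print(f"n={n}: v(0.5)={hill_velocity(0.5, n=n):.3f}, "
          f"v(1.5)={hill_velocity(1.5, n=n):.3f}")
```

With n=4 the velocity swings from about 0.06 to 0.84 of maximum over a 3-fold change in substrate, while n=1 gives a much shallower 0.33 to 0.60, which is the transistor-like switching behavior mentioned above.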
And so in a certain sense, in this process of turning A to B, E is not consumed. But let's think about an increasingly important class of biochemistry, such as signal transduction, where the enzyme now has a new role. It changes places with the substrate. It becomes a substrate. The E now is a substrate in which a small molecule, ATP, which might have been the A up here, combines with the E. And the E could either catalyze its own phosphorylation or do so in the context of another enzyme, but in any case, it becomes covalently modified to produce a phosphorylated enzyme, a phosphorylated protein. And then the ATP is regenerated by a simple enzymatic process. And so in a certain sense, formally, it's very similar to this process, except you now flip the enzyme and the substrate. ATP is not consumed, the small molecule is not consumed. The enzyme is consumed. So think of these things, these networks, as symmetrically as you can. Try not to get too embedded in the names-- this is an enzyme, this is a substrate-- and think more about the concepts. The concept here is that some things are consumed and some things are catalytic and regenerated. So again, we are going to integrate these metabolic processes we were talking about in the last couple of slides with the information flow, which was the topic of the central dogma, in order to get functional genomics, which measures those information molecules mostly and produces quantitative modeling. You need to have the qualitative models to know what's connected to what. You need to have the raw data as illustrated in slide 41. Again, to remind you of the sources of quantitative data here: you can measure RNA, or proteins, or peptides in the mass spectrometry, and RNA in the arrays connected to the DNA provided by the DNA sequencing. I warned you that one of the gene ontology sources of data was electronic sequence annotation by sequence similarity.
I want to elaborate on this warning with this slide, where we say we have various justifications for looking for distant homologs, examples of gene products which are related, on that ultimate pedigree tree of life, by very long distances. It's been a long time since those things were present as a common ancestor. And we want to find those because they help us limit the number of hypotheses that we need to test whenever we find a new molecule. If we can connect it to another molecule, however distant, then we feel that we don't have to test every possible hypothesis. We just have to test that little narrow one. But what happens when we do that? Let's say we have some distant homology, where we have, say, 20% amino acid identity. You line up the sequences by methods that we'll talk about later. And having 20% of the positions the same, or even less, can sometimes be meaningful. But how good is that? There's going to be some kind of curve that relates how close two proteins are with the probability that they will have the same biochemical, or cell biological, or genetic function. And here are some worst case scenarios. And I don't mean to represent these as typical, but they get you doubting again so that you don't trust anything. 100% sequence identity. This should be a best case scenario, but it's not. The amylase enzyme, which catalyzes carbon metabolism in most cells, when it's expressed at high levels in a vertebrate like our friend this marine turtle, turns into the major eye lens protein. And actually, this is true of most vertebrates. They have some kind of enzyme, like a glycolytic enzyme, which is overproduced and aggregates and makes a clear lens, a morphologically interesting feature which just focuses light. A completely new function by all those definitions of function: it no longer does the enzymatic activity, it does an optical activity instead. Another example, where we have 100% sequence identity.
Not some really distant homolog like 20% or 10%, but 100% sequence identity. Thioredoxin, which is involved in redox reactions involving [INAUDIBLE] and other things. In the right context with other proteins, it can now be part of a DNA polymerase: when the polymerase globs onto the DNA, it goes along really without stopping with thioredoxin, but it falls off if thioredoxin isn't around. That's not a redox function; it's a completely different biochemical function. But like I say, there will be a curve. Sometimes there will be very great hypothesis limitation that can come from very distant relatives. These are more examples of the quantitative data that we will use to get hints at relationships among genes that go up and down together. They form the basis of asking what is function not based just on sequence homology, but based on a variety of quantitative data, such as the RNA data and the microarrays. Here are three more ways of looking at how we define functions. Function definition number one is the effects of mutation on fitness. This is, in a certain sense, what the organism cares about in the function of a gene product. It's how many grandchildren am I going to have? That's what it cares about. And that's what shaped the function over time. And so if we're going to understand any of our other definitions of function, we have to at least give some attention to what shaped it over billions of years and over many different environments. We need to have some feeling for the ecology of these organisms. The second definition is the more commonly used one, which is what is actually its function in a machine-like sense. In the cogs, in the wheels, how does it function structurally? What's the three-dimensional structure? What's the mechanism? The third function is more forward-looking: not what good has it been to organisms in the past, but what good can it be to us in the future, or to other organisms in the future?
This may not involve reproducing the organism, making copies of it. It could be that there's some other engineering goal or objective function. When we say that we've proven something-- we've proven a biological hypothesis-- what we mean is, given the assumptions, a statistical statement that the odds of the hypothesis being wrong are less than 5%, keeping in mind hidden hypotheses and multiple hypotheses. In genomics, it's all too easy to collect a lot of data, and therefore, when you mine the data, you can make a lot of hypotheses. And when you test them, you will find thousands of things which by themselves would be significant at the 5% level, the standard statistical test, but you've got to correct for the number of hypotheses you implicitly or explicitly test. We'll mention this time and again in specific cases as we go forward. The systems biology manifesto that I mentioned earlier had this little loop where you would generate perturbations and test things and so forth. But an alternative way, rather than doing additional experiments, is, if you have really bought in fully to systems biology and you really have all the components and systematic perturbations, then you might be able to test the hypotheses generated by data mining one data set by going into another data set. You need to ensure that they are independent. And you need to ensure that the hypothesis itself came from the first data set and not the second when you go out and test it. But that would be a pure data mining, systems biology loop. Now, just like when we say we have a proof, you should be distrusting of anybody that says I have an absolute proof. What they really mean is a statistical statement. So too, when someone refers to the quality of their data, saying this is the answer at the raw data level, what they really mean is that they have some error level that they can quantitate.
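The multiple-hypothesis problem above can be made concrete with the simplest correction, the Bonferroni adjustment (one of several standard choices; the lecture does not commit to a particular method):

```python
def bonferroni_threshold(alpha, m):
    """Per-test p-value cutoff that keeps the family-wise error rate
    at or below alpha when m hypotheses are tested."""
    return alpha / m

def expected_false_positives(alpha, m):
    """Expected number of chance 'hits' at raw threshold alpha
    when all m null hypotheses are true."""
    return alpha * m

# Mining 10,000 genes at a raw 5% cutoff gives ~500 hits by chance alone,
# while the corrected per-test threshold shrinks to 5e-6:
print(expected_false_positives(0.05, 10_000))
print(bonferroni_threshold(0.05, 10_000))
```

This is why a genome-scale screen that reports "significant at the 5% level" without any correction should trigger exactly the distrust recommended above.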
And you should be especially distrustful if someone doesn't attempt to give you any feeling for that. Not to say that everybody that gives error bars or error estimates is to be trusted, but you get the idea. So for DNA sequencing, there's a standard of practice. It was not always such, but it came out of a meeting in Bermuda-- it's called the Bermuda standard; that is the best place to establish standards-- and it is 99.99% accuracy. You can see they have very high standards in Bermuda. But that's across the Genome Project. These are aspects that, I think, we got from genomics: in addition to the raw data, we've got kind of an attitude. The attitude is, we can start looking at whole systems again, less on the individual gene-hypothesis-driven standard NIH grant proposals that predated the Genome Project. Now you can do less hypothesis-driven work, you can do data mining, and so on. We've also inherited the concepts of automation, modeling, and completion. Completion is something which still is not reduced to practice for functional genomics, but it has been reduced to practice for sequencing. And there is hope that we can approach it for functional genomics. Be careful using the word impossible. There certainly are things that appear not to be cost effective at any given moment, but technology is moving quickly enough. Remember those greater-than-exponential curves in the last lecture. There are technologies arriving that make things suddenly become cost effective. And that's a particularly important warning when you're designing a computational method that will compete with an experimental method: if the experimental method suddenly becomes cost effective, then you need to revise your computational goals. We have types of mutations that we've talked about. We have a null mutation, for example in phenylketonuria, which is tested for in almost all newborns that are born in the United States, and certainly Massachusetts.
This is a very serious source of mental retardation, completely wiping out that gene. Small dosage effects, like the 1.5-fold effect that we talked about in trisomies, like Down syndrome, are important. You have conditional mutants. Classically, temperature sensitivity of a mutation, meaning the protein unfolds. Or, more recently, enthusiasm for chemicals-- a mutation which depends upon a chemical for producing its phenotype. You can not only have these things that affect dosage or condition or complete knockout, you can have a new function obtained by changing the ligand specificity or changing the aggregation of a protein. Here in the background is how a change in the hemoglobin, which normally transports oxygen, can change the morphology of a cell and hence the function in transporting oxygen. I just want to end on two slides on how you can represent the competition among cells or among organisms, which represents the Darwinian function, function number one, a few slides back. Here you have mutants in a population. Selection acts on populations, and mutations are tagged, by definition, by their nucleic acid. You can use the tags if you make a pool of such mutants, or these can be a naturally occurring population. And when these pools are subjected to selections-- natural or in the laboratory, complex or simple-- you can now read out these tags in many of the quantitative ways we talked about, for instance mass spectrometry, arrays, and so on. And as you go through more rounds of selection, you'll eventually pick the winner, which is the most highly selected of the mutations. Or you might have a mixture if you go through a very limited number of rounds. This will follow the exponential curve that we had here, whether it's exponential decay or exponential growth. You can have a very subtle difference in growth, due to the function of that mutated gene product, but that small difference, say 1%, turns into complete all-or-none replacement if you have enough generations.
This is the awesome power of the exponential that we talked about last time. And in the real world, and also in the laboratory, you can think of this as going over a variety of environments, E, over different times. So the time you spend in each of these different environments has some duration associated with it. In a natural environment, you'll spend, say, more time in one condition than another. And the selection coefficients are a simple sum, and this exponential gives you the ratio of the organisms. Here are some references on this. And I urge you to take a look at these, where actual experiments have been done along these lines. And we'll come back to this later in the course. So this is the end of this lecture number two. Thank you very much.
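The all-or-none replacement from a small growth difference can be sketched with the exponential ratio described above, where the ratio of the two strains grows as exp(s·t) for selection coefficient s over t generations (the starting 50:50 mix is an illustrative assumption):

```python
import math

def mutant_fraction(s, t, f0=0.5):
    """Fraction of a two-strain mixture held by the fitter strain after
    t generations. The strain ratio grows as exp(s * t), where s is the
    selection coefficient (a 1% growth advantage gives s = 0.01)."""
    ratio = (f0 / (1 - f0)) * math.exp(s * t)
    return ratio / (1 + ratio)

# A 1% advantage is barely detectable early but approaches fixation later:
for t in (10, 100, 1000):
    print(t, round(mutant_fraction(0.01, t), 4))
```

Summing s_i·t_i over several environments, as in the lecture's formula, just replaces the single exponent s·t with that sum.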
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 8A_Protein_2_Mass_Spectrometry_Postsynthetic_Modifications_Quantitation_of_Protein.txt | The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: OK. Welcome to the second proteins lecture. Just as with the RNA analyses, where we started with a brief discussion of RNA structure and then moved on to RNA quantitation, in this case we spent a bit more time on protein structure, and we'll spend a little less time on protein quantitation, which will be the main topic today. That's partly because the RNA and protein quantitation have many themes in common, so we've covered some of those. And we'll talk about how one can integrate protein quantitation with RNA quantitation and with metabolism. So last time we dealt mainly with the structure and interaction of proteins with small molecules at the structural level. And this time we'll be building up towards quantitating proteins and their interactions with other proteins and small molecules, so that we can address that as the first step towards the network analysis that will be the subject of the last three lectures of the course. So we'll get a hint of that at the end of today's talk. So at the beginning of the course, I had one slide talking about how purification was a revolution in many different fields-- one of them resulting in recombinant DNA and the Genome Project, and so on. Today we're going to talk more about purification from the standpoint of what it tells us about the properties of the molecule itself, as opposed to just purifying for its own sake. How can we go in and data mine and get the maximum value out of purification? The first thing that we expect from purification is to reduce some source of noise.
That is to say, in the process of identification or quantitation of proteins or their components, we want to remove sources of noise-- the major source of noise would be environmental contamination, or more often contamination with other bona fide members of a mixture, other proteins that may be very abundant, or breakdown products, or related products. And we want to separate it into enough different components so that the major ones are off in their own little bin and don't interfere too much. The second reason why we might want to purify is to prepare materials for in vitro experiments. While we're studying networks in the next three classes, we'll find that the real problem is that they're complicated enough that you need some way of isolating one network component from all the possible interactions. And one way to do that is to purify that network component, or a number of network components, out and make a subnetwork completely artificially in vitro. So that's another use of purification. And finally-- and this is the main theme for the next couple of slides-- to discover biochemical properties of that component itself. So what can you glean from the purification process? This requires careful use of purification. Now, many of these methods were developed because they work well, but now we go back and we look at which ones can give us information about the biology and the chemistry of the system. And so we have the charge of a molecule, which we talked about last time as being a very important, relatively long-range 1/r interaction. And here are two methods I mentioned, one involving an electric field, isoelectric focusing, which determines the pH at which the charge is neutral and there's no net movement. In ion exchange chromatography, we have a mobile phase and a solid phase. And charge enters prominently into that.
Size is something that will be a recurring theme tonight, where we'll be talking about the mass of protein complexes, the mass of individual protein subunits, and the mass of peptides cleaved out of those basic subunits. And you can see there are quite a number of different methods. Sedimentation velocity will be one that we will use-- electrophoresis, and so on. Solubility and hydrophobicity I'll lump together here as properties, kind of bulk properties, of the amino acid composition of peptides and proteins. And they refer to their affinity for hydrophobic solid phases or hydrophobic mobile phases, solvents, and so on. The biological significance of hydrophobicity might be the affinity for lipid bilayers or affinity for other hydrophobic patches of other proteins. Now, the size tells us something about the stoichiometry of protein-protein interactions when we're looking at native measures of the size of complexes. And other indications of specific binding, whether it's between proteins or a protein and a small molecule, can be detected by affinity chromatography or a related method such as immunoprecipitation, where you'll have one ligand, like an antibody, which is specific for a particular protein epitope and will pull down all the proteins that are associated with that epitope, that surface property. And another sedimentation method-- now, the first sedimentation method was a velocity method, a kinetic method, where, once you take into account buoyant density, all other things being equal, the largest particles will sediment most quickly. If you set it up so that things are nearly equally buoyant and you build up a density gradient by centrifugal field, then you get particles separating by their properties, which can include the binding of metal ions, which greatly affect the density of nucleic acids, for example, and hence nucleic acid protein complexes. Now, this is a particularly awesome pair.
And historically, it figures prominently into proteomics. It's still quite viable. And they illustrate some important points. When we talk about the mass of a native protein, or of a protein that's been denatured by a detergent micelle such as Sodium Dodecyl Sulfate, SDS, its mass can be resolved by polyacrylamide gel electrophoresis, where the gel causes sieving. And this detergent micelle size itself is dependent upon the size of the unfolded or partially unfolded protein chain. And so you actually get a fairly good calibratable plot of the mass of the protein versus the mobility in an electric field when the protein embeds itself in this detergent micelle. So we have this observation of the potent ability of this detergent to denature proteins and then to resolve them based on mass; with the small exceptions of very hydrophobic proteins or very carbohydrate-rich proteins, most other proteins will have a very nice calibratable relationship. Similarly, the charge on a protein approaches 0 as the pH gets to the point where it's titrating out all the titratable groups. And this can be calculated. And the resolution of this is about 1 part in 100 or better. So both of these together are two very high resolution methods. And you can think of trying to divide-- you have a complex protein mixture. You want to divide it up into a lot of separate bins, maybe 100 in each dimension. And if you combine two dimensions, as you often do here, where you'll run first the isoelectric dimension and then second the SDS dimension, now you've got 100 bins in one dimension, and each of those bins turns into 100 bins in the second dimension. So theoretically, you've got on the order of 10 to the fourth bins.
And if some of the rare proteins you want to analyze are in one of those bins, you might have gotten as much as a 10 to the fourth-fold enrichment for those rare proteins away from the more common ones, which are all too easy to stumble upon. Now, before we get to an actual example of a two-dimensional gel, I want to motivate this by the computational components to it and the computational biology that you can get from these multi-dimensional separations. And this comes from our recurring interest in comparison: whenever we can calculate a property of a system, or in this case of a protein, we do so, and we compare it with the observations. And some of the properties here are the localization in the cell, which essentially is an association of the protein as it's made, the post-synthetic modifications to the proteins, such as proteolysis and phosphorylation, and so on, its charge, and its mass. And here is shown a plot of calculated charge, or isoelectric point in pH units, p being the negative logarithm of the hydrogen ion concentration. So it's calculated on the horizontal axis and observed on the vertical axis. And what we're seeing here is that if there were a perfect x equals y relationship, where calculated and observed were the same, then all the dots would lie on this line. As for the outliers, initially they were such things as frame shifts of the DNA sequence, producing a wrong calculated protein. Once those were corrected, then there were observed proteolytic cleavages, which could be mapped down to the exact amino acid, and then those corrections moved a few more onto the line. And the remainders were other post-synthetic modifications, such as phosphorylation. So here's a reason to embrace your outliers. Each one of these things is an exciting story, where you either have a correction to a previous data type or a new discovery of a post-synthetic modification.
Now, how do we actually calculate all these facts about protein properties? Some of these have biological significance, which we've listed. Where it is in the cell obviously matters to its carrying out its function. How big it is determines what other proteins are associated, and so on. So how do we calculate this? The protein charge, which was in the previous slide, is a simple linear function where you sum up the charge contributions, set by the pKas, of each of the individual amino acids. Now, in pKa, again, p means negative logarithm, and the Ka means the equilibrium association constant of the proton with each of these chargeable residues. So depending on the pH, there is a wide variety of nearly physiological pHs where [INAUDIBLE] and histidines will be positively charged, the blues; and the red ones-- tyrosine and cysteine, and especially aspartate and glutamate-- can be negatively charged in the range of pHs that we saw in the previous slide. And so this is calculated as that simple sum. And protein mass is calibratable with knowns, even if you have a very complex empirical relationship, such as that detergent-binding SDS gel electrophoresis. It sounds very-- too many moving parts to be completely theoretical. But if you calibrate it with good known proteins, or protein complexes if you're doing a native electrophoresis or native sedimentation velocity, then you can get a curve where you can interpolate and find masses quite accurately, or at least to about 2%. But mass spectrometry is commonly applied to peptide masses, sometimes whole protein masses. And here, assuming the mass spectrometer is properly calibrated and so forth, this is a simple isotope sum. And this can be carried out sometimes to four or six significant figures. And it really is a simple sum of the isotopes measured by physics. And this can include post-synthetic modifications. As you're getting down to peptides, the post-synthetic modification becomes a much larger fractional effect on the measures that you're making.
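The "simple linear sum" for protein charge can be sketched directly. This is a minimal version using approximate textbook side-chain pKa values; real values shift with the local protein environment, and the toy peptide is purely illustrative:

```python
# Approximate side-chain pKa values; protonated positives contribute +1,
# deprotonated negatives contribute -1.
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0, "Nterm": 9.0}
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "Cterm": 2.0}

def net_charge(seq, pH):
    """Sum fractional charges from the Henderson-Hasselbalch relation."""
    q = 0.0
    for g in list(seq) + ["Nterm", "Cterm"]:
        if g in PKA_POS:
            q += 1.0 / (1.0 + 10 ** (pH - PKA_POS[g]))  # fraction protonated
        elif g in PKA_NEG:
            q -= 1.0 / (1.0 + 10 ** (PKA_NEG[g] - pH))  # fraction deprotonated
    return q

def isoelectric_point(seq):
    """Bisect for the pH where the net charge crosses zero (the pI)."""
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(isoelectric_point("DDKK"), 2))  # a near-neutral pI for this toy peptide
```

Since net charge decreases monotonically with pH, the bisection is guaranteed to converge on the single zero crossing, which is the quantity isoelectric focusing measures.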
Not only the mass, but the liquid chromatography properties of either a protein or a peptide can be calculated. Here you take amino acid composition and do linear regression on a calibration set. And you can get precision on the order of 5% or better. As a subset of localization, we have motifs. Sometimes hydrophobicity is a part of those motifs in their description. Expression would not be something-- we've seen the motifs that are involved in regulation of transcription and so forth. But the kind of shortcut that might take you directly there in certain cases is the codon adaptation index. This is something where the hypothesis is that you can go directly from the nucleotide sequence of a protein coding region, and if it uses codons that are very abundant, that correspond to very abundant transfer RNAs, then that's saying that the evolutionary pressures producing that particular choice of codons reveal that that protein is going to be high abundance. So in a way, this is a way of going there directly, due to this observation and somewhat logical expectation. So now we have all sorts of separation methods and motivation for studying them, more than just using them. But now we want to look at a particular case of separation. We're talking about complexes and protein localization. So this here is another example of a two-dimensional gel, where the isoelectric point is on the x-axis here and the vertical axis is this estimate of molecular mass, provided by the association of the protein with the SDS micelles and the effect on gel electrophoresis. And this is a 2D gel, a small section of it, not the full pH range nor the full molecular weight range, but this has a good fraction of the proteins that are secreted. Now, to what extent can we calculate this? We've shown a number of calculations and observations so far.
And we pointed out in the last class that to some extent the transmembrane regions of a protein could be predicted-- this is one of the better algorithms in protein sequence gazing. And taking that one step further, you can actually say, OK, we know that this might have a motif or two that interact with membranes, or somehow are part of the process by which proteins are targeted and move across membranes, so that you can divide it into several different subcellular localizations-- in eukaryotes, the mitochondrial localization, and chloroplasts in plants, secreted proteins that go all the way across the plasma membrane, and other locations, such as within the membrane. And this can have an 85% success rate, meaning a 15% false-negative rate. And you can see modest false positives here, too, over-predictions in 295 transmembranes. So let's return to mass, and this time in the context of mass spectrometry. What's our starting point? Well, if you look on the far left side of slide 11, you'll see the simplest of the atoms, hydrogen, and you would expect this to have an atomic mass of 1. Well, since carbon-12 is assumed to be precisely 12, it turns out that hydrogen is not precisely 1 when you actually measure it. And it's not even good enough for government standards. When you actually, say, add a CH2, it adds up to about 14. And that is discriminatable from a nitrogen-14 with a good enough mass spectrometer. As it turns out, for most biochemical protein analyses, you don't depend on the sixth decimal point. You certainly don't depend on having 10 to the minus three atomic mass units as your resolution. But you do depend, in a kilodalton-size peptide, on being able to get 1 part in 10 to the fourth, 1 atomic mass unit.
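The "simple isotope sum" can be made concrete. A minimal sketch using the standard monoisotopic masses of the most abundant isotopes, showing why CH2 and N-14 are distinguishable at this accuracy:

```python
# Monoisotopic masses (Da) of the most abundant isotopes. C-12 is exactly 12
# by definition; hydrogen is not exactly 1, which is why a CH2 group is
# resolvable from N-14 given roughly 1-part-in-10^4 mass accuracy.
ISOTOPE_MASS = {"H": 1.0078250319, "C": 12.0, "N": 14.0030740052,
                "O": 15.9949146221, "S": 31.97207069}

def monoisotopic_mass(formula):
    """Simple isotope sum over a molecular formula given as element counts."""
    return sum(ISOTOPE_MASS[el] * n for el, n in formula.items())

ch2 = monoisotopic_mass({"C": 1, "H": 2})
n14 = ISOTOPE_MASS["N"]
print(f"CH2 = {ch2:.5f}, N = {n14:.5f}, difference = {ch2 - n14:.5f}")
```

Both species have nominal mass 14, but the exact masses differ by about 0.0126 Da, which a good mass spectrometer separates easily; the same summation extended over a whole peptide formula gives the calculated masses compared with observation above.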
Another big consideration here is that not only are these things not exactly integers, but in natural abundance they're a mixture of the major isotope on the far left and the second and sometimes third and fourth stable isotopes, usually non-radioactive isotopes, which are present in nature. And the most abundant in this particular list is C13. And it's most abundant in two senses. One is that it has the highest fractional abundance of any of these elements. And secondly, carbon itself is very common in peptides. If you cleave your protein up with trypsin, which cleaves C-terminal to lysines and arginines, you'll get on the order of 10 or 20 of these peptides per protein. And they might be 10 amino acids long. And so they might have on the order of 40 carbons in them. So now the fractional abundance of C13 with 40 carbons is getting close to unity. And we'll see an example of exactly how this plays out in terms of the multiple combinations of isotopes that can occur. Sulfur, on the other hand, has more stable isotopes. It has four different stable isotopes, but each one is a fairly small fraction. And the probability of having a sulfur in a given peptide is low. I mean, that's the probability of having one sulfur. The probability of having 40 sulfurs is vanishingly small. So we've gone through calculating all this charge and mass. Now we're going to do liquid chromatography, in particular hydrophobic measures. And this all falls under the heading of high-performance liquid chromatography, which is achieved under high pressure, typically, to get it to go rapidly. And so you'll digest your protein with trypsin. You get a series of these peptides, say 10 amino acids or so. And then they're injected in a liquid phase. They bind to the solid phase by their hydrophobic properties. And then you'll have a readout where, as a function of time, you get abundance, where the peaks can be measured by mass or ion counting and so on.
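Both points above, the tryptic cleavage rule and the isotope combinatorics, can be sketched in a few lines. The example protein sequence is made up, and the digest ignores trypsin's real-world exception of not cutting before proline:

```python
import re

def tryptic_peptides(protein):
    """In-silico trypsin digest: cleave C-terminal to K or R
    (ignoring the no-cut-before-proline exception for simplicity)."""
    return [p for p in re.split(r"(?<=[KR])", protein) if p]

def prob_at_least_one_c13(n_carbons, c13_abundance=0.0107):
    """Chance that a peptide with n carbons carries at least one C-13."""
    return 1.0 - (1.0 - c13_abundance) ** n_carbons

print(tryptic_peptides("MAGKLVRTTS"))       # ['MAGK', 'LVR', 'TTS']
print(round(prob_at_least_one_c13(40), 2))  # 0.35
```

So roughly a third of 40-carbon peptide molecules carry at least one C-13, which is why the observed spectrum is a cluster of isotope peaks rather than a single line.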
And these can be collected or simply run into a mass spectrometer. There are going to be two phases -- the mobile phase and the solid phase. Talking about the mobile phase first: the hydrophobic tendency of any given peptide is going to be related to the sum of the contributions of its individual amino acid components. And you can either have an isocratic elution, where you have basically constant migration speed and no change in the content of the mobile phase, or you can have the mobile phase change its composition -- say, from something that's almost entirely water at the beginning to something that has a high organic content, up to 40% or so acetonitrile -- so that at the end of the gradient even the most hydrophobic peptides now have just as much affinity for the mobile phase as for the solid phase, and they come flowing off. The solid phase has a number of different options. The main one we've been talking about, implicitly at least, is reverse phase, where you have hydrophobic carbon-18 alkyl chains immobilized on a highly porous medium in a column that can withstand high pressure. Or it can have differing polarities, size exclusion -- the same sorts of things that we were talking about with electrophoresis, except now the force is not an electric field but simply the pressure differential between the injection port and the output. So here is a specific worked-out example of how you calculate the affinity of a peptide for a hydrophobic column. This is a C18 column, referring to the number of carbons in the alkyl chain. So you can see that this is like a kind of lipid-type phase -- it's the sort of thing you might see in the middle of a lipid bilayer membrane. And in this plot, you have relative retention time along the vertical axis, and the residue number refers to having short peptides cleaved, sort of walking along the protein.
These have been synthesized so they march along the protein, very analogous to how, in an earlier class, Rosetta made synthetic oligonucleotides marching along the human genome. And this was done in order to calibrate -- to see how the amino acid composition of a peptide might affect its mobility and allow you to calculate it. And so what you end up with here is a somewhat intuitive way of summing the contributions of each of these amino acids. Now, remember, this is done under fairly acidic conditions, so the normally charged acidic groups will be protonated and neutral. And so what we'll find is a spectrum: the very slowest to be eluted -- they require the most organic solvent, and hence have the longest retention times -- are the highly hydrophobic aromatics, tryptophan and phenylalanine. Each amino acid has a coefficient, which you obtain by linear regression over each of these peptides. You know the sequence of these peptides, you calculate their composition vector, and then you relate it to the retention time that's observed, so that the lower plot is the observed one. And then after you do the regression, you get a set of these coefficients, and you can now add them up to make this calculated plot. And you can see that there's very good correlation up and down. It might have been better if these authors had done this as a plot of calculated versus observed, as we did in the previous one, and then shown the scatter and done a regression curve. But you get the idea. And so the most hydrophobic ones are at the top, and under these acidic conditions the most highly charged -- the positively charged ones, the lysine, the histidine, and arginine -- are down at the bottom, and the acidic ones, since they're close to neutral, sit near zero. And there's a slight effect as to whether the amino acid is at the N-terminus or not. So now we've calculated the reverse-phase behavior, here on the far left-hand side of slide 17.
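The additive model described here is easy to sketch. Everything below is illustrative: the per-residue retention coefficients and the intercept are made-up stand-ins, not the published regression values from the study the lecture describes.

```python
# Illustrative retention coefficients (in minutes): positive = more
# hydrophobic, hence longer retention on a C18 column under acidic
# conditions. These numbers are hypothetical stand-ins, not real data.
RC = {
    'W': 11.0, 'F': 10.5, 'L': 8.1, 'I': 7.4, 'M': 5.5, 'V': 5.0,
    'Y': 4.0, 'A': 2.0, 'T': 0.6, 'P': 2.1, 'C': 2.6, 'E': 1.1,
    'D': 0.2, 'Q': 0.0, 'S': -0.2, 'G': -0.2, 'N': -0.6, 'R': -0.6,
    'H': -2.1, 'K': -2.1,
}
INTERCEPT = 2.0  # column/gradient-specific constant (also hypothetical)

def predicted_retention(seq):
    """Additive model: retention ~ intercept + sum of per-residue coefficients."""
    return INTERCEPT + sum(RC[aa] for aa in seq)

hydrophobic = predicted_retention('WFLLIV')   # aromatic/aliphatic peptide
basic = predicted_retention('KRHKGS')         # positively charged peptide
```

In a real calibration, the coefficients would come from a linear regression of observed retention times against peptide composition vectors, exactly as the lecture describes.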
And this is separated by hydrophobicity, and then we have mass, which we've outlined how you can calculate. So now you can calculate both its hydrophobicity, on the vertical axis, and mass, on the horizontal, and you can make, in the computer, this two-dimensional plot. In this case, this is actually observed data, but you could also superimpose the calculated values on it. These are slightly streakier in the reverse-phase retention time, rt, just because of the scale that one uses, or the properties of the separation method. Now, if you take one of these little two-dimensional spots and zoom in on it and look at it in greater detail, you see that it actually is a complex set of peaks in the mass direction, and fairly simple in the retention time. Now, you could say, well, maybe these are all different peptides. And in a certain sense they are -- but they're trivial relatives of one another, and this is where you can data mine and get additional information. Each of these is a separate isotope peak. Remember, this peptide might have 40 carbons in it, and so it's going to be a binomial distribution, where the leftmost peak is the case where you have zero carbon-13's -- in other words, it's all carbon-12's. The next peak over to the right is going to have one carbon-13; the next peak over is going to have two carbon-13's and n minus 2 carbon-12's, where n is the total number of carbons in that peptide, the rest being carbon-12, the most abundant isotope; and so on. And you'd get every possible combination in a binomial distribution, just as you would expect. Now, what does this say? It tells you at least two things. One is that the distance between these peaks, you can see here, is half an atomic mass unit. Well, how do we get half an atomic mass unit? I mean, we know they're not perfect integers, but this is way off. And the reason is that what's actually measured is the mass over charge.
You're not measuring the mass -- you're measuring mass over charge. And so this is saying this particular peptide has a +2 charge state. That's an important fact. It's going to be hard to interpret its mass if you don't know its charge, because it's m/z that's measured. The other thing you get is that from the exact binomial distribution you can estimate the number of carbons in the peptide, because if there are a huge number of carbons, then it will turn out that one of the secondary peaks -- one of the rightward peaks -- will actually be the most abundant one. If it has a small number of carbons, then the zero-carbon-13 peak will be the all-around winner. And so from the relative heights of the 0, 1, and 2 peaks, you can estimate the number of carbons. So, two facts you can get from this high-resolution view of that peak. So now we've got these two phases, these two dimensions, pretty well in hand -- the reverse phase and the mass. Now we're going to add another dimension -- actually, a couple more dimensions. One is another peptide dimension, which is Strong Cation Exchange, SCX. What this means is that the peptides will have different cationic properties -- different charges -- and they will bind to different extents to this phase. And you can put these in tandem, either physically or conceptually: you'll run one, take a bunch of fractions, and then put them on the reverse phase. The reverse phase is literally physically connected to the mass spectrometer. And then we'll talk about some upstream separation methods: before you fragment the protein into peptides, you can separate at the protein level, and those dimensions can tell you about the complexes. Let's go through a specific example of complexes. Take the entire yeast proteome -- grind up yeast, throw it on a first separation, which is sedimentation velocity. Remember, there's equilibrium and velocity; velocity is mainly responsive to the size of the complex. This is a native dimension: we're not denaturing with SDS, it's native.
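Both inferences -- the charge state from the isotope spacing, and the carbon count from the relative peak heights -- are simple enough to sketch. This is my own illustration of the arithmetic, not code from the lecture; the 0.43 peak-height ratio is an invented example value.

```python
# Adjacent isotope peaks differ by the C13-C12 mass difference (~1.00336 Da)
# divided by the charge z, so the observed m/z spacing reveals z.
C13_DELTA = 1.00336
P_C13 = 0.0107   # assumed natural abundance of carbon-13

def charge_from_spacing(mz_spacing):
    """Round (C13 - C12 mass difference) / spacing to the nearest integer charge."""
    return round(C13_DELTA / mz_spacing)

def carbons_from_ratio(i1_over_i0, p=P_C13):
    """For a binomial isotope pattern, I1/I0 = n*p/(1-p); invert to estimate n."""
    return i1_over_i0 * (1 - p) / p

z = charge_from_spacing(0.50)         # half-amu spacing -> +2 charge state
n_carbons = carbons_from_ratio(0.43)  # example height ratio -> ~40 carbons
```

So a half-amu spacing immediately says +2, and a first-isotope peak at 43% of the monoisotopic peak's height points to roughly 40 carbons, consistent with the lecture's 10-residue tryptic peptide.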
And so you'll see that the things that go absolutely the fastest in the cell, and are among the most abundant, are the ribosomes and the ribosomal subunits. This is done under conditions where you just tease apart the two ribosomal subunits so that they're separated. And the bigger of the two is called 60S. The S refers to Svedberg, who was one of the pioneers of sedimentation velocity. And the rate at which a complex goes down this stabilized gradient in a centrifugal field is related to the size of the complex. So the first dimension is horizontal -- the very top axis going horizontally is sedimentation. And then going down is SDS gel electrophoresis. So now you're taking these native complexes and breaking them up into their component proteins, where high molecular weight is up and low molecular weight is down. And you can see that there are quite a number of proteins in each of the two subunits, the 60S and the 40S. So that's the second dimension. Now, you cleave each protein into peptides, and you analyze the peptides in all three peptide dimensions -- strong cation exchange, retention time, and mass. And then you add a fourth one, which we'll develop a little later, where you actually break up the peptides into little pieces. So you identify which protein each of the peptides came from. Now, unfortunately, not all peptides are equal in their ionization efficiency, and so if it ionizes poorly, you won't detect a particular peptide from a protein. If you detect a large number of peptides from a protein, that probably means that it's abundant in your sample or your fraction, and it also means that you can believe that the computer identification of that protein is probably pretty solid. So if you get five or more peptides from a particular protein in your database search -- and we'll talk in just a moment about how you do that database search -- then you believe it; it's very solid.
And so you can see, as you look at essentially the protein fingerprint for each of the 60S fractions in the sedimentation, they look pretty similar. And when you run the mass spec, you get similar sets of peptide signatures. And most of them, especially the most abundant ones, correspond to the known 60S proteins. If you go a little bit slower -- less mass -- to the 40S subunits, and you analyze the subsequences of the mass-spec signatures for those peptides, then they mainly turn up 40S. There are some exceptions in both. There are some other categories, which may be interesting. The most interesting one the authors highlighted in this particular study was [? Weimer ?] 116p. Remember, this is a very mature field, and this is a very recent study. Ribosomes were very well characterized, and I think we had the conceit that, at least in microbial systems such as E. coli and yeast, we really understood all the proteins that were required to make a ribosome hum. But here was a new ribosomal protein, which has since been confirmed to be a bona fide part of the 40S subunit. You can see here it had many peptide hits, so it was equal in abundance to the other 40S subunit proteins. So here we had, in a certain sense, five or so dimensions: the sedimentation, which is the native complex size; the denatured protein subunit size; the peptide ion exchange; the peptide mass; and the peptide fragmentation. So when we talk about fragmentation, that's what MS/MS sometimes means: you're doing first a mass separation of the peptide, then you break it up and do another mass separation of the component parts. And that allows us to do database searching and sequencing of the peptides. Now, how this works -- this is a blow-up of something that was in an earlier slide, where you can really see the region where you've got electrospray. We'll have an even closer blow-up of this in a moment.
But basically, your reverse-phase liquid chromatography is going directly into this vacuum here, generating a little spray at 4,000 volts, and then these molecular ions will go through the vacuum, through a series of octopole ion guides, until they hit an ion trap, which helps you determine the m/z, and finally a detector. You can see there's a variety of different pressures in here -- increasing vacuum from the point where you have the aqueous solvent going in, all the way to the detector, which is at the highest vacuum, around 10 to the minus fifth torr. Now, here's where the [INAUDIBLE] mass spectrometry, or MS/MS, or collision-induced dissociation, comes in. Here's your ion beam of molecular peptide ions -- each peptide is now in its own little space. And in the middle of this quadrupole environment, you bring in an inert gas, like argon, to collide with these rapidly moving ions, and the collisions will break the chain, basically, at any covalent bond. And if you think of the peptide chain backbone as having three different covalent bonds, there's the peptide bond itself -- the carbonyl carbon-nitrogen bond -- and then there's a nitrogen-carbon bond and a carbon-carbon bond. It can break at any of those three positions, and then you'll generate a set of fragments coming in from the N-terminus, with the complementary C-terminal fragments formed by cleavage at the same points. The fragments coming in from the N-terminus are called A, B, and C, depending on whether the break is at the carbon-carbon bond, the carbon-nitrogen bond, or the nitrogen-carbon bond, respectively. And the corresponding complements coming in from the C-terminus are called X, Y, and Z. And so, as it turns out, just empirically, if you sort through all the chemistry, in most cases the peptide bond is the one that's most actively cleaved.
And so the B ions and their complementary Y ions dominate the picture. Now, the other ones will be present, and especially if they come from a very abundant peptide, they can swamp out the B and Y ions of a less abundant peptide. But all other things being equal, B and Y will dominate, and most of the rest of the discussion will be about those. This is the closest picture we'll show of the ionization step. This is a step which has not been thoroughly enough studied, in the sense that in this ionization step the droplets of aqueous and organic solvent coming out of the separation column are subjected to the vacuum. The water starts being released from the droplet. The protons -- remember, this is acidic media -- associate with the molecular ion. The droplets kind of explode, because there's too much positive charge in a small space, until you finally have a molecular ion associated with one or two net positive charges. Remember, we had an example just a little while ago with a net positive charge of +2. You don't need to have neutrality in this situation. Anyway, this is poorly understood in the sense that some peptides ionize much better than others, and we'll come back to this when we talk about quantification. But for right now, what we want to do is ask how we analyze the complex spectra that come out when you fragment. First, you get a fairly simple spectrum, which is just a list of the masses of all of the peptides. And remember, some will be weak and some will be strong because of this voodoo ionization. But then we break them. Whatever the intensity of the original peptide was, it will make a bunch of daughter ions, which will be the B ions coming in from the N-terminus and the Y ions from the C-terminus. And you'll have this big mixture -- a nested set that gets increasingly large as you get further from the N-terminus in the B ion series, and their complements.
And the sum of a B ion and its complementary Y ion has to be the original molecular mass, corrected for the chemistry that occurs right at the cleavage. And so here's a real example. We're going to work through it so that you can see what happens to a typical peptide, here in the upper right-hand corner. And this is tandem mass spectrometry. Remember, there was a single-mass peptide that was then broken into all these little pieces. And the almost-intact peptide will be on the far end of the horizontal axis, which is the mass axis, close to 1,200 atomic mass units. And relative abundance is the vertical axis -- it's just related to the ion counts. And you can see there's some variation here. This is not due to ionization; this is due to the cleavage efficiency at each of the bonds. The Y series, which is in blue here, tends to be the higher peaks, and the B series, in red, tends to be the slightly lower peaks. And then you've got these little arrows -- the darker arrows indicate the Y ion series -- that separate two adjacent peaks, because the difference between those two peaks is the addition of one amino acid. And so, focusing on the blue series, the Y ions coming in from the C-terminus: the shortest Y, the Y1, would be just the C-terminal amino acid itself, which would be arginine, and its distance from the origin would be about the mass of the arginine itself. And then you add a glycine to it, which is a small delta, and then an alanine, and an isoleucine, and a serine, and so forth. And you can see here very clearly the leucine and the asparagine and the valine -- the Y ion series all the way down. The G is the last one documented; the N and the S at the highest molecular weights are not visible. And actually, for many of these you'll have very weak peaks -- you'll essentially have missing peaks corresponding to one of the delta amino acids.
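The nested B and Y series, and the complementarity between them, can be sketched directly. This is a simplified illustration, assuming singly charged fragments and ignoring the minor A/C/X/Z series; the peptide `SGLAR` is an arbitrary example, and the residue masses cover only the letters it needs.

```python
# Monoisotopic residue masses (Da) for a small alphabet -- enough for the example.
MONO = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'V': 99.06841,
        'L': 113.08406, 'I': 113.08406, 'N': 114.04293, 'R': 156.10111}
WATER, PROTON = 18.01056, 1.00728

def b_ions(seq):
    """Singly charged b ions: N-terminal fragments, no water, plus one proton."""
    out, total = [], 0.0
    for aa in seq[:-1]:            # b_1 .. b_(n-1); the intact peptide is not a b ion
        total += MONO[aa]
        out.append(total + PROTON)
    return out

def y_ions(seq):
    """Singly charged y ions: C-terminal fragments keep the water."""
    out, total = [], WATER
    for aa in reversed(seq[1:]):   # y_1 .. y_(n-1), growing from the C-terminus
        total += MONO[aa]
        out.append(total + PROTON)
    return out

bs, ys = b_ions('SGLAR'), y_ions('SGLAR')
```

The check the lecture mentions falls out of this bookkeeping: for singly charged fragments, b_i plus its complement y_(n-i) equals the neutral peptide mass plus two protons, and consecutive y ions differ by exactly one residue mass.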
So in that case, the distance between the two prominent peaks will be two amino acids. You can see this is starting to get to be a challenging pattern-recognition problem, because you've got all the B ions mixed in with the Y ions. And this is summarized in the next slide. The B ions are mixed in with the Y ions. Some of the ions are missing. Each ion has multiple isotopic forms -- that wasn't so evident in the previous slide, but in that blow-up I showed a while back, you had that binomial distribution. There is the lingering presence of A, C, X, and Z type ions, where you've got cleavage of some of the other bonds. Ions can lose a water or an ammonia. You've got noise from other peptides and from contaminants in the system. And you've got amino acid modifications -- which are not a contaminant or a bad thing; they're a good thing, this is what you're looking for -- but these can be present in trace amounts in the system. Now, there are two ways to approach the awesome amount of data that you can get out of these. Remember, you've got all these multiple dimensions, finally ending in this forest of B ions and Y ions and all the rest. And there are two approaches. One we'll call de novo peptide sequencing, which is analogous to the de novo DNA sequencing that we were doing. The other is the "you tell me the sequence and I'll find it in my data" kind of game -- doing a database search, where you're limited to finding peptides that you already know about, or can hypothesize from a genome sequence. So this is the first category, de novo sequencing. And it takes on all the challenges in the previous slide. It takes on the possibility of missing data for a particular ion species that you think should be there, but for some reason is not efficiently cleaved by the argon in the collision-induced dissociation.
And it takes into account that you have one nested set of B ion masses from the N-terminus that has to be complementary to the nested set you get coming in from the C-terminus. So this is dynamic programming. And you can probably count how many different times we've done a dynamic programming algorithm in this class, and so hopefully you're happy that you did at least one of them by hand. This one we won't belabor, but here you can see how it conceptually maps onto the simplest one we talked about at the beginning, which was comparing two amino acid sequences. There, the indels were caused by evolutionary change; here, the insertions and deletions are due to a missing ion from inefficient cleavage in the gas phase. This is further complicated by the necessity of essentially sequencing in from both the B and the Y ends simultaneously and making sure that you have the best combination of B and Y assignments. So that's de novo sequencing. Now, in slide 29, we have an example of the alternative, which is by far the more commonly applied: you tell me a sequence, I'll find it in my data, [INAUDIBLE]. Here you're basically calculating the spectrum that you might expect for each peptide that you might expect in a genome. So you use the genome to predict the protein-coding regions. You use those to do an in silico digestion with trypsin -- basically cleave after every lysine and arginine, and maybe apply more complicated rules, because trypsin doesn't always cleave after lysines and arginines, and sometimes there'll be other proteases present, and you have to take those into account. In any case, you generate a virtual set of peptides, and from that a virtual set of mass peaks. Now, since we don't know the rules that determine the heights of those mass peaks -- we wish we did, but we don't -- we just set them to some arbitrary unit height, all the same.
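The in silico digestion step can be sketched in a few lines. This is a minimal version, assuming the basic rule (cut C-terminal to every K or R) plus the common refinement that trypsin does not cleave when proline follows; the example protein sequence is arbitrary.

```python
def tryptic_digest(protein, skip_before_proline=True):
    """Cut C-terminal to every K or R; optionally honor the common
    exception that trypsin does not cleave when a proline follows."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in 'KR' and i + 1 < len(protein):
            if skip_before_proline and protein[i + 1] == 'P':
                continue
            peptides.append(protein[start:i + 1])
            start = i + 1
    peptides.append(protein[start:])   # the C-terminal peptide
    return peptides

fragments = tryptic_digest('MKWVTFISLLRPGASEK')
```

With the proline rule on, the R before P is skipped and you get two peptides; with it off, you get three. A real search engine would also enumerate peptides with one or two missed cleavages, for exactly the reasons the lecture gives.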
So you're not going to be getting a great correlation coefficient based on the heights, but merely on whether a peak is there or not. Every time you have a hit between your predicted spectrum and the observed one, it counts, no matter what the intensity of the observed peak is -- so you weight by the observed intensity, but you have no real calculated intensity. And this correlation coefficient serves as a way of prioritizing your scores. And very often the best score will be the database hit for the peptide that you want. Now, if you're expecting post-translational modifications, you need to tell the algorithm to add the appropriate mass to the appropriate amino acid. So, for example, if you expect a phosphoserine, you have to put the phosphate mass into the program and associate it with serine, so that a serine can be either a regular serine or a phosphoserine. So that's another complexity there. So now we've gone through the richness of the separation methods. Separation is intimately connected with getting us to a mass spectrum which is clean enough to do either de novo sequencing or database searching. Now that we've got the peptide identified, let's try to quantitate it. We can quantitate it one of two ways, just the same as with the RNAs: either on an absolute scale or on a relative scale. What is involved? We'll make an analogy to the RNA quantification methods -- I believe we've had something very similar to the left-hand side of slide 31 when we talked about RNAs, all the ways we could quantitate them. A subset of these have an analog in the protein domain, on the right-hand side of this slide. So, for example, one of our favorite methods was microarrays -- that's the top line of the RNA side. This is where you would have the gene segments, either oligonucleotides or [INAUDIBLE], immobilized on a microarray, fluorescently label your RNA, and quantitate.
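A toy version of this scoring idea can make it concrete: predicted peaks get uniform unit height, and a candidate's score is the summed observed intensity at every predicted m/z. This is my own simplified sketch of the scheme the lecture describes, not a real search engine; it uses nominal integer residue masses and an invented "observed" spectrum built from the true peptide plus noise.

```python
# Nominal (integer) residue masses keep this toy example short; a real
# search engine would use accurate monoisotopic masses and a tight tolerance.
NOMINAL = {'G': 57, 'A': 71, 'S': 87, 'V': 99, 'L': 113, 'R': 156, 'K': 128}
WATER, PROTON = 18, 1

def predicted_peaks(seq):
    """Uniform-height, singly charged b and y fragment ions for one peptide."""
    peaks, b, y = set(), PROTON, WATER + PROTON
    for aa in seq[:-1]:            # b ion series from the N-terminus
        b += NOMINAL[aa]
        peaks.add(b)
    for aa in reversed(seq[1:]):   # y ion series from the C-terminus
        y += NOMINAL[aa]
        peaks.add(y)
    return peaks

def score(seq, observed, tol=0):
    """Sum observed intensity at every predicted m/z (tol=0 means exact match)."""
    return sum(inten for mz, inten in observed.items()
               if any(abs(mz - p) <= tol for p in predicted_peaks(seq)))

# An "observed" spectrum built from the true peptide GASR, plus noise peaks.
observed = {mz: 100 for mz in predicted_peaks('GASR')}
observed.update({50: 5, 300: 5})
best = max(['GASR', 'VLKR', 'SAGR'], key=lambda s: score(s, observed))
```

Note that the decoy `SAGR` has the same composition and total mass as `GASR` yet scores lower, because the fragment ladder, not the intact mass, is what pins down the order of the residues.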
For proteins, the equivalent would be an antibody array aimed at unique features of each protein. This is in very early days, because we are antibody-limited: we do not have antibodies against every protein surface epitope, and the ones we have are not specific enough -- there's a lot of crosstalk. We mentioned in the second line that microarrays could not measure the composition of alternative splices, or the size of the messenger RNA. That was best done by a Northern blot, which actually measured the size but was not high-throughput. The equivalent for proteins is called a Western -- these are all puns on Ed Southern's name. Westerns allow you to measure the size of native or denatured proteins, and then detect them with antibodies. Again, antibody-limited. We would need a technology breakthrough to give us all the antibodies we need, the way it's easy to dial up any nucleic acid you want just by synthesis. There's no real equivalent to PCR in the protein world. You can tag proteins with nucleic acids and do PCR on the nucleic acids, but there's no real direct amplification of proteins. Reporter constructs basically work the same for each: you have something that is highly specific because you constructed it in vivo, but it is a sum of all the RNA and protein expression steps that give you the reporter. Fluorescent in situ hybridization in the case of RNA, or fluorescent in situ antibodies in the case of proteins, is a great way of correlating quasi-quantitative information with a subcellular or suborganismal localization. Tag counting -- there's nothing equivalent for proteins, and mass spectrometry can be used for differential display. What are the sorts of numbers of molecules we have -- a ballpark, when we're dealing with quantitation? Slide 32: it depends on the organism. In some of the simplest ones, like yeast, we mentioned the messenger RNA molecules might be less than one per cell, just stochastic fluctuations.
And in a human cell, it's probably a fairly good approximation that almost every nucleotide in the human genome can be transcribed. Maybe there's some leakiness, where on the order of 1 in 10 to the fourth cells will have a little leakiness at any particular nucleotide. So that's kind of the background level -- 10 to the minus fourth per cell -- and it's really only detectable with reverse-transcriptase PCR. The entire transcriptome of a human cell is on the order of half a million messenger RNA transcripts. And so if any particular messenger RNA got up to 10 to the fifth, it would dominate. And this happens in some cases, like reticulocytes, where maybe 90% of the messenger RNA might be globin. Now, for proteins, you'll typically have bursts of proteins: from one messenger RNA you might get 10 to 1,000 proteins made, depending on the organism. And so you typically have a corresponding amplification in the last line. Now, when people assess casually whether a particular method is quantitative or not, they can be easily intimidated. I've commented on the ionization -- ESI stands for Electrospray Ionization in mass spectrometry. If you take a protein and cleave it with trypsin -- and trypsin cleaves pretty close to completion fairly easily -- then in principle every tryptic fragment, every peptide, should be equimolar. If you now inject that into the HPLC and into the mass spec, every integrated peak intensity should be equal, because they're all equimolar. And then when you find that, no, they vary over two orders of magnitude -- that is to say, some peaks are 100 times more intense than others -- then you might get discouraged and say, oh, this isn't a quantitative science at all; I can't deal with a factor-of-100 difference. But I think that you need to reassess when people say that mass spectrometry is not quantitative.
The two requirements for quantitation are that you have reproducibility, and that you have a way of calibrating or calculating. If you can calculate from first principles, then you don't need calibration; if it's too empirical, then you need calibration. But you do not need every disparate object to behave exactly the same way. Not every peptide has to give the same quantitative answer; it simply has to be reproducible and calibratable with that same peptide. And so here are two examples in a row of establishing the reproducibility. This first one is from that ribosomal protein experiment that I showed earlier, with the complexes, the sedimentation velocity, and the multiple dimensions. You do a measurement on day one, you do the whole experiment over on day two, and then you compare the intensities of the peaks. And you get what is a fairly good straight-line relationship on a log-log plot, over a little more than 3 logs. There were many moving parts in that experiment -- there were all those different dimensions -- and the whole experiment was not designed to be quantitative; there were no internal controls, and so forth. Nevertheless, this is a good starting point for convincing yourself, or determining, whether something is reproducible enough that you can make it quantitative. Here's another way of measuring the reproducibility. That one was a linear correlation coefficient on a log-log plot. Here is the coefficient of variation -- I think we may have mentioned this before. This is just the standard deviation normalized by the mean. In the upper left-hand part of this slide, you can see that the CV, or Coefficient of Variation, is just the standard deviation divided by the mean, so you can report it in terms of percentages of variation. So here, with calibration standards of peptides, you get somewhere between 2% and 28% coefficient of variation.
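The CV calculation itself is one line. Here is a minimal sketch with invented replicate intensities (the 2% to 28% range quoted in the lecture comes from the real calibration standards, not from these numbers).

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, reported as a percentage."""
    return 100.0 * stdev(values) / mean(values)

# Replicate peak intensities for one calibration peptide (invented numbers).
replicates = [9.0, 10.0, 11.0]
cv = coefficient_of_variation(replicates)   # 10.0% for these values
```

A CV of 10% would sit comfortably inside the 2% to 28% range the lecture reports for calibrated peptide standards.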
That means you can trust these things to be within 2% to 28% of their absolute amounts when you calibrate them. So these are just two examples that should reassure you that there is reproducibility and that you can calibrate. Calibration can be an expensive proposition, but there are various motivations for quantitating both proteins and nucleic acids on an absolute scale. For example, you might want to compare them to each other. You might want to ask: to what extent is it the case that the most abundant proteins result from the most abundant messenger RNAs? You could imagine a world where these are completely independent -- one is governed by transcription factors, the other by translation factors, and there's no reason they necessarily are synced up. Or you could imagine a hypothesis where it's a lot of work to make a lot of protein, and so everything has to be working right for the most abundant ones, while for the least abundant ones you can have a little more slop. So this analysis -- critiqued a little bit in the subsequent paper that we'll talk about after the break -- can be interpreted as being consistent: when you include all the proteins, you have a very good correlation coefficient -- this is a linear Pearson correlation coefficient -- but as you restrict yourself to the lowest-abundance proteins, it falls apart; you have less significant Pearson correlation coefficients. So let's take a little break. And then we'll talk about critiquing this a little bit, and improving it, and asking about other motivations for putting protein abundance on an absolute scale, and doing ratios.
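The log-log Pearson correlation used in both of these comparisons is easy to compute by hand. This is a sketch with invented abundances: protein is exactly 100x mRNA here, an exact power law, so on a log-log scale the correlation comes out to 1.0; real data would scatter well below that.

```python
from math import log10, sqrt

def pearson(xs, ys):
    """Plain Pearson linear correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented abundances spanning several logs; protein = 100 * mRNA exactly,
# so the log-transformed points are perfectly collinear.
mrna = [1, 10, 100, 1000, 10000]
protein = [100 * m for m in mrna]
r_loglog = pearson([log10(v) for v in mrna], [log10(v) for v in protein])
```

Computing r on log-transformed values, as in the lecture's plots, keeps the handful of very abundant species from dominating the statistic the way they would on a linear scale.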
MIT HST.508 Genomics and Computational Biology, Fall 2002 -- Lecture 1B: Intro 1, The Computational Side of Computational Biology (Statistics, Perl, Mathematica)

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: Is there a section? No, there are no sections for a few days. The sections, both the weekly sections and the extra sections, will be on the website. That's your best place to look. If you have questions after you've looked at the website, then you should contact your teaching fellow. But definitely look at the website. It will be updated daily, and your sections will be assigned probably by Thursday or Friday, OK. OK, welcome back from the break. As promised, we're going to discuss the innards of the computers that are going to be helping us with computational biology. We have a schematic diagram here -- we'll have similar schematic diagrams for biological or biochemical systems -- and this one is for transistors. We have these nonlinear transistor elements here, with an input voltage and an output voltage, a supply voltage VDD, and a ground here, this little triangle in the lower left. These transistors are in this circuit, which is certainly a higher-level description. It allows you to compute the more detailed description of this voltage curve as a function of time -- time is the horizontal axis here, ranging out to 200 nanoseconds -- and this is all done in a program, a simulator called SPICE, for this complementary metal-oxide (CMOS) inverter. And this sort of simulator is one of the things that we will be talking about for biological systems. It's very useful for designing these circuits, and you can see, as the input voltage goes up in this straight blue line from 0 volts, which is off, to 5 volts, which is on.
You get this almost all-or-none, but certainly nonlinear response, where the output voltage on the vertical axis goes from 5 volts, on, to 0 volts, off. It's basically the opposite. When the input is 0, the output is 5. When it's 5, it's 0. But in between, you can see there's this gray zone. You want to stay away from this in digital circuitry. You want to stay fully saturated or basically at 0 volts. And then these inverters can be wired together in an even higher-level diagram into what are called registers, which allow you to store multi-digit binary numbers. These can then be coupled together. So you can take two registers and add, digit by digit, the contents of those registers. That's called an adder. Adders and a variety of other higher-level electronic components can be put together and then addressed by software called a compiler. Well, how did you get this software in there in the first place? We have to toggle the hardware until it gets into a state where you sort of manually get the transistors to be in the right set of on and off voltages, 5 and 0 volts. Once you get the first compiler, you can now work at a much higher level. A compiler is basically something-- all this code that I've been showing you so far, the Perl, the Fortran 77, the Mathematica, so forth, those are all things like, print x above 1. That is like a compiler. It's a code which you can type in. It's almost English. Some of you may say it's not nearly close enough to English. But it's much closer than dealing with these little voltages, OK. And once you have one compiler, you can make another one. You can use that pseudo-English to write a more complicated compiler that can deal with an even higher-level language, and then reduce it. It takes care of the bookkeeping, reducing it down to telling the computer what voltages to put where and when.
As you go up to the still higher ones, we have these high-level application programs, which might have intense graphics or something like that, OK, so that you really get a world view that is much more in resonance with our primate visual and auditory senses. Now, this idea of self-compiling and self-assembling is very interesting, very self-referential, as many of the things in this course will be. We have biological components which have these very interesting complementary surfaces that I talked about a couple of slides ago, where the two strands of DNA which are not covalently linked-- covalently means that you have these strong bonds along this one ribbon-- are connected by a series of stacking interactions, where plates, the sort of planar bases, stack up. And they form weak bonds from one strand to the other where the rules I mentioned before apply-- now with slightly more realistic symbols here for the GC and AT base pairs rather than just the alphabetics. But this is still symbols. This is not really electron density, where you're using letters instead of the electron density of nitrogens, carbons, oxygens, hydrogens. But these hydrogens make this hydrogen bond, a weak bond, and the surfaces are complementary, so that if you try to pair a C with a T, you have the wrong spacing. You have steric clashes and so on. This is a process by which one sequence will not make the same sequence right away-- replication does not make an identical sequence. It actually makes a complementary surface. And then one more cycle, and you get back to the original, just like that example I gave before of the trinucleotide going to a hexanucleotide. And then that helps the second round. This is very analogous, except now these molecules, instead of being six nucleotides long, can be hundreds of millions of nucleotides long. So, to wrap up this introduction to minimal life: all these forms of life have in common self-assembly, catalysis, replication, mutation and selection.
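The complementary-copying idea above can be sketched in a few lines (a symbolic sketch, not real biochemistry): one round of "replication" yields the complement, not an identical copy, and a second round restores the original sequence.

```python
# Watson-Crick pairing rules as a lookup table.
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate(strand):
    # Copy by base pairing: each base templates its partner, and the new
    # strand runs antiparallel, hence the reversal.
    return "".join(COMP[b] for b in reversed(strand))

original = "ATGCGTTA"
copy1 = replicate(original)   # complementary strand, not identical
copy2 = replicate(copy1)      # complement of the complement
print(copy1, copy2)           # copy2 is the original again
```

This is the same logic as the trinucleotide-to-hexanucleotide example: the immediate product is a complementary surface, and identity only returns after two rounds.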
The monomers, meaning the simple molecules taken from the environment, the environment being defined by some boundary-- this could be a fairly flexible boundary. It could be a membranous boundary. It could just be some kind of aggregate. The small, simple monomers combine to make complicated polymers. And then this replication process continues. When we go to more complicated systems, now this is the central dogma. The DNA, the long-term stable storage, is accessed by making ephemeral RNA, which then encodes proteins. And in some systems, RNA can replicate. RNA can be reverse-transcribed to DNA. Unfortunately, the AIDS virus is one of the ones that does this. DNA can replicate itself, and in certain pathological cases, or actually certain cases, I should say, proteins can in an autocatalytic cycle recruit other proteins to their particular state. These are called prions, and they're implicated in Mad Cow disease and things like that. But again, you've got a boundary for the replication, and simple structures come in and complicated structures are generated. When we talk about the polymers, we want to quantitate their amounts and their interactions. They initiate, elongate, terminate. They fold. They get modified in various ways. So it's not just simple linear polymers. Their position in space is important, and they are either degraded or diluted during the replication process. This gets even a little more complicated when we talk about functional genomics. Here, we measure the growth rate. We measure the concentration of RNA and proteins. Sometimes their localization is important to measure too. That's called gene expression. And the interactions are important to be part of this measurement process. These measures and models that I'm talking about are how we get to defining enough about living systems that we can model. So here is another model. This is a Rorschach test of the sort you might take.
You go into a psychologist's office, maybe long ago, and they'd say, look at this inkblot. What does it remind you of? You know, bad things your father did, so forth. Here, the Rorschach test is, what does this curve remind you of? Give me some hints. STUDENT: Exponential curve. GEORGE CHURCH: Exponential curve, great. And how do you get-- and this is like the stock market before the dotcom crash. How do you get this? How do you find it in biology for the biologists, or how do you find it mathematically for the rest? Yeah. STUDENT: It kind of reminds me of human population growth. GEORGE CHURCH: Human population growth. Yeah, that's a good biological example. Maybe, as Malthus and others have said, this can't go on forever. But this is not a fact. It's a pretty solid speculation. So how do you get this? Well, for those of you who prefer to see or expect to see stocks go down as well as up, we've got an exponential decay curve in magenta. It sort of is a reflection of the exponential growth curve. These are just e to the kt, or e to the minus kt, where k is positive. And this is the world's simplest differential equation. y, the y-axis, is a function of time, the horizontal axis. And the ratio of small changes in y to small changes in time t, dy/dt-- that's sort of the slope of y as it goes up the blue exponential curve-- is related to how much y there is. The more humans there are, the faster the human race replicates, OK. And it just keeps getting more and more. And that's why it has this exponential curve. This is much steeper than quadratic. And its origins are way back here. It's similar with exponential decay, as you might get with radioactive substances. It follows the reverse process. And if you integrate this, you get a very simple integral. It's what we've been talking about. It's e to the kt. So it's an exponential function of time, y, say the human population or your stocks, where e is this number that we highlighted before, about 2.7.
If you're interested in half-life, which sometimes people are, like radioactive decay or the doubling of bacteria in a solution, there's a very simple formula that gets you from the rate constant k-- this is like a biochemical rate constant-- to a half-life. This is growth and decay. So what limits this? Why doesn't it just keep going up? What we've been looking at is the lower left-hand corner of this graph in slide 29, where it goes exponentially up from close to 0, not quite. And eventually, it will plateau, or worse yet, it might come down. And what causes this plateau is exhaustion of resources. And if you get enough accumulation of waste products, or enough exhaustion of resources, you can plummet. If you just zoom in on this little part here, one way of analyzing it, hopefully well known to some of you, is that you take the logarithm of y and plot it versus t. So t is a linear axis, and the vertical axis is logarithmic. Now you get a straight line, at least for the beginning here. And eventually it will plateau the same as this one does. And that's a way of telling that you have a simple exponential. If you have e to the t power, or 2 to the t power, or anything to the t power, simple. Those are all simple exponentials, and they'll give a line when you take the logarithm. Now, what does Mathematica do to help us here? You set up this equation. Instead of saying dy/dt, you could say y prime of t. That's just shorthand. It's very commonly used in calculus. y prime of t, the first derivative of y with respect to t, is directly proportional to y. That is to say, your slope of the human race expanding is directly proportional to y, the number of humans, OK. And then you're going to start at time equals 0. We've got one human. Well, that's probably not enough. Well, OK. Maybe a bacterium. OK. You have initial conditions, OK. And then you just say, solve it.
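The half-life formula mentioned above follows from solving e to the minus kt equals one half: t-half = ln(2)/k. A minimal sketch (the rate constant here is an illustrative number, not one from the lecture):

```python
import math

def half_life(k):
    # Solve exp(-k * t) == 0.5 for t, giving t = ln(2) / k.
    return math.log(2) / k

k = 0.35                  # illustrative decay rate constant, per hour
t_half = half_life(k)
# Sanity check: after one half-life, half the material remains.
remaining = math.exp(-k * t_half)
print(t_half, remaining)  # remaining is 0.5
```

The same formula read the other way gives the doubling time of an exponentially growing culture: ln(2) divided by the growth rate constant.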
You tell the computer-- so everything to the right of this equation is something you can type in to Mathematica. Do a differential equation solve of this string that I typed in here, which tells you the initial conditions and the formula. And boom, out comes the output. You didn't type this in. Mathematica came up with this, e to the t power. That's pretty cool. And even though it's the world's simplest differential equation, it solved it. Try to do that in your other favorite programming languages, Excel, or Fortran, or Perl, or Python or whatever, C. This is really powerful. Now, this is analytic or symbolic or formal. These are various terms you would use for this trick. And as the equations get more and more complicated, this becomes more and more amazing, almost intelligent. Eventually, they get complicated enough that neither humans nor Mathematica can solve them. And so what you do then is use a numerical approximation, where you take little steps and you solve it by numeric approximation. But you set it up the same way. You tell it that the derivative of y with respect to t is proportional to y. Or in this case, the proportionality constant is one. Same initial conditions, one starting bacterium. But now you tell it what interval you want to do this over. You don't want it to have to do these little steps everywhere, from negative to positive infinity in time. You just want to say, I'm just interested in time from 0 to 3 minutes or hours or years, whatever, OK. And then you evaluate it, and you can plot this, which appeals to the primate visual system, these plots. And you'll see lots of plots in this course. But now here, y, a function of t, is this exponential curve. And if we plotted log of y as a function of t with this numerical solver, it would be a straight line. Now, I'll give you some where it isn't a straight line. These are all logarithmic on the y-axis, and they're all linear on the horizontal axis. And they're all time on the horizontal axis, linear time.
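The little-steps idea above is exactly the Euler method, the simplest numerical ODE solver. A sketch for the same problem, y' = y with y(0) = 1 on the interval 0 to 3 (this is the generic textbook method, much cruder than what Mathematica's numerical solver actually uses):

```python
import math

def euler(f, y0, t0, t1, steps):
    """Integrate y' = f(t, y) from t0 to t1 by following the local slope
    for many small steps of size dt."""
    dt = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += dt * f(t, y)
        t += dt
    return y

approx = euler(lambda t, y: y, 1.0, 0.0, 3.0, 100_000)
exact = math.exp(3)        # the analytic solution y(t) = e**t, at t = 3
print(approx, exact)       # the two differ by less than 0.01
```

Shrinking the step size shrinks the error; that trade between step count and accuracy is the whole game in numerical integration.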
And they're more than simple exponentials. They're going up faster. Rather than slowing down-- which is what we think human populations, bacterial populations and so forth will do: go up linearly on a log plot and then flatten out-- these things are just going faster and faster. What are these things? Well, even though your dotcoms didn't work out, if you had had a stock portfolio in Western European commerce in the year 1000, you'd be in really good shape now in the year 2000. This is not only going exponential, it's going steeper than exponential. And we all hope that this will keep going forever, that the gross domestic product of people in Western Europe and the world will just keep going up. And this is due to technology. And technology keeps reinventing itself. And hopefully, it can keep doing that. Here's another example. This is more drilling down to specific technologies that have been on a superexponential or hyperexponential trajectory-- I don't know quite what the right term here is-- for a long time. These are greater than linear, steeper than linear. These are close to quadratic on the log plot. And so it's an exponential of a quadratic. And these are, for transmission rate in pink, of data from the Morse code in the 1830s to optical fibers here in the present. And then the blue is digital processing, from the first census in the 1890s to modern computers. And this is in instructions per second per $1,000. Now, the little piece of this, these integrated circuits that Moore's law refers to from just the tiny end of this from 1965 onward-- that refers to integrated circuits. And these will run out of gas pretty soon, everybody tells us. But this curve may not, because it goes beyond it. It predates integrated circuits and it will postdate them. And who knows where this leads. STUDENT: Question. GEORGE CHURCH: Yeah. STUDENT: What's the r squared? GEORGE CHURCH: Oh, sorry.
We'll get to that at the end of this lecture, but it's a correlation coefficient, which is, to what extent is there a fit between one curve and another. How well does the calculated curve fit the observed data collected? And so these are around 0.99, which is a very good correlation. And it's better than the linear fit, but of course, you have more adjustable parameters. OK. Another sign of hopefulness is that data are coming in faster. So our life is getting better. Our computers are getting faster, and data is coming in from the Genome Project. And there's this little inflection point, where it was log-linear for a while and then a new log-linear-- so overall, it's superexponential. And this is for the number of base pairs we can get per dollar, starting with transfer RNAs in the late '60s and ending with who knows how many human genomes we will have by the year 2010. Now, where does all this exponential growth go? Some people think we will be creating computers that are smarter than we are soon. What would this require? Here's a nice back-of-the-envelope example of systems analysis where biology meets computers. Let's analyze our retina. All of our retinas are processing right now, hopefully. And Hans Moravec simulated a retina for video imaging where he did edge and motion detection, and it required about a billion instructions per second to match the 10 times per second at which you're updating the retina. The brain is about 100,000 times larger than the retina, and if this scales linearly, which is speculative, then you need a computer that has about 100 million MIPS, or about 10 to the 14 instructions per second of compute power, and a similar number of bytes. Now, back in 1998, that was still quite a ways away. But here in 2002, the best supercomputer-- and this site keeps track of the top 500 supercomputers. And trust me, your computer is not on that list. But anyway, the top one is within a factor of 10 of this compute power.
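The back-of-the-envelope scaling above is just two multiplications; writing it out makes the units explicit (the 100,000x brain-to-retina factor is, as the lecture says, a speculative linear scaling):

```python
# Moravec-style estimate: retina workload times brain/retina size ratio.
retina_ips = 1e9          # instructions/sec for edge + motion detection at 10 Hz
brain_to_retina = 1e5     # brain is ~100,000x the retina (linear scaling assumed)
brain_ips = retina_ips * brain_to_retina
print(f"{brain_ips:.0e}")  # 1e+14, i.e. about 100 million MIPS
```

Note that 100 million MIPS is the same number in different units: 1e8 MIPS times 1e6 instructions per MIPS equals 1e14 instructions per second.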
Now probably, the earth scientists that own this thing will not bother to try to see if it can do ordinary human things like watching soap operas. But we're in that range. We need to be cognizant of the possibility. Here's another model. I've tried to put it in the same units we've been talking about, this exponential growth. Again, we have the rate constant k. We have y, the human or bacterial population, growing exponentially. And here now, we try to model the case where yes, a greater population size means greater growth, until it gets close to the maximum carrying capacity, the 100%, the 1, the maximum it can go. And then it will plateau. So you have a plateauing near 1. And this is called the logistic map. It is the basis of that complexity calculation we talked about earlier. And here, the population growth is a function of the rate constant and the population size. When y is small, as y scales up, it goes up exponentially. And then finally, as it approaches a maximum of 1, it plateaus. However, if you get greedy and you increase your growth rate beyond, say-- here, it's very, very small, very ungreedy, just 1.01. That's like a 1% interest in your bank account, OK. But still, you'd grow exponentially given enough time. However, you get greedy and say, I want a 300% return on my investment. Well, then you start getting these little cycles, like the stock market going up and down, OK. And if you get really greedy, where you need a 400% improvement each cycle, you get chaos. And then you can eventually drop down very close to 0 and crash, and the population can go extinct, because it used up its resources or made non-optimal use of them and maybe made toxic side products. OK, graphs. We have directed acyclic graphs. Just as an example, graphs are made up of nodes. You can think of these here as, the nodes are people or organisms. And you start with one bacterium here on the far left-hand side of slide 35. And you have a direction. You can only go forward in time.
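The regimes described above can be reproduced by iterating the logistic map y -> r*y*(1 - y) directly (the growth rates below are my illustrative choices for the three regimes, not the lecture's exact numbers):

```python
def iterate(r, y0=0.1, n=1000):
    """Iterate the logistic map y -> r*y*(1 - y) for n steps."""
    y = y0
    for _ in range(n):
        y = r * y * (1 - y)
    return y

# Modest growth rate: the population settles at a steady plateau,
# the fixed point 1 - 1/r.
settled = iterate(1.5)
print(settled)   # close to 1/3

# Near r = 4 the trajectory is chaotic: consecutive iterates wander
# over the interval (0, 1) without ever settling down.
chaotic = [iterate(4.0, n=k) for k in (1000, 1001, 1002)]
print(chaotic)
```

Between those two regimes (roughly r between 3 and about 3.57) the map oscillates in period-2, then period-4 cycles, which is the "stock market going up and down" behavior in the lecture.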
So the node is the bacterium individual, and the lines connecting them are edges in the graph terminology. And they're directed. And they can't go in cycles, because you can't have a daughter giving rise to a mother, OK. So this all makes intuitive sense. But you can use these kinds of graphs for a whole variety of interesting things. You have not just the pedigree we talked about, but phylogeny in general, ancient connections between organisms. The biopolymer backbone: you have a simple linear backbone or a branched backbone. This can be represented. It doesn't covalently cross back on itself. If you want to know what's near what as this polymer folds up, like that transfer RNA I showed you in the first slides, those contacts are indicated. Now you start getting cycles, because A can contact B, B can contact C, C can contact D, and back to A again. You get cycles in a three-dimensional structure. You get cycles in a regulatory network. You can have, in order to maintain homeostasis in your body, A regulating B, B regulating C, and back to A again. But that's all directed. There are system models that we and others will study. What they have in common-- this is slide 37 on the left-hand side, the system models-- is that they've been chosen mainly because in the pre-genomic era, it was very hard to get data sets, and certain systems were just technically easier to get large data sets for, genetic or biochemical. These include E. coli going toward food and away from toxins. The red blood cell is a nice metabolic system because it doesn't have any polymer synthesis. Makes it simpler. The cell division cycle is really key for understanding pathogen replication, cancer. Circadian rhythm: a huge number of organisms, many if not possibly all, have some circadian rhythm that keeps their biochemistry optimal, and keeps us hopefully awake, right now, anyway, until it's time. OK. Plasmid DNA replication is an example of single-molecule precision. And we'll talk about the DNA single molecules in just a moment.
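The distinction above, pedigrees that must be acyclic versus contact or regulatory networks that may contain cycles, is easy to check mechanically. A minimal sketch using an adjacency list and Kahn's topological-sort algorithm (a standard method, not one named in the lecture):

```python
from collections import deque

def is_acyclic(graph):
    """Kahn's algorithm: a directed graph is acyclic iff every node can be
    peeled off in topological order (always removing a node with no
    remaining incoming edges)."""
    indegree = {node: 0 for node in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    seen = 0
    while queue:
        node = queue.popleft()
        seen += 1
        for t in graph[node]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    return seen == len(graph)

pedigree = {"mother": ["d1", "d2"], "d1": [], "d2": []}  # edges point forward in time
contacts = {"a": ["b"], "b": ["c"], "c": ["a"]}          # folded-polymer contacts with a cycle
print(is_acyclic(pedigree), is_acyclic(contacts))        # True False
```

If a "daughter" edge ever pointed back at an ancestor, the indegree count would never drain to zero and the check would fail, which is exactly the intuition that time only runs forward in a pedigree.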
So that's where we're aiming right now: from graphs and pedigrees down to the single molecules that allow replication to work. This replication is achieved by interconnected machines that are somewhat modular. "Modular" is also a computer term, where you try to put code that works together into something that's defined spatially and functionally. So you have these little modules that replicate the DNA and make RNA from it. A different module does the protein synthesis. There is some interconnection between these. These kinds of complicated machines that biologists love to simplify in diagrams will be described in more detail next week. But the idea is this idea of modules versus extensively coupled networks. This is how we get the replication. The way we analyze the replication is somewhere here in the middle, where I have a scale that goes from high-resolution, very accurate descriptions of physical processes, sort of in the nanometer, femtosecond range, on the far right-hand side of slide 40, to things that are at very long timescales and very large scales, sort of kilometers and years, that happen in population dynamics, sometimes global population dynamics. You should understand that all of these models we'll be talking about are approximations. As we go down the scale, it gets more and more computable to compute more and more complicated things, but at the cost of greater and greater approximations. Even the molecular mechanics that we use in conjunction with crystallographic diffraction data is amazing computational chemistry, but it's a great approximation of quantum mechanics, which in turn is an approximation of quantum electrodynamics, which itself is an approximation. And all of these things are very hard to compute for any even reasonably sized atomic multi-body problem.
The big approximation for molecular mechanics is that you have spherical atoms, so you don't have the distortion of the dipole that occurs in very useful bonds, such as the hydrogen bonds and almost all non-bonded interactions. That's poorly approximated, but it's the best we have that can be computed right now with most computers on even modestly large molecules. Then, as we go down, we can think of it as higher and higher-level abstraction. Just like high-level programming languages, we're now programming chemical systems and thinking about them. Now instead of dealing with single atoms in molecular mechanics, which produced the tRNA structure that I showed you, we deal with that whole tRNA as a single molecule. But it's still a great depth of precision, because each molecule has its own life, and you track each one on the computer. And that's stochastic simulation. The next higher level beyond that is, now we don't deal with single molecules. We deal with populations of molecules, or we deal with a concentration as a function of time. The ordinary differential equations that we've already been talking about, like that exponential growth curve, are one way of dealing with the concentration of bacteria as a function of time. That's appropriate. There are other cases where we want to do optimization. We want to study how close to optimal a system is. One way to study that is with these economic functions, this linear programming, to look at the fluxes. Now, we're no longer talking about concentration and time, because we're interested in the rates at which chemicals are flowing through a biological system, where any particular chemical concentration is at a steady-state level. And that means you have things going in and things going out, but the stuff in the middle is staying the same. That's a very useful approximation. It's used time and again. Even though these are dynamic systems, you can often find them.
In a pseudo-steady state, you can apply these very powerful computing tools. And then you can do computations that would be very hard to do with these more precise and complete methods. And we'll go through these in much more detail later on. You'll find very interesting connections between the larger-scale things we're talking about, where we talk about the stochastics of whole organisms in big populations, and the stochastics of single molecules. OK. We're talking about single molecules. This is our last topic today. And each of you does single-molecule manipulations on a regular basis, and your ancestors have been doing it for 10,000 years without a license, without a computer. And they've been doing a pretty good job of it. They've taken this little weedlike thing, teosinte, and turned it into this corn that would make the 4th of July quite proud. And dogs, who knows what their ancestors looked like. But right now, they span about three orders of magnitude in mass. And this was all done with the awesome power of single-DNA-molecule technology: crosses, basically. And what happens in each of these cells in your body, if all goes well, is you start out with one chromosome of interest. And it divides, and then the cell divides. And that chromosome-- we'll forget about all the other chromosomes in there for a moment-- that chromosome has a choice: when it divides, the two copies can go one each into each of the daughter cells. Or both chromosomes can hang out together, since they're all tangled or something, and then one of the daughter cells doesn't get any copy of that chromosome. That obviously is not a good thing. Even if two copies of the chromosome is OK, if you can tolerate that extra dosage, you certainly can't tolerate zero chromosomes. Well, what are the chances this will happen? Well, this is really elementary probability. I'm going to ease you into it. It's about a 50/50 chance. These are all of them. This is the exhaustive list of the possibilities.
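The exhaustive list just mentioned can be enumerated directly: each of the two copies independently ends up in daughter cell "A" or "B", giving four equally likely outcomes.

```python
from itertools import product

# Each of the two chromosome copies goes to daughter "A" or "B" at random.
outcomes = list(product("AB", repeat=2))  # ('A','A'), ('A','B'), ('B','A'), ('B','B')

# Correct segregation means the daughters get one copy each.
correct = [o for o in outcomes if set(o) == {"A", "B"}]
print(len(correct) / len(outcomes))  # 0.5 -- a 50/50 chance at random
```

Two of the four outcomes put both copies in the same cell, which is the "both hang out together" failure mode, so purely random segregation of a single chromosome fails half the time.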
And about 50% of them have the wrong dosage. Well, what if we have a more realistic situation, human cells with 46 chromosomes? What are the odds that they'll all be right, that we'll get exactly one of each? We have 23 chromosomes from mom and 23 from pop. What are the odds that this is going to work out? Well, we're going to take a couple of slides to get to that answer. But first, to motivate you, this is extremely important in a health care sense. It's the most common form of mutation; it happens all the time. Unfortunately, for every chromosome, a duplication or loss is a big change in the human state. And the mildest of all of the additions or subtractions of a chromosome-- here, you just have three copies of chromosome 21, everything else normal. So a 1.5x dosage of one of the smallest human chromosomes has an enormous impact. Most of you have seen someone with Down syndrome, which involves severe mental retardation, and heart defects in various other organs. STUDENT: [INAUDIBLE]. GEORGE CHURCH: Yeah, question. STUDENT: This problem that you just described, in reality, though, it's not random, because there are mechanisms in the cell that would bias the process in order not to mis-segregate. GEORGE CHURCH: Right. This is a good point. This is what I'm setting us up for, exactly that conclusion. It cannot be random. Single molecules are subject to stochastics. And so to overcome that stochastic process, which should be random, you have to have machines that involve multiple molecules, because only through multiple molecules can you get the statistics to overcome the single-molecule noise. And that's quite a trick. You can't just casually say, oh, there's some machine in there that takes care of this, OK. Just to expand this a little bit more, we know that certainly, DNA is the case where the single molecule is always a problem. It has to be aided by molecular machines where you use energy. You expend energy in order to make sure the DNA molecules work.
We'll get back to that calculation of what the odds are at random, but I should say also, RNAs in many systems appear to be, on average-- remember, it's the population average-- close to 1 molecule per cell. They're produced in bursts: stochastic bursts of RNA where you get transcription factors binding. And then each burst of RNA produces even bigger bursts of proteins. But on average, it comes out to be a very small number, because the proteins persist. They last through many cell divisions, while the RNAs turn over more rapidly. To get back to that question of how much variation is tolerable in biological systems, here are the very beginnings of your statistics. Some of you may have had this already, hopefully. There will be sections where you can cover this. But here are some of the really, really useful, easy statistics. What do we want to know about a distribution? Making the fewest assumptions for now, we want to know its mean, its arithmetic average. What is the average number of chromosomes in a cell? And if it's supposed to be 1, how close to the average are they? That's the variance. To get the arithmetic mean, you basically add up all the numbers and divide by the number that you counted. Add up the values, which are the x's. And so here you take a weighted sum. The sigma means the sum of the x values weighted by the frequency, f of x, the frequency with which they occur in the sample that you take. Here, r for the mean is just 1. It's taking the first moment to get the mean. The analogous thing is, now you correct all the values that you're measuring, these variables, the number of chromosomes, say. And you subtract the mean. So now the mean for this x minus mu is effectively 0. The mean is 0, and you want to ask, how far from 0 do you deviate? For chromosomes, you want that deviation to be very small. You want the variance to be very small. And it's just the sum of the squares.
We want to take the squares because if it deviates either up or down, it's still a tragedy, and you want to keep track of that. So these are two things that don't make any assumption about what kind of distribution you have. The distribution can be anything. You can calculate the arithmetic mean and the standard deviation. Another useful concept, one that starts to make more assumptions in interpretation, is, now you have two variables, say x and y. And you want to ask, do they covary? Are they related to one another? When you do two different experiments, you want to ask whether they're giving similar results. If they're two completely different kinds of experiments, you might want to know whether they're reinforcing each other. You want to know whether they're redundant. If you're observing two biological facts, you don't know whether they are related to one another. It's a discovery if they covary. That's what this means. Covariance is, again, using this concept of expectation: the sum of the x's and y's corrected, so that you subtract their means, their averages, and divide by their standard deviations, the square root of the variance. So you basically end up with a mean of 0 and a variance of 1. And then once these are normalized, when x goes up and y goes up, the product will be reflected in this sum. So now this has an interesting property: when x and y are independent, unrelated, then C, the Pearson correlation coefficient, is 0. However, the reverse is not true. If C is 0, it does not imply that x and y are independent. A simple example is the quadratic curve, y equals x squared. Here, they are completely related to one another, but they give a correlation coefficient of 0. That's because this is a linear correlation coefficient. The model that you're testing is that they are linearly related. They are either positively correlated, where the extreme case is C equal to 1, or negatively correlated, where C is equal to minus 1.
You can plug in this little formula here, a handy-dandy practical form you can use to calculate probabilities. And this is an Excel formula. The probability that a correlation is far from 0 is dependent upon the sample size, where you sampled different x's and y's. You know, it could be head size and weight, or length and weight, and so forth. If they're correlated, then this probability will be significant if your sample size is large enough. These are some very practical things. And now let's put these in the context of a particular class of distribution. Now, most of those did not require that we state what kind of distribution, but there's a big interesting set which are roughly bell-shaped curves. And I've rigged this so that these three wildly different types of distribution happen to give similar curves. And you'll see in the next couple of slides how I rigged it, but basically, this is the normal distribution, the Poisson distribution and the binomial. The binomial has a limited range. n goes from 0 to 40 in this case. It has a maximum n of 40. The Poisson has a mean, which in this case is 20. The normal distribution has a mean that's similar. The normal distribution can have any range, and its standard deviation is set here to be the square root of the mean of the Poisson distribution. That's how you can rig these to be similar to one another. I think time is not going to permit me to go through all this in detail, but you will cover these in your statistics sections if you don't already know it. But suffice it to say that the binomial distribution limits x. It has to be an integer, and the integer is limited to going from 0 to n. This distinguishes it from the Poisson distribution, where x goes from 0 to infinity, and the normal curve, where it can go from negative infinity to positive infinity. The binomial and Poisson are discrete. They happen at integers, while the Gaussians are continuous.
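The "rigging" above can be checked numerically. A sketch with illustrative parameters of my own choosing (n = 1000, p = 0.02, so the mean n*p is 20, matching the slide's Poisson mean): for large n and small p, the binomial, the Poisson with the same mean, and the normal with variance n*p*(1-p) all nearly coincide near the mean.

```python
import math

def binomial_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(mu, k):
    return math.exp(-mu) * mu**k / math.factorial(k)

def normal_pdf(mu, sigma, x):
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

n, p = 1000, 0.02       # illustrative parameters; mean n*p = 20
k = 20                  # evaluate all three right at the mean
b = binomial_pmf(n, p, k)
po = poisson_pmf(n * p, k)
no = normal_pdf(n * p, math.sqrt(n * p * (1 - p)), k)
print(b, po, no)        # all three come out close to 0.09
```

With the slide's actual binomial (n = 40, p = 1/2) the Poisson match is looser, since p is not small there; the match in the figure was rigged by setting the means and spreads to agree.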
The way you calculate this is, the probability of an individual event happening has probability p, say 0.01 in the previous slide. And then getting exactly x of those is p to the x power as a first approximation. But there are actually two cases here, hence the name binomial. If there were multiple cases, it would be multinomial. But the two cases are basically p and 1 minus p. They have to sum to 1. All these probabilities have to sum to 1. So now you have the probability that you have exactly x, and the leftovers go into the 1 minus p term. But then you also have to correct for the number of different ways that you could get this, the number of combinations, which is the total number of possibilities choosing x at a time. And then again, the leftover is n minus x. And this is n factorial over x factorial times n minus x factorial. This is the number of combinations. And so now the binomial distribution is this, and the sum of all the terms here has to add to 1. That's one of the properties of a probability distribution: if you think of all the possibilities, they add up to 1. So the sum of the binomial distribution over all the x's is 1. Now, just to remind you that computers are fallible, here's what happens when you take a fairly-- you know, x is equal to exactly 300 taken from a population of 700. A probability of 1 for a unit event. The probability of getting exactly 350 from that kind of bell-shaped curve is very small, but not 0. And Mathematica gets it right, and Excel guesses it at 0. Good guess, but wrong. Poisson: you now can and must go out to positive infinity. And there, you often will make the approximation that for large n and small probability in the binomial we've been talking about-- n is the total number of objects you're looking at, you're choosing x from, and p is the unit probability of each of those-- the mean is now approximately n times p, and the binomial and Poisson are very similar. That's why they look similar in that plot.
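As a sketch of these formulas (my own code, not from the course materials), the binomial term C(n, x) * p^x * (1 - p)^(n - x) and the Poisson term e^(-mu) * mu^x / x! can be written directly with Python's standard library, and you can check both the sum-to-1 property and the large-n, small-p approximation:

```python
import math

def binom_pmf(x, n, p):
    # number of combinations, times p^x, times the (1 - p) leftovers
    return math.comb(n, x) * p**x * (1.0 - p) ** (n - x)

def poisson_pmf(x, mu):
    # e^(-mu) * mu^x / x!
    return math.exp(-mu) * mu**x / math.factorial(x)

# Property of any probability distribution: all the terms add to 1.
total = sum(binom_pmf(x, 40, 0.5) for x in range(41))
print(total)                        # approximately 1.0

# For large n and small p, binomial is close to Poisson with mu = n * p.
n, p = 1000, 0.01
print(binom_pmf(10, n, p))          # approximately 0.126
print(poisson_pmf(10, n * p))       # approximately 0.125
```

Unlike the spreadsheet example in the lecture, `math.comb` works in exact integer arithmetic, so the combinatorial factor never underflows to a wrong 0.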
And here's some practical magic you can do with the Poisson. If you have a library of [INAUDIBLE] or combinatorial chemistry, or genomic clones and so forth, and x is the number of hits, and you want that to be greater than 0 so your thesis can proceed, you want the 0-hit term, e to the minus mu, to be very small. So you want the mean to be greater than 1 or 2 or maybe even more-- if the mean is 10, the probability that for a given experiment you'll have 0 hits is very small. And you can estimate this from the number of 1, 2, and 3 hits you get. You can estimate the 0-hit term, and you can estimate whether it actually fits a Poisson or not. The final member of this trio is the normal. Now you go from negative infinity to positive infinity. It's not just 0 to n or 0 to infinity. And it's continuous. That means everything takes a p. So now instead of summing up to 1, you integrate up to 1, because now the little delta x's are infinitely small. And so now, here's an exponential of a quadratic, just like the ones we were talking about earlier. So this is a negative quadratic. And that gives you a nice little bell-shaped curve. And the square root of 2 pi sigma squared is a normalization, so it does actually integrate to 1. Here, another approximation sometimes applied is that when n times p times q is large, the normal is very similar to the binomial. People will abuse this and use one of these three distributions in place of another when it isn't appropriate. And we'll give some examples as we go. So back to this calculation. If we apply the binomial that we had in those previous slides-- and I urge you to do this as an exercise. It's not on the problem set, but just do it. Just getting any 46 chromosomes-- the right number of chromosomes-- is about 8%. That's not too bad.
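The zero-hit trick in code (a sketch; `mu` stands for the hypothetical mean number of hits in your library screen):

```python
import math

def p_zero_hits(mu):
    # The Poisson x = 0 term: probability of seeing no hits at all.
    return math.exp(-mu)

for mu in (1, 2, 10):
    print(mu, p_zero_hits(mu))
# mu = 1:  about 0.37, so over a third of screens come up empty;
# mu = 10: about 4.5e-05, essentially guaranteed at least one hit.
```

This is why you aim for an expected hit count well above 1 before running the experiment.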
But it's still fairly lethal, because you really want exactly the correct 46, which is 0.5 to the 46th power, which is infinitesimally unlikely to happen at random, which gets back to the point made by the audience here that this is not random. But you can't fight the fact that single-molecule events are stochastic. So you have to have a lot of events adding up, energy being input, to overcome that. We have selection that's optimizing this over long periods of time. We can use the random numbers that underlie this for simulations of these stochastic events, and also for permutation statistics: when you have some data and you want to know whether it's significant or not, you can do a kind of Monte Carlo simulation of it. Here's how you code it in a couple of different languages: Perl, Excel, Fortran 77, Mathematica. Even though you can't evaluate it by looking at these numbers on the screen, trust me, there are bad random number generators. There are random number generators which are not very random, OK. Where they come from is a remainder operation, applied to very special numbers. You'll have to look these up in this reference to get a full feeling for it. But these are deterministic formulas that give you random numbers. It's not really the same as flipping a coin. The computer actually will give you the same random numbers over and over again unless you do something very special. And typically, these give a uniform distribution between 0 and 1, or over some integer range. And then you can turn it into a normal distribution with the kind of trick shown here, where you make a transformation. There's a difference between a uniform random distribution and a bell-shaped one. And you can generate both of them just with this slide alone. So we come full circle back to these three different bell curves. They have very different properties and ranges of application: binomial only when you have a limited range, Poisson when the range runs from 0 to positive infinity.
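The transcript doesn't preserve the slide's exact transformation, but the Box-Muller transform is one standard trick of this kind: it turns a pair of uniform(0, 1) draws into a standard-normal draw. A seeded sketch:

```python
import math
import random

def box_muller(rng):
    # Two uniform draws in, one standard-normal draw out.
    u1 = 1.0 - rng.random()          # shift into (0, 1] so log(u1) is safe
    u2 = rng.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

rng = random.Random(42)              # seeded: the same "random" stream every run
samples = [box_muller(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))   # close to 0 and 1
```

The explicit seed illustrates the deterministic point above: rerun it and you get the identical "random" numbers.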
Normal, negative, positive and continuous. Thank you for participating in this. These are the topics that we covered. See you back here in a week. Please hand in your questionnaires, and the sections should be assigned by Thursday or Friday. STUDENT: Thank you.
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 / 7B_Protein_1_3D_Structural_Genomics_Homology_Catalytic_and_Regulatory_Dynamics_Fun.txt

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license, and MIT OpenCourseWare in general, is available at MIT.edu. PROFESSOR: OK. Welcome back to the second half. We'll pick up on this somewhat pessimistic note of how one goes from a very accurate sequence to a probably less accurate, speculative ligand specificity, to how we actually do get a 3D structure and ligand specificity, and some of the powerful methods that are shared by the computational tools. One of the things we want to do is, if possible, find a homologue. It may be a distant homologue or a very close one. If it's very close-- as we saw from that previous slide, the highest accuracy comes from homologues that are 80% or 90% identical. As we get further and further away, remember we had to-- first, you could use exact matches. And then you had to use dynamic programming, and then hidden Markov models. And finally, at the bottom of this Slide 37, we resort to threading, where a database of three-dimensional structures is searched with your favorite sequence. And you not only search through the database of structures, but for each structure, you think of every way you could thread your sequence onto that complicated 3D structure. And some of the threading positions, with insertions, deletions, and offsets, will be better than others. They will fit that three-dimensional structure. Some will cause clashes, where the three-dimensional structure you're searching through and the particular thread position you've got cause two bulky groups to lie on top of each other in space in a way that's hard to relieve the stress. And so, that threading is probably the ultimate in getting very distant relationships.
It is limited by the fact that it really only works if one of the two proteins has a three-dimensional structure. But you can search the database of sequences against a database of 3D structures. The antidote to the limitation of not having enough three-dimensional structures is to launch a project, like the Genome Project, to get all of the three-dimensional structures. Now, is that finite? Well, certainly for a particular genome, that number is less than or equal to the number of proteins in the proteome for that particular organism. But if we look at organisms as a whole, where we don't really even know how many organisms there are on Earth, then that number may be larger. But some people estimate that it's less than 10,000 basic folds, where a fold is the scaffolding: a certain number of alpha helices, betas and turns in a particular order, in a particular geometry. Once you have that fold, any amino acid sequence within 35% amino acid identity of it can be modeled-- and you'll see, as this last section of this lecture goes along, why it's around 35%. The goal is to saturate the three-dimensional structure space, so that every sequence is within at least 35% identity of one of those structures. This is somewhat conjectural, unlike the Human Genome Project, where we knew we had 3 billion bases to sequence. Here, we hope that we have 10,000 basic folds. And we hope that 35% amino acid sequence identity will be enough to do homology modeling. OK? It's not. It's not currently. And the candidates for this have to be prioritized in some way. And remember, we had prioritization for drug targets in the previous slide. And here, prioritization for structural genomics is similar. But in addition, you want them to represent the largest family you can get, but not have been previously solved. And for some reason, they're excluding transmembrane proteins.
Now, this is a very important class, as we'll see in the next slide. Because the goals stated up at the top of this Slide 38-- assigning functions, and interpreting disease-related polymorphisms and drug targets, and so on-- certainly apply to membrane proteins as well. And there are reasons that we've already alluded to for looking specifically at programming cells via membrane proteins. This is where cell-cell interactions occur. This is where adhesion, motility, and immune recognition occur. These all occur without getting inside of the cell. This is a major class of drug targets. And furthermore, it's not as if this is an impossible class of proteins to solve. Actually, the three-dimensional structural databases-- more about that in a moment-- do contain these things. They're certainly the most underrepresented class, but there are plenty of examples. And there are two major classes. One of them is soluble fragments of fibrous or membrane proteins. Some of the fibrous proteins are excluded as well. Here you'll use a protease to cleave off, possibly, the tiny piece that makes it insoluble, maybe a little anchor into the membrane. And all the rest of it now behaves like a soluble protein. And we know how to solve soluble proteins. The other class is integral membrane proteins, like the bacterial one on the right-hand side of the image, where you can see the red alpha helices go back and forth across the membrane among the gray lipids. There's no way you could clip off a little piece of this and have a major fraction of it left to solve. But this was solved in the membrane. And you can see the little blue water molecules going through that channel. That channel is responsible for proton pumping, which can be part of the ATP production process. But there are many other classes-- redox proteins, toxins, ion channels, photosynthesis and phototransduction, and so on.
The G-protein-coupled receptor class is a particularly important drug target. ABC proteins and transporters have also been solved. So given that this is an underrepresented class, and given that the structural genomics project will not necessarily target these in a rapid manner, what is the current state of affairs for computational prediction of the transmembrane regions of proteins? And actually, I would say the prospects here are fairly favorable compared to some of the least favorable ones in that very pessimistic slide a few back. Here, you can get, as indicated in this JMP paper, transmembrane helices-- and there can also be transmembrane beta strands as well-- identified with accuracy greater than 99%. And remember, this is basically saying that you correctly identified those. And then you also have false predictions. There'll be a number of peptide regions of known proteins which are incorrectly predicted to be transmembrane. And this is a tolerable false prediction rate of 17% to 43%, given a set of soluble proteins as a negative learning set. Now, merely knowing that a particular segment of protein is transmembrane is a big step in terms of identifying its function. But to get further functional characterization, we need things like ligand binding, which we've already addressed. And if you look at some of these quotes, you can see that a lot of the emphasis is on display and cataloging, and a hopeful expectation that we'll be able to move from rough three-dimensional structures to ligand-binding specificity. But where do the three-dimensional structures that we do have-- the ones we do believe, that do tell us about the exact geometry of ligand binding-- come from? Where do they come from? And how do we compute on them? How do we read them? How do our computer programs read them? Well, this is a typical file of a three-dimensional structure.
This happens to be one that we will show-- the three-dimensional structure-- at the very end of this talk. It is the human estrogen receptor. And you can see, the first line says that it is a complex between a protein and a DNA molecule. And the molecule is the estrogen receptor. The third line down is the resolution. This is a technical description of the X-ray diffraction pattern. It gives you an upper limit to how precise the structure is. The coordinates are going to be more precise than 2.4 angstroms, depending on how much statistical oversampling you have, and how good your computer program is at enforcing the chemical constraints. A typical precision for a 2.4 angstrom protein structure might be on the order of 0.3 angstroms, maybe eight times better than the nominal resolution. But that's an important number, unambiguously determined in the process of collecting the data. So when you look at the literature, look at this number. And look at the next number down, which is the R value, which is not a measure of resolution, but a measure of goodness of fit between the model-- the model being the X, Y, Z-coordinates of your atoms-- and the data. And we'll have a slide coming up soon on how this R value is calculated. OK. The next line down begins the sequence. And if you have a multi-chain sequence-- here I've cut out some lines-- there are many lines of sequence for the protein in this three-letter code, and many lines of sequence for the nucleic acid in this one-letter code. And then, additional chemical parts of the structure. Remember, the structure is complicated. It's not just protein. Here it has nucleic acid, it has zinc, it has water molecules, sometimes various other things. For each of these molecules, if you can find it in the structure, you will determine the X, Y and Z-coordinates. The next part tells you the secondary structure. Remember that there are three basic types-- alpha helices, beta sheets, and coils.
And these are described-- and again, I'm just showing you one example line of each; there's a long list for each of these-- where they've been identified either manually or by computational automation from the structure. And these can be useful as a summary of the structure. And then, here's the real meat of the structure: the lines that begin with the word ATOM. This is the position of nitrogen atom number one in methionine, which is amino acid number one in the A chain. So it's N, MET, A. The A chain happens to be the protein chain. And then, following that is residue number one. And then, the X, Y, Z coordinates, roughly 50, 24, 79. Then an occupancy factor of one, which is almost always one. And then a B factor, 60, which is representative of how far from that X, Y, Z value the atom can deviate. That's a square deviation term. And it absorbs the thermal motion of that atom and various structural defects. So it gives you some idea of the disorder of that atom. And then the last couple of records have to do with what atom is connected to what atom in the structure. In a certain sense, those can often be inferred just by the distance between atoms in the structure. Now on the far right-hand side, it's just the record number and the shorthand for the structure, which is 1HCQ. 1HCQ refers to the human estrogen receptor. So that's the very dry way it appears when you download it from the database, PDB or RCSB. Then when you display it-- while you're solving it, if you're an NMR or X-ray crystallographer, or possibly when you display it from the databases-- the two different cultures, as in [INAUDIBLE] on the left, tend to describe their structures as multiple chain tracings, because they want to either express their uncertainty about the structure, or they want to brag about how they know something about the dynamics. Whatever; you have multiple chains which overlap here in different colors, indicating some of the uncertainty or dynamics of each major atom.
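Since the ATOM records are fixed-width, a program reads them by column position rather than by splitting on spaces. A minimal Python sketch (the column ranges follow the published PDB format; the example line is a made-up record shaped like the one just described, not copied from the actual file):

```python
def parse_atom(line):
    """Parse one fixed-width ATOM record from a PDB file.
    The spec gives 1-based column ranges; the slices below are 0-based."""
    return {
        "serial":    int(line[6:11]),
        "name":      line[12:16].strip(),
        "resname":   line[17:20].strip(),
        "chain":     line[21],
        "resseq":    int(line[22:26]),
        "x":         float(line[30:38]),
        "y":         float(line[38:46]),
        "z":         float(line[46:54]),
        "occupancy": float(line[54:60]),
        "bfactor":   float(line[60:66]),
    }

# Hypothetical record: nitrogen atom 1 of MET 1 in chain A, B factor 60.
line = ("ATOM      1  N   MET A   1    "
        "  50.000  24.000  79.000  1.00 60.00")
atom = parse_atom(line)
print(atom["name"], atom["chain"], atom["bfactor"])   # N A 60.0
```

Fixed columns are why PDB parsers break if a file's spacing is altered: "N" in column 14 versus 13 distinguishes a nitrogen from, say, a sodium.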
Sometimes you'll show all the atoms, as on the right, or you'll just show the major atoms, like the carbon alphas, which are the centers of each amino acid as you go along, on the left. On the right is the way an X-ray crystallographer might show the fit to the data. The model is a stick figure connecting the atoms as circles. And the mesh work is the electron density, which you can observe once you have all the X-ray data and the model, or which you can calculate once you have the model. From the model, plus the known physics of the electron density of each of the atoms, you can calculate electron density. Now you can compare the calculated electron density with the observed. Or you can compare the calculated scattering with the observed. Typically, it's done in the scattering, which is the Fourier transform of the electron density. And that's all this is. The electron density is indicated by rho here in the middle of this formula. And the Fourier transform is just this integral of rho with the phasing information, the phasing of the light waves. These waves, just like a wave on the ocean, have a phase: whether it's up in a crest or down in a trough, and by how much. And so the product with the electron density rho, which is a function of x, y, and z, all three coordinates, is summed by these integrals-- it's a continuous function-- from 0 to 1 in x, y, and z. The reason it's 0 to 1 is that this is a repeating structure. It has a little rectangular cube of space around it, which repeats. And so all you really need to do to calculate the entire electron density is to think about this little cube, which goes from 0 to 1 in those arbitrary units. So that's how you can get from one space to another: from the electron density to the scattering that you actually observe when you shine X-ray light upon a repeating crystal structure. But now you want to adjust the model, adjust those atoms in the previous slide, so that you can maximize the fit.
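In the model-based direction, the calculated scattering for one reflection is a discrete Fourier sum over the atoms in that 0-to-1 repeating cube. A toy sketch (made-up atoms and scattering factors; real refinement programs also fold in B factors and crystal symmetry):

```python
import cmath

def structure_factor(hkl, atoms):
    """Calculated scattering F for one reflection (h, k, l): a sum over
    atoms of scattering factor f_j times a phase term, the discrete form
    of the Fourier transform of the electron density over the unit cell.
    atoms: list of (f_j, x, y, z) with fractional (0-to-1) coordinates."""
    h, k, l = hkl
    return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for f, x, y, z in atoms)

# Toy cell: two identical atoms related by a half-cell translation.
atoms = [(1.0, 0.1, 0.2, 0.3), (1.0, 0.6, 0.7, 0.8)]
# For this arrangement, reflections with odd h + k + l cancel exactly:
print(abs(structure_factor((1, 0, 0), atoms)))   # approximately 0
print(abs(structure_factor((2, 0, 0), atoms)))   # approximately 2
```

The cancellation in the example is the phase effect at work: the two waves arrive half a cycle apart and destroy each other, which is why such "systematic absences" reveal the crystal's internal symmetry.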
That is to say, minimize the difference between the observed scattering, Fo, and the calculated scattering, Fc. Because you know the scattering of each atom, and you know that you're trying to determine the position of each atom. The position of the atom might be a parameter, P, which you adjust a small bit at a time. And that change can be approximated by what's called a Taylor series expansion. Here, we're just taking the first term, which involves first derivatives; all subsequent terms involve second derivatives and higher. And those are close enough to 0 for this work that you drop them. And this basically says that if we're going to adjust this parameter, we can get a feeling for how fast to adjust it, which is based on the sensitivity of the scattering, the F: the derivative of F calculated with respect to each parameter. The parameters would be X, Y, and Z-coordinates, or they could be some kind of rotational parameters. So this is to give you a flavor for how it's actually done. This is how you actually get from the scattering off of a crystal. The crystal has an advantage. In principle, you can do a scattering experiment on single molecules. But for single molecules, the signal is too weak. And it's swamped out by the noise of random other photon events. So by having a large number of them in an ordered array, they all cohere and they basically do your statistics for you. They integrate, and you get the statistics of billions of molecules without having to observe each of the billions of molecules and then do the computation in the presence of a huge noise. So that's what the crystal is all about. NMR also requires billions of molecules. And so they both have the big demand of requiring large amounts of pure molecules. And that's one of the reasons that membrane proteins have been harder to get at. It's harder to get large amounts of pure membrane pieces.
Now these two methods, NMR, which I won't describe, and X-ray crystallography, which I barely described, share with the ab initio methods of protein structure prediction certain key computational components. And these are embedded in a combined system which does crystallography, NMR, and some of the molecular mechanics that keeps the structure chemically reasonable. You can imagine that if the structure started blowing apart, atoms going in weird directions, it could still minimize the function if you're in a local minimum. But if you hold the chemistry intact and satisfy what you know about the molecular mechanics of chemicals, then you actually can fit a structure from further away. Now here's the R factor I said we would come back to. As you do this refinement, as you adjust the positions of each of the atoms-- as your computer adjusts the position of each of these atoms-- you compare the observed scattering, Fo, with the calculated, Fc. These are in absolute values because the scattering measurement actually results in loss of phase information. So the actual things you measure are absolute values. And then you take the absolute value of the difference in order to make sure you sum a positive number. And then you normalize this, as we did before, to put it on a recognizable standard scale, where 0.4 means you have a very crude structure-- if you see this in the literature, you don't believe it. If it's less than 0.25, which the last structure was, then you believe it as pretty close to done. This is very analogous to a correlation coefficient. Remember, we had a linear correlation coefficient between two functions; here they would be observed and calculated. If they correlate well, then you're getting close to done. Correlating well is better than 0.7, in this case. And one way of reporting the similarity between two structures-- this is not a goodness of fit between model and data, this is a goodness of fit between two models-- is, atom by atom, you go through and you measure the distance between them.
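The R value just described is simple to compute once you have lists of observed and calculated amplitudes. A sketch with made-up numbers:

```python
def r_factor(f_obs, f_calc):
    """Crystallographic R value: normalized sum of absolute differences
    between observed and calculated scattering amplitudes. Around 0.4
    means a very crude model; below about 0.25, close to done."""
    num = sum(abs(abs(o) - abs(c)) for o, c in zip(f_obs, f_calc))
    den = sum(abs(o) for o in f_obs)
    return num / den

f_obs  = [100.0, 80.0, 60.0, 40.0]   # hypothetical observed amplitudes
f_calc = [ 95.0, 85.0, 55.0, 42.0]   # hypothetical calculated amplitudes
print(r_factor(f_obs, f_calc))       # approximately 0.061: a good fit
```

Note that the inner `abs(o)` and `abs(c)` mirror the loss of phase information: only amplitudes, not signed or complex values, enter the comparison.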
And that root mean squared deviation of all the distances, over all the atoms, or all the key atoms, the core atoms, the carbon alphas-- or maybe even a smaller core than that-- allows you to quote a root mean square deviation which has some meaning independent of how many atoms you have and what proteins you're looking at. Each of these tries to put things on a common scale so you can compare from structure to structure. Now, if we're going to do molecular mechanics, which is common to the computational empirical methods and the computational sequence-based methods, we need to talk about the side chains of the proteins. We've mainly been talking about the backbones. And just as a refresher, this is from the genetic code. Again, the blue are the positively charged, and so on through the negatively charged, and so forth. They have a chirality. It matters whether you're talking about L-amino acids or D-amino acids. The way you remember it-- this is just a mnemonic-- is that when the hydrogen points towards you, going clockwise it reads CO (the carbonyl), R (the side chain), N: CORN. And 19 of these amino acids have a chirality there. Glycine does not, because instead of an R it has a hydrogen, so it has two hydrogens. And two of the amino acids actually have two centers of chiral asymmetry: threonine, which has this side chain, and isoleucine. For them, the carbon alpha and the carbon beta are both asymmetric. And one of the very earliest exercises was done when the very first models of proteins, little peptides, were looked at. You can do this by hand, with some very simple, crude models. And you can go through systematically; there are three bonds along each repeat of a peptide. One of these is the peptide bond itself, which connects one amino acid to the next one. And that tends to be pretty rigid. It has partial double-bond character, and it tends to be in a trans configuration, 180 degrees. This is the rotation around the bond-- not the bond angle, but the rotation around the bond.
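The root mean square deviation described at the top of this passage is, in code, just the root of the mean of squared inter-atom distances between two superimposed models. A minimal sketch (a real comparison would first optimally superimpose the two structures, for example with the Kabsch algorithm; that step is omitted here):

```python
import math

def rmsd(coords_a, coords_b):
    """RMSD between two already-superimposed models: atom by atom,
    measure the distance, then take the root of the mean of squares."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Hypothetical carbon-alpha traces: model2 is model1 shifted 0.1 A in z.
model1 = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
model2 = [(0.0, 0.0, 0.1), (1.5, 0.0, 0.1), (3.0, 0.0, 0.1)]
print(rmsd(model1, model2))   # approximately 0.1: essentially identical
```

Because the sum runs only over the atoms you pass in, quoting RMSD over "all atoms" versus "core carbon alphas" can give noticeably different numbers for the same pair of structures, which is why papers state which set they used.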
And there are two other bonds that are not so rigid. So these are free, but they're constrained by the clashes that occur when you rotate around the bond: the side chains will clash with other parts of the protein. And so Ramachandran and colleagues went through systematically all the possible phi and psi angles-- these are those two free bonds. And this is shown here ranging over the full range of phi and psi on the horizontal and vertical axes. And you get these little orange regions: even with very bulky side-chain groups, you get these allowed regions. And these two allowed regions happen to coincide with two of the most popular motifs you find in proteins, which are the beta sheet and the alpha helix. There are other things, such as the 3-10 helix and various other structures that turn up. But those are by far the two most common. And the yellow shows how the regions get extended when you have smaller side chains that allow more parts of conformation space, as it's called, to be inspected. Now, that's a very crude thing that you can do with very simple stick figures. But as you get to more detailed analysis-- the ultimate application of all we know about physics, if we could compute it, would be quantum electrodynamics. This is way out of range for any molecule of the size that we're interested in. And then as you go down this list, you get progressively more approximate programs, until you get down to something which is barely computationally feasible for things the size of proteins, and a great approximation of all the quantum treatments above it. Every one of these is an approximation, but each one, as you go down, gets more and more approximate. And the main thing that's missing from molecular mechanics that's present in the next step up is the polarization of electrons. In other words, in molecular mechanics, you assume the electron clouds are basically spherical. And this is a huge loss, but it still is computationally very demanding.
So you don't get that asymmetric polarization that you get in hydrogen bonds and many other dipoles. So this is really basic physics. You can see the first line: force equals mass times acceleration, basic Newton's law. And Newton also introduced the calculus to us, so he would be very comfortable with the next line, which is that force can be redefined as the first derivative of energy with respect to position, or radius. And then mass is just mass. And we're introducing the subscript i for the atom: each atom gets its own Newton's law. And then acceleration is just the second derivative of position with respect to time. Now, what kind of time constants are we talking about here? This is the femtosecond range for atomic motion, 10 to the minus 15 seconds. And as you step through, as you update this kinetic procedure, you can do it in half time intervals, updating velocity and position every femtosecond or half femtosecond. So now, what's this energy term? This is what I alluded to in the previous slide as being very approximate. And it's semi-empirical: it is based on experiments, not entirely on first principles, nor even on the quantum approximations. You have, say, spectroscopic analyses that will show that the spring-like motion that two atoms can have when they're connected by a bond has a kind of Hooke's-law spring behavior. And that's the energy of the bond length, EB, in this sum of all the E's on slide 52. And E theta is the energy as a bond angle bends. And that's a spring-like force too. And then omega is the kind of torsion angle that we've been talking about in the phi-psi plot, the Ramachandran plot, just before. Van der Waals is the non-bonded contact, which can be either positive or negative. Actually, as shown down at the very bottom of the slide, there is a repulsive force, which goes as 1 over R to the 12th power, and an attractive force, which goes as 1 over R to the sixth power.
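Putting Newton's law and one of these energy terms together: here is a toy molecular dynamics sketch, integrating a single Hooke's-law bond with the half-time-interval update just described (velocity Verlet). All constants and units are illustrative, not from any real force field:

```python
# One bond-stretch term, E_b = 0.5 * k * (r - r0)^2, integrated in time.
k, r0, m = 500.0, 1.5, 12.0        # spring constant, rest length, mass
dt = 0.001                          # the "femtosecond" step, in toy units

def force(r):
    # Newton's redefinition: F = -dE/dr for the Hooke's-law bond energy
    return -k * (r - r0)

r, v = 1.6, 0.0                     # start stretched by 0.1, at rest
for _ in range(10_000):
    a = force(r) / m                # a = F/m, one Newton's law per atom
    v += 0.5 * a * dt               # half-step velocity update
    r += v * dt                     # full-step position update
    v += 0.5 * force(r) / m * dt    # second half-step velocity update

energy = 0.5 * m * v * v + 0.5 * k * (r - r0) ** 2
print(round(energy, 4))             # stays near the initial 0.5*k*0.1^2 = 2.5
```

The half-step structure is what keeps the total energy from drifting over many steps; a naive full-step (Euler) update would heat this toy bond up steadily.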
So as you get closer, it starts to get attractive, until you get this hard-sphere repulsion as you get a little bit closer still. Electrostatic interactions are the longest-range effects. All the covalent terms-- B and theta and omega-- are short range. Van der Waals is short range. Electrostatics is slightly longer range because it's a 1 over R, where R is the distance between the two atoms. And those are the main terms that enter into all molecular mechanics, whether it's used in crystallography or in ab initio prediction. Now, this is the state of the art for ab initio. The most recent CASP competition resulted in a very clear winner, by some criteria at least: the Baker Lab here-- the URL is down here-- out at around 30 standard deviations away from the mean in terms of the score for the number of correct predictions, where the mean is close to zero. And even with this huge advance for the field in prediction, still a typical RMS deviation between the real structure-- which was kept hidden from sight, known but not to the competitors, until it was revealed-- and the prediction was 6, or 4, or 5, in that range, depending on the structure and whether you include all the atoms or just the core ones. And this is not adequate, as we saw in that slide, actually from the same group, earlier on. Another way of looking at this is-- those were predicted structures-- these are now observed structures. The purple is comparing two structures, both of which were done by X-ray crystallography. And along the red axis here is sequence identity, ranging from 0 to close to 100%, say 96-plus percent. And the green axis is the root mean square deviation between structure one and structure two. And you can see that-- think of this purple curve as starting in the lower right, where you have very high sequence identity and less than 1 angstrom RMS deviation.
That means that when you solve two proteins that are very similar in sequence, you will get very similar structures. That's good. That bodes well for homology modeling, although that itself is not homology modeling. Then as you go down in sequence identity, the purple curve starts to slope up and up until it starts curving up towards 2.74 and beyond. It gets harder and harder to do these structural alignments. And so 4 angstroms is the sort of deviation you would get from homology modeling at less than 20% or 30% sequence identity. And this is what I said earlier about why we're trying to get enough proteins populating the space-- this is all known proteins being compared here-- so that you never have to go below 35%, into this twilight zone, where you really can't make good-- where you don't find good RMS deviations between two known crystal structures. Now, protein dynamics using the molecular mechanics approximation we talked about can be applied not only to predict a static structure or a series of steps in a protein process, but also the dynamics of folding from a completely unfolded protein, as it might be coming off the ribosome. And this is something for which there are relatively few experimental methods. And so this is clearly a valuable contribution, but there's a problem with doing a theoretical calculation that's hard to empirically verify. But in any case, this is one of the larger tasks, and IBM and others are sinking significant resources and infrastructure into this. Doing your femtosecond time scale over a one-microsecond simulation-- you can easily do the math, that's 10 to the minus 6 divided by 10 to the minus 15-- is about 10 to the ninth such steps, each of which involves that big calculation we just went through, with all the energy terms. But that's been done for this. And you can see the blue and the red represent the calculated and the observed structure at one point in the dynamic simulation.
When you have a protein three dimensional structure, you can try to dock it with small molecules. This could be easier, in principle, because you can keep both the small molecule and the protein relatively rigid as you dock them. There has to be some flexibility, hence the name, Flex, for one of these programs. And overall, the results are intriguing enough that you might want to use it as an alternative in the few cases where you have the three dimensional structure of a protein, but for some reason you can't solve the three dimensional structure of the complex. But you must remember that actually, even though we cited that solving a protein structure might be $100,000, solving a complex once you have the protein structure is actually considerably less than that. But in any case, this is encouraging, where you have on the order of 0.25 to 1.84 angstroms as a root mean squared deviation between the predicted and the experimental binding modes of the small molecule. You can imagine that to be off by 1.8 angstroms, it must be docking in roughly the right pocket, but maybe at the wrong angle, or maybe slightly off. So the last topic is the issue of crosstalk. As we talk about protein three dimensional structures, we try to find homologs. And we often find homologs within an organism, paralogs. And these paralogs and alternative splice forms of a protein are potential toxic side reactions of a particular drug. And you can see that many of these drugs are aimed at family members. For example, the top two are part of the steroid binding family, which we have already introduced once and will be in an upcoming slide. And when you consider that that particular class of proteins interacts both with a small molecule, which is either a natural or artificial steroid, or a thyroid hormone, which is a steroid-like compound, and it binds to a target nucleic acid-- both the nucleic acid and the small molecule have potential for crosstalk.
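The RMS deviation used throughout these structure and docking comparisons is simple to compute once the two coordinate sets are superimposed. Here is a minimal Python sketch with made-up coordinates; a real comparison would first do an optimal superposition (e.g. the Kabsch algorithm), which is omitted here:

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two pre-aligned coordinate sets."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Toy example: a "predicted" pose shifted 1 angstrom along x from the
# "experimental" one, so every atom deviates by exactly 1 angstrom.
experimental = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
predicted = [(x + 1.0, y, z) for (x, y, z) in experimental]
print(rmsd(experimental, predicted))  # 1.0
```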
And here is the nucleic acid part of the story. And the next slide will show the small molecule part of the story. But for the nucleic acid part, you have two protein domains similar to one another. This is another example of the symmetry that we started this talk and ended last talk with. The symmetry here is, you have these two sites that can be direct or inverted repeats, separated by little spacers here. So the DNA is in yellow and the little spacers are in the gray and CPK colors. And the protein domains are in green and white, where the green and white are structurally similar to one another. It's hard for you to appreciate them going around like that. This is to emphasize the direct or inverted repeat here. Now that's the DNA interaction. And this is the ligand binding. You can see the estradiol is the small, yellow ligand. And the tamoxifen, which is the larger ligand-- this is something that's important in treating breast cancer that might be responsive to estrogen-binding drugs. So this is the part of the protein that has two parts, or three parts here. That's the binding domain. The little red thing is an activator peptide, and then there's the DNA binding component. Now what crosstalk do we have here? You can see that this wide variety of different steroid-like protein binding domains, they bind vitamin D3, retinoic acids such as those that occur in developmental processes and in vision, thyroid hormone, which regulates our metabolism, and estrogen, testosterone, and so forth. All of these things have fairly similar small molecule binding sites. And the DNA sequences they bind are these half sites, which are very closely conserved in all the members of this big family. And one of the main differences is the distance between these can vary. And the distance here is indicated on the far left, lower left here. DR3 means direct repeat with three nucleotides in between those two half sites.
IR0 means an inverted repeat with 0 nucleotides between the half sites. DR15 is a direct repeat with 15 nucleotides, and so on. And you see each family member has a distinct ligand and nucleic acid, although there's a lot of similarity among the ligands and a lot of similarity among the nucleic acids. How do we-- last line of this slide 61-- target one member of this protein family or other protein families? In some cases, you will have complete artistic control, not only over the small molecule, but over the protein itself. If you have a small molecule that looks like ATP, you can inhibit all sorts of ATP binding proteins. If you're lucky, you can inhibit a specific class of ATP binding proteins. But knocking out a particular member of a class is hard. And you can see here on the right hand side, these three chemical structures. The adenine part is these five and six membered fused rings. And attached to them are the side chains. The first one, the all black structure, is a known inhibitor of protein kinases in general. And the red additions are how to make that a little bit bulkier so that it will no longer bind to protein kinases in general. Now why would you want to make this inhibitor not bind to protein kinases? Well now, if it doesn't bind any protein kinase very well because it's too bulky, it doesn't fit anymore, then you can carve an amino acid out of one of the protein kinases by doing homologous recombination or transgenically mutating the particular nucleic acid encoding that gene. And you will have this ability to manipulate both the chemical and the protein target in cases where-- as we'll get to in the last three lectures, where we're analyzing systems biology networks-- you want to be able to target one particular protein at a time by having a known ligand-protein interaction where you minimize the crosstalk by engineering the specific interaction. You start with a specific interaction for the class and then you engineer it so it hits one of them.
So that way you can do a time course, say, just knocking out that particular protein quickly or letting it come back. And this shows the results down at the bottom here. You start out with these two different kinases: CDK2, involved in cell cycling, and CaM kinase II. Both of these would bind to the original black inhibitor. And hence, there would be significant crosstalk. And here, the inhibitory doses in micromolar are shown in the three columns here for the three different compounds, drawn underneath each of them. And you can see that the lower the number, the better it binds. So when you take these binding pockets and carve them out-- that's what the little wedge cut out represents for the two lowest ones, the CDK2 AS1 derivative and the CaM kinase II AS1 derivative-- now you can see, they have a much improved binding constant to even the bulkiest derivatives. And this is mediated by a threonine or phenylalanine at position 38 being changed to a glycine. Glycine is obviously smaller than a threonine or phenylalanine at that position and makes room for the drug, just in the same way that changing the tyrosine to a phenylalanine made room for the dideoxy terminator in the earlier example. So in summary, we have talked about protein three dimensional structure and how we can program proteins, basically. How we can use bits of proteins where we may not be able to predict, a priori-- from scratch-- how we get from a sequence to a ligand. But we can take parts that we know and rearrange them in interesting combinations. We can build up databases of binding constants to combinations of combinatorial libraries of nucleic acids, of peptides, of small molecules. And we can put these together in novel combinations that allow us to do network analysis and ask what protein does what event. So thank you. Until next time.
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002: 9A_Networks_1_Systems_Biology_Metabolic_Kinetic_Flux_Balance_Optimization_Methods.txt

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: OK. Well, even though we've been hinting at networks pretty heavily all the way through the course, these are the three lectures where we actually take it on. We really started this at the end of last week's lecture, so-called protein 2, where in the process of talking about protein modifications and quantitating metabolites and their interactions with proteins, we started talking about the sorts of sources of data that you would have that would allow you to get at a quantitative analysis of protein networks, such as the red blood cell. So we're going to pick up on that theme by talking about macroscopic continuous concentration gradients, and then contrast that with mesoscopic, or discrete, molecular numbers. We're just going to very briefly touch upon the issues in discriminating between the stochastic modeling and the continuous modeling. And it's a very interesting connection between the red blood cell model, where you have a few examples of cooperativity with modest Hill coefficients-- to take that to an extreme case where you actually have bistability, where you have two stable modes which are separated by a very cooperative interaction. And then we'll talk about copy number control, another opportunity for doing modeling either by macroscopic continuous modeling or this stochastic modeling. And then, after the break, we'll talk about flux balance optimization, which I think is a really exciting and clever way of leveraging the little bits of information that you have about very complicated regulatory networks, biochemical networks. So I think we've kind of just barely touched upon this before.
The particular networks that are being studied-- how did they come to be studied? Why and how? And typically, what they have in common is that they have large genetic and/or biochemical kinetic data sets to go with them. Right now there is no model that describes all of the interesting aspects of an entire cell or an entire organism. Usually, there are little pieces of it. The closest to a whole cell is the red blood cell. And this is because there are no biopolymer synthesis components to it, no biopolymer synthesis. Today, we'll talk a little bit more about these two related topics, which are the cell division cycle and the segregation of chromosomes during cell division. Here, the key point is the critical nature of single molecules that come into play in the dividing cells. And in this, we'll be talking about bistability, how there's a decision either to take the next step in dividing the cell or not. And how that bistability can be achieved either stochastically, where you're dealing with the fluctuations of single molecules greatly affecting a switch, such as the phage lambda switch, or you can have it involving a large number of molecules, where stochasticity seems to play less of a role. And then, at the end, we'll talk about how we can do comparative metabolism, where we literally integrate genomics with a network model of biochemistry at many different levels. Of course, there's the genome encoding the components of that network. But there are also the systematic knockouts of genes and their effects on the network. Now, this slide is also review. But it puts in context where we'll mainly begin talking today. We'll talk a little bit about ordinary differential equations-- both at the beginning, under red blood cell, and in the bistability discussion-- how, even with just concentration and time, you can get these very interesting, highly cooperative behaviors.
And then we'll drop concentration and time by a steady-state approximation when we talk about flux balance in the second half. And of course, before, we were talking about molecular mechanics in the context of protein structure. And master equations are the way of looking at stochastic single molecules, which we'll mention in passing. Eventually, in the network discussions, we'll get to spatially inhomogeneous models, where you actually care about, or realize the importance of, where particular molecules lie in terms of their function. Now, what are the limits and problems in connecting the in vitro parameters that were so key last time and this time in developing a system model? By isolating particular molecules, you can cut things off, make the system simpler. But then there's the problem of reintegration. And here, this is more historical artifact than really critical to this discussion. And originally, enzyme kinetics would ignore the products for technical mathematical reasons. But as you can see from last time, we showed that you could represent those equations quite well and even make the measurements. In the presence of products, it would just be another few terms in the rate equations. More critical, however, is that including the product along with substrate in the measurements and the modeling is just the first step, because how many different products and substrates and other regulatory molecules might be involved that you don't know about initially? In addition, for the conditions for doing the in vitro measurements, it's hard to do them at the concentrations in cells. And in cells, there are nearly crystalline densities, as I say, up to 30% or so, which is basically the very high concentration of solute that occurs with proteins and other macromolecules. And the substrates, the small molecules, are typically in vast excess in in vitro reactions. But they're very close to equimolar in vivo because a lot of them are bound up with enzymes.
And you get interesting observations, such as this one mentioned at the bottom, where a chemical reaction which is spontaneous in solution, the epimerization of galactose, does not occur in normal cells of E. coli unless they have an enzyme that catalyzes this normally spontaneous reaction. So this is curious: it happens in solution, but doesn't happen in the cell unless you have the enzyme. OK. So with all those caveats in mind, and recognizing that even though this model has more measured parameters than almost any other cellular model, they were all measured in vitro, with the caveats that we just mentioned. And the function of all these networks is mainly to provide the redox that keeps the hemoglobin reduced and the ATP that keeps the osmotic pressure under control. And also-- even though we've shown a little structure in this network, the distribution is assumed to be fairly continuous within the cell, which is a fairly good approximation in this case. So this is not merely the stapling together of some kinetic in vitro parameters. We have other considerations in a real cell. We have the sort of physical parameters of the mass balance, energy balance, and redox balance. We've already mentioned energy and redox. But you have to have conservation of mass, as we'll see developed quite a bit more in the second half of this talk. In addition, there is physics, such as the osmotic pressure and electroneutrality. There are cells which do have transient departures from electroneutrality. But for the red blood cell, certainly, one of the goals is to stay very close to electroneutral and osmotically stable. You want to have as many non-adjustable constraints as possible. As in other modeling systems, if these are measurements, rather than adjustable model parameters, then it allows you to test the few hypotheses you have, and have them overdetermined, and look for contradictions, outliers.
And then, eventually, we'll see advantages to knowing the maximum fluxes-- the maximum rates that you can have in these complex networks. And we could incorporate gene regulation, as lots of wholesale, increasingly accurate data on gene regulation is becoming available. It would be nice to integrate these, because the expression of the proteins affects their activity, and the activity affects these fluxes. So these fluxes are represented here in slide number 8 as dx/dt, where each of the x sub i's represents one of these blue dots, a node in the network, where you have up to four basic processes that affect it. You can have synthesis steps and degradation steps. Synthesis produces it; transport can bring it into the cell. Degradation removes it. And it can be utilized, incorporated into the body of the cell-- you can think of this as a sink removing it from the free population of each x molecule. And this can be restated as a stoichiometric matrix S sub ij, which is mainly 1's and minus 1's. As you can see, in front of each of these fluxes-- synthesis, degradation, and so forth-- will be 1's and minus 1's that refer to the stoichiometry, and sometimes 2's and 0's. And then the transport is its own vector. And you'll see the utility of the stoichiometric matrix, where i is the metabolite number, and j is the reaction number or the enzyme number for all the possible reactions that can occur in a cell. These can be all possible reactions. Then you can toggle them on and off with mutations or by changing different cell types. Now, this particularly rich system, the red blood cell, has been modeled many times and continues to be modeled, since the mid '70s and now into the 2000s, starting originally just with glycolysis, later adding pentose phosphate, nucleotide metabolism, various pumps, osmotic considerations. Hemoglobin ligands have been treated from time to time, and less so issues having to do with [INAUDIBLE] and shape.
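The bookkeeping just described, a stoichiometric matrix S times a flux vector plus a transport term, can be sketched numerically. This toy three-metabolite, four-reaction network is hypothetical, chosen only to show the structure, and is not taken from the red blood cell model:

```python
import numpy as np

# Rows = metabolites x_i, columns = reactions j; entries S_ij are the
# stoichiometric coefficients, mostly 1's and minus 1's as in the lecture.
S = np.array([
    [ 1, -1,  0,  0],   # x1: made by reaction 1, consumed by reaction 2
    [ 0,  1, -1, -1],   # x2: made by reaction 2, lost to degradation and utilization
    [ 0,  0,  1,  0],   # x3: product of the degradation step
])
v = np.array([2.0, 2.0, 1.0, 1.0])       # flux through each reaction
transport = np.array([0.0, 0.0, -1.0])   # transport in/out of the cell, per metabolite

# dx/dt = S v + transport: each row sums synthesis minus the sinks.
dxdt = S @ v + transport
print(dxdt)  # [0. 0. 0.] -- a steady state: synthesis balances all the sinks
```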
No model includes all of these in one model, although it's really very close. But there are models that include all of the metabolism that we know and the transport and osmotic properties. Relatively few of those were made available broadly. But now they're increasingly being made available freely on the web, as models should be. The assumptions behind this are that-- like I said, not everything is modeled. Some of it is for mathematical convenience: in order to do the differential equations-- since there are vast ranges of time constants, from things that happen extremely quickly to things that happen extremely slowly-- typically, what you do is model with a window in the middle, where you'll say, things that happen very quickly can be treated as a pseudo-equilibrium, as we've listed in this middle line here on slide 10. Things that happen extremely slowly can be treated as a constant. If they happen over a period of years, then in the course of an experiment that might be hours, they can be treated as a constant or something that you systematically explore. And we'll typically be ignoring little pieces of the metabolism, like [INAUDIBLE] metabolism and calcium-- or, as data comes in from other systems, trying to treat them by homology or analogy. In addition, when we talk about a typical cell, this can mean that we assume homogeneous distribution of molecules within the cell and homogeneity from cell to cell within a particular organism. It also means there's a tendency to model a wild type without respect to polymorphism, although in the human population, for red blood cell function in particular, there's quite a literature on mutations that affect the functioning of the proteins within the red blood cells-- probably one of the best studied human genetic systems. OK. Surface area is not absolutely constant, although for the time being, it's modeled as such. These are some examples of a subset of it. This is a subset that refers to glycolysis.
You can see they all have the same form, where you have a change in a concentration, in moles per liter with respect to time, of some small molecule-- here glucose-6-phosphate. It's some synthesis rate, with the subscript being hexokinase. This is the upper left-hand corner. And then that's the synthesis minus the sinks, the degradation rates, which go through two other enzymes. Again, a reminder at the bottom-- you'll see this come up a few times. Each of these is of the form: the change in the concentration of some small molecule with respect to time is the sum of the synthesis, subtracting the degradation, transport, and utilization. OK. Now, remember we focused in on one little piece of this a couple of times now. This is a step that happens to be allosteric. The top part of this formula tends to be composed of substrates and products having an effect. And then this term in the denominator has a fourth power dependence on a variety of second-site effectors, including AMP. And you can see that the velocity is either hyperbolic, which is the upper curve-- which kind of goes smoothly up and plateaus-- while a sigmoid curve implies greater cooperativity, which can be affected by some of these second-site effectors. And we'll use this as a stepping stone to talking about the various ways of getting that sigmoid shape. We've already talked about how proteins can be multimers, dimers, tetramers, and so forth. That could be one way of achieving the sigmoid shape, via a conformation change which senses the second site. But we'll talk about another way in just a couple of slides. The kinetic expressions here have the form of the previous one. Most of them are simpler than the one in slide 12. The model has a total of 44 rate expressions. They have about five constants each, on average, so about 200 parameters.
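A rate expression of the form just described, synthesis minus sinks for glucose-6-phosphate, can be sketched like this. The Michaelis-Menten rate laws and every constant below are hypothetical placeholders for illustration, not the measured red-cell parameters:

```python
# Toy mass balance: d[G6P]/dt = v_hexokinase - v_glycolysis - v_pentose_phosphate.
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

def dG6P_dt(g6p, glucose):
    v_synthesis = michaelis_menten(glucose, vmax=1.0, km=0.1)   # hexokinase (hypothetical)
    v_glycolysis = michaelis_menten(g6p, vmax=0.7, km=0.05)     # one downstream sink
    v_pentose = michaelis_menten(g6p, vmax=0.3, km=0.05)        # the other sink
    return v_synthesis - v_glycolysis - v_pentose

# Simple Euler integration: the pool fills until synthesis roughly balances the sinks.
g6p = 0.0
for _ in range(10000):
    g6p += 0.001 * dG6P_dt(g6p, glucose=5.0)
print(0 < g6p < 2.5, abs(dG6P_dt(g6p, glucose=5.0)) < 0.05)  # True True
```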
These are not truly adjustable in the sense that they're determined from the in vitro reactions. What kind of assumptions? We've already mentioned the difference between in vitro and in vivo. We have the lingering question of how many effectors might there be that we don't know about. Typically, these in vitro experiments were done with the small number of substrates and products that you know about. But as a worst case scenario, Mike Savageau, you know, likes to trot out glutamine synthetase, which fortunately is not in this particular model. But it could be that there's an enzyme that's just as complicated but hasn't been studied as much. In the case of glutamine synthetase, there are three substrates. Remember, in the previous example, there were only two substrates and two products. Glutamine synthetase has three substrates, three products, and nine allosteric effectors, rather than the three or so in the previous example. So this gives a grand total of 15 different molecules you need to track. So the number of different measurements you might have to make, hypothetically-- no one's actually made this number of measurements. But Mike Savageau likes to point out that even if you only did four concentration points in this multi-dimensional space, you have 4 to the 15th measurements, or about a billion measurements. A billion isn't nearly as intimidating as it was when he made this statement in '76, but it still is not something that's routinely done. What other constraints? There are these physical chemical constraints of osmotic pressure and electroneutrality, here stated a little more explicitly. You have pi i equal to pi e. That means the pressure on the inside is equal to the pressure on the outside. That sounds like a good way to balance things out so that the cell doesn't explode.
And explicitly, what that means is the gas constant R, times the absolute temperature in degrees Kelvin, times the sum of the concentration components for the j molecules, going up to m chemical species, for i, standing for Interior, is equal to the same sum, the equivalent sum, for the subscript e, for Exterior. Electroneutrality has the same set of concentrations for the i, interior, and j molecules, where now z is the charge-- the same z that we had in m over z for the mass spectrometry. So, OK. Now, for some of the models I give you today, we will compare the calculated and the observed, as we have done before. And here it's shown a little bit differently than how I've done it before and how we'll do it later on. Typically, we would have observed on one axis and calculated on the other. In general, we're looking for outliers. And here we're sorting by the degree to which they deviate from observation. And so the deviation is going to be the observation minus the calculated. And the degree of deviation can either be normalized to the standard deviation-- which is basically normalizing it to how confident we are in the experimental measure-- or it can be normalized to the averaged value, which then becomes less dependent upon the accuracy of the experiment and more a fraction of the average value. And so we've sorted on the latter. And you can see that most of them deviate by less than twofold the average value and less than seven standard deviations in terms of the measurements. But the ones that are furthest to the right are clearly the ones that require the most attention, either in the experimental measurements or in the modeling. These are steady-state measures. This is kind of an abuse of a beautiful kinetic model. But it reflects the limited data that exists.
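The two physical constraints just stated, osmotic balance and electroneutrality, can be checked numerically. The interior and exterior species below (concentration, charge pairs) are illustrative made-up values, not measured red-cell concentrations:

```python
R = 8.314  # gas constant, J/(mol*K)
T = 310.0  # roughly body temperature, in kelvin

# Hypothetical species lists: (concentration in mol/L, charge z).
interior = [(0.140, +1), (0.010, +1), (0.145, -1), (0.005, -1)]
exterior = [(0.145, +1), (0.005, +1), (0.145, -1), (0.005, -1)]

def osmotic_pressure(species):
    # van't Hoff form: pi = R * T * sum of solute concentrations
    return R * T * sum(c for c, z in species)

def net_charge(species):
    # electroneutrality: sum of z_j * c_j should vanish
    return sum(c * z for c, z in species)

print(abs(osmotic_pressure(interior) - osmotic_pressure(exterior)) < 1e-9)  # True
print(abs(net_charge(interior)) < 1e-12)  # True
```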
And it's much easier to collect steady-state data, where basically for the red blood cell, every particular molecule-- even though there are fluxes in and out of every molecule-- the molecule concentration itself is staying constant. So if you're assuming each molecular concentration is staying constant with respect to time, you're just measuring steady-state levels. But if you're more interested in looking at the dynamics, the movie of how molecules will change if you perturb the system, you can think of a wide variety of different curves that the timecourses can take. And then the challenge is how do you represent them? You have 40-some different small molecules. You're tracking the concentrations. And then the timecourse can range over hours. But one way of doing it is doing pairs of substrates at a time, substrate a and b, and then monitoring the timecourse as a vector. Think of these as a series of little points along here. And let's look at slide 17, in the upper left, number 1. If, let's say, a is converted to b, so that a plus b is equal to some constant, you can see this slope of negative 1. This is exactly negative 1, because for every molecule of a that's consumed, a molecule of b is produced. And so when you see this perfect slope of negative 1, then that's the kind of relationship you expect between those two, even if you're randomly sampling a dynamic system. Here you're taking a timecourse. Number 2 is a pair of concentrations in equilibrium. You'll get an equilibrium constant, and the ratio here will not be negative 1. It will be some constant which determines that equilibrium. Then you can have two dynamically independent metabolites, as in quadrant 3 here. Basically, as you march through increasing b, a can stay constant because it doesn't really care. It doesn't respond to changes in b or changes that result in b changing. And then maybe some other set of dynamics will cause a to change, and b stays constant.
And if you sample enough of the timecourse, you may find that this fills up the entire space of concentrations available to a and b, showing no correlation. Another interesting type of phenomenon you can see is that not every possible concentration of a is accessible-- and it's not completely independent of b. You might start at a particular point in a time series, at this gap here in the lower right, and you start decreasing b and increasing a. And then, at some point, the dynamics of the entire network, not just a and b, contribute to a now taking a dive down and b increasing. And eventually, you return to that steady-state point, and you've described all the conditions that you might be able to achieve in this closed loop. So we're going to look at these kinds of phase diagrams-- the concentration of a versus the concentration of b-- with a series of time points that are color coded. These can be either lumped, as they are in this diagram, or, in the next slide, we'll see them separated out one metabolite at a time. But it's the same concept, whether you've got a group of metabolites involved in glycolysis lumped together, compared to, say, a group that are redox. In the lower left of each of these quadrants is this time series that we've been looking at. And the upper part is the correlation coefficients, color coded so that blue is a negative correlation, gray is a very significant positive correlation, and everything else is something in between. So what you see from these is, for example, here's this curve that's very close to the negative 1 slope, as if there's a conservation relation here between glycolysis and the adjacent steps. You see little loops, for example, in the lower left, adenine biosynthesis, in that row, and so on.
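The slope-of-negative-1 behavior from the first quadrant can be reproduced with a toy simulation of a single reversible interconversion A to B with a conserved total; the rate constants are arbitrary, chosen only to illustrate the phase-plane geometry:

```python
# Every molecule of A consumed appears as a molecule of B, so sampling (a, b)
# over the timecourse traces a line of slope exactly -1 in the phase plane.
k_fwd, k_rev = 1.0, 0.5
a, b = 1.0, 0.0
dt = 0.01
points = []
for _ in range(1000):
    flux = k_fwd * a - k_rev * b   # net conversion of A into B
    a -= flux * dt
    b += flux * dt                 # same amount added to B as removed from A
    points.append((a, b))

# The conserved total a + b is what forces the -1 slope everywhere.
total = points[0][0] + points[0][1]
print(all(abs(pa + pb - total) < 1e-9 for pa, pb in points))  # True
```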
You see examples of each of these kinds of behaviors here, where you're going from the red points-- these little dots in red-- to green to blue to yellow and ending in gray, in increasing time coarseness, starting with 0.1-hour resolution and ending with 300-hour resolution. This is lumping, where we're kind of looking at things like ATP and redox loads. Now, if we look at it one molecule at a time, you obviously get a more complex picture-- you get every possible pairwise combination of molecules. And you can see these full dynamics. Now, these are not data. These are all simulations. And unfortunately, we don't have that kind of dynamic resolution in experimental data just yet. But this gives you some idea. If you see some particularly interesting phenomenon here, then it might be a motivation for going in and looking at the data in more detail. OK. Now, we mentioned this difference between the ordinary hyperbolic curve-- in the lower right of this whole slide, the upper left of the insert-- and the sigmoid curve that you get from an allosteric interaction. And this whole cell is set up for transporting oxygen. And as oxygen concentration increases, for hemoglobin you'll either get this hyperbolic or sigmoid curve, as it gets increasingly sigmoid, increasingly cooperative, with increasing amounts of one of the intermediate metabolites, 2,3-diphosphoglycerate. And you can see, this is now the connection between the glycolytic pathway and the hemoglobin: the regulation is sensing the state of the glycolytic pathway. In addition, there are connections with the pH, which is also regulated in this, and the redox-- you can see just above it here, the hemoglobin going to the unproductive methemoglobin state. So you can see there are connections in the network between this ultimate function, which is transporting oxygen, and all these intermediate metabolism components. And it also brings us back to this topic of cooperativity and how it arises here.
And the hemoglobin has a tetrameric conformation state change, where there's a second site that binds this organic molecule, diphosphoglycerate. But another direction we can take-- so in the lower left-hand corner of slide 21 is the same icon again of how we get increasingly cooperative, from hyperbolic to sigmoidal, to the point where this becomes almost vertical and displaced from the origin, so that the cell, at a good point in response to a stimulus, will make a decision to commit to the next phase in cell division. This is similar to the cell division that we talked about earlier in the context of the microarray analysis of yeast cell division. Here we're talking about a Xenopus amphibian oocyte, which has nice large cells to do this kind of study. You need to decide to come out of G1 and commit to synthesizing DNA in the S phase, in the lower portion of the circle, where you get, now, two DNA molecules. And once you're convinced that you've finished replicating all your DNA, only at that point can you commit to mitosis, another major decision. And then you get two cells, and you go back, and you complete the cell cycle. The little timecourse at the bottom here should be reminiscent for you of the timecourse we had of RNA synthesis of various clusters of RNA. Here it's the DNA synthesis. You can see it ramps up in the red S phase and then ramps down in the M phase due to the creation of two new cells. But we want to talk about how do we make this as responsive as possible to, say, progesterone, or something that is signaling cells that might be waiting for long periods of time to complete the next step in a division cycle-- an external hormone stimulus saying that it's time to start the next step in the cell cycle. We want this to be displaced from the origin. You don't want it to be just flipping on and off irrespective of a stimulus. But when it does flip, you want it to go very quickly. So how would we model this? So look at the upper right-hand diagram of slide 22.
And you have a set of these oocytes kind of diagrammatically indicating their state, determined by the extent to which they are ready to commit to the next division state. Here we can think of the biomarker as the state of phosphorylation of a protein, MAP kinase. If it's phosphorylated, then it's committed to this division. And we can think of this as the black side of this gradient, going from white to black. But if you grind up a whole population of these oocytes and measure the total MAP kinase phosphorylation as a function of increasing stimulus, S-- in this case progesterone-- the response-- that is to say, the phosphorylated state and the commitment to mitosis-- will gradually increase, as indicated in the gradient model. But that's if you ground up all the cells. The other way of obtaining that result would be if each cell is making an all-or-none decision, and what happens is that the probability of a cell being in that all-or-none state changes with increasing stimulus of progesterone. That is the lower model and is, in fact, closer to reality, as indicated by the experiment at the bottom part of the slide here, where you have-- this is a part of a concentration curve where you're increasing the stimulus, progesterone. But at this particular stimulus, you can sample individual cells. There's enough of each cell that you can actually do proteomics on individual cells. And the proteomics here is a western blot. We mentioned this a couple of lectures back. And here you can see the two states of the MAP kinase. The phosphorylated state is the slower one electrophoretically; it's the upper band in this diagram. And the lower band is the unphosphorylated state. And you can see there's no example here of a cell which is in an intermediate state, where it has half and half or 40/60 of the two different protein forms.
However, if you, as a thought experiment, took all those cells and mushed them up, and ran it all in one lane, you would see a mixture. You would see all the intermediate states, and that would be a function of progesterone. So this is a warning, similar to ones that I've given before, that when you grind everything up and mix populations of cells or molecules, you need to be careful, because different cells may be in different states, different molecules may be in different states. And the average behavior is not the same as the individual behavior. But now, that's only part of the lesson here. The cells are going through this all-or-none process. We can monitor it by single-cell proteomics. But how do we model it? Well, in very abstract terms, the response here can be modeled with this Hill function, response = s^h / (k^h + s^h), where you have a stimulus, s, and some kinetic constant, k-- kind of like a Michaelis constant-- and basically, as s gets closer to k, it has a larger effect on the response. And that's nonlinear because you have the exponent h. The larger h is, the more nonlinear it is. So let's say that h is 1 in this little schematic to the right here. It's hyperbolic. No sigmoid character at all. In the case of hemoglobin and phosphofructokinase that we talked about in the red blood cell, it's more sigmoidal, like an h of 2.8, almost a cube law there. And in fact, even within this system, one of the steps that we'll talk about in the next slide-- where you have a stimulus of the Mos protein and a response of this same MAP kinase phosphorylation-- has a modest sigmoid Hill coefficient of 3, just like hemoglobin. But the overall response of MAP kinase to progesterone has an enormous exponent of 42. That means it's almost vertical, and it's displaced from the origin. How do we get that? OK. So here is a proposed model.
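The Hill function just described is simple enough to write out and probe directly; here is a minimal sketch, with the half-max constant k set to 1 purely for illustration:

```python
# Hill function from the lecture: response = s**h / (k**h + s**h).
# h = 1 gives the hyperbolic curve, h ~ 2.8 the hemoglobin-like sigmoid,
# and h = 42 the near-vertical, origin-displaced MAP kinase switch.
def hill(s, k, h):
    return s**h / (k**h + s**h)

for h in (1, 2.8, 42):
    # sample the response around the half-max stimulus k = 1
    curve = [round(hill(s, k=1.0, h=h), 3) for s in (0.5, 0.9, 1.0, 1.1, 2.0)]
    print(h, curve)
```

At h = 42 the response is essentially 0 just below k and essentially 1 just above it, which is the all-or-none behavior seen in the single-oocyte blots.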
And it's an interesting snapshot in the inevitable evolution of a model from something very primitive, which might have just been the Mos effect on MAP kinase, kind of as a direct effect here, which, as we said, has a Hill coefficient of about 3-- not threefold, because it's an exponent. But overall, it's a combination of two other factors. One is that you have a chain of modifiers, each close to saturation, meaning that each one has a slight sigmoid behavior. Each step on its own could be a hyperbolic function, such as, let's just say, this dotted line that's going up smoothly from 0, where it says "neither." That would be the effect if you just had a normal enzymatic reaction with no allostery, no feedback, no ultrasensitivity. If each of those is close to saturation-- meaning your substrate is very high, so you're going as fast as the enzyme will go-- and you have a chain of those, you can show through kinetic modeling that that will create a high sensitivity in the reaction. And that's what the furthest dotted curve close to the axis is, ultrasensitivity alone. And then, here, you have progesterone going in, in the upper left, affecting a complicated rate. Now, these rate constants don't necessarily mean a unitary, simple kinetic step. They can represent something as complicated as going from Amino Acids, AA, to a particular protein, Mos. And then the reverse reaction, k sub minus 1, is the degradation of Mos back to amino acids, which is not a reverse of the same enzyme set by any means. The next step, k2, is simpler. It's just Mos being phosphorylated by, actually, our friend MAP kinase in its phosphorylated state; the phosphorylated Mos in turn catalyzes another phosphorylation of another protein, which then positively stimulates MAP kinase itself. So this whole thing is a set of positive reactions.
Each of the phosphoproteins increases the enzymatic tendency for each of the other phosphoproteins to be produced. So you can see that this is kind of on a hair trigger. If any one of these phosphoproteins gets produced, then it'll increase all the other ones, and it'll be a very cooperative procedure. And that's the curve furthest towards the axis in the lower left-hand corner-- positive feedback alone causes this very great tendency to just jump up from 0 to a very high response with very little stimulus. Well, that's dangerous. You want it to be nearly vertical, but you don't want it to be nearly vertical at 0 stimulus, because that's unstable. So you want to move it over. And that's where the ultrasensitivity comes in, when you have this chain of modifiers. Putting both together gives the solid black line, which is shifted over. This assay was done with Mos, rather than progesterone, as the input. But you can see the overall increased cooperativity and the shifting to the right. So that's an example of how you can get this very high Hill coefficient and how you can get bistability without stochastics. You can imagine that you can have stochastic bistability. If you have one molecule in the cell, and either it's there or it's not, then you have bistability. You have two states for the cell. Either it's the cell with the molecule or without. But here you can see that even with a very large cell-- Xenopus oocytes being one of the largest cells-- and very large amounts of proteins-- enough that you can easily see them in proteomics-- you can still achieve bistability with the right kinetic model. Not every random model would achieve that high Hill coefficient. OK. So we're just going to briefly mention the other way of getting bistability, which is via stochastics, so small numbers of molecules.
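Before the stochastic route, the deterministic switch just described can be caricatured in a few lines: a species that activates its own production through a sigmoid (Hill) term. All the rate constants here are illustrative assumptions, not the lecture's fitted values:

```python
# Toy positive-feedback loop: x activates its own synthesis via a Hill term,
# so above a threshold stimulus s the "off" state disappears and x jumps high.
def steady_state(s, x0=0.0, k_fb=1.0, K=0.5, h=4, kd=1.0, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        dx = s + k_fb * x**h / (K**h + x**h) - kd * x  # input + feedback - decay
        x += dx * dt
    return x

for s in (0.0, 0.05, 0.2):
    # starting from the off state: does the loop stay low, or fire to the high state?
    print(s, round(steady_state(s), 2))
```

Below the threshold the loop settles near the stimulus level; above it, the feedback term takes over and the response jumps to a high plateau, the hair-trigger behavior described above.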
And here, an example-- so instead of dealing with very large cells with very large numbers of molecules involved, here in, say, bacteria in particular, a phage-infected bacterium, you generally have the case of very small bursts of activity. A transcription factor will bind to a promoter. Before the transcription factor comes off, it may cause a small burst of a couple of RNA transcripts being made by a couple of RNA polymerases seeing those transcription factors. Then each of those RNAs causes its own burst of protein synthesis, where a whole series of ribosomes will bind in a polysome. And you'll get this double burst of RNA and protein. And the stochastic binding of that transcription factor that starts that burst can be modeled by reasonably measured parameters for each of these steps. And you can see cells one, two, and three, in the lower left of slide 25, where time is the horizontal axis up to, say, 45 minutes, or a cell division or two, while the number of product proteins here, measured in dimers of protein, fluctuates: cell one gets an early start, an early burst, and cell three hasn't quite hit its burst yet. So you can see there's a lot of variation. And this is one way of achieving a bistable switch. But as you've seen-- not the only way. You can also do it where all the proteins are present in large amounts. If you do choose to go the stochastic route-- and this might be an interesting project for some of you. It's by no means shown to be mission critical for the community of systems modelers. But many people believe it is a way to go. There has been great progress since 1977, when Gillespie proposed the algorithm named after him for the stochastic simulation of coupled chemical reactions in general, not just biological, biochemical reactions.
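A bare-bones Gillespie-style simulation of the burst picture just described: the promoter toggles between free and bound, transcribes only while bound, and transcripts decay. All rate constants are illustrative assumptions, not measured parameters:

```python
# Minimal Gillespie stochastic simulation: exponential waiting times between
# events, with the next event chosen in proportion to its current rate.
import random

def gillespie(t_end, k_on=0.05, k_off=0.5, k_tx=5.0, k_deg=0.1, seed=1):
    random.seed(seed)
    t, bound, rna = 0.0, 0, 0
    trace = []
    while t < t_end:
        rates = [k_on * (1 - bound),  # transcription factor binds promoter
                 k_off * bound,       # transcription factor falls off
                 k_tx * bound,        # one transcript made (burst while bound)
                 k_deg * rna]         # one transcript degrades
        total = sum(rates)
        t += random.expovariate(total)      # time to the next event
        pick = random.uniform(0.0, total)   # which event fires
        if pick < rates[0]:
            bound = 1
        elif pick < rates[0] + rates[1]:
            bound = 0
        elif pick < rates[0] + rates[1] + rates[2]:
            rna += 1
        else:
            rna -= 1
        trace.append((t, rna))
    return trace

# ~45 minutes, like cells one, two, and three on the slide
print(gillespie(t_end=45.0)[-1])
```

Different seeds give different burst timings, which is the cell-to-cell variation on the slide: some runs burst early, others haven't fired yet at 45 minutes.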
Since then, Gibson and Bruck, within the last couple of years, have come up with an algorithm whose time is proportional to the logarithm of the number of reactions, rather than to the number of reactions. Any time you go from n to log n, this is big progress. And this is done by better tracking of calculations that you can reuse. And so I encourage you to take a look at this aspect of stochastics. Another aspect is that people often think of the stochastics as kind of a nuisance. They increase the computer time that it takes to do simulations. They increase your uncertainty about the simulations that you then produce. But there is an aspect of it which is just beginning to be harnessed in various fields of engineering, and biological engineering is no exception. And I give you two examples here to just whet your appetite. Again, I'm not going to go through them. But you can see that you can actually make switches and amplifiers for gene expression-- gene expression being one of our favorite topics in this course-- which are based on noise and where you can get bistability using these fluctuations. And that's not too unexpected based on what we've just been saying. But in addition, you can get stochastic focusing, where the fluctuation allows enhanced sensitivity. OK. So I encourage you to look at that. Now, a particular place where you might worry that stochastics is coming into play quite a bit is in chromosome copy number, whether this is eukaryotic chromosomes or, in the case we'll illustrate, a very simple case of plasmid chromosomes. Now, the interesting thing about plasmids is they can either be in lockstep with cell division, the way that eukaryotic chromosomes are-- as in the case of the Xenopus oocytes we just talked about, where the cell makes a big decision-- and that's the case of the R1 plasmids. Or it can be more of a cloud of copy number, where they're trying to stay close to a target number, where you'll have more than one copy per cell.
And so as the cell divides, it kind of randomly takes a partition of that number of plasmids. And ColE1 is an example of that. And you model it in order to determine the factors that govern it. This has implications. The copy number will affect the expression levels. And the expression levels are of importance to biotechnology. And plasmids are, of course, also important in pathogenesis. The copy number affects pathogenesis, since plasmids are a major way that drug resistance elements are passed around. So let's take a look at one hopefully highly oversimplified version of this. Here you have two RNAs, imaginatively called RNA 1 and RNA 2. We'll start with RNA 2, which is transcribed here, on the bottom strand, from right to left. So actually, look at the very bottom of slide 30. The magenta RNA polymerase is making RNA 2. And if nothing binds to RNA 2-- if RNA 1 does not bind to it-- RNase H will cleave it. It will then bind to the blue DNA polymerase and will start replicating the plasmid. On the other hand, RNA 1 is made from the opposite strand of RNA 2-- it's this antisense story-- and it will come and bind to RNA 2, sort of in trans. It acts as a trans-acting inhibitor. It's aided and abetted by the Rom protein. And now you don't get cleavage by RNase H. And so DNA polymerase doesn't have a primer upon which to act, and you don't get replication. And this is, of course, not just a yes-or-no thing. This is something that's regulated and allows it to feed back to get the right copy number. You don't want it to get an infinite copy number, or else it'll choke the cell that's harboring it. But you don't want it to drop down low enough that many cells will segregate with no copies of the plasmid. So you want to have a mass balance. You have to have conservation of mass. You want to be able to model both the initiation and degradation and inhibition. So you do this by making some simplifications: that the RNase H rate is fast.
Remember we had slow and fast reactions that we would eliminate; so too with the DNA polymerization. By subsuming the RNA 2 concentration in an RNA 1-based model, you can simplify it so that you're really only considering two species, RNA 1 and the plasmid DNA itself. We'll call these r and n for the two different molecules. So this is just a way of introducing a two-species model. We're going to come up with a rate equation for the change of RNA 1 with respect to time. And then the next slide will show the change in the concentration of the plasmid. The concentration of the RNA is r. The concentration of the plasmid is n. We have dr/dt and dn/dt. And this is very simple. It's just like what we were talking about with the metabolites. You synthesize the RNA. That's a positive term. You degrade it, or you dilute it out. The dilution is based on the growth rate. Mu is a typical-- it's used for growth rates in population genetics, and here it's used in chemical kinetics. In fact, this is, in a certain sense, an example of a very exciting field where you're bringing together population genetics and chemical kinetics into one place. And population genetics and chemical kinetics, when they come together, unite some of the most disparate parts of this course. OK. So now we've got an equation here where the positive term is k1, the rate of initiation of RNA synthesis. And of course, the more molecules, n-- the higher the concentration of the plasmid-- the more RNA you're going to make. So it makes sense that you have the product, the rate constant times the number of plasmids. Similarly, the loss is going to be related to the number of RNAs. The more RNAs you have, the more you're going to lose. So that's the RNA, and this is for the DNA. Here you have dependence on the RNA. RNA 1, remember-- the thing that we modeled in the previous slide-- is an inhibitor.
And so when it binds to RNA 2-- which is implicitly modeled here-- it's going to have this inhibitory term in the denominator. And so as the inhibitor RNA goes up, this inhibitory term goes up, and the forward rate goes down. And the rate of replication is also going to be dependent on the plasmid copy number, so it goes up with n. The dilution rate is, of course, dependent on n as well. So the idea, in the next slide, is going to be to solve for the plasmid number. On slide 34, we have how you would implement those two equations that we had in the last two slides-- they are shown on the very top left part of the slide. dr/dt, abbreviated dr here, is that same k1 constant times n, the concentration of plasmid molecules; the negative term is the degradation rate plus the dilution by cell division, mu, times r, the concentration of the inhibitory RNA 1. That is, dr/dt = k1*n - (kd + mu)*r. And in an analogous equation, which we've already seen before, for the plasmid molecules, n, the change in the concentration of n as a function of time, dn/dt, here abbreviated dn, is dn/dt = k2*n/(1 + r/Ki) - mu*n, with Ki the inhibitory constant. And then we're going to solve it. So these first three things are setting it up and asking the program to solve it. And we'll do it under the constraints that dr/dt is equal to 0 and dn/dt is equal to 0. You will recognize this as the steady-state assumption. Even though there are fluxes in and out that are non-zero, the net effect is zero. And so that's the formula for steady state. We're going to start at a dilution rate of 1, where you'll have some steady-state level. And then we're going to watch the dynamics as it goes to a dilution rate that's slower-- that is to say, the growth rate is slower. And as the growth rate is slower, you'll expect, maybe, to accumulate more plasmid molecules, because they're not in lockstep with the cell division rate and the cell grows more slowly.
And if there's not some other feedback-- and we haven't put any other feedback in this model-- then it should go to about twice the level. So you do the symbolic solver in the top here. And you get this symbolic solution here. And if you do the numeric solver, NDSolve, then you get a very similar solution. And you could plot it. Here you're plotting y, which is the plasmid copy number, over a time range of 0 to 3. And you can see it goes up from slightly over 1 to slightly over 2 in terms of the concentration of plasmid, as you might expect from lowering the dilution rate. Just as we had stochastic models for the bistability that we talked about earlier-- the Xenopus bistability could be continuous, and the lambda model could be stochastic-- here there are stochastic models for Copy Number Control, CNC, which are very interesting. I urge you to look at them-- where you can basically use stochastic modeling to do molecular clocks, and where you can reduce the rate of plasmid loss. You can see, in that last one, if you had a very small number of molecules, you would have a loss in some of the cells that would be more accurately modeled in a stochastic model. Now, we want to go from these models of the red blood cell, where you have metabolism without polymer synthesis, and the CNC model, where you have polymer synthesis of RNA and DNA but without metabolism, to a more integrated cell where you have both going on. And you want to represent the full optimization that must occur, getting metabolism optimally suited so that you get the right kinds of macromolecules made in a complicated cell like E. coli, which can adjust to a wide variety of different growth conditions. So what are the problems here? The number of parameters that we needed and had for the red blood cell was enormous. It was 200. It was a tour de force to get those. For E. coli, orders of magnitude more are needed, because instead of 40 enzymes, we have somewhere between 400 and 4,000.
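As a recap of the copy-number calculation above: the lecture drives it with Mathematica's symbolic and numeric solvers, but a plain forward-Euler integration of the same two rate equations shows the same behavior, with the plasmid level roughly doubling when the dilution rate is halved. The rate constants here are illustrative assumptions, not measured values:

```python
# Two-species ColE1 model: r = inhibitory RNA 1, n = plasmid copy number.
#   dr/dt = k1*n - (kd + mu)*r          synthesis - (degradation + dilution)
#   dn/dt = k2*n/(1 + r/Ki) - mu*n      inhibited replication - dilution
def simulate(mu, r0, n0, t_end=30.0, dt=1e-3, k1=1.0, kd=10.0, k2=10.0, Ki=1.0):
    r, n = r0, n0
    for _ in range(int(t_end / dt)):
        dr = k1 * n - (kd + mu) * r
        dn = k2 * n / (1.0 + r / Ki) - mu * n
        r, n = r + dr * dt, n + dn * dt
    return r, n

# Steady state at dilution rate mu = 1 (analytically r* = Ki*(k2/mu - 1),
# n* = r*(kd + mu)/k1, i.e. r = 9, n = 99 for these constants).
r1, n1 = simulate(mu=1.0, r0=9.0, n0=99.0)
# Halve the growth/dilution rate and let the dynamics settle:
r2, n2 = simulate(mu=0.5, r0=r1, n0=n1)
print(round(n2 / n1, 2))  # plasmid level roughly doubles
```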
So measuring parameters is a problem. And we have the same problem of in vitro versus in vivo. We have the same set of constraints. And we want to focus more of our attention now on the flux constraints. So after a short break, we're going to come back and talk about flux balance as a solution to this. And just as we had with the red blood cell, we're going to be focusing in on ways to relook at the way we think about the synthesis and degradation of the molecules in this network, to see if we can rephrase it in a way where we can ask interesting questions about the optimization of these systems. So take a quick break. And in the second half, we'll talk about the flux balance.
MIT HST.508 Genomics and Computational Biology, Fall 2002 -- 5C, RNA 1: Microarrays, Library Sequencing and Quantitation Concepts
The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: Use this slide to review how far the in-situ might go, and what its current limitations might be. And then we'll move on to arrays. One potential advantage to these sorts of microscopic in-situ analyses is that if you use a non-destructive visualization rather than fixing the cells-- you actually monitor them in real time-- you can sample basically as quickly as a modern microscopic camera system can monitor, on the order of a millisecond. You can obtain a sensitivity on the order of a single molecule of fluorescence. This is very challenging. It requires very small pixel sizes, but it is possible. And it's the basis of some of the sequencing methods that we discussed a couple of classes ago. The resolution itself is typically on the order of a micron or a quarter of a micron, set by the limit of the optics in terms of the diffraction that can typically occur. But you can get below that diffraction limit of 250 nanometers, down as low as 10 nanometers, by using tricks such as near-field optics and various deconvolution methods. Multiplicity is really the greatest limitation of the in-situ method right now, and it's certainly an opportunity for the creative ones in this group to address. How can we get through multiplicity, looking at all of the RNA simultaneously as we can do in microarrays-- which is essentially a microscopic method as well-- but still have the spatial advantages of an in-situ? This is an unsolved problem. Multiplicity now is typically around one or two or three colors. The colors can be deconvolved by using band filters.
If you use combinations of colors, you can conveniently discriminate the 24 different types of human metaphase chromosomes. However, this depends-- you should note, don't get fooled into thinking you actually have 24 colors. These are combinations of ratios of colors, false-colored by the computer algorithm. And they depend on non-overlap, on being able to find objects in the visual field and extend them. Where they overlap, you now have mixtures of mixtures, and this is no longer simply deconvolved. So for all practical purposes, we're limited to around four or five colors. So in situ, I would not call immediately genomics-compatible. For systems biology, it takes a vast number of in-situ experiments to get the kind of comprehensive data you can get out of microarray experiments. So let's focus on the kinds of experiments, like microarrays, that can get us full genome-scale information, and on what the limitations in quality are for these sorts of things. Now, we can either lump or split the various ways of measuring RNAs in cells. The top two items on slide 28 can be characterized-- microarrays might be associated, sort of for historical reasons, with longer probes, maybe the length of an entire gene or entire cDNA, messenger RNA. And Affymetrix and other oligonucleotide-based methods typically use 25-nucleotide-long oligomers. Typically, the long microarray probes are used as single probes, one probe per gene, while the short ones typically have 24 probes per gene. These are not necessary differences. You can imagine various combinations. Another difference is that with the long probes, typically, you'll do an experiment and control with different colors, and they're mixed together to control for some of the variations that might occur.
In the spotting of the long probes, these are typically spotted mechanically, while the short probes are made by a photochemical method in which 100 masks-- sort of black-and-white masks such as are used in Silicon Valley for making computer chips, which we introduced in the sequencing technology lecture-- these 100 photomasks will allow you to make a 25-mer, four possibilities per base. And you'll make, say, 20 of these scattered along the gene, and a mismatch control for each of those 20 perfect matches-- PM and MM for abbreviation. Those mismatch controls help you get at the possible cross-hybridization by related sequences or even distantly related sequences. And then what they typically do is subtract the mismatch controls from the perfect matches, and then average across all 20 of them, or some statistically good sampling of those 20. OK. So you typically do ratios for the long probes, and you try to get absolute amounts from the short probes. And then there are two wildly different methods in the bottom of the slide. These are called SAGE, standing for Serial Analysis of Gene Expression, and MPSS, an acronym for a highly parallel bead-based method. Both of them essentially determine the sequence of somewhere between 14 and 22 nucleotides. This is the sort of minimum-length sequence that's often, but not always, sufficient to identify an RNA molecule. You're basically counting individual RNA molecules with a tag that's just long enough to be able to recognize it in a database. With a 14-mer, you can recognize it in, say, a human cDNA database, but it's not unique enough to identify it in a human genomic database. The 22-mer is large enough to get an acceptable rate of false positives and false negatives in a human genomic library. So that's kind of the range in which you can do this. And people tend to take shorter tags because the cost goes up with the length of the tag. And so these were conveniently short tags.
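The perfect-match/mismatch summary just described is simple enough to sketch; the intensities below are made-up numbers, not data:

```python
# Affymetrix-style probe-set summary from the lecture: subtract each mismatch
# (MM) control from its perfect-match (PM) partner, then average the pairs.
def probe_set_signal(pm, mm):
    diffs = [p - m for p, m in zip(pm, mm)]
    return sum(diffs) / len(diffs)

pm = [820, 640, 910, 700, 760]  # perfect-match intensities for one gene
mm = [120, 180, 150, 90, 160]   # mismatch (cross-hybridization) controls
print(probe_set_signal(pm, mm))  # -> 626.0
```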
So for the top ones, you get quantitation by integration of the fluorescent signals. And for the bottom two, you get quantitation by counting individual tags. The bottom two methods have the opportunity for discovery, while for the top ones, you basically can quantitate any gene or segment of the genome that you care to put on the array. But you won't necessarily discover anything outside of those features. So these are four of the key methods that are used for quantitating RNAs right now on a genome scale, where you'll do, hopefully, multiple experiments for each type. Now, let's just zoom in a little bit more so you can have an appreciation for where some of the systematic and random errors might creep in to these kinds of experiments. And I'll just arbitrarily use the 25-mer oligonucleotide probe arrays as an example. For the long arrays, the microarrays, you might have, say, 1,000-nucleotide-long probes. You might have 10,000 of them on a glass slide about this big. With the photolithography, you can have more like a million features in a square centimeter. And in each of those features, each of those million positions on the square array, you will have maybe 10 to the 5th to 10 to the 6th molecules, all identical, in position 1, and then a new set of 10 to the 6th molecules in position 2, all aimed at a different RNA or a different part of an RNA. Each of those probe cells is ready to accept your RNA, fluorescently labeled, or using biotin as an intermediate in order to get fluorescence. So you take your RNA and you directly biotinylate it, or you make a cDNA copy. Or in one way or another, you introduce a fluorescent or biotin molecule into a copy of your RNA. And then you apply that to the chip, and they will bind kinetically. And by mass action, the RNAs that are the most abundant will result in the largest number of biotins or fluorescent molecules on the array at a given element.
Here, an indirect conjugate with a fluorescent [INAUDIBLE]-- it's like a rift in this covalently attached [INAUDIBLE]-- and you get a fluorescent signal, which you quantitate, which tells you the amount of the original messenger RNA. If you have 20 different oligonucleotides per gene, you can scatter them about the array, or you can have them in lines. This dates back to when they had them in lines. So you get the streakiness there. Now, one of the first things you want to do-- much of the software from the companies is set up on the assumption that you will only do the experiment once. Now, this may have been appealing in the early days from a cost standpoint, but it's not really cost-effective, in that you will make mistakes and you will draw incorrect conclusions that will require you to go back. But this is an example of an early experiment to establish the reproducibility from one experiment to another, possibly to reassure people that they didn't need to repeat the experiment. But in any case, this is the thing that is now commonly done in order to assess that your experiments are indeed reproducible. And what you expect from this, as you go along the horizontal axis towards higher and higher copies per cell, going off to the right or going up on the vertical axis: when you get high copies per cell, then you expect there to be very close similarity in the two measures from two different experiments done on two different days. And then as you get to the very rare transcripts, you expect the various sources of noise in the experiment to start to dominate-- the light scattering in the array, the background fluorescence of the glass, the non-specific cross-hybridization between different RNAs start to dominate over the true signal, because the true signal is going down and all those background signals are staying constant. So you start to get spread at low numbers of copies per cell.
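The spread at low copy number can be mimicked with a toy noise model: each measurement is the true signal plus a roughly constant additive background, so the relative disagreement between two replicates grows as the signal shrinks. All numbers here are illustrative:

```python
import random

random.seed(0)

# one measurement = true signal + constant-scale background noise
def measure(true_signal, background_sd=50.0):
    return true_signal + random.gauss(0.0, background_sd)

for copies in (10000, 1000, 100, 10):
    # typical relative disagreement between two replicate measurements
    diffs = [abs(measure(copies) - measure(copies)) for _ in range(1000)]
    print(copies, round(sum(diffs) / len(diffs) / copies, 3))
```

The absolute noise stays the same, so the relative scatter grows about tenfold with each tenfold drop in copy number, which is the fanning-out seen at the low end of the reproducibility plot.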
You can see a huge fraction of the RNAs in the yeast cells are present at a single copy, say one or fewer RNAs per cell. Now, this can either indicate that most of the RNAs in the cell are not physiologically significant, or it could indicate that all it takes is a small burst of one or a few molecules of RNA to produce an even larger burst of proteins, and even larger bursts of activities of those proteins. So you get this amplification. And so the stochastics that we will study in the systems biology part of this become a more significant consideration. So looking at one molecule per cell, it's important to start thinking about what the implications of that might be for the systems biology, and asking, can we accurately measure it down there, and do we believe that it's biologically significant? Now, there's a whole variety of microarray data analyses, ranging from the very hardware-oriented first data acquisition modules, all the way up through analyzing single-array data at a statistical level, to multiple related experiments, such as the one we showed in the previous slide, all the way up to clustering multiple examples from multiple different conditions to start asking the biological questions about why RNAs go up and down together. For the intermediate analyses, which we'll be talking about today as introductory issues of data analysis, I'll illustrate dChip and a couple of other tools that indicate how reproducible experiments can be, and the kinds of systematic errors that can creep in. Reproducibility helps you: by repeating, you reduce the random errors. And here are four recent papers that talk about multiple measures from the same experiment, or multiple measures using two completely different microarray technologies. And I urge you to take a look at these. When we compare two distributions from microarray experiments, you can think of these as follows.
Even if they're not perfectly normal distributions, they're going to be roughly bell-shaped curves. So let's say that this is experiment 1 and this is experiment 2. You say, oh, they look the same. This is experiment 1 under condition 1. This is under condition 2. OK, now they look different. But how do you quantitate that? And the way you ask that is: are the means of those two roughly bell-shaped distributions far apart from one another? How far apart? Well, are they farther apart from one another than the width of the distributions individually? And the distance between them, you can think of as the mean of the difference of the distributions. And then the width is a measure of the root mean square standard deviation, sort of the combined width of the two. If one of them is wide and the other one is narrow, you have to have some way of combining those. So that's sometimes called Student's t-test. And the t statistic itself is simply the mean over the standard deviation. In other words, how many standard deviation widths apart are these two means? Or if you take the mean of the difference-- take the distribution of the difference-- then you want your null hypothesis. H0 here on slide 33 is the null hypothesis. If the mean value of the difference is 0, there's no difference between the two distributions. If you can rule that out, then that would be the point of this test. So you can think of it as how many widths apart the means of these two distributions are. Now, this requires that the distributions actually be very close to normal-- not distinguishable from a normal distribution with all its properties. If you are in serious doubt, or can prove that they're not normal, then you should go to a non-parametric test. Normal means it's parametric: it has a mean and standard deviation that characterize it well. Otherwise you can use a non-parametric test. Whenever you see the word "ranks," that's a tip-off that you're going into something where you're making fewer assumptions.
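The paired t statistic just described, the mean difference divided by its standard error, fits in a few lines; the expression values below are made-up, not from the papers:

```python
from math import sqrt
from statistics import mean, stdev

# t = mean(differences) / (stdev(differences) / sqrt(n)):
# how many standard-error widths the mean difference sits away from
# the null hypothesis H0 that the mean difference is 0.
def paired_t(x, y):
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

expt1 = [2.1, 3.4, 1.8, 4.0, 2.9, 3.3]  # gene expression, condition 1
expt2 = [1.9, 3.0, 1.5, 3.6, 2.4, 3.1]  # same genes, condition 2
print(round(paired_t(expt1, expt2), 2))
```

A large |t| (judged against the t distribution with n - 1 degrees of freedom) lets you rule out H0; if the distributions look non-normal, a rank-based test like the Wilcoxon is the safer choice.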
This has lower power. That means you might miss some significant differences. But on the other hand, if you can convince yourself with the Wilcoxon matched-pairs signed-ranks test, then you don't need to worry about whether it's normally distributed. In any case, we're going to look at some distributions and ask informally whether these are the same distribution or different. Yes. AUDIENCE: [INAUDIBLE] GEORGE CHURCH: So the question is, how do you deal with the multiple hypothesis testing. And this basically is exactly the same answer that we would have given in the last lecture on multiple hypothesis testing in genotyping. It's a very good question, very appropriate here. Just as before, where you would have multiple different phenotype-genotype combinations that you might want to test, essentially testing every possible single nucleotide polymorphism or combination in the genome, to a first approximation, whatever your significance is, it needs to be that much more significant if you have that many hypotheses. AUDIENCE: [INAUDIBLE] GEORGE CHURCH: You either have to improve your data, which allows you to test more hypotheses, or you need to reduce that number of hypotheses at the outset, by having a sharp biological question at the beginning. It's an excellent question, but there's no magic wand except those two that I know of. OK, so here are some examples of independent experiments. Now, when someone says an independent experiment, you have to be clear about, is it the same RNA sample split and then labeled independently? That's really not an independent experiment. On the other hand, you could take two completely independent samples, which comes closer to the best kind of independent experiment.
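As a sketch of the t statistic described above-- the distance between the means measured in units of the combined width-- here is a minimal pure-Python version. The two samples are made-up numbers, not data from the lecture:

```python
import math

def t_statistic(xs, ys):
    """Two-sample t statistic: how many combined-width units apart the means are.
    Classic equal-variance (pooled) Student's t form."""
    nx, ny = len(xs), len(ys)
    mx = sum(xs) / nx
    my = sum(ys) / ny
    # Unbiased sample variances.
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    # Pooled variance combines a wide and a narrow distribution into one width.
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Two roughly bell-shaped samples: means 10 apart, widths well under 1.
a = [9.8, 10.1, 10.3, 9.9, 10.0]
b = [20.2, 19.9, 20.1, 19.8, 20.0]
print(t_statistic(a, b))  # strongly negative: means many widths apart
```

With samples this far apart relative to their widths, the statistic is enormous and the null hypothesis of equal means is easily ruled out.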
If your objective is to ask how reproducible the entire biological phenomenon is, you should go back as early as possible, make a new cell line, try to get the conditions exactly the same, but completely independently executed, possibly by different researchers in different laboratories. In that extreme, you expect to have more scatter. Here, these are the regression lines. The R squared is the number that pops out as an indication of deviation from the regression line; it's basically the square of the linear correlation coefficient. You can see that instead of splitting one sample and doing a kind of trivial differential labeling, if you have more independent samples, you get more scatter and a lower figure of merit for the regression line. OK, now, what are some of the considerations in RNA quantitation? I think we've touched upon this before, but I just want to drive it home, that some people will say, I'm only going to look at things that are more than a three-fold effect. This is sort of the ratio limit that you might perceive in the early RNA chip experiments. But I think we're getting better at it, and the biological motivation is high. We've seen that human trisomies, where you just have a 1.5-fold increase in dosage, every single one of them has a huge phenotypic consequence. Many of them result in lethality. We should set as a goal to be able to monitor most of the RNAs of biological significance down to this 1.5-fold effect, which can have these dramatic implications. We mentioned the oligonucleotides; we might be able to get more of them per gene. How can we utilize this, not only the number that we can get, but the specificity? If you have a gene-length oligonucleotide, or cDNA, then you're going to pick up not only the gene of interest, but every related gene, all the alternative splice forms, all the very, very close family members.
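The R squared figure of merit for the regression lines mentioned above can be sketched as follows. The replicate data are invented, chosen to show that fully independent samples give more scatter and a lower R squared than a split sample:

```python
def r_squared(xs, ys):
    """Coefficient of determination for the least-squares line through (xs, ys):
    the figure of merit that drops as independent replicates add scatter."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Split-sample replicate (tight) versus fully independent replicate (scattered):
split = ([1, 2, 3, 4, 5], [1.0, 2.1, 2.9, 4.0, 5.1])
indep = ([1, 2, 3, 4, 5], [0.5, 2.8, 2.2, 5.0, 4.4])
print(r_squared(*split), r_squared(*indep))  # split-sample R^2 is higher
```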
So with oligonucleotides, you can then go and target individual splice forms, but when you apply your algorithms, you have to be careful not to lump them all together as if it's one gene. You have to say, OK, this is splice form number one, number two. And just having oligonucleotides aimed at particular exons is not sufficient to tell you which exons are present in particular RNAs. You can have present in the population exons 1, 2, 4, 6, 12, and so forth. But you don't know whether 1 and 12 are on the same molecule. That requires a more specialized, possibly high-throughput method. There is another set of economic forces pushing towards just doing a subset of the genome. Just like not repeating the experiment, you probably don't want to give in to economic forces unless you absolutely have to, because if one person studies a cancer subset, another studies a blood-related subset, and another one studies these little pieces of the genome, then when they want to pool their data in order to ask questions about which genes cluster together because they're in proliferative cells, and which ones cluster together because they're in this developmental stage or another, they can't do it, because they don't share enough genes on their arrays to do this meta-analysis. So that's a consideration when you're in the experimental design phase. And hopefully, computational biologists are involved not only in the interpretation of the data, but in the design of the experiments as well. Here's yet another way of looking at the variation that you have in the experiment. We're introducing, I think, the coefficient of variation here, which is simply the standard deviation normalized to the mean. It's a way of expressing, in a generic sense, how much variation you have. So you can say the coefficient of variation is, say, 10%. And that's independent of what units you're measuring.
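A minimal sketch of the coefficient of variation just defined, with made-up replicate intensities:

```python
import math

def coefficient_of_variation(values):
    """Standard deviation normalized to the mean: a unit-free measure of spread."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / mean

# Three replicate intensity measurements of the same RNA (hypothetical):
replicates = [980.0, 1000.0, 1020.0]
cv = coefficient_of_variation(replicates)
print(f"CV = {cv:.1%}")  # about 2%, well under a ~20% trust threshold
```

Because the standard deviation is divided by the mean, the same 2% comes out whether the intensities are in raw counts or rescaled units.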
On the horizontal axis, the x-axis here, we have the number of messenger RNAs per cell, and on the vertical axis, the coefficient of variation. And you can see that when you get up above, say, 20% coefficient of variation, you're getting less trustworthy, because here, we've used the algorithms that are built into the [INAUDIBLE] software for asking whether it thinks an RNA is present or not if the intensity is very low, and a variety of other criteria. For a single experiment, it will classify whether it thinks the RNA is present or not. But if you use a large number of different experiments-- each of these dots being a different RNA-- you can now beat the company software, because it's made the assumption that you're just doing one experiment. And so here in the dark blue are examples where the RNA was called present in all three experiments. But you can see that even with cases where it's not called present in any of the three experiments, these magenta ones, you can still find very high reproducibility, that is to say, very low coefficient of variation, down around 10%. There are some pink dots all in this region around 10%, and these are just as reliable as the blue dots. Even though they're not called present by the software, collectively, they're very reproducible, and therefore, they're trustworthy. So actually, reproducing your experiment is not just something you do to appease nature and lower your statistical noise. It actually allows you to get data for RNAs that otherwise might be inaccessible. So there's immediate gratification there, even at the slight extra expense. So now let's broaden back out a little bit to a number of different methods and their advantages and disadvantages. Each one has a set of advantages. We've already talked about two of them, which is the immobilized genes, labeled RNA scenario. That's basically the microarrays or chips.
And the advantage here is that in a very high-throughput manner, you can manufacture large numbers of these. And you can get high multiplicity, all the RNAs that we know of monitored simultaneously. In-situ hybridization, we've also talked about. The major advantage is retaining the spatial relationships. In some of these other methods, instead of immobilizing the probes on a solid surface, you immobilize the RNAs, and then one by one, you label the probes. This will allow you to first, say, separate the entire transcriptome of RNAs in an electrophoretic separation. And so in a highly parallel method, you've now immobilized them after they've been separated by size. So if you want to know the size of the RNA, which is a big hint as to its exon composition and so on, this is one of the few ways to do it. Very hard to do with arrays or in situ. OK. If, on the other hand, you want sensitivity, where you want to really detect at the noise level, which, say, for vertebrate RNAs is around 10 to the minus 4 copies per cell-- that's the level at which, if you look for almost any part of the genome, any kind of RNA, even things that shouldn't be expressed, you will find them, down at 10 to the minus 4 per cell. That probably is not biologically significant, but it's a biological fact. Getting down to that level-- or if you have a mixed tissue and you want to detect 1 part in 10 to the 10th. You might have 5 times 10 to the 5th messenger RNAs per cell. But if you have 10 to the 5th cells, then a single-copy messenger RNA would be down at 10 to the minus 10. That's feasible with quantitative reverse transcriptase [INAUDIBLE]. And it is the standard which all the others can barely match. Reporter constructs are something we do not generally consider a high-throughput method, although there are genomic sets of reporter constructs for an entire genome like yeast.
But here, the real advantage of that method is there's no worry of cross-hybridization. With in-situs, with northerns, with arrays, there is a chance that if you probe for RNA x, it will happen to hybridize, especially if it's present in high abundance, to one of the other ones. But with a reporter construct, you take a fluorescent protein or a luminescent protein and hook it up, in cis, with the gene you're interested in. And that will directly or indirectly monitor the expression of your favorite gene. That has no possibility for cross-hybridization. We've talked about the advantages of counting. The disadvantage is of course cost. It allows you to do gene discovery. It doesn't address alternative splicing. Here's an example of comparing two of those methods. As microarrays are being introduced, one needs to validate them, to ask whether you're measuring one RNA or multiple RNAs of different sizes, and to ask whether quantitating a northern blot correlates with quantitating an array. And here, you can see a fairly acceptable linear relationship between the two quantitative measures. And this has been played out many times. As for the opportunity that you have when you make an array: we said that SAGE and MPSS allow you to do discovery. But another way of doing it is putting down lots of oligonucleotides, even oligonucleotides in regions where your genome annotation may not have indicated that there's a gene. So you can see here the bottom 60% of this array was in so-called non-protein-coding regions. And you can just see what you get when you do that. It doesn't cost that much more to put down some of these non-coding regions. And you can ask in these untranslated regions whether there are maybe antisense RNAs that overlap the translated regions. Or you can look for DNA-protein interactions in certain kinds of experiments. And you can look for RNA fine structure. Where does the gene actually end?
You may annotate that the RNA ends here, but you need ways of actually measuring them. So there are a lot of uses for nucleic acid probes in so-called non-protein-coding regions, which can range from 12% of the genome in simple prokaryotes up to 98% of the genome in humans. So what are the sources of random and systematic errors? There's secondary structure, which we talked about at the beginning of this lecture. It can cause different parts of the array to have different hybridization efficiencies. The position on the array can have an effect, for example from poor mixing. If you're making your array by a non-reproducible method, the amount of target nucleic acid immobilized on the array can vary. And you need to control for that, for example by having an internal standard. Cross-hybridization, we've talked about. The unanticipated transcripts, you can handle by tiling, by basically putting oligonucleotides throughout the genome. So here's an example of spatial effects. What you do is spike in known amounts of known RNAs which are present throughout the array. And so these are internally spiked in addition to your unknown fluorescently labeled probes. And you can ask whether you're getting a perfectly uniform, edge-to-edge hybridization for the known standards. And if you're getting peaks and troughs, then you can use these internal standards to calibrate that particular hybridization experiment and correct for this kind of systematic error. This could occur again and again. Here are two different experiments giving roughly similar edge effects. You need to account for these things to avoid that particular source of systematic errors, especially if you put all of your oligonucleotides for a particular gene near one another. A better strategy for statistical experimental design is to put your oligos randomly throughout the array. Here's another one, unanticipated RNAs. Two examples, one an open reading frame of unknown function.
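The spike-in correction idea described above can be sketched as follows. The function and its per-region representation are hypothetical, but the logic is the one in the lecture: use the known spikes to estimate the local hybridization efficiency and divide it out:

```python
def calibrate_with_spikes(intensities, spike_measured, spike_true):
    """Correct spatial systematic error using spiked-in standards.
    Each array region has a known spike; its measured/true ratio estimates
    the local hybridization efficiency, which we divide out.
    (Hypothetical sketch: one factor per region, regions as parallel lists.)"""
    corrected = []
    for signal, measured, true in zip(intensities, spike_measured, spike_true):
        efficiency = measured / true      # >1: hot spot, <1: trough
        corrected.append(signal / efficiency)
    return corrected

# The edge of the array hybridizes at half efficiency; spikes reveal and fix it:
raw = [100.0, 100.0, 50.0]               # same RNA, three regions
spikes = [10.0, 10.0, 5.0]               # measured spike intensities
print(calibrate_with_spikes(raw, spikes, [10.0, 10.0, 10.0]))
# [100.0, 100.0, 100.0]
```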
You can sometimes mis-annotate. If you have two open reading frames on opposite strands, generally speaking, one is used and one isn't. And you could pick the wrong one. You might pick the big one, and it could be that the little one is the one that's actually used. And that's what happened in this case. So that was a translated RNA; we just happened to pick the wrong strand. Here is an untranslated RNA, such as the snoRNAs we saw before. This one is an untranslated RNA that was discovered in a so-called intergenic region. If you have a statistical test for the goodness, the quality, of an individual oligonucleotide hybridization, based on, say, its reproducibility or the relative intensity that you expect-- if you have 20 different oligonucleotides all for one gene, and you expect number 1 typically is stronger than number 2, and then you find a case where number 1 is weaker than number 2, then you can flag that. You can say, I don't believe that particular spot. And you color-code them all-- see, here are white spots, things that don't fit your statistical model for the array hybridization. This is the advantage of having a statistical model of the entire process. Then you can mark those as white, and you can look to see whether they have a statistically significant spatial distribution, which they do in this case. They all seem to be clumping at this corner. Now, what could cause that? Well, we've already illustrated that there are ways that you can use internal standards to calibrate. This was not a case where we had poor hybridization efficiency or strong hybridization efficiency around the edges. This was something where the alignment of the grid is done by these little squares along the edge, and the computer algorithm that finds these spots was distracted by this little spot off the side, which is not part of the checkerboard. And once you manually correct that error, it snaps into place.
On the right-hand side of slide 43 is now the statistical model of this after getting the alignment right. Before, you had been associating the wrong oligos with a particular signal, and it didn't fit the model. Now it fits the model, and you see the little scattered strips of gray where you have individual genes which are misbehaving, rather than the entire corner of the array. OK, so now we get to the very interesting interpretation issues, where we're using the same kind of information. Once you have a model, a very sophisticated model, of how the individual oligos in the array behave-- here, what we do is we take genomic DNA as an example of a fairly equimolar calibration standard. If you take genomic DNA and label it, you expect every segment in the genome to be present at the same molarity, with the exception of repetitive elements, which we'll put aside for the moment. And so that means that any oligonucleotide that doesn't hybridize with the genomic DNA, such as these ones that go close to baseline here at 0-- remember, this is perfect match versus mismatch that we're plotting. When you get close to 0 for the genomic DNA in black, that means it really doesn't hybridize well. It's not that it's missing from the genome. It's that it has some secondary structure. So this is the secondary structure that's been a theme for this plot. And that sort of secondary structure is actually a piece of data that you can do data mining on. You can go through the entire genome and you can look for secondary structures. And you can ask whether those secondary structures depend on what part of the genome is transcribed. Now, here is a messenger RNA. This is one of the few messenger RNAs for which you have a plausible secondary structure. Most secondary structures are on structural RNAs or enzyme-related RNAs, the ribosomal RNAs, [INAUDIBLE] RNAs, and so forth. This is the messenger RNA for this gene product, LPP.
And if you look where this black arrow is coming in from the right-hand side, you'll see a long helix. And that helix is at the 3' end of the messenger RNA, and it's very well characterized both structurally and functionally. And it's known to be involved in at least one important biological process, which is the termination of transcription. When you get close to the end of the RNA, that hairpin can reform, and it sends a signal to the transcription apparatus to stop. So that is a believable hairpin of known function. And the interesting feature of this microarray is that's one of the places where both the genomic DNA in black and two completely different RNA samples fail to hybridize, consistent with it being a very strong hairpin with a dozen G-C and A-U base pairs. Another thing you can derive from this detailed model of the array: here, you have 60 different oligonucleotides along the gene and the adjacent intergenic regions. The question is, where does the RNA transcript stop? Well, if you look in the places where the DNA control is high, you'll find the RNA is high, going from right to left. The RNA tracks the DNA. The red and blue track the black until you get to position -33. And there, the red and blue drop to baseline, and the black stays going up and down at a higher level. And that happens to coincide with the known transcriptional start. And so that would be another way of mapping the transcriptional start. You'll notice that some of the hybridization intensity occasionally drops below 0. This is just an artifact of having the perfect match minus the mismatch. If it happens to be the case that your mismatch control is cross-reacting with some other DNA, say repetitive DNA in the genome, or RNA, then it can actually get more intense than the perfect match. And so you can get a negative value. But otherwise, the negative intensity would be meaningless. Now, splice domains.
In principle, you can go through the whole human genome and predict where all the exons are, where all the splice junctions are, and, in principle, even all the alternative splicing. In practice, it's not that easy. And you can use all the hidden Markov models and so forth that we've been developing. You can do multi-sequence alignments to get these motifs here, where two bits is full scale. And you can find donors and acceptors in this kind of pattern: GT donor, AG acceptor. But when you come right down to it, you want to have some way of going through this empirically as well. And so what you can do is look sort of independently of the annotation, and do a tiling of the genome with oligonucleotides, as was done here by Shoemaker et al. And this was, I think, one of the nicer papers that came out in the Nature issue on the human genome [INAUDIBLE] sequence. Here, as the sequence was coming out, chromosome 22 was one of the first chromosomes nearly completed. At the top of slide 47, you see how the metaphase chromosome is banded and labeled. If you take a little [? 113 ?]-kilobase chunk of that, it's the next line down. Then you blow that up further, all the way down to oligonucleotide 60-mers, tiled every 10 base pairs as a starting point, all the way along this roughly 100-kilobase stretch of chromosome 22. And then you hybridize it with RNAs from a variety of different human tissues. Then you ask, on the vertical axis, what is the log of the normalized signal intensity for these various RNAs. And you'll get a little histogram here, where a purple spike means there's a lot of hybridization under at least some of the conditions. And then there'll be a zone where there's almost no hybridization. And that's because those introns that we showed in the previous slide are spliced out, and they're in low abundance. They're displaced out of the nucleus before the RNAs accumulate, so they tend to be in low abundance. And they're not found in the mature messenger RNA.
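The donor and acceptor motifs mentioned above, where two bits is full scale, can be scored with a position weight matrix. Here is a sketch using log-odds in bits against a uniform background; the donor probabilities are made up for illustration, with a near-invariant GT as in the real motif:

```python
import math

def pwm_score(pwm, seq):
    """Log-odds score, in bits, of seq under a position weight matrix,
    relative to a uniform 0.25 background. pwm: list of {base: prob} dicts."""
    score = 0.0
    for probs, base in zip(pwm, seq):
        p = probs.get(base, 1e-6)   # small floor avoids log(0) for unseen bases
        score += math.log2(p / 0.25)
    return score

# Toy donor-site matrix (hypothetical probabilities): near-invariant G, T,
# followed by a weakly preferred A.
donor = [
    {"G": 0.97, "A": 0.01, "C": 0.01, "T": 0.01},
    {"T": 0.97, "A": 0.01, "C": 0.01, "G": 0.01},
    {"A": 0.60, "G": 0.20, "C": 0.10, "T": 0.10},
]
print(pwm_score(donor, "GTA"))  # positive: looks like a donor site
print(pwm_score(donor, "CCC"))  # strongly negative: does not
```

Sliding this score along a sequence and thresholding it is the simplest empirical complement to the tiling data: candidate junctions are where both the motif score and the hybridization evidence agree.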
And so when you label these up, you're selectively labeling the exons in the cDNA. And as you can see, they coincide well with the little green exons in the annotation, except every now and then, you'll find something-- here's a case, for exon 3, where the green annotation in the original sequence is too short. And here's a blow-up near the bottom, where the purple region clearly extends beyond the green annotation from the sequence analysis algorithm, where the exon should be extended by 102 base pairs on the 5-prime side to make it a slightly larger exon. But when you extend it by that, you ask, well, does it still have the splice site, or does it have a new splice site that we can recognize? And sure enough, it does. It has an AG and a fairly good match to the motif we had in the previous slide. And you can see that the purple intensity drops close to 0 here as soon as you get out of the exon as now properly defined. So this is a way of including additional data, in addition to the sequence, by tiling and by quantitative hybridization. Now, the last topic today is time series. This connects to the quantitative data that we're collecting, where you're not just collecting an isolated condition and comparing it to some other condition. The order of the different conditions that you have actually matters. And this is a great advantage in analyzing causality, and we'll illustrate it in the context of messenger RNA decay, and finally in ways of aligning different time series data. Now, why do we want time courses? If we do a gene knockout or we do a gene deletion, by the time you isolate that mutant and characterize it and do the RNA, you've now gotten not just the primary effects, but all the downstream effects of that knockout. So the best would be to have some kind of conditional control of the transcription, so that when you first either turn it on or turn it off, the first events that occur are likely to be primary events.
Now, the way that you control that needs to not have too many perturbing forces on the whole system. So temperature shift-- it's an easy class of mutation to get, but it's not suitable, because there's a huge temperature effect on the entire system. Chemical knockouts can be more specific, but you need to prove that. An example of a fairly time-honored chemical knockout is rifampicin, which fairly specifically affects just the RNA polymerase. And so this is an interesting case, where the effect is to stop initiation of transcription. And so then, as we do our time series, what we see is that the RNA for LPP, which we showed a few slides ago, is very stable. It basically lasts longer than the doubling time of the cell, possibly many cell generations. And other RNAs, such as CSPE, have extremely short half-lives, on the order of 2 and 1/2 minutes. And you can compare various methods for quantitation, and come up with the different half-lives here. OK, so that's an example of a very significant class of chemically manipulated knockouts. You can precisely phase them, you have very few other consequences, and then you can measure a time series. It'd be nice to be able to do that for any particular RNA and see what the downstream consequences are. Now, whenever you do a perturbation where you have two time series, you want to know how all the RNAs behaved during heat shock or some other pulse of some chemical, relative to a pulse of a different chemical, or relative to the time series as it would have occurred without any. You can see how they won't necessarily line up point by point. You can't just start them at time 0 and expect them all to line up. In fact, you can't even expect them necessarily to line up with a uniform stretch. You might have to have piecewise stretching, where certain parts go faster than others. Now, this may hopefully click in your mind a connection to the dynamic programming, where we had two sequences of bases or amino acids.
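The half-lives from a rifampicin time course can be estimated with a log-linear fit, since first-order decay gives ln A(t) = ln A0 - kt, so the half-life is ln 2 / k. A sketch with a synthetic time course: the roughly 2.5-minute value echoes the CSPE example, but the numbers themselves are invented:

```python
import math

def half_life(times, abundances):
    """Estimate mRNA half-life from a post-rifampicin time course by a
    log-linear least-squares fit: ln A(t) = ln A0 - k*t, t_1/2 = ln 2 / k."""
    logs = [math.log(a) for a in abundances]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    sxx = sum((t - mt) ** 2 for t in times)
    sxy = sum((t - mt) * (l - ml) for t, l in zip(times, logs))
    slope = sxy / sxx           # equals -k for exponential decay
    return math.log(2) / -slope

# Hypothetical time course (minutes) for a short-lived RNA:
t = [0, 1, 2, 3, 4]
a = [100.0, 75.8, 57.4, 43.5, 33.0]   # decays with a ~2.5-minute half-life
print(f"half-life ~ {half_life(t, a):.2f} min")
```

A stable RNA like LPP's would show an almost flat log-abundance line, a near-zero slope, and hence a half-life longer than the cell's doubling time.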
And there, you wanted to expand or contract different sections of those sequences by inserting a placeholder. Well, it doesn't make quite as much sense here with time series to insert a placeholder, but you can do that. You can have a discrete block diagram-- here's series A and B in the middle upper diagram. Or you can have a more continuous function, where you try to warp more smoothly. Both of these are dynamic programming algorithms. The smooth warping is a little more complicated. The insertion-deletion one is exactly the same three conditions that we went through for pairwise alignments in dynamic programming. But this is partly to drive home how many different ways you can use dynamic programming. You can use it in HMMs, in multi-sequence alignment, and now for time series and gene expression. And you can see, from the literature on cell cycle, almost all of the time series data that we have so far actually don't align perfectly point by point, because you use wildly physiologically different conditions to get cells to synchronize, say, for cell division: starting an event using a mating pheromone, a small peptide that's released in the media that kind of controls the cell cycle and allows you to arrest and then release from arrest, or a temperature-sensitive mutant-- even though I maligned temperature-sensitive mutations just a moment ago, it is one of the most precise ways of getting synchrony of cell division. Cell division is a particularly good illustration, partly because we mentioned it earlier in the course. But also, if you think of any dividing set of cells-- many of the cell types that you'd be interested in are dividing: stem cells, microbial cells and so on. That is automatically a mixture of cells. If you mush them up and extract RNA, you're kidding yourself if you think this was a homogeneous population. If, on the other hand, you synchronize the cells, then you've removed one major variable that could confound.
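The warping alignment of two time series described above can be sketched as classic dynamic time warping, which uses the same three-way dynamic programming recursion as pairwise sequence alignment. The expression series here are made up:

```python
def dtw(a, b):
    """Dynamic time warping: align two time series by dynamic programming,
    stretching or compressing sections instead of inserting placeholders.
    Returns the minimal cumulative distance between series a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    # dp[i][j]: best cost aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Same three cases as pairwise sequence alignment:
            # match/step, stretch a, stretch b.
            dp[i][j] = cost + min(dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

# Same expression pulse, but the second series runs at half speed:
fast = [0, 1, 3, 1, 0]
slow = [0, 0, 1, 1, 3, 3, 1, 1, 0, 0]
print(dtw(fast, slow))             # small: warping absorbs the speed difference
print(dtw(fast, [3, 3, 3, 3, 3]))  # large: genuinely different profiles
```

Keeping a traceback through the dp table, as in sequence alignment, recovers exactly where one series had to be stretched relative to the other.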
Once synchronized, they are much more homogeneous cells that are in the same state, and the cell cycle can be synchronously isolated as a population. There may be other sources of heterogeneity, but you've eliminated a big one. In any case, you take these two data series. They have different time constants, different lengths, and even different warps. Now, you want to take the x's and superimpose them on the o's. And here's an example of that. They're both put together, and even though there may be little deviations for any particular gene, when you talk about the thousands of different genes, you get a very rich pattern. Lots of information, plenty of opportunity for smoothing out individual variations. But here, you get superimposed patterns. And here's the traceback that tells you exactly where the insertions and deletions, or the smooth warping, might occur to align these two different cell cycle data sets. So in summary, we've connected the multi-sequence alignments from last class to RNA structure, and how multi-sequence alignment helps you model it; an interesting class of RNA guide sequences involved in methylation, as an illustration of finding genes that don't encode proteins. And we talked about various quantitation methods, the errors that present themselves and their solutions, statistical methods for asking whether two distributions are related or have no difference in their means, interpretation errors about where RNAs start and stop, how you get alternative splicing, and finally time series data, which we will find very useful for connecting RNA and protein measures over time for analyzing causality and systems biology. OK, thank you very much. See you next time. Be sure to get your problem sets in to your teaching fellows.
MIT HST.508 Genomics and Computational Biology, Fall 2002. Lecture 4B: DNA 2: Dynamic Programming, Blast, Multialignment, Hidden Markov Models. The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license, and MIT OpenCourseWare in general, is available at ocw.mit.edu. GEORGE CHURCH: OK. Welcome back to the second half, where we'll talk about multisequence alignment, for starters. And I said that I would show this slide again. Before, it was to introduce how we would go about getting an empirical substitution matrix from distantly related protein sequences, such as distantly related immunoglobulin family members. Now, we would like to ask, how did we get that multisequence alignment? One way of thinking about it is as a generalization of the two-dimensional array that we had before, where we would have, say, two sequences, one horizontal, one vertical. Now, the third dimension is the third sequence. This gets harder and harder to visualize as the number of sequences goes up, but let's think about it in three dimensions for just a moment. And when you have a multiple alignment, you can think of it as dynamic programming on this hyperlattice, where the indels for any pairwise combination may not be optimal for the triple. But rather than go beyond the triple, let's look at a very simple dinucleotide alignment, and we will say that this is the optimal multiple alignment. You can see here that the multiple examples of AT anchor the A and T as being separate positions, even though normally, if you just did a pairwise alignment with a high gap penalty, there would be a tendency to line the A up with the T. You would not have these canceling indels. But in the context of the multialignment, you now have a different interpretation. So we want to generalize the kind of algorithms we've been using.
And again, this will be a recursive algorithm, where the score of a two-character string is defined in terms of the maximum over various shorter strings. So at the very top is the simplest case, where we have no insertions or deletions. And we just ask, what is the score of having V, S, A-- that is to say, this triple single-amino-acid comparison-- just like before we asked what was the score of having a V substituted for an S. Now, we're asking a V substituted for an S substituted for an A. Now, the number of different cases we have here-- before, for a global alignment, where k, the number of sequences, was 2, it was 3. Now k is 3 for a three-way comparison. And all possible subsets is 2 to the k minus 1, so in this case it's 7. So seven cases, and you can just walk through them. You can see the first one is no insertions or deletions. The next three are two insertions or deletions, in the three different ways that can happen. And then the last three are one placeholder, one of these dashes, which means that the other two sequences have insertions relative to the dashed one. So these are the seven cases for a three-way comparison. Now, as k grows, the space complexity-- the number of lattice points that you have to store somewhere, either in RAM, or disk, or somewhere-- grows as n to the k-th power, where the sequences are roughly n long and the number of sequences is k. And to compute each of those nodes will be on the order of 2 to the k power, because remember, I said that the number of subsets in general is going to be 2 to the k minus 1, or roughly 2 to the k. And so the time complexity is that you have to do 2 to the k comparisons per node. And there are n to the k nodes, so it's of the order 2 to the k times n to the k. Now, this is not a straw man. This is not some naive algorithm. This is using all the power that we developed for the pairwise comparison, and we're just generalizing it.
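The counting argument can be checked directly: at each node of a k-way alignment hyperlattice, every nonempty subset of the k sequences can advance by one character while the rest take a placeholder, giving 2 to the k minus 1 cases. A sketch:

```python
from itertools import product

def dp_move_cases(k):
    """All legal moves at one node of a k-way alignment hyperlattice:
    every nonempty subset of sequences advances ('x'); the rest take a
    gap placeholder ('-'). That is 2**k - 1 cases, exponential in k."""
    moves = []
    for pattern in product((True, False), repeat=k):
        if any(pattern):  # at least one sequence must advance
            moves.append(tuple("x" if step else "-" for step in pattern))
    return moves

for move in dp_move_cases(3):
    print(move)                    # the seven three-way cases from the lecture
print(len(dp_move_cases(3)))       # 7 = 2**3 - 1
```

For k = 2 this reduces to the familiar three cases of pairwise alignment: match, gap in one sequence, gap in the other.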
And so this is actually a hard problem. This does scale exponentially with k. And it's not like we only want to do k equals 2. There are very good reasons for inferring structure or function without experiments, just from sequence. And the larger k is, the more you can explore. It's like doing a huge mutagenesis experiment and exploring viable mutants. So we want to do multisequence alignments, so how do we deal with this? The way we deal with most non-polynomial calculations-- that is to say, in this case, exponential-- is we approximate. Now, you can get something that's very close to the true optimum if you know how to prune this hyperlattice. Remember, one of the examples I showed was you could take this band. If you know where the band should start and how wide it should be, you can essentially prune off many of the nodes without really losing any optimality. But you have to be very sure you know where to start it and how wide it should be. So it's optimal within those constraints. Then there are others which are more heuristic. They are not guaranteed to be optimal, but on the other hand, they don't necessarily require arbitrary pruning. And the two that we'll illustrate in the next couple of slides are a tree alignment, as illustrated by ClustalW-- by the way, pruning is illustrated by a program called MSA, which is short for multisequence alignment-- and a star alignment. And then later on, when we get into the transcriptome part of the course, we will talk about the Gibbs algorithm. So let's walk through ClustalW, and then a star algorithm. So here's progressive multiple alignment. And I think for most of you, if I had given you the luxury of thinking through during the break how you would do the multialignment, this might be the algorithm you would come up with. Almost always it makes sense to start with the pairwise alignments, because that is a solved problem, and we have fairly good scaling for that.
And so here, you take each of the, let's say, four sequences and do all pairwise alignments. And you get this 4 by 4 matrix. It's going to be symmetric, so you only have to do the diagonal and the off-diagonals on one half of it. And the best score is S1 with S3, which has a score of 9. And so you can construct a tree. Basically, we're starting to describe the method by which we construct a tree, such as that tree of life that I've shown a couple of times now. And so when you construct a tree, you take the two closest-scoring sequences, and you indicate them as terminal branches of the tree. And you connect them to a fork, a branch point. And the distance of each from the common ancestor is indicated by the length of these lines. And so the second-best score is S2 and S4. It's a little bit weaker similarity than S1 and S3. So you have these longer branches indicating greater divergence. And they're in their own cluster. Now, it turns out that the common ancestor for all the sequences, which would be the common ancestor of the common ancestors of the first two clusters, is represented by this final branching closest to the trunk of the tree, or the roots of the tree. And here, distance is this horizontal axis. And then, once you have this dendrogram, the next steps are aligning each of the sequences-- which you already had to have done in order to calculate the similarity matrix; again, these are pairwise alignments of S1 with S3 and of S2 with S4. Steps 1 and 2 were already done to get the similarity matrix. Now, step 3 is new. You align this alignment-- we'll call it the pair S1, S3-- with the pair S2, S4. And you could imagine continuing this hierarchical process. If there were additional sequences which are even more distantly related, let's say S5, you would take this alignment of S1, S2, S3, S4, and align it with the single sequence S5.
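The tree-building step just described can be sketched in a few lines. Only the S1-S3 score of 9 is stated explicitly above; the other scores here are illustrative, and real ClustalW converts scores to distances and uses a more careful clustering method, so this greedy pairing is just a sketch.

```python
# Greedy guide-tree sketch from pairwise similarity scores
# (illustrative numbers; only S1-S3 = 9 comes from the lecture).
scores = {("S1", "S2"): 4, ("S1", "S3"): 9, ("S1", "S4"): 4,
          ("S2", "S3"): 5, ("S2", "S4"): 7, ("S3", "S4"): 5}

def best_pair(scores):
    """Highest-scoring (most similar) pair."""
    return max(scores, key=scores.get)

first = best_pair(scores)  # join the closest pair first
# Drop pairs touching the first cluster, then join the next-closest pair.
rest = {p: s for p, s in scores.items() if not set(p) & set(first)}
second = best_pair(rest)
print(first, second)  # ('S1', 'S3') ('S2', 'S4')
```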
So you can see how you can align not only sequences, but you can align pseudosequences, which have these little indel dashes in them. So that's one method. This is a different method. And here, the premise is that you've got one sequence which is sufficiently close to all the other sequences that you can use it as an anchor sequence. And whatever indels you put in individually, pairwise, for that sequence can be propagated throughout the entire multisequence alignment. So here, we start the same way. Here, we have five sequences instead of four, but it's the same thing. You do all pairwise similarities, and you give a score. These scores are the scores that would have come out at the end of that traceback in the pairwise alignments. So this is not a pairwise matrix; these are the results of 5 times 4 over 2, or 10, pairwise alignments. Each of these boxes itself is the outcome of a full matrix on S1 versus S2, for example. And you can see from this set of scores that the best set of scores for any sequence is S1's: it has the best score to S2, and it has the best score overall against all of the sequences. And so we'll use S1 as the focus of the star geometry. And we'll say, OK, we've already compared every sequence to S1. We compared every sequence to every sequence. But let's focus on that. And now take wherever the indels were that were required to get the best score for S1 with each of the others, and have S1 in red in each case, and use that as the anchor. And so then in the multialignment, you take all the indels relative to the red one, and introduce them so that it's the anchor. So those are two radically different ways. And we'll get to the Gibbs sampling later. But the Gibbs sampling, just in a nutshell, is: in general, when you have a hard problem, where you can't comprehensively go through the entire space, what you do is sample it. You say, let's try a few things, and try to randomly sample it, and maybe even develop locally.
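Stepping back to the star alignment for a moment, its center-selection step can be sketched as follows. The scores here are made up for illustration (the slide's actual 5-by-5 values aren't reproduced), and the gap-propagation step is omitted.

```python
# Star alignment, step 1: pick the center sequence whose summed pairwise
# score against all others is highest (illustrative scores, not the slide's).
pair_scores = {("S1", "S2"): 9, ("S1", "S3"): 8, ("S1", "S4"): 7, ("S1", "S5"): 7,
               ("S2", "S3"): 5, ("S2", "S4"): 6, ("S2", "S5"): 5,
               ("S3", "S4"): 4, ("S3", "S5"): 5, ("S4", "S5"): 4}

def center(pair_scores):
    """Sum each sequence's scores over all its pairings; return the best."""
    totals = {}
    for (a, b), s in pair_scores.items():
        totals[a] = totals.get(a, 0) + s
        totals[b] = totals.get(b, 0) + s
    return max(totals, key=totals.get)

print(center(pair_scores))  # S1 under these illustrative scores
```

The remaining step, not shown, is to propagate each pairwise alignment's gaps into the center sequence so every other sequence inherits them.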
If, after randomly sampling, certain places look better, then look near there, find other solutions, and keep optimizing. That's the Gibbs in a nutshell. Now, we have explored the space-time-accuracy trade-offs. You can improve time by storing this pairwise or multisequence comparison in a matrix-- so you've actually made a trade-off where you've taken up computer memory in order to save time. And then if you're willing to sacrifice a little accuracy or a little comprehensiveness, then you can save even more time or memory. Now we want to use motifs, which are the sort of thing that you get out of local alignments, to find genes. And we're going to use the motifs and the finding of genes as a way of introducing a particular motif, the CG motif, as a simple example of a hidden Markov model. Now, how do we find genes? Genes have little bits of sequence at the beginning, in the middle, or the end, which are distinctive. They have distinctive properties, typically sequence properties. So at the beginning of the gene, before the protein-coding region or RNA-coding region, you'll have regulatory elements such as promoters and so-called CG islands. Now, remember the CG islands, because that's what we're going to use to illustrate the HMMs. The CG islands are basically an abundance of the CG dinucleotide. Of the 16 different dinucleotides, CG happens to be underrepresented in general in vertebrate genomes, and over-represented in promoter regions upstream from genes. And the reason is probably that they bind to transcription factors, and the transcription factors protect them from methylation, and thereby protect them from a mutagenic process that would otherwise cause them to become a TG. Now, that's the example of a distinctive sequence element that indicates the beginning of a gene, or just before the beginning of the gene. Within the gene, especially-- well, only-- if it's a protein-coding region, you'll have preferred codons.
These are preferences that are set by the particular abundances of transfer RNAs in the cell, as well as other constraints on the sequence. If you're in an organism that does RNA splicing, you'll have RNA splice signals, and they'll have distinctive sequence features. And if you have RNA splicing, then you will have to maintain the translational reading frame across the splice junctions. That's a hint. If you have multisequence alignments, then you can look for conserved positions and interspecies conservation. The ultimate cheat is if you have a cDNA, in the case of species that are spliced: then you can figure out the splicing just empirically, by actually sequencing the messenger RNA that encodes your gene. You know there's a gene there because you found it present in the messenger RNA population, and you sequenced it. Now, there are problems with each of these approaches. Promoters and CG islands are sort of degenerate. They're weak sequence signatures. There's a high variety, and they're used in combinations. When we're looking at preferred codons, we need a lot of codons in a row to see a preference over random sequences. Random sequences will also contain some of the same codons. And if you need longer ones, then you'll miss tiny proteins. And we'll talk about specific examples in just a moment. Similarly for RNA splicing, you can have weak motifs, again. And alternative splicing-- it's not like there's one specific splice that occurs in a particular gene segment. There can be multiple kinds. Conservation requires that you have the right species, that at least some of the species in your multisequence alignment are just the right distance-- not too close, not too far away. And cDNAs are great, if you have them. But if you have very rare transcripts, you need to have the right cell type, and the rare messenger RNA within that cell type. So let's talk about the sizes of proteins.
If you look here, I plotted the sizes of proteins in annotated genomes-- two of the first annotated genomes, the smallest eukaryote, yeast, and the smallest prokaryote, Mycoplasma-- and asked, what were the sizes of the proteins that are annotated? Proteins in quotes, because this is what humans and computer programs together chose to represent. This is not truth, necessarily. And you can see it goes out to over 900 amino acids. And if you go to humans, this would go out to tens of thousands of amino acids long for the largest proteins. But let's focus attention on the smallest proteins. How is it that it precipitously drops off at 100 amino acids? Why are there so few proteins that are short? And there are slightly more short proteins in Mycoplasma. Any guesses why there are so few? Why does it drop off at 100 amino acids? STUDENT: There are more but we can't find them? GEORGE CHURCH: Right, there probably are more. It's not that we can't, it's that the annotators chose not to. And why did they choose not to? They just agreed that they would stop at 100. That was getting too short. And this is what kind of illustrates why. Here, every genome has its own GC content, its own codon usage, and so forth. Here, we're just talking about the first-order percentage of GC versus AT. And the genetic code, theoretically and as observed, can restrict genomes so that they really can have a minimum of around 25% or 28% GC content, and a maximum of 75%. And essentially all genomes fall in that range, and yeast is around 39% or so. And then if you plot-- stop codons tend to be made up of As and Ts. The stop codons are TAG, TGA, and TAA. So if you have an AT-rich genome, you tend to run into a stop codon at random quite frequently. So if you have even a modestly long open reading frame in an AT-rich genome, that's very significant.
But if you have a GC-rich genome, then you can go for a long time at random without running into a stop codon, so it's less significant. So you need to have more codons in a row in a GC-rich genome in order to convince yourself. So it's usually somewhere in between. And you can see that there's this general trend: you need to have more codons in a row to convince yourself as the GC content, on the horizontal axis, goes higher. And basically, the place where you start getting too many false positives is around 100 amino acids. And so that's why the community just decided to cut off there. When we get to proteomics, we'll talk about ways that you can empirically, by mass spectrometry and so forth, find those small proteins. And genetically, of course, you can find them. Let's talk about the most extremely small ones, and ask whether these extremely small open reading frames are interesting. And I think the very extreme examples are very interesting. So the smallest that I know of is a pentapeptide, which is actually encoded in not just one, but many different phylogenetically diverse, large ribosomal RNAs. So here, ribosomal RNA normally acts as part of the translation apparatus, but here, it is acting as the messenger RNA as well-- presumably a separate molecule, possibly a degraded version of it. But in some way or another, the 23S rRNA encodes this pentapeptide, which is not just some junk-- you can have junk DNA, you can have junk peptides. But this one actually confers erythromycin resistance at low levels in wild type. It is not a mutant kind of peptide. It's the normal pentapeptide. Now, here's three examples that are related to one another. They have somewhere between 14 and 16 codons, and they have this very strange amino acid composition when you do the translation conceptually in the computer. Remember, tryptophan was a rare amino acid. Well, here's two of them in a row. That's pretty unusual. Here's seven phenylalanines in a short stretch.
And here's seven histidines right in a row. This is really bizarre. And it gets even more conspiratorial, because right after these seven histidines in a row, the next gene down is a histidine biosynthetic gene. And not only that, but about eight histidine genes in a row come after that. And the same thing with the phenylalanines upstream of phenylalanine biosynthetic genes, and this weird excess of tryptophans upstream of tryptophan biosynthetic genes. So what does this all mean? What it means, probably-- and there's actually quite a bit of experiments on this-- is that this is an excellent feedback loop, where you want to do feedback in the most relevant way. So here, if you want to know whether you need to make tryptophan, phenylalanine, or histidine, you ask whether there's enough of it around to do translation. That's very relevant. And so this has to be sensing the translation process itself. It's asking whether the transfer RNAs are charged up with amino acids enough that you're getting efficient translation. If you're not, then you'll pause here. That ribosome will hesitate, waiting for the right transfer RNA. And as it hesitates, this RNA changes. It's folding. And a series of events results in-- if the ribosome is hesitating, then the cell wants to express the biosynthetic genes downstream to make more of the amino acid, so the tRNA can be charged up. So you get this nice, little feedback loop: the hesitation causes a change in the RNA, which causes a change in transcription, and you make more of what you need. So I think these are interesting examples. And of course, if you knew in advance you were looking for runs of histidines, that would be great. But for other open reading frames, there may be a different story. And so you need to have methods for looking for very short motifs. So let's go back to the bigger question of motifs, and ask, how do we deal with them more rigorously? And the way we deal with them more rigorously is these profiles.
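Before moving to profiles, the earlier stop-codon argument (AT-rich versus GC-rich genomes) can be made concrete. This is a sketch under a simple independent-base model, not the lecture's exact calculation:

```python
def p_stop(gc):
    """Probability that a random codon is a stop codon (TAA, TAG, TGA),
    assuming independent bases with P(G) = P(C) = gc/2 and
    P(A) = P(T) = (1 - gc)/2."""
    at = (1 - gc) / 2
    g = gc / 2
    return at * at * at + at * at * g + at * g * at  # TAA + TAG + TGA

# Expected number of codons before a stop appears by chance is 1/p:
# AT-rich genomes hit stops quickly, so a modest ORF is already significant;
# GC-rich genomes can run much longer at random.
for gc in (0.25, 0.50, 0.75):
    print(gc, round(1 / p_stop(gc)))
```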
Now, what we're going to do is we're going to take a multisequence alignment. You now know how to do multisequence alignments. And we now want to capture that information, and deal with these position-specific profiles. Remember I mentioned the PSI-BLAST and other algorithms. You acknowledge that you don't have a generic substitution matrix for all positions in all proteins or all nucleic acids. You have a different substitution matrix for every position. Because one position might be, say, an alpha helix. We have one substitution matrix. And another one might be in a coil. So here, this is all about what motifs are all about. Each position has a different set of rules. So the first position in this tetranucleotide-- it doesn't care what it is. It can be A, C, G, or T. These are four different sequences, real start sites, that we've aligned, either manually or by computer. This is dead easy to do the alignment, but the interpretation here is the position upstream of the start codon doesn't matter. So your matrix down below is-- A, C, G and T each get a 1, which is a count. We could do it in terms of frequencies, percentages. We're doing it in terms of counts here because that's just a restatement of the data. The T and the G position at the 3-prime end of the codons are, in this small sample, invariant. And so they get a count of 4 for the correct base and count of 0 for all the alternatives, A, C, and G, for example, instead of T. And the A position is not quite invariant in this sample. GTG is a perfectly good start codon in, say, 1 sequence in 10 or 1 in 4 in this case. And so you get 3 and a 1. So this is the weight matrix or position-sensitive substitution matrix. This is more precise than, say, a consensus sequence or a single sequence from the sample. But it's not the most precise way of representing this. It's position sensitive, but we've lost the higher order correlations between positions. 
In other words, we've lost the dependencies of adjacent bases, or bases that are a few bases away. But let's see how this plays out, this position-sensitivity. This is another way of representing it-- an information theory version, where the full height of each of the bases is 2 bits. And that's the same 2 bits we talked about in the first lecture, since there are four bases. And this is the same motif, ATG. The T and G were invariant, or nearly invariant, in this larger sample-- a sample size now of not just 4, but more than 1,000 sequences. And again, A and G were the predominant ones. You can see a little bit of a T there in the first position. And then the base just upstream from the ATG is almost completely random. And so its information content is close to nil, and so it's 0 bits. Now, this is easy enough that you can just do a big search aligning on the ATG, which is a very striking thing, and look to see if there's any other residual information content to the side. And sure enough, you find this little blip of Gs and As, mostly, at minus 9 relative to the A of ATG at 0. And it turns out that-- again, experimentally verified-- the ATG motif binds to the transfer RNA, and the GA-rich motif actually binds to a ribosomal RNA sequence. And so then basically, the messenger RNA is coaxed into the right position on the ribosome, where the initiator tRNA can bind. So here's an example where you can do a multisequence alignment. Here are more than 1,000 sequences-- k equals 1055-- and remember, the exact algorithm is exponential in k. And you can find these motifs that have great biological significance. Now, once you've done the multisequence alignment and you've derived the weight matrix, this position-sensitive substitution matrix, now you want to be able to search the genome for these things. You know what a start motif looks like. You want to find them all.
And it wouldn't just be the ATG, it would be the full motif, including the GA-rich part. And the way you do that is to now take this weight matrix and ask, for each position-- we're scanning the genome, and we run into the sequence AATG. Now you want to know, how good a match is that to this weight matrix, which was taken from either 4 sequences or 1,000 sequences? And the way you do it is, for each position, you ask, what was the count in the whole learning set? And ideally this should be an independent test set you're trying this out on. Here, the learning set and the test set are the same. But basically, the A gets a score of 1, which is not going to be a big contribution, because at that position all four bases were equally represented. Then the second A gets a score of 3, and the T and G get a score of 4, for a total score of 12 for this particular tetranucleotide instance of this motif represented by this weight matrix. And then you can see that the top three sequences, which all have ATG, have the best scores. And the bottom one, GTG, even though it's a valid member of the learning set, was something which was underrepresented statistically. GTG tended to be less frequently encountered than ATG, and so it gets a lower score when you search the genome for it. So if you prioritize these, they would be prioritized in this order: 12, 12, 12, 10. So now the final topic, which concerns a very simple and short motif, the CG motif, which we claimed is over-represented in promoters in vertebrates. But before we talk about these very short motifs, let's talk about why we have probabilistic models in sequence analysis in general. And there are three main uses. One of them is recognition-- for example, the recognition that we've been doing: is a particular sequence a protein start? In other words, does it have a score which is statistically significant? That's basically what we were doing, very anecdotally, in the previous slides. Or another task is discrimination.
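The weight-matrix scoring walkthrough above can be reproduced directly. The four learning-set sequences here are a reconstruction chosen to match the slide's counts (one of each base at position 1, then ATG three times and GTG once), not the slide's actual sequences.

```python
from collections import Counter

# Reconstructed learning set: position 1 is anything; then 3x ATG, 1x GTG.
sites = ["AATG", "CATG", "GGTG", "TATG"]

# Per-position count matrix: one Counter per alignment column.
W = [Counter(col) for col in zip(*sites)]

def score(candidate, W):
    """Sum, over positions, the learning-set count of the candidate's base."""
    return sum(W[i][base] for i, base in enumerate(candidate))

# AATG scores 1 + 3 + 4 + 4 = 12, as in the lecture; a GTG-style site
# scores lower because G was underrepresented at the second position.
print(score("AATG", W), score("GGTG", W))  # 12 10
```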
We ask questions like, is this protein more like a hemoglobin or like a myoglobin? The first question is about one sequence relative to, say, a weight matrix. The other one is about two sequences-- or three sequences-- asking whether a particular protein is more like one than another. And in a database search, we would go through, and a question might be, what are all the sequences in the database that look like a serine protease? This would be asking for recognition multiple times, over and over. So here is the basic idea-- which will become a Bayesian idea in the next slide: assign a number to every possible sequence, such that the probability of that sequence given a model-- the jargon here, P of s given m, written s bar m-- is the probability that you would get that sequence given a model. So the model might be this weight matrix we've been talking about, or it could be something more complicated. So what's the probability that we get the sequence ATG, given the model, the full weight matrix model? And as with any good probability, as we mentioned in the first class, they should sum to 1. If you take the sum over s-- you sum over all sequences-- then the probabilities given the model should sum to 1. Now, that will be true for the p of the sequences given a model, summed over all sequences. We can also have the probability of a sequence in your population of sequences, irrespective of model. And those should sum to 1. And the probability of models in your collection of models, irrespective of sequence. And here's a very useful theorem, called Bayes' theorem. And this is completely general. It doesn't depend on models and sequences. You could just call it m and s, where m and s are just two things. And this is generally true: the probability of the model given the sequence is equal to the probability of the model, times the probability of the sequence given the model, divided by the probability of the sequence.
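Bayes' theorem as just stated is a one-liner in code. Here is a numeric sketch with made-up values (a model prior of 0.01, and a sequence 50 times more likely under the model than overall):

```python
def posterior(p_m, p_s_given_m, p_s):
    """Bayes' theorem: P(M|S) = P(M) * P(S|M) / P(S)."""
    return p_m * p_s_given_m / p_s

# A rare model (prior 0.01) reaches a 50% posterior when the observed
# sequence is 50x more likely under the model than in general.
print(posterior(0.01, 0.5, 0.01))  # 0.5
```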
And more jargon, but an explanation of some of these terms here: the probability of the model and the probability of the sequence are prior probabilities. These are probabilities which are not conditional. They do not depend on something else. When you have this little bar in the middle, it means that you have the probability of the model given the sequence. It's called a posterior probability. Now let's see what all this Bayesian stuff is useful for. Of the various applications, we had recognition, discrimination, and database search. So here's the example of a database search. We'll have two models: a model that we actually have a hydrolase, and a model that we have randomness. So we call this the null model, or n model, and m is the model that we're interested in-- hydrolases. So the null model is random bases or random amino acids, and m is the hydrolase model. So we want to report all the sequences where the probability of that sequence given the model is better than the probability of that sequence given the null model of random amino acids, and where that difference is significant-- significant by the delta between the null model and the probability under the model in general. So if we, let's say, do a database search where we have scoring metrics just like the ones we developed earlier in the talk, and we score random sequences, we'll get one distribution, in orange. And if we score bona fide hydrolases, we might get this distribution, in blue. And we're asking whether the probability of getting a particular sequence given the model that this is a hydrolase is better than the probability of getting that sequence at random, the orange. And you want that to be statistically significant. So you can rephrase this in terms of bits, or in terms of a significance level of probability of 5%, which is typically the case.
Now, when we're talking about the probability of a particular sequence, we can have deviations from randomness at the mononucleotide level, at the dinucleotide level, and so on. And rather than just dump this on you as a mathematical fact, I want to give you some biological rationale for why you can have nonrandomness at every order of a Markov chain, meaning every length of sequence. So the first-order chain, the lowest-order chain, would be mononucleotides. And you might have a bias where C would be rare, because the Cs mutate into Us. And in organisms that lack a uracil glycosylase, which would otherwise return it back to a C, Cs will change into Us, because it's a very common chemical reaction. It's called cytosine deamination. But a deoxy-U is an abnormal base. It's recognizable as an abnormal base, and most organisms have repair for it, but there are some that don't. And there's a tendency of those genomes to drift towards high AT content. The Cs disappear, and hence, take the Gs with them. Similarly-- well, a T next to a T in the presence of ultraviolet light will get mutated to something else. And if you can't repair that back to a T-T sequence, it gets repaired to something else, or it gets mutated to something else. And so you'll lose that particular dinucleotide out of the 16 possible dinucleotides. We've already mentioned that CG is rare. And the reason is that this is methylated for various regulatory reasons. And now, because it's methylated, even if you have uracil glycosylase-- which would take all the regular Cs that turn into deoxy-Us and turn them back into deoxy-Cs-- now a 5-methyl C turns into a T, and you can't tell that it's abnormal. T is a perfectly reasonable thing to get. And so every place you've got a methyl CG turns into a TG, and you tend to lose the CGs, unless they're not methylated. And we'll get to that. And similarly, you can have rare codons. And hence, these turn into rare triplets.
You can have rare tetranucleotides if, for example, you have a methylase whose recognition site is a pentanucleotide, and every time the bacterium sees the related CTAG-containing sequence, it says, oh, that must have been one of these methylation-deamination problems-- let's fix it up, let's make it the pentanucleotide. And CTAG tends to be underrepresented as a consequence. Similarly, very long stretches of As-- not just tetranucleotides-- you can get excesses of As due to the fact that messenger RNAs end in polyA. They get reverse transcribed, reinserted into the genome, and now you've got a polyA tract. Or you can get homopolymers in general by polymerase slippage. So all these things can cause biases. And I just elaborated on one of them here, the triplet bias, documented here as a 10-times-lower frequency of the rare arginine codon AGG than of some of the other arginine codons. So now let's talk about a Markov model. This is not a hidden Markov model yet-- in just a moment, it will be. It's a Markov model because the columns that we had kept independent when we were making profiles or weight matrices-- we said the two nucleotides, whether CG or AA or whatever, were independent-- we're now no longer going to keep independent. We will allow the model to recognize the co-dependence. Forget the pluses right now. Just assume they'll be explained when we get to the hidden part of this. So they're hidden for now. But what we're talking about is, what's the probability of getting an A given an A? We've got an A in the first, the 5-prime, position. What's the probability now of getting an A, dependent on that one? So we're recognizing that dependence. We've said that CGs are underrepresented in the genome as a whole, and over-represented in promoters. So this particular transition-- what's the probability of getting a G given a C in the 5-prime position-- is one of those conditional probabilities.
This is the Bayesian conditional probability that we had set up a couple of slides back. And so this particular arrow going from a C to a G is represented by this probability. And you can see going the other way is a different probability-- that would be p of C given G. And these little arrows that point back to the same node are, for example, the p of an A given an A. So this is an AA dinucleotide. And you can see there are 16 possible transitions, including four homopolymer transitions, AA, TT, CC, GG, and 12 transitions for the other dinucleotides. Now, what do we mean by hidden? We've got CG islands, where the CGs have been protected from methylation, and hence, protected from mutations. So they're fairly abundant. They're involved in regulation, in binding transcription factors. And these islands will be of variable length and just have an increased concentration of CGs. And then outside are the oceans, which are not protected. They're not involved in transcription, and they mutate. And they are very low in CGs. And you want to know where the island begins and ends, because that helps you know where the regulatory factors are. So now, the hidden part is: when you look at a new sequence, you won't know whether you're in an island or not. And so the Markov model that you have has to be different depending on whether you're in an island or not, but you don't know which you're in. So here is the hidden part. So you've got a Markov model for the transitions within an island. And in that case, you expect the CGs to be high, roughly the same as the other dinucleotides, possibly higher. And in the oceans, where they're lost, you expect the CG-- this particular transition from C to G-- to be low, and most of the other transitions to be normal, maybe taking up some of the slack. So there are 16 different dinucleotides in islands on the left. And there are 16 in oceans on the right. In addition, there's a whole set of transitions between islands and oceans. The genome is not just blocks. They're all connected.
And so you can make a transition from any nucleotide in an island to any nucleotide in an ocean. And so here's one that's illustrated, this dotted, brown line, where it says probability of a C minus-- meaning in an ocean-- given that you have an A plus-- meaning in an island-- in the 5-prime position. So that would be a transition point going 5-prime to 3-prime, from an island into an ocean, going from an A to a C. Aren't you glad that I picked a dinucleotide to illustrate this? OK, here's a real example. Here's an example where I've cut and pasted a very short sequence with only one ocean on the left, and one island on the right, in bold and capital letters. You're given this as a learning set. Somebody has, by hand, decided that the boundary occurs at this first CG dinucleotide. There are no CGs to the left, and there are three CGs to the right. And so when you make this table-- we'll call this an A table later on-- this A table has the transition from an A in the 5-prime position to an A in the 3-prime position. So that's the p A given A. And here's the CG dinucleotide, C to G transition, all in an island indicated by plus. And you can see that's quite frequent. And then below it, let's look at the same CG dinucleotide going from C to G in an ocean. And here it's unobserved in this little toy example that I gave you, so it's a 0. So 43% in this actual example-- and you can work the numbers out because it's all here-- and there's only one transition between islands and oceans, and that happens to be a CC, a C in an ocean going to a C in an island. And that gives us 0.2. And all the rest are 0s. Now, 0s are a problem, both for the CG dinucleotide in the ocean and for the transitions between oceans and islands. And the way you handle it is called pseudocounts. You basically say, what if we just missed finding that thing? 
We're going to add 1 to it, because however big the counts are, you can always add 1, and that gives you some feeling for the fact that you don't really have 0s there. You can't trust 0s. And there's an even more rigorous way of doing it, called Dirichlet priors, where you do these pseudocounts. And so you can see, you can actually calculate these conditional probabilities by hand in the privacy of your home, not while the hordes are waiting to get into the room, and you can recreate these numbers with that simple formula there. Now this is a real training set, based on 48 known islands, again annotated by some person. And you can see that in this A matrix, focusing on the entries that were 43% and 0 before, the more realistic numbers are 27% and 8% for an island and an ocean, respectively. Now we're going to plug these numbers in-- basically, I've cut off the transition tables, which are off to the right. Now let's use them to actually do an HMM, with the Viterbi algorithm-- remember we said dynamic programming is the hero, and we're going to end on this. In the recursion we have here, the Viterbi score-- l and k are the states. There are two states, island plus and ocean minus. And i indexes the sequence. Here, the sequence length is 4; i goes from 1 to 4. And the sequence we're testing is CGCG: is it in an ocean or an island? What's your guess? That's a pretty extreme case. But this is actually using the numbers from the previous slide, which were taken from real oceans and islands. And so you start out with the probabilities being equal-- it's equally probable that you can start at the C in any state. There are eight different states, and so we just divide: 1 over 8 is the starting point, or 0.125. And so there are two possible places it can be, and they're equally probable. It's in an ocean or an island, just given the C: 1/8. Now you make a transition, where you multiply this times the A matrix, A sub k l; say you're going from state 1 to state 1, from an island to an island.
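The pseudocount idea can be sketched in a few lines. This is a minimal illustration under my own conventions (the function name and the base-plus-label state encoding are mine, not the lecture's software): count every observed dinucleotide transition in a hand-labeled sequence, seed every cell with a pseudocount of 1, then normalize each row into conditional probabilities.

```python
def transition_probs(seq, labels, pseudocount=1):
    """Estimate the A table P(next state | previous state) for the
    two-label model: each state is a base plus '+' (island) or '-' (ocean).

    seq    : DNA string, e.g. "ataatCGCG" (hypothetical toy sequence)
    labels : string of '+'/'-' of the same length (the hand annotation)

    A pseudocount is added to every possible transition, so unobserved
    dinucleotides (like C->G in the ocean) never come out exactly 0.
    """
    states = [b + l for b in "ACGT" for l in "+-"]    # 8 states
    counts = {s: {t: pseudocount for t in states} for s in states}
    for i in range(len(seq) - 1):
        s = seq[i].upper() + labels[i]
        t = seq[i + 1].upper() + labels[i + 1]
        counts[s][t] += 1
    # normalize each row into conditional probabilities
    probs = {}
    for s in states:
        total = sum(counts[s].values())
        probs[s] = {t: counts[s][t] / total for t in states}
    return probs

A = transition_probs("ataatCGCG", "-----++++")
print(A["C+"]["G+"])   # island C->G: boosted by real counts
print(A["C-"]["G-"])   # ocean C->G: pseudocount only, small but nonzero
```

With pseudocount set to 1, every row stays strictly positive, which is exactly the "what if we just missed finding that thing" correction described above.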
And if you look back one slide, you remember there is a 0.27 for going to a CG dinucleotide. So the recursion here is: you multiply the emission, which is always 1, times the maximum of the previous Viterbi scores-- going from position i to i plus 1-- times the A matrix entry, which from the previous slide is 0.27. So the previous score was 1/8, and then times 0.27, you get 0.034. And if you started in an ocean and stayed in an ocean, it would already drop to 0.01. So you can see the better probability is already that you're in an island. And if you carry this all the way out through all four nucleotides of the tetranucleotide, you get a much higher probability of being in the island, 0.032, than being in the ocean, 0.002. Question. STUDENT: Do you know the basis for thinking that the context for a dinucleotide is either an ocean or an island, in other words, only two states? Why couldn't the context be five states? GEORGE CHURCH: OK. |
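The Viterbi recursion just described (emission always 1, new score = max over previous states of previous score times the A-matrix entry, with traceback pointers) can be sketched as follows. The transition table here is a placeholder with made-up numbers except for the island C-to-G entry of 0.27 quoted in the lecture; it is not the real trained matrix from the slides.

```python
def viterbi_island(seq, A, start=0.125):
    """Viterbi over two hidden labels: '+' island, '-' ocean.
    A[prev_base+prev_label][next_base+next_label] is the transition
    probability; emissions are 1 because each state encodes its base.
    start = 1/8 spreads the initial probability over the 8
    base-plus-label states, as in the lecture."""
    labels = "+-"
    V = {l: start for l in labels}        # scores after the first base
    back = []                             # traceback pointers
    for i in range(1, len(seq)):
        newV, ptr = {}, {}
        for l in labels:
            k_best, score = max(
                ((k, V[k] * A[seq[i - 1] + k][seq[i] + l]) for k in labels),
                key=lambda kv: kv[1])
            newV[l], ptr[l] = score, k_best
        V = newV
        back.append(ptr)
    label = max(V, key=V.get)             # best final label
    path = [label]
    for ptr in reversed(back):
        label = ptr[label]
        path.append(label)
    return V, "".join(reversed(path))

# toy table: every entry 0.05 except a strong island CG/GC and a weak
# ocean CG (placeholder values, not the trained numbers from the slides)
A = {b + l: {c + m: 0.05 for c in "ACGT" for m in "+-"}
     for b in "ACGT" for l in "+-"}
A["C+"]["G+"] = A["G+"]["C+"] = 0.27
A["C-"]["G-"] = 0.01

V, path = viterbi_island("CGCG", A)
print(path, V)   # the island interpretation should win for CGCG
```

Even with these toy numbers, the island path pulls ahead at the very first transition, just as the 0.034-versus-0.01 comparison in the lecture shows.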
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 2A_Intro_2_Biological_Side_of_Computational_Biology_Comparative_Genomics_Models_A.txt | The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: 101 or [? EBIO101, ?] all the other numbers. Usually we'll start these with a very brief, one-slide overview of the last lecture, with a slightly different angle on some of the topics, and then we'll go over a preview of today's. The last session was more of an emphasis on the biological side of computational biology, and this one will be more on the computational side-- but obviously we're trying to interweave them already. In the first one, we were specifically looking at the simplest components and the simplest systems in living systems and computational systems, and at self-assembly-- here defined broadly to include symbiotic relationships between different living forms and different human inventions. And self-assembly is a very critical part of biology. In the mathematics that we use, both symbolic and numeric, you must know about the approximations involved when we talk about the various theories going from the roughly subatomic level to the population level. There's an approximation at each level that subsumes the previous approximations within it. And when you represent floating point numbers in computers, there are further approximations, which can accumulate as you do calculations. And we saw some of the ways of coping with this, such as using higher precision arithmetic in Mathematica.
The other aspect of this was differential equations as a tool in studying replication, in particular autocatalytic systems, which were illustrated by the simplest possible differential equation, where the incremental increase in y as a function of time is a direct function of y itself-- a simple linear function of, say, the population size, its growth being proportional to it. Then we add the extra term, 1 minus y, to represent, rather than just the simple exponential curve that you get at the beginning of population growth with infinite resources, what happens as you get close to the maximum resources: here, at one, you start to either plateau, as you would if this were an ordinary differential equation, or start to oscillate, as you do when you solve it iteratively, which we illustrated. And if the rate constant k gets large enough, then you get chaotic behavior, where the population can go close enough to zero that you're effectively simulating an extinction. The issues behind replication and approximation came together in the concept of mutation, and in particular the mutations that might occur in single molecules, life being full of very important single molecules such as DNA. And here, even though you know there must be stochastics underlying this process, because it is a single molecule, there has to be some way of overcoming it, because we went through a calculation indicating that the 46 chromosomes in every one of the cells in your body have to be replicating quite faithfully, or else you'd be getting cancer. And that noise is overcome by the single-moleculeness of the DNA being compensated by the multi-moleculeness of energy-containing molecules such as ATP and associated proteins and so forth. So now we've brought together approximations and replication, and what replication leads to are pedigrees.
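The iterative solution recapped above is easy to reproduce: iterate the discrete logistic update y[n+1] = y[n] + k*y[n]*(1-y[n]) and watch the three regimes (plateau, oscillation, chaos) appear as k grows. A minimal sketch, with the function name and the specific k values my own choices:

```python
def logistic_iterate(k, y0=0.01, steps=60):
    """Iterate the discrete logistic update y[n+1] = y[n] + k*y[n]*(1-y[n]).
    Small k: smooth plateau at the carrying capacity (1).
    k around 2.5: persistent oscillation around 1.
    k around 3: chaotic wandering, dipping close enough to 0 to model
    extinction of the population."""
    ys = [y0]
    for _ in range(steps):
        y = ys[-1]
        ys.append(y + k * y * (1 - y))
    return ys

print(round(logistic_iterate(0.5)[-1], 3))     # settles at 1.0
osc = logistic_iterate(2.5)[-8:]
print(round(min(osc), 2), round(max(osc), 2))  # bounces well away from 1
```

The same recurrence with k near 3 wanders chaotically, which is the extinction-simulating behavior mentioned in the recap.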
Pedigrees are one example of the many directed acyclic graphs that we listed; we'll have more pedigrees today. In a certain sense, the ultimate pedigree will be shown today. And mutations can also be nicely modeled-- depending on the exact application of your thinking about mutations-- by the binomial, the Poisson, and the normal distribution, the normal being continuous and the other two discrete. And selection and optimality figure prominently in biology. You typically won't lose a bet betting on optimality, at least for exactly the circumstances to which the system has been subjected for millions of years. So this is the outline for today. Again, supposedly more on the biological side. We'll go through how purification has played a central role in the reductionist approach to biology and biochemistry, and how that purification is also the antidote to the reductionism, in that it provides a way of creating synthesis in going back up to systems. Systems biology is the second topic, and this is relevant to models, applications of models, and making the interconnections of the components, ultimately in a synthetic loop of discovery, recreation, and perturbation. Then there is the ultimate pedigree that I was talking about: the continuity of life, and how it applies to the central dogma as the illustration of one of the most robust algorithms that we have in computational biology, which is the genetic code-- a truly elegant discovery, and elegant partly because the biology underlying it is so amazing. Then we go into the issues behind how we get qualitative models from quantitative data, and then how we go from those qualitative models and fill them in again with the quantitation that's required for true simulation, prediction, and design. And then we end up again on mutation and selection, just as we did last time. OK. Familiar face-- the periodic table.
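The mutation distributions mentioned above can be compared numerically: for rare per-event mutations, the Poisson with mean lambda = n*p tracks the binomial closely. The rate numbers below are hypothetical, chosen only to show the approximation, not measured mutation rates.

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Exact probability of k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability of k events when the mean count is lam."""
    return lam**k * exp(-lam) / factorial(k)

# n replication events, each with a tiny per-event mutation rate p
n, p = 10_000, 2e-4
lam = n * p                      # expected number of mutations = 2
for k in range(4):
    print(k, round(binom_pmf(k, n, p), 4), round(poisson_pmf(k, lam), 4))
```

The two columns agree to several decimal places, which is why the discrete mutation models are largely interchangeable in this regime; the normal takes over as a continuous approximation when the counts get large.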
Just like last time, here are the elements that are involved in the three major biopolymers in the cell. DNA goes to RNA goes to protein is the central dogma, and all of those can be made of these six elements, plus counterions for the polyanions-- typically sodium, potassium, and others. And so a total of 19 elements can be found in almost all life forms, and I would say many of the other elements here that have at least some stable isotope in nature probably have some story that some living organism could tell if you were interested enough. It might make a good thesis project, or a good project for this class, to talk about what organisms do to either detoxify or use in some exotic way each of the stable isotopes on this chart. Now, most organisms do not use these in the elemental form. The ones in the middle here are the elemental forms. Oxygen, hydrogen, and nitrogen are commonly used as elemental forms, but there exist life forms for which none of these is really required. For example, oxygen is toxic to a variety of so-called obligate anaerobes, and nitrogen gas is only used by nitrogen-fixing microorganisms. Each of these elements exists in a gaseous form. They tend to be more reduced-- that is to say, more hydrogen-containing-- on the far left-hand side of the slide, and more oxidized on the right-hand side, and in the oxidized form they tend to be found often as salts. Carbonate, for example, is the salt form of carbon dioxide, the oxidized form that's fixed by plants and by photosynthetic bacteria in the oceans-- and, unfortunately, in its CO2 form, a major global-warming gas. And you can see that every one of the basic elements here can be obtained as a salt. Many organisms require much more complicated versions of this-- they require eating macromolecules, like steak-- and some may have even much more exotic requirements.
When we represent the elements in that periodic table, those entries were the hard work of chemists who had to purify each of those elements-- it wasn't sufficient to get hydrocarbons; they had to get pure carbon. But that was only the starting point, for determining not just the properties of the elements, but how to put them back together as molecules. And this reduction down to elements, and then re-synthesis back to molecules, was part of the proof that they really understood what the molecules were all about. And the same thing can be obtained in living systems. We typically start with molecules, which are covalent connections in some form or another. And when molecules non-covalently associate, they're called assemblies. Typically the assemblies that most biologists work with are assemblies of proteins, or of proteins and nucleic acids. But there are many, many kinds of assemblies, and these assemblies blur in their definition into organelles. Organelles are basically assemblies that start getting large enough that they can be seen under simple microscopy. Organelles are often, though not always, bounded by lipid membranes-- hydrocarbon-containing bilayers or multilayers. Anyway, the distinction between macromolecular assemblies and organelles is soft. And then cells are collections of these assemblies, again typically bounded by a phospholipid bilayer. And some organisms are cells: unicellular organisms are the vast majority of organisms on Earth-- single cells or simple aggregates of cells-- while multicellular organisms, such as yours truly and everybody in this room, can contain up to 10 to the 14th cells that are direct and regulated descendants. Now, what are examples of purification methods that are actually used on the road to computational biology?
We have chromatography, electrophoresis, and sedimentation-- very common ways of separating molecules, including protein molecules and assemblies. Actually, organelles and cells can be separated by these methods as well. By far the most common ways of separating protein molecules are chromatography and electrophoresis, and we'll see some examples in just a couple of slides. An incredibly powerful way of separating entities in general is represented here on the far right-hand side. This is clonal growth-- essentially an analysis of single molecules or single organisms. Each of these colonies, which might be growing on a Petri plate about the size of my fist, represents the growth from one starting cell to 10 to the 8th or so final cells, going through exponential growth, just as we discussed in the last class, until the point that they have depleted the local resources near them, or have produced enough toxic waste products, that they have slowed down their growth to form these colonies. In certain organisms they'll just keep growing until they get to the edges of the Petri dish, but this is much more general than just growing bacterial colonies. Almost any organism that has limited motility will form little clones like this-- it happens in various tree populations, for example. It also represents the ultimate purification. To get a molecule or assembly purified by combinations of different chromatographic, electrophoretic, and sedimentation steps, you might have to do several of these steps serially, in a row. While here, in one step, by essentially limiting dilution, you dilute the molecule of interest to the point where it's a single molecule.
Well, now, if they don't undergo clonal growth, this is not terribly useful, because it's very hard to study single molecules, even if they're well isolated from all the contaminating molecules. There are ways, and we will talk about them in the course, but ideally you need some way of amplifying them. There are ways to amplify nucleic acid molecules such that they exhibit clonal growth like this: either by putting them into a bacterium, so that the bacterium behaves in this manner, carrying along with it the artificial piece of nucleic acid you're interested in; or you can do it entirely with enzymes, so that the nucleic acids replicate and make these colony-like objects. Now, this is an idiosyncratic view of this process, by which we as scientists have gone through purification and are now returning to much less pure systems as the subject of our research. We'll call this three revolutions. In the pre-1970s, we had column chromatography-- so called, like the chromatography in that last slide, because literally the substances being separated were highly colored, as were those two dark bands in the last slide. Hence the name chromatography. It's really a separation by the properties of the solid phase and the properties of the mobile phase, with the molecules in the mobile phase being separated by differential adsorption to the solid phase. Gel electrophoresis and sedimentation in a gravitational field-- these were all part of this amazing revolution that allowed scientists to get molecules, assemblies, and cells purified away from other contaminants. Then recombinant DNA did the trick that was in the lower right-hand panel of the previous slide, which was going directly to purification by single-molecule isolation: diluting to the point where you had less than one molecule per cell, and less than one cell per square centimeter on the Petri plate.
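The limiting-dilution step has a simple Poisson sketch: if founder molecules (or cells) land on sites with a mean of m founders per site, the chance that an occupied site is truly clonal is P(k=1)/P(k>=1). The function name and the example m values are my own illustration, not numbers from the lecture.

```python
from math import exp

def clonal_fraction(m):
    """Poisson model of limiting dilution: with a mean of m founder
    molecules (or cells) per site, the fraction of *occupied* sites
    that grew from exactly one founder is P(k=1) / P(k>=1)."""
    return (m * exp(-m)) / (1.0 - exp(-m))

# dilute further (smaller m) and the occupied colonies become
# more reliably clonal -- the "less than one molecule per cell" regime
for m in (2.0, 1.0, 0.5, 0.1):
    print(m, round(clonal_fraction(m), 3))
```

This is why the protocol dilutes well below one molecule per cell and one cell per square centimeter: at small m, almost every colony that does appear is a single-founder clone.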
That gives you single-step purity of the gene, which in effect allows you to get single-step purity of whatever is encoded by the gene-- the RNA, the protein, or the enzymatic activity. This was a huge change. Suddenly everybody was spending a lot of time so-called cloning of DNA and sequencing it, in almost every thesis and paper of that time. Everybody was turning into a molecular biologist in order to do this, and it became very routine, and very time consuming and expensive. And so the third revolution was automating this and using economies of scale, so that all the genes were obtained at once and sequenced at once, rather than going through the entire library of all the genes just to find your favorite one, and then sequencing that one, working hard on isolating it away from everything else. With genome sequencing, it was more a process where everything you came upon was interesting, so you didn't have to spend as much time selecting; you just made it more of a production effort. But the subtext of this was not just automation and economies of scale. It also started to return us to thinking about whole systems and doing things systematically. And this was particularly valuable in the sequel of genome sequencing, which was functional genomics-- which we'll talk about quite a bit in this course-- applying the same attitude to other biological measurements. And this returns us to whole systems. Now, that leads us to the discussion of whole-systems modeling-- systems biology and the models therein. This is just one of the earlier papers-- there are many now-- in which we're trying to grope our way toward what we mean by systems biology, but, paraphrasing from that paper, we want to follow these four steps as a protocol. First, find all the components of the system.
Second, systematically perturb and monitor the components of the system. We can do this either genetically or environmentally-- meaning changing the small and large molecules that program the biological system from the outside. Third, refine the model-- the one you had, perhaps, before perturbing-- such that the predictions most closely agree with observations. Listen carefully to that statement: refine the model so that predictions agree with observations. And fourth, do new perturbation experiments to distinguish among the model hypotheses. We do this in a cyclic fashion, basically going back up to item two, so that we're perturbing and monitoring again. Now, what's the critique of this systems biology manifesto? Those of you who have read books that predate the genome project and systems biology will say, hey, what's new here? This is the way biologists were doing it even before recombinant DNA. So it is an old approach, but the new spin on it is the phrase "all components." Typically, before, the components would be chosen, and the perturbations would be chosen, based on the latest biological fad, or on what was available technically at the time, based on the history of the component studies before that. So it's a significant deviation to now even set all components as a goal. It's a very challenging goal: it has been met in the case of certain genome sequences, but it has not been met in any functional genomics effort that I'm aware of. But it certainly is a goal, and we're getting asymptotically close to it, just as we got asymptotically close to the genome sequences. To systematically perturb has the conceit that we can list all the perturbations we would want to do, and then walk through them in a systematic way rather than a more whimsical way. So those are the new spins-- but what's missing from this systems biology manifesto on the previous slide?
For one thing-- and I cautioned you on the previous slide-- when you start fitting your model to the data and refining your model, there's a problem of overfitting. And this will come up a couple of times in this course. If you have enough adjustable parameters, you can fit almost anything, and so you have to be careful, as you refine your model, to state exactly how many adjustable parameters you have and how many data points you have that are truly independent for fitting them. Next, we need methods to recapture unautomated data. There is a step implicit in the previous slide-- actually explicitly stated elsewhere in some of these papers-- that as you're developing the model, you will draw not only upon the systematically collected data, but also upon the literature. And the literature, as we'll see in the next couple of slides, not only has unautomated data; it has models that are derived in a variety of somewhat undisciplined ways-- or at least in a different discipline, one that is not electronically compatible. And so there's a process by which one captures this unautomated data and integrates it with the automated data, which can be either challenging or pathological. So we need to make more explicit the logical connections that are used for deriving these systems biology diagrams and quantitative models. Finally, when you do these perturbation experiments, there's a new optimization that needs to be made in order to integrate them with the systems biology loop. As I mentioned in the previous talk, the things that make the killer applications in computational biology so far are searching, merging, and checking. If you can find ways to search, merge, and check large data sets via models, then you've made a great deal of progress. And that should be the goal here too.
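The overfitting warning can be made concrete with a toy: give a model as many adjustable parameters as data points and it fits the noise perfectly, with zero training error, no matter how meaningless the fit. A pure-Python sketch (the names and the constant-plus-noise data are my own):

```python
import random

def lagrange_fit(xs, ys):
    """Return the unique degree n-1 polynomial through all n (x, y) points.
    n points, n adjustable parameters: the training error is zero no
    matter how noisy the data -- the textbook overfitting scenario."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

random.seed(0)
xs = [float(i) for i in range(8)]
# the "true" signal is just the constant 2; everything else is noise
ys = [2.0 + random.gauss(0, 0.5) for _ in xs]

f = lagrange_fit(xs, ys)
train_err = max(abs(f(x) - y) for x, y in zip(xs, ys))
print(train_err)          # 0.0 -- the model has "explained" pure noise
print(abs(f(3.5) - 2.0))  # off-grid error is whatever the noise dictated
```

The perfect training fit says nothing about predictive power, which is exactly why a refined model should report its parameter count against its number of truly independent data points.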
So what systems biology will do is, I think, possibly better illustrated by this slide from the last talk than by some of the examples so far in the systems biology literature. The goal is to be able to work with very simple parts-- this reduction down to the basic parts-- and then move up, through models that are hierarchical and include all the intermediate steps, to very high-level ways of describing and understanding a system. In this one you have the unfair advantage that the entire thing was designed from scratch by humans, but in biological systems you want to reverse engineer to the point where the system has some of the same flavor as [? board ?] engineered systems, and then develop ways of simulating the systems so that you can design new versions of them. Whenever you find yourself doing experimental or computational biology, you're going to be asking yourself, why am I doing this? Why am I in this classroom? And whenever you do that, you should ask yourself why a bunch of times, until you get down to the real core reasons. So, for example, we were in effect sequencing the genome prior to the genome project. We were spending hundreds of millions of dollars of NIH money-- every funding organization's money-- doing it in a very inefficient way. And we knew why we were doing it, I think. We wanted to map variation in sequences-- variation within a species, like humans, that makes us different from one another, and variation between different species, which is comparative genomics, item three. And in between, item two: we wanted to have a complete set of human RNAs, proteins, and regulatory elements-- and for every other organism too. I just use human as an illustration because it was called the Human Genome Project. And we wanted this complete set so that we could go back and measure them systematically.
Although this was not articulated in quite this way-- we didn't use the words "complete" or "systematic" very much prior to the Genome Project. And if we said these were the reasons, then we could ask: why do we want to map variation? Why do we want a partial or complete set of these various molecules and regulatory elements? And why do we want to compare them in different organisms and in different environments? And the answer would be that we want to make quantitative biosystems models, such as the ones we were describing in the last couple of slides, of the molecular interactions at all the levels extending from atoms to cells to organisms and populations of organisms-- because it's the population upon which selection acts, and it's the population that allows us to understand and make useful products. And so when we ask why we want to make these biosystems models, we have three reasons, all of which we touched upon last time when we asked, why do we model? Same thing-- why are we collecting all this data to model? We model so that we can share information, and so that we can construct a test of understanding. One of the tests of understanding is making useful products. So, on that theme of why, and of making useful products, I will put this in the context of, say, the projects that you'll be doing for this class. I'd like to stimulate you to think about this early on; in every class I'll mention something. And you might say, well, grand challenges-- grand and useful challenges-- are really not a great place to be doing a short-term project for a course. But actually I think that the piece you choose should be a piece of a grand challenge, because it really gives you the context. And so I'm just going to walk through these three classes of challenge-- not so you will feel limited, but so that you will feel broadened in your mandate for what you can do.
So at the simplest level, kind of reflecting the last lecture, we have the simple route going from atoms to small cells with small genomes-- maybe even minimal or miniature ones, just as you want to downscale electronic components. If we really want to show that we understand the biosynthetic route that will be the topic of today-- this really key mechanism by which we go from DNA to proteins-- we should be able to take it apart and put it back together again. That's one way of proving that we really understand it. That has not been done by a purely synthetic route. We have taken the protein-synthetic apparatus apart and put it back together again, but we have not completely synthesized the synthetic apparatus. That sounds odd, but that would be one step, and the impact would not be just proving that we can do it. The impact would be that we can now make a simple biological system that is self-replicating, uses proteins, and allows us to link atomic changes to population evolution. In populations of humans, this would be daunting computationally. But when you think of populations of self-replicating molecules, such as the simplest one from last week-- those trinucleotides being ligated into hexanucleotides-- you can start to conceive of actually connecting the atomic modeling to the population modeling, which is basically the breadth of this course. It covers the whole breadth, collapsed down to that simple model. But even more importantly, we can start engineering smart materials-- materials that have important properties, that in a certain sense compute in chemistry. We can make a whole alternative chemistry that is stereospecific, meaning sensitive to the actual handedness of the molecules, which is so critical in pharmaceuticals and in enzymatics in general. This can be engineered by getting a handle on the synthetic machinery of life. OK, so that's one challenge at the simple end of the spectrum.
That was going from atoms to cells; how about cells to tissues? Many of you are either already in the biotech or pharmaceutical industry, or feel that the research you might be doing as a graduate student would contribute to it in some way, however indirect. Think about the way you would program a computer. We might fill up this room with laptops and go pouring random chemicals on them to see if they then produce the graphical user interface that we desire. This would be the drug-screening approach to programming. Obviously, the way we actually program is that we work in the natural biopolymers of computers, which are the strings of zeros and ones represented in the computer, and we program those as long strings. So the equivalent in cells might be to manipulate the genome itself-- doing genome-level programming at the DNA or RNA level, via nucleic acids. This is not an either/or; this is probably something where one approach augments the other, by studying and programming at this level of detail. And doing this manipulation in stem cells is a growing avenue of research. This gives us access to cells that are capable of replicating and differentiating into almost any cell in your body, and rather than dosing your entire body with a drug, you can now specifically deliver a particular cell to a particular place and have it take its role as a replacement. And so this is the kind of programming I think we should be thinking about as a grand challenge. Remember, grand challenges are not going to happen tomorrow. You have to do some piece of it. Question? AUDIENCE: In the first bullet in B, it appears we don't really yet know what the function of a protein is going to be unless we know how it folds, right? GEORGE CHURCH: Right. AUDIENCE: So we might be able to model the sequence, but how would we model anything other than that?
GEORGE CHURCH: So the question is-- rephrasing it-- what do we know about a protein before we know its fold? And actually, historically, we knew much more about protein function than we knew about folding. We'll get to some of the definitions of function in just a moment, but you can study a protein biochemically in terms of what it binds to, what place it holds in the replication of the cell, and so on. We don't yet know the folding of most proteins, and part of the post-genomic era will be producing the three-dimensional structure and biochemical function of all the proteins. AUDIENCE: But here you're asking for changing that-- changing the genome programming, right? And then trying to-- GEORGE CHURCH: Of the parts that you understand. AUDIENCE: OK. GEORGE CHURCH: I mean, obviously we do engineer a variety of physical and biological systems without full understanding. An even grander challenge would be full understanding. Here we're trying to take subsystems that we do understand-- it could even be a highly integrated system where you model the entire system, but there will always be gaps in your knowledge, just as there are gaps in the human genome sequence. And the final illustration, letter C, goes up to the most complex systems that we're dealing with, which would be morphological systems, and even the populations that result from them. And here you can deal with morphology all the way from the molecular level up through the morphology of assemblies of cells-- how cells aggregate. And all this can be modeled and used to great effect, whether for smart materials or for replacements in human systems. So let's talk a little bit more about these components and how they're interconnected.
Whether we are taking these components apart or putting them back together again, we need to understand how many of them there are and how we're going to access them in databases. I'm illustrating this with three organisms that are nicely poised to show the extremes, on the left and right, and something in between. So this is Mycoplasma pneumoniae-- sorry, Mycoplasma genitalium-- one of the smallest living organisms, with one of the smallest genomes. Its genome size is a little over half a million base pairs. The worm Caenorhabditis elegans, one of the first multicellular metazoan organisms sequenced, is a little less than 100 million bases, and the human is at 3 billion bases. Neither the worm nor the human is completely sequenced, despite some possible indications to the contrary; there are quite a number of gaps in each. Many bacterial genomes are completely sequenced, including mycoplasma. As for the number of DNA molecules in each of these: you have one circular genome in many bacteria, though some have multiple chromosomes. The worm has seven kinds of DNA molecule and the human 25. Those of you who have studied biology, or who just listened in the last lecture when I said we have 23 pairs of chromosomes to segregate, may object-- nevertheless, there are 25 different kinds of molecules. I'll leave that as an exercise for you; you can ask me in just a couple of minutes. The number of genes encoded in these DNAs depends on your definition of a gene. We'll define it somewhat arbitrarily as a piece of inherited material that encodes one or more RNAs, where those RNAs share some of the same inherited material. In principle there is inherited material which is not nucleic acid, but for most intents and purposes, the genes that we'll be interested in do pass through RNA on their way to protein. And the number of genes in mycoplasma is roughly one gene per kilobase-- it's about 500 genes or so. The number in worms is estimated at around 20,000, and in humans it ranges from 30,000 to 150,000.
And there are betting pools on exactly how many there are, and there probably should be a betting pool on when we will know how many there are. It's been announced at various times, but to some extent the exact number will have some softness to it, because some of the genes will be of marginal utility to humans. They will have been of some consequence maybe many generations ago, but on a day-to-day basis it will be hard to detect the importance of whether that gene is present or not. In terms of RNAs, in bacteria you have a tendency to have more genes than you have RNAs, because the genes will be constructed in a series such that one RNA can read through multiple genes in an operon, and that operon will then make multiple proteins. So you might have slightly fewer RNAs than you have genes. And worms are an example of a multicellular organism that also has operons, where genes will be strung together. They tend to be shorter and fewer in number. But that doesn't just reduce the number of RNAs. You can then increase the number because you have alternative splicing, where the RNAs will be made up of multiple pieces called exons, which are stitched together by a specific biochemical machine-- the splicing machinery. And that can happen in more than one way. The exons tend to be in a linear order in the genome, but there are exotic mechanisms like trans-splicing, where you pull up an exon from a completely different part of the genome and then splice it in. So this number is larger than the number of genes, because one gene can produce multiple RNAs by alternative splicing. For proteins, there's additional diversification in that you can modify the proteins in various ways. In prokaryotes the number of RNA modifications is relatively limited, but the number of protein modifications starts to go up. You can have proteolytic modifications, phosphorylation of various amino acids.
And in multicellular organisms like worms and humans, the number of modifications reaches up to about 250 different modified amino acids, derived from the basic 20 amino acids, which we'll talk about in just a moment. The number of cell types in a very simple organism in a very simple environment might be as little as one. We don't really know how many cell types there are, but basically all the cell types for an organism like mycoplasma look fairly similar morphologically and probably functionally. On the other hand, the worm has 959 cells-- that's three significant figures, which is pretty good for biology. And these are non-gonadal cells, and the reason we know this so precisely is that the entire lineage, the entire division of all the cells, has been mapped out for this worm. And we'll show this a few slides from now. Humans, on the other hand-- not only are the lineages very poorly defined for most of the cell types in the human body, but even the number of cell types is unknown. Some people will estimate as few as 200 cell types. This is just a soundbite that is made up, as far as I can tell. Some people say 200,000. It's probably a safe bet that at any given time point there are fewer than 10 to the 14th cell types. This is probably not very reassuring for those of you who would like to, say, measure expression in all the different cell types, because 10 to the 14th expression patterns would be quite a number. In addition, you have various developmental stages where, let's say, as you grow from a single cell to 10 to the 14th cells, you pass through stages, and what may appear to be the same cell type at an earlier time point may have completely different gene expression. That is known, even though the total number of cell types is not. AUDIENCE: [INAUDIBLE] the RNA, do you know how many [? extra ?] [INAUDIBLE] to [? include ?] RNA [INAUDIBLE]?? GEORGE CHURCH: OK, yes.
I meant to caution you that the terminology here is used quite loosely. Gene expression is often used interchangeably with RNA expression, clearly. And then, in almost the same paper, they will refer to genes as protein-encoding genes, completely sidestepping a large number of RNAs which really are never translated into proteins, such as ribosomal RNAs, tRNAs, small nuclear RNAs, and a whole variety of regulatory RNAs, RNAi and so forth. This class of RNAs which stay as RNAs is becoming very important, so be careful when people use genes and RNAs interchangeably, or genes and proteins interchangeably. So this is an example of molecular morphology, a particularly elegant example that I illustrated last time. In a certain sense, the morphology of these two strands of DNA goes a long way towards explaining the inheritance and fidelity of the basic macromolecule which stores the information. What we're going to do is look outside of these bases, which form the base pairs stacked up along the core of the DNA, to look at how they're actually covalently attached and what the precursors are when we go from monomers to polymers. We're going to talk about polymer synthesis for the next few slides. So we want to get this exquisite base pairing here, which a recent article has argued is optimal: of all the different base pairs that could have formed in prebiotic times, these give some of the optimal alignments of hydrogen bonds. But the hydrogen bonds just guide the base pairing; the polymerization occurs through something not shown on this slide, but shown on the next slide. Here are two examples, two very similar bases for DNA and RNA, the monomers that are polymerized by enzymes-- polymerases-- to make DNA and RNA. So on the top is the deoxy-ATP and below is the ribo-ATP.
Ribo-ATP is distinguished not only as a precursor for the polymer; it's also one of the key biomolecules providing energy, or transmitting energy, from one part of the cell to another, from one machine to another. If you look at a network diagram of metabolism, ribo-ATP is one of the most highly connected nodes in that graph. It's connected hundreds of times in a graph where many things are connected once or twice. And this is the structure-- this is the base here in skeletal form and space-filling form. The space-filling form, aside from the colors, is getting to be a more accurate representation of the electron density-- sort of the electron density you might observe in crystallography or in quantum calculations. But the skeletal form allows you to see some of the hidden atoms a little bit better. You can see the nitrogens are color coded by blue, the phosphates by yellow, and the oxygens by red. Carbons are gray or black. And so what you see is that the only real difference between the deoxy-ATP and the ribo-ATP is this oxygen at the 2-prime position, which is missing in the deoxy form. They both share this ribose and phosphate-- which were what was not represented on the previous slide-- as the repeating backbone. You go from this 3-prime hydroxyl-- the numbering here, by the way, for the ribose has primes after it to distinguish it from the numbering of the bases. These were studied chemically and numbered independently by chemists, and so when they were found in the same molecule, you had to have a separate numbering. So that's the reason, throughout the rest of this course and probably the rest of your life, you'll be referring to things going from 5-prime to 3-prime. It's because the people studying the bases won over the people studying the riboses. Anyway, the last two phosphates just provide higher energy bonds, and thereby the equilibrium is pushed so that this splitting during polymerization is very favorable in terms of free energy. OK. Now, as we discussed-- so those were the nucleic acid components. The proteins that they encode, and the proteins that are required for the replication of the nucleic acid components, are made up of simple derivatives of glycine, which can be represented by full name, three-letter code, or one-letter code. You should learn the 20 one-letter codes because they're very valuable in this course and in bioinformatics in general. Again, the same color coding here. You have the nitrogen, then this central carbon-- the alpha carbon-- and this is a carboxyl group. So as an amino acid, this is a zwitterion, with a positively charged nitrogen and a negatively charged carboxylate. And the way you represent this in a computer-- you can either represent it as a pretty picture here, either skeletal or space filling. That's of course represented by zeros and ones, but it's not a very useful way for searching, merging, or checking, right? If you rotate it slightly in three dimensions, it's going to give you a completely different image, and it'll be hard to search. You could represent it as the coordinates, the x, y, and z coordinates, of each of these atoms. And that is something that you can search, but you also need to represent it in a way that captures the hierarchical structure by which these things form covalent bonds, so that you can recognize groupings of atoms into polymers, polymers into assemblies, and so on all the way up. And this is an example of such a hierarchical description, which would be recognizable to all the computer scientists here if they were comfortable with some of the biochemical terms. So here's the configuration of this thing we're calling glycine. By the way, there'll be a lot of jargon in this course for those of you who are computer scientists.
The point of this course is not to give you encyclopedic knowledge; it's more to flesh out the concepts that apply to both the computer scientists and the biologists in the group. And so if you don't learn a lot of facts, no one will hold it against you. But every time you see a piece of jargon and you think, I haven't had it defined, just call it Fred or George or something like that. It's an arbitrary name. It will be defined in the databases, and that's what the database is for-- keeping track of this. But you will have to understand the concepts, and the concept here we're trying to illustrate is different ways of representing the molecular definitions-- here by describing the syntax, as you would in breaking up an English sentence into its structure. So here you have a substituent of a backbone. In order to try to tie together all amino acids, we've made something that's a little nonsensical, which is to talk about the L backbone of the amino acid. All the amino acids except for glycine actually have a handedness. That is to say, if you hold one up to a mirror, it looks different from the thing you're holding in your hand-- if you take, say, a space-filling model and hold it up to a mirror. And the reason is that these two hydrogens here in glycine will be replaced by an actual side chain in the other amino acids. And if it comes off here, then it's a D amino acid; if it comes off of the other hydrogen, it's an L amino acid. So here you're saying natural amino acids are L amino acids, and so you want to take this L backbone and put a substituent on it. That substituent is HYD for hydrogen, and it's linked through carbon 1 to another hydrogen and so forth. Nil means nothing. And here's another way of representing it, slightly more compact. You can think of this as just one long string even though it's on multiple lines. Here's one where the bottom line is that
you've got this CH2 group, this methylene group, right in the middle, bounded by this positively charged nitrogen and this negatively charged carboxyl group. So you can see these nested, parenthetical ways of indicating the hierarchy. This allows you to search through complicated databases of compounds looking for shared properties-- say, of all the drugs that bind to a receptor, whether or not you know the structure of the receptor. If you know the structure of the drugs, you can do a structure-activity relationship. AUDIENCE: Question. GEORGE CHURCH: Yeah? AUDIENCE: What's the significance of the fact that you've got the amino group and the carboxyl group both at the end, and what is usually in the middle is brought up to the left? GEORGE CHURCH: You're asking why it's in a particular order? Well, you can think of this like your calculators. You can either enter things in the natural way, or you can do reverse Polish notation. Different ways of setting up a syntax make different choices. If you really wanted to research this, you'd look into the SMILES definition. This is a particular chemical definition, and they could justify this much better than I could. So there are 20 amino acids in the simple genetic code. There are 280 that are post-synthetic modifications of these simple 20 amino acids. We've been talking about glycine as the basic backbone, shown here in black on this slide, and in blue are the side chains that lend an amino acid its chiral nature, its nature that has a mirror image. And each of them provides the properties which are color coded here. Orange ones have the property that the side chains, and hence the amino acid and the protein, are hydrophobic. They try to get out of water; they try to bury themselves in other hydrophobic moieties, like other amino acids of this kind in the core of proteins, or in lipids, which are hydrophobic as well. Green are hydrophilic.
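To make the idea of a nested, searchable representation concrete, here is a toy sketch. The tuple structure and the `count_atoms` helper are my own illustration of the hierarchical-parentheses idea, not the course's actual notation and not real SMILES:

```python
# Toy nested (hierarchical) representation of glycine, loosely inspired by
# the parenthesized syntax on the slide.  Each atom is a pair:
# (element symbol, [child atoms bonded to it]).  Hydrogens are omitted.
glycine = ("N", [                       # amino nitrogen
    ("C", [                             # alpha carbon
        ("C", [("O", []), ("O", [])]),  # carboxyl carbon with two oxygens
    ]),
])

def count_atoms(node, counts=None):
    """Recursively walk the nested structure and tally elements."""
    if counts is None:
        counts = {}
    element, children = node
    counts[element] = counts.get(element, 0) + 1
    for child in children:
        count_atoms(child, counts)
    return counts

print(count_atoms(glycine))  # {'N': 1, 'C': 2, 'O': 2}
```

The point is that a tree like this, unlike a rotated 3D image, can be traversed, compared, and searched mechanically, which is what the nested parentheses on the slide buy you.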
Blue and red are also hydrophilic, but they're not only that-- they're also charged, the red ones being negatively charged, as in the red of oxygen, and the blue being positively charged, as in the blue of nitrogen. And the yellow are the sulfur-containing, moderately hydrophilic amino acids. You can have more than one chiral center of symmetry, like this mirror image. You can have two such centers, as in threonine. So now we're going to put these amino acids in the context of the central dogma, going from DNA to RNA to protein. We want to illustrate this in the case of a very complicated machine and a very elegant and simple algorithm. This algorithm is simple because biology largely cooperates with us. The code, unlike many codes in biology, is fairly universal, found in almost all organisms in the same form. And it's very strict, with relatively few exceptions, where you have three nucleotides which encode one amino acid. So there are 64 possible trinucleotides, 4 to the third power, and most of those trinucleotides encode some amino acid. The exceptions are the stop codons, indicated here by the little dashes in this table. And so let's just go through the table, because that's the algorithm part of it. We have the color-coded amino acids in here in single-letter code. Remember, orange is hydrophobic, green is hydrophilic, blue is positive, and red is negative amino acids. And what we have as an example would be AUG on the messenger RNA-- this is going from DNA to RNA-- which is decoded by a complementary trinucleotide on this transfer RNA. Here it's unfolded, but in reality, and in the last lecture and in the next couple of slides, you will see it more folded up.
But this is unfolding it to show the 76 nucleotides or so of the transfer RNA, which has been preloaded by an enzyme that is truly the miraculous part of the genetic code-- one of the aminoacyl tRNA synthetases, which recognizes the transfer RNA, recognizes this methionine away from all the other amino acids, and puts it on the right transfer RNA. Once that's done, then the rest is base pairing. Mostly very Watson-Crick base pairing, or something like it, where the first two positions dominate and the third one can wobble. Here a G and a U is not part of the ATGC canon of Watson and Crick, but it's close enough, and it allows some ambiguity at this third position. So for example, not only does UUU encode phenylalanine, but UUC does also-- here the triplet in this table is UXU, where the X is U, C, A, or G along the top of the table. And so you basically look up the trinucleotide in a table like this in the computer, and you can find the corresponding amino acid. This allows you to go from a DNA sequence to an RNA sequence to a protein in the computer. OK, so that sounded too simple. Well, I'm going to give you a couple of slides that illustrate why it's more complicated. First of all, why it's more complicated biochemically: not only do you have that amazing protein molecule-- sometimes one or two protein subunits are sufficient-- to take these tRNAs, here encoded by red and green, where one amino acid is going to be added to the growing peptide chain on the other, and the two business ends of the molecule that are responsible for handing off amino acids, where they get coupled together in this polymerization reaction.
These two transfer RNAs have been properly charged by the aminoacyl tRNA synthetases, but then they require this truly huge apparatus, one of the largest molecular machines in the cell-- arguably as large as or larger than any other-- which allows the messenger RNA, not shown, to bind to these two trinucleotides in the tRNAs. And then a catalytic reaction occurs where the amino acid is shifted from one tRNA to the other, making a growing peptide chain. And this chemical reaction here, shown with all these circles and arrows for the chemical bond rearrangement, is actually catalyzed by an RNA. This is the second RNA catalyst that we've talked about in this course. The first one was briefly mentioned on the subject of replication in the last class, where you can find RNAs that can be engineered-- and probably existed in other scenarios-- which can replicate using small molecule precursors. So by far most catalysts that we will be dealing with will be proteins. But in order to get the proteins we need this really complicated ribozyme-- RNA enzyme-- to catalyze this. Here the white are the base pairs, the Watson-Crick and non-Watson-Crick base pairs of the RNA; the gold is the backbone riboses and phosphates of all of those RNAs; and the blue are the proteins, which you can see are mostly out at the periphery, not involved in either the enzymatic reaction or the recognition reaction which does the decoding at the trinucleotide codon level. This is made up of a total of three RNAs. Here you see two of the RNAs in the large subunit. There's a small subunit that fits on top of this, which would mask the reaction we are interested in. There are over 50 different proteins, and the complete three-dimensional structure is known from a variety of different organisms now. OK. Now, after the break, we will take this table that we had, and the complex biochemical machinery we had, and turn it into a program that does the central dogma from DNA to RNA to protein.
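As a preview of that exercise, a minimal sketch of the table lookup might look like this. Only a handful of the 64 codons are filled in for brevity, and the function names are my own illustration:

```python
# Sketch of the "central dogma" lookup: transcribe DNA to mRNA, then
# translate codons via the genetic code table.  Partial table only.
CODON_TABLE = {
    "AUG": "M",              # methionine, also the usual start codon
    "UUU": "F", "UUC": "F",  # phenylalanine: the third position "wobbles"
    "GGU": "G", "GGC": "G", "GGA": "G", "GGG": "G",  # glycine
    "UAA": "*", "UAG": "*", "UGA": "*",              # stop codons
}

def transcribe(dna):
    """DNA coding strand to mRNA: just replace T with U."""
    return dna.upper().replace("T", "U")

def translate(mrna):
    """Read codons in frame until a stop codon; '?' marks unknown codons."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

mrna = transcribe("ATGTTTGGATAA")
print(mrna)             # AUGUUUGGAUAA
print(translate(mrna))  # MFG
```

With the full 64-entry table, this is essentially the whole algorithm; the biochemical machinery on the slide is what makes the table lookup physically happen.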
So take a brief break and come back and we'll talk about this.
MIT HST.508 Genomics and Computational Biology, Fall 2002. Lecture 5A/B: RNA 1 -- Microarrays, Library Sequencing and Quantitation Concepts. The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license, and MIT OpenCourseWare in general, is available at ocw.mit.edu. GEORGE CHURCH: We've come to the fifth lecture, the first of a series, on RNA and expression analysis. First, a quick review of last week-- and it has significant connections to this week-- where we talked about the topic of alignments and different algorithms for obtaining pairwise alignments, in particular dynamic programming. This led to an even harder problem, which is multi-sequence alignment, from which we will draw pretty heavily at the beginning of the discussion today. And then the issue of getting the motifs, another topic we will touch upon several times today: once you have a multi-sequence alignment, how this gives you either an independent weight matrix, where different positions are independent, or a hidden Markov model, where there is some dependence between, say, adjacent nucleotides in a simple sequence such as CG. So let us carry over these thoughts about multi-sequence alignment, motifs, and non-independence of positions in sequences to the next level higher. We will eventually talk about protein three-dimensional structure. But a really beautiful intermediate between the complexities of protein structure and the simplicity of double-stranded DNA is the folding of RNAs, because they're based on the same rules as double-stranded DNA, but they begin to have the complicated structures of proteins. So we'll start with this integration of multi-sequence alignment and motifs with RNA structure, and then we'll switch to talk about how these RNA structures play their role by being present at different levels in the cell.
In other words, we want to start to introduce how we become quantitative about the amounts and localization of RNAs in the cell. Some of the measures that we'll be talking about, and the computational tools, will be more appropriate for individual measures, and others will be more, what we call, genomics grade-- high throughput and high accuracy. And since this is a new category of biological data, we have to address random and systematic errors, just as we did for genotyping and sequence data. This is a new set of them and a new set of solutions, around the same themes of random and systematic errors. And then we'll talk about a particular set of interpretation issues that lead to additional considerations. And we'll end on time series data, which will be a theme that connects this talk to much later talks on systems biology, where the ability to collect a time series will help establish causality and connectivity. And we'll tie it to the subject of RNA analysis by looking at messenger RNA decay. Now, slide three is a reminder that we'll use in two different contexts tonight. First, these are the bell curves, at least three of which you've seen integrated before-- the two discrete ones, binomial and Poisson, and the normal, which is symmetric around the mean of 20 in this case. And just to connect to the discussion from last time, where we were asking what the significance of a match of a single sequence to a database might be: when you're asking for a match of a single sequence to a database, you're typically really asking for the maximum match, or the most extreme matches. And so when you talk about extremes-- when you're sampling from a distribution and you're looking for the most extreme value over finite sampling of that distribution-- that tends to follow not the normal distribution, which would describe random sampling and which is this middle magenta curve.
But instead, it would be the extreme value distribution, which is this blue curve, which you can see-- in this case, since we're looking for the extreme maximum-- is shifted slightly to the right. And so you can see that it comes inside of the other bell curves on the left-hand side and goes outside on the right-hand side. If we were looking for extremely low values, then it would have been shifted to the left. And remember, all these continuous functions go off to negative infinity and positive infinity, although at extremely low levels. And so that's the extreme value distribution. Now, in order to connect this nucleotide sequence-- which, as we've been seeing, has these wonderful Watson-Crick base pairs as in DNA and so forth-- to this more complicated tertiary structure, we're going to go through an intermediate of secondary structure, where we really look at what kinds of base pairs can form. And I'm going to immediately introduce some complexities so you don't get too complacent right off the bat. Somewhere in this slide of non-Watson-Crick base pairs is a Watson-Crick base pair. Take a moment to find it. Fine. OK. So since we haven't introduced base pairs-- well, we've seen them twice now in double-stranded DNA. Right in the middle here, labeled A, is an AU base pair, which is Watson-Crick, where the black dot indicates the attachment to the ribose in RNA, or to the deoxyribose in DNA. And you'll find three other AU base pairs: one right to the right of it and two down below. And these four AU base pairs are all different from one another, most easily imagined in terms of the orientation of the riboses, these black dots, relative to one another. And they each have names. But the important thing is that all of these are illustrated such that they maintain the coplanarity of the bases. They typically maintain one, or two, or even three hydrogen bonds.
And the planarity allows them to stack on base pairs below and above them, just as you would in normal double-stranded DNA. But sometimes the geometry distorts the double helix enough that you might get a penalty in the free energy, in the thermodynamic sense, or in the kinetics. Now here's another. So you can basically make one of these base pairs for all of the possible combinations of A, C, G, and U in RNA. And all of them will be coplanar, and they will have one or more hydrogen bonds. Probably the most stable, and most commonly encountered in otherwise normal RNA double helices, is a GU base pair. And you can see that this has fairly similar geometry to the AU base pair, or for that matter the GC base pair. So let's see how these non-Watson-Crick base pairs appear. And this is the transfer RNA that we saw a couple of slides ago, spinning around in three dimensions. And the sequence that was behind it was the DNA sequence that corresponds to this unmodified RNA sequence on the right-hand side. You can think of this as four fairly canonical Watson-Crick-type RNA double helices, which are slightly different from DNA double helices. So we have seven base pairs, six of which are Watson-Crick, in the top stem, starting at position number one at the 5-prime end and ending at position 72. So one and 72 is a GC base pair. And you can see that the anticodon, where it meets the messenger RNA, is at the bottom right-hand part of the slide. And if you just look at that loop, you've got a seven-base loop and a five-base-pair anticodon stem. Each of these stems has some distinguishing features; the number of base pairs ranges from four to seven. You've got a GU base pair in the middle of the top stem. And you've got little sequence boxes which are fairly conserved, such as this T psi C G-- which, actually, in its original unmodified form, is a U U C G.
And the T and the psi, or pseudouridine, are examples of modified bases, which are shown on the left side of the slide. You can see there are quite a number of them. You can add a methyl group, a CH3 group, to either of the bases, such as one methyl group on G. Or you can add them to the riboses, such as the 2-prime methyl groups, which can go on any of the four bases because they modify the ribose, which is generic. Most of the other ones are very specific to a base. So for example, dihydrouridine is a modification that can only occur to uridines. The pseudouridine-- similar thing. And so on. So each of these requires an enzyme. And we will highlight one of the enzymes that's involved in putting the methyl groups onto the sugars, the 2-prime O-methyl groups, in just a few slides. But right now, what we want to ask is how we get this folding structure. Now, this is not the three-dimensional structure. This is the intermediate between the primary sequence and the fully modified, fully folded three-dimensional structure that we saw spinning around a couple of slides ago. So the first thing is, you can try folding this by opposing each base in turn to look for possible matches. And then what we did historically-- this is in the mid to late '60s-- was to take each new transfer RNA sequence and ask whether it makes a decent fold, in this simple planar representation, that is related to the previous ones, under the assumption-- the hope-- that there would be some conservation not only of sequence of some of these motifs, like the T psi C G, but also of the way that it folds up. And you might even hypothesize that maybe it doesn't matter what the sequence is in some of these stems. What's important is that it's capable of forming a stem-- that position one is complementary to position 72. If position one were to change from a G to an A, then position 72 should change from a C to a U. So how do we formalize this?
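The complementarity test just described-- Watson-Crick pairs plus the common G-U wobble pair-- can be sketched as a simple membership check. The function names here are illustrative, not from the course:

```python
# Which RNA base pairs count as stem-forming?  Watson-Crick pairs plus
# the G-U wobble pair, which is stable enough to appear in normal helices.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}  # last two: wobble

def can_pair(x, y):
    """True if bases x and y can pair in an RNA stem."""
    return (x, y) in PAIRS

def stem_ok(seq, i, j):
    """Could positions i and j of seq (0-based) pair, e.g. 0 and 71 in a tRNA?"""
    return can_pair(seq[i], seq[j])

print(can_pair("G", "C"))  # True
print(can_pair("G", "U"))  # True (wobble)
print(can_pair("A", "G"))  # False
```

A check like this is the basic move in the hand-folding procedure the lecture describes: oppose position 1 against position 72, and so on down each candidate stem.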
How do we formalize the process by which we generate this so-called cloverleaf structure, or any similar folding pattern, for small nucleic acids? And what are the limitations of those algorithms? And these dotted lines you see are some of the non-Watson-Crick base pairs. Some of them will stack. Some of them will actually form the hydrogen bonds of a Watson-Crick base pair, but they won't otherwise have the rest of the geometry. And you can see that some of these will provide connections between two loops which are separated by stems. And this kind of folding back means that it's not a simple set of helices. So the way that we formalize this is we say that if position number one is bound to position number 72, and the exact sequence maybe isn't as important as their ability to pair with one another, then you expect that if you take a large number of transfer RNAs and do a multi-sequence alignment, as we did last time, then in that multi-sequence alignment, when the G changes, the C will change too. And that's called covariance. And if you look at the vertical axis, the maximum that can be achieved is the same kind of maximum that we had in the motifs last lecture. The motifs we've actually had a couple of times. It can get up to two bits. Two bits is the full scale for a base, which can have four different values: A, C, G, or T. And we're calling this mutual information; it has the same units, a full scale of zero to two bits. And so we see along the horizontal axis here-- call them positions i and j-- which range from one to 72, which is the core part of the transfer RNA. The last four bases are added by a specialized enzyme. But position number one and position 72 covary, giving this peak in the far left-hand region. In effect, there are seven peaks in a row there, which correspond to the seven stacked base pairs; they covary.
Similarly, in the T psi C stem that we talked about a couple of times, those five nucleotides covary as you would expect in a stem. The anticodon stem is another five. And the D stem, so named after the dihydrouridine modifications, is four base pairs. And the way that this is derived-- and we're going to work through an example in the next slide-- but just as a labeling of this axis here: the mutual information between the ith base and the jth base-- that is to say, for example, between i equals 1 and j equals 72-- is simply the sum, over the observed pairs of bases, of the frequency of getting that particular i and j-- f is the frequency of getting, say, a G at position one and a C at position 72-- times the log base 2. Remember, when we're talking about information content of nucleotides-- or of information in general, bits in a computer or nucleotides in a sequence-- we do log base 2, as introduced by Shannon and others. So it's going to be the log of that same frequency-- the frequency of getting that particular i and that particular j type of base-- normalized now to how frequently those two bases occur independently at those positions. In other words, you know how often they co-occur at those two positions; now, how often do they occur independently of one another? And that's what the denominator is here. So when you take this ratio, you put it on a log scale, and then you have something that's analogous to the p log p of information theory. And you sum over all of the observed bases at positions i and j. That's for a particular i and j-- you sum over all the x's that occur at, say, positions one and 72. And then you repeat that. You can get this M of i and j for every matrix element going from one to 72, in a symmetric square matrix. So let's work through this for two extreme examples: the extreme case where you have perfect covariance and the extreme case where you have no real association. So we're going to illustrate this with a toy multi-sequence alignment here.
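Written out, the quantity just described is:

```latex
% Mutual information between alignment columns i and j, in bits:
% f_{ij}(x,y) = frequency of base x at position i and base y at position j
%               in the same sequence;
% f_i(x), f_j(y) = marginal frequencies of each base in its own column.
M_{ij} \;=\; \sum_{x,\,y \,\in\, \{A,C,G,U\}} f_{ij}(x,y)\,
        \log_2 \frac{f_{ij}(x,y)}{f_i(x)\, f_j(y)}
```

It reaches its maximum of 2 bits when the two columns covary perfectly, and is zero when they vary independently (or when one column is constant).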
This is just the same way we did the multi-sequence alignment in the last class. Here there are no insertions or deletions, but it's the same thing. You could derive a weight matrix for this. And you would see that the first column-- the far left-hand column, I equals one-- has all four possibilities, and so does the rightmost column of that multi-sequence alignment, J equals six. So let's calculate: are these covarying in this simple multi-sequence alignment of four sequences? So we calculate the mutual information for I equals one, J equals six: M of one, six. It's going to be equal to a sum. The first term in the sum is for the AU. And then we're going to walk through the CG, GC, UA. So there will be four terms in the sum. Each of the terms will, coincidentally in this case, have the same frequency for that particular pairing of AU. And remember, this is not a base pair. This is a covariant pair of nucleotides that could have been anywhere in the sequence. We happen to pick the first and the last base. So they all have the same frequency. That frequency is 1/4. So the AU occurs in one quarter of the four sequences in the multi-sequence alignment, so that's one quarter. And then, remember, it's the same frequency inside the logarithm, but now in the denominator, we're going to normalize it to the frequency that the A occurs in the I equals one position, which is one quarter, and the frequency that U occurs in the J equals six position, which is one quarter. So that's one quarter over one quarter squared, or four. And log base 2 of four is going to be two. And 0.25 times two is going to be 0.5. And that's the first term. That's for the AU pairing. If you go down through all four terms, they all end up having the same form. The frequency is always going to be 0.25 for the pair and 0.25 for each of the individual bases. So you end up with four of those terms, one for each of the four cases. And so four times 0.5 is two.
So that's consistent, hopefully, with what you would have expected for perfect covariance. You're getting the full information content, the full range of two bits, and so that's what we achieved. So now, as a control, and just as further confirmation that we actually understand this, let's work through the example of comparing I equals 1 with J equals 2, the first two columns. And here, you're familiar with I equals 1. J equals 2 is always C, and so it's not covarying with the first column as in the previous example. So let's just work through it the same way. So the first term in the series is 0.25 again, because the AC pair-- not a base pair, but a pair of bases-- occurs only once in the four sequences of the multi-sequence alignment, so that's 0.25. Then you have the logarithm base two of that same 0.25, now normalized to the frequency of the A in its column, the one column, which is 0.25, and the C in the J equals 2 column, which is always there, so it's unity, one. So now that's the big change here: instead of having 0.25 in both of the denominator terms, it's now 0.25 times one. So it's 0.25 over 0.25, and you have the log base 2 of one, which will be zero. And that zeros out the whole term. And so you have mutual information of zero, as you would expect from this particular toy example where columns one and two do not covary. And so this is the same formula that was in the previous slide, and a generalization of the one that we walked through term by term. And here's the reference for that. So now how do we go-- so we've taken now hundreds, possibly thousands, of transfer RNAs. We've done a multi-sequence alignment. We've produced that mutual information pattern that we saw before with the one-to-72 by one-to-72 comparison, where you got the spikes at each of the double helices. Now how do we turn that into more general practice?
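The two hand calculations above can be checked directly in code. This is a minimal sketch; the four toy sequences are made up here, but their first and last columns covary perfectly (like I = 1, J = 6 in the lecture's toy alignment) and their second column is invariant C (like J = 2):

```python
from collections import Counter
from math import log2

def mutual_information(alignment, i, j):
    """Mutual information (bits) between columns i and j of an alignment."""
    n = len(alignment)
    pair = Counter((s[i], s[j]) for s in alignment)   # joint counts f_ij
    fi = Counter(s[i] for s in alignment)             # marginal counts f_i
    fj = Counter(s[j] for s in alignment)             # marginal counts f_j
    m = 0.0
    for (x, y), c in pair.items():
        fxy = c / n
        m += fxy * log2(fxy / ((fi[x] / n) * (fj[y] / n)))
    return m

# Toy alignment (hypothetical sequences): column 0 covaries perfectly with
# column 5 (AU, CG, GC, UA), and column 1 is always C.
aln = ["ACGGAU", "CCAUCG", "GCUACC", "UCCGGA"]
print(mutual_information(aln, 0, 5))  # 2.0 bits: perfect covariance
print(mutual_information(aln, 0, 1))  # 0.0 bits: no covariance
```

Each of the four AU/CG/GC/UA terms contributes 0.25 × log2(0.25 / 0.0625) = 0.5 bits, summing to the full two bits, while the invariant column zeros out every term, exactly as worked through above.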
How do we generate secondary structures, which are kind of intermediate between the primary sequence and the three-dimensional structure, using a particular class of experimental data combined with the sequence data? This does not necessarily require the large set of aligned sequences, but it obviously benefits from it. You could do a secondary structure for each element in the aligned sequences and use the mutual information, if you have it. But let's just talk about the simple application of these thermodynamic parameters to the prediction of secondary structure. And what are our expectations before we go through the algorithm? How good is it? In this fairly close to state-of-the-art paper, looking through over 700 generated structures, each set contains one structure that, on average, has 86% of its known base pairs. That's not saying that it's necessarily identified as the top, the best, structure. It's saying that it has one such structure by the criterion that they're using. This is rather weak self-praise. But let's walk through how that works. When someone says that they're going to predict a secondary structure or a three-dimensional structure from a primary sequence, more or less from scratch, they typically mean that there's going to be a variety of other chemical data that they take into account, but it will be generic data. It will not be specific chemical data for this particular molecule. And the generic data, in this case, are measurements of the thermodynamics of melting of model oligonucleotides, usually large numbers of them, monitored spectrophotometrically. And from the temperatures of melting, basically at equilibrium, where you're getting half-melted structures, you can determine the free energies, where the negative free energies are the desirable ones, the ones that are likely to happen if you let the system go to equilibrium.
And this is a kind of interesting application of the free energies for nucleic acids. Here, the algorithm that one uses is concerned mostly with adjacent pairs of base pairs. So it's not, as you might think, that the hydrogen bonds that determine the Watson-Crick and non-Watson-Crick base pairs dominate. Instead, it's the stacking interactions that dominate. And since it's the stacking interactions that dominate, the hydrogen bonds are basically exchanging a water hydrogen bond for a base pair hydrogen bond. It looks very specific, but in terms of free energy, it's fairly weak. The free energy is determined more by the stacking of pi orbitals, depending on the geometry that you get, say, when you have a CG base pair on top of an AU base pair here at the bottom of this helix. And that stack gives you -2.1 kilocalories per mole. All the units on this are kilocalories per mole. And by going along and taking each of these stacks (a pair of base pairs is what you're measuring), you can get all the negative free energies of stacking. Then you have some penalties, some things that are less favorable, that would not happen spontaneously if they did not have these mitigating negative free energies already accumulated, which would be the loop and the bulge here. The base pairs on either side of that bulge will stack up on one another. And that bulge will kind of flip out of the otherwise regular double helix. Similarly, bases at the end have a slight penalty. And so then you can add it all up and you can calculate an overall delta G for the entire structure. And if you do enough of these, you can get a feeling for which ones are likely to be occurring in your RNAs. Now this should trigger in your mind the third example we've had where the conceit of motif analysis, that you can do a multi-sequence alignment and each column in the multi-sequence alignment is independent, breaks down. This is the third example where that's not true.
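The bookkeeping described here, summing stacking energies over adjacent pairs of base pairs and then adding loop and bulge penalties, can be sketched in a few lines. The parameter values below are only illustrative placeholders (apart from the -2.1 kcal/mol stack mentioned in the lecture); real nearest-neighbor tables are much larger and sequence-complete:

```python
# Hypothetical subset of a nearest-neighbor parameter table (kcal/mol).
STACK = {
    ("CG", "AU"): -2.1,  # CG base pair stacked on AU, the value from the slide
    ("AU", "GC"): -2.1,  # made-up value for illustration
    ("GC", "GC"): -3.3,  # made-up value for illustration
}
LOOP_PENALTY = {4: 5.9}   # hairpin loop of 4 unpaired bases (illustrative)
BULGE_PENALTY = {1: 3.9}  # single-base bulge (illustrative)

def helix_delta_g(base_pairs, loops=(), bulges=()):
    """Sum stacking energies over adjacent pairs of base pairs in a stem,
    then add the loop and bulge penalties, as described in the lecture."""
    dg = 0.0
    for top, bottom in zip(base_pairs, base_pairs[1:]):
        dg += STACK.get((top, bottom), 0.0)  # unknown stacks contribute 0 here
    dg += sum(LOOP_PENALTY.get(n, 0.0) for n in loops)
    dg += sum(BULGE_PENALTY.get(n, 0.0) for n in bulges)
    return dg

# A three-base-pair stem closed by a 4-base hairpin loop:
print(helix_delta_g(["CG", "AU", "GC"], loops=[4]))
```

The two stacks contribute -4.2 kcal/mol, the loop penalty adds back +5.9, and the overall delta G is their sum, which is the "add it all up" step in the text.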
Here the free energies are dependent on pairs of base pairs. The previous example was the very distant connections that you can get in folding up a transfer RNA. And the earlier example was the CG dinucleotides. The assumption of independence of columns in a multi-sequence alignment is a very powerful one. I don't want to undermine it too much. But it doesn't hurt you to have three examples this early on in the course. Question the independence of columns in multi-sequence alignments-- a very important thing to question. We've got mutual information theory, which we had a couple of slides ago, as one of the most powerful ways of questioning that when you see it. Now, the base pairing that we see here is one example. You could take each of these and shift the right-hand half of the molecule relative to the left-hand half by one base pair. That would give you a much poorer set of energies, many more bulges, longer loops, and so forth. And you end up with a poorer delta G. And what you can do is rank these and do one of these maximum-value searches by going through them. This should trigger in your mind another way of thinking about that search. You take the entire sequence, whether it's transfer RNA or, in this case, a 400-nucleotide sequence, and you draw lines between every pair of bases where you have a favorable free energy. And you look for a set of lines which do not overlap one another, because these would represent short stretches of local-- you can think of this as a local sequence alignment between one half of the nucleic acid and the other half. Now this is not sequence identity, remember; this is sequence complementarity. That is to say, a reverse complement, where complement means you've substituted As for Us and Cs for Gs.
So you're looking for-- but in many other ways, this is analogous to the dynamic programming where we took two independent sequences and slid them along one another and allowed for insertions and deletions. In the dynamic programming before, we did that formally, all possible such slippages, by setting the sequences as the two axes of a table and then filling in the squares for all the matches. Here, we would fill in the squares not for the matches, but for the free energy of the stacking for these short subsequences. Now, the reason that they don't cross over, and the reason for this little note in the lower left-hand corner of slide 11, is that this does not handle pseudoknots. Pseudoknots we'll explain in the next-- we'll show graphic examples in the next slide. But it basically means that if you allow such sequences to occur willy-nilly throughout the sequence, then you'll get these tangles that for a while people weren't sure whether they occurred or not. The one or two non-Watson-Crick base pairs that you might find connecting up tRNA in these tangles were not considered long stretches that would connect loops. But since then, they've been proven to be of great biological significance. In any case, to do this without the pseudoknots, without allowing any crosses, is still a challenging problem. It's basically dynamic programming where N is the length of the primary sequence, and it takes on the order of N squared in compute time and space to figure out all the possible pairings that can occur. And then you go through and you rank them, which one gives the best free energy, and then you do the traceback and you get the top scores for that molecule. Now let's talk about pseudoknots. We excluded them, but now we'll reinvite them. We had those little ones, a couple of base pairs, in the transfer RNA. But a much more dramatic one we alluded to in the second lecture, when we talked about the genetic code.
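The table-filling and traceback idea described above can be sketched with the classic Nussinov-style recursion. This is a simplification, not the lecture's method: it maximizes the number of nested (pseudoknot-free) base pairs rather than minimizing free energy, which is what real folders do, but the dynamic programming table and the exclusion of crossing lines are the same:

```python
def nussinov(seq, min_loop=3):
    """Max number of nested (pseudoknot-free) base pairs in an RNA sequence.
    min_loop enforces a minimum hairpin loop of unpaired bases."""
    ok = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"),
          ("G", "U"), ("U", "G")}  # Watson-Crick plus the GU wobble pair
    n = len(seq)
    best = [[0] * n for _ in range(n)]  # best[i][j]: max pairs in seq[i..j]
    for span in range(min_loop + 1, n):        # fill shorter spans first
        for i in range(n - span):
            j = i + span
            # Either j is unpaired, or j pairs with some k far enough away.
            best[i][j] = best[i][j - 1]
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in ok:
                    left = best[i][k - 1] if k > i else 0
                    best[i][j] = max(best[i][j],
                                     left + 1 + best[k + 1][j - 1])
    return best[0][n - 1]

# A simple hairpin: three GC pairs closing an AAA/U loop region.
print(nussinov("GGGAAAUCCC"))  # 3
```

A traceback through the same table (recording which choice won at each cell) would recover the actual pairing, just as the alignment traceback recovered the aligned path.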
And in order to introduce you to exceptions to the genetic code, I gave an example where the ribosome jumped over 50 base pairs if presented with the right context. It didn't follow the normal code of having a triplet and another triplet right in a row with no punctuation. Here we had punctuation that required, what may have slipped by at the time, a pseudoknot. And this is an example of one such pseudoknot, in the best that we can do of a two-dimensional schematic, and then something slightly better, where we have a more three-dimensional view, and another three-dimensional view of this. And this is the RNA pseudoknot-- one of them-- which is responsible for the frameshifting that breaks this genetic code. And so let's just follow how this goes. You've got basically a normal helix here at the bottom, starting at the five prime end, positions one to seven. It would go through a normal five-base-- sorry-- six-base loop, from eight to 13, and then finish the stem, 14 through 18. That would be a normal stem loop where the loop is six long, eight to 13. But at the end of 18, you have this little green loop that goes back and now makes a nice perfect four-base Watson-Crick stem, now in the middle of what would have been a loop. And so this fold-back is what we meant by a pseudoknot, and what would have been represented by a crossover of those red lines in the previous slide-- something that makes it much harder to compute. In fact, it makes it so much harder, in the next slide, that it goes up from an order of N squared, which is your typical dynamic programming in a pairwise alignment, to order N to the sixth in CPU time and order N to the fourth power in the memory space that you need to set aside for storing the table of possible pseudoknots that can occur in the context of the otherwise normal circle with the non-overlapping connections.
This is a relatively recent innovation. It's still a dynamic programming algorithm; it just has more possibilities, higher algorithmic complexity. And the combination of the biological discovery that pseudoknots are important for frameshifting and a variety of other biological phenomena, the now three-dimensional structure, and now an algorithm puts pseudoknots well within the sort of things that you should feel comfortable with. Now we're going to go back to hidden Markov models in a slightly more complicated context. We took the simplest one we could, which was a dinucleotide in simple, unfolded, straight DNA. And the part that was hidden, as you will recall, was whether the dinucleotides, which could be any of the possible dinucleotides, including AA, CG, and so on, were present in a CG island-- a region of the chromosome which was likely to have CGs-- or in a CG ocean, which was low in CG dinucleotide content. So the hidden part was the plus-minus, whether it was in an island or not. Now what we're going to do is take this and transfer it over to the kinds of motifs we are finding in RNAs, like transfer RNA and another class of RNA, and say, OK, now the hidden part of the Markov model, with these transition probabilities, is whether it's in a particular secondary structure or not-- not whether it's in an island or not, but a secondary structure. The particular case we're going to talk about is a very interesting biological illustration where the hidden Markov models will be modeling these boxes, these motifs, that are involved in the base pairing or recognition that forms the secondary structure necessary for guiding a particular enzyme. Now, remember we had all these modified bases that we saw in transfer RNA.
Some of those are simple protein interactions with the transfer RNA that add a methyl group here or there. It turns out that all of the O2 prime methyl groups are on the sugar, the ribose. In ribosomal RNA, just a small number, a few dozen, of the riboses in this multikilobase ribosomal RNA are methylated at the O2 prime position. How does the enzyme, or the enzymes, know to get exactly those bases? The way it knows is it doesn't use pure protein brute force to make a complementary surface of protein for the nucleic acid. It actually uses the elegance of base pairing to make a guide sequence. And so what it's looking for is-- the protein cooperates with a small RNA, a so-called snoRNA, or small nucleolar RNA, to find a place where the snoRNA will recognize the place that you want to methylate. And then the protein methylates the base in the middle of that guide sequence. So then the game, the computational biology game, that these authors played was: how can we find all the small RNAs, the snoRNAs, present in a genome when we know very little about that genome? What they knew was the genome sequence. This is for yeast. They had a few examples of snoRNAs in humans-- almost none in yeast. They had the sequence of the ribosomal RNA, of course. And what they wanted to do then is ask where in the genome we have little guide sequences, flanked by some of these other motifs and characteristics like a 4-to-8 base pair stem, that will match the ribosomal RNA. So the algorithm basically marches along the ribosomal RNA looking for matches elsewhere in the genome, and then asks whether those matches elsewhere in the genome have some of these other context features. You can see this is going to be a more complicated algorithm than just looking for CGs. So this is how it works. That stem that we had is now item number one.
The various boxes, which are basically sequences, are now turned into ungapped hidden Markov models. The hidden part of it is whether it is present or not in the context that adds up to this guide sequence. The guide sequence itself is a hidden Markov model which has to form a probably imperfect duplex with the ribosomal RNA. So that's how that's modeled. The most complicated is that terminal stem, number one, which is a so-called Stochastic Context-Free Grammar. That's what SCFG stands for. And that just means that it is even less constrained than the HMM. The HMM is less constrained than a simple motif, which is less constrained than, say, a consensus sequence. It is constrained; it has the grammar, if you will, the particular rules for the base pairing that have to occur over a certain region in a certain part of this putative snoRNA. So anyway, you apply each of these criteria, and you have transition probabilities which come from a learning set, such as the human snoRNAs. You have a learning set that tells you what these transition probabilities will be. And you now apply this to the entire yeast genome, and you get a bunch of candidate snoRNA-encoding genes. Now, you can't use things like the long open reading frames that you normally would use for finding genes. So this is a very valuable tool. But now how do you convince yourself that this is a gene, that this actually encodes a snoRNA, and that those are responsible for guiding the methylation at particular positions in the ribosomal RNA? Well, before we get to how you do that, we want to ask how this algorithm performs relative to the few other algorithms there are for finding genes which do not encode proteins. And the first of these actually dates well before 1991. But there were ways of looking for transfer RNAs in sequence.
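Before the performance comparison, the core "march along the ribosomal RNA looking for matches elsewhere in the genome" step can be sketched very simply. This is only the complementarity scan; the published search layers the probabilistic models described above (HMMs for the boxes, an SCFG for the terminal stem) on top of hits like these. The sequences below are made up for illustration, written in DNA letters:

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    """Reverse complement of a DNA string."""
    return s.translate(COMP)[::-1]

def guide_candidates(rrna, genome, k=12):
    """Yield (rrna_pos, genome_pos) wherever a k-mer of the rRNA is the
    reverse complement of a genomic k-mer, i.e. a candidate guide element."""
    index = {}
    for g in range(len(genome) - k + 1):          # index all genomic k-mers
        index.setdefault(genome[g:g + k], []).append(g)
    for r in range(len(rrna) - k + 1):            # march along the rRNA
        for g in index.get(revcomp(rrna[r:r + k]), []):
            yield r, g

# Tiny made-up example: the "genome" carries the reverse complement of one
# 12-mer of this toy rRNA fragment, embedded in non-matching flanks.
rrna = "GGCATTACGGACCTAAGT"
genome = "AAAA" + revcomp(rrna[3:15]) + "CCCC"
print(list(guide_candidates(rrna, genome)))  # [(3, 4)]
```

Each hit would then be scored for the surrounding context features, the boxes and the 4-to-8 base pair stem, before being promoted to a candidate snoRNA gene.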
They would use everything we know about transfer RNAs-- the little boxes that are conserved as sequences, the regions that are conserved not as sequences but as base pairing potential, et cetera. The loop lengths are constrained. All the constraints that you could muster back in '91 were applied. And it was fairly slow. It would only do 400 base pairs of genome chunk per second. And when you have genomes on the order of many megabases, this is slow. And it missed about 5% of the true positives; it had 95%. And the false positive rate sounds impressive-- only 10 to the minus 6. But when you think of both strands of E. coli being about 10 million bases, then this is about four false positives. And bigger genomes, of course, would give an even larger number on an absolute scale. So then, six years later, the speed is now 100 times faster. You're now only missing 0.5% of the true positives instead of 5%. And the false positive rate is now vanishingly small. Very often you can just arbitrarily trade off the number of true positives you miss against the number of false positives you get, taking one advantage over the other. But here, it was a win-win situation. They both went in a favorable direction. So how do the snoRNAs compare with that? Here, another two years passed. The snoRNA searches are just starting out. They have, probably, a little better than 93% true positives. This is not as good as transfer RNAs. This may improve or it may not. The false positive rate is acceptable. So then the question becomes: after you track down these genes, how do you then prove that they do what you think they do, that they actually are responsible for methylating the riboses or the bases in question?
So it turns out that the technology we set up in the sequencing and genotyping lecture, where you extend a primer on a template with DNA polymerase-- the primer binds to the template and you extend by either many base pairs, as in conventional dideoxy sequencing, or one or two base pairs in some of the more up-and-coming genotyping methods-- those DNA polymerase-based extension methods will stall when you run into this particular kind of modified base, where a bulky group is introduced onto the two prime position of the ribose on the template. So you're extending the primer, sitting on the template, and it will stall here. And it stalls more when you decrease the concentration of the deoxynucleotide triphosphates in the extension reaction. So that's what these little wedges mean at the top of each of these columns. They've done an extension with all four triphosphates present, either in high amounts at the big end of the wedge or low amounts at the small end of the wedge. And to tell where you are in the sequence-- this is using reverse transcriptase on a ribosomal RNA template-- you do the dideoxy method, which is basically conventional DNA sequencing, where you terminate at either Us, Gs, Cs, or As in the template. This allows you to get oriented. And basically, you're sequencing in the far left-hand set of lanes. And these pause sites are present in, say, the wild type, which is the first pair of lanes next to the sequence lanes on the left-hand side of this display. And you can see there's a pause at every single known methylated base. You can determine methylated bases by other methods as well. So now, the computational biology predicted a set of snoRNA genes-- in fact, ultimately, all of the snoRNA genes in the yeast genome, we think, explaining all of the methyl groups, at least. And one by one, these were knocked out cleanly so that there's no gene there anymore for the small RNA.
And then you ask, well, how does this affect the methylation as detected by this extension assay? And if you look on the far right-hand side at deletion number 40, you can see that position number 596, near the bottom, circled in red, which is present in the wild type and all the other mutants, is absent from that particular mutant number 40. So there's no pause there; one infers there's no methylation. And that was the specific site that that snoRNA guide sequence was predicted to bind. It's aligned with the position in the guide where you expect there to be a methylation occurring. And you can see in each lane there's a different circled red missing black pause site, until we get to the one in the middle, the mutant for snoRNA number 60, where there are actually two missing bands in the same lane. And how can that occur? There are two different ways that knocking out a single gene, a single snoRNA, can have an effect on two different methyl groups. One is if the guide sequence can bind to two different places in the ribosomal RNA. And the other is if there are two guide sequences within the same snoRNA. So now that we have at least some grounding in the kind of structures that can occur, we're going to ask how we monitor and measure the amounts of these structures in biological systems. And we will also see how these structures impact the methods that we use for the quantitation of the structures. So we have choices of molecule that we're going to measure when we're monitoring the various molecules in the cell. Why are we focusing on RNA? Well, part of it is because of its nice structural continuity between the simple DNA and the very complicated proteins. But the other part is that if we want to study different points in the regulatory and metabolic networks that we'll be talking about at the end of this course, having to do with systems biology, every part of it is subject to some kind of control.
Transcriptional control is one of the early stages. And then there are many stages subsequent to that which lead to the protein and the ultimate phenotypes that result in proliferation of the species. If you want to look at transcriptional control, it would not do well to study protein, because the closest thing to transcriptional control that you can measure is the RNA products. You can study the transcriptional control itself directly as well, by studying the DNA-protein interactions. But if you want to measure a diffusible molecule, RNA is the thing to do. And there are multiple different methods for getting at a co-regulated set of genes, co-regulated at the transcriptional level. We'll illustrate a few here. And why do we need multiple methods? Well, we've talked about random and systematic errors. Random errors you can compensate for by repeating the experiment; the random errors will average out, ultimately. Systematic errors will happen the same way over and over again. So you want to have something out of the box to allow you to check it, or to model it, or to allow you to do integration, as you might want in complicated systems. So here, just to start us thinking about integration and checking different ways of getting transcriptional co-regulation, let's think about this: if you look through all the proteins that occur, you will find proteins that occur together frequently, either as fusions or as separate proteins, or, in operons, they'll occur as coding regions that are clustered together in some species or maybe less clustered in other species. When we have metabolic pathways, where a small molecule will be shared-- the product of one enzyme will be the substrate of another, and so on-- you'll have this chain of events, as in the lower left-hand corner. And these sets of enzymes that need to be working together need to be co-expressed. They need to come up together and go down together when they're not needed.
They need to come up when they're suddenly needed. And so you might have an entire pathway, or a set of pathways, that are co-expressed. And one way to do that is to cluster them in the genome. When they are co-expressed, you will sometimes find upstream of them motifs such as this. Again, here's the two bits for the vertical scale, where this might be enriched. And so this would be another indication-- when you find these in front of genes, you might expect them to be co-regulated. When you find a set of proteins that are consistently together in different organisms-- so-called phylogenetic profiles-- you will find that this set of proteins involved in a common enzymatic, metabolic pathway are not only co-regulated and found together along the chromosome, but they're found together when you go through many different species. They will be deleted or inserted as a block, or they'll be found scattered around the genome. But you'll find that when one disappears, they all disappear, in general, statistically speaking. This phylogenetic co-occurrence is another clue that you might expect them to be co-regulated in those genomes in which they do co-occur. Anyway, microarrays, and variations on that theme, will be the main thing that we'll talk about. But I wanted to put it in context. And I'll just expand on one of these at the bottom here in slide 22. This is an algorithm for reconstructing likely combinations, where in some organisms you might have the entire biosynthetic pathway as a series of genes which encode, one by one, all the proteins that are involved, in this case, in purine biosynthesis from simpler molecules. But in other organisms, you might have them scattered all over the genome, but they might be co-regulated. Their RNAs might go up and down together. And so, if you look at enough genomes, you can reconstruct the likely combination of enzymes. And here's how it might work.
In any one of these, for example E. coli, you might see that they're scattered about-- a pair here, and a pair there. Singletons don't help much. But if you take all the pairs from a lot of different organisms, you can reconstruct this network where you say, oh, these genes we'll call L, Q, Y, C are all probably involved in the same process. If you get a hint for what any one of them does-- say, one of them is involved in purine biosynthesis-- then you find that they all are. And you might guess they're co-regulated very tightly. So now let's figure out how we actually measure that they're co-regulated very tightly. As we do that, whatever method we use, we want to ask: are we interested in ratios, relative changes, or are we interested in absolute values? There are various things that we can do with absolute amounts that are very hard to do with ratios. In particular, if we want to ask whether a particular protein level is high because its translation is efficient or because its transcription is efficient-- if you find that it's full of abundant codons, as if it wants to be efficiently translated, does it also have a high-level promoter, as if it wants to be transcriptionally active? These sorts of questions really benefit from having absolute amounts, meaning so many molecules of RNA per cell, so many molecules of protein per cell. But to get at direct causality, we want to get at the motifs. This would be one of the objectives of doing the RNA quantitation: to allow us to cluster RNAs that are co-expressed, and then to start looking for motifs and direct causality. Another thing that we might want to do is to classify. We can ask whether small molecules or mutations, such as occur in cancers, cause enough of a signature that you can then use it to say, OK, this cell state that we see is a recognizable small molecule effect, or stress effect, or mutational effect, cancer.
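The pair-merging step described at the start of this passage amounts to building a graph from observed gene pairs and taking its connected components. Here is a minimal sketch under that interpretation; the gene names and pairs are made up for illustration:

```python
from collections import defaultdict

def components(pairs):
    """Treat each observed gene pair as an undirected edge and return the
    connected components, each a candidate co-regulated gene set."""
    graph = defaultdict(set)
    for a, b in pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()   # depth-first traversal from start
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node])
        seen |= comp
        comps.append(comp)
    return comps

# Hypothetical pairs observed across different organisms: a pair here,
# a pair there, chained together into one candidate pathway.
pairs = [("L", "Q"), ("Q", "Y"), ("Y", "C"),  # lecture's L, Q, Y, C example
         ("R", "S")]                           # an unrelated pair
print(components(pairs))
```

Given a hint that any one member, say Q, is involved in purine biosynthesis, the whole {L, Q, Y, C} component becomes a candidate for that pathway, while {R, S} stays separate.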
Now, we will be talking about microarray and related methods, but I want you to question the advantages and disadvantages of these methods. And so I'll compare them to a number of others-- but let's start with the most dramatic comparison, which is with in situ hybridization. In array hybridization, you'll have tens of thousands of different gene probes immobilized on a solid surface. And you'll label up the RNA from a mixture of different cells-- a mixture of different RNAs within a cell. But you'll be able to ask questions about 10,000 genes at a time. In an in situ experiment, it's the other way around. You take a cell in its fairly natural environment-- usually fixed, but fixed while maintaining the spatial aspects. Then you look within the cell with a single gene at a time, or maybe two or three at a time, a very small number, not tens of thousands. You can look to see whether the RNA is uniformly spread throughout the cell and uniformly spread throughout all the cells in the tissue, or in, say, a mixed population of yeast cells, whatever. And you can find cases in the literature where it is not uniformly present in all the cells and not even uniformly within a cell. Here is one of the more dramatic cases, where the two X chromosomes in mammals behave differently from one another. Female mammals will have one chromosome expressing most of its RNAs at normal levels, and the other chromosome expressing almost no RNAs. It is expressing at least one RNA, which is XIST, and that RNA is covering that whole chromosome-- it's localized over that chromosome and not the rest of the cell. So this is an extreme case of localization that you can monitor with fluorescent microscopic methods.
Instead, keep this in the back of your mind as you look through the microarray and other experiments, where you're mushing together a variety of cells that might be in different stages of the cell cycle, might have slightly different environments-- and even within the cell, you're losing the information about the RNA localization. Let's take a short break and then come back to finish up in situ hybridization, and connect to other methods for quantitation of RNA.
NARRATOR: The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare, in general, is available at ocw.mit.edu. DR. GEORGE CHURCH: OK, welcome back. One quick announcement before we get started. The teaching Fellows and I, in response to a few gentle inquiries, have come up with a plan for making your problem sets and projects a little bit easier, buying you some time. Take a look at the website and talk to your teaching Fellows. But basically, there will be no problems in set six. It will be combined with problem set five, and so you'll have a full three weeks to work on your project-- tripling your project time, assuming you haven't done anything so far, which hopefully all of you have. OK, so we were correlating absolute levels of messenger RNA abundance with absolute levels of protein. And here, this study was subjected to a little bit of not terribly controversial critique. What you're seeing here is that with just the very low abundance proteins, you might have what looks like some correlation. Then you add a few more low abundance proteins, and this is just some fluctuation. Adding proteins should improve your correlation coefficient. But if they're of low reliability on either the protein or the messenger RNA scale, or if there isn't correlation between the two for biological reasons, then your correlation coefficient could drop just at random. But as you get up to adding all the proteins, you could be dominated by a few high abundance proteins that fit the messenger RNA-protein abundance relationship perfectly.
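The point about a few high-abundance proteins dominating the coefficient can be seen directly. A minimal sketch in Python, with made-up intensity pairs rather than the study's data:

```python
def pearson(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Six low-abundance pairs with essentially no relationship...
low = [(1, 2), (2, 1), (3, 3), (1, 3), (3, 1), (2, 2)]
# ...plus two high-abundance proteins that fit perfectly.
high = [(100, 100), (200, 200)]
```

On the low pairs alone, r is -0.25; adding the two high-abundance points pushes r above 0.99, so the overall coefficient says little about the bulk of the proteins.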
Anyway, some of the critiques of this had to do with the assumptions underlying the Pearson linear correlation coefficient, which, in calculating the statistical significance of some of the analyses in the previous slide, makes an underlying assumption that you have a normal or Gaussian bell curve. And you don't need to do this-- there are a variety of measures of correlation which do not require this parametric assumption. And there are tests for how close to normal a distribution is. Deviations from normality can be of all types. For example, the distribution can be slightly flatter or slightly sharper than normal. It can be skewed to the left or to the right, and so forth. Some of these, like skewing, can be corrected by a transformation, where you simply take the logarithm or some other transformation-- a log being by far the most common, and theoretically justified-- and now it becomes normal, and then you can do a statistical test. As it turns out, this is not a huge effect, but they used it to point out that you can go wrong when you're testing these, especially at the low abundance end of the spectrum. You might want to use a rank test. And a rank test here is illustrated like this: you take, say, two columns-- a series of pairs of intensities of messenger RNAs and proteins-- column X and column Y, down in the lower right-hand corner of Slide 37. And let's say the absolute abundances for protein X are 1, 6, 6, and so on, and the corresponding RNA values Y are 8, 2, 3, and 4. Now, you want to ask whether they correlate or not. And what you do is rank them, and so the ranks of X are 1, 3, 3, and so on. Here is a tiebreaker-- the way you deal with a tie is you give them all the rank of the middle one of the series in the tie. And you get a rank for Y, and the total number N, in this case, is 4. The rank test score S is basically the sum of the squares of all the differences in rank. So the difference in rank here would be 1 minus 4.
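The rank test described here is Spearman's rank correlation. A minimal sketch in Python, assuming the middle (average) rank for ties as in the lecture's tiebreaker, and the textbook formula r_s = 1 - 6S / (N(N^2 - 1)):

```python
def ranks(values):
    """Assign ranks 1..N; tied values all get the middle (average) rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j to cover the whole run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1          # middle rank of the tied run (1-based)
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

def spearman(x, y):
    """r_s = 1 - 6*S / (N*(N^2 - 1)), S = sum of squared rank differences."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    s = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * s / (n * (n * n - 1))
```

With perfectly reversed data, spearman([1, 2, 3], [3, 2, 1]) gives -1.0; the abundances 1, 6, 6, 6 get ranks 1, 3, 3, 3, matching the tie rule in the lecture.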
It'd be 3 squared, and you take the sum of all the squares and plug it into this S, which is going to go into a correlation coefficient-- similar in its values and hypothesis testing to the Pearson, but now not making parametric assumptions. You're just talking about ranks; it's a non-parametric test. And then N is the total number, and you apply that formula. Now, they applied that formula to roughly the same data set, or a very similar one. Another critique they had was that a better measure of the protein abundance would be to use a radioactive tracer in the protein, and then measure quantitatively the intensity of beta particles released from the methionine in the proteins. Making that change and comparing it to the same messenger RNA assessment, they got this fairly linear correlation over three logs. Using the Pearson correlation coefficient, which they didn't entirely approve of, they got 0.76, which is a modest but significant linear trend. And using their rank method, 0.74, which is basically very similar. And they found no significant difference between the top 33% and the bottom 33% of proteins, undermining the previous claim that there was less linearity at one end than at the other. You might expect that the least abundant majority of proteins would have a little bit of either biological or instrumental noise. Nevertheless, this group found that it was a good correlation. Now, the plot in the next slide looks similar, but it's really quite different. Here the Y-axis remains the protein abundance, as measured by this S-35 labeling, but now we're getting back to this game of asking to what extent we can make predictions about the properties of the proteins-- in this case, their abundance, based on their use of abundant codons. And a way of quantitating this is the codon adaptation index.
Shown in the lower left part of the slide: the log of the codon adaptation index is a sum, over the codons of the gene, of the log of a weight W sub i, where i runs over the 61 non-stop codons out of 64 total. The W sub i is a weighting factor-- the ratio of the frequency of codon i to that of the most abundant synonymous codon. Say for leucine there are six different leucine codons; then W sub 1 is going to be the ratio of the frequency of that first codon to whichever of the six happens to be the most abundant one-- could be the first one, or it could be any of the others. And so that's the formula that is used. And you can see, again, a nice linear trend where, indeed, the most abundant proteins do tend to use the most abundant codons-- the codons that are most abundant both at the transfer RNA level and in usage in abundant proteins. Now, just as with RNAs, you can measure proteins on an absolute scale or a ratio scale. The advantage in principle is that if you have absolute values, you can always calculate ratios from them, but not necessarily vice versa-- if you have ratios, you can't always get to absolute. So that's one advantage to absolute. If you're looking at things like the codon adaptation index, or messenger RNA levels, or a variety of other motivations, you need to do it on the absolute scale. But there an argument can be made for doing it on a ratiometric or relative scale, in the sense that you can establish internal standards which are more precise. You can really eliminate many of the systematic errors that can creep in due to differential ionization in the case of mass spectrometry, and so on. Now, with RNA, the way we did this ratio quantitation is we would label one messenger RNA red, say with Cy5, and the other one green, and then by using selective filtration in the imaging, you could get ratios. With mass spectrometry, you don't have colors. But somehow you want to get the same idea across, and so what you use is masses.
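As a concrete sketch of the codon adaptation index as described, here is a minimal Python version. The tiny two-amino-acid codon table and the reference frequencies are made-up illustrations, not real usage data, and the geometric mean (average of log weights) is the standard form of the index:

```python
import math

# Hypothetical reference codon frequencies (e.g. counts in highly
# expressed genes). A real table covers all 61 sense codons.
REF_FREQ = {
    "CTG": 50, "CTT": 10, "CTC": 10, "CTA": 5, "TTA": 5, "TTG": 20,  # Leu
    "AAA": 70, "AAG": 30,                                            # Lys
}
SYNONYMS = {
    "Leu": ["CTG", "CTT", "CTC", "CTA", "TTA", "TTG"],
    "Lys": ["AAA", "AAG"],
}

# w_i = frequency of codon i / frequency of the most used synonymous codon.
W = {}
for codons in SYNONYMS.values():
    fmax = max(REF_FREQ[c] for c in codons)
    for c in codons:
        W[c] = REF_FREQ[c] / fmax

def cai(codon_list):
    """Geometric mean of w over the gene's codons: exp(mean of log w)."""
    return math.exp(sum(math.log(W[c]) for c in codon_list) / len(codon_list))
```

A gene written entirely in the preferred codons (CTG and AAA in this toy table) gets CAI 1.0; rarer codons pull it below 1, which is why abundant proteins score high.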
So what you want to do is encode them in something that will not change the chemistry, but will change the mass-- and not change the mass too much, because if you change it too much, you might change the chemistry, or you might not be able to find the shadow peak, the second peak. So basically, what we want to do is take cell state 1, label it with a light ICAT reagent, and cell state 2, label it with a heavy one-- meaning more neutrons in it-- and then mix them together as early as possible. Then all the subsequent steps that could introduce little systematic errors will distribute those errors equally to the two labeled samples, and then you measure the mass. And for each peptide, you'll have two peaks that are separated by whatever the difference in mass between the labels is. So what are the properties that you want of a labeling agent? One is that it should react covalently, so it will survive all these fractionation steps and mass detection steps. It has to be, in this case, thiol-specific. You want a way of pulling out only those peptides that have been modified-- all the rest are just going to contaminate your mass spectrometry. And in between, you want something where you can differentially put in heavy or light atoms. In the original proof of concept, this was done with hydrogen atoms versus deuterium. They differ by one atomic mass unit per position. However, as you will see in the next slide, this has the unfortunate consequence that hydrogen and deuterium are not only mass distinguishable, but they also have different chemical properties. There's an isotope effect that's detectable in a variety of chemistries, including retention time on the HPLC. This has since been upgraded to C-13 versus C-12. Now, that's a much more subtle difference-- same mass difference, and you have nine carbon atoms in here, and that works better.
In addition, there's a way that you can cleave off the biotin after it's done its job: you add it, you select all those peptides that have been modified, and then you acid-cleave it off to clean up the mass spec. So here's an example where you have a difference in m/z of 4 between these heavy and light peaks. And the ratio of these two is something you can use for essentially every peptide where the pair of peaks is in an uncluttered part of the mass spectrum. And here's evidence, with retention time on the horizontal axis on the left-hand side of Slide 41, where you can see this little red line showing how the centroids of the sets of peaks should line up, and how they have been displaced through the chemical effects of adding a few neutrons. OK, so the lesson here is that hydrogen and deuterium are not necessarily chemically identical. The conceit of isotopes is that they really should be chemically identical, but really it's better if you work with heavier atoms to introduce the neutrons. Now, what can we do with this ratiometric assay? Just like before, where we compared absolute protein levels to absolute RNA levels, now we're going to measure ratios. And to do ratios, we have to have two different conditions. So another advantage to absolute is you can do it all under one condition. Here the two conditions that were chosen for this proof of concept experiment were glucose and galactose-- that is to say, growing the yeast plus or minus galactose. Galactose is a nicely understood metabolic and regulatory system in yeast. It fits in with what we know about central carbon metabolism, and it induces a set of genes, in blue here, that are required for galactose catabolism to produce energy. The most strongly induced ones are way off in the upper right-hand corner here-- GAL7, GAL10, and GAL1, the core catabolic enzymes. But almost all of the other blue triangles have some kind of story like that.
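A minimal sketch of the heavy/light pairing step described above, assuming centroided (m/z, intensity) peaks from singly charged ions; the 4.0 mass shift and the tolerance are illustrative values, not instrument specifications:

```python
def pair_ratios(peaks, delta=4.0, tol=0.01):
    """Find peak pairs separated by the label mass shift `delta` (in m/z,
    assuming singly charged ions) and report light/heavy intensity ratios."""
    peaks = sorted(peaks)                            # sort by m/z
    pairs = []
    for mz, inten in peaks:
        for mz2, inten2 in peaks:
            if abs((mz2 - mz) - delta) <= tol:       # heavy partner found
                pairs.append((mz, inten / inten2))   # (light m/z, light/heavy)
    return pairs
```

For example, pair_ratios([(500.30, 1200.0), (504.30, 600.0), (802.41, 50.0)]) reports one pair at m/z 500.30 with a light/heavy ratio of 2.0; the unpaired peak at 802.41 is ignored, which is the sense in which only uncluttered pairs are usable.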
And these, all in the upper right quadrant, have a high log-10 ratio of expression-- up to three logs, thousand-fold induction. Similarly, at the other end of the spectrum, in the lower left-hand quadrant, are respiratory genes that are involved in, say, oxidative phosphorylation. These are moderately repressed under galactose conditions, and that's why they're in the lower left-hand corner. And then the green ones are not quite along this diagonal: their messenger RNA is increased, but their protein expression is not. These are the ribosomal protein genes, and this is another phenomenon that's well documented in the system. OK. Now, those are examples of how you can use absolute and relative measures, and why you're motivated to use absolute and relative measures for proteins and messenger RNAs. Now, these all treat things as if-- just as before, when I said that for messenger RNA you might lump together all splice forms and call that the gene product, although you know better-- the same thing happens with proteins: you might lump together all the protein splice forms. And not only that, but for a particular protein splice form, there are many different post-synthetic modifications, such as proteolysis and phosphorylation. And so we're going to talk about these modifications very briefly, to hopefully whet your appetite for one of the most exciting parts of proteomics, whether it's identification or quantitation. So we've already mentioned radioisotopic labeling as a way of quantitating. Using radioactive sulfur-- and whether it's stable isotopes or radioactive isotopes-- you can use these to do pulse labeling to monitor a dynamic process; P-32 in particular, if you want to enrich for some of the most well-studied and significant post-synthetic modifications involved in signal transduction. You can enrich for particular types of amino acids.
We already showed that the cysteines-- which are a somewhat arbitrary amino acid chosen for the ICAT ratiometric method-- were chosen because of their interesting reactivity, not because they're intrinsically important low-abundance regulatory residues. Phosphates, on the other hand, can be very important, and you might need to enrich, because these important regulatory phosphorylation sites can get lost in the snow of all the rest of the peptides in the proteome. This enrichment can either be done by immobilized metals such as iron and gallium, and so forth-- this is called IMAC, for immobilized metal affinity chromatography-- or you can have antibodies that are specific to particular phosphopeptides and particular phosphoamino acids. You have lectins for carbohydrates, as front ends for mass spectrometry. Even when we do P-32 labeling metabolically, where the P-32 will only label the subset of the proteins which are phosphorylated, it is still the case that some of the most interesting regulatory cell cycle proteins are not detected above background, because there are many abundant proteins-- such as ribosomal proteins and central carbon metabolic enzymes-- which are needed in high abundance, but which also happen to be phosphorylated. And so you get this forest of phosphoproteins, such as the ribosomal and metabolic ones, which makes it hard to detect the regulatory ones. So labeling is not a panacea, and I think you'll see as we go through the protein modifications and mass spectrometry that multidimensional purification really is the way that you get away from the ribosomal proteins and the highly abundant metabolic proteins-- which are interesting, but you need a way of separating them from the regulatory low-abundance proteins. Here are some examples of natural cross-links. So you can think of this as a special class of post-synthetic modification.
Instead of having a phosphate glomming on, you have two different peptides joined, either intramolecularly within a protein or intermolecularly between proteins. And you should be highly motivated to study these, because they tell you something not only about protein structure-- three-dimensional structure, which was the topic last time-- but about protein-protein interactions. And not just theoretically what proteins might interact with other proteins: what binds in vitro could be an in vitro artifact, and what binds in a yeast two-hybrid system could be a yeast two-hybrid artifact, but these are actual covalent, caught-in-the-act, in vivo protein interactions. And most of these are very well documented. By far the most common one, and of great significance to protein tertiary stability, occurs in the class of proteins which are extracellular. These include extracellular domains of membrane proteins and secreted proteins-- in particular because there the oxidation state is such that this sulfur-sulfur bond is stable, while the intracellular environment tends to be a more reducing atmosphere, and so there disulfides have trouble forming. Collagen has a lysine cross-link. Ubiquitin has a C-terminus-to-lysine cross-link. Fibrin, involved in blood clotting, has a glutamine-lysine cross-link, and so on. As some of your proteins age, the glucose present at high concentrations in your blood will glycate the lysine residues. And this is part of the process by which these proteins eventually lose their function and are cleared. The protein-nucleic acid interactions we've been talking about so far are non-covalent. Some of them are covalent-- for example, when you want to prime de novo polymer synthesis, DNA synthesis. OK, so what are the consequences for the mass spec algorithms we've been talking about, say, de novo sequencing or finding a peptide spectrum in your database? Well, you can see the masses are going to be fairly straightforward.
Here are some examples of masses of peptides and cross-linked peptides. On the top right, you'll see one intramolecular cross-link between a lysine and a lysine, and an intermolecular one between two peptides. Now, each of these peptides in this display-- these are just simple masses. These are not fragments. You expect these to be tryptic products, so each of their C-termini should be either arginine, R, or lysine, K-- R, K, K, R, and so forth-- and so you can see this is two peptides, one ending in R, one ending in K. So let's look in detail at this example where you have an intramolecular cross-link. And you can see that as you cleave it, each of these peptide bonds can generate B ions from the N-terminus and Y ions from the C-terminus. You'll see that there's a special case in the region that's defined between the two cross-linked residues: any peptide bond cleavage that might occur in the gas phase, when colliding with argon or some other inert gas, will break the chain as usual, but the chain won't fall apart, because it actually has two connections-- one through the normal peptide bond, and the other through the cross-link. So for cleavages all through in here, it takes two hits to get separation, and two hits are unlikely. And so you'll tend to see the B ions and the Y ions right up until you hit the first cross-linked amino acid, and then you lose them. So that's one of the complications that you have from cross-links. The other one-- when you have, say, cross-linking of two peptides, which might occur when you have an interaction between different proteins-- is that you'll now have two sets of B ions and two sets of Y ions. As if just having B and Y in the same spectrum weren't enough, now you've got two of each, and even though you don't have the cycle to worry about, you have a full set. But there is an algorithm that Tim Shen and others have developed for dealing with that in very clean cases.
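For the ordinary, un-cross-linked case, the b and y fragment ion masses are just running sums along the chain. A sketch in Python, assuming monoisotopic residue masses and singly charged (protonated) ions:

```python
# Monoisotopic residue masses (Da) for a few amino acids;
# a full table would cover all 20.
RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841,
       "L": 113.08406, "K": 128.09496, "R": 156.10111}
H2O, PROTON = 18.010565, 1.007276

def by_ions(peptide):
    """Singly charged b and y ion m/z ladders for a linear peptide."""
    masses = [RES[aa] for aa in peptide]
    b, total = [], 0.0
    for m in masses[:-1]:           # b_i: first i residues + proton
        total += m
        b.append(total + PROTON)
    y, total = [], 0.0
    for m in reversed(masses[1:]):  # y_i: last i residues + water + proton
        total += m
        y.append(total + H2O + PROTON)
    return b, y
```

A handy consistency check is that complementary pairs satisfy b_i + y_(n-i) = M + 2 protons, where M is the neutral peptide mass; with a cross-link, as described above, the interior rungs of these ladders go missing.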
And here's an example of actually using the cross-links that you get from mass spectrometry as a fairly inexpensive set of constraints for getting distances, either intramolecularly, in this case, or conceivably intermolecularly. And the constraints: the distance can't be any bigger than the cross-linker span. You know the chemical structure of the cross-linker, and so you can say that these two amino acids, with their side chains and so forth, that are reacting have to be this length apart or shorter. And that's what these little yellow lines indicate in this fibroblast growth factor 2, FGF-2, where the crystal structure of FGF-2 is known. And these constraints will greatly aid your ability to find distant homologs, or to increase the precision of your homology modeling in three dimensions. The shorter your cross-linker, obviously, the better-- the tighter your constraints-- but it might reduce the efficiency of cross-linking. If you're doing this as artificial cross-linking-- as opposed to a natural cross-link, where you're basically stuck with whatever nature gives you-- this was an artificial cross-linking with a chemical, bifunctional cross-linker. And just as a reminder: this is a different way of showing something we had as a scatterplot last class, which showed that as you increase the sequence identity in homology modeling up to 100% on the vertical axis, you decrease your uncertainty, the observed root mean square deviation that you get between two structures. If they're better than 80% identical, then you have on the order of one Angstrom deviation, which is quite acceptable for many purposes. Now, if you want to do threading to very distantly related structures, getting down around 25% to 30% is getting into the twilight zone, where you really can't believe it. It's off by too many angstroms.
But the constraints in the previous slide could help you out, either in doing the homology modeling or in doing a threading, where you're searching with your favorite sequence through a database of three-dimensional structures to ask which three-dimensional structure it is closest to. Now, that FGF-2 we had two slides back, with all those constraints, can be run through the threading algorithm, where you run the sequence against the database-- illustrated here as the various rows of different structures. The fold family is in the second column from the left. The sequence identity of our search sequence, which is FGF-2, against all these three-dimensional structures is shown here: 98.6% is basically the same structure. That's the trivial example, because the threading rank is number one, as it should be, since it's almost exactly the same structure. The constraint error, of course, is going to be zero, because it's the same three-dimensional structure, and all the cross-links are consistent with that structure. But an interesting one-- now, this has been ranked by the constraint error, not by the threading rank, and so you can ask, does this improve the hits? The next one down is FGF-2 compared to IL-1 beta, and they do have the same fold family. We know that from three-dimensional structure, and the percent identity is way below the usual cutoff, where you can't infer it from threading or sequence alone. In fact, the threading rank is five-- it's not even the second-best thread. But its constraint error is zero, and so if you combine the good threading rank with the constraint error, then you would put this as your best distant homolog, at 12% to 13% sequence identity. And of course, it's beating out better threading ranks because it has fewer constraint errors. So you can see how powerful these constraints might be; you just need to evaluate exactly how cost-effective the mass spectrometry is.
Now, the last topic today, in the realm of protein modifications and interactions, is how we quantitate metabolites. You can see that we've got some momentum here on quantitating proteins and RNAs, so what are the issues that are slightly different for metabolites? Slide 52 summarizes some of these. When you break open a cell to isolate messenger RNA or proteins, the rate at which degradative enzymes act is on the order of seconds, while many of the metabolic processes take on the order of milliseconds to microseconds-- very rapid kinetics. And so as the cell starts to get a little bit sick, on the seconds timescale, all these enzymes are scrambling the metabolite concentrations. So you have these rapid changes. The detection methods are historically idiosyncratic. They might be enzyme-linked, where, in order to detect the metabolite, you have a series of enzymes that result in some fluorescent or luminescent assay. Or they could be gas chromatography, liquid chromatography, NMR, mass spectrometry, and so forth. The good news is there are usually fewer metabolites than there are RNAs and proteins. There could be 30,000-some RNAs and proteins, but typically only 1,000 or so metabolites, even in the more metabolically enabled organisms such as E. coli. There are various databases-- EcoCyc, WIT, KEGG, and so on-- which integrate information about metabolites with the enzymes that act upon them. Here, we're just looking at the mass range that we have. In the typical mass range, metabolites are very small compared to proteins and RNAs, most of them being around 200 atomic mass units. And many of them have absolutely identical mass-- that is to say, they have atom-for-atom exactly the same composition, even though it's arranged in three dimensions very differently. For example, isoleucine and leucine, as their names might imply, have exactly the same mass, no matter how many significant digits you put on them.
And this is illustrated by actual data on isoleucine and leucine-- these supposedly highly purified, commercially available versions of isoleucine and leucine, mixed together here and run out in these two dimensions: mass on the horizontal axis and retention time in a hydrophobic separation. Now, these are not peptides but amino acids, metabolites, and you can see that even though they are identical in mass, as shown on the previous slide, around 131, they are separable by their hydrophobicity. They have the same atomic composition, but they are separable just by this hydrophobic separation. And you can see in the commercial preps there are a variety of contaminating molecules that co-migrate in the reverse-phase dimension, presumably because something like reverse phase is used for purifying them commercially. So there are basically three ways of distinguishing molecules that have the same mass. The one in the previous slide was separating them by another property, like retention time on a hydrophobic column. Another one is secondary fragmentation. Just as we could fragment peptides by collision in the gas phase with some inert gas, we can do this with metabolites. And so two things that have the same mass may have a different fragmentation pattern. You can cleave at every particular position, and here are two different spectra from two different labs, with slightly different methodology, showing fragmentation at almost every carbon bond. The third method by which you can distinguish compounds that have exactly the same mass-- and this is the most extreme case: these compounds have the same mass, and not only do they have the same chemical composition, the same atomic composition, they actually have the same chemical structure. Their three dimensions are the same. Their mass is the same. What differs is-- let's say the red is a carbon-13, and the green are the carbon-12s.
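To make the leucine/isoleucine point concrete, a small sketch computing monoisotopic mass from atomic composition; both amino acids are C6H13NO2, so the masses come out identical to any number of digits:

```python
# Monoisotopic atomic masses (Da).
ATOM = {"C": 12.0, "H": 1.00782503, "N": 14.00307401, "O": 15.99491462}

def mono_mass(composition):
    """Monoisotopic mass of a molecule given as {element: count}."""
    return sum(ATOM[el] * n for el, n in composition.items())

# Leucine and isoleucine share the same molecular formula.
leucine = isoleucine = {"C": 6, "H": 13, "N": 1, "O": 2}
```

Both come out to about 131.0946 Da, which is why the mass axis alone cannot separate them and you need a second dimension like reverse-phase retention time.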
You can have the carbon-13 in different positions on, say, this glucose molecule. So it has the same three-dimensional structure, the same mass; you've just moved the C-13 to different positions. This is actually an interesting case: with natural-abundance glucose, where you have trace amounts of C-13, the label can be positioned on various different carbon atoms. But you can still tell where it is, because it's now broken down not in the gas phase, as in collision-induced dissociation, but in the cell-- it goes through different pathways. And this is the example-- we're going to be talking about pathways non-stop for the next three sessions-- but here's an example of central carbon metabolism, where you start with glucose and glucose phosphate in the upper left-hand corner of this network diagram, and you end up with carbon dioxide down at the lower left. And it can go through various pathways, through ribulose or down through three-carbon intermediates. And each of these three- and two-carbon breakdown products can have the labeled atom, the mass-tagged atom, in different positions. And as this quote from the literature points out, when you want to study the fluxes through the pathway, you can actually do a pulse or a stable steady-state labeling with isotopic labels, and you can monitor the fluxes through these pathways. But you need to take into account all the different ways that you can go through the pathways-- especially when you're dealing with metabolic cycles, like the TCA cycle, where you have to think through all the multiple turns. Now, in principle, that kind of metabolic tracing can be done either with mass spectrometry or with nuclear magnetic resonance. When you do quantitative 2D nuclear magnetic resonance, you're basically looking at the shifts in the spectral quantities for the carbon-13s. Remember, we were talking about carbon-13 labeling and the normal, most abundant protons.
The chemical shifts that you get here are due to the exact chemical environment of this proton or carbon-13-- say, the alpha and beta carbons of each of these amino acids. And each of these little clusters is schematic for the intensity of the particular atoms that you're monitoring by their isotope effects on the NMR here. The odd number of nucleons is critical to the detection. OK, so if you know the structure of the network, then you can use that knowledge-- you know which of these atoms go into which parts-- to quantitate the fluxes through any point in the network. On the other hand, if you only know part of the network, then you can use this as a way of tracking, to slowly piece together how the network must go. Most of this was worked out well before genomics and our current systems biology methods, and so there aren't real algorithms for doing this, as far as I know-- although, certainly, there's an opportunity for developing them. Now, remember this distinction of ratios versus absolute amounts. What we're measuring here are not metabolite concentrations but fluxes. So with metabolites, you can measure concentrations or fluxes, absolute or as ratios-- all four of those combinations. And again, in principle, if you can measure absolute concentrations, you can compute ratios, and you can compute ratios of fluxes. So how would you measure absolute concentrations? Remember, we said that one of the problems was that as soon as you start perturbing the cell, within microseconds you can get changes. But one way to do this without lysing the cells is to snap freeze them first-- snap expose them to aqueous methanol at minus 40 degrees. At that temperature it's still a liquid, so you can wash the cells in it and remove the outside metabolites. And there's reason to believe that this is the minimally perturbing method of preparing the cells.
And then you put them in basically boiling alcohol, and then quantitate with NMR methods, such as the ones we just talked about, getting up to 1,300 measures per sample. Some examples of these internal metabolite concentrations-- now, remember, these are not flux ratios, but actual metabolite concentrations of things like glucose phosphate, ATP, pyruvate, and so on-- can be correlated to genomics by the vehicle of gene knockouts. We have wild type at the top of the far left, followed by a deletion of HO. This is a homing endonuclease; it should have no metabolic consequences at all. This is used as a control-- it's a pseudo wild type that just shows that the deletion method itself is not changing the metabolism. And then, as you go further and further down this list, you get more and more severe expected effects on the ability of the organism, say, to produce energy, as exemplified by its ATP production. Now, if you have low ATP production, that means you have high residual levels of ADP, which is the other end of the energy spectrum. And so if you look down these columns-- it's a little hard to see with all the clutter that's produced by the standard deviations; I wouldn't want them to get rid of those standard deviations, it's wonderful-- but anyway, you can summarize this by looking at the ATP to ADP ratio. You don't strictly need ratios for this to work, but it's a way of accentuating this energy balance. And so you can see, for wild type, the ratio of ATP to ADP is almost 7-- highly charged, in this high-energy state. And as you get to these pet mutants, which have mutations in mitochondrial processes, you find the cell is becoming increasingly ineffective. Now, these were all chosen because they were so-called silent mutations. The title of this paper says that you can get phenotype by quantitating, holistically and systematically, all the metabolites, for mutants that otherwise have no phenotypes. OK.
Now, as you start quantitating whole proteomes, proteome interactions, proteome modifications, and metabolites and their interactions, you start wanting to relate this information, to summarize this information, in the context of models. And just as last time, in the upper left-hand portion of this summary, I emphasize that no model is exact. Sometimes the people working in the field convince themselves that their level is more exact than the lower models. But every one of these is an approximation: even quantum mechanics is an approximation of quantum electrodynamics, and molecular mechanics has only spherical atoms represented. Some of the most challenging cell models involve master equations, stochastic models. Now, we're not talking about single atoms anymore; we're talking about single molecules. Still, it's too coarse for many experiments. You can have phenomenological rates, such as the ones we've been talking about, represented in ordinary differential equations. How do you get the parameters that describe concentration and time? Not single molecules, but treating a particular part of the cell, or the whole cell, as a bag having a particular concentration of molecules. And then, as we get into the network analysis in the next few lectures, we'll talk about some of these other models, which have their roles. But let's just get at how we would get some of the parameters that describe concentration in time. And when we talk about the formalism of these networks, regulatory networks are mainly about binding. But they intimately connect with catalytic networks, where you not only bind, but you actually change the covalent structure of molecules. The simplest such case, a single substrate going in, a single product coming out, is at the top of Slide 60. And here, you can see the enzyme E, which is typically a protein, though there are RNA catalysts and so on. But the property that's emphasized here is that the enzyme is not consumed.
As A goes in, it makes a covalent change from EA to EB, and B is released. E is recycled. E is not consumed by the cycle, but A is consumed and B is produced. But let's look at it in a different light, in a particularly interesting class of reactions, for example those involved in regulatory cascades, signal transduction enzyme modifications. Here the enzyme now becomes the substrate. The ATP is no longer consumed over the whole cycle: ATP comes in, modifies the enzyme to a phospho-enzyme, and the accompanying reactions regenerate it, turning the ADP back into ATP. So the ATP, in a certain sense, is catalytic here, and now the enzyme is consumed, producing a phospho-enzyme. So it all depends, when you start constructing these graphs, on which node is a substrate and which node is an enzyme; it depends on how you look at it. And I do this somewhat provocatively, so you'll think about these networks not just as binding, but as catalysis, and not just the enzyme being the catalyst, but sometimes the substrate as well. Now, this is the simplest case of measuring kinetics. This is not equilibrium; this is kinetics. And we're studying this plot on the far left-hand side of Slide 61. As substrate increases, the rate of production of product increases. You could start out with zero product or small amounts, and historically, to do the experiment you would require that the product be as close to zero as possible. You'd do initial rates, and you'd have this simple relationship where the rate of increase of product was a simple saturating function of the substrate concentration, proportional to 1 over (1 plus the Michaelis constant times the reciprocal of the substrate concentration). And as the substrate increased, you'd eventually saturate the amount of enzyme present in the experiment, and that would be the maximum velocity, Vmax, for that amount of enzyme.
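That saturating initial-rate relationship can be sketched numerically. This is a generic illustration, not code from the course, and the Vmax and Km values are invented:

```python
# Generic sketch of the Michaelis-Menten initial-rate law described above:
# v = Vmax * S / (Km + S), equivalently Vmax / (1 + Km/S).
# The parameter values are invented for illustration.

def initial_rate(s, vmax=10.0, km=2.0):
    """Initial rate of product formation at substrate concentration s."""
    return vmax * s / (km + s)

for s in (0.2, 2.0, 20.0, 2000.0):
    print(s, round(initial_rate(s), 3))
# At s == Km the rate is exactly half of Vmax; at large s it saturates toward Vmax.
```

At s equal to Km the rate is Vmax/2, the natural halfway point the lecture refers to.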
However, if you have a fuller equation where you take into account all the players, at least in the simplest system, the forward velocity, dP/dt, the derivative of product concentration with respect to time (this might be in moles per liter per second), is going to go up as substrate goes up. More substrate means it will go faster in the forward direction towards product. But if you have some product, you'll have some product inhibition; you'll have a tendency for the kinetics to go in the opposite direction. So there's a negative component with products. The KS and KP are sometimes called Michaelis constants. The KS is basically related to the binding affinity for the substrate, and it's a natural halfway point: the substrate concentration at which the enzyme is half saturated is roughly what the KS is all about. Now let's compare this very simple case, one substrate producing one product, to a more typical case where you might have two substrates producing two products. And let's take this out of a real network. We're going to show this real network in just a moment, and it's glycolysis. Here you have two substrates, ATP and F6P (fructose 6-phosphate), going to two products, ADP and FDP (fructose 1,6-bisphosphate). And you've got this same form, where you have a velocity for this reaction as a function of the reactants, F6P and ATP, and you find them in the numerator here, and you find the Michaelis constants in the right places. But in addition, you find in the denominator this curious term with fourth powers. Well, we didn't see any fourth powers in the previous slide. Why are we suddenly getting fourth powers, and why is AMP in here? It's not even one of the reactants or the products. What's going on? Well, this is actually a regulatory phenomenon, allostery, where you have a second site on the enzyme. One site does the catalytic magic, and the other one is regulated by something that, in the infinite wisdom of the whole network, is important feedback.
So AMP is related to ATP, a further step along, and it feeds back on this enzyme, as do FDP and ADP, and so forth. And this fourth power just says you want it to be cooperative; you want to have a nonlinear regulation. That's what's going on in this term. It doesn't occur everywhere in the whole network we'll show in the next slide, but it does occur at some key points, like this one, where the enzyme has two sites, catalytic and regulatory. When you see these terms that are greater than linear, like the fourth power, that power is sometimes referred to as the Hill coefficient, and it refers to the steepness: instead of having this hyperbolic kind of curve, you have this kind of sigmoid curve, and the steepness of that sigmoid is related to that power. Now, this little piece, this phosphofructokinase step, is going to be put in the context of the entire network in the human red blood cell, right here in the upper left quadrant of the circle. And this is the simplest metabolic network that we'll be talking about. It mostly involves covalent transitions or pumping across membranes. It treats the whole cell as a uniform bag, with the membrane being a separate compartment. And there are really two objectives here. One is to produce ATP, so that you can run the pumps to keep the osmotic pressure constant across the red blood cell membrane, so that it maintains its shape. The other is to maintain the redox at the right level, so that you have a reducing environment. The hemoglobin is in intimate contact with oxygen, and so there is a certain low level of oxidation of the iron, rather than just binding of oxygen to the iron, and that oxidized form is not a good physiological state. So you want to have this reducing potential to get it back to the correct oxidation state, so it will bind oxygen. And you have a little bit of purine metabolism here.
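The contrast between hyperbolic and cooperative sigmoid saturation can be sketched like this. This is a generic illustration with invented parameters, not the red-cell model's actual phosphofructokinase equation:

```python
# Hedged sketch: hyperbolic (n = 1) versus cooperative sigmoid (n = 4)
# saturation. n plays the role of the Hill coefficient; k is the
# half-saturation point. All values are invented for illustration.

def hill(s, vmax=1.0, k=1.0, n=1):
    return vmax * s**n / (k**n + s**n)

for s in (0.5, 1.0, 2.0):
    print(s, round(hill(s, n=1), 3), round(hill(s, n=4), 3))
# Both curves pass through vmax/2 at s == k, but the n = 4 curve is much
# steeper around that point: from s = 0.5 to s = 2 it moves from ~0.06 to
# ~0.94 of vmax, while the n = 1 curve moves only from 1/3 to 2/3.
```

The higher the power, the more switch-like the response around the half-saturation point, which is the nonlinear regulation the lecture describes.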
Quite a bit of simple glycolysis, not oxidative phosphorylation, just because glycolysis produces reducing potential and a little ATP, and these 40 or so enzymatic reactions can be modeled with about 200 parameters. All of these 200 parameters have been measured accurately by purifying each of the metabolites and enzymes. And this has been reduced to an ordinary differential equation model that has been evolving since the '70s. If we look at Slide 64, at this phosphofructokinase, we have the same form that we had a couple of slides back. Here is the fourth-power term in the denominator, with AMP as a regulatory molecule. And then you have the Michaelis constants, and so forth, explicitly stated here in the numerator for the substrates F6P and ATP. And you have a similar equation for every single enzyme step in that whole network in the red blood cell model. At any given time point, whether that's dynamic or steady state, you'll have green for the metabolite concentrations and red for the fluxes catalyzed by the enzymes. And you can run this as a simulation for the red blood cell and explore all the interesting questions about robustness and optimality, and so forth. Even though this is a very mature model, there still is lots to be done, even in this very simplest system. Now, we're going to go on to more complicated systems in the next three lectures on networks. But for today, we've basically tried to integrate protein measurements, either absolute or ratios, with RNA measurements, with metabolite concentrations and interactions, and with proteins' post-synthetic modifications. So until next time, thank you very much. |
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 9B_Networks_1_Systems_Biology_Metabolic_Kinetic_Flux_Balance_Optimization_Methods.txt | NARRATOR: The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare, in general, is available at ocw.mit.edu. DR. GEORGE CHURCH: As promised, we're going into flux balance analysis. The simplifications here are even greater than the ones that we have been making so far, but hopefully you will see some benefit. Here we are assuming that the time constants of metabolic reactions are very fast compared to cell growth. We've seen the metabolic time constants are on the order of seconds or sometimes less, while growth might be on the order of hours. Even though we've seen interesting dynamics that can occur, those phase diagrams that we had in the ordinary differential equations, it's also very typical for a cell to be in fairly stable conditions. There'll be a transition time, and then they'll be stable again. And that will be the steady-state assumption. So we'll say there's no net accumulation of metabolites. Even though there may be fluxes into and out of every metabolite, the net change is zero. And so that means, in this equation we've seen a few times, the change in X with respect to time, dX/dt, is zero. And that means the stoichiometric matrix times the flux rates for all relevant reactions (you can think of it as a matrix representation of the sum of all the inputs and outputs at that particular X position), minus the transport vector, which is kept separately, is all zero. So what does that mean? That means the stoichiometric matrix times the internal fluxes is equal to transport across the membrane. And this is something I'll show you; we're going to do this in two steps.
First, we're going to solve it as if there were an exact set of equations, just the right number of equations for just the right number of unknowns, so you can actually solve for the unknown fluxes. And then we'll work through it where it's actually underdetermined, but there are inequality constraints. First, let's take the exact case. The stoichiometry is known; that's S. These are the zeros and ones, minus ones, twos, and so on. The uptake rates can be known in the sense that you can regulate the amount of input and output by the rate at which you remove substrates and add them. And the metabolic fluxes are what you actually want. This is in contrast to the red blood cell or other ordinary differential equation cases we've been talking about, where the rate equations were known, but what we were looking for were concentrations. Here we're less concerned about concentrations, and we want to learn what the fluxes might be, and later what the optimal fluxes might be. So, Slide 42. As a matter of nomenclature, we focus on positive fluxes. If we want to go in the opposite direction, we'll make that a separate, reverse flux. So, focusing on positive fluxes, we might know the flux level in certain reactions, which will help us have the equations appropriately constrained. We can control uptake rates. We can have maximum values for uptake and internal fluxes. So here's an example. Let's walk through it. It's nice to have one or two examples where we walk through in a given class. Here, we're going to have an input molecule A that is transported across the membrane at a rate R sub A. It then has essentially a decision fork here. Some fraction of the A is going to go through the X1 reaction, and some fraction through the X2. The amount that goes through X1 and X2 is what we want to know. We're going to essentially clamp, control, the rate at which it goes across the membrane, R sub A. So that's going to be a given.
You can see the constraint here: R sub A is going to be 3. And we're also going to know the rate at which B is removed, and that's going to be kept constant at 1. So we're going to solve for the two internal fluxes X1 and X2, and this external transport flux R sub C. Now, there's going to be conservation of mass, so we can start setting up the flux balances in the upper right-hand corner here of Slide 43. Take a look at A. Essentially, the intracellular concentration of A is dependent on R sub A, and then it splits into X1 and X2. So you know that R sub A minus X1, minus X2 is going to be zero. They're going to cancel out because A, by the steady-state assumption, is going to be constant. Even though the fluxes are not zero, their sum is. And for B, a similar argument can be made: X1, which creates B, minus R sub B, will also sum to zero. And the same thing for C: all the inputs have to equal the outputs. We have these two constraints, where we've just said we're going to clamp experimentally the amount of A going in and the amount of B coming out, and we want to solve for the other three fluxes. So we have three equations and three unknowns. The unknowns are the two internal fluxes and R sub C. So another way of stating this equation: X1 plus X2 is equal to 3, because we know the 3 going in has to equal X1 plus X2 by flux balance. The flux balance for A, essentially, can be summarized as X1 plus X2 equals 3. Or in matrix form, it's the top row here, 1, 1, 0, and the 3 is in the transport column vector at the far right-hand side. And similarly, you fill up the rest of the stoichiometric matrix with zeros and ones and twos and minus ones, and the transport flux vector is 3, 1, 0, for the constraints that we have. And you solve this, three equations, three unknowns, by standard linear algebra tricks: S times v equals b, so v is equal to the inverse matrix of S times b.
The matrix we had in the previous slide, whose first row was 1, 1, 0. Now, when you take the inverse matrix, you get this upper right-hand portion, and you multiply it out, and you get the column vector solution, which is 1, 2, 4, for X1, X2, and R sub C, respectively. Now, these are plugged into this diagram, and you can see that the whole mass balances out: 3 is equal to 1 plus 2, and 2 times X2, 2 times 2, is equal to 4. OK. That's an example where it's heavily constrained, and so we can solve it exactly. However, very often, there are not [AUDIO OUT] of the measurements that have been made. We can't clamp all these fluxes, and there are many more internal fluxes than external. And so we have an under-determined system. What is a good systems biologist to do at this point? Well, the formal solution is no longer a point, a single point, a nice column vector of fluxes; it's now an entire feasible space. This is a kind of cop-out answer: OK, we have fewer equations than we have unknowns, and that means the unknowns can occupy this entire multi-dimensional space, where however many fluxes you have, that's how many dimensions; if you have hundreds of fluxes, you'll have hundreds of dimensions. If you have three fluxes, A, B, and C, then you'll get this kind of multi-dimensional polyhedral region, and anything inside that polyhedron is an acceptable solution to the under-determined set. Well, this is still progress, in that you now know that your solution is in there somewhere. But what if we wanted to add some more constraints, which are not exact constraints anymore, but more of an optimization process? This multi-dimensional polyhedron will be bounded by inequalities: that a flux is greater than zero, that it's positive, or that it's less than some maximum flux. But then you want to find some optimal solution, and optimization is the thing that math has been harnessed for in some other fields outside of ours, such as economics.
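The exact case worked through above can be reproduced in a few lines. The matrix layout below is my reconstruction of the three balance equations (rows for A, B, and C), using numpy.linalg.solve in place of forming the explicit inverse:

```python
import numpy as np

# Reconstruction of the lecture's exact case: unknowns v = (X1, X2, R_C),
# with R_A = 3 and R_B = 1 clamped.
#   A balance: X1 + X2     = R_A = 3
#   B balance: X1          = R_B = 1
#   C balance: 2*X2 - R_C  = 0
S = np.array([[1.0, 1.0,  0.0],
              [1.0, 0.0,  0.0],
              [0.0, 2.0, -1.0]])
b = np.array([3.0, 1.0, 0.0])

v = np.linalg.solve(S, b)  # equivalent to inv(S) @ b, but better behaved
print(v)  # [1. 2. 4.]  ->  X1 = 1, X2 = 2, R_C = 4
```

Plugging the solution back in confirms the mass balance: 1 + 2 = 3 in, and 2 times 2 = 4 out through C.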
When you're doing planning of transport or planning of economic investments, and so forth, you want to know how much of your resources you want to send to the X1 route, and how much you want to send to the X2 route, very analogous to the situation we had in the previous slide. And that can be solved by linear programming. Linear programming often finds economic applications. But we need to ask: what is it we want to optimize? In economics, you want to optimize, typically, the bottom line. You want to lower costs and increase profits. What we have here, very commonly encountered in many metabolic systems, is this convex polyhedral cone. Convex just means that there are no little gaps in it, no little carved-out regions; it's all contained. That convexity allows you to take a linear objective function, say a multi-dimensional plane or a line, that you then move through this convex space. And since there are no gaps in it, the objective function gets better and better as it moves through this feasible space. It eventually gets to a point where it leaves the feasible space, and that is the maximum value, the optimal value for that objective function. So we can use this feasible space combined with this objective function to find some optimum. Now, what is the objective function that we want to use? If we were doing the red blood cell, the objective function might be ATP or redox or both, or delivery of oxygen. And for a variety of other systems of importance to understanding health or biotechnology production, or other medical and engineering goals, what we want to optimize is biomass: the ability of the cells to grow and produce other cells, or to produce a particular subset of molecules in the cell. But let's deal mainly with the case of cells producing cells.
And what this is, is a sum over all the monomers, all the components of the cell, which represent the body of the cell. As you transport small molecules in from the environment and incorporate them into the cell, you can think of it as a sink; it's removing those molecules from circulation, from the solution. And this sum is over all the components, the monomers, in their ratios. You can think of there being a fixed ratio of alanine to glycine to leucine to all the other components of the cell. And this is known. You can know this, without knowing lots of the other parts of the system we want to know, just by taking the cell and doing a chemical composition analysis on it. Very simple experiment. There are tables of this known for a variety of cells. And it hardly changes depending on how a cell is growing. The sources of carbon and nitrogen do not greatly affect the ratio of alanine, leucine, and glycine, because those are determined by what it takes to run the cell. So this is an important flux. You can think of this as a kind of lumped flux, and this will be our objective function as well. So the objective function, sometimes called Z, is going to be equal to the flux of growth. And you can see that you have, again, the same equation: the stoichiometry matrix times the internal fluxes is equal to the uptake fluxes. And we've now got this constraint, this optimization function. So, just like we had a very simple exact solution before, now let's take a very simple linear programming, or LP, solution, where now it's underdetermined, so we can't get the exact solution, but we can ask what's the maximum for a particular objective function. The objective function here is not going to be the biomass production of the entire cell; it's going to be maximizing the production of either D or C. Now, this is a slightly different diagram than we had before.
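The lumped growth objective described above is just a weighted sum of monomer drain fluxes, with weights taken from a composition table. A minimal sketch, with invented composition coefficients standing in for the real tables:

```python
# Hedged sketch of a biomass objective Z: a weighted sum of monomer drain
# fluxes. The composition coefficients below are invented placeholders for
# the measured composition tables the lecture mentions.
composition = {"alanine": 0.096, "glycine": 0.115, "leucine": 0.084}

def biomass_objective(fluxes):
    """Z = sum over monomers of (composition coefficient * drain flux)."""
    return sum(composition[m] * fluxes[m] for m in composition)

print(biomass_objective({"alanine": 1.0, "glycine": 1.0, "leucine": 1.0}))
```

Maximizing Z forces the network to produce the monomers in their fixed ratios rather than over-producing any one of them.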
We still have the rate of uptake of A on the left side of this kind of circular pseudo-cell. A goes in. It makes the same two-way decision of X1 versus X2. It's not binary; these quantitative real numbers determine how much X1 and how much X2. And then, if it takes the upper X1 route, it splits into B and C. If it takes the X2 route, it turns into B and D. In both cases it produces a B molecule. You can already see a constraint coming up here, which is that R sub A is going to equal R sub B. However you go from A to the outside again, it's going to produce one molecule of B for every molecule of A. This is a perfect conservation relation. And we've said that we're only interested in positive fluxes, so you get this little triangle here, where X1 and X2 are greater than zero. And they're constrained: we've said we're going to clamp R sub A so that it can't be more than 1, in arbitrary moles per liter per minute. And so X1 plus X2 are constrained to be less than or equal to R sub A, so they are less than 1 as well. So you get this feasible space, which is this diagonally cross-hatched region. That's the set of all exact solutions. But now, if we want to maximize a particular objective function Z, in this case the production of D, and we're not too concerned about anything else, then you get this line. You can think of this as a hyperplane, a line basically, going up through the feasible space, starting at the bottom of the slide, going up and up and up until it just barely leaves the feasible space. And when it does, the last point it touches is the maximum, and the maximum here happens to be X1 equals 0, X2 equal to the maximum R sub D, the maximum rate of production of the molecule we're interested in. If we had an objective function that went off the other axis, it would be X2 equals zero, and X1 equal to the maximum R sub C. So you can see how this works.
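This little two-flux optimization can be run with an off-the-shelf LP solver. Here is a sketch using scipy.optimize.linprog, which minimizes, so the objective is negated; it mirrors the example's constraints but is not course code:

```python
from scipy.optimize import linprog

# Maximize production of D (R_D = X2) subject to X1 + X2 <= R_A = 1
# and X1, X2 >= 0. linprog minimizes, so we maximize X2 by minimizing -X2.
res = linprog(c=[0.0, -1.0],          # objective: minimize -X2
              A_ub=[[1.0, 1.0]],      # X1 + X2 <= 1
              b_ub=[1.0],
              bounds=[(0, None), (0, None)])
print(res.x)  # optimum at the vertex X1 = 0, X2 = 1
```

As the lecture describes, the optimum sits at a vertex of the convex feasible region: all of the uptake is routed down the X2 branch.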
We create the feasible space with these constraints, and an objective function Z, and we run the Z off the edge of this convex space. It's important that it be convex. OK, so how applicable are this linear programming and so-called flux balance analysis? It works when the stoichiometry is well known. For E. coli, it's well known. For newly sequenced genomes, we can make connections to previously known enzymatic reactions, and we can guess at what the stoichiometric matrix would be. But in that case, the stoichiometry is less well known, in which case you're going to have to really embrace your outliers. When you get to the end, you're going to see all the errors and go back and figure out what was wrong with the stoichiometric matrix derived from your genome. You don't need much experimental information to run this, but you need some, and we'll explore two kinds of data to test how it's going to do. So what are the precursors to cell growth that we are monitoring as a Z function? We want to define this growth function in terms of the biomass. And, as I said, there are tables of composition we'll show in a couple of slides. You can use this as part of the complete metabolic network; you can also use this as the objective function. It can be described as some small number of biosynthetic precursors, plus the energy and redox cofactors. Now, there are many ways of doing in silico cells. We showed the red blood cell's ordinary differential equations. For the kind of in silico cells here, where the optimization is fairly limited, the stoichiometric matrices have only been worked out for three, maybe five cells, three published. Yeast is on its way. And these are mostly hand-curated. There is definitely a need for more automated input from genomic models to these flux analyses. Here are some references; we'll be talking about them in a moment. First, we'll talk about the wild-type case for each of these cells under a variety of different growth conditions.
Then we'll move on to mutants and ask whether we expect mutants to be optimal or not. And that will be called minimization of metabolic adjustment. Now, where do these stoichiometric matrices come from? Parenthetically, the kinetic parameters that we are talking about are known for some of these complicated biochemical systems, not just for the red blood cell. And where both the stoichiometric matrices and the kinetic parameters come from, of course, is a vast literature. It's an unwieldy literature, in the sense that it was done at a time before anybody thought they were going to be responsible for getting this into computer-parsable databases. So it's mostly been re-entered by technicians into databases, and you end up with these diagrams, such as the central one, where each of these boxes is a node that contains a substrate. And the lines carry numbers in this dotted, multiple-decimal-point notation, a hierarchical classification going from left to right, getting more and more detailed about the enzymatic reaction. And this is built up in a database. From this, you can access such things as the kinetic constants, effects of pH, and other details. And also, from this, you can get the stoichiometric matrices, in principle, because here you can see that you've got, say, an input of ADP plus glucose 6-phosphate going into this reaction, or coming out, depending on which direction you're going. And so those convert to zeros and ones in the matrix. That's the source of the stoichiometric matrix, which tells you what reactions are allowed: what two things can come together, or what one thing can be converted into, by the various enzymes that actually exist in cells. And you can toggle those on and off, either by regulation or by evolutionary change or by genetic manipulation or mutagenesis. You can basically have a universal matrix, and you toggle on only the ones that you think happen in your organism.
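One way to picture going from database reaction entries to a stoichiometric matrix is the sketch below. The reaction names and coefficients are invented for illustration, not taken from any actual database:

```python
# Hypothetical reaction entries: substrate coefficients negative,
# product coefficients positive (these two reactions are just examples).
reactions = {
    "hexokinase": {"glucose": -1, "ATP": -1, "G6P": 1, "ADP": 1},
    "pfk":        {"F6P": -1, "ATP": -1, "FDP": 1, "ADP": 1},
}
metabolites = sorted({m for stoich in reactions.values() for m in stoich})

# S[i][j]: coefficient of metabolite i in reaction j (0 if absent).
S = [[reactions[r].get(m, 0) for r in reactions] for m in metabolites]

for m, row in zip(metabolites, S):
    print(m, row)
# ATP is consumed by both reactions, so its row is [-1, -1].
```

Toggling reactions in or out of the `reactions` dictionary corresponds to the toggling of a universal matrix down to the reactions present in your organism.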
That's how you get the stoichiometric matrix. How do you get the optimization function, the Z function, which is how you want to make those decisions of how much X1 and how much X2? This is dependent on the biomass composition. And the biomass composition, which we have on the vertical axis here, is the coefficient in the growth reaction, which ranges here over almost eight logs, from the lowest compositional fractions, which are some of these coenzymes. Just like enzymes, those are only needed in small amounts, up to those that are needed as major participants in hundreds of reactions, like ATP, some of which also contribute heavily to biomass, as ATP does through large biomass fractions like ribosomal RNA. And then the various amino acids, which tend to be many of these stars at the higher levels, like lysine, leucine, and the other 18 amino acids. OK, so these are examples of the numbers that would go into the optimization here. These red hyperplanes would be that linear sum, in the ratios from the previous slide. Now, we've already worked through an example where we slid one of these hyperplanes off the edge of the feasible space in a two-dimensional model and got the optimum production of a particular substance, D. But now we're trying to optimize this linear sum of all the metabolites that go into the body of the organism. Just focus on the green feasible space. Some of it's actually hidden behind the yellow, but imagine this whole convex polyhedron of green, which is the feasible space, FS, for wild type, WT, and you can get an optimum. You assume that this optimum, whatever the conditions you're looking at, is likely to be achieved, because for millions of years this organism has been living through all the different growth scenarios.
Growing on glucose, on galactose, on various nitrogen sources, and so forth: no matter what reasonable set of conditions you throw at it, it's seen something like that before, or some combination, and so it's optimal for it. And that's what that top right red dot means: it's the optimum for the wild type, found by moving this hyperplane, which is the sum of all the different growth components in the correct ratios. You optimize all the decisions, the X1s and X2s, so that you get the right ratios, so you don't get way too much, say, of some rare amino acid like tryptophan and not enough of a common one like glycine or alanine. That's great for the wild type under a variety of different growth conditions. But what if you throw a real curveball and give it not a new condition, but a perturbation it really doesn't see very often, if at all, which would be to knock out a gene completely by deletion, or conceivably to knock in a gene, add a gene that it hasn't seen before? Well, now you could say we're going to do the same optimization. We'll run the same red hyperplane through the new feasible space, which is reduced if it's a knockout, and could be increased if it's a knock-in, where we're adding a gene. But in this reduced space, the original optimum is no longer accessible. And if you rerun the optimization, running this plane off the edge, you'll find a new red dot in the yellow space, which is a knockout optimum. But after you do the knockout, it could take evolutionary time, or at least long laboratory times, to allow all the other genes to mutate and be selected so that they accommodate this new knockout. The wild type had plenty of time to do that; it had millions of years. The yellow feasible space for the knockout may not have had that, so you want to know what the immediate response is. And the immediate response, you might imagine, is shown here as an orthogonal projection.
A closest distance: think of this as a Euclidean distance, in this multi-dimensional space, between the wild-type optimum and its projection onto the feasible space of the mutant. Now, for that distance, you should already be thinking that this is no longer a linear programming problem. This may be quadratic, because the distance is a quadratic function. And in particular there are certain pathologies where you can't just take a projection, because sometimes the projection can end up in a part of space which is not feasible. Here, this projection does not land on the yellow feasible space, FS of the knockout. So what you need to do is nudge this purple symbol up to the nearest point of the feasible space, the one which minimizes the distance to the wild-type optimum and still falls in the feasible space. Now, this is a quadratic programming algorithm. The linear case is very simple to think of, as just moving this plane off the edge of the convex space. Quadratic programming is a bit more complicated, but I think you get the idea: you're minimizing that distance. OK, now, with any good modeling exercise, there should be some data lurking in the wings. Many of the network models that we will do are getting more ambitious than even the massive amounts of data that are coming in. There are two types of data that might leap to mind as being appropriate for this. Remember, we said that what the organism has done is optimize the use of these networks so that it can maximize growth. So the two sources of data that you'd want to test this with would be the flux data itself, because you're predicting how much is going to be going in each flux direction, and the growth data, because you're making predictions that the fluxes in the network will be used optimally in order to maximize growth, for various mutants and various different conditions of growth, different carbon and nitrogen sources.
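The minimization-of-distance idea can be sketched as a small quadratic program: find the flux vector in the mutant's feasible space closest to the wild-type optimum. The flux numbers and the knockout constraint below are invented, and scipy's general-purpose SLSQP solver stands in for a dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

v_wt = np.array([0.3, 0.7])   # hypothetical wild-type optimal fluxes (X1, X2)

# Knockout deletes the X2 reaction, so the mutant's feasible space is
# X1 >= 0, X2 = 0, X1 + X2 <= 1. MOMA-style: find the feasible v that
# minimizes the squared Euclidean distance to the wild-type optimum.
res = minimize(
    fun=lambda v: float(np.sum((v - v_wt) ** 2)),
    x0=np.array([0.5, 0.0]),
    method="SLSQP",
    bounds=[(0.0, None), (0.0, 0.0)],                      # X2 clamped to 0
    constraints=[{"type": "ineq", "fun": lambda v: 1.0 - v.sum()}],
)
print(res.x)  # close to [0.3, 0.0]: X1 stays near its wild-type value
```

Note the contrast with the LP: nothing here maximizes growth; the mutant simply stays as close as possible to its old flux distribution.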
So here you have a group in Zurich, which is among the very few groups that can actually measure the internal fluxes for metabolic pathways that begin with, say, isotopically labeled glucose and end in the various amino acids. You can measure these-- we prepared ourselves for this a little bit in the last class, where we talked about isotopomers: different chemical compounds that are basically identical chemically, but in different carbon positions you have a carbon-13 in one place and a carbon-12 in another. And the exact arrangement of carbon-12s and carbon-13s determines the isotopomer. And you have mixtures of carbon-12 and carbon-13 glucose-- the upper left-hand corner of this diagram includes glycolysis, the pentose phosphate pathway, and the TCA cycle. This isotopically labeled glucose goes through here. And then you need a little bit of modeling that we won't go into, and that does require a stoichiometric matrix, just like our optimization modeling does. But now, very independently of that, you need to ask how the isotopes of glucose would make their way into the amino acids. And then, once you have that, given the amino acid ratios by mass spec and NMR, you can go back and calculate what the fluxes must have been in here. Once you have those fluxes, you can ask how close to optimal they are for the wild type under one condition. For the mutants under the same condition. For the wild type and mutant under different conditions. OK, so this is the first class of data that we'll use: the internal fluxes for wild type and mutants. Look in the upper right-hand corner-- so the upper left-hand corner is the icon, color-coded from the previous slide. The upper right-hand corner is the predictions for wild type using the linear programming or FBA model. You get predicted fluxes on the vertical axis and experimental fluxes on the horizontal axis.
You see, here is a good correlation coefficient of about 90-plus percent, and the probability that this would not be random-- that this is a positive linear correlation-- is better than 10 to the minus 7. Very unlikely to happen at random. Then you can focus in on the outliers, like number 18 here, but overall it's a good fit-- I mean, this can allow you to ask what experimental or model problems you might have. But let's ask-- that's the wild type under common growth conditions. What about the mutant under the same growth conditions? Same stoichiometric modeling, same measurements; now the experimental fluxes versus predicted fluxes, in the lower left-hand quadrant, are almost completely random. There's no positive or negative correlation that's significant, as we might have expected for the mutant. Remember, it isn't optimal, so you don't expect the linear programming method, which produces the optimum, to be appropriate. Unless we had allowed this to evolve in the laboratory for hundreds of generations, or a sufficient number, however many that is, to get all the other genes, or enough of the other genes, to adapt so that you get near an optimum. In any case, this one is random. If you now use the quadratic programming approach, the MOMA or minimization of metabolic adjustment, you get the very exciting result that it becomes statistically significant again. Still a few outliers, 17 and 16 here in magenta, and these are things where you can now make very specific tests, because now you know which ones are most discrepant between the experimental and predicted fluxes, and you can go in and ask what part of the model or what part of the data collection might be accounting for the poor fit. But overall, this starts to give one the impression that maybe the mutants are not at a new optimum, and that they are somewhat better described by this sub-optimal, stay-close-to-wild-type solution. And if you walk through various different conditions with different knockouts, you can see multiple examples of this.
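The comparison just described is an ordinary Pearson correlation between the predicted and measured flux vectors. A minimal sketch with made-up numbers (the real data are the couple dozen isotopomer-derived central-metabolism fluxes):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical predicted vs. measured fluxes (arbitrary units), standing in
# for the real isotopomer-derived flux measurements.
predicted = np.array([1.0, 2.1, 3.0, 0.5, 4.2, 2.8, 1.7, 0.9])
measured  = np.array([1.1, 2.0, 2.7, 0.6, 4.0, 3.1, 1.5, 1.0])

r, p = pearsonr(predicted, measured)
# r near 1 with a small p says the optimality prediction tracks the data;
# r near 0 (as for the mutant under plain FBA) says it does not.
```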
So here, in the upper left-hand quadrant of Slide 59, we have the carbon limitation condition on the far left, and then the comparison of the method for flux balance on wild type. Here's that 0.9 correlation coefficient and good P-value, and then the comparison of MOMA and FBA. And again, the significance of switching from FBA to MOMA is 3 times 10 to the minus 3, indicating that the quadratic programming is the more appropriate application here. And as you walk through this, you'll see many good correlation coefficients and many improvements when you go to the quadratic or MOMA-- not every case, but certainly many of them. Now, that's one class of test, which is the internal fluxes. The idea is that you adjust the internal fluxes until they're optimal to produce growth. Well, how about measuring growth itself on a set of mutants? Again, you expect these mutants to follow the MOMA prediction slightly better. And so Slide 60 is a particular way of measuring a large, genome-wide set of mutants. This is not a trivial undertaking. We know how to measure a transcriptome's worth of RNAs on microarrays; maybe there's an analogous way to measure a genome's worth of mutants. Ideally, you would have not just one mutant per gene-- one knockout per gene, which is maybe a very expensive handcrafted deletion-- but a little targeted mutation in every domain that contributes to that gene's function: mutations in the DNA regulatory elements, the various protein domains, the RNA stability domains, and so on. Not quite at that level of ideal, but getting close to it, is transposon mutagenesis, where you insert small bits of DNA randomly throughout the genome. The modern set of commonly used transposons are pretty random in their insertion site choice. And then you need a way of turning this collection of transposons into a readout of growth rates. Well, one way to do this is to have a population of cells, each having its own distinctive transposon, and to grow them as a mixture.
And as they grow as a mixture, the ones that grow better will dominate the population, and so their transposon junctions will dominate too. The transposon itself is universal-- it's present in every cell-- but where it sits is unique to that cell, or nearly so. So the junction between transposon and genome is unique, and you want a way of assaying that. The way you assay it is you take the complete DNA from the entire cell mixture, and you want to ask how much of each transposon exists. So you cut the entire DNA mixture with a restriction enzyme that cuts frequently, a so-called 4-cutter, which cuts about every 256 base pairs on average. And you ligate on this very special kind of linker, which will not amplify with the corresponding primer until some other DNA synthesis has gone through, because, you see, this little Y linker is not perfectly base-paired. It requires DNA polymerase to make a complement; the complement then will bind the primer. It's a long way of saying that only in cases where you have a primer in the transposon near one of these Y linkers will you get amplification. Transposon alone, you won't get it. Y linker alone, you won't get it. That's one step of enrichment, and the reason you have to enrich is that if you just threw this whole thing on the microarray, you'd get everything lighting up, because every cell has every piece of the genome in it, and the transposons are present in trace amounts. So you need to amplify the junction fragment, first by this trick, followed by a T7 promoter. This is a phage promoter, very commonly used, with a very clean RNA polymerase background, that's been incorporated into your favorite transposon. So now you have two steps: first, this ligation-mediated PCR, and second, the T7 in vitro transcription to make RNA. Now you've basically reduced this to a problem similar to the transcriptome microarrays that we've done before.
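The 4-cutter spacing above is just 4 to the power of the site length, since each of the four bases is roughly equally likely at each position of random sequence. A quick sketch, using GATC as an example site (any 4-base site behaves about the same on random DNA):

```python
import random

# A recognition site of length n occurs on random sequence about once per
# 4**n bp; for a 4-cutter, that's 4**4 = 256 bp between cuts on average.
def expected_spacing(site_len):
    return 4 ** site_len

# Simulate: count GATC sites in a random 1 Mb "genome".
random.seed(0)
genome = ''.join(random.choice('ACGT') for _ in range(1_000_000))
mean_spacing = len(genome) / genome.count('GATC')
# mean_spacing comes out near 256
```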
Now the RNAs are surrogates for the amount of each strain. We're not measuring RNA. We've made an RNA that represents the transposon junction fragment and hence allows you to quantitate the amount of that particular mutant in a population. As that mutant goes up or down in the population, so does the RNA from this assay. So you might ask at this point, well, this is great, but I don't trust it. Just like some people might not have trusted mass spectrometry as being reproducible enough to be quantitative. So the way that you determine whether something is sufficiently reproducible to be quantitative, one way to do it is to do two independent selection experiments. Evolutionists might say, oh, if we reran evolution all over again, we would not get the same set of organisms we have on Earth today. That may be true, but that's because there are all sorts of bottlenecks in historical events. In this case, we specifically tried to make it reproducible by avoiding bottlenecks by having every mutation, every transposon type represented by 1,000 different independent events. As well as 1,000 copies of every transposon hit, so you keep the population size large, and so it becomes more reproducible. And the way you measure the reproducibility is you do the two selection experiments. The two ideally independent sets of transposons subjected to exactly the same selection procedure. And you see a very gratifying curve here with an R-squared regression measure in excess of 98%. And the kind of scatter that you would be pleased to get in an RNA microarray experiment. So this is reproducible and, therefore, quite capable of producing quantitative data. Well, now let's go back and compare it to the two different methods of modeling the flux optimization. There's the FBA or linear programming, which assumes that the mutant is immediately optimal. 
And then there's the MOMA, or minimization of metabolic adjustment, which doesn't assume immediate optimality, but assumes that the mutant stays close to the wild-type optimum. So, starting at the top with the linear flux balance, you can classify the predictions of the model for each gene. And here we have almost 500 different genes mutated, and first, running through the in silico predictions, you can classify them as being essential for a particular growth condition. They aren't necessarily essential for every growth condition, but for the growth conditions here they're classified as essential, or nonessential, or something in between. We'll just focus on the extremes of essential and nonessential. And then the experiment can be classified as to whether there's significant negative selection or no noticeable selection in the particular growth conditions. And so you expect the ones that are essential in these growth conditions to be heavily negatively selected, and you have 80 examples of that out of 142 predicted. However, you wouldn't expect any of them to show no selection, since they're essential; so the 62 should be zero. This is an example of how we're going to try to explain, or use this as a way of generating interesting hypotheses about, the exceptions or outliers. Similarly, the nonessential genes should show no selection-- this 180 in the lower right-hand part of the upper FBA table-- but the 119 should be zero. So the first-pass explanation is OK; we knew that these weren't going to be optimal right away. So let's use the other model, the sub-optimal, nearly optimal model, MOMA, and see how well it does. Well, on the far right-hand side of Slide 62, you see the chi-square P-value, which is how well the predictions agree with the observations. And it is significant for the linear programming: it's 4 times 10 to the minus 3, fairly significant. And the MOMA is much more significant, 10 to the minus 5. It has improved some of those upper-right 66 and lower-left 108.
Those should still be zero, and they're not. And the extent to which they deviate from the ideal expectation means that either the data-- somehow the way we're collecting the data-- is not behaving the way we expected, or, more likely, the model has some problems in it. And here are examples of problems for those two classes. The predicted essential genes which nevertheless show no selection might be examples of redundancies that we failed to model when we put together the stoichiometric matrix. For the stoichiometric matrix, we take every known biochemical reaction. We might take convincing homologs at the sequence homology level and say those might be examples of redundancy. We might take analogous or parallel routes that are documented biochemically, where you can get to the same point by a series of possibly non-sequence-homologous enzymatic steps. So those are novel redundancies. This is an example of potential discovery, and it can be pursued in a very directed way, because these are 66 very specific predictions. The 108 nonessential gene knockouts which nevertheless show negative selection could be examples of position effects, where a mutation in one gene affects other genes that are close in position, meaning along the DNA. An example of this is that a variety of mutations in a gene which is upstream from another gene in an operon can have polar effects on the downstream genes, due to the coupling of transcription and translation. You can have other position effects as well. These are examples of the possibly many explanations and discoveries that can contribute to the wonderful nature of-- it's a win-win situation. Either you get great correlation, or you get great exceptions and discoveries, or both. OK, now, when we talked about redundancy as one of the possible explanations in the previous slide, that brings up a really important conceptual component of the post-genomic, functional-genomics world, which is: what do we do about multiple homologous domains?
When you go through and you sequence genomes, you find parallels. You find either whole genes or pieces of genes which have high sequence homology, which we talked about at the beginning of the course. And here's an example of three protein-coding regions, which encode enzymes involved in the biosynthesis of amino acids. So you can imagine that when you grow the cells on minimal media, you are not providing the amino acids, so the cell has to make the amino acids. If it's got a mutation, a transposon hit, in one of these genes that's key in the biosynthesis of lysine, threonine, and methionine, you might expect that it won't grow well. It'll be at a selective disadvantage in minimal media unless one of the other genes covers for it. So these have two or three domains here, color-coded, and the green domain is shared by these three biosynthetic proteins that are otherwise very different in their metabolic contributions. And you might imagine that some of these green domains might cover for others when one of them is mutated. For one of them, however-- the lysine one-- when it's got a transposon hit in that domain, the selective disadvantage in replication rate in minimal media is severe; in fact, it's an order of magnitude, while the others are more subtle. It could be that the lysine enzyme covers for the other two because it's made in large amounts or is very active, but the other two can't cover for the lysine-- or there's a variety of other possible explanations for this observation. The point is that this generates hypotheses that allow us to follow up on the flux balance or MOMA type of modeling. And I think the reason I spent so much time on it is that the concept of optimization is important both in the sense that, as engineers, we're optimizing living systems, and also, as students of evolutionary systems, where to understand what those systems are optimized to do, we need to look at them from this perspective.
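As a footnote before the summary: the agreement between predicted gene class and observed selection from Slide 62 is naturally scored with a standard chi-squared contingency test. A sketch using the FBA counts quoted above (80 and 62 for predicted-essential, 119 and 180 for predicted-nonessential); the exact P-value depends on the test variant used, so it need not reproduce the slide's quoted figure:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: predicted essential / predicted nonessential (FBA).
# Columns: observed negative selection / no observed selection.
table = np.array([[80, 62],
                  [119, 180]])

chi2, p, dof, expected = chi2_contingency(table)
# A small p rejects the null that predicted class and observed selection
# are independent, i.e. the model has real predictive power.
```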
So this is just what we've covered today. We've covered both continuous and discrete ways of modeling molecular systems. And, in particular, the red blood cell and the copy number control, as a way of dealing with metabolism and biopolymers separately, and the flux balance has brought these together, and brought in the notion of optimization. OK, so until next time. |
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 3A_DNA_1_Genome_Sequencing_Polymorphisms_Populations_Statistics_Pharmacogenomics.txt | The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. PROFESSOR: Ready? OK. Well, welcome to the third lecture. Quick review of what we did last time-- first slide. We talked about purification, basically at every level of the hierarchy in complexity, from the elements all the way up to organisms, and in particular, the awesome purification one can get by serial dilution down to single molecules-- recombinant DNA molecules, in this case, embedded in E. coli cells-- and how this purification led to a revolution, first in biochemistry, then in recombinant DNA. Molecular biology then led to the Genome Project and systems biology, which brings us to models of interconnections. And to start in with algorithms that are useful in systems biology, we start with one of the simplest and most robust ones, which is this genetic code, shown here in the lower right-hand corner. But as one looks at the huge diversity in the tree of life, you find examples of exceptions to almost everything you can come up with, including the genetic code. And we talked about how to cherish the exceptions, as usual, here. And then, getting back to systems biology, in terms of how one creates qualitative and quantitative models from functional genomics data and establishes evidence. Finally, we ended on mutations and selection, as we will do in all three lectures here. This lecture in particular will focus on mutations and selection. So what is today's menu? We'll start with types of mutants and how they're represented for bioinformatic purposes. We'll talk about the three main methods by which mutations occur, drift, and select so that you can determine the frequency of different alleles in populations. 
In doing that, we'll rely on our friend from the first lecture, the binomial distribution, in the context of an exponentially growing population. Then I'll give you some very practical training here on association studies, where we illustrate it with a very important example of HIV resistance and a very useful statistic, easily as useful as the binomial and the Gaussian-- the chi-squared statistic. And then we'll continue to talk about association in the context of causative alleles and the importance of getting haplotypes, and then the technologies that have been used to get the first framework genome and how one might change strategy in getting subsequent genomes in order to make these very large association studies cost-effective. And finally, in that context, we'll talk about random and systematic errors in more detail, so you get lots of examples and ways of dealing with them computationally. So, to continue our brief discussion about our friend, the 100% DNA sequence identity, or amino acid sequence identity, that you might find between the lens protein and the enolase enzyme: we can see that, even at 100% identity, you can find differences in function. And so functional measures are a good adjunct to DNA or amino acid identity. At 99.9% identity, we're talking about the level of single-nucleotide polymorphisms that might exist between your mother's and your father's chromosomes in your body, or the differences between one of your genomes and one of my genomes. About once every kilobase, there will be a polymorphism, a difference. It's often a single nucleotide, an A for a G. Then, as we go down to 98% identity, we're talking about the differences between one of our genomes and a chimpanzee genome-- in other words, a completely different genus and species.
However, by the criterion we mentioned in a previous lecture about bacterial definitions of species, those would be almost identical, and you'd have to go to identities less than 70% in order to call something a new bacterial species. So you can see that this is a very soft number, very dependent upon the context and the branch of the tree of life that you're working with. Then we have sequence homology, and very distant homologs which are only detectable, as we'll see when we get to the proteomics section of this course, by three-dimensional structures, not by sequence. Sequence homology will be the topic of the next lecture, and the very distant homologs and 3D structures will come later, in proteomics. That switchover occurs at about 25% to 30% identity. This is just a reminder and an introductory slide to the next slide. We have different phenotypic effects due to different types of mutations. And setting aside the phenotypes that we talked about before, you have the types of mutations. The classes are null mutations, dosage, conditional mutations, gain-of-function, and altered ligand specificity. Now, that's in broad, colloquial terms. But how is this achieved on a more molecular level, and how do we represent it compactly for bioinformatic discussion? So we have single substitutions. These can change a single base pair-- an AT base pair to a CG base pair, or a GC base pair to a TA base pair, and so on. There can be deletions and duplications. These can range: the deletions and duplications can be as large as a chromosome or as small as a single base pair. You can delete that A rather than change it into a C, and that would be one base pair. When you delete or gain an entire chromosome, that's called aneuploidy. The example in the previous slide was trisomy 21. If you removed a copy of chromosome 21 instead of having three copies of it, it would be monosomy. And these have huge consequences even though they're fairly subtle changes in dosage.
Now, a special class of deletion and duplication occurs when you have a tandem repeat of a sequence. Anywhere from a single-base repeat, AAAAA, to dinucleotide and trinucleotide repeats and on up, those have a very high tendency for both forward and reverse mutation, both deletions and duplications, because at the very small level you have polymerase slippage and other microscopic events, and then at the larger level you have some kind of recombination, often homologous recombination, that causes deletions and duplications of tandem repeats. An inversion, here: we're representing a complicated chemical event which involves double-stranded DNA going 5 prime to 3 prime on the top strand, from left to right, and 5 prime to 3 prime on the bottom strand, going from right to left. And you're taking a little chunk of that and flipping it, so that you break some bonds and remake them. The 5-prime-to-3-prime polarity is conserved, but you've done a reverse complement of the top strand and the bottom strand. In that reverse complement, you basically turn Cs into Gs and reverse the order. This is sometimes abbreviated for simple genetic description. Let's say we inverted CDE. You might change the case, or add primes to it, or color it-- in some way or another, you indicate that it's now a reverse complement, and of course the gene order, or the DNA segment order, is reversed to indicate that inversion.
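The inversion operation just described-- replacing a segment with its reverse complement-- is a few lines of code:

```python
# Translation table: Cs become Gs, As become Ts, and so on.
COMP = str.maketrans('ACGTacgt', 'TGCAtgca')

def revcomp(seq):
    """Reverse complement: complement each base, then reverse the order."""
    return seq.translate(COMP)[::-1]

def invert(seq, start, end):
    """Chromosomal inversion of seq[start:end]: the segment is replaced
    in place by its reverse complement, preserving 5'-to-3' polarity."""
    return seq[:start] + revcomp(seq[start:end]) + seq[end:]
```

For example, `revcomp('GATTACA')` gives `'TGTAATC'`, and applying `revcomp` twice returns the original sequence, just as inverting the same segment twice restores the chromosome.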
It can be a transposable element that came in from who knows where and inserted in between the A and B. Recombination: here I've illustrated homologous recombination. You can have non-homologous recombination, but this is by far the most common and interesting; even some non-homologous recombination involves short regions of homology. You basically have either two sister chromosomes, or homologs from mother and father, or paralogs within the genome, where you've got a duplicated gene that might even be on the same chromosome arm. And now, just as with the deletions and duplications where you have tandem repeats, if you have repeats anywhere in the genome-- we're emphasizing the similarities here, but they have a few small differences that allow you to track them-- those can be exchanged by single-stranded or double-stranded DNA, various chemistries. And you can either do a nice reciprocal exchange here, where you preserve all the DNA and the little c ends up replacing the big C by some kind of crossover between D and E, somewhere between C and F; or you can have a nonreciprocal exchange, where you pick up a little bit of c and duplicate it, and now there's no big C left over, only little c. That's called gene conversion. But you get the idea. This is a fairly exhaustive list of the kinds of simple, elementary mutations you can have. Now, you can pile these up in various combinations with each other, and over long periods of time you can get completely new sequences. There are even ways that you can get de novo synthesis of DNA, such as the mechanism of terminal transferase, which is used-- yes? AUDIENCE: What are the [INAUDIBLE]? PROFESSOR: Oh, we're going to get to that a little bit later on, but I can give you a preview. In the human genome, these point mutations occur at about 10 to the minus 8 per base pair per generation.
Deletions and duplications, especially in tandem-repeat regions, can be as much as six orders of magnitude higher in frequency than that. So there's a huge variation from position to position, and the type of allele will help determine the mutation rate, and you should be quite aware of that as you go through various computational exercises. This is just a bit of commonly accepted nomenclature. Mutations and polymorphisms are basically the same thing-- differences between you and me. They become polymorphisms when they are common alleles, when their frequency is greater than 1% in a population. This is the common usage in the human genome community; it may differ elsewhere. But this is roughly where a mutation, which is rare-- less than 1%-- becomes a polymorphism, which is frequent-- more than 1%. As a counterpoint to that, I would say that there is a good chance that every possible single-nucleotide polymorphism that could exist does exist, in a population as large as ours, with mutation rates which are modest but which still, over long periods of time, allow the accumulation of mutations such that, rather than having maybe 3 million common single-nucleotide polymorphisms, or SNPs, we might have 12 billion-- one at every position. And in a later slide, we'll actually go through and calculate crudely why the frequency should be around 10 to the minus 5 and why there should be about 10,000 of us representing each of these so-called rare alleles. They're rare individually, but they're common as a group. It will make a difference whether these polymorphisms are merely linked to your favorite trait or whether they actually cause it-- whether they're part of the cause. No particular one can be said to be one-to-one with a cause; it's all a collection of mutations and environment. Now, haplotypes. What do we mean by haplotypes? We've introduced SNPs, single-nucleotide polymorphisms.
Suppose you have a SNP that, let's say, is involved in causing APOE4-- an allele we'll introduce in just a moment that is associated with increased risk of Alzheimer's disease-- and let's say that SNP makes a known change in the protein's three-dimensional structure; you can call that the bad allele, or the associated allele. Now, if that protein were expressed at a low level-- for example, if you had a promoter mutation or enhancer mutation-- then that haplotype, that combination of promoter mutation and protein mutation, is a more significant predictor than either one of them separately. And the fact that they're on the same chromosome is important, because whether the promoter or enhancer that determines the level of the protein acts in cis, meaning on the same DNA, or in trans makes a huge difference. So that's what haplotyping is all about. It's determining which mutations are in cis, on the same DNA, in order to make associations biologically meaningful and, in terms of systems biology, to interpret them in terms of what you know about regulatory elements and protein elements. These haplotypes can be inferred indirectly from diploid data, from how alleles segregate in small families or in siblings and so forth. Or-- easier to think about, and probably more accurate in general, especially with limited data sets-- there is direct observation. The most extreme direct observation is that you pull out a single DNA molecule that has, say, your promoter mutation and your putative protein mutation, or just a series of linked polymorphisms. By cloning out or otherwise physically isolating that DNA molecule, you can sequence it and determine the haplotype directly: by definition, if they're all in the same sequence, then they're on the same molecule.
But you have to be careful about the sequencing method that you're using there, and the specific cloning and/or physical separation method, because there are certain methods where you can get a chimeric sequence, either due to cloning two species of DNA together or due to somehow misassembling them through bioinformatics. You can also directly observe it when you go through the genetic processes of mitosis, which we've talked about before-- as the cells divide, they split up the chromosomes-- or meiosis, which is the process by which they get prepared for recombination, which we'll come back to in a little while. Alternatively, you can do it by linkage rather than direct observation. The best way to follow the haplotype is this: if there is a difference at every position in both of the parents, and the child inherits those differences, then it's called informative. If the parents happen to share a single-nucleotide polymorphism, even though the child is a heterozygote for one of them, it could be that the parents have additional alleles that can confuse things, and it's not informative. But the point is, if you have enough single-nucleotide polymorphisms, you can do a case-control study where you have lots of children that are affected by whatever trait you're interested in and, hopefully, a close-to-equal number in the control group who do not have it. An example of that-- and I'm just trying to give you a flavor for this and where to look, rather than to completely empower you on this, because that would be an entire separate course-- is that you can look for association. And you have to worry about things like population structure and admixture, where you've had populations that developed independently in different parts of the world, each randomly mixing within itself-- which is part of the model-- but then brought together very recently, so that it's no longer fair to model them as if they were a single, uniformly mixing population.
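Concretely, a case-control association test of this kind can be sketched as a chi-squared test on a table of allele counts in cases versus controls. The counts below are invented for illustration, and a real study would still have to correct for the structure and admixture problems just described:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical allele counts at a candidate locus.
# Rows: cases, controls; columns: allele A, allele a.
counts = np.array([[130,  70],   # cases
                   [ 90, 110]])  # controls

chi2, p, dof, expected = chi2_contingency(counts)
# The null hypothesis is that allele frequency does not depend on
# phenotype; a small p rejects that null for this locus.
```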
And we'll refer throughout the course to the null hypothesis, which is the thing that you're trying to rule out; the probability refers to the probability with which you can reject this null hypothesis. In this case, you're trying to reject the hypothesis that the allele frequencies at the candidate locus, whichever one you picked, do not depend on the phenotype within the subpopulations. And that's the way these case-control studies are handled. Now, what are some of the motivations for studying how individual polymorphisms, or haplotypes-- combinations of polymorphisms-- can affect the activity of a protein? I could use hundreds of different well-established and useful examples. But here's one that, hopefully, will strike a resonating chord, in the sense that these are actually used now in certain clinical settings, and certainly in clinical research, to ask how a patient population will respond, either in the process of developing a new drug or when using an established drug, in order to keep patient toxicity down and effectiveness up. And so, in the far left-hand column is the gene or enzyme affected; in the middle are examples of drugs which interact with this enzyme; and the quantitative effect is on the far right-hand side. For example, thiopurine methyltransferase is something for which, if you have a large amount of the activity-- whether because you have, say, a very active enzyme and/or a very active promoter element that causes high levels of it-- then the amounts of the various chemotherapeutics that are used for fighting cancers need to be adjusted. If you have lots of the methyltransferase, that means you have to give lots of the drug, or else your drug study will fail, or your patient will succumb to cancer because the treatment is ineffective: you haven't added enough, and the thiopurine methyltransferase is overcoming the drug. On the other hand, if you have very low levels of the modifying enzyme, you want to lower the dose, or else you'll have toxicity.
So that's an example that's actually being used in clinical situations where you can use the information to adjust drug levels or to stratify your patient population so that you put patients into different classes or exclude them from the study altogether because you know that the drug will have some bad effect. And this, hopefully, decreases the costs. On the downside to pharmaceutical development, if you do stratify your patient population and you make it through the drug study with that caveat, then the FDA will require that you put that proviso, and that makes the size of the population that will be buying the drug smaller, since the costs of developing the drug are fairly fixed. It decreases the profit. Now, I pointed out that there may be a very large number of rare single-nucleotide polymorphisms. But in terms of common ones, we're getting pretty close to saturating the common ones. And there are databases of these just like there are databases of almost everything you can imagine, some of them better than others. The common ones will, of course, be the ones that either are neutral, with respect to their phenotypic effect. That is to say, it doesn't really matter whether they're one common allele or the other one. The one allele might be at 30% and the other one at 70%, but they're both fairly neutral with respect to function. Or it could be that they both provide different advantages in different scenarios. Or the heterozygote, where you have one over the other, provides some advantage, and that's kept it in the population. But it's unlikely that they're highly deleterious because the highly deleterious alleles are going to be rare. They're going to be selected against. And we're going to fully model that in just a moment. Now, let's say that, somehow, anybody who wanted their genome could have their genome tomorrow. You could have your complete genome sequence. 
How would you, then, as computational biologists, prioritize the single-nucleotide polymorphisms you find in there, relative to the whole genome sequence, which is in GenBank? Now, this would be an excellent project for you to do for the term project. But what you might say, first of all-- what single-nucleotide polymorphisms would you throw out, for example, or put low on your priority list? Or which ones would you put high on your priority list? Yeah? AUDIENCE: Introns would be low-priority. PROFESSOR: OK, introns. AUDIENCE: Maybe this is a very simplistic thing to say, but I guess [INAUDIBLE] matter if they're different from one another. [INAUDIBLE] PROFESSOR: That's fine. Everybody has their own pet part of the genome they don't like. Introns almost sunk the Genome Project. They said, why are we going to sequence the 98% of the genome that doesn't code for proteins? Fortunately, we went ahead and sequenced it anyway. Another favorite thing that people mention is repetitive DNA. That was another part of the genome that they didn't actually sequence in Drosophila. And it's also considered non-protein-coding. And I'm going to give you a couple of examples, as we go through here, to illustrate other points, but also to illustrate that repetitive regions that are not in protein-coding regions, whether introns or other non-coding regions, can be important. And here's an example. This is one of the most repetitive elements in the human genome. It's called an Alu repeat. As those of you who have done bioinformatics before realize, it's the bane of our existence, in terms of assembling and searching and so forth. But here's an example of a single base mutation in this repeat. There are about 500,000 copies of this in the human genome scattered about. It's called an interspersed repeat as a consequence. And this A-to-G transition is found upstream from the myeloperoxidase enzyme-encoding gene.
So how do we find out whether this is important in any sense? First of all, the observation is that it is associated with several-fold less transcriptional activity. That particular position creates or destroys binding sites for these transcription factors, and that might be the reason that it has lower transcriptional activity. And finally, it is over-represented in a particular type of cancer. And we're going to go through the ways that we take an observation, like a polymorphism, move to an association, like here, with cancer, then take it to a mechanism, such as here, with the transcriptional regulators. I think that's what this is about. It's not sufficient to observe the polymorphism; you can't say, a priori, whether it's important or not, even though the Alu repeats are not conserved. That's another thing that people say: throw out all the nonconserved single-nucleotide polymorphisms. This one is not conserved-- it's not present in mouse, for example-- it's non-coding, and it's repetitive, and yet it appears to matter. Now, in addition to the types of mutations, we have the modes of inheritance-- that is to say, the different ways in which a change, a polymorphism, can be transmitted. And I include this to broaden your perspective. Rather than getting entirely fixated on the 3 million DNA SNPs, let's broaden the discussion here a little bit. You can have not only DNA polymorphisms, but also RNA polymorphisms, which are heritable. For example-- and I use this as an extreme case-- RNAi: 22-nucleotide or so bits of RNA, probably acting through a variety of mechanisms, can be induced in various ways. And once induced, they can replicate, essentially, within a cell and between cells. They can spread throughout an organism and probably be propagated between different generations of organisms. So this is heritable. And you can consider it an epigenetic or genetic polymorphism, depending on the nomenclature that you adopt.
Even a protein conformation can be considered a polymorphism that is heritable. The central dogma tells us the protein is encoded by a nucleic acid, and that is certainly still the case, even for these heritable protein polymorphisms. But the protein has a different conformation, and that conformation recruits other conformations and so is inherited not only within a cell, but between cells and between organisms, and is the cause of things like mad cow disease and so on. And finally, modifications of biopolymers, such as methylation, can occur. This is not formally a DNA sequence change, but it's a heritable change that can have very significant effects on things like cancer and gene expression in general. Now, that is a broadening of the definition of polymorphisms. And now let's talk about the ways that a polymorphism can be inherited: horizontally or vertically. Horizontal typically means between species, but in a certain sense, it could reflect some of the things that are going on here with RNA and protein inheritance, in the sense that it is being horizontally transmitted between different cells within the same organism. But generally, it's a mechanism that does not involve mitosis or meiosis of the nuclear or organellar genomes. And the natural processes are transduction and transformation, distinguished in that transduction typically involves a viral or protein coat for the nucleic acid, and transformation involves something closer to naked nucleic acids. Transgenics is a completely laboratory-based version of these two more natural methods. Vertical inheritance we've already talked about; this is what we normally think of as inheritance. Horizontal, we saw in the tree of life, is very common, even in the late branches and the early branches. Vertical, though, is what we normally think about when we're doing crosses in the laboratory.
Some of these are maternally inherited, like mitochondria and chloroplasts, but it's still the same kind of process-- mitosis, segregation of DNA. So now that we've got types of mutations, we want to talk about mutation, selection, and drift as the main sources of the frequencies that we find in populations. We want to know, where do allele frequencies come from? And I will maintain that, generally speaking, almost all living systems-- whether they're cells from organisms that are mutating or whole organisms, whether they do recombination or not-- will undergo mutation, selection, and drift. Now, to develop a fairly rigorous yet simple model here, we have some assumptions. We always have assumptions in models. If people tell you there are no assumptions, then you need to dig a little further. The assumptions that we'll make here for a little while-- and then, I'll give you a nice example to undermine them all-- are a constant population size n; random mating-- remember, we were talking about admixture before; every member of the population can randomly mate-- and non-overlapping generations, which is a convenience. We are not making any assumptions about the population allele frequencies being at equilibrium. Those of you who have taken biology courses will remember that Hardy-Weinberg makes that assumption. This is a much more general model: it includes non-equilibrium, and equilibrium can be a special case. So this is a relatively minor non-assumption. But we are assuming that we do not have an infinite number of alleles, nor an infinite population size. So now ignore everything on the slide but the upper left-hand corner of slide 15 here. This should look familiar. This should look somewhat like the logistic map, where we had the incremental, slow exponential increase of one allele in a population over another, or one organism over another, based on the different selection coefficients of those organisms. And this could be a very small difference.
Say, a 1% increase per generation will dominate after a thousand generations or so, quite definitely. And so that's what you're seeing-- the exponential curve-- and then it plateaus as it gets to 100% allele frequency. The full range on the vertical axis is 0% to 100% allele frequency, and generations go from 0 to 1,400. Now, what it actually represents in this particular case is closely related, but a little more complicated than just one allele replacing another, because here we have diploids-- that is to say, not a haploid that just has one allele, like in the bacterial species that we've implicitly been talking about. You can have, here, three combinations of alleles. You can have, say, the reference genotype of capital A, capital A, which we'd just call 100% fitness. So we have fitness and selection coefficient, which are interchangeable terms with a very trivial mathematical relationship between them, and we'll use w and s in different contexts here. It just has population [INAUDIBLE].. Then, you can have big A, big A; little a, little a; and big A, little a as the heterozygote. And we're assuming an additive model here, where you get a little more selection with one allele of little a, and then two alleles of little a result in a fitness of 1 plus 2s. And this has a very similar curve to the logistic map, as if you had simple allele replacement. And what you tend to have in this population-- a thousand generations, in the big scheme of things, may seem like a lot to you, but it's a very short period of time-- is alleles at frequencies of 0% and 100%. On the other hand, if you have the overdominant mode, where the heterozygote has the highest fitness of the three possibilities-- where 1 plus s is larger than both 1 and 1 plus t-- then you head for an equilibrium.
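The additive and overdominant cases can be made concrete with a small deterministic recursion. This is a sketch: the s and t values are made up, and the fitness parameterization w(AA) = 1, w(Aa) = 1 + s, w(aa) = 1 + t follows the lecture's description (the additive case is t = 2s).

```python
# Deterministic allele-frequency recursion for one diploid locus.
# Fitnesses: w(AA) = 1, w(Aa) = 1 + s, w(aa) = 1 + t.

def next_freq(p, s, t):
    """One generation of viability selection; p = frequency of allele a."""
    q = 1.0 - p
    w_aa, w_Aa, w_AA = 1.0 + t, 1.0 + s, 1.0
    w_bar = p * p * w_aa + 2 * p * q * w_Aa + q * q * w_AA
    # frequency of 'a' gametes after selection
    return (p * p * w_aa + p * q * w_Aa) / w_bar

def evolve(p, s, t, generations):
    for _ in range(generations):
        p = next_freq(p, s, t)
    return p

# Additive case (t = 2s): a 1% advantage per copy dominates within a
# few thousand generations, tracing the logistic-like curve.
print(round(evolve(0.01, 0.01, 0.02, 3000), 3))
# Overdominant case: heterozygote fittest, so the frequency converges to
# an interior equilibrium p* = s / (2s - t) from either side.
s, t = 0.10, 0.05
print(round(evolve(0.05, s, t, 2000), 3), round(evolve(0.95, s, t, 2000), 3))
print(round(s / (2 * s - t), 3))  # analytic equilibrium, here 2/3
```

With s = 0.10 and t = 0.05 the equilibrium is about 0.67, close to the 0.6-ish balance point described on the slide.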
Whether you're starting at close to 0% or close to 100%, the allele frequency will converge on some equilibrium point-- in this case, somewhere around 0.6 for one allele and 0.4 for the other. And it could be anywhere; it depends on the relative fitnesses, s and t. This is just a reminder slide combining two slides from before. That slide connects the logistic map from lecture 1 to the selection coefficients we've been talking about in lecture 2 to the diploids that we're mainly talking about in this lecture, because humans are diploids. And here, just connect this to the fact that these selection coefficients, s, or the fitness, w, are relevant to different environments and the different times that organisms spend in those different environments. And all mutants are tagged by their DNA, and they're pooled, and they're selected, and you can read them out in a variety of methods. So now let's dig down into where the allele frequencies come from, based on mutation or migration, which we'll lump together here as M, selection, and drift. Now, the mutation will have a forward rate constant and a reverse. This should remind you of the conversation we just had about the different kinds of alleles-- the duplications and deletions-- and how they can have different rates. If it's a tandem duplication, then it has a great tendency to delete. If it deletes down to a single repeat, there's no longer any repeat, and so the chance of generating the exact duplicate is low. So the frequencies of forward and reverse mutation are represented by f and r, respectively. Now, we're going to walk through this, starting with a frequency, t sub i, where i is the number of mutants in a population of size n. So here, down at the bottom, is i mutants in a population of size n. And we'll see that applying the mutation, applying the selection, and applying the drift are all applications of binomial distributions when we're talking about this discrete population of n individuals.
Remember the three bell-shaped curves [INAUDIBLE]-- curves that can be bell-shaped? Here, it's clearly a discrete population, because we have n individuals, taking i mutants at a time. So the binomials that we'll be using-- all three processes have the same form, where you have a combination of some population, n, and a subpopulation, i. And of course, the remainder is n minus i. And the different combinations are multiplied by some probability, because the probability is the last parameter in the binomial here. A binomial is a function of n, i, and p-- the population size, subpopulation size, and some frequency. That frequency can be a forward or reverse mutation rate, a selection probability, or a drift probability. And we'll see how each of these works out. So we start with a frequency; i ranges from 0 to n. If i is 0, then the frequency is 0 over n, or 0. If i is n, it's n over n, or the frequency is 100%-- just the same vertical axis we had on all previous slides. The starting frequencies have some distribution, t sub i, for i going from 0 to n. And now we want to derive a new vector of frequencies, which would be m. And all we do is apply the binomial distribution for the forward process, or forward mutation. And then, once we're done with that, we use the m's and adjust them-- we give them a chance to do the reverse mutation, because you'll generate this binomial distribution forwards, and then you'll give them a chance to revert. Then, you apply selection-- a new binomial. Here, the probability of a transition from your mutants-- starting with t, then you go to m, now to get to s-- is a function of the fitness. Remember, w and s, fitness and selection. Here, the fitness determines the probability of a transition in a binomial distribution.
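The forward/reverse mutation step can be sketched in expectation (the binomial sampling described here adds noise around these expected values). The rates f and r below are arbitrary illustrative numbers, not values from the lecture:

```python
# Mutation step sketch: with forward rate f (wild-type -> mutant) and
# reverse rate r (mutant -> wild-type), the expected mutant frequency
# updates each generation as  p' = p*(1 - r) + (1 - p)*f.
# Iterated alone (no selection, no drift) it settles at f / (f + r).

def mutate(p, f, r):
    return p * (1.0 - r) + (1.0 - p) * f

f, r = 1e-4, 3e-4
p = 0.0
for _ in range(200_000):
    p = mutate(p, f, r)
print(round(p, 4), round(f / (f + r), 4))  # both ~0.25
```

Selection and drift then act on top of this expected frequency, each as its own binomial transition.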
And there are two slightly different equations that you use, depending on whether the fitness is greater than 1 or less than 1-- whether selection tends to increase that allele with time or to decrease it. And that's what these two cases are here. And then, finally, after you've applied forward and reverse mutation and selection, whether it's more fit or less fit, you apply drift. Now, drift just means this: think of a small population of individuals-- let's say they're a set of colored balls in a jar in front of us here. It's a small number, and I pull a handful out, because in each generation, you're going to duplicate the population. But remember the assumption of constant population size. If I pull out the new generation and forget about the rest of the duplicates, they could all be the same color. And that's the chance that you could drift to fixation, where one of them now dominates-- not because it's superior from a selection standpoint, not because it's been mutated in a directed way to that point, but just because of random drift in a constant population. And you can see how that would depend on how many are in the jar. If I take half of them out of the jar, and the jar only has five in it, then there's a much higher chance that we'll go to fixation quickly than if the jar has millions in it. If I take half a million out, it's very likely that the ratios will still be more or less the same. And this is exactly how it plays out when you look at random genetic drift: it's very dependent on population size. So here are simulations going out to, say, 150 generations on the horizontal axis, with the allele frequency, as usual, going from 0% to 100%. And we start with a population with a 50/50 ratio of two alleles. And what you see is, if the population-- the number of individuals-- is only 25, then you quickly fix.
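A simulation like the one on the slide can be sketched with a simple Wright-Fisher resampling loop. This is my own illustration (it tracks n gene copies with no mutation and no selection; seeds are fixed for reproducibility):

```python
# Wright-Fisher drift sketch: resample n gene copies binomially each
# generation, starting from a 50/50 ratio of two alleles.
import random

def drift(n, generations, p=0.5, seed=0):
    """Final frequency after binomial resampling; stops early at fixation."""
    rng = random.Random(seed)
    i = int(n * p)  # current number of copies of the allele
    for _ in range(generations):
        freq = i / n
        # each of the n copies in the next generation picks a parent
        # allele at random -- a draw from Binomial(n, freq)
        i = sum(1 for _ in range(n) if rng.random() < freq)
        if i in (0, n):
            break  # allele fixed or lost
    return i / n

# Small population: drift is strong; most runs fix well within 150 generations.
small = [drift(25, 150, seed=s) for s in range(200)]
print(sum(x in (0.0, 1.0) for x in small), "of 200 runs fixed at n=25")
# Large population: frequencies barely move over the same time span.
large = [drift(2500, 150, seed=s) for s in range(20)]
print(sum(x in (0.0, 1.0) for x in large), "of 20 runs fixed at n=2500")
```

Which allele fixes varies from seed to seed, exactly because this is drift rather than selection.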
Maybe 30 generations, you can-- and this is anecdotal-- get fixation. As it goes up to a population size of 2,500, you can see it, for all intents and purposes, is flat. It will eventually fix. I assure you, this simulation-- if you run it long enough, one of the two alleles will go to zero, and the other one will go to 100%. And you do another simulation, and it might be the other one, because this is random drift, not selection. So you can see the final frequencies are going to be some complicated function of mutation rates and selection rates and drift rates, which, in that last one, is a function of population size. When the population size is very, very large, you can see that's going to be constant enough. That's why you can see very subtle differences in selection coefficient. Now, we're going to come back to the mutation, selection, and drift in the context of human genetics in just a moment. But first, I want to tell you the last component of population genetics that we'll be talking about, which is recombination. Now, this doesn't occur in every biological system. I made the argument before that the mutation, selection, and drift occurs in every biological system, from cells to organisms of all types. In those biological systems where DNA can be exchanged by transduction, transformation, meiotic fusion, and so forth, then you can get recombination. And these two figures illustrate that. On the left-hand side of slide 19 is the non-recombination scenario, and on the right-hand side is the sexual, or recombination-mediated, change in gene combinations. So let's look at this. Time is going horizontally. And you can see, at the very left-hand side, the beginning of each of these scenarios, you get a certain rate of occurrence of mutations. This is based on the forward mutation rate in our previous equations. And they occur very early on. A, B, and C all occur. But they tend to die out because of drift. 
And the population size is small enough that half of them die out and one of them fixes. And A is destined to fix, but it takes some time. It starts out at a frequency close to 0, and by the time it's at 50% to 100%, it's ready to start picking up a second mutation, at the same frequency they were occurring before. And now you can pick up the B mutation while the C dies off; it comes and goes due to drift. And then, finally, once AB fixes, you can pick up the C, and you get ABC. That was a long, slow serial process. On the right-hand side, we see what can happen in the case of exchange of genetic material in a freely mixing population. Now A, B, and C all occur at the same frequency as on the left. But because they're exchanging information very early on, before B has a chance to die off due to drift, it combines its DNA with A, and you get AB. And some of the small A population is just destined to be fixed anyway, but happens to combine with C. And again, before drift can wipe them out, the very small selection-- which couldn't overcome drift in the left-hand panel-- now fixes the really favorable combination of A, B, and C very early on. And so there's a whole series of arguments and counterarguments in the population genetics literature about why there is sex. Of course, we all have our own reasons. But here, they say there's a huge cost-- the cost of having two genders, maybe as much as 50%, possibly higher-- because they have different morphologies and different capabilities and so on. That's the risk, and the benefit is the earlier recombination. But then there are counterarguments and so forth that we won't go into. Yeah? AUDIENCE: Going back to drift. The drift is actually random. What is it that causes either allele to fix? If you have two different alleles [INAUDIBLE] zero for one allele, why wouldn't it just [INAUDIBLE] come back?
PROFESSOR: Well, it can. You can see here that it starts to head toward fixing on one allele, and then it changes direction and fixes the other one. So it can go all the way down to close to zero and then bounce back up. These are just some typical simulations here. If you do enough of these, you'll find every possible behavior. AUDIENCE: So what you're saying-- basically, eventually, if you just give it enough time, it will fix. PROFESSOR: It will fix. You can't necessarily predict which one it will fix on. If there's no selection, half the time it'll fix on one, half on the other, if they start out at a frequency of 50/50. But you have to think of it in the context of mutation and selection, too, because they're all acting there. And when you say that something is selectively neutral, what you really mean is-- everything has some very, very tiny selection coefficient. But if it's tiny enough and the population size is small enough, then drift will blind you, and you can't see it. But if you get really big populations, then you'll get small drift, and then you'll see more subtle selection. The human population is very large. Some of the oceanic species are truly enormous. So you need to think about that as a possibility. So now let's talk about common diseases. The question is, to what extent are common diseases caused by common variants? Clearly, some of them are. And we've said, well, common variants really shouldn't be deleterious, because they would be wiped out by selection. They could be very, very mildly deleterious-- such that drift will cause them to become fixed or to persist-- but they can't be really noticeably deleterious. Selection coefficients of 10 to the minus 4-- very, very tiny effects-- can be wiped out in normal-sized populations. So here are three examples of common variants that almost certainly are associated with common diseases. So why are these special cases, in my opinion, rather than the general case?
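The "drift will blind you" point can be quantified with Kimura's diffusion approximation for the fixation probability, which the lecture does not derive; this is a standard population-genetics formula, shown here as a sketch with made-up numbers:

```python
# Kimura's diffusion approximation for the fixation probability of an
# allele with selection coefficient s at initial frequency p in a
# diploid population of effective size N:
#     P_fix = (1 - exp(-4*N*s*p)) / (1 - exp(-4*N*s))
import math

def p_fix(n, s, p):
    if s == 0:
        return p  # neutral limit: fixation probability equals frequency
    return (1 - math.exp(-4 * n * s * p)) / (1 - math.exp(-4 * n * s))

# With N*s << 1, the tiny advantage is invisible: the allele behaves
# as neutral (fixation probability ~ its frequency, 0.5 here).
print(round(p_fix(1000, 1e-6, 0.5), 3))
# In a huge population, the very same s is visible to selection.
print(round(p_fix(10_000_000, 1e-6, 0.5), 3))
```

The crossover happens roughly where N*s is of order 1, which is exactly why subtle selection coefficients only show up in very large populations.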
APOE4, which I alluded to earlier, is associated with Alzheimer's dementia. It's involved in lipid transport and metabolism. And this particular allele, the E4 allele, is present at 20% in humans; about 80% is the other allele, the second most common being the E3 allele. And so you might say, well, the E3, the good one, should be the one present in the common ancestors of humans, and this bad allele, the E4, is a recent aberration that has somehow gotten into the human population. But actually, the E4 is the ancestral one. Presumably due to some difference in diet or some other selective effects, the E4, which is currently bad for us, we think, was really good for some of our related species. And so we need to think very carefully before we do anything drastic about eliminating this allele from, say, the human population. Hemoglobin sickle cell-- the sickle cell allele-- is probably the oldest and most famous of the molecularly characterized alleles. Zuckerkandl and Pauling made this famous many decades ago. It exists in 17% of the human population, and this protein is responsible for oxygen transport in red blood cells. And you saw in one of the earlier slides today those sickle-shaped cells, which have a huge effect on the hemodynamics of the red blood cell. Well, another red blood cell component, an enzyme, G6PD, which is involved in maintaining the redox function in the cell, is as high as 40%. The mutant-- whichever one you want to call the mutant-- they're so close to 50%. They're just two alleles, and one of them is deleterious, in a certain biochemical sense. But both of these have a heterozygote advantage in being malaria-resistant, and that's probably the reason they're common in the population.
And this is probably the rule for such common variants: they are examples of that convergence that we saw in the case of balanced polymorphisms, where the heterozygote has some advantage, so you get a balance point rather than one allele or the other dominating through drift or selection. And the third example, which we will develop in much more detail, is CCR5. There's a deletion of 32 base pairs which confers resistance to one of the greatest plagues in the history of the human race, which is the AIDS virus. And its frequency is 9% in Caucasians. I think we wish that it were 100% when we worry about HIV, but we need to wonder why it's not 100%. There may be some other reason for the nondeleted version being present in so many humans, and we need to understand that. And as far as I know, we don't understand that. Now, this is the last slide before the break. I promised you that we would take that simple mathematical treatment of mutation, selection, and drift and show that it actually has some impact-- that it's actually used in human genetics. Now, I must warn you that this is not a consensus view. This is a view that I find appealing, in slide 21, that Jon Pritchard has authored. And he titled it in a provocative way-- Are Rare Variants Responsible for Susceptibility to Complex Diseases? And this is a quote: it's customary in theoretical work relating to complex diseases that the allele frequencies are treated as parameters of the model. And typically in models, you'll have derived values and parameters, which are inputs-- the things that the user is expected to provide. And you don't want to have to be guessing at allele frequencies and supplying those parameters. So what's new here is that, using an evolutionary process which includes-- you guessed it-- selection, mutation, and genetic drift, we can learn, or, as I say, model the underlying allele frequencies. They can be derived rather than being required as an input. And this is illustrated here with a model.
This is entirely theoretical, but the parameters that are used are based on some genetic studies that have been done, for example, on autism. And so let me just define some of the terms here. This risk ratio is related, in a certain sense, to the selection and fitness coefficients we were talking about before. Now, selection and fitness coefficients refer to reproductive fitness, and in human genetics we have broader interests in all sorts of things that either don't affect reproductive fitness, or whose effect on reproduction we don't know, but that affect some kind of medical trait or just some trait of interest. In this case, the risk ratio replaces the selection coefficient. And this is basically saying: if my brother or sister had autism, then I would have a 75-fold higher chance of having it than someone selected randomly from the same population I come from. So the 75-fold increase is a very high heritability for this particular example. Also in this example, through some genetics we won't discuss, it seemed reasonable that the number of loci involved might be a large number, on the order of 100 or so. And you can think of a lot of common diseases: when you start listing the well-characterized ones, you start listing the number of genes that either are known or are plausible, for things like cancer or walking down the street, or whatever. These really complicated traits involve a very large number of cell types, a large number of cell components, and hence a large number of genes. So here, they've assumed a model with 100 loci, 100 different genes. And they have multiplicative effects of the polymorphisms in those genes between loci. That is to say, you need to have gene 1 working and gene 2 and gene 3, so that's multiplicative. But then, for the polymorphisms you introduce into a gene-- you could think there are a lot of genes involved, and there are a lot of different positions within each gene that could be involved, and each of those is additive.
You have a little reduction in activity due to the first mutation, and then a second mutation, a third. Each of those is this or this or this or this, and so that's an additive effect. And so they have various historical justifications for having additive penetrance within the gene and multiplicative penetrance across different loci, different genes in the genome. Now, for these 100 loci, there will be a top five that affect the relative risk the most, and that's what's been plotted here. The zero curve, you see, is in the absence of any of these multiplicative effects-- as if you had no loci that were affecting the frequency of susceptibility alleles. And so, as in most unselected populations, you'll have most of the alleles being either at zero frequency or at 100%-- basically, zero representing the absence of the allele and 100% its fixation. But if you have these top five loci out of 100 that contribute to the risk ratio, then those are represented by these four curves. The frequency of susceptibility alleles, which ranges along the horizontal axis from 0 to 100%, has what he calls a probability density. It's clearly not a probability density, because it's not going to integrate to 1, but here, it's the probability, the histogram of risk ratios. So let's take a break, and we'll drill down on the association study in a very interesting case. If you have problem sets to hand in, you can put them here during the break. AUDIENCE: So I understand about modeling mutation selection.
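The additive-within-locus, multiplicative-across-loci structure just described can be sketched as a toy risk calculation. All the numbers here are hypothetical, and this is not Pritchard's actual parameterization:

```python
# Toy penetrance model: each susceptibility allele adds a per-allele
# increment within its locus (additive), and the locus effects combine
# multiplicatively across the genome.

def relative_risk(genotype, per_allele_effect):
    """genotype: list of susceptibility-allele counts (0, 1, or 2) per locus."""
    risk = 1.0
    for count in genotype:
        risk *= 1.0 + per_allele_effect * count  # additive within a locus
    return risk  # multiplicative across loci

# 100 loci, a hypothetical 5% per-allele increment, heterozygous at
# five loci and wild-type at the other 95:
genotype = [1] * 5 + [0] * 95
print(round(relative_risk(genotype, 0.05), 3))  # 1.05**5 ~ 1.276
```

Small per-allele effects at many loci compound quickly under the multiplicative rule, which is why a handful of top loci can dominate the overall risk ratio.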
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 11B_Networks_3_The_Future_of_Computational_Biology_Cellular_Developmental_Social.txt | - The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu. GEORGE CHURCH: OK, welcome back. We just finished our discussion of predator, prey, and host-parasite interactions, illustrating the ways that we can have an impact on ecological modeling in oceans and in public health. And these kinds of considerations start getting us into the global and socioeconomic considerations: what kind of impact do these kinds of models have on how we make decisions? And in this context, I'm glad to be associated with the Genome Project. It's one of the first scientific projects of any reasonable scale that had, from the very start-- from the very first proposed funding for it in 1990-- a component of 3% set aside for ethical, legal, and social issues, or ELSI. And some of the conflicts that are covered by the grantees in this part of the Genome Project were genetic non-discrimination; privacy; reproductive rights; cloning; psychological stigmatization that can come from maybe too much knowledge; clinical quality control-- what can happen with false positives and false negatives in clinical exams; safety and environmental issues, such as the ones we were just talking about and some more that I'll raise; uncertainties, not just in quality control, but also in testing minors; issues of diversity, both biodiversity and human diversity; and commercialization of the products. Who owns my cells when I give them to a hospital or a company to help cure me? Do they then own all the patent rights? The four underlined topics are the ones we'll talk about in the next few slides. In terms of non-discrimination, this can go in any direction.
It will go the direction the market forces push it, meaning the voters. We could pass laws that say that if George Church sequences his genome, he has to report that sequence to his insurance company, or worse yet, the insurance company can go and get it whether he wants it or not. But on the other end of the spectrum, and I hope the trend, which will make it much easier for us to share our data, or use our data-- that's it-- is that the Non-Discrimination in Health Insurance and Employment Act, [? docked ?] in 1999, was introduced and passed, which would extend employment protections in the government sector to the private sector, to the extent that that can be generalized. I don't know exactly where that's going, but one can hope. I mean, clearly we will always do some kind of discrimination based on genetics which can be assessed, such as in an interview-- you know, height, friendliness, that sort of thing. But we won't necessarily be doing it based on genetic sequence. And these are tough issues. Is the probabilistic nature of the decisions that you make during an interview different from the probabilistic decision you would make based on a DNA scan? Which is more accurate? Do you want it to be more accurate? Do you want it to be less accurate? So on and so forth. But clearly the trend in terms of legislation is towards less information being used for discrimination. Certainly at the insurance level. And it's appropriate in insurance because in a way that's supposed to be a process by which risk is shared. So the issue of race. Some people feel this is a scientifically rigorous definition, others do not. Some feel that it involves very, very broad strokes; in other organisms at least, it can cover fairly detailed bottlenecks in populations. But in any case, there are elements of population structure which certainly are important, whether you ascribe it to the major races or to minor, smaller populations.
And examples here, we've already talked about hemoglobin variants evolved to resist malaria. It's going to be one of the themes today. And as with the differences in skin pigmentation, the pressure of the environment to develop a group-wide trait was powerful and can involve a very small number of genes. You get founder effects where a particular population is highly enriched for a particular disease, such as Huntington's in the case of Lake Maracaibo in Venezuela or Tay-Sachs in some Jewish populations. So when you have these well isolated populations, I think, you can see in the population genetic literature how they can be used in various ways. But, overall, at the broadest level, we all share a large set of commonly acquired polymorphisms. And even within the smallest subpopulation, it still behooves us to look at the details of the individual variations in DNA sequence, in the haplotypes or genotypes. I have two dangerous slides in a row. I could probably come up with quite a few more, and so could you. But one of them-- this is really a very strong argument for better modeling. And it's a totally anecdotal slide itself, non-scientific, historical. But this is one critic's view of the huge influence that Lysenko had on genetics and biology in the Soviet Union from the early days of the revolution. And his real pet theory was the inheritance of acquired characteristics. And there's nothing intrinsically wrong, in broad strokes, with the inheritance of acquired characteristics. It's just certainly not a common biological phenomenon. And at the time, it was not helping their biology progress at the same rate as the rest of the world, which was mainly pursuing the Mendelian inheritance models. And this critic, in quotes, felt that his habit was to report only successes. This is a really feel-good habit. And his results were based on extremely small sample sizes, inaccurate records, and the almost total absence of control groups.
He made an early mistake in a calculation which caused comments among other specialists in his field and made him extremely negative towards the use of mathematics and science. Making mistakes should not cause you to drop the use of mathematics and science. Probably everybody in this class has made a mistake in mathematics. And, hopefully, none of you will drop mathematics in your future biology. And the second danger is the danger of ethics-free science. Now we get to the truly remarkable story that in 1979, there was a release of anthrax 836 spores in part of the former Soviet Union. And years later, in 1999, this book was published describing what was behind that. And what was behind that went back decades earlier, to 1953. There was a leak in one, I guess, of their anthrax development sites. And then in 1956, they found in one of the rodents that they captured in their routine surveys of the sewers, searching for possible anthrax, that a strain had actually become much more virulent than the original that they were working with. And usually the response of a public health official, or even a reasonable person, would be to kill this thing. But instead they decided that this was great. Let's cultivate it. And the idea was to install it into these rockets that were targeted on Western cities. And that led eventually to them accidentally depositing spores of this greatly enhanced strain on their own people. So the question to our community, as these genome engineering tools get easier and easier, where you basically can sequence genomes inexpensively and even synthesize genomes inexpensively, and much of this is going to be in the public domain: what is to stop this from being a very easily disguised and potentially very inexpensive form of terrorism? And part of the answer is that we do what we can to improve either detection tools or genome engineering tools, to make them more of a defense type than offense.
But this was done in the early days of recombinant DNA, where the vectors were designed such that if the vector ever were to escape from the lab, it would die immediately. It would be lacking the nutrients it needed to grow. It would be very sensitive to detergents that occur typically in sewers and so forth. Unlike the vectors we use today, those were not very robust. They didn't grow very well in the lab, much less in the sewers. But it turns out that random release is a fairly low-risk problem. That was what we were worried about in the 1970s with recombinant DNA. But purposeful release of genetically modified organisms is more of the issue today. And, of course, genetically modified organisms on this slide have been created and engineered over the millennia, maybe 10 millennia or so, without a license. And they've been very successful. You picture this little weed-like thing at the top left, and you end up with this great 4th of July corn on the cob. And it's a hybrid corn. And similarly, these dogs, which range over three logarithms in adult mass, would barely be recognized as the same species if we didn't know them and love them very well. But these are examples of genetic engineering that was done pre-genomics. But now when we use recombinant DNA in particular, or any kind of interspecific genetic modification, this definitely raises environmental issues, especially in the developed-- in the, say, European nations, where they are wealthy, and they don't need much improvement in their agricultural yields. Some of the developing world has very definite needs, or feels it does, for genetically modified organisms. Some of these include producing vaccines in plants. Vaccines are one of the most cost-effective ways of generating public health results, but possibly even more effective would be to have your bananas and other crops contain vaccines. And these have actually been developed.
But it's been hard getting them actually supplied, due to concerns about release of genetically modified organisms, possible allergic responses, and so forth. Salt and drought tolerance is extremely important. They often come together. And there are a huge number of drought and salt tolerant plants, so-called resurrection plants-- about 100 completely different species-- which could provide new genes that can be introduced into non-drought-tolerant plants and are being introduced by scientists in the developing world, such as in Africa. Terminators have gone from popular, to unpopular, to popular again. They were popular with the companies that developed them, for reasons that we may not know. But the concern, or the backlash, was that the companies were doing this to prevent the farmers from reseeding. The terminator meant that the next generation of seeds would be useless. And it had been the habit of a farmer to reseed. And so the terminators didn't sell for a while. But then, because of the worry about dispersal of genetically modified organisms, terminators are now becoming more interesting again to a broad set of people. If you talk about organic farming, there is controversy about the tightest definition of this, which involves no inorganic fertilizers like nitrates and phosphates and so forth. But that means a very high animal load. A high animal load means you need to use up a lot of your vegetable crops in order to produce the fertilizer for the other vegetables. And as was pointed out, natural is not necessarily harmless. A wide variety of naturally occurring compounds, natural pesticides, are also carcinogens. And here's the laundry list at the bottom of slide 43. Cloning of stem cells. We definitely have problems with cloning, in almost every species, even the successful ones. There are a variety of species which have been tried and have not succeeded. There are some that have been tried.
And if you look at the studies in detail, there are examples of developmental defects higher than expected in that species, ranging from a few percent on up. And, obviously, there's some kind of epigenetic reprogramming that's occurring here, where you're not getting all the right contributions from the maternal and paternal genomes. And this can possibly be studied with expression profiles. We can start employing all of our automation to analyze how we can increase the fraction of stem cells-- where we could take adult cells and send them into different lineages, or we can formulate ways to either transform one adult stem cell into another, or an adult stem cell into a slightly more primitive stem cell, and still retain the advantages of so-called therapeutic cloning, which would be to maintain good histocompatibility, say, with the patients. So finally, education. Why should we bring up education in a course? We've talked about models of decision making in public health. But there's also a similar set for education. We want to be able to deal with uncertainty, complexity, quantification. I'm sure that I, or we, have introduced plenty of uncertainty and complexity and quantitation in your lives with this course. I apologize for the parts that are painful. I hope that no pain, no gain has some applicability here. A theme in this course has been to cherish your exceptions. Collect them, and these can be discoveries. They are at least going to keep you honest and keep you from making big mistakes. We want to be able to translate from one data type to another, from one conceptual foundation to another, to integrate either adjacent conceptual spheres or very distant ones. The way we do this, slide 47, is we need to have measures of our progress, measures of the quality of the underlying data and the models.
And we've already done this to some extent, basically, for three-dimensional structures and sequence data, the primary and tertiary information. For X-ray diffraction-- this dates back into the '60s-- we have measures of data quality that include resolution, model quality, which is the R factor we talked about before. We have ways of doing similarity searches. For sequencing, a more recent enterprise in the '80s, late '80s, early '90s, we got the Genome Project launched with thoughts of measuring data quality in terms of discrepancy per base pair. That should be less than 1 part in 10 to the fourth. The models, typically, are models of protein conservation. And a similarity search is one of the greatest killer applications of all time, which is BLAST. And then for function we're less far along. We don't really have accession numbers, as I said. We don't really have great ways of doing similarity searches through, say, image databases, or similarity searches other than, say, correlation coefficients. But I think this is rapidly changing. When we start applying these models, this is not only in the education sense, but in probing these networks that we've been talking about. Even further, beyond consideration of neural networks and our interaction with other organisms-- kind of combining those two, the neural networks and our interaction with other organisms, ecology-- is this notion of biophilia, which is that we, as human beings and other fairly intelligent animals, are connected subconsciously to other living beings. It is clear that no matter how urban or what culture you're talking about, snake dreams figure in very prominently. And this has to do with the need of primates to avoid and track snakes from their vicinity and the vicinity of the tribe. Little animals are cute. We just know that. And they are much cuter than the adults, no matter what the animal is, with the possible exception of snakes.
[LAUGHTER] And there are anecdotal, at least, possibly well-characterized effects of the green fractals that we like to see in trees and plants. And taking this one step further from just this kind of very stimulating thought: our nervous system has co-evolved for so long with these other things that it would be natural for us to be very attuned to them, because our survival would have depended upon our ability to avoid snakes and to deal with green versus brown shrubs. And if you take that one step further, much of what we do in the humanities, say-- our aesthetics and our approaches to it, our beliefs-- is affected by this heritage. And this is what E. O. Wilson has championed. And I would say it's quite controversial, as it should be. But in general, long separated fields come together-- it's this consilience definition-- and they create new insights. Like chemistry and genetics bringing us molecular biology. And the question is, is all of human endeavor ready for such a thing, from religious feelings to financial markets and so on? And whether or not, how might genomics and computational biology contribute? It's surprising sometimes when these things do contribute. And here's some speculation on that. We have improving imaging methods. We talked about imaging in the context of in situs. Some very functional versions of this are positron emission tomography and magnetic resonance imaging, which allow you to monitor such things as blood flow or metabolic uptake in different parts of an actively metabolizing brain. And here you can inject some O-15 labeled compound. And you can get an image resolution on the order of nine millimeters. OK, so the voxels here, volume elements, are limited to that range. And 20 seconds or so is the time frame that you're working in. And this has been applied to a whole variety of interesting behavioral tasks-- almost anything you can think of: counting, memory, and so forth, in humans.
And you can map the parts of the brain that are differentially responsive to a control and an experimental time frame for the same patient in the same apparatus. And here, just to show you how far I can go and connect to what we were talking about in the previous slide, religious subjects, patients, have had the differential parts of the brain monitored during a religious recitation process and in a resting control. And these are all at the p less than 0.001 significance level by the very complicated statistics that are used in the analysis of these functional maps. Now, not to leave us on that note, but to connect it more to genomics and computational biology, which is more at the heart of this course, here's an example of how magnetic resonance imaging can be applied to gene expression-- how we bring these two together, kind of imaging gene expression. And one of the powers of this: in contrast, the positron emission tomography had about a 9 millimeter resolution. This has a 10 micron resolution, sort of in the optical resolution range. But unlike optical methods, this will work for intact, opaque organisms. So MRI will do that in general. But to connect it to gene expression, you need to tag the gene expression in such a way-- and we have green fluorescent protein, we have other colors of fluorescent protein, we have luciferase, beta-galactosidase, and so forth-- that one of those turns on something that has sufficient contrast in magnetic resonance imaging. And an example of that is a way of caging a gadolinium ion such that-- you can see this little galactopyranose ring at the top right is covering it. And the little red bond there is cleaved by beta-galactosidase, lacZ fusions. And so you can now make the same reporter constructs that have been used for making, say, blue cells with a colorimetric indicator at optical wavelengths. Well, now, with magnetic resonance imaging, release that gadolinium.
So it's now accessible to the solvent, has a different resonance, and you can get sharp contrast here, in a living organism on the top, with 10 micron resolution, as opposed to a fixed organism, which is required for getting the whole-mount in situ staining with lacZ. So I think this is a very interesting combination of being able to use something alive, in opaque tissue, with a genomic tag of sorts. Well, on that religious note, we will sort of wrap up the course, my part of it, the lecture part. And in so doing, I want to especially give many, many thanks not just to this year's TFs, which we will get to in just a moment, but the ones that helped start this phase of the course. Actually, the course goes back to '88. But these are some of the Teaching Fellows in 1999 and 2000, 2001. One of these Teaching Fellows, [? Suzanne ?] [? Camille, ?] has stayed with us as a head Teaching Fellow this year. She was up to the top of-- and I'm very thankful to everyone here, Woody, [? June, ?] [? Len, ?] Tom, [? John, ?] [? Juang ?] [? Hong, ?] Gary, and [? Lachmans. ?] If any of you are here, could you kind of stand up, wave, just wave. Thank you very much. Really, I love it. And if this course is to continue to survive, we need TFs for next year. So if any of you feel that you have the right stuff and feel that this course should survive, please contact us. Hopefully, it'll be some finite positive number. We really love the projects. And in the past, many students have been reluctant to stop working on the project after the course is over and their grade has been assigned and so forth. And nothing could make me happier than to have you do that. There's a limit to just how much I can help you do that. But if there's anything I can do to help, providing additional mentoring and so forth, I'd love to do that. Some of these have even resulted in publications later on.
We started this course-- slide four of the first lecture, actually the first real slide of the first lecture-- on the origins of zeros and ones. Kind of a play on the 101 course number: where did the binary code come from? And you could ascribe it to Leibniz or one of the modern 18th century mathematicians. But Leibniz himself found that it had already been invented about 5,000 years ago or so, by China's first emperor. And since then, some people have gone so far as to take this binary coding, the I Ching, which has 64 hexagrams, and arrange it in such a way that it decodes so that it actually fits with the genetic code, which has 64 codons. So for those of you who are students of the I Ching, hopefully, it now has new meaning for you. And to emphasize this yin and yang symbol here, remember that the purpose of having the black dot in the white zone and the reciprocal white dot in the black zone is to remind you that things are not just black and white. They're not just zeros and ones. They have more complexity, and they are constantly in change. This is the book of changes, and this course is about change. And when you do computational biology, it's not just about computation. It's not about keeping yourself busy at the computer, keeping your computers busy all the time. It's about thinking, thinking as broadly as you can. So thank you very much. [APPLAUSE] |
MIT_HST508_Genomics_and_Computational_Biology_Fall_2002 | 4A_DNA_2_Dynamic_Programming_Blast_Multialignment_Hidden_Markov_Models.txt | The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare, in general, is available at ocw.mit.edu. GEORGE CHURCH: OK, so welcome to the fourth lecture. This will be the second one on the subject of DNA. The major difference between this lecture and the last one is that in the last one, so-called DNA 1, we focused in on types of mutants-- really closely related DNA sequences. And in this one, we'll talk about the most distantly related biopolymer sequences. So we went through the way you can generate closely related sequences and the way that the populations which are made up of closely related sequences obtain their allele frequencies through mutation, drift, and selection. And I argued that, deep down, at the most precise level, you've got a binomial distribution behind each of the processes of mutation, drift, and selection, at least under a certain set of assumptions. And then we went on to another very valuable statistic, which is the chi-square, which you can use in a number of different scenarios but, in particular, with association studies, where we did a simple case of a two-allele system with two outcomes-- HIV resistance and sensitivity. And then this association led to a broader discussion of alleles and haplotypes and genotypes, in general, and how one can obtain those and, in the process of doing so, expose oneself to random and systematic errors, a theme that we'll return to from time to time. So today, we'll talk not about very closely related sequences but very distantly related sequences, and what algorithms are available for finding those. These distantly related sequences call for a very different set of approaches. Here, we're trying to look for hints, for hypotheses in, say, new genome sequences as to what new genes might do.
So we'll begin by comparing different types of algorithms that will allow us to do alignments. In particular, we'll stress the hero of today's show, dynamic programming. It comes up again and again throughout the course, not merely in pairwise sequence alignment but in multisequence alignment. And we'll then go on to how the space in your computer, or the memory, and the processing time determine trade-offs. And in some cases, you will even have to sacrifice accuracy or completeness. And this also leads to the issue of finding genes, a particular type of distant sequence comparison. The one we're interested in here is finding motifs that are involved in finding genes. And finally, we'll end on a hidden Markov model, the simplest one that I could think of that would really illustrate the idea of Markov models, probabilistic models, and the hiddenness. And so this is illustrated with a single dinucleotide with two states. So this puts things in context-- in the first couple of lectures, we talked about the tree of life and how, right at the core, we had the common and most simple forms, which share the simple genetic code. And then last time, we talked about the very tip of one branch of one of these trees-- basically, the human branch of the animal branch-- and what has happened at the tip of that in the last 5,000 generations, since the fairly significant bottlenecks that predated the population explosion. And so this time, we're going to talk about the whole tree and how these very deep branches can be obtained by comparing biopolymer sequences. Some of the earliest trees of life were based on the ribosomal RNA sequence, because that's something that's fairly easy to trace back to common ancestors. But you also want to be able to do this with a variety of proteins, some of which go only a short distance traceably in sequence, and some of which go all the way back. What can we do with dynamic programming?
I alluded to its multiple uses throughout the course. Here are some examples. We already mentioned shotgun sequence assembly. This is relatively easy because the sequences are fairly closely related to one another. Multiple alignments include sequence assembly, where you have multiple similar sequences. But as you get to more and more distant sequences, you can glean more and more structural and functional conclusions about proteins and nucleic acids when you have a very large number, in the hundreds. And we'll see the challenges that come there. Repeats are a particular case-- you can have repeats within a genome, or we can have alignments between different species. Now, birdsong seems a little out of place here. But historically, it's one of the first applications of dynamic programming, where you, in a sense, have a continuous time axis, and you sample that at discrete points in order to sample the intensity of the audio recording. And this has a more direct analog than the sequence alignments, which we'll do today. Later, when we're doing the RNA analyses, gene expression, we'll show a dynamic time warping algorithm, which is very similar in outline to the birdsong. And then, finally, hidden Markov models, which will be the last thing today, use dynamic programming as part of the process by which the decisions are made about the model that's represented, the hidden Markov model. And these hidden Markov models, in turn, allow us to do RNA gene searches, structure prediction, distant protein homologies, speech recognition, and so on. So there are three types of alignments and scores that we'll discuss on slide number 5. The main dichotomy is between global and local. Originally, in sequence alignment in the '70s, Needleman and Wunsch had a global algorithm, which was modified into Smith and Waterman's local algorithm about a decade later.
And the major difference here is that in a global algorithm, you have reason to believe that the sequences align end to end. And you really just want to ask, how many mismatches and insertions and deletions are there in the middle of the sequence? So for example, you might have two proteins that really have the same start and stop site. Or you might have an entire chromosome, and you're asking questions about haplotype, as we were in the last class, which would be, what mutations are in cis on one chromosome relative to another? Another example is you might be scanning a chromosome for a motif that occurs again and again, like an Alu repeat or a transcription factor binding motif. And this might be in the middle of one sequence, at the end of another, or in the middle of both. Here, it's at the end of one and the middle of another. And it's short and local. And so you don't want to constrain the sequence ends to align. And you don't want to penalize if there's some deviation from alignment of the ends. Taking this one step further, so that it's not internal to either one: in an ideal assembly-- shotgun assembly of sequence, as we talked about last time-- you would expect, as one fragment-- one clone fragment-- ends, hopefully, you have a little bit of overlap at the beginning of the next one, and you'll be able to jump along these stepping stones to get to the end of the sequence. Because you can't sequence the whole thing. But this ideal suffix overlap, where a suffix of one overlaps the prefix, or the reverse complement, of another-- this sort of alignment can be imperfect because of errors at the ends of these sequences. So generally speaking, we'll be talking about global versus local. And these specific sequences we'll come back to several slides from now. Now, you want to have a scoring algorithm. We'll use this very simple scoring metric off and on during this talk.
And here, we're just giving plus 1 for every perfect match, indicated by a colon where an a matches an a, and minus 1 for every mismatch, indicated by an x, where a c does not match an a, for example. So we have five perfect matches and three mismatches in this case. And so it's a total of 5 minus 3 is 2. And in the local case, we're not going to require the ends to align. And so we can slip this a little bit so that now, there are four perfect matches, and we're not penalizing the terminal mismatches. And so we have a total of four, which, if these were directly comparable scores, would be a superior score. And with the suffixes, here, you're enforcing that the overlap be at the ends. And it's a score of 3. So now we're going to compare different ways of searching through sequences. Now, that one was an exact match with mismatches, where the mismatches are penalized. And a truly exact search is fairly rare-- maybe a restriction site, something like that. You can expand this to a regular expression where the insertions are restricted. In this case, an insertion can be indicated by any base-- A, C, G, or T can occur. And then the number that you will tolerate is indicated by a numeric range-- zero to nine, in this case. And so in the particular example given-- this equals sign just means that this is an example-- C, G are strict at the ends. And then it happens to have two As there; an A is an example of a nucleotide that satisfies the abbreviation N. N is all possible bases. But similarly, the zero to nine bracket could refer to a short sequence, like an AG sequence, and so on. So you could get a known number of repeats. And that could represent the empirical observation that you have zero to nine AG repeats. Now, in the previous example, we just penalized every mismatch by a fixed amount. But you could have a substitution matrix, which would actually codify your observations.
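The plus 1 / minus 1 scoring just described, and the idea of slipping a local match along so that terminal mismatches go unpenalized, can be sketched as follows. This is a gapless toy version for illustration only; the sequences are made up, and real local alignment also handles gaps and internal mismatch penalties:

```python
# Toy version of the lecture's +1/-1 scheme (gapless, illustration only).

def global_score(a, b, match=1, mismatch=-1):
    """Score two sequences position by position, end to end."""
    return sum(match if x == y else mismatch for x, y in zip(a, b))

def best_local_score(a, b, match=1):
    """Best sum of matches over all ungapped offsets of b along a.
    Unaligned (overhanging) ends are simply not penalized."""
    best = 0
    for offset in range(-(len(b) - 1), len(a)):
        score = sum(match for i, y in enumerate(b)
                    if 0 <= offset + i < len(a) and a[offset + i] == y)
        best = max(best, score)
    return best

print(global_score("GATTACA", "GACTATA"))      # 5 matches, 2 mismatches -> 3
print(best_local_score("TTTACGTTT", "ACG"))    # "ACG" found at offset 3 -> 3
```

The point is only that the global score forces an end-to-end comparison, while the local variant is free to slide one sequence along the other.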
When you look through natural sequences and you line them up, how often do you get an A substituting for a G? And so the penalty wouldn't be a strict minus 1. It would be something you'd look up in a table. And in that table, the diagonal would be the matches, and the off-diagonals would be specific mismatches. And we'll show how we get such a matrix and use it. Then you can have a profile matrix. The PSI here means position-specific. And position-specific means you have a different lookup table for every position in your biopolymer. And that makes sense, because not all positions have the same sort of substitutions that are allowed. So in parentheses along here, these are actual names of programs that are available, either in commercial packages or for free. BLAST is the Basic Local Alignment Search Tool. And the N refers to nucleotide. And here, PSI-BLAST-- again, position-specific. These are some of the ones you'll run into most frequently. The original versions of BLAST were basically aimed at finding the largest block of contiguous sequence without gaps. And there were so-called Karlin statistics that would tell you that that's the largest sequence that will give you the best probability. But then it was widely recognized that when people actually evaluate matches between two sequences, they're not just evaluating the longest ungapped sequences. They're actually interested in the significance of a sequence that can include insertions and deletions. And so BLAST has been extended to include gaps. And prior to that, with a long history back to the '70s, are the Needleman-Wunsch and Smith-Waterman algorithms that I mentioned, global and local, respectively, which allow a large number of insertions and deletions of single bases and multiple bases, as determined by parameters that were either manually set or determined empirically. And we'll try to stress ways that this can be determined empirically.
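The substitution-matrix idea, where the penalty comes from a lookup table rather than a flat minus 1, might be sketched like this. The matrix values below are invented purely for illustration; real matrices such as BLOSUM or PAM are derived from observed substitution frequencies in aligned natural sequences:

```python
# Toy substitution matrix: diagonal entries are matches, off-diagonal
# entries are specific mismatches. Values here are made up for illustration.
TOY_MATRIX = {
    ("A", "A"): 2, ("C", "C"): 2, ("G", "G"): 2, ("T", "T"): 2,
    ("A", "G"): 0, ("C", "T"): 0,  # transitions, penalized less (assumed)
}

def sub_score(x, y, default=-1):
    """Look up the (sorted) base pair; unlisted pairs get the default penalty."""
    return TOY_MATRIX.get(tuple(sorted((x, y))), default)

def matrix_score(a, b):
    """Gapless alignment score using the lookup table instead of flat -1."""
    return sum(sub_score(x, y) for x, y in zip(a, b))

print(matrix_score("GATC", "GGTT"))  # 2 + 0 + 2 + 0 = 4
```

A position-specific (profile) matrix would go one step further: instead of one shared table, there would be a separate column of scores for every position in the motif.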
And finally, the hidden Markov models truly take a probabilistic approach to each of these, allowing position-sensitive models and models that are extendable, not just where each position has a set of probabilities but where there can be dependencies upon adjacent positions. And we'll get to this at the very end. And when we talked in the very first class about different definitions of complexity, one of them we talked about was the computational complexity, or hardness-- the amount of time, or the amount of space, or space and time, that it takes to solve a problem. And this certainly is a good illustration in today's talk on dynamic programming. Because when we want to do either a pairwise sequence alignment or a multisequence alignment-- let's start with pairwise, we're aligning k equals 2 sequences of length n, and we're allowing gaps. Now, if we're just comparing them without gaps, as we did in the earlier slides, it's trivial. It's linear with the total number of bases. It's linear in n. And that, of course, scales very gracefully. But if you have a very naive-- and I'm going to set up a straw man here-- if you have a very naive algorithm, then you'll go along, and you'll put in a gap at every possible position at the top in combination with every possible position in the bottom. And so that means that there's n on the top and n on the bottom for a total of 2n possible positions that you could put in a gap. And the gap can be any length, as you see on the right here of slide 7-- you can have a gap up to length n. So roughly speaking, such a naive algorithm would scale exponentially, something like n to the 2n power. And this is just enormous for n in the dozens. And when n is in the billions, then you can just basically forget about it. We're not even talking about computers or particles in the universe anymore. And that's for k equals 2, the simplest possible case where you're aligning two sequences.
If you're aligning multiple sequences-- we'll get to that in a moment-- it definitely becomes exponential in k, even in a non-naive algorithm. So we're going to show that it's going to be non-exponential in n, the length of the sequences. But in order to do that, in order to assess algorithms, we can do them-- some of them we can do theoretically. And in fact, for the dynamic programming, we will show that you can convince yourself that an n squared algorithm is exact. But others, we will have to do empirically. And I just want to take an aside here to talk about how a particular comparison of sequence alignment programs was done. So the critical thing, not just for this test but for many that you'll be seeing in this course and may want to do in your own projects, is that you want to set aside a training set in which you run through a number of algorithms or a number of parameters within an algorithm to assess which ones are the best. And once you've determined the algorithms that are best or the parameters that are best, then you want to have a testing set that's independent. That means non-redundant, as well. If you use the training set as your testing set, you may lull yourself into a sense of complacency because it's been overtrained. And you're basically only capable of solving the problems that you set before it. So you want a separate testing set. And that was done in this case. We need some sort of evaluation criteria. So we talked a little bit about scores that you'd set up in order to score whether one alignment is better than another alignment. But in addition, when you want to compare two algorithms that may give you the same score, you want to have an external evaluation criterion. Typically, the evaluation criteria that one might have are false positives and false negatives. Occasionally, you'll find in the literature where people will just focus on one or the other. For some reason or other, they don't want to miss anything.
So they want to reduce their false negatives as much as possible. Or they don't want to plow through mountains of output, and so they keep the false positives down. But you really want to have them both very low. And sometimes this is restated as sensitivity and specificity, where sensitivity is the number of true hits that you've predicted over the total number of true hits. This is, say, in your training set, where you know what the true hits are from some outside source. And then specificity is the number of true hits that you predicted over all of the hits that you predicted, true and false. Now, this truth here that comes in your training set, where does that come from? If we had access to truth, in general, then what would we need these algorithms for? And the answer is that we do have access to, maybe, a higher truth, or something that's outside of sequence alignment. And this is crystallography, genetics, and biochemistry. Crystallography, for reasons we'll go into in the protein part of this course, is capable of detecting much more ancient relationships between biopolymers than is sequence alone. Similarly, genetics and biochemistry can test structural and functional hypotheses, though at great expense. And so these are expensive. And so that's the reason they're great for making training sets, but they won't necessarily replace sequence alignment and scores. So that was the setup for this slide, which is that Bill Pearson, who actually developed the FASTA algorithm, among others, did a thorough assessment of various algorithms. FASTA was one that was based on words, meaning exact matches of some fixed length the user could set; FASTP was based on these maximum-length blocks without gaps in them. Blitz is a variation on the Smith-Waterman algorithm, meaning a full dynamic programming, which we'll be talking about in a short while. This is an early, highly parallelized version of it.
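The two evaluation criteria can be written as two one-line functions. Note that "specificity" as defined in the lecture -- predicted true hits over all predictions -- is what is often called precision or positive predictive value elsewhere; the hit names below are made up:

```python
def sensitivity(true_hits, predicted):
    """Fraction of the known true hits that were predicted: TP / (TP + FN)."""
    return len(true_hits & predicted) / len(true_hits)

def specificity_ppv(true_hits, predicted):
    """As defined in the lecture: predicted true hits over all hits predicted,
    true and false, i.e. TP / (TP + FP) -- elsewhere often called precision."""
    return len(true_hits & predicted) / len(predicted)

truth = {"hitA", "hitB", "hitC", "hitD"}   # known from crystallography, say
calls = {"hitA", "hitB", "hitX"}           # what the algorithm predicted

assert sensitivity(truth, calls) == 0.5          # found 2 of the 4 true hits
assert specificity_ppv(truth, calls) == 2 / 3    # 2 of the 3 calls were real
```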
So the different algorithms for doing the alignment were compared, with different substitution matrices and different databases. Just in case there was some database bias, he included that. And so we're going to talk about substitution matrices in just a moment. But basically, these are what amino acid or nucleotide-- what amino acid, in this case-- can substitute for another amino acid in actual protein segments that have diverged by about the amount that you want to do your test on. And these different numbers just refer to roughly how distantly related the proteins are. The higher the number, the more distantly related they are in the case of the PAM matrices. So now, why did he do that at the protein level rather than doing it at the nucleic acid level? Well, historically, it's because there weren't many nucleic acid sequences. There were mostly protein sequences. But even today, when there is obviously a lot more nucleic acid sequence, there's a real reason to do it at the protein level, which is that when you look at the code that we've been talking about in these lectures, something like leucine can be represented by six different codons, which can have wildly different nucleotide sequences. So for example, CUG is leucine, and so is UUA. And those only share one nucleotide out of three. And over long periods of time, if there's heavy selection on the protein and relatively weak selection on the nucleic acid-- or there could even be pressure on the nucleic acid to change, for reasons that we'll go into in the second half of this lecture-- that pressure on the nucleotide sequence can cause the nucleotide sequence to change a lot and the protein sequence not to change much at all. So an example of pressure is if the tRNAs change in their abundance, then there'd be pressure on codon usage to change. There are some reasons to do it at the nucleotide level. For example, if you're comparing sequences which don't encode proteins, that's an obvious reason.
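The degeneracy point can be checked concretely -- leucine's six codons are standard genetic-code facts:

```python
# The six leucine codons (RNA alphabet). CUG and UUA both encode leucine,
# yet they agree at only one of the three nucleotide positions.
leucine_codons = {"UUA", "UUG", "CUU", "CUC", "CUA", "CUG"}

shared_positions = sum(a == b for a, b in zip("CUG", "UUA"))

assert "CUG" in leucine_codons and "UUA" in leucine_codons
assert shared_positions == 1   # same amino acid, two of three bases differ
```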
If you have a lot of insertions and deletions, or a tricky biological phenomenon like RNA splicing that causes the protein to be out of phase-- the inferred protein to be out of phase or hard to infer-- then you would do it at the nucleotide sequence level. Now, I'm going to show this slide twice. The first time, we're going to take it as a given that we've been given this multisequence alignment, and we're not going to question right now how we got it. But we're going to use that multisequence alignment to derive, or to talk about how we would derive, a substitution matrix. And here, for a substitution matrix, you can think of this as a multisequence alignment. So essentially, we have a weight matrix, which, if this were position-sensitive, would say, at this position, C never changes. If we do enough of these proteins and we don't care about position, we can build up a big set of matrices. And in general, we will find that C tends to substitute for C, and very few other things substitute for it. Eventually, you will find other substitutions. Cysteine and tryptophan, C and W, are relatively rare amino acids, and they're highly conserved. Other ones can be substituted, as you can see here-- threonine, serine, and valine can substitute for one another. So now let's take a look at how this plays out when we look at all the possible substitutions that can occur. And that's what's in slide 12. Along the very top row are the percentages, the abundances, of amino acids in a particular organism, say E. coli. And then there's a single-letter code, A through Y. And along the diagonal is the substitution matrix score which has been determined-- this is the BLOSUM matrix-- from blocks of distantly related sequences. Here, you can see that the diagonal represents the tendency for the amino acid to substitute for itself.
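A minimal sketch of deriving substitution scores from a gap-free toy alignment. The three sequences are invented, and real matrices like BLOSUM use curated blocks plus sequence clustering, but the counting and log-odds idea is the same:

```python
import math
from collections import Counter
from itertools import combinations

msa = ["ACTVC", "ASTVC", "ACSIC"]   # toy gap-free multiple alignment

pair_counts = Counter()
residue_counts = Counter()
for column in zip(*msa):
    residue_counts.update(column)
    for a, b in combinations(column, 2):     # every pair within a column
        pair_counts[tuple(sorted((a, b)))] += 1

total_pairs = sum(pair_counts.values())
total_residues = sum(residue_counts.values())

def log_odds(a, b):
    """Score = log2(observed pair frequency / frequency expected by chance)."""
    q = pair_counts[tuple(sorted((a, b)))] / total_pairs
    pa = residue_counts[a] / total_residues
    pb = residue_counts[b] / total_residues
    expected = pa * pb if a == b else 2 * pa * pb
    return math.log2(q / expected)

# C pairs with C far more often than its abundance predicts, so C-C
# scores higher than the C-S substitution.
assert pair_counts[("C", "C")] == 4
assert log_odds("C", "C") > log_odds("C", "S") > 0
```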
And those amino acids which generally are not easily substituted-- as I say, are highly conserved, which we pointed out in the previous slide-- were cysteine, C, and tryptophan, W. And for example, W is the most strongly conserved. It's 22 along the diagonal. And the consequence of that is there'll be relatively few positive values off the diagonal. And in fact, for tryptophan, there are no positive values. The numbers here have been generated in such a way that the negatives will tend to cancel out the positives in known alignments of sequences that are about the right evolutionary distance from one another. If you're trying to look for very distantly related proteins, you want to take the substitutions that you're sampling in your training set to be at that same distance-- that is, very distantly related. And this is one of the mistakes that was made in the early PAM matrix. There were two mistakes, actually. First, the proteins that were compared were very closely related, because closely related proteins were more trustworthy. You could align closely related sequences more easily. The algorithms didn't have to be sophisticated. And the trees could be more precise. But that already was a bias, because the substitutions you get in closely related sequences aren't really the same. And then they applied a mathematical extrapolation method, which was not adequate in terms of the actual evolution and also wasn't even correct mathematically, although this persisted for decades as the most common-- and still the most commonly used-- matrix. Anyway, you can see that, although tryptophan doesn't have any positive off-diagonal, something like arginine, here, in this blue, has a positive 4 off-diagonal and a positive 10 on-diagonal. So as you might guess, what's the most likely substitute for positively charged arginine?
It's positively charged lysine under physiological conditions. And that's why that's off-diagonal. And there are other ones. We've color coded these the same as the genetic code, where the negatively charged amino acids can also substitute for one another. So the significance of that top row, of the percent abundance, is that if you find two matching As, that's not so significant, because that's the most frequently occurring amino acid in this organism. On the other hand, if you find two matching Cs, that's very significant, because those are rare. And finding two of them at the same place in a particular alignment means it's significant. So both the abundance and the substitution matrix can be useful. So now, we're going to walk through an actual scoring of some alignments. And we want to do this in this more challenging situation where you allow insertions and deletions. So first, even though we've told you how it is that we get the match versus mismatch numbers as a full substitution matrix, here, you can imagine a substitution matrix that has plus 1's along its diagonal and minus 1's off-diagonal, just so you can do all the calculations I'm going to show you in the next few slides in your head. But also, simultaneously imagine that it could have the richness of the substitution matrix we just had. We're going to do this, the next few slides, with nucleic acids. But imagine you could also do it with amino acids. The nucleic acid substitution matrix will be a 4 by 4. The one we just saw was a 20 by 20 for amino acids. The indels, we'll penalize by minus 2. But this is an arbitrary number. And you'll see how critical it is in just a moment. But you can imagine that this could be determined empirically, just like the substitution matrix was determined empirically in the previous slide. The alignment score, then, will be defined as a sum over columns. We're going to be assuming that adjacent positions are independent of one another.
And we'll be scoring them independently and then just taking the sum. That gives us the alignment score for a particular alignment, a particular set of indels and a particular set of offsets from one sequence relative to another. But what we really want to do is go through all those possible alignments to get the optimal alignment, which is the maximal score defined here. To get the optimal alignment, we'd like to do that in less than exponential time in n, the length of the sequences. So we're going to use this pair of sequences, ATGA, ACTA, twice on this slide. And we're going to use it in subsequent slides where we do it a slightly different way. And we're going to use that very simple scoring metric-- plus 1 for a perfect match, minus 1 for a mismatch, minus 2 for an indel. And what we get here are just two of many different alignments we could have with different insertions. On the left, number one, the most extreme case: no insertions or deletions on either sequence. We're only counting mismatches. There are two matches, two mismatches. And so that's two plus 1's, two minus 1's. And they cancel out. And the score is 0 for the one on the left. Then for the one on the right, we've allowed an insertion on each strand, indicated by a dash on the opposite sequence. And now, you see you have three perfect matches, which is an increase in the number of perfect matches, but penalized with two indels, which are both negative 2. So it's plus 3 minus 2 minus 2, for minus 1. So this is not an improvement over the alignment on the left if we accept the scoring metrics that we had in the previous slide-- plus 1, minus 1, and minus 2 for the indels.
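The two alignments just scored can be reproduced with a column-by-column scorer. The exact gap placement on the slide isn't spelled out, so the gapped version below is an assumed placement with the same three matches and two indels; note also that softening the indel penalty to minus 1 flips which alignment wins:

```python
def score_alignment(top, bottom, match=1, mismatch=-1, indel=-2):
    """Score one particular gapped alignment as an independent sum over columns."""
    assert len(top) == len(bottom)
    total = 0
    for a, b in zip(top, bottom):
        if a == "-" or b == "-":
            total += indel       # insertion/deletion column
        elif a == b:
            total += match       # perfect match
        else:
            total += mismatch    # substitution
    return total

assert score_alignment("ATGA", "ACTA") == 0              # 2 matches, 2 mismatches
assert score_alignment("A-TGA", "ACT-A") == -1           # 3 matches, 2 indels at -2
assert score_alignment("A-TGA", "ACT-A", indel=-1) == 1  # cheaper indels: now it wins
```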
If, instead, we say, well, indels really shouldn't be penalized that much-- we can accept insertions and deletions in these kinds of sequences, so we'll penalize by minus 1, the same as a mismatch-- now the score for the one on the right, with three perfect matches and two insertion-deletions, is plus 1. And it beats the perfectly aligned one. So whether alignment 1 is better than alignment 2 depends on the indel score that you chose. STUDENT: Can I ask you a question? GEORGE CHURCH: Yes, question. STUDENT: Now that you are aligning two different sequences and in the case of the indel, you are allowing insertions all over the place-- I mean, you could theoretically have millions of those. But in reality, this [INAUDIBLE] the sequences in most cases would be known. You would know what it is. You can't-- GEORGE CHURCH: We know both of these sequences. These are sequences that we're comparing from two different organisms. STUDENT: Right. So if you know both of them, then what's the point in allowing all these indel [INAUDIBLE].. GEORGE CHURCH: So the question is, why are we allowing insertions and deletions? And the reason is that during evolution, say, either lab evolution or ancient, insertions and deletions are valid mutations. And so we're trying to determine the most likely places that insertions and deletions might have occurred over the course of the divergence of two sequences. And believe me, insertions and deletions are very, very common. So that's why we permit them. Now, why it is that insertion-deletions might be highly penalized or lightly penalized might depend on the position in the sequence. So for example, if you have a transcription factor where its precise geometry is important, or an alpha helix in a protein, or the translation of a genetic code where an insertion will throw the entire frame out of whack, as we had in the chemokine receptor in the last class, then you want to penalize an indel very heavily.
On the other hand, if you have a bunch of motifs that are kind of separated by variable linkers, then the insertion-deletion penalty could almost be zero, no penalty at all. So you can see it matters, and it might be position-sensitive. It might not be one size fits all. But these are empirically determined-- can be empirically determined. So here's the hero-- dynamic programming. We've hopefully motivated that we can do scoring. We can determine empirically useful substitution matrices and indels. Now, how do we apply them? And dynamic programming extends beyond biology, as I've alluded. Such an algorithm solves every subproblem just once and saves its answer in a table, thereby avoiding the work of recomputing the answer every time. So the straw man that I threw up before, of having this exponential problem, is very readily solved. And the way it's solved is by this subproblem way of dealing with it. And the idea of recursion, which we lightly touched upon when we defined the factorial-- n factorial as equal to n times n minus 1 factorial-- so defining it in terms of itself. But the key thing behind that definition and the ones we'll have here is that when you define something in terms of itself, you'd better have the call be a simpler problem and eventually terminate. And so that's what we have here. I'm going to give two examples, in slide 17 and in the next slide, one of them global and one of them local. This one will be done in terms of a tetranucleotide comparison, the same one we've been dealing with all along. And the other one will be on a more abstract sequence. Here, the way we do the subproblem by recursion is we say we define the score of aligning these two tetranucleotides as the maximum of-- and then there are three options. It can be either the score of having an insertion on the top strand, and that's the top option.
The middle one is having no insertions or deletions on either strand and just evaluating the last base comparison, which, in this case, is an A versus an A. Now, that is the way that the algorithm terminates. When you have a single-base comparison or a single base compared to an indel, then you look up the scoring metrics we've been using all along. So here, let's look at that final right-hand column. The score for an indel versus an A would be that minus 2 that we've been assuming all along. And the score of an A versus an A would be the substitution matrix diagonal, which would be a plus 1, and then, here, a minus 2. And so you can see that you're calling up these three possibilities-- indel, no insertion on the top, no insertion on the bottom. And you take the maximum of these, whichever one of these gives the best score. Now, that requires going back and calling it again. But you're calling it with a simpler-- you're asking for a simpler one. So now, you'll take the max of ATG versus ACT. And that'll ask you to look up the max of AT versus AC. And finally, it'll get the max of A versus A. And then you end. STUDENT: Excuse me. GEORGE CHURCH: Yeah. STUDENT: Are you assuming that the insertion size is always 1? The insertion size is always 1, right? GEORGE CHURCH: No. This algorithm allows any number of insertions, up to the length of a sequence. And you'll see it when we do this in tabular form, how every possibility is covered. But you do one at a time. There are only three cases here. By dealing with just three cases at a time, you actually end up having the full generality of any number of insertions and deletions. And that's the beauty of this algorithm. You don't have to explicitly do every possible insertion with every possible deletion. You just have to run them through once. OK. Now, I said that I was going to do two treatments of sequence similarity. These are both dynamic programming of pairwise sequences. The previous one was global. This one is local.
The only difference now is that we restrain the score to be greater than 0. We don't permit negatives. So that means we're not penalizing the mismatches, for example, at the ends. Remember when I showed you that specific example early on. So now we have four choices-- the same three as before, plus 0. And the other thing that I made a little different here is, rather than having a specific sequence-- that tetranucleotide-- here, we have a general sequence, where the ellipses show that the sequences are up to i long and up to j long. And at this stage in the scoring, you're going to either lop off the i and j sequence elements-- this would be a single base, and you do that score in the central scoring here-- or you have an insertion at the top or an insertion at the bottom. So this is just a restatement of the previous one, generalized and made into a local alignment, which, in general, is what people do. People do local alignments rather than global ones, because it's unsafe to say that the ends of your sequence will align. But we'll work through both of these as examples. Now, we're going to compute this as a row-by-row algorithm. Now, casually, you could just leave off this frame along the edge. But in order to make the algorithm be the same for the beginning and all the intermediate steps, what you do is you pre-fill this frame with numbers such that the edges are some very, very small number, which is smaller than the sum of all the scores that you could get out of this table, so that you can't really come in from those edges. You have to come in from the zero point, because the global alignment requires the ends to align. So this is requiring the left-hand end to align. And so then the first comparison-- the only comparison you can really do-- is the A, A. That's the terminal comparison. And that happens to be a perfect match, so it gets a score of plus 1. Now, the next square that you can do is minus 1.
And remember, each of these has three possibilities in the global alignment. It can be an insertion, a deletion, or just a direct comparison of match versus mismatch. So for this first one, the insertion and deletion were ruled out. They weren't going to win the maximum score. So you basically got 1. It gets a little more interesting when you go to adding this next C. In order to add this C on the horizontal axis without adding anything on the vertical axis, that means that you've got an indel. And that means that you've got your A-A match. But now, to add this C, you've got a negative 2-- a penalty of minus 2. And so the net result is a minus 1. And then, for each subsequent one, it's assumed that the extension penalty is the same as the initial indel, which is all negative 2. And so this is an A-A match followed by one insertion, two insertions, or three insertions. And three insertions, of course, gives you minus 5. And you just keep walking through this. Each one of these squares, essentially, is the maximum of three possibilities. The diagonal-- if you follow the little yellow diagonal line from the 1 to the 0-- means you've taken an A-A match and a C-T mismatch, and the negative 1 cancels out the positive 1, and you get a 0. Alternatively, that 0 is actually the maximum of that individual score compared to an indel from a minus 1 plus this mismatch, which is not going to be better than 0, and a minus 1 plus the mismatch coming in horizontally, which is also not going to be better than 0. So you end up with 0, which is the perfect match plus a mismatch, no indels. And similarly, you can fill up the entire table this way. Finally, now, you can trace what the best scores are going from end to end here, going all the way from your A-A terminal match at the left end to the A-A terminal match at the right end. And you can see the best traceback route is going through the diagonal here.
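The row-by-row fill just described is the Needleman-Wunsch global alignment. A compact sketch with the same plus 1 / minus 1 / minus 2 metrics, returning only the score rather than the traceback:

```python
def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-2):
    """Global alignment score via the row-by-row table fill."""
    n, m = len(s), len(t)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = gap * i          # frame: i leading gaps
    for j in range(1, m + 1):
        F[0][j] = gap * j          # frame: j leading gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + sub,   # match or mismatch
                          F[i - 1][j] + gap,       # indel: gap in t
                          F[i][j - 1] + gap)       # indel: gap in s
    return F[n][m]   # ends are forced to align

# The lecture's example: the ungapped alignment of ATGA vs ACTA, score 0,
# is optimal under these metrics.
assert needleman_wunsch("ATGA", "ACTA") == 0
assert needleman_wunsch("ACGT", "ACGT") == 4
```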
This 0 is the maximum of three possibilities-- left-right, up-down, and diagonal. Similarly, this minus 1 was the best of three possibilities. Remember, this is a global alignment that allows negative values. And its maximum was along the diagonal, and so forth. That's an example of the two basic steps. You set up the scoring metrics. You set up this n by n or n by m matrix. And then you just fill it up. And that's an n squared operation. It just goes up with the lengths of the two sequences. STUDENT: [INAUDIBLE]. GEORGE CHURCH: Question. STUDENT: The diagonal is not the only optimal-- GEORGE CHURCH: It's not. That's true. STUDENT: [INAUDIBLE] GEORGE CHURCH: That's right. If you have an off-diagonal that's equivalent, then that's another valid sequence alignment. And actually, it comes up quite commonly, both in global and local alignments. And then the lower right-hand corner of slide 20 shows the specific interpretation of this brown set of arrows, the particular traceback that we chose to highlight here, which is not the only one. And that's interpreted the same way that we interpreted it symbolically in an earlier slide. Now, this is also from a much earlier slide. This is the one where we had the motif to illustrate the local alignment. And the left-hand side matrix is for local alignment, using the Smith-Waterman algorithm. And on the right-hand side is the global alignment, using the Needleman-Wunsch type algorithm, which we just used on a shorter sequence. And here, we've emphasized the diagonal, which gives a score of 2 and has a traceback along the magenta diagonal, and would have the interpretation of the top sequence directly over the bottom sequence. On the other hand, if we look for local alignments and we do not penalize the offsets or the indels, then you can get an example.
And here's another magenta traceback, where the A-A match is not on the diagonal for the global sequence alignment, but it hasn't been penalized. So it picks up the 0's from the frame boundary cells and just picks up the positive 1 perfect match. And then, when you add a C, it picks up another one, and another C, another A. And all four add up to 4. Adding an additional base, however, does not help, because it has to be a mismatch or an insertion or deletion. So going from the 4 to an indel causes it to drop by minus 2, giving the two 2's. And going along the diagonal picks up a mismatch, which is a minus 1 penalty. So you just can't do better. And so that determines the edges of your local alignment. So you not only have a score and a traceback, but you also have endpoints. STUDENT: [INAUDIBLE]. GEORGE CHURCH: Yes, question? You, yeah. STUDENT: Now, when this gets going, though, on the not exactly diagonal but in diagonal [INAUDIBLE]---- GEORGE CHURCH: Yes, you'll get a longer-- STUDENT: B, which is obviously not as good-- GEORGE CHURCH: And then 4. STUDENT: And then you keep going, you get back to 4. GEORGE CHURCH: And that's another example, just like the previous one. That's another valid alignment. It's still a local alignment. It doesn't have the total global endpoints. And it has an equal score to the shorter motif. STUDENT: So how do these two compare? I mean, would you-- GEORGE CHURCH: They're equal. STUDENT: --say because the other one is a longer [INAUDIBLE] GEORGE CHURCH: Well, in this case, the scoring algorithm was set up in such a way that length didn't factor into it, other than the fact that longer sequences have more chances to have more perfect matches. So in this case, they would be equivalent. As you get to more detailed substitution matrices, the chances of getting two identical scores are weaker. But with nucleic acids with this kind of simple scoring, it comes up all the time. So that's fine.
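The local version, Smith-Waterman, differs only in flooring every cell at zero and taking the best cell anywhere in the table rather than the corner. A sketch; the test sequences are made up, not the slide's motif:

```python
def smith_waterman(s, t, match=1, mismatch=-1, gap=-2):
    """Local alignment score: same recursion as the global case, but scores
    are floored at zero and the answer is the best cell anywhere."""
    n, m = len(s), len(t)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            H[i][j] = max(0,                        # restart: never go negative
                          H[i - 1][j - 1] + sub,    # match or mismatch
                          H[i - 1][j] + gap,        # indel
                          H[i][j - 1] + gap)        # indel
            best = max(best, H[i][j])
    return best

# A perfect internal motif is found without penalizing the flanks:
assert smith_waterman("GGTACGTA", "TACG") == 4
# All mismatches: the floor at zero keeps the local score at 0.
assert smith_waterman("AAAA", "TTTT") == 0
```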
We've now bargained our way down from a horrible exponential potential way of doing alignments to something which scales by n times m, where those are the lengths of the two sequences. You could think of this as a rectangular matrix such as the ones we've been doing. And both the time and the space, or memory, requirements for the algorithm will scale by this quadratic relationship. And the amount of time and memory per entry is modest. So in absolute terms, it would be on the order of one comparison-- that's that maximum comparison-- and three addition steps in computing the entry. And the memory could be on the order of a byte. The data structure could be integer, or it could be floating point. And again, you have to have some way of finding the entries in the table. So that's fine. It scales gracefully. But how big is it? Let's say we had two megabase genomes. In order to hold entries of that size, you might want to set aside 4 bytes. And so you have the 4 bytes times 10 to the sixth squared. This is just ballpark. There are various ways you could squeeze this a bit. But this is 4 terabytes of memory. And for a gigahertz CPU, you might be able to do a million entries per second, so that with 10 to the sixth squared entries, that's about 10 days. Now, that's a fairly small genome. Most genomes are bigger than 1 megabase. And so when we had the discussions at the beginning of the Genome Project, one of the things the computer people brought up was, how are we going to compare a billion base pairs with a billion base pairs if the goal of this project is to do the three billion base pair human genome? And of course, back then, most computers were 4 gigabytes, and a gigahertz was a quite remarkable computer. And of course, the answer was that we weren't going to do a full dynamic programming of the human genome against itself. We were going to cheat in various ways.
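The ballpark arithmetic for two megabase-scale sequences can be checked directly: 4 bytes per entry over a 10^6 by 10^6 table lands in the terabyte range, and about ten days at a million entries per second:

```python
n = 10**6                      # each sequence about a megabase
bytes_per_entry = 4            # room for scores of this magnitude
entries = n * n                # the full n-by-n table

memory_bytes = bytes_per_entry * entries
assert memory_bytes == 4 * 10**12      # about 4 terabytes for the table

entries_per_second = 10**6     # ballpark for a roughly gigahertz CPU
days = entries / entries_per_second / 86400
assert 10 < days < 13          # on the order of ten days
```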
And it took the recognition that it really was practical and, biologically, not much of a shortcut to look for little anchor points that would tell you that maybe the sequences don't align end to end, but there's some anchor point where you have enough bases or enough amino acids in a row that you can say, OK, here's one point where they definitely align. Let's now make reasonable assumptions by how many indels there can be, for example, by knowing how different the two sequences are. And so if you know the differences of sequences, then you can say, I'm not going to allow more than a reasonable number of indels based on how different the sequences are. And you make a band which is a narrow width-- here's a fairly extreme example where we have a width of 3. And so rather than doing a full n squared matrix where you filled up the entire thing, we just do this band, which is on the order of the width of the band times the length of the longest sequence. Now, this doesn't look very impressive for this case because n is small, and w is relatively large. But if n were billions and w were, say, 3 to 5, then it would be a very significant savings. So there's two key things here. One is the banding, and the other is getting the anchor points. So summary for this half of the talk is that dynamic programming is really the rigorous way to compare two sequences. And after the break, we'll see how you can compare multiple sequences. We need to work towards a statistical interpretation of these alignments. That's going to require some test sets-- sorry, some training sets-- where you can see how it actually behaves on real biological populations of sequence alignments. We need to compute either a global or a local alignment. And you've seen algorithms for doing each of those and how there's important but subtle differences between them. 
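The banding idea can be sketched by filling only the cells within |i - j| <= w of the diagonal. This assumes the optimal path stays near the diagonal, i.e., the two sequences are similar enough that few indels are needed; out-of-band cells are simply treated as unreachable:

```python
def banded_global(s, t, w, match=1, mismatch=-1, gap=-2):
    """Global alignment restricted to a band of half-width w around the
    diagonal: only cells with |i - j| <= w are filled, so the work is
    on the order of w * n instead of n * m."""
    NEG = float("-inf")
    n, m = len(s), len(t)
    F = {(0, 0): 0}
    for i in range(n + 1):
        for j in range(m + 1):
            if (i, j) == (0, 0) or abs(i - j) > w:
                continue                       # outside the band: skip
            candidates = []
            if i and j:                        # diagonal: match or mismatch
                sub = match if s[i - 1] == t[j - 1] else mismatch
                candidates.append(F.get((i - 1, j - 1), NEG) + sub)
            if i:                              # indel: gap in t
                candidates.append(F.get((i - 1, j), NEG) + gap)
            if j:                              # indel: gap in s
                candidates.append(F.get((i, j - 1), NEG) + gap)
            F[(i, j)] = max(candidates)
    return F.get((n, m), NEG)

# A band covering the whole table reproduces the full global result:
assert banded_global("ATGA", "ACTA", w=4) == 0
# Identical sequences need only a narrow band around the diagonal:
assert banded_global("ACGT", "ACGT", w=1) == 4
```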
And we've talked about ways that we can improve the algorithm tremendously using the simple scoring functions or complicated ones that are determined empirically. So let's take a little break. And we'll come back and talk about multisequence alignment. |
Modern_Robotics_All_Videos | Modern_Robotics_Videos_Acknowledgments_Kevin_Lynch.txt | Frank Park and I would like to thank you for watching these videos. I'd also like to thank the many people who contributed to this video project including my colleagues Michael Peshkin, who created the Lightboard, and Jarvis Schultz, who assisted with the robot simulations. Also the many students involved including Jian Shi, Zack Woodruff, Ben Sullivan, and most of all Huan Weng who developed most of the software and animations. I'd also like to thank Northwestern for its generous support of this project as well as the Northwestern Advanced Media Production Studio who produced these videos. Most importantly I'd like to thank my wife Yuko and my kids, Erin and Patrick, for putting up with some long hours. Thanks for watching. I hope you find the videos useful. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1216_Planar_Graphical_Methods_Part_1_of_2.txt | Any planar twist can be visualized as a rotation about an axis out of the plane. This twist can be represented by a point (x_c, y_c) and the angular velocity omega_z about the point. We define the center of rotation to be the point (x_c, y_c) plus a label that gives the sign of the angular velocity. The motion could also be expressed as a planar twist V in the fixed space frame. Given this twist, we can calculate the point (x_c, y_c), and given (x_c, y_c) and omega_z, we can calculate the twist. The center of rotation is a convenient graphical representation of a planar twist when we only need to know the sign of the angular velocity. Let's visualize the mapping from twists to centers of rotation. This is the three-dimensional space of twists of a planar rigid body and the sphere of unit twists. We draw planes equipped with coordinate frames at omega_z equals 1 and omega_z equals -1. The top plane is the plane of rotation centers with a plus sign label, for a positive angular velocity. The bottom plane is at omega_z equals -1, and it is the plane of rotation centers with a minus sign label, for a negative angular velocity. A twist V with a positive angular component can be intersected with the positive plane, scaling V by a positive coefficient if need be. The plane a twist intersects with determines the plus or minus label associated with the twist, and the location of the intersection determines (x_c,y_c). If the twist has no angular component, then it can be thought of as a rotation center at infinity, as we'll see in a moment. Now consider three unit twists, written a, b, and c, and their mapping to three rotation centers. The rotation centers a and b have positive labels, while c has a negative label. 
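The mapping between a planar twist and its labeled center of rotation can be written down directly. A minimal sketch: the relations x_c = -v_y/omega_z and y_c = v_x/omega_z are not stated explicitly in this video, but follow from v = -omega cross q, as used elsewhere in the book.

```python
def twist_to_cor(omega_z, v_x, v_y):
    """Map a planar twist V = (omega_z, v_x, v_y) to its center of rotation
    (x_c, y_c) and a '+' or '-' label from the sign of omega_z. A zero
    angular component corresponds to a rotation center at infinity
    (a pure translation)."""
    if omega_z == 0:
        return None  # rotation center at infinity
    x_c = -v_y / omega_z
    y_c = v_x / omega_z
    label = "+" if omega_z > 0 else "-"
    return (x_c, y_c, label)

def cor_to_twist(x_c, y_c, omega_z):
    """Inverse map: given the rotation center and angular velocity,
    recover the twist (omega_z, v_x, v_y)."""
    return (omega_z, omega_z * y_c, -omega_z * x_c)
```

Scaling a twist by a positive constant leaves its rotation center and label unchanged, which is why intersecting the scaled twist with the omega_z = +1 or -1 plane is well defined.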
As we saw in the previous video, the set of feasible twists of a body in contact with stationary constraints is a polyhedral convex cone, and it will be convenient to be able to represent such cones using rotation centers. As an example, let's construct a polyhedral convex twist cone as the positive span of the three unit twists, a, b, and c. The intersection of that twist cone with the unit sphere is indicated. Using the center of rotation mapping, the twist cone can be represented as this region of rotation centers. This hatched region is properly interpreted as a single convex region, just like the twist cone. It is connected by rotation centers at infinity, which correspond to pure translational motion without rotation. To see this, let's move a twist from twist b, as indicated by the green dot on the unit sphere, to twist c. The path of the twist is shown in green. The corresponding rotation center also moves from b, through infinity, to c along the path shown here. This green segment, passing through infinity, is the positive span of the rotation center b with a plus label and the rotation center c with a minus label. In other words, the positive span of two rotation centers of opposite signs consists of a ray of positive rotation centers, a ray of negative rotation centers, and a point at infinity corresponding to a pure translation. All the rotation centers are on the same line. The positive span of two rotation centers of the same sign is the line segment between the two centers, with the same sign. We can generalize these examples to find the positive span of three rotation centers labeled plus, or the positive span of two rotation centers labeled plus and one rotation center labeled minus. In the next video we'll use the center of rotation representation to analyze planar contact kinematics. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_412_Product_of_Exponentials_Formula_in_the_EndEffector_Frame.txt | In the previous video, we derived the product of exponentials formula to calculate T of theta, the configuration of the end-effector frame {b} relative to the fixed space frame {s}, when we're given the joint positions theta. In that formula, the joint screw axes are defined in the {s}-frame fixed to the world. In this video, we derive an alternative version of the formula where the joint screw axes are defined in the {b}-frame fixed to the end-effector. We use the same RPR robot as an example. First let's move the robot to its zero configuration. As before, we define M to be the configuration of the {b}-frame when the robot is at its zero configuration. Now we rotate joint 1 by an angle theta_1. The motion of the {b}-frame is a rotation about the screw axis of joint 1. We will represent the screw axis in the {b}-frame as B_1, with the angular component omega_1 and the linear component v_1. Since the screw axis has rotation, omega_1 is a unit vector. Since the screw axis is aligned with the z-axis of the {b}-frame, omega_1 is equal to zero, zero, one. The linear motion v_1 can be obtained by visualizing a turntable at joint 1 rotating and measuring the linear velocity at a point at the origin of the {b}-frame. Since the distance between joint 1 and the {b}-frame is 3, the linear velocity v_1 is zero, three, zero in the {b}-frame. We could also calculate this by defining a point q_1 on the axis of joint 1, where q_1 is expressed in the {b}-frame. Then v_1 is minus omega_1 cross q_1. Now that we have the screw axis B_1, we can calculate the {b}-frame configuration T of theta. We simply apply the body-frame transformation corresponding to motion along the B_1 screw axis by an angle theta_1. This transformation is e to the bracket B_1 times theta_1. Since it is a body-frame transformation, it postmultiplies M. 
Now suppose we change joint 2, extending it by theta_2 units of distance. The screw axis B_2 corresponding to joint 2 has zero angular component omega_2, so the linear component v_2 must be a unit vector. If we imagine the whole space translating at unit velocity along joint 2, a point at the origin of the {b}-frame would move with a linear velocity v_2 equal to one, zero, zero, expressed in the {b}-frame. Therefore the screw axis B_2 is defined as zero, zero, zero, one, zero, zero. The new configuration of the {b}-frame, T of theta, is obtained by right-multiplying the previous configuration by e to the bracket B_2 times theta_2. Notice that the previous motion of joint 1 does not affect the relationship of joint 2's screw axis to the {b}-frame, because joint 1 is not between joint 2 and the {b}-frame. Therefore, B_2 is the same as the screw axis of joint 2 when the robot is at its zero configuration. Finally, let's rotate joint 3 by theta_3. The screw axis B_3 is a pure rotation about an axis out of the screen, so the omega_3 vector is zero, zero, one. Rotation about this axis induces a linear motion v_3 equal to zero, one, zero in the {b}-frame. The new configuration of the {b}-frame, T of theta, is given by right-multiplying the previous configuration by the new body-frame transformation. Again, the previous motions of joints 1 and 2 do not affect the relationship of joint 3's screw axis to the {b}-frame, because they are not between joint 3 and the {b}-frame. Therefore, B_3 is the same as the screw axis of joint 3 when the robot is at its zero configuration. In summary, we've derived a procedure for forward kinematics when the screw axes are expressed in the {b}-frame. First, define the M matrix representing the {b}-frame when the joint variables are zero. Second, define the {b}-frame screw axes B_1 to B_n for each of the n joint axes when the joint variables are zero. Finally, for the given joint values, evaluate the product of exponentials formula in the {b}-frame. 
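The three-step procedure can be sketched numerically. This is an illustrative sketch, not the video's code: exp_screw is the standard closed-form screw exponential, the B axes are the ones derived above for the RPR arm, and M is a placeholder identity, since the video does not give the arm's full home configuration.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_screw(B, theta):
    """Closed-form exponential of a screw axis B = (omega, v), as a 4x4 matrix."""
    w, v = np.asarray(B[:3], float), np.asarray(B[3:], float)
    T = np.eye(4)
    if np.allclose(w, 0):                       # prismatic: pure translation
        T[:3, 3] = v * theta
        return T
    wh = skew(w)                                # revolute: Rodrigues' formula
    T[:3, :3] = np.eye(3) + np.sin(theta) * wh + (1 - np.cos(theta)) * wh @ wh
    T[:3, 3] = (np.eye(3) * theta + (1 - np.cos(theta)) * wh
                + (theta - np.sin(theta)) * wh @ wh) @ v
    return T

def fk_body(M, Blist, thetalist):
    """Product of exponentials in the {b}-frame:
    T = M e^([B_1]theta_1) ... e^([B_n]theta_n)."""
    T = np.array(M, dtype=float)
    for B, th in zip(Blist, thetalist):
        T = T @ exp_screw(B, th)
    return T

# Screw axes from the RPR example above; M is a placeholder.
Blist = [[0, 0, 1, 0, 3, 0],    # B_1: revolute, axis along z_b
         [0, 0, 0, 1, 0, 0],    # B_2: prismatic, along x_b
         [0, 0, 1, 0, 1, 0]]    # B_3: revolute
M = np.eye(4)
```

With all joint values zero, fk_body returns M, and extending the prismatic joint 2 by d translates the {b}-frame by d along its x-axis, as expected.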
Comparing the two product of exponential formulas, in the {s}-frame and the {b}-frame, the major differences are the frame of representation of the screws and whether M is on the right side or the left side of the sequence of matrix multiplications. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapters_91_and_92_PointtoPoint_Trajectories_Part_1_of_2.txt | A robot controller typically accepts a steady stream of desired robot configurations, reads joint sensors to determine the robot's actual configuration, and updates the actuator commands to follow the desired configuration. This process can happen thousands of times a second. A robot configuration as a function of time is called a trajectory. We can write a trajectory as theta of t, where the time t goes from zero to capital T. This figure shows a trajectory for a robot with two degrees of freedom. We could also plot this trajectory directly in the configuration space, also known as the C-space. As the time t increases from zero to capital T, the configuration follows the path shown here. In some cases it is desirable to separate the C-space path from the speed at which it is followed. For example, we might plan a geometric path for a mobile robot to avoid obstacles on the floor, without worrying about how fast the path is followed. We define a path to be a curve in configuration space as a function of a path parameter, s, that goes from zero to one. As s increases from zero, the robot moves from the start configuration at theta-of-zero to the end configuration at theta-of-one. A path can be turned into a trajectory by defining a function s of t, which maps the time range zero to capital T to the path parameter range zero to one. This function is called a time scaling. The time scaling controls how fast the path is followed. Now, with a trajectory theta of s of t, the time derivative theta-dot is determined by the chain rule to be d-theta d-s times s-dot. The acceleration theta-double-dot is determined by the product rule and the chain rule to be d-theta d-s times s-double-dot plus d-squared-theta d-s-squared times s-dot-squared. 
Since the dynamics depend on theta-double-dot, for the dynamics to be well defined, the second derivatives of both theta-of-s and s-of-t must exist. In this chapter we consider the problem of planning paths and trajectories without considering obstacles. In Chapter 10, on motion planning, we address the case of obstacles in the environment. Let's start by planning a path for a robot arm in its C-space. The simplest type of path is a straight-line path from an initial configuration theta-start to a final configuration theta-end, as described by this equation. As s goes from 0 to 1, the configuration goes from theta-start to theta-end. A straight-line path in joint space is shown here, for a 2R robot with 180 degrees of motion about its first joint and 150 degrees of motion about its second joint. The path can also be visualized in the workspace, as shown here. Note that the endpoint of the robot does not follow a straight line. If we prefer a straight-line motion of the end-effector in Cartesian space, we can define X to be the coordinates of the end-effector and define a straight-line path as shown here. Then we have to use inverse kinematics to solve for the robot configuration at each point along the path. As you can see here, some straight-line paths in Cartesian space cannot be executed, as they pass outside the workspace. One advantage of planning straight-line motions in joint space is that the joint limits are usually independent of each other, so the set of feasible joint configurations is convex, unlike the workspace. A straight line between two points in a convex space always remains inside the space. We can also plan a path between two rigid-body frames represented in SE(3). We could try directly extending our straight-line definition in joint space, but of course this does not make sense; there is no meaning to subtracting two elements of SE(3). Let's find the screw path where the frame follows a constant twist from X_start to X_end. 
First, let's express the frame X_end relative to X_start. Each of X_start and X_end is implicitly defined in a space frame {s}, so by our subscript cancellation rule we find that X_start-inverse times X_end is the configuration of the {end} frame in the {start} frame. The log of this is the little-se(3) representation of the twist, expressed in the {start} frame, that takes the {start} frame to the {end} frame in unit time. So our formula for the path is X-of-s equals X_start times the matrix exponential of s times the log of X_start-inverse times X_end as s goes from zero to one. The matrix exponential is multiplied on the right since the twist is expressed in the {start} frame, not the space frame. The path parameter s determines how far we follow the twist that takes the {start} frame to the {end} frame. The final screw path can be visualized as shown here. This is a "straight-line path" in the sense that the twist is constant throughout the motion. Another type of path between X_start and X_end is one that decouples the rotation and translation. The origin of the frame follows a straight line, and the rotation is about an axis fixed in the frame. For this type of path, the position coordinates p follow a straight-line path, as discussed earlier, while the orientation satisfies a formula similar to that for a constant twist, except now just for the orientation components. These two types of paths are the most natural "simple" paths between two frames. Now that we can plan paths between two configurations, in the next video we study common time scalings that turn a path into a trajectory. |
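The second, decoupled path can be sketched with only the SO(3) exponential and log (the full screw path would additionally need the SE(3) matrix log). A rough illustration; the so3_log here assumes the rotation angle lies strictly between 0 and pi.

```python
import numpy as np

def so3_exp(w, theta):
    """Rodrigues' formula: rotation about the unit axis w by angle theta."""
    wh = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    return np.eye(3) + np.sin(theta) * wh + (1 - np.cos(theta)) * wh @ wh

def so3_log(R):
    """Return (unit axis, angle) of a rotation matrix, assuming 0 < theta < pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return w, theta

def decoupled_path(R_start, p_start, R_end, p_end, s):
    """Decoupled path: the origin follows a straight line, while the
    orientation rotates about a constant body-fixed axis, s in [0, 1]."""
    w, theta = so3_log(R_start.T @ R_end)
    R = R_start @ so3_exp(w, s * theta)
    p = p_start + s * (p_end - p_start)
    return R, p
```

At s = 0.5 the frame is halfway along the line between the origins and halfway through the rotation, which is the sense in which the two components are decoupled.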
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_512_Body_Jacobian.txt | In the previous video, we learned how to take the joint screw axes S_1 to S_n, defined in the space frame {s} when the robot is at the zero configuration, and transform them to the n columns of the space Jacobian at any arbitrary joint configuration theta. In this video, we construct the 6 by n body Jacobian J_b from the screw axes B_1 to B_n, expressed in the end-effector frame {b}. The body Jacobian transforms joint velocities to the body twist. To derive the body Jacobian J_b, let's use the 5R arm from the previous video as an example. To derive J_b, we need to define the end-effector frame {b}, but we don't need an {s} frame. J_b has five columns, one for each joint, and in this video we will focus on J_b3, the third column, corresponding to the end-effector twist when joint 3 moves with unit velocity. First we set all joint angles equal to zero. At this configuration, J_b3 is just B_3, the screw axis of joint 3 expressed in the {b} frame when the arm is at its zero configuration. Now we rotate joint 1. Notice that this rotation of joint 1 does not change the relationship between joint 3 and the {b} frame, so J_b3 is still equal to B_3. Now we rotate joint 2. Again, the relationship between joint 3 and the {b} frame is unaffected by joint 2's motion, so J_b3 is still equal to B_3. Now we rotate joint 3. As with joints 1 and 2, J_b3 is unaffected by joint 3's motion. Now we rotate joint 4 by theta_4. This motion changes the configuration of joint 3 relative to the {b} frame, so J_b3 changes. We define the frame {b-double-prime} to be the {b} frame before joint 4 is rotated, and the frame {b-prime} to be the {b} frame after joint 4 is rotated. The relationship between the two is given by T_b-double-prime_b-prime equals e to the bracket B_4 times theta_4. We define the {b-double-prime} frame because the screw axis for joint 3 is just B_3 in this frame. 
Finally, we rotate joint 5 by theta_5, giving us the final end-effector frame {b}, obtained by rotating the frame {b-prime} about the joint 5 screw axis by theta_5. To find the {b} frame relative to the {b-double-prime} frame, we postmultiply T-b-double-prime-b-prime by the body-frame transformation corresponding to rotation about the body screw axis B_5, giving us the equation shown here. What we really want, though, is the configuration of the {b-double-prime} frame relative to the {b} frame, so we reverse the subscripts, which is the same as taking the inverse of the transformation matrix. Making use of the fact that the inverse of A times B, where A and B are invertible matrices, is just B-inverse times A-inverse, we can rewrite T_b_b-double-prime in this form. Since the screw axis of joint 3 is just B_3 in the {b-double-prime} frame, to find J_b3 we just need to use our rule for changing the frame of reference of a twist. The final expression for the J_b3 column depends on the screw axis for joint 3 as well as the joint angles and screw axes for joints 4 and 5. The same reasoning applies for any joint, so we can generalize to this definition of the body Jacobian J_b. The last column of the body Jacobian is just the screw axis B_n when the robot is at its zero configuration. It does not depend on the joint positions, because no joint is between joint n and the {b} frame. Any other column i of the body Jacobian is given by the screw axis B_i premultiplied by the transformation that expresses the screw axis in the {b} frame for arbitrary joint positions. You can see that J_b1 depends on the positions of joints 2 through n, J_b2 depends on the positions of joints 3 through n, etcetera. You can also see that the body Jacobian is independent of the choice of the space frame {s}. Since each column of a Jacobian is a twist, we can use our rule for representing a twist in a different frame to translate between the space Jacobian J_s and the body Jacobian J_b. 
J_b is obtained from J_s by the matrix adjoint of T_bs, and J_s is obtained from J_b by the matrix adjoint of T_sb. In the next video we will see that the Jacobian is used not only to convert joint velocities to end-effector twists, but also to understand how end-effector wrenches are related to torques and forces at the joints. |
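The column-by-column construction described above can be sketched as follows. This is an illustrative implementation checked against a hypothetical planar 2R arm, not the 5R arm from the video; exp_screw is the standard closed-form screw exponential and adjoint is the 6x6 matrix adjoint of a transformation.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_screw(B, theta):
    """Closed-form exponential of a screw axis B = (omega, v), as a 4x4 matrix."""
    w, v = np.asarray(B[:3], float), np.asarray(B[3:], float)
    T = np.eye(4)
    if np.allclose(w, 0):
        T[:3, 3] = v * theta
        return T
    wh = skew(w)
    T[:3, :3] = np.eye(3) + np.sin(theta) * wh + (1 - np.cos(theta)) * wh @ wh
    T[:3, 3] = (np.eye(3) * theta + (1 - np.cos(theta)) * wh
                + (theta - np.sin(theta)) * wh @ wh) @ v
    return T

def adjoint(T):
    """6x6 adjoint of a transformation, acting on twists (omega, v)."""
    R, p = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, 3:] = R
    Ad[3:, :3] = skew(p) @ R
    return Ad

def body_jacobian(Blist, thetalist):
    """Column i is Ad(e^(-[B_n]th_n) ... e^(-[B_{i+1}]th_{i+1})) B_i;
    the last column is just B_n."""
    n = len(Blist)
    Jb = np.zeros((6, n))
    Jb[:, n - 1] = Blist[n - 1]
    T = np.eye(4)   # accumulates e^(-[B_n]th_n) ... e^(-[B_{i+1}]th_{i+1})
    for i in range(n - 2, -1, -1):
        T = T @ exp_screw(Blist[i + 1], -thetalist[i + 1])
        Jb[:, i] = adjoint(T) @ np.asarray(Blist[i], float)
    return Jb
```

For a planar 2R arm with unit link lengths and the {b} frame at the tip, the first column evaluates to (0, 0, 1, L1 sin theta_2, L2 + L1 cos theta_2, 0), matching the closed-form body Jacobian, and it is indeed independent of theta_1.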
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1331_Modeling_of_Nonholonomic_Wheeled_Mobile_Robots.txt | Nonholonomic wheeled mobile robots employ conventional wheels that don't allow sideways sliding, such as this wheel rolling upright on a plane. Its configuration q consists of the heading angle phi, the contact position (x,y), and the rolling angle theta. There are two controls driving the wheel: u_1, the forward-backward rolling angular speed, and u_2, the speed of turning the heading direction phi. With these controls, the rate of change of the coordinates can be expressed as a configuration-dependent matrix times the controls, where the configuration-dependent matrix is called G-of-q. G-of-q times u can be written g_1-of-q times u_1 plus g_2-of-q times u_2, where u_1 and u_2 are called the controls and g_1 and g_2 are called vector fields. Each of the two vector fields assigns a velocity to every point q in the configuration space, so these vector fields are sometimes called velocity vector fields. The total velocity of the wheel is the scaled sum of these vector fields, where the scaling coefficients are the controls, so these vector fields are sometimes called control vector fields. These vector fields are defined on a 4-dimensional configuration space, so they are hard to visualize. Here's a simple velocity vector field defined on a two-dimensional space, x-dot equals minus-y and y-dot equals minus-x. Back to the example of the rolling wheel, which we also call a unicycle, the equation q-dot equals G-of-q times u is the kinematic model of the unicycle. A diff-drive robot has two independently driven wheels and one or more caster wheels to keep it horizontal. 
If we define the 5-dimensional configuration of the robot as phi, the (x,y) position of a point halfway between the wheels, and the rolling angles of the left and right wheels, the kinematic model is given by this G-of-q matrix times the controls, which are the left and right wheel velocities, u_L and u_R. Usually we don't care about how far the wheels have rotated, so we can scratch the bottom two rows of the equation. The two columns of the G matrix are the control vector fields. Finally, a car-like robot uses Ackermann steering of the front wheels to create a center of rotation somewhere along the axis of the unsteered rear wheels. If we define the 4-dimensional configuration to be the heading angle phi, the position (x,y) of a point halfway between the rear wheels, and the steering angle psi, then the kinematic model is this G-of-q matrix times the controls, which are the forward velocity v and the rate of turning the steering wheel w. In this model, the control for steering is the steering speed w. To make this model more like the unicycle and the diff-drive robot, I'll assume that the steering control is the angle of the steering wheel, not its angular velocity. With this simplification, the steering angle is no longer part of the configuration, and the kinematic model simplifies to this, where v is still the forward-backward velocity but now omega is the rate of change of the heading direction, phi-dot. As shown in the book, we can calculate the steering angle psi needed to generate the virtual control omega using this transformation, which is a function of both v and omega. We call this kinematic model our canonical nonholonomic robot model, because it also models the unicycle and the diff drive. They also have control transformations that take the virtual controls v and omega and express them in terms of the actual controls. The only difference among the robots is the bounds on their controls. 
For the unicycle, limits on the forward and turning speeds are independent, so the available controls are a box in the control space. A diff-drive robot with bounds on the individual speeds of each wheel has a diamond of available controls. A car-like robot has a bowtie-shaped control set, due to bounds on the turning radius and bounds on the forward-backward speed. Finally, a forward-only car has only half the bowtie of controls. The canonical model says we have a 2-dimensional set of velocities for the 3-degree-of-freedom system. With a little manipulation we get the implicit Pfaffian constraint on the velocities, x-dot sine phi minus y-dot cosine phi equals zero. Since this velocity constraint cannot be integrated to a constraint on the configuration of the robot, it's called a nonholonomic constraint, as we learned in Chapter 2. The presence of this constraint is why we call these robots nonholonomic. In the next video I discuss the controllability properties of robots subject to velocity constraints. |
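The canonical model and a diff-drive control transformation can be sketched as follows. This is an illustrative sketch: the Euler step is only a crude integrator, and the wheel radius r and half-axle length d in diffdrive_to_canonical are hypothetical parameters in the standard diff-drive form.

```python
import math

def canonical_step(q, v, omega, dt):
    """One Euler step of the canonical nonholonomic model:
    phi-dot = omega, x-dot = v cos(phi), y-dot = v sin(phi)."""
    phi, x, y = q
    xdot, ydot = v * math.cos(phi), v * math.sin(phi)
    # The Pfaffian constraint x-dot sin(phi) - y-dot cos(phi) = 0
    # is satisfied by construction at every step:
    assert abs(xdot * math.sin(phi) - ydot * math.cos(phi)) < 1e-12
    return (phi + omega * dt, x + xdot * dt, y + ydot * dt)

def diffdrive_to_canonical(u_left, u_right, r, d):
    """Map diff-drive wheel speeds to the virtual controls (v, omega),
    for wheel radius r and wheels a distance 2d apart."""
    return r * (u_right + u_left) / 2, r * (u_right - u_left) / (2 * d)
```

Equal wheel speeds give omega = 0, a pure forward translation, which sits at one corner of the diamond of available diff-drive controls.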
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_4_Forward_Kinematics_Example.txt | The 4-joint RRRP robot, like you see here, is a popular choice for certain kinds of assembly tasks. In this picture, we see it at its zero configuration. To solve the forward kinematics, we need to find M, the configuration of the {b}-frame, and the joint screw axes when the arm is at its zero configuration. Then we can use the product of exponentials formulas from the previous videos. First let's focus on the orientation of M. From the picture, we can see that the {b}-frame x-axis is aligned with the minus y-axis of the {s}-frame. The {b}-frame y-axis is aligned with the minus x-axis of the {s}-frame. And the {b}-frame z-axis is aligned with the minus z-axis of the {s}-frame. Also, we can see that the {b}-frame is offset from the {s}-frame by 19 units in the x-direction and -3 units in the z direction of the {s}-frame. We add the row of zeros and a one to complete the M matrix. Next let's find the screw axis of joint 1, expressed in the {s}-frame. The axis of rotation is aligned with the {s}-frame z-axis, so the angular component of S1 is zero, zero, one. A rotation about this axis causes no linear motion of a point at the origin of the {s}-frame, so the linear component of the screw S1 is zero. We can also express the screw axis as B_1 in the {b}-frame. The joint axis is in the negative z_b direction, so the angular component is zero, zero, minus one. A unit angular velocity about the joint 1 axis induces a linear velocity at a point at the origin of the {b}-frame, and it is apparent from the figure that this linear velocity is 19 units in the minus x_b direction. This should be readily apparent from the figure; you shouldn't have to do any math. Now let's go faster through the rest of the joints. Joint 2's axis is aligned with the {s}-frame z-axis, so the angular component of the {s}-frame screw S_2 is zero, zero, one. 
Unit angular velocity about this axis induces a linear velocity at the {s}-frame origin of 10 units in the minus y_s direction. The screw axis B_2 in the {b}-frame is zero, zero, minus one, minus 9, zero, zero. Joint 3's rotational screw axis induces a large linear velocity at the origin of the {s}-frame but zero linear velocity at the origin of the {b}-frame. Finally, the prismatic axis of joint 4 is aligned with the z-axis of the {s}-frame and the minus z-axis of the {b}-frame. The angular component of both screw axes is zero, since it is a prismatic joint. For most open-chain robots, deriving the screw axes is just this easy: you can simply look at a good drawing of the robot at its zero configuration and get the screw axes by inspection. If anything is unclear about what we did, you should either pause this video at appropriate places or look at the examples in the book. So, this concludes Chapter 4. You now know how to use the material of Chapter 3 to solve the forward kinematics of robots. In Chapter 5, we will study the velocity kinematics relating joint velocities to the twist of the end-effector. |
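The axes read off by inspection can be cross-checked numerically, since a screw axis expressed in {s} maps to the {b}-frame by the matrix adjoint of M-inverse. A sketch using the M, S_1, and S_2 values stated above:

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def adjoint(T):
    """6x6 adjoint of a transformation, acting on twists (omega, v)."""
    R, p = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, 3:] = R
    Ad[3:, :3] = skew(p) @ R
    return Ad

# M as read off the figure: x_b = -y_s, y_b = -x_s, z_b = -z_s,
# with the {b} origin offset (19, 0, -3) in the {s}-frame.
M = np.array([[0, -1,  0, 19],
              [-1, 0,  0,  0],
              [0,  0, -1, -3],
              [0,  0,  0,  1]], dtype=float)
M_inv = np.eye(4)
M_inv[:3, :3] = M[:3, :3].T
M_inv[:3, 3] = -M[:3, :3].T @ M[:3, 3]

S1 = np.array([0, 0, 1, 0, 0, 0], float)     # joint 1 screw axis in {s}
S2 = np.array([0, 0, 1, 0, -10, 0], float)   # joint 2 screw axis in {s}
B1 = adjoint(M_inv) @ S1                     # same axes expressed in {b}
B2 = adjoint(M_inv) @ S2
```

Both results reproduce the values found by inspection: B_1 = (0, 0, -1, -19, 0, 0) and B_2 = (0, 0, -1, -9, 0, 0).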
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_94_TimeOptimal_Time_Scaling_Part_2_of_3.txt | In the last video we learned to express the robot's joint force and torque limits as constraints on the feasible accelerations s-double-dot along the path theta-of-s, as a function of the state (s, s-dot). In this video, we'll express those constraints graphically and gain some insight into the time-optimal time-scaling problem. Let's draw the plane (s, s-dot). At the beginning of the motion, s and s-dot are both equal to zero. At the end of the motion, s is equal to one and s-dot is equal to zero. The start and end states are shown as dots. Since we require the motion along the path to be monotonic, that is, the robot always moves forward along the path, s-dot must always be positive. So we only need to draw the top-right quadrant of the (s, s-dot) plane. If the robot moves very slowly along the path, then the motion of the robot is essentially along the s-axis from the start to the end. A more typical time scaling might look something like this. s-dot starts out at zero, increases to a maximum value, then drops back to zero at the end of the motion. Until now we have been expressing a time scaling as s as a function of time, but here we're plotting it as s-dot as a function of s. The same time scaling can be represented either way, but in this time-optimal problem it's more convenient to express a time scaling as s-dot-of-s. Now let's look at a particular state of the robot on its trajectory, indicated by this point (s, s-dot). We can draw the tangent vector to the time scaling, as shown here. Now let's get rid of the time scaling so we can focus on this tangent vector. The tangent vector consists of a horizontal component and a vertical component. The horizontal component expresses the rate of change of s, so it is just s-dot, which can be drawn as proportional to the height of the point along the s-dot axis. 
The vertical component expresses the rate of change of s-dot, in other words, the acceleration s-double-dot. If we assume that s-double-dot is always zero, then the tangent vectors at states (s, s-dot) would look like this. The horizontal component of a vector is determined by the s-dot value of the point. Now let's focus on one particular tangent vector at the state (s, s-dot). The horizontal component is s-dot. But now let's assume that the vertical component, the acceleration s-double-dot, can be any value in the range from L of (s, s-dot) to U of (s, s-dot), the range of feasible accelerations according to the dynamics. Summing these vertical vectors with the horizontal vector, we get the vectors shown here. These vectors form a cone called the feasible motion cone. At this state (s, s-dot), the tangent vector to the time scaling must be inside this cone to satisfy the actuator limits. Therefore, a time scaling like this would be OK at this state, as the tangent vector lies inside the feasible motion cone. If, instead, our feasible motion cone looked like this, the tangent vector is outside the cone, and this time scaling is not possible according to the robot's actuator limits. You could imagine drawing the motion cone at every point in the plane, and the problem is to get from the start state to the goal state as quickly as possible while keeping the tangent to the time scaling inside all motion cones along the curve. Since we want to go as fast as possible, the time scaling should always be as high as possible, and to find such a time scaling we could forward integrate the upper edge of the motion cones, starting from the initial state. Here you see a curve that is found by numerically integrating the maximum possible accelerations. Because this curve travels along the edge of the motion cones, at least one actuator is always operating at a limit. 
This time scaling causes the robot to follow the path as fast as possible, but it does not bring the robot to a stop at the end of the path, as required. So now imagine backward integrating from the end state along the lower edge of the feasible motion cones. This integral curve intersects the other integral curve at a switch point s-star. The time scaling you see here, represented by the two segments obtained by numerical integration, is the time-optimal time scaling. During the first segment, the robot maximally accelerates along the path, and during the second segment the robot maximally decelerates along the path. It is clear why this is time optimal: in the first segment, the robot cannot go any faster, and in the second segment, if the speed s-dot were any higher at any given s, the robot would not be able to come to a stop. This time scaling keeps the speed s-dot as high as possible at all times, and therefore the duration of the motion is as short as possible. This kind of trajectory is called a "bang-bang" trajectory, because one or more of the actuators "bangs" against a limit during the first segment, then one or more of the actuators "bangs" against a limit during the second segment. Compare this to a non-optimal trajectory, where the tangent to the time scaling is in the interior of the motion cones, not on the edges. The speed of the robot at any given position s is lower than what it is for the time-optimal time scaling. So this is the basic idea behind the time-scaling algorithm, except for one hitch, which I'll describe now. Let's plot the motion cone at a particular state (s, s-dot). If you keep s constant but increase s-dot, you get a different motion cone. If you increase s-dot further, then the motion cone may reduce to a single vector, where the lower acceleration limit is the same as the upper acceleration limit. 
If you increase s-dot further, then no motions are feasible, and this means the robot is traveling too fast for the actuators to keep the robot on the path. In general, we could plot a velocity limit curve: at states on this curve, only a single acceleration is possible, and at states above this curve, the robot leaves the path immediately. At states below the curve, there is a cone of possible tangent vectors. Now, considering the existence of a speed limit, we might end up with a situation as illustrated here: the maximum acceleration curve and the maximum deceleration curve do not intersect, but instead run into the velocity limit curve. Therefore, bang-bang control is not possible. What to do in this case is the subject of the next, and final, video of Chapter 9. |
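The bang-bang switch-point idea can be checked with a small numerical sketch. This is a simplification, not the full time-scaling algorithm: it assumes the acceleration bounds U > 0 and L < 0 are constant along the path, so the forward and backward integral curves, and hence the switch point s-star, have closed forms.

```python
import numpy as np

# Simplified bang-bang switch computation. Assumption (not from the video):
# constant acceleration bounds, so the forward curve from (0, 0) satisfies
# s_dot^2 = 2*U*s and the backward curve from (1, 0) satisfies
# s_dot^2 = 2*(-L)*(1 - s); equating them gives the switch point s_star.

def bang_bang_switch(U=2.0, L=-2.0):
    s_star = -L / (U - L)                 # switch position along the path
    s_dot_star = np.sqrt(2 * U * s_star)  # path speed at the switch
    return s_star, s_dot_star

s_star, s_dot_star = bang_bang_switch()
print(s_star)  # 0.5: symmetric bounds switch at the midpoint of the path
```

For a real robot U and L vary with (s, s-dot), which is why the video integrates the cone edges numerically instead.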
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_105_Sampling_Methods_for_Motion_Planning_Part_1_of_2.txt | In an earlier video, we learned that path planning based on a true roadmap representation of free C-space is complete, meaning that the planner will find a path if one exists. Since it is difficult to analytically calculate a true roadmap, we look for methods to approximately construct a roadmap. One type of approximate roadmap is the probabilistic roadmap, or PRM for short. The PRM is constructed from a set of configurations sampled from the C-space, and it can be called a probabilistic roadmap because, as the number of samples tends to infinity, the likelihood that the graph is a true roadmap goes to 100 percent. An advantage of a PRM graph over a grid-based graph is that the structure of the free C-space is generally captured by the PRM with many fewer nodes than with a grid graph. PRMs have been used to solve complex motion planning problems in high-dimensional C-spaces. This figure shows a PRM for a two-dimensional C-space. To construct a PRM, we can use this algorithm. In the first phase, we generate N samples of the free C-space. These free configurations can be generated by uniformly randomly sampling the C-space and only keeping the sample if it is collision-free, but non-uniform sampling strategies can also be used to increase the likelihood that the PRM is able to represent narrow passageways in the C-space with a smaller number of samples. The N free-space configurations generated in the first phase of the algorithm are the nodes of the graph. The second phase of the algorithm tries to connect the nodes with edges. For each node, we find a set of k nearby nodes. Then, for each of these neighbor nodes, we try to find a path from the original node to the neighbor. To do this, we use a very simple and fast local path planner which does not attempt to avoid obstacles. 
For example, the planner could just choose a straight line between the original node and the neighbor. We then check whether this path is collision free, and if so, we add an edge between the two nodes. At the end of this second phase of the PRM construction algorithm, we have a graph that should approximately represent the free space, depending on our choice of the number of samples N, the number of neighbors k, the sampling algorithm, and the local path planner. The choice of the sampling algorithm and the local path planner provides a lot of flexibility to customize the basic algorithm. Once we have preprocessed the C-space by generating the PRM, we can solve different path planning problems by connecting different start and goal configurations to the PRM, as you see here, and using A-star search to find a good path through the PRM. Thus the PRM planner is usually thought of as a multiple-query planner: we invest time to generate a good representation of the free C-space so we can then efficiently solve several motion planning problems. In the next video we'll see a different sampling-based motion planner typically used for single queries. |
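The two construction phases described above can be illustrated with a minimal PRM sketch for a 2-D C-space. The circular obstacle, the straight-line local planner, and the values of N and k are all illustrative choices, not prescribed by the video.

```python
import numpy as np

# Minimal PRM sketch: phase 1 samples free configurations, phase 2 tries to
# connect each node to its k nearest neighbors with a straight-line local
# planner. The obstacle and parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
obstacles = [((0.5, 0.5), 0.2)]  # (center, radius) of circular C-obstacles

def collision_free(q):
    return all(np.linalg.norm(q - np.array(c)) > r for c, r in obstacles)

def edge_free(q1, q2, steps=20):
    # straight-line local planner: check sample points along the segment
    return all(collision_free(q1 + t * (q2 - q1))
               for t in np.linspace(0.0, 1.0, steps))

def build_prm(N=100, k=5):
    nodes = []
    while len(nodes) < N:                 # phase 1: sample free configurations
        q = rng.random(2)
        if collision_free(q):
            nodes.append(q)
    nodes = np.array(nodes)
    edges = set()
    for i, q in enumerate(nodes):         # phase 2: connect nearby nodes
        d = np.linalg.norm(nodes - q, axis=1)
        for j in np.argsort(d)[1:k + 1]:  # skip index 0, the node itself
            if edge_free(q, nodes[j]):
                edges.add((int(min(i, j)), int(max(i, j))))
    return nodes, edges
```

A query would then connect the start and goal to nearby nodes and run A-star over this graph.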
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_114_Motion_Control_with_Torque_or_Force_Inputs_Part_3_of_3.txt | Recall our dynamics for a single-joint robot. I'll lump together the gravity and friction terms into a single term h. In the last video we learned about PID feedback control. While it can give good performance, it still requires error to accumulate before it will command a torque. If we have a good dynamic model of the robot, there is no need to wait for error to start commanding a torque. Our dynamic model is M-tilde and h-tilde, based on estimates of the inertia of the robot, the gravitational term, and friction. If the model is perfect, then M-tilde equals M and h-tilde equals h at all times. With our dynamic model, we can design a feedforward controller. At each time instant, we apply the torque M-tilde times theta_d-double-dot plus h-tilde of (theta_d, theta_d-dot). The desired position, velocity, and acceleration at any time instant comes from the known desired trajectory at that time instant; there is no feedback from the joint. Of course feedforward control will not work well on its own, as we never have a perfect model of the dynamics. There is no mechanism to recover from errors. Let's consider a control law that combines the benefits of a good dynamic model and the stabilization of the PID controller. Here is one possibility. The first term of the control law is M-tilde times an acceleration, which is the sum of the feedforward acceleration at this time instant plus an acceleration generated by a PID controller. The M-tilde model turns the feedforward plus feedback acceleration into a joint torque. The second term of the control law provides the torque h-tilde that is estimated to be needed to balance friction and gravity at the current state. Notice that if the error is always zero, this control law reduces to the feedforward controller. This control law goes by different names, but it's often called computed torque control. 
Because of the possibility of instability raised by integral control, the integral term can be eliminated. If the model M-tilde and h-tilde is exact, we can remove the tildes in our analysis of the control law. Then the commanded acceleration theta-double-dot, which is the sum of feedforward and feedback terms, is achieved exactly by the commanded torques. The second derivative of the error is theta_d-double-dot minus theta-double-dot. Plugging in the commanded acceleration, the error dynamics can be expressed like this. Taking the derivative, we get this third-order homogeneous differential equation, yielding zero steady-state error. This linear error dynamics applies along arbitrary trajectories of the robot, since the use of the dynamic model effectively linearizes the dynamics. Let's apply the computed torque controller to track the trajectory shown here. Our model of the dynamics is not perfect, as you can see from this simulation of the trajectory when using only feedforward control. The robot starts out approximately on the trajectory, but over time the actual trajectory diverges from the desired due to error in the model. We could instead try a PID controller with no dynamic model. Finally, we could try computed torque, which provides better tracking than either of the other two controllers. We could also look at a standard measure of the control effort exerted by the motor, the time integral of the torque squared. At first, the PID control effort is the lowest, before the error builds up to start driving the torque. Soon, though, the PID control effort exceeds the control effort of the computed torque method. This is typical behavior of the computed torque method: it provides better trajectory tracking than pure feedback control with lower control effort. In short, if we have a reasonable model of the robot's dynamics, we should use it in the controller. To summarize, the single-joint computed torque control law is given here. 
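The single-joint law can be sketched in code. Only the structure of the control law comes from the discussion above; the pendulum-style model for h-tilde (gravity plus viscous friction) and the gain and parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of the single-joint computed torque law:
# tau = M_tilde*(thetad_ddot + Kp*e + Ki*int_e + Kd*e_dot) + h_tilde.
# h_tilde models gravity plus viscous friction for an assumed point-mass
# pendulum; all numerical values are illustrative.

def h_tilde(theta, theta_dot, m=1.0, g=9.81, r=0.5, b=0.1):
    return m * g * r * np.cos(theta) + b * theta_dot

def computed_torque(theta, theta_dot, thetad, thetad_dot, thetad_ddot,
                    int_e, Kp=100.0, Ki=0.0, Kd=20.0, M_tilde=1.0):
    e = thetad - theta
    e_dot = thetad_dot - theta_dot
    accel = thetad_ddot + Kp * e + Ki * int_e + Kd * e_dot  # ff + fb accel
    return M_tilde * accel + h_tilde(theta, theta_dot)

# with zero error the law reduces to feedforward plus gravity compensation
print(computed_torque(0.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 4.905 = m*g*r
```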
This controller generalizes readily to multi-joint robots. The difference is that tau, theta_d, and theta_e are vectors, h-tilde is a vector of Coriolis, gravity, and possibly friction terms, M-tilde is a model of the robot's configuration-dependent mass matrix, and K_p, K_i, and K_d are diagonal matrices, each consisting of a positive scalar times the identity matrix. The linearization of the dynamics provided by the model M-tilde and h-tilde makes each joint have the same stable linear error dynamics. This is a block diagram of the computed torque controller. The measured position and velocity of the robot feed back to the PID controller, to the calculation of the M-tilde matrix, and to the calculation of the h-tilde vector. This controller provides good performance when the dynamic model of the robot is reasonably good. If the dynamic model is poor, then using it in the controller could actually hurt performance compared to a model-free feedback controller. Also, evaluating the mass matrix and h-vector could be computationally intensive for real-time control. For this reason, simpler versions of this control law are common. For example, PD feedback control plus gravity compensation can provide good performance, and it is much less computationally expensive to evaluate a model of gravitational torques than it is to compute a full dynamic model. Finally, we could express the computed torque control law in terms of the task-space dynamics of the robot, derived in Chapter 8. The end-effector wrench F_b, expressed in the end-effector frame, equals the end-effector mass matrix Lambda times the end-effector acceleration V_b-dot plus the wrench eta due to Coriolis and gravity terms. Our dynamic model is Lambda-tilde and eta-tilde. Recalling the joint-space computed torque control law, by analogy we write the task-space computed torque control with F_b, Lambda-tilde, and eta-tilde replacing tau, M-tilde, and h-tilde, respectively. 
The analog to the feedforward acceleration theta_d-double-dot is the time-derivative of the desired twist, V_d-dot. Technically, this feedforward acceleration should be expressed in the current end-effector frame, but let's ignore that detail. The analogy to the PI terms replaces theta_e by the twist X_e that takes the current end-effector frame to the desired end-effector frame in unit time. Finally, the analogy to theta_e-dot is the twist V_e, which is the desired twist V_d, expressed in the current end-effector frame, minus the current twist V_b. This is the resulting control law, and the actual torques applied at the joints are obtained by pre-multiplying the control wrench F_b by the Jacobian transpose. Other simpler task-space control laws could be formulated, but they all involve computing an end-effector wrench and then pre-multiplying by the Jacobian transpose to get the joint forces and torques tau. This completes our study of motion control where the controller commands joint forces or torques. In the next video we move on to force control. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_323_Exponential_Coordinates_of_Rotation_Part_2_of_2.txt | In this video we'll see how the matrix exponential can be applied to integrate the angular velocity of a rotating rigid body. Let's start with this frame of coordinate axes, which will rotate about a unit angular velocity axis. To understand the motion of the coordinate axes, it suffices to consider just one of the coordinate axes, since the same reasoning applies to any axis. Let's call this remaining vector p. As the vector p rotates about the rotation axis, it traces out a circle. The purpose of this video is to determine the final location of the vector if it rotates an angle theta about the rotation axis. We will do this by integrating the differential equation of motion describing the motion of p. Here is a picture of our initial vector, p at time 0, and the unit rotation axis omega-hat. As p begins to rotate, it traces out a circle around the rotation axis. The 3-vector linear velocity is tangent to the circle at any time, and is given by omega-hat cross p. After rotating an angle theta, the vector ends up at p at time theta. At any instant of time, the time derivative of p is given by p-dot = omega-hat cross p. We can write this as a differential equation p-dot of t equals omega-hat cross p of t. The angular velocity is constant. Using our 3 by 3 skew-symmetric matrix notation, this becomes p-dot of t equals bracket omega-hat times p of t. This is a vector differential equation, whose solution, as we saw in the last video, is calculated using the matrix exponential. In general, a matrix exponential can be calculated using a series expansion, but when the matrix is 3 by 3 and skew symmetric, the series expansion has a simple closed form: the 3 by 3 identity matrix plus sin of theta times bracket omega-hat plus 1 minus cosine of theta times bracket omega-hat squared. 
In other words, the matrix exponential takes the skew-symmetric representation of the exponential coordinates omega-hat theta and calculates the corresponding rotation matrix. This equation is often called Rodrigues' formula. Essentially, exponentiation integrates the angular velocity omega-hat for time theta seconds, going from the identity matrix to the final rotation matrix R. We can also define the inverse of the matrix exponential, the matrix logarithm, which takes a rotation matrix R and returns the skew-symmetric matrix representation of the exponential coordinates that achieve it, starting from the identity orientation. Just as the matrix exponential is like integration, the matrix log is like differentiation: it returns the unit angular velocity and the integration time that achieves the rotation matrix R. The matrix log is an algorithm that inverts Rodrigues' formula. Later, when we're studying the kinematics of robots, the matrix exponential and log will become very useful. Basically, for a revolute joint, the unit angular velocity omega-hat represents the axis of rotation of the joint, and theta represents how far that joint has been rotated. Before we get to robot kinematics, though, we have to generalize the matrix exponential and log to cases where frames both rotate AND translate. In other words, general rigid-body motion. We'll start the process of generalizing from rotations to general rigid-body motions in the next video. |
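Rodrigues' formula is short to implement; here is a minimal sketch (the function names are ours, though the book's accompanying software provides equivalent routines).

```python
import numpy as np

# Rodrigues' formula: R = I + sin(theta)*[w] + (1 - cos(theta))*[w]^2,
# where [w] is the 3x3 skew-symmetric form of the unit axis omega_hat.

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def matrix_exp3(omega_hat, theta):
    W = skew(omega_hat)  # omega_hat is assumed to be a unit vector
    return np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)

# rotating 90 degrees about the z-axis takes the x-axis to the y-axis
R = matrix_exp3(np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0])))  # [0. 1. 0.]
```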
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_331_Homogeneous_Transformation_Matrices.txt | We can represent the configuration of a body frame {b} in the fixed space frame {s} by specifying the position p of the frame {b}, in {s} coordinates, and the rotation matrix R specifying the orientation of {b}, also in {s} coordinates. We gather these together in a single 4 by 4 matrix T, called a homogeneous transformation matrix, or just a transformation matrix for short. The bottom row, which consists of three zeros and a one, is included to simplify matrix operations, as we'll see soon. The set of all transformation matrices is called the special Euclidean group SE(3). Transformation matrices satisfy properties analogous to those for rotation matrices. Each transformation matrix has an inverse such that T times its inverse is the 4 by 4 identity matrix. The product of two transformation matrices is also a transformation matrix. Matrix multiplication is associative, but not generally commutative. Also analogous to rotation matrices, transformation matrices have three common uses: The first is to represent a rigid-body configuration. The second is to change the frame of reference of a vector or a frame. The third is to displace a vector or a frame. To represent a frame {b} relative to a frame {s}, we construct the matrix T_sb consisting of the rotation matrix R_sb, as we saw in previous videos, and the position p of the {b} frame origin in {s} frame coordinates. The representation of the {s} frame relative to the {b} frame is just the inverse. As with the rotation matrix, the matrix inverse corresponds to switching the order of the subscripts. To change the frame of reference of a configuration, we can use the same subscript cancellation rule as for rotation matrices. If we know T_sb and T_bc, we can calculate T_sc, representing the configuration of frame {c} in frame {s}, by multiplying T_sb by T_bc. The inverse of T_sc is T_cs. 
Just as we followed T_sb and then T_bc to get to T_sc, we can follow T_bc inverse and T_sb inverse to get T_cs. We can also change the frame of reference for a point p in space. Let p_b and p_s be the representations of the point in the {b} and {s} frames. We could naively try our subscript cancellation rule again, but this doesn't work: T_sb and p_b have a dimension mismatch. To fix this, we simply append a 1 to the end of each vector, making the 3-vector into a 4-vector. This is called the homogeneous coordinate representation of the 3-vector. Finally, a transformation matrix can be used to displace a point or a frame. Consider the fact that any configuration can be achieved from the initial configuration by first rotating, and then translating. In this animation, a frame initially at the zero orientation rotates about a fixed axis omega-hat a distance theta. It then translates according to the vector p, which is expressed in the coordinates of the initial frame T_zero. Its final configuration is given by T, where the Translation and Rotation operators are expressed by these matrices. T can be viewed not only as a configuration, but also as the transformation that takes the identity matrix to T. Let's consider a specific example of using a transformation matrix T to move a frame. Our transformation T is defined by a translation of 2 units along the y-axis, a rotation axis aligned with the z-axis, and a rotation angle of 90 degrees, or pi over 2. We will use the transformation T to move the {b} frame relative to the {s} frame. The {b} frame is initially represented by T_sb. Since we have two frames, we need to know whether the transformation vectors p and omega-hat are expressed in the {b} frame or the {s} frame. The answer depends on whether T right-multiplies or left-multiplies T_sb. If we left-multiply T_sb by T, the vectors p and omega-hat are considered to be expressed in the frame of the first subscript of T_sb, the {s} frame. 
Let's animate the transformation T. The rotation axis z and the translation axis y, expressed in the {s} frame, are shown. First the {b} frame will rotate 90 degrees about the z-axis of the {s} frame, and then it will translate 2 units along the y-direction of the {s} frame. Let's run the animation. And now one more time. Notice where the {b} frame ends up. We call this new frame {b-prime}. If instead we right-multiply T_sb by T, the vectors p and omega-hat are considered to be expressed in the frame of the second subscript of T_sb, the {b} frame. Also, the order of the operations is reversed: first we translate T_sb, and then we rotate it. Let's animate the motion. Watch how the {b} frame first translates by 2 units in the y-direction of the {b} frame, then rotates about the z-axis of the {b} frame. Let's run the animation. Notice that the body z-axis, used for rotation in the second step, moved along with the frame during the initial translation. And now one more time. Notice where the {b} frame ends up. We call this new frame {b-double-prime}. In summary, if the transformation T is applied on the right, the vectors p and omega-hat are considered to be expressed in the body frame, moving the frame {b} to the new frame {b-double-prime}. If the transformation T is applied on the left, p and omega-hat are considered to be expressed in the space frame, moving the frame {b} to the new frame {b-prime}. In the next video we introduce our representation of a rigid-body linear and angular velocity, called a twist. |
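The left-versus-right multiplication rule can be checked numerically. T below is the transformation from the example, a rotation of 90 degrees about z and a translation of 2 along y; the initial frame T_sb chosen here is an illustrative assumption.

```python
import numpy as np

# Left vs right multiplication of a transformation T. T rotates 90 degrees
# about z and translates 2 along y; the initial frame T_sb is our own choice.

def transform(R, p):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

Rz90 = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])
T = transform(Rz90, [0, 2, 0])
T_sb = transform(np.eye(3), [1, 0, 0])  # {b}: one unit along the x_s axis

T_sb_prime = T @ T_sb    # p and omega-hat interpreted in the {s} frame
T_sb_dprime = T_sb @ T   # p and omega-hat interpreted in the {b} frame
print(T_sb_prime[:3, 3])   # [0. 3. 0.]
print(T_sb_dprime[:3, 3])  # [1. 2. 0.]
```

The two products put the frame origin in different places, even though the same T is applied, which is exactly the point of the animation.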
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1223_Force_Closure.txt | Given a set of frictional contacts acting on a body, it is in force closure if the positive span of the wrench cones is the entire wrench space. Another way of saying this is that the contacts can theoretically resist any wrench applied to the body. The test for this condition is essentially the same as for first-order form closure: First, we construct the matrix F, whose columns are the j friction cone edges of the contacts. Spatial friction cones are approximated by polyhedral friction cones with a finite number of edges. The F matrix has j columns, one for each friction cone edge, and either 3 rows or 6 rows, depending on whether the body is planar or spatial. The contacts yield force closure if and only if the F matrix is full rank and there is a vector k of positive coefficients multiplying the friction cone edges such that F times k equals zero. This ensures that the positive span of the friction cone edges is the entire wrench space. Note that our definition of force closure depends only on the contact locations, contact normals, and friction coefficients. A force-closure grasp does not mean that the contacts necessarily resist all wrenches. For example, the fingertip contacts of a hand might satisfy our definition of force closure, but the joints of the fingers might not be able to generate the squeezing forces necessary to create a contact wrench in an arbitrary direction. If the friction coefficient at each contact is zero, then the friction cone is just along the contact normal, and frictionless force closure is therefore equivalent to first-order form closure. If there is nonzero friction at the contacts, however, force closure is possible with as few as 2 contacts in the plane or 3 contacts in space. As an example, this figure shows a triangular object grasped by two disks. The composite wrench cone due to the 2 frictional contacts is shown using moment labels. 
If this external wrench is applied, then the fingers would need to be able to create the opposing red wrench to prevent the triangle from moving. Since the line of action of the red wrench passes through the region labeled minus, it cannot be generated by the two frictional contacts. Therefore, the triangle would move because of this external wrench. If we increase the friction coefficient and move the fingers, however, then there is no consistent moment label. This means that the frictional contacts can generate any wrench. As an example, imagine the triangle is subjected to the same external wrench. Then the wrench shown in green has to be generated by the fingers to maintain static balance. This wrench can be obtained as a positive linear combination of one friction cone edge with a force inside the other friction cone. The parallelogram vector sum rule shows us that the two fingers have to squeeze very hard, however. Two fingers are not enough for force closure of a spatial body. There is no way to resist moments about the axis between the two fingers. If the fingertip is soft, however, it can deform to create a contact patch with the body. The contact patch can provide frictional moment about the normal vector, and two soft fingers can create force closure. If the contacts are just points, however, at least 3 contacts are needed to satisfy force closure, as shown in the book. When planning a grasp by a robot hand, force closure is a good minimum requirement. Form closure is usually too strict, requiring too many contacts. If you're machining a workpiece, however, and that workpiece will have significant forces applied to it, a form-closure fixture is a good idea. In the final videos of this chapter, we'll apply what we've learned about contact kinematics and contact forces to solve manipulation problems which don't involve grasping. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1217_Form_Closure.txt | If a rigid body is fully immobilized by a set of rigid stationary fixtures, we say it is in form closure. In particular, first-order form closure means that only the zero twist satisfies the impenetrability constraints for all the contacts. This condition is equivalent to the condition that the positive linear span of the contact normal wrenches is the entire wrench space, which is 6-dimensional for spatial bodies and 3-dimensional for planar bodies. Remembering that at least n+1 vectors is needed to positively span an n-dimensional space, this means that first-order form closure requires at least 4 point contacts for a planar body and at least 7 point contacts for a spatial body. These are minimum requirements. Some objects, like a sphere, cannot be form-closure grasped for any number of contacts, as there is no way to kinematically prevent rotation of the sphere. This figure shows a planar body with three point contacts, as indicated by the contact normals. The body is not in first-order form closure, as it has a non-empty cone of feasible twists, drawn as rotation centers. In any case, because there are only 3 contacts, the body cannot be in first-order form closure. If we add a fourth contact at the top left, the set of feasible rotation centers is reduced to a small region with a minus label. The body can still rotate clockwise about any point inside the gray region. If we change the angle of the fourth contact constraint, however, the feasible rotation centers vanish and the body is in form closure. This figure shows a bowtie-shaped planar body in a form-closure grasp by 2 fingers creating 4 contact normals. Our graphical methods are convenient for visualizing form closure in the plane, but we can also define a computational test for form closure. Let F be the matrix of wrenches due to the j contact normals, where each wrench is a column of the matrix. 
Then the contacts create first-order form closure if and only if the rank of the F matrix is n, where n is 3 for planar bodies and 6 for spatial bodies, and F times k is equal to zero, where k is a j-vector of positive coefficients multiplying the wrenches. These two conditions taken together ensure that any wrench can be generated as a positive linear combination of the individual wrenches. This test can be implemented as a linear program in any scientific computing environment. The planar triangle shown here is not in form closure, because the full rank condition is not satisfied. The three contact normals cannot prevent pure rotation about the center of the triangle. Similarly, a first-order analysis tells us that this large triangle is also not in form closure, because our graphical contact analysis does not rule out rotation about the center of the triangle. Finally, a first-order analysis indicates that this concave body can rotate about any rotation center on the vertical line. In fact, however, both the large triangle and the concave body are in form closure by a more detailed analysis of the contact geometry, while the small triangle still is not in form closure. By a higher-order analysis, form closure can sometimes be achieved with as few as 2 contacts. To summarize, if an object is in form closure by a first-order analysis, then it is also in form closure by a higher-order analysis. But if a first-order analysis concludes that only sliding and rolling are possible, then a higher-order analysis may conclude form closure. You can think of the first-order test as a conservative test for form closure. This ends our purely kinematic analysis of contact. In the next video, we will begin to study the forces that can be transmitted through contacts. |
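As noted, the two-condition test can be implemented as a linear program. Here is one possible sketch using SciPy: strict positivity of k is enforced by the bound k >= 1, which loses no generality since scaling k does not change F k = 0. The example F matrix, for a square with four offset edge contacts, is our own illustration.

```python
import numpy as np
from scipy.optimize import linprog

# First-order form-closure test: rank(F) = n, and F k = 0 for some strictly
# positive k. Positivity is enforced via the bound k >= 1 (scale-invariant).

def form_closure(F):
    n, j = F.shape
    if np.linalg.matrix_rank(F) != n:
        return False  # the wrenches cannot span the n-dim wrench space
    res = linprog(c=np.ones(j), A_eq=F, b_eq=np.zeros(n),
                  bounds=[(1, None)] * j)
    return bool(res.success)

# illustrative example: a square held by four offset edge contacts,
# with planar wrenches written as columns [m_z, f_x, f_y]
F = np.array([[ 0.5,  0.5, -0.5, -0.5],
              [-1.0,  1.0,  0.0,  0.0],
              [ 0.0,  0.0, -1.0,  1.0]])
print(form_closure(F))  # True
```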
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_81_Lagrangian_Formulation_of_Dynamics_Part_2_of_2.txt | In the last video we derived the equations of motion for a 2R robot. In this video we focus on the velocity-product terms, c of (theta, theta-dot). In particular, terms that have the square of a single joint velocity are called centripetal terms. Terms that have a product of two different joint velocities are called Coriolis terms. To gain some intuition as to the physical meaning of these terms, let's continue to use the 2R arm. To focus on the velocity-product terms, let's assume that gravity and the joint accelerations are zero. Then the force needed to move mass_1 is f_1 equals m_1 times the x-y-z linear acceleration of the mass. We can write these accelerations in terms of joint one's velocity and acceleration as you see here, where c_1 means cosine of theta_1 and s_1 means sine of theta_1. Notice that there are joint velocity squared terms. In other words, zero acceleration of the joints does not mean zero acceleration of the mass in the linear coordinates x, y, and z. We could do the same thing for mass_2 and get this expression for f_2 in terms of the joint velocities and accelerations. C_1,2 means cosine of theta_1-plus-theta_2 and s_1,2 means sine of theta_1-plus-theta_2. Now let's put the robot at theta_1 equal to zero and theta_2 equal to pi over 2. At this configuration, the velocity-product acceleration terms for mass_2 are given here. Now consider the case where theta_1-dot is positive but theta_2-dot is zero. Then the mass travels around a circle with its center at the first joint. The centripetal acceleration of the mass is proportional to theta_1-dot-squared toward joint 1. Without that centripetal acceleration, the mass would fly off on a straight-line tangent to the circle. 
Also notice that the line of acceleration of mass_2 passes through the first joint, and therefore the line of force needed to create that acceleration creates no moment about joint 1. So joint 1 does not have to apply a torque at this configuration and velocity. Joint 2, on the other hand, has to apply a positive torque to keep the mass moving along the circle. Next, consider the case where theta_1-dot is zero but theta_2-dot is positive. Now the mass travels around a circle with its center at the second joint, and the centripetal acceleration is proportional to theta_2-dot-squared. Finally, consider the case where both theta_1-dot and theta_2-dot are positive. In addition to the centripetal accelerations, there is now a Coriolis acceleration toward joint 2. Mass_2 times this Coriolis acceleration is a force that creates negative moment about joint 1. In other words, to keep both joint velocities constant, we must apply a negative torque to joint 1. If zero torque were applied to joint 1, joint 1 would accelerate. This is what happens to a skater in a spin when he pulls in his outstretched arms--since his inertia decreases, his spinning velocity increases. In summary, we can write a robot's equations of motion this way, but we can also write the velocity-product terms as theta-dot-transposed times Gamma of theta times theta-dot. I personally like this way of writing the velocity-product terms, as it emphasizes the fact that the terms are quadratic in the joint velocities. Also, it emphasizes that Gamma of theta depends only on the joint values theta. You can think of Gamma as a three-dimensional n-by-n-by-n matrix, whose entries are called the Christoffel symbols of the mass matrix. Viewed this way, Gamma_i is an n-by-n matrix constructed of components Gamma_i,j,k. The Christoffel symbols Gamma_i,j,k are calculated from the derivatives of the mass matrix with respect to the joint variables, and the velocity-product vector can be calculated as shown here. 
Although this looks complex, the main point is that the velocity-product term can be written explicitly as a quadratic in the joint velocity vector. Just keep in mind this simple example. A mass m with a scalar velocity x-dot has a scalar momentum p. The force acting on the mass is the time derivative of the momentum. If we assume the mass is constant, then f equals m x-double-dot. If the mass changes with the configuration, though, as it does for an articulated robot, then by the chain rule for derivatives, the time derivative of the momentum has a term depending on d_m d_x, the derivative of the mass with respect to the configuration. D_m d_x plays the same role as the Christoffel symbols of a mass matrix. Back to our list of ways to compactly represent a robot's dynamics, another common way to write the velocity-product term is to express it as the product of a Coriolis matrix and the joint velocity vector. The elements c_i,j of the Coriolis matrix can be constructed from the Christoffel symbols and the joint velocity vector. Finally, we sometimes lump all the terms not dependent on theta-double-dot into a single vector, h of (theta, theta-dot). To any of these forms of the equations of motion, we can add the joint forces and torques needed to create a desired wrench F_tip at the end-effector. Now that we have a better understanding of the velocity-product terms, in the next video we will focus on the mass matrix. |
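The Christoffel construction above can be sketched numerically: differentiate a mass-matrix model with finite differences, form Gamma, and contract it with the joint velocity vector. The finite-difference step and the one-degree-of-freedom check are illustrative assumptions.

```python
import numpy as np

# Numerical Christoffel symbols of a mass-matrix model M(theta), and the
# velocity-product vector theta_dot^T Gamma theta_dot. The central-difference
# step and the example mass matrix are illustrative choices.

def christoffel(M, theta, eps=1e-6):
    n = len(theta)
    dM = np.zeros((n, n, n))  # dM[i, j, k] = dM_ij / dtheta_k
    for k in range(n):
        d = np.zeros(n)
        d[k] = eps
        dM[:, :, k] = (M(theta + d) - M(theta - d)) / (2 * eps)
    # Gamma[i,j,k] = (dM_ij/dth_k + dM_ik/dth_j - dM_jk/dth_i) / 2
    return 0.5 * (dM + np.transpose(dM, (0, 2, 1))
                  - np.transpose(dM, (2, 0, 1)))

def velocity_product(M, theta, theta_dot):
    Gamma = christoffel(M, theta)
    return np.einsum('ijk,j,k->i', Gamma, theta_dot, theta_dot)

# 1-dof check: M = [[2 + theta^2]] gives c = theta * theta_dot^2
M1 = lambda th: np.array([[2.0 + th[0] ** 2]])
print(velocity_product(M1, np.array([1.0]), np.array([2.0])))  # ~[4.]
```

The one-dimensional case mirrors the m, d_m d_x example above: the velocity-product term is one-half the configuration derivative of the mass times the velocity squared.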
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_114_Motion_Control_with_Torque_or_Force_Inputs_Part_2_of_3.txt | In the previous video, we learned that setpoint PD control can eliminate steady-state error for a torque-controlled joint in the absence of gravity. If we add gravity, though, the error dynamics are no longer homogeneous. As a result, even if the error dynamics are stable, at steady state when theta-double-dot and theta-dot are zero, there will be a nonzero steady state error, m g r cosine theta over K_p. For example, imagine the initial resting state of the joint is at minus pi over 2. The desired setpoint is theta_d equals zero. The error response looks like this. The joint stops short of the desired angle. If we plot the torque due to the proportional term and the torque due to the derivative term, the derivative term goes to zero in steady state, while the proportional term provides the torque that holds the arm at its position in gravity. The point is that there must be error for the controller to provide torque in steady-state, and therefore the steady-state error cannot be zero. One solution, as we've seen before, is to add an integral term to the controller, giving us setpoint PID control. To perform a linear analysis, we have to address the nonlinear term that depends on the cosine of the angle. I will replace this term by a constant, tau_disturbance. Replacing by a constant is justified in the upcoming analysis, since I will be considering the steady-state behavior of the controlled robot, when this nonlinear term approaches a constant. Equating the dynamics and the control torque, we get this error dynamics. To get a differential equation, we can differentiate both sides. The result is a homogeneous third-order differential equation with this characteristic equation. By adding an integral term to the controller, we added a state to the dynamics, increasing the order of the differential equation from second order to third order. 
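The steady-state error of the gravity-loaded PD controller described above can be seen in a short simulation. This is a sketch with hypothetical joint parameters (inertia I, point mass m at distance r) and simple Euler integration:

```python
import numpy as np

# Hypothetical single-joint parameters: inertia I about the joint,
# point mass m at distance r from the joint, gravity g.
I, m, g, r = 0.5, 1.0, 9.81, 0.1
Kp, Kd = 10.0, 5.0
theta_d = 0.0                        # desired setpoint
theta, dtheta = -np.pi/2, 0.0        # initial resting state, straight down

dt = 1e-3
for _ in range(int(10.0/dt)):        # simulate 10 seconds
    e = theta_d - theta
    tau = Kp*e + Kd*(-dtheta)        # setpoint PD control
    ddtheta = (tau - m*g*r*np.cos(theta)) / I
    theta  += dtheta*dt
    dtheta += ddtheta*dt

e_ss = theta_d - theta               # nonzero steady-state error
# At steady state, Kp*e_ss balances gravity: e_ss = m*g*r*cos(theta)/Kp,
# so the joint stops short of the setpoint.
```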
For stability, the roots must all have a negative real component. As for the PD controller, K_d must be greater than minus b and K_p must be greater than zero. The gain K_i must also be greater than zero, but unlike K_p and K_d, K_i also has an upper bound for stability. Before, for our second-order differential equation, the only dangers in choosing large gains were due to practical considerations, like actuator limits, unmodeled dynamics, and finite servo frequencies. Now, with a third-order PID-controlled system, even our ideal linear model shows that choosing K_i too large could result in instability. Let's see this graphically by plotting the roots in the complex plane. First, let's choose K_i equal to zero, and choose the gains K_p and K_d to give critical damping, two coincident roots on the real axis. We'll keep K_p and K_d constant. Next, let's add a small positive K_i. This creates a third root close to the origin. As we increase the gain K_i, the two coincident roots move away from each other on the real axis, and the root at the origin moves left. When we have increased K_i sufficiently, two roots meet on the real axis, while the third has moved further left. Since the two coincident roots are much slower than the root far to the left, the transient error response of this third-order system is similar to that for a critically damped second-order system. The response is slower than for the original critically damped PD controller, though, because the coincident roots are now further to the right. If we continue to increase K_i, the two coincident roots break away from the real axis and move into the right-half plane when K_i reaches its upper bound for stability. The error dynamics would be unstable for the roots shown. We've drawn the root locus for K_i increasing from zero, and it demonstrates the key features of adding integral control: The integral term can improve steady-state response, but it can worsen the transient response. 
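The upper bound on K_i can be verified numerically. This sketch (hypothetical joint parameters) finds the roots of the third-order characteristic equation I s-cubed + (b + K_d) s-squared + K_p s + K_i and tests stability on either side of the Routh-Hurwitz bound K_i < (b + K_d) K_p / I:

```python
import numpy as np

# Hypothetical joint inertia, damping, and PID gains.
I, b, Kp, Kd = 0.5, 0.1, 10.0, 2.0

def roots(Ki):
    # Characteristic equation: I s^3 + (b + Kd) s^2 + Kp s + Ki = 0.
    return np.roots([I, b + Kd, Kp, Ki])

Ki_max = (b + Kd)*Kp/I               # Routh-Hurwitz upper bound on Ki

stable_re   = max(r.real for r in roots(0.5*Ki_max))   # below the bound
unstable_re = max(r.real for r in roots(1.5*Ki_max))   # above the bound
# Below the bound all roots are in the left-half plane; above it, a
# complex pair has crossed into the right-half plane, as in the root locus.
```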
In particular, adding an integral term could cause overshoot and oscillation, and in the worst case, instability. Since stability is paramount in control, often robot controllers avoid the use of an integral term, prioritizing stability over steady-state error reduction. Or, if a nonzero gain K_i is used, it is chosen to be small. In addition, the magnitude of the integrated error may be capped at a maximum value. Let's design our PID controller to place the roots as shown here, which will yield an underdamped error response. Returning to our example setpoint control problem, we recall that our original PD controller results in steady-state error. Adding a positive gain K_i, we see that the PID controller drives the steady-state error to zero. The overshoot shows that the response is somewhat underdamped. Examining the torques due to the proportional, integral, and derivative terms, we see that the proportional and derivative terms both go to zero, while the integral term reaches a nonzero steady state. That is the torque that allows the joint to resist gravity even when the error is zero. In the next video, we will combine PID feedback control with a dynamic model of the arm to derive our gold-standard motion controller, the computed torque controller. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_89_Actuation_Gearing_and_Friction.txt | We've been assuming that there is an actuator at each joint that directly creates the joint force or torque. This is the idea behind robots called direct-drive robots: there is a motor at each joint that creates torque without any gearing. Such designs are often impractical, though, since an electric motor with the right power rating often spins at high speed and low torque, whereas most robotic applications require high torque. In practice, robots use many different types of actuators, such as electric motors, hydraulic actuators, and pneumatic actuators, and different types of mechanical power transformers and transmissions, such as timing belts and pulleys, chains and sprockets, cables, and gear trains. Each combination of actuator and transmission has its own dynamic characteristics that should be taken into account in the robot dynamics. In this final video of Chapter 8, we consider one popular choice for actuation: an electric motor with a gearhead located at each joint. The gearhead increases the torque of the motor while reducing the speed. This is an image of a typical robot actuator. At one end of the motor is an encoder, a sensor that measures how far the motor has rotated, so we know the joint position. The motor itself consists of a stator, the portion we think of as remaining stationary, and the rotor, the portion that rotates relative to the stator. The rotor includes the motor shaft, while the stator includes the motor housing. Because this particular electric motor is a brushed motor, where current is carried to the motor coils, or windings, through brushes sliding on a commutator, the windings are part of the rotor and the magnets are part of the stator. For brushless motors, which are more commonly used in robots, the windings are part of the stator and the magnets are part of the rotor. 
Both brushed and brushless motors generate torque by sending a current through windings in a magnetic field created by magnets. Because the motor spins at high speed, often up to 10,000 revolutions per minute, but low torque, it is attached to a gearhead. It is the output shaft of the gearhead that spins the next link. An ideal gearhead decreases the speed by the gear ratio G, where G is greater than 1, and increases the torque by the factor G. This preserves the power of the motor while transforming the motor's output to more useful high torques and low speeds. In practice, the torque amplification is somewhat less than G, due to friction, gear-teeth impact, and other power losses in the gearhead. This figure shows how a geared motor is typically used in a robot joint. The stator is attached to link i-minus-1 and the gearhead output shaft is attached to link i. The mass and inertia of the stator should be counted as part of link i-minus-1, while the mass and inertia of the rotor should be counted as part of link i. It is not quite this simple, though, because the axis of the joint, which is aligned with the gearhead axis, may not be the same as the rotor axis, and because the rotor spins at a different speed than the joint. Typically the mass and inertia of a motor's rotor are much less than the mass and inertia of link i, so it's tempting to ignore the rotor's mass and inertia. But the rotor spins G times faster than link i because of the gearhead, so the effect of the motor's inertia could be significant. To see this, we can calculate the kinetic energy of the rotor as one-half the scalar inertia of the rotor about its rotational axis times the square of G theta-dot, where theta-dot is the joint velocity. This means that the apparent inertia of the rotor about its axis is G-squared times I_rotor. This is called the apparent inertia since someone manually moving joint i would feel this apparent rotor inertia, in addition to the inertia of the link. 
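The apparent-inertia effect can be sketched numerically. Below, a hypothetical rotor inertia is amplified by G-squared and added to the diagonal of the earlier 2R arm's mass matrix (rotor gyroscopic coupling neglected, which is an approximation), showing how a larger gear ratio makes the matrix more diagonal and less configuration-dependent:

```python
import numpy as np

I_rotor = 1e-3       # hypothetical rotor inertia, kg m^2

def arm_M(theta2):
    # Planar 2R arm with L1 = L2 = m1 = m2 = 1 (point masses at link ends).
    c2 = np.cos(theta2)
    return np.array([[3 + 2*c2, 1 + c2],
                     [1 + c2,   1.0   ]])

results = {}
for G in (10.0, 100.0):
    I_app = G**2 * I_rotor                        # apparent rotor inertia
    M0 = arm_M(0.0)   + np.diag([I_app, I_app])   # arm stretched out
    M1 = arm_M(np.pi) + np.diag([I_app, I_app])   # arm folded back
    results[G] = (abs(M0[0, 1])/M0[0, 0],             # relative coupling
                  abs(M0[0, 0] - M1[0, 0])/M0[0, 0])  # variation with config

# At G = 100 the off-diagonal coupling and the configuration dependence
# are both relatively smaller, so the joints behave more independently.
```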
So even though I_rotor may be small compared to the inertia of the link about the joint axis, the apparent rotor inertia G-squared times I_rotor may not be small, especially considering that gear ratios of one hundred or more are common. Therefore the rotor inertia should be included in our dynamic analysis. As an example, consider a 2R robot arm with a geared motor at each joint. For a particular choice of link lengths, masses, and rotor inertias, a gear ratio of ten at the gearheads yields this mass matrix for the robot. If we keep everything else the same but increase the gear ratio to one hundred, we get this mass matrix. The mass matrix becomes much larger along the diagonal due to the increased apparent inertia of the rotors. The off-diagonal elements of the mass matrix are now relatively small compared to the diagonal elements, and the amount the mass matrix varies with configuration is now relatively small. This means that the velocity-product terms in the dynamics are comparatively less significant. As the gear ratios become large, the apparent inertias of the rotors dominate the dynamics, and the coupled dynamics of the robot become closer and closer to the dynamics of n independent joints. Taking into account gearing at each joint, we can derive a modified recursive Newton-Euler inverse dynamics algorithm. The details are left to the book, but essentially we calculate the motor torque needed to accelerate both the rotor and the link. For electric motors, the torque is proportional to the current through the motor, so for each motor the robot controller could command a current proportional to the calculated torque. Finally, we can add an estimate of joint friction torques. Commonly the amount of joint friction increases with increasing gear ratios. Some simple models of joint friction are discussed in the book. This concludes Chapter 8. 
Like Chapter 3, which establishes key concepts in spatial motion with applications throughout the rest of the book, Chapter 8 establishes key concepts in dynamics, with applications in simulation, robot control in Chapter 11, and planning of minimum-time robot trajectories in the next chapter, Chapter 9. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_131_Wheeled_Mobile_Robots.txt | In the final chapter we focus on motion planning and control of wheeled mobile robots that move without skidding on hard flat surfaces, such as this differential drive robot, which moves by independently controlling the rotation of two conventional wheels, and this omnidirectional mobile robot, which moves by independently controlling the rotation of mecanum wheels, which allow sideways slipping. In all cases, we assume that we control wheel velocities, not torques, so we have a kinematic model mapping wheel speeds to the chassis velocity. The planar configuration of the robot chassis is written T_sb, an element of SE(2), or simply as the vector q equal to (phi, x, y), where phi is the heading angle of the chassis and (x,y) is the position of a reference point on the chassis. The velocity of the chassis is written either as the planar twist V_b, expressed in the body frame {b}, or as the time derivative q-dot. For a nonholonomic mobile robot, like the differential drive, the space of feasible chassis velocities is only 2-dimensional, because the robot cannot slide sideways. For an omnidirectional robot, the chassis can move in any direction in its 3-dimensional velocity space. This chapter addresses the following issues for omnidirectional and nonholonomic wheeled robots: Kinematic modeling for several different types of wheeled mobile robot. Motion planning for wheeled mobile robots. Feedback control to stabilize motion plans. Odometry, to estimate the configuration of the chassis based on data from the wheel encoders. And mobile manipulation, where the wheeled mobile base is equipped with a manipulator. In particular, we derive the Jacobian mapping wheel and joint velocities to the end-effector twist, and we use this to develop a coordinated controller for the mobile base and robot arm. In the next video we begin our study with omnidirectional wheeled mobile robots. |
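The two chassis velocity representations mentioned above are related by the chassis rotation. A minimal sketch of the conversion from a planar body twist to q-dot:

```python
import numpy as np

def body_twist_to_qdot(q, Vb):
    # q = (phi, x, y); Vb = (omega, v_bx, v_by) is the planar body twist.
    # The angular rate is unchanged, and the body-frame linear velocity is
    # rotated into the space frame: (xdot, ydot) = R(phi) (v_bx, v_by).
    phi = q[0]
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    return np.concatenate(([Vb[0]], R @ Vb[1:]))

qdot = body_twist_to_qdot(np.array([np.pi/2, 0.0, 0.0]),
                          np.array([0.0, 1.0, 0.0]))
# Driving "forward" (v_bx = 1) while heading phi = pi/2 moves the chassis
# in the +y direction of the space frame.
```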
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_25_Task_Space_and_Workspace.txt | The C-space is the space of all possible configurations of a robot. Two somewhat different spaces of interest are the task space and the workspace. The task space is a space in which the robot's task can be naturally expressed. For example, if the task is to control the position of the tip of a marker on a board, then the task space is the Euclidean plane. If the task is to control the position and orientation of a rigid body, then the task space is the 6-dimensional space of rigid body configurations. You only have to know about the task, not the robot, to define the task space. The workspace is a specification of the configurations that the end-effector of the robot can reach, and has nothing to do with a particular task. For example, a planar robot with 2 revolute joints, limited to ranges of motion of 180 and 150 degrees, has the workspace shown here. The workspace is often defined in terms of the Cartesian points that can be reached by the end-effector, but it is also possible to include the orientation. The set of positions that can be reached with all possible orientations is sometimes called the dexterous workspace. So this concludes Chapter 2 on configuration spaces. In Chapter 3, we will focus on representing configurations and velocities of rigid bodies. |
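A workspace like the 2R example above can be approximated by sampling. In this sketch, the link lengths (both 1) and the joint ranges starting at 0 are assumptions; the video states only the sizes of the ranges:

```python
import numpy as np

L1 = L2 = 1.0
t1 = np.linspace(0.0, np.pi, 181)        # joint 1: 180-degree range (assumed from 0)
t2 = np.linspace(0.0, 5*np.pi/6, 151)    # joint 2: 150-degree range (assumed from 0)
T1, T2 = np.meshgrid(t1, t2)

x = L1*np.cos(T1) + L2*np.cos(T1 + T2)   # sampled end-effector positions
y = L1*np.sin(T1) + L2*np.sin(T1 + T2)
dist = np.sqrt(x**2 + y**2)

# Radial extent of the sampled workspace: full reach L1 + L2 at theta2 = 0,
# and closest approach sqrt(L1^2 + L2^2 + 2 L1 L2 cos(150 deg)) at theta2 max.
```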
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1334_Feedback_Control_for_Nonholonomic_Mobile_Robots.txt | Once we've planned a trajectory for a nonholonomic wheeled mobile robot, we need a feedback controller to track the trajectory. Although feedback control to a stationary configuration requires a control law that's time-varying or discontinuous in the configuration, as we learned in an earlier video, feedback control to a trajectory is "easier." Feedback control requires an estimate of the configuration of the chassis. Such an estimate can be maintained using odometry, as discussed in the next video. But typically odometry is augmented with external sensors, like laser rangefinders, cameras, cameras with depth sensors, or GPS. This video does not address the estimation process but assumes that a configuration estimate is available. The configuration of the robot is represented by its heading angle phi and the position (x,y) of a point midway between the wheels in the case of a diff-drive robot and a point midway between the rear wheels in the case of a car. Since there are three configuration variables but only two controls, it's not possible to independently control the rate of change of all three configuration variables. Instead, we could choose a point P fixed to the chassis and use the two controls to control the velocity of this point. The position of the point P in the space frame is given by the (x,y) position of the chassis plus the vector to the point P expressed in the space frame. The velocity of the point P is simply the time derivative of its position. To control the motion of the point P to follow a desired trajectory, we could use a proportional controller, which says that the velocity of the point is proportional to the position error. The commanded velocity of the point, (x_P-dot, y_P-dot), can be converted to the linear velocity v and the angular velocity omega of the chassis. 
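The conversion from a commanded velocity of the point P to the chassis controls (v, omega) can be written out explicitly. A sketch for a point P at (x_r, 0) in the body frame; the gain k and the value of x_r are assumptions:

```python
import numpy as np

def control_point_P(q, p_des, xr=0.1, k=2.0):
    # q = (phi, x, y): chassis configuration. P sits at (xr, 0) in the body
    # frame; xr must be nonzero. Proportional rule: pdot_cmd = k*(p_des - p_P).
    phi, x, y = q
    c, s = np.cos(phi), np.sin(phi)
    p_P = np.array([x + xr*c, y + xr*s])      # position of P in the space frame
    pdot = k*(p_des - p_P)
    # Invert [xdot_P, ydot_P] = [[c, -xr*s], [s, xr*c]] @ [v, omega]:
    v     =  c*pdot[0] + s*pdot[1]
    omega = (-s*pdot[0] + c*pdot[1]) / xr
    return v, omega

# Chassis at the origin heading along +x; the target for P is straight to its left.
v, omega = control_point_P(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.5]))
# P starts at (0.1, 0): the error is purely sideways, so the commanded
# motion is pure rotation (v = 0), which sweeps P toward the target.
```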
Note that x_r, the x-position of the point P in the body frame, must be nonzero to be able to move the point P in arbitrary linear directions. The constraints on v and omega depend on whether the robot is a unicycle, diff-drive, or car. When initially planning the trajectory, the planner should use only a subset of the possible controls, such as those shown here. By doing so, the feedback controller has some extra control authority to make corrections to errors in trajectory tracking. Assuming a point P on the mid-line of the robot, this is an example straight-ahead trajectory of the chassis, and the corresponding trajectory followed by the point P. At any given point on the trajectory, the robot can satisfy the desired location of P with different orientations of the chassis, but only one of the orientations shown is consistent with the path before and after this point. Therefore, even though the robot only follows the trajectory of the point P, this motion will often tend to align the robot's orientation to the desired orientation. For example, if this is the initial configuration of the robot, then proportional control of the point P will drive the full configuration of the robot to the desired trajectory. Notice that the robot's final configuration is a little behind the desired configuration. To reduce that error, we could add an integral term or a feedforward term, as discussed in Chapter 11. Here is another planned trajectory and the actual initial configuration of the robot. Proportional control of the point P causes the point to converge toward the planned trajectory, but the controller causes the robot to execute a direction reversal. So even though the point P converges to the desired trajectory, the final orientation of the chassis is off by nearly 180 degrees. In other words, just tracking the planned trajectory of a point P does not always result in good tracking of the full chassis trajectory. 
To track the full configuration of the chassis, we define the configuration of the frame {b}, midway between the wheels of the robot, as phi, x, y, and the desired configuration at any instant is given by the frame {d}. The error coordinates are phi_e, x_e, and y_e. With these error coordinates, we have a number of potential choices for the control law; one example is the feedforward plus feedback nonlinear controller shown here. It's called a nonlinear controller because it's nonlinear in the error coordinates. The derivation of this controller and the choice of the control gains k_1, k_2, and k_3 are beyond the scope of this video, and I suggest you consult the references in the book. But notice that if the error reduces to zero, the commanded controls reduce to the feedforward controls. Also, examining the control law shows that the heading error should be less than pi over 2 and the linear velocity of the planned motion should be nonzero. In other words, this control law is not a good choice for stabilizing a trajectory that simply spins in place. Here is a planned trajectory for the robot, and this is the actual initial configuration, with nonzero error. The controller brings the robot to the desired trajectory, as shown in these error plots as a function of time. The error in all configuration variables is driven to zero. In the next video we'll discuss how to maintain an estimate of the chassis configuration using wheel encoder data. This is called odometry. |
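The book gives the specific control law and gain conditions; as an illustration, here is a sketch of a Kanayama-style tracking law that has the properties stated above (it reduces to the feedforward controls at zero error, needs heading error below pi over 2, and needs nonzero planned linear velocity). The gains, trajectory, and initial error are assumptions:

```python
import numpy as np

k1, k2, k3 = 2.0, 4.0, 4.0
dt, T = 0.01, 10.0

# Reference: drive straight along the x-axis at vd = 1, wd = 0.
vd, wd = 1.0, 0.0
qd = np.array([0.0, 0.0, 0.0])     # desired (phi_d, x_d, y_d)
q  = np.array([0.1, 0.0, 0.3])     # actual start, with nonzero error

def errors(q, qd):
    # Error of the desired frame expressed in the robot frame.
    phi = q[0]
    Rt = np.array([[ np.cos(phi), np.sin(phi)],
                   [-np.sin(phi), np.cos(phi)]])
    xe, ye = Rt @ (qd[1:] - q[1:])
    return xe, ye, qd[0] - q[0]

for _ in range(int(T/dt)):
    xe, ye, phie = errors(q, qd)
    # Kanayama-style law: at zero error this is exactly (vd, wd).
    v = vd*np.cos(phie) + k1*xe
    w = wd + vd*(k2*ye + k3*np.sin(phie))
    # Integrate robot and reference with unicycle kinematics (Euler step).
    q  += dt*np.array([w,  v*np.cos(q[0]),  v*np.sin(q[0])])
    qd += dt*np.array([wd, vd*np.cos(qd[0]), vd*np.sin(qd[0])])

xe_f, ye_f, phie_f = errors(q, qd)   # all driven near zero
```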
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_813_Understanding_the_Mass_Matrix.txt | By now you're familiar with the equations of motion of a robot. In this video we focus on better understanding the mass matrix M of theta. First, recall that the kinetic energy of a point mass is one-half m-v-squared, where m is the mass and v is its scalar velocity. If v is a vector, we could rewrite this as one-half v-transpose times m times v. Now, for a robot arm, it is not hard to show that the kinetic energy takes the same form, one-half theta-dot-transpose times the mass matrix times theta-dot. The mass matrix is positive definite, meaning that the kinetic energy is positive for any nonzero joint velocity vector. This is analogous to the fact that a point mass can only have positive mass. In addition, the mass matrix is symmetric. Finally, the mass matrix depends on the joint configuration theta. The mass matrix depends on theta because the amount of inertia about each joint depends on whether the arm is stretched out or not. To see the variation in the mass matrix graphically, consider again the 2R robot arm, where the link lengths and masses are each one. Assume that the robot initially has zero velocity, and consider a circle of accelerations in the joint space at this robot configuration. Then this circle maps through the mass matrix to an ellipse of joint torques. This ellipse can be interpreted as a direction-dependent mass ellipsoid; certain joint acceleration directions require larger torques than others. The directions of the principal axes of the ellipse are given by the eigenvectors of the mass matrix and the lengths of the principal semi-axes are given by the eigenvalues. If the mass matrix is invertible, then we can also map a circle of joint torques to an ellipse of joint accelerations. If we change the configuration of the robot, the shapes of these ellipses change. 
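The ellipse claims above can be checked numerically for the 2R arm (link lengths and masses of 1) at a sample configuration. This sketch maps a unit circle of joint accelerations through the mass matrix and compares the extreme torque magnitudes to the eigenvalues:

```python
import numpy as np

# 2R arm mass matrix (L1 = L2 = m1 = m2 = 1) at theta2 = pi/2.
M = np.array([[3.0, 1.0],
              [1.0, 1.0]])

# Principal axes and semi-axis lengths of the torque ellipse: since M is
# symmetric positive definite, these are its eigenvectors and eigenvalues.
eigvals, eigvecs = np.linalg.eigh(M)

# Map a circle of unit joint accelerations through M to joint torques.
angles = np.linspace(0.0, 2*np.pi, 3600, endpoint=False)
accels = np.stack([np.cos(angles), np.sin(angles)])   # 2 x N unit circle
torque_norms = np.linalg.norm(M @ accels, axis=0)
# The extreme torque magnitudes match the smallest and largest eigenvalues:
# some acceleration directions need much more torque than others.
```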
Since these ellipses are in joint torque and acceleration space, they are not easy to understand intuitively. Instead, imagine that you grab the endpoint of the robot and you feel how "massy" it is when you move it in different directions. Let's say that V is the endpoint linear velocity, related to the joint velocity by the Jacobian J. When you linearly accelerate the endpoint, you will feel an apparent mass at the end-effector that depends on the joint configuration. We call this apparent mass Lambda of theta. To see how Lambda is related to the mass matrix M, we can equate the kinetic energy expressed in the end-effector velocity and the joint velocity. If the Jacobian is invertible, we can express the joint velocity as J-inverse times V, which gives us the relationship we were looking for: the configuration-dependent end-effector mass is equal to J-inverse-transpose times M times J-inverse. Now, if you consider a circle of endpoint accelerations when the robot is at rest, we can map this through the end-effector mass Lambda to get an ellipsoid of endpoint forces, depending on the robot's configuration. This ellipse is easier to understand. First of all, the directions of the force and the endpoint acceleration are only aligned if the force is aligned with a principal axis of the ellipse, as you see here. To accelerate the endpoint in this direction, you need a lot of force. To accelerate the endpoint in the orthogonal direction, you need much less force. For all force directions not aligned with a principal axis of the ellipse, the acceleration direction is not parallel to the force direction. To see this, let's map a circle of endpoint forces through the inverse end-effector mass matrix to get an ellipse of end-effector accelerations. For an endpoint force purely in the x-direction, as indicated by the dot on the circle of forces, we get an end-effector acceleration that has both x and y components, as indicated by the dot on the ellipsoid of accelerations. 
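These relationships can be verified in a few lines for the same 2R arm. The sketch below computes Lambda = J-inverse-transpose M J-inverse at a sample configuration, checks that the kinetic energy agrees in both coordinates, and shows a force direction whose resulting acceleration is not parallel to it:

```python
import numpy as np

def mass_matrix(theta):
    # 2R arm, L1 = L2 = m1 = m2 = 1 (point masses at the link ends).
    c2 = np.cos(theta[1])
    return np.array([[3 + 2*c2, 1 + c2],
                     [1 + c2,   1.0   ]])

def jacobian(theta):
    # Jacobian of the end-effector (x, y) position for the same arm.
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

theta = np.array([0.0, np.pi/3])
M = mass_matrix(theta)
Jinv = np.linalg.inv(jacobian(theta))
Lam = Jinv.T @ M @ Jinv              # apparent end-effector mass

# Sanity check: the kinetic energy is the same in either set of coordinates.
dtheta = np.array([0.7, -0.2])
V = jacobian(theta) @ dtheta
KE_joint = 0.5 * dtheta @ M @ dtheta
KE_tip   = 0.5 * V @ Lam @ V

# A force along a non-principal direction gives a non-parallel acceleration.
f = np.array([1.0, 0.0])
a = np.linalg.inv(Lam) @ f
cross = f[0]*a[1] - f[1]*a[0]        # nonzero: a is not parallel to f
```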
From this example, we learn two things. First, the magnitude of the effective end-effector mass depends on the direction of acceleration. Second, in general the directions of the end-effector acceleration and force are not aligned. So when we move the endpoint of the robot by hand, it does not feel like a point mass, which has a constant mass magnitude and always accelerates in the direction of the applied force. Also, the apparent end-effector mass depends on the configuration of the robot, as you see here. You should now have a good understanding of the form of the dynamic equations of a robot, including the mass matrix and velocity-product terms. Intuitively, these equations of motion are just f equals m-a, where the m-a term depends on both the joint velocities and accelerations, plus forces to balance gravity, plus forces to create the desired wrench at the end-effector. Starting in the next video, we will learn another way to derive these same equations, beginning with the equation f equals m-a for a single rigid body. This is called the Newton-Euler formulation of the dynamics. This formulation allows us to derive an efficient recursive algorithm, without differentiations, for computing the dynamics of open-chain robots. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1332_Controllability_of_Wheeled_Mobile_Robots_Part_2_of_4.txt | As we learned in the last video, nonholonomic mobile robots are not linearly controllable. They may, however, satisfy weaker controllability conditions from nonlinear control theory. We model a nonholonomic mobile robot as a nonlinear control system of the form q-dot equals G of q times u, where the configuration q is an n-vector, and the control input u is an m-vector, where m is less than n, and the controls are restricted to some subset capital U of the m-dimensional control space. G of q is the n-by-m matrix whose columns are the control vector fields. For a system like this, we can define global controllability, small-time local controllability, and small-time local accessibility. First, we say the robot is controllable from a configuration q if, for any q_goal, there exists a control that drives the robot from q to q_goal in finite time. For the local definitions of controllability, let's first establish the concept of a reachable set. Consider a configuration q in the two-dimensional space of your screen. Define a neighborhood W of q, a full-dimensional open ball of the configuration space with q in its interior. Now consider all feasible trajectories emanating from q, for all possible controls u of t, running for time less than or equal to capital T while remaining within the neighborhood W, as in this animation. We define the reachable set R^W of (q, less than or equal to T) as the set of reachable configurations in time less than or equal to T without leaving W. In the limit as W and T become arbitrarily small, the reachable set could look like this: the robot is locally confined to a lower-dimensional subset of its configuration space. Another alternative is that the locally reachable set looks something like this. 
The robot can locally reach a full-dimensional subset of its configuration space, but the initial configuration q is on the boundary of the reachable set. A final alternative is that the locally reachable set looks like this. The reachable set is full-dimensional, and the initial configuration is in the interior of the reachable set. This means the robot can locally move in any direction. Returning now to our controllability definitions, the robot is small-time locally controllable from q, or STLC, if the locally reachable set is full-dimensional and contains the initial configuration in the interior. We say the robot is small-time locally accessible from q, or STLA, if the locally reachable set is full-dimensional but does not contain the initial configuration in the interior. In these definitions, "small-time" refers to the fact that the property holds as the time T goes to zero, and "local" refers to the fact that the property holds as the neighborhood becomes arbitrarily small. A car provides a good example of the small-time properties. A typical car is STLC at every configuration in its three-dimensional configuration space. For example, if the goal is to move backward a short distance, the car does not have to move far away to accomplish this. If the goal is to rotate in place, a forward turn, backward translation, and forward turn does the trick. Finally, if the goal is to move sideways, again, two forward turns and a backward translation achieve the motion. No matter how much you shrink the neighborhood the car is allowed to maneuver in, it can still achieve motion in any direction without leaving the neighborhood. Technically, to satisfy the "small-time" condition, it must take zero time to switch between the forward and reverse gears, but we won't model the switching time. We're more concerned with the spatial aspect of STLC. STLC is an important concept in motion planning. 
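The parallel-parking intuition can be made quantitative with a tiny computation on a unicycle abstraction of the car. Driving, spinning, reverse-driving, and reverse-spinning, each for a small amount epsilon, yields a net displacement of order epsilon-squared in the sideways direction, which is the Lie bracket direction previewed at the end of this video. A sketch with exact segment integration:

```python
import numpy as np

eps = 0.1
phi, x, y = 0.0, 0.0, 0.0

def drive(phi, x, y, d):   # translate distance d along the heading (exact)
    return phi, x + d*np.cos(phi), y + d*np.sin(phi)

def spin(phi, x, y, a):    # rotate in place by angle a (exact)
    return phi + a, x, y

# Forward, turn, backward, turn back -- each segment of size eps.
phi, x, y = drive(phi, x, y,  eps)
phi, x, y = spin (phi, x, y,  eps)
phi, x, y = drive(phi, x, y, -eps)
phi, x, y = spin (phi, x, y, -eps)

# Net motion is O(eps^2) and purely sideways to leading order:
# (phi, x, y) is approximately (0, 0, -eps^2).
```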
If a robot is STLC at every configuration, then it can follow any path arbitrarily closely. In other words, the nonholonomic robot can go anywhere among obstacles the omnidirectional mobile robot can go. For example, you can always parallel park your car into a space larger than your car. The drawback is that the nonholonomic robot may have to move slowly, as it switches between forward and reverse gears. If the car has no reverse gear, and the goal is to move a small distance backwards, the car has to travel far away, as shown here, outside a small neighborhood of the two configurations. The neighborhood is drawn here in 2 dimensions, but of course the neighborhood includes the third dimension, the orientation of the chassis. The locally reachable set is three-dimensional, but it's not a neighborhood of the initial configuration. Therefore this car is STLA but not STLC. If a robot with velocity constraints satisfies any of these three controllability conditions, then the velocity constraints do not integrate to equality constraints on the configuration, and therefore the constraints are nonholonomic. When we say that a mobile robot is nonholonomic, we imply that it satisfies at least one of these three controllability definitions. These controllability definitions apply to general nonlinear systems, not just mobile robots. For our canonical nonholonomic robot, however, if a local property holds at any configuration, then it holds at all configurations, since the motion capability does not depend on the robot's configuration. Also, STLC at all configurations implies controllability on any open connected component of free space, since any path can be followed arbitrarily closely. In the next video, I'll describe the Lie bracket of vector fields, a key concept in establishing controllability properties. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_107_Nonlinear_Optimization.txt | In this last video of Chapter 10, we consider a very different approach to motion planning, based on nonlinear optimization. The goal is to design a control history u of t, a trajectory q of t, and a trajectory duration capital T minimizing some cost functional J, such as the total energy consumed or the duration of the motion, such that the dynamic equations are satisfied at all times, the controls are feasible, the motion is collision free, and the trajectory takes the start state to the goal state. We refer to J as the cost, and the last 5 lines are the constraints. Nonlinear optimization requires an initial guess at the solution. Let's say that this control history is our initial guess. To turn the motion planning problem into a nonlinear optimization, we need a finite parameter representation of the control. There are many ways to do this; we could use a set of knot points interpolated by polynomials or splines, or piecewise constant controls, or piecewise linear controls. For this example, let's assume piecewise linear controls. Integrating the equations of motion, we get the robot's trajectory. This trajectory intersects obstacles and does not end at the goal configuration, so our initial guess is not a solution to the problem. To evaluate the constraints on the motion due to obstacles, we can choose a finite set of test points along the trajectory and evaluate whether those points are collision free. The collision constraints can be expressed as constraints on the distances between the test points and the obstacles, where a positive distance means that there is no collision and a negative distance implies a collision. To update our guess at the control history, we need to calculate the gradients of the constraints with respect to the trajectory. These gradients provide information on how the test points should move to satisfy the constraints. 
These gradients map through the gradient of the trajectory with respect to the controls to suggest a direction in which to modify our guess at the control history. We also use information on the gradient of the cost with respect to the controls in calculating the direction to update our control history. We then update the control history and integrate the new control to get an updated trajectory. Since our new trajectory still does not satisfy the constraints, we repeat the process of calculating a deformation direction for the trajectory then map this through the sensitivity of the trajectory with respect to the controls to get a direction to deform the control history. After taking a step in this direction in the control space, we integrate the equations of motion again to get the new trajectory. This process repeats until we find a trajectory that satisfies the constraints and locally minimizes the cost. Returning to the problem formulation, and focusing on the first three lines, the method I just described is called shooting. With shooting, the design variables are the total time duration capital T and the parameters describing the control history. The trajectory is found by simulation of the equations of motion, ensuring that the dynamic constraints are satisfied. This method is called shooting, because designing the controls is like aiming a cannon. You see what happens when you fire and update your aim so that the goal is more closely achieved on the next try. Another popular approach is called collocation, in which you simultaneously design the control history and the trajectory. Since you design both, you have to enforce that the controls and trajectory are consistent according to the dynamics. This is commonly done by ensuring that the equations of motion are satisfied at a finite set of test points. 
The process of turning the problem statement into a standard finite parameter nonlinear optimization, which can be solved by techniques such as sequential quadratic programming, is called transcription. There are many ways you could choose to represent the controls, trajectory, cost, and constraints, and your choice will affect the performance of the optimization. One thing that is critical, though, is that you are able to calculate the gradients of the cost and the constraints with respect to your design variables, as these gradients guide the search through the design variable space. Ideally you would be able to calculate these gradients analytically, but failing that, you should be able to numerically evaluate them both efficiently and accurately. Even with good gradient calculation, gradient-based nonlinear optimization is inherently a local method, and the optimization is prone to getting stuck in local minima, where the solution either does not globally minimize the cost or does not satisfy the constraints. Nonlinear optimization is a good choice when other methods can be used to provide a reasonable initial guess. For example, nonlinear optimization could be used to smooth a jerky motion found by an RRT. The RRT would handle the global search among clutter, while the nonlinear optimization would deform the RRT's solution to locally minimize the cost. So this concludes Chapter 10. Motion planning is one of the most active subfields of robotics, but you should now have an understanding of the key concepts of some of the most popular approaches. In Chapter 11, on robot control, we study the problem of designing feedback controllers to drive the robots along the trajectories produced by motion planners. |
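The shooting transcription described above can be sketched in a few lines. This is a minimal illustrative example, not the book's implementation: it assumes a 1D double integrator x-double-dot = u with piecewise-constant controls, a fixed duration, a control-effort cost, terminal equality constraints, and no obstacles, solved with SciPy's SLSQP optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal single-shooting sketch (illustrative, not from the book):
# 1D double integrator x-ddot = u, N piecewise-constant controls over [0, T],
# cost = integral of u^2, terminal constraints x(T) = 1 and x-dot(T) = 0.
N, T = 20, 1.0
dt = T / N

def simulate(u):
    x, v = 0.0, 0.0                  # start state at rest at the origin
    for uk in u:                     # Euler integration of the dynamics
        x += v * dt
        v += uk * dt
    return x, v

def cost(u):
    return np.sum(u**2) * dt         # control effort

def terminal(u):
    x, v = simulate(u)               # "fire the cannon" and see where it lands
    return [x - 1.0, v - 0.0]        # must equal zero at a solution

res = minimize(cost, np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": terminal})
x_final, v_final = simulate(res.x)
```

The optimizer's gradient here is computed by finite differences; as the transcript notes, analytic or efficient numerical gradients are critical for harder problems.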
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_102_CSpace_Obstacles.txt | We know that any robot configuration is described uniquely by a point in its configuration space, or C-space. One of the key ideas in motion planning is to represent any obstacle in the robot's environment as a set of points in C-space where the robot is in collision with the obstacle. So we should become comfortable with the idea of transforming a real-world obstacle into a C-space obstacle. Let's use the 2R robot as an example. Its C-space is represented by a square in the plane, where one axis corresponds to theta_1 and the other axis corresponds to theta_2. As we saw in Chapter 2, the topology of the C-space is actually a torus, so when we represent it as a square, we have to remember that the top and bottom edges are connected to each other, and the left and right edges are connected to each other. We can represent a specific configuration of the robot as a point in the C-space. Here, theta_1 and theta_2 are both close to zero, so the point is in the bottom-left corner of the C-space. If there is an obstacle in the environment, the obstacle can be represented in C-space by the set of robot configurations where the robot would collide with the obstacle. Even though this C-space obstacle looks like three separate regions, if we remember the topology of the C-space, we see that it is just a single connected region. We can add two more obstacles and get the final picture of the robot's C-space. An example configuration in collision with obstacle A is shown here. Theta_1 is 45 degrees and theta_2 is 315 degrees. Next we see an example configuration in collision with obstacle B, and finally a configuration in collision with obstacle C. If the robot also had joint limits for joint 2, preventing link 2 from rotating over link 1, we would get another obstacle, this one due to the joint limits. In the rest of this video, we will assume no joint limits.
Now that we've constructed our C-space and its C-obstacles, we can perform all motion planning in the C-space, so let's focus on that for a moment. Notice that if the robot were in a configuration in this red region, there would be no way for it to escape being stuck between obstacles B and C. It could only move between configurations in this region. We call this region a connected component of the free space, and we label this connected component 1. In this example, there are three connected components of free space; this region labeled 2 is connected because the top and bottom edges of the square are connected, and this region labeled 3 is connected because the left and right edges and top and bottom edges are connected. For a path to exist between two configurations, they must lie in the same connected component of free space. Now we would like to find a path between this start configuration and this goal configuration. Both configurations are in the same connected component, so we know a solution exists. Remembering that the left and right edges of the square are connected, let's animate a solution path in both the real space and the C-space. Planning collision-free paths for other robots is conceptually the same as for the 2R robot: we transform obstacles to C-space obstacles, then we plan a path for a point in the free portion of C-space. It's impractical to explicitly construct C-space obstacles due to their geometric complexity, however, especially for higher-dimensional C-spaces. For this reason, most motion planners simply assume the existence of a collision-detection routine that can check whether a given configuration, or path segment, is in free space. One way to check if a particular configuration is in collision is by checking for intersection of any of the polygons that represent the surfaces of the robot and the obstacle. An even simpler way to check for collision between two objects is to approximate each by a set of spheres. 
Here is a lamp approximately represented as the union of spheres. Checking whether two objects collide is then as simple as checking the distance between the centers of the spheres of the two objects, which can be done very quickly. The actual object should be strictly inside its sphere approximation, to make sure that we don't declare a particular configuration to be free when it is actually in collision. If we use more spheres, we can represent the objects more precisely, as shown here. This makes collision detection less conservative, but increases the number of distance checks. |
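The sphere-based check described above reduces to pairwise distance tests. Here is a small sketch with illustrative names and geometry: each body is a list of (center, radius) spheres, and two bodies are declared in collision if any pair of spheres overlaps.

```python
import numpy as np

# Sphere-approximation collision check (illustrative names and geometry).
# Each body is a list of (center, radius) bounding spheres; since the
# spheres strictly enclose the bodies, the check is conservative.
def spheres_collide(spheres_a, spheres_b):
    for ca, ra in spheres_a:
        for cb, rb in spheres_b:
            # overlap if center distance is less than the sum of radii
            if np.linalg.norm(np.asarray(ca) - np.asarray(cb)) < ra + rb:
                return True
    return False

robot = [((0.0, 0.0), 1.0), ((1.5, 0.0), 1.0)]
obstacle = [((4.0, 0.0), 1.0)]
print(spheres_collide(robot, obstacle))   # -> False: nearest centers are 2.5 apart
```

Using more, smaller spheres tightens the approximation at the price of more distance checks, exactly the trade-off the transcript describes.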
Modern_Robotics_All_Videos | Modern_Robotics_Chapters_2_and_3_Foundations_of_Robot_Motion.txt | In Chapters 2 and 3, on configuration space and rigid-body motions, we'll study the representation of positions, velocities, and forces in three-dimensional space. A firm understanding of this material is arguably the most important foundation for the further study of robotics, since all robots move in the physical world. This material is also typically new to the beginning robotics engineer. But, in case you were hoping to start programming robots right away, I should warn you, you don't see a lot of robots in Chapters 2 and 3. Instead, we focus on building a strong foundation in spatial motion as quickly as possible, so we can then move on to the material focused more on robots, beginning in chapter 4. In particular, the material in chapters 2 and 3 will be the basis for understanding how to represent the motion of a quadrotor through space; how some robots use links and joints to form closed loops; how to control a robot's joints to allow it to interact with objects in its environment; how to control robots to simultaneously move and apply forces; how a robot hand can manipulate an object; how to navigate through cluttered environments; how to perform coordinated control of a robot arm mounted on a mobile robot; and how the dynamic equations of motion are used in high-performance motion control. So, even though you won't see a lot of robots in Chapters 2 and 3, consider it an investment in all the cool things that come next. Also, I think you'll find the concepts interesting in their own right, as they broaden your understanding of spatial motion. Let's get started with Chapter 2. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1332_Controllability_of_Wheeled_Mobile_Robots_Part_3_of_4.txt | The control system for a wheeled mobile robot can be written in the general form q-dot equals G of q times u, where q is the n-dimensional configuration, u is the m-dimensional control input, and the m columns of the G matrix are the control vector fields associated with each control input. For the canonical nonholonomic robot, the specific form is shown here, where g_1 is the forward motion vector field and g_2 is the spin-in-place vector field. This equation of motion means that no velocity is possible in the sideways direction. We would like to know if this constraint on the velocity of the chassis integrates to a constraint on its configuration. Equivalently, we ask whether following the system vector fields allows us to locally reach a full-dimensional subset of the configuration space. If so, then the robot is at least small-time locally accessible, or STLA, and the velocity constraint is not integrable to a configuration constraint. If the initial configuration of the robot is q, and we follow the forward vector field g_1 for time epsilon, the final configuration is written F_epsilon^g_1 of q. After following g_2 for time epsilon, the final configuration is F_epsilon^g_2 of the previous configuration. If we reverse the order of following the vector fields, we end up at a different configuration. Therefore, we say the two vector fields do not commute. When the order of the 2 vector fields does not matter to the final configuration, then the vector fields are said to commute. The noncommutativity of the vector fields plays an important role in determining the controllability of a nonlinear control system, because we may be able to generate approximate motion in constrained directions by switching between vector fields. In general, we can calculate the noncommutativity of two vector fields, as epsilon goes to zero, using a four-step sequence. 
To illustrate it, let's use the two vector fields of the canonical nonholonomic robot. First, we flow for time epsilon along g_i. The new configuration is F_epsilon^g_i of q. Second, we flow for time epsilon along g_j. Third, we flow for time epsilon along minus g_i. And finally, we flow for time epsilon along minus g_j. The net change in configuration is Delta q. To calculate Delta q for small epsilon, we can use a Taylor expansion. After the first flow, the configuration q at time epsilon is the initial configuration q at time zero plus the initial velocity times epsilon plus one-half epsilon-squared times the initial acceleration plus terms of order epsilon-cubed. Since epsilon is small, third- and higher-order terms are dominated by the terms that are first- and second-order in epsilon. We can rewrite this as the zeroth-order term plus the first-order term, replacing q-dot with the vector field g_i evaluated at the initial configuration, plus the second-order term, where, by the chain rule, the acceleration q-double-dot is equivalent to d-g_i d-q times g_i, plus terms of order epsilon-cubed. After the second flow, the configuration is the zeroth-order term, plus two first-order terms in g_i and g_j, plus two second-order terms in g_i and g_j, plus one more second-order term, epsilon-squared times d-g_j d-q times g_i. Unlike the previous terms, this term depends on the order the vector fields are applied. If we continue the Taylor expansion, after the fourth flow the first-order terms have canceled and we are left only with a second-order term, called the Lie bracket of the vector fields g_i and g_j, written open-bracket, g_i, comma, g_j, close-bracket. The Lie bracket of two vector fields is itself a vector field, expressing the approximate motion obtained by switching between the vector fields. Using the Lie bracket notation, we can write the net motion as epsilon-squared times the Lie bracket. 
The Lie bracket sequence for the canonical mobile robot is illustrated here. We can calculate the Lie bracket of g_1 and g_2 using the formula we just derived. Plugging in the expressions for g_1 and g_2 and evaluating the derivatives with respect to the configuration q, we see that the Lie bracket vector field is zero, sine phi, minus cosine phi, which is a vector field describing a sideways translation to the right. The actual net motion, as seen in the figure, consists of a translation to the right of epsilon-squared times the Lie bracket plus a small forward translation of order epsilon-cubed. The Lie bracket vector field of g_1 and g_2 effectively "breaks" the constraint that the robot cannot slide sideways, ensuring that the velocity constraint does not integrate to a configuration constraint. |
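The Lie bracket calculation just described can be checked symbolically. This sketch assumes the configuration ordering q = (phi, x, y), consistent with the result quoted in the transcript, and evaluates [g_1, g_2] = (dg_2/dq) g_1 - (dg_1/dq) g_2.

```python
import sympy as sp

# Symbolic Lie bracket for the canonical nonholonomic robot,
# assuming configuration q = (phi, x, y).
phi, x, y = sp.symbols('phi x y')
q = sp.Matrix([phi, x, y])
g1 = sp.Matrix([0, sp.cos(phi), sp.sin(phi)])   # forward motion vector field
g2 = sp.Matrix([1, 0, 0])                       # spin-in-place vector field

# [g1, g2] = (dg2/dq) g1 - (dg1/dq) g2
bracket = g2.jacobian(q) * g1 - g1.jacobian(q) * g2
print(bracket.T)   # -> Matrix([[0, sin(phi), -cos(phi)]])
```

The result (0, sin phi, -cos phi) is the sideways-translation vector field from the transcript: the bracket motion "breaks" the no-sideways-sliding constraint.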
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_132_Omnidirectional_Wheeled_Mobile_Robots_Part_1_of_2.txt | Wheeled mobile robots employ either conventional wheels, like this unicycle wheel, that do not allow sideways sliding, or wheels that allow sideways sliding through the use of rollers around the rim of the wheel, such as the omniwheel and the mecanum wheel. While it's possible to build omnidirectional mobile robots using conventional wheels, by appropriately steering each wheel, often omnidirectional mobile robots are built using unsteered omniwheels or mecanum wheels. I'll focus on robots using omniwheels and mecanum wheels in this video. This image shows a mobile robot with three omniwheels. This is a schematic top view of the robot. Each wheel is controlled by driving the wheel forward or backward, and it is assumed that the wheels do not slip or skid in the driving direction. The rollers on the wheel allow free sliding of the wheel in the orthogonal direction. This mobile base has four mecanum wheels, which do not slip in the driving direction but allow free sliding at an angle of 45 degrees relative to the driving direction. The principle behind the omniwheel and the mecanum wheel is the same, but they differ in the direction they allow free sliding. This video addresses the following question: given a desired chassis velocity, what should the driving speeds of the wheels be? To answer that question, let's focus on a single wheel, and develop a model that applies to omniwheels or mecanum wheels. First we define a frame {b} fixed to the robot's chassis. The center of wheel i is at (x_i,y_i), and its forward driving direction, the direction it rolls without slipping, is at an angle beta_i relative to the x_b-axis. The rollers around the rim of the wheel allow free sliding at an angle gamma_i relative to the direction perpendicular to the driving direction. gamma_i is 0 degrees for an omniwheel and 45 degrees for a mecanum wheel. 
With these definitions, we can calculate the wheel driving speed u_i, which is the rotational speed of the motor attached to the wheel. We'll build up to the result. First, we define the linear velocity at the center of the wheel, as indicated by the vector shown in green. This is the sum of the driving velocity and the free-sliding velocity. This linear velocity is derived from the body twist V_b, and it depends on the position of the wheel in the {b} frame. We then transform this linear velocity to a frame fixed to the wheel. This linear velocity is the vector sum of the driving velocity and the free-sliding velocity, so we can decompose the wheel velocity into its sliding velocity and driving velocity components. A little geometry shows that the driving component can be calculated by taking the dot product of the wheel velocity with the vector (1, tangent gamma_i). Finally, to convert the linear driving velocity to a rotational speed for the wheel, we divide by r_i, where r_i is the radius of the wheel. The final result is a 1-by-3 row vector times the twist V_b. We call this row vector h_i, and more specifically h_i of zero, for reasons that will become clear shortly. We can stack the h_i row vectors for the m wheels to create an m-by-3 matrix called H of zero. Then the vector of wheel velocities for a given chassis twist is calculated as u equals H-of-zero times V_b. This procedure only works if H-of-zero is full rank, rank 3. We can apply this kinematic modeling to a robot with 3 omniwheels at the corners of a triangle. The H matrix is 3-by-3, as shown here. r is the radius of the wheels and d is the distance of the wheels from the center of the triangle. We can also apply our modeling to a robot with 4 mecanum wheels. The wheels are configured so that their sliding directions are not all aligned. This is necessary for the H matrix to be full rank. The H matrix is 4-by-3, as shown here. 
The length l is the x-distance from the {b} frame to the wheels and the width w is the y-distance from the {b} frame to the wheels. The fact that this matrix is not square means that an arbitrary choice of wheel speeds will cause skidding of the wheels in the drive direction. To avoid skidding, the wheel speeds must be chosen on a 3-dimensional surface in the 4-dimensional wheel speed space, as determined by the H matrix. This is unlike the 3-omniwheel robot, where we can choose the wheel speeds arbitrarily without causing skidding. Let's use the H matrix to drive a robot with 4 mecanum wheels. The H-matrix tells us that forward-backward motion in the body x-direction requires all wheels to have the same speed, as shown in this animation. If the desired motion is a pure rotation in the body frame, the wheels on the same side should have the same speed. If the desired motion is sideways, in the body y-direction, then wheels on opposite corners should have the same speed. In summary, the wheel speeds equal the H-of-zero matrix times the twist V_b, provided H-of-zero is full rank. Sometimes it's convenient to calculate the wheel speeds in terms of q-dot, the rate of change of the coordinates phi, x, and y. To do this, we replace V_b by a rotation matrix times q-dot, where the rotation matrix transforms q-dot to V_b. We call H-of-zero times the rotation matrix the matrix H-of-phi, giving us the relationship u equals H-of-phi times q-dot. With our kinematic modeling of omnidirectional robots complete, in the next video I will address motion planning and control. |
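The 4-by-3 H-of-zero matrix "shown here" in the video is not reproduced in this transcript; as a sketch, the standard mecanum model gives the matrix below, under an assumed wheel numbering that goes around the chassis. The twist ordering is V_b = (omega_bz, v_bx, v_by), and r, l, w are the wheel radius and the x- and y-distances to the wheels.

```python
import numpy as np

# Assumed H(0) for a 4-mecanum-wheel base (standard model; wheel numbering
# is an assumption). Rows are wheels, columns multiply (omega_bz, v_bx, v_by).
r, l, w = 0.05, 0.3, 0.2
H0 = (1.0 / r) * np.array([[-l - w, 1, -1],
                           [ l + w, 1,  1],
                           [ l + w, 1, -1],
                           [-l - w, 1,  1]])

u_forward  = H0 @ np.array([0.0, 1.0, 0.0])  # forward: all wheel speeds equal
u_sideways = H0 @ np.array([0.0, 0.0, 1.0])  # sideways: opposite corners match
u_spin     = H0 @ np.array([1.0, 0.0, 0.0])  # rotation: same-side wheels match
print(u_forward, u_sideways, u_spin)
```

The three products reproduce the qualitative behavior in the animations: equal speeds for forward motion, matched opposite corners for sideways motion, and matched sides for pure rotation.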
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_93_Polynomial_Via_Point_Trajectories.txt | In the last two videos we learned how to define straight-line paths and then time scale them to get trajectories. If we want more flexibility to design the shape of the path, as well as the speed with which it is executed, we could specify a set of configurations through which we would like the robot to transit. These configurations are called via points. We also specify the times at which the robot should achieve each of these via points. We then solve for a smooth trajectory that passes through the via points at the specified times. The choice of the via points and times allows us to shape the path and trajectory. In this case, we solve directly for a trajectory; we do not first find a path and then time scale it. Let's consider motion in an n-dimensional joint space. For joint i, moving between via points j and j-plus-one, we could define the motion as a third-order polynomial of time. We then apply four terminal constraints, the initial and final position and the initial and final velocity of joint i, to solve for the four coefficients of the polynomial. This is third-order polynomial interpolation with specified via times and velocities. This figure shows a path designed for a two-joint robot using four via points: the start point, the end point, and two other vias. Each via point has the time that the robot passes through the configuration as well as the velocity at that time. The velocity is indicated by the dashed arrows. Each segment between via points, for each degree of freedom, has four coefficients and four terminal constraints, which allows us to solve exactly for the trajectories between via points. The tangent of the path has to be aligned with the specified velocity at each via point, so we can use the velocities at the via points to change the shape of the path. These time plots show the position and velocity of each joint during the trajectory. 
You can see that the positions and velocities are continuous at the via points, but the acceleration is not. Discontinuity in the acceleration may not be desirable. Also, it may be cumbersome to have to specify the velocity at each via point. Therefore, another solution is to leave the velocities at the via points free, but to constrain the velocity before and after a via point, and the acceleration before and after a via point, to be equal. This is third-order polynomial interpolation with specified via times only. There are other ways to shape a path using via points or control points. In particular, with B-splines, the path does not pass exactly through the control points, but the path is guaranteed to remain in the convex hull of the control points, unlike the path you see here. This property ensures that the path does not violate joint limits. |
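Solving for the four cubic coefficients on one segment is a small linear system: evaluate the polynomial and its derivative at the segment's start and end times, and invert. A sketch for one joint, with illustrative values:

```python
import numpy as np

# Four coefficients of p(t) = a0 + a1 t + a2 t^2 + a3 t^3 from four
# terminal constraints: position and velocity at times t0 and tf.
def cubic_coeffs(t0, tf, p0, pf, v0, vf):
    A = np.array([[1.0, t0, t0**2,    t0**3],   # p(t0)
                  [0.0, 1.0, 2*t0, 3*t0**2],    # p-dot(t0)
                  [1.0, tf, tf**2,    tf**3],   # p(tf)
                  [0.0, 1.0, 2*tf, 3*tf**2]])   # p-dot(tf)
    return np.linalg.solve(A, np.array([p0, v0, pf, vf]))

# rest-to-rest segment from 0 to 1 over two seconds
a = cubic_coeffs(0.0, 2.0, 0.0, 1.0, 0.0, 0.0)
p_mid = a @ np.array([1.0, 1.0, 1.0, 1.0])   # p at t = 1, the segment midpoint
```

For a rest-to-rest segment the midpoint position is exactly halfway, and stacking one such solve per joint per segment gives the full interpolated trajectory.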
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_114_Motion_Control_with_Torque_or_Force_Inputs_Part_1_of_3.txt | Starting in this video, we assume that the robot controller commands forces or torques at the joints, not velocities as in the previous videos. Because the controls are forces or torques, the robot's dynamics must be taken into account. We will again begin by assuming a single-joint robot. It is easy to generalize the results to multi-joint robots. Here is a single-joint robot in gravity. The solid circle is the revolute joint, and the distance between the joint and the center of mass of the link is r. The gravitational force pulling down on the robot is m g. The dynamic equation of the robot is the joint torque tau equals M times theta-double-dot, where M is the scalar inertia of the link about the revolute joint, plus m g r cosine of theta, the gravitational torque, plus b theta-dot, where b is a positive viscous friction coefficient. Sometimes I will lump together the gravity and friction terms to get tau equals M theta-double-dot plus h of (theta, theta-dot). Perhaps the most widely used feedback control law is Proportional-Integral-Derivative control, also known as PID control. The controller output tau is K_p times the joint position error plus K_i times the integral of the error plus K_d times the derivative of the error. Evaluating the derivative of the error requires a speed sensor for the joint. This speed sensor is usually simulated by numerically differencing the position readings from a joint encoder. This is a block diagram representation of a PID control system. P control and PI control, as we have already seen, are variants of PID control where one or two of the control gains are set to zero. Another common variant of PID control, particularly in robotics, is PD control, where the integral gain K_i is set to zero. 
Let's begin by studying PD control for the case where gravity is equal to zero, perhaps because the link is in a horizontal plane. Let's also focus on setpoint control, where the desired joint position is constant. If we equate the joint dynamics and the control torque, and substitute in theta_e-dot equals minus theta-dot and theta_e-double-dot equals minus theta-double-dot, we get this error dynamics. Dividing by M, we get this standard second-order form, where the damping ratio zeta is b plus K_d over 2 times the square root of K_p M and the natural frequency omega_n is the square root of K_p over M. Notice that the virtual damper K_d plays the same role as the viscous friction b. We should choose K_d and K_p to at least make the error dynamics stable. In other words, the roots of the characteristic equation must have a negative real component, which is assured if K_d is greater than negative b and K_p is greater than zero. If these conditions are satisfied, then because the differential equation is homogeneous, the steady-state error is zero. We should also use what we learned about the transient response of second-order systems to place the roots to give a fast settling time and no overshoot. In particular, we could choose K_d and K_p to achieve critical damping, and otherwise choose the gains large enough to get a fast response. The gains shouldn't be too large, though, as this can result in rapid torque changes, sometimes called chattering, inducing unwanted vibrations. Also, unmodeled dynamics, actuator limits, sensor errors, and the fact that the control law is implemented in discrete time, not continuous time, could actually lead to instability of the robot if the gains are large. In the next video, we'll consider the full PID controller, where the gain K_i is not zero. |
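The gain-selection recipe above can be made concrete. Using the formulas zeta = (b + K_d) / (2 sqrt(K_p M)) and omega_n = sqrt(K_p / M), this sketch picks gains for critical damping (zeta = 1) at an illustrative natural frequency, then integrates the error dynamics to check that the error decays; M, b, and omega_n are made-up example values.

```python
import numpy as np

# PD gains for critical damping at a chosen natural frequency
# (M, b, omega_n are illustrative values, not from the video).
M, b = 0.5, 0.1                    # link inertia and viscous friction
omega_n = 20.0                     # desired natural frequency, rad/s

Kp = M * omega_n**2                # from omega_n = sqrt(Kp / M)
Kd = 2.0 * np.sqrt(Kp * M) - b     # from zeta = (b + Kd) / (2 sqrt(Kp M)) = 1

# Integrate the homogeneous error dynamics M e-ddot + (b + Kd) e-dot + Kp e = 0
e, edot, dt = 1.0, 0.0, 1e-4
for _ in range(int(1.0 / dt)):     # simulate one second
    eddot = -((b + Kd) * edot + Kp * e) / M
    edot += eddot * dt
    e += edot * dt
print(abs(e))   # near zero: the setpoint error has decayed without overshoot
```

Note that the virtual damping Kd adds to the physical friction b, which is why b is subtracted when solving for Kd.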
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_105_Sampling_Methods_for_Motion_Planning_Part_2_of_2.txt | Until now, we have mostly studied collision-free paths for robots with velocity controls for each degree of freedom. In this video, I will describe a popular sampling-based motion planner that applies to robots with arbitrary dynamics of the form x-dot equals f of (x,u). The planner is based on rapidly exploring random trees, or RRTs for short. Let's start with a partially formed search tree T. x_start is the initial state, capital X is the state space, and any state in X_goal is an acceptable final state. The RRT planner chooses a random state x_samp, finds the nearest node x_nearest already in the tree, finds a collision-free motion from x_nearest to x_new, which may or may not be the same as x_samp, but at least is in the direction of x_samp, then updates the search tree. The process repeats until a node is created inside the goal region X_goal. Every so often the sample x_samp should be chosen inside of X_goal to try to complete the planning problem. This video shows an RRT generated for a 2-dimensional space. Samples are chosen uniformly randomly from the space, and the motion planner from the nearest node goes directly toward the sampled node, up to a maximum step size. The randomly chosen samples "pull" the tree to explore the state space. Compare this to a random walk, where at each step the tree is grown from a randomly chosen node by the maximum step size in a randomly chosen direction. After the same number of nodes, the random walk has not explored very much of the state space. This is pseudocode for the RRT algorithm. In line 3, we sample from the state space. In line 4, we find the nearest node already in the tree. In line 5, we use a local planner to find a motion from this nearest node to a state closer to the sampled state. In lines 6 and 7, a new edge is added to the tree if the motion from line 5 is collision free. 
Finally, in lines 8 and 9, we check to see if the new node is in the goal region, and if so, the planner has succeeded. The plan is reconstructed by following the sequence of parent nodes backward from the node in the goal region. Let's focus on lines 3, 4, and 5, as they offer a lot of flexibility to customize the algorithm, and they are critical to the efficiency of the algorithm. The sampler in line 3 could choose states uniformly randomly from the state space X. But other options are possible, including deterministic sampling schemes. For example, Van der Corput sampling could be used on a one-dimensional state space. The Van der Corput sequence is a deterministic sequence that jumps around the interval, providing a progressively finer, approximately uniform, sampling of the interval. This is attractive, since it results in something like multi-resolution sampling that increases in resolution until a solution is found. The generalization of the Van der Corput sequence to higher-dimensional spaces is called the Halton sequence. The algorithm designer can choose the sampling algorithm as best suited for the task. The sampler should also occasionally sample states in the goal region, to try to complete the planning process. Returning to the algorithm, line 4 chooses the node in the tree that is closest to the new sample. Various data structures and algorithms can be employed to make this operation efficient, but we first have to have a sensible definition of the distance between two states. For example, if the configuration consists of linear and angular coordinates, how do we compare a distance of one radian to a distance of one meter? As another example, this blue car represents a sampled configuration of a car and these white cars represent configurations of nodes already in the tree. Which of these configurations is closest to the sample? 
I would say that this configuration is probably the closest, since a path to the sample probably takes less time than paths from the other configurations, considering the car's motion constraints. Returning again to the algorithm, line 5 is the local planner that finds a motion from a node in the tree to a new state that is closer to the sampled state. The algorithm designer has a lot of flexibility in how to choose this local planner, but it should run fast. Line 6 checks whether the planned motion is collision-free, so we don't need to worry about collisions in the local planner. The simplest local planner is one that returns a straight-line path for fully actuated kinematic robots with velocities as the inputs. For robots with more general dynamics x-dot equals f of (x,u), we can discretize the set of controls, integrate each one forward a fixed amount of time, and choose the new state x_new as the one that comes closest to the sampled state. For example, a car-like robot has 2 controls: the linear velocity v and the angular velocity omega. For a car with a bounded linear speed and a bounded turning radius, the bounds on the controls look like this bowtie. We can discretize this control set as 6 velocities, including forward motion, backward motion, and turns at the tightest turning radius. The integrals of these controls are shown here. If the sampled state is here, then the closest integrated state is shown here. If this path is collision free, then x_new will be added to the RRT. This local planner is attractive for its simplicity and generality, since it can be applied to any robot. Finally, we could use a local planner specifically tailored to the robot. For a car-like robot, Reeds-Shepp curves are paths that minimize the path-length between two configurations, without considering obstacles. Reeds-Shepp curves are good candidates for local plans. RRTs are simple to code and customizable, and variations of RRTs have been used for many different applications. 
They are typically designed to try to solve complex motion planning problems quickly, but without regard to the quality of the solution. If you wanted to improve the quality of the solution, you could continue to grow the RRT after an initial solution is found, and keep the best solution, as you see in this figure. The path indicated is the best path found so far to the green goal region after generating 5000 nodes in the tree. This process does not result in solutions that tend to an optimal solution, however. A modification to the basic algorithm, called RRT-star, continually rewires the search tree so that the solution tends to the optimal solution as the number of nodes in the tree goes to infinity. RRT-star cannot be applied to arbitrary robot dynamics, however. Sampling-based methods such as RRTs, PRMs, and related algorithms are popular because of their simplicity and their performance on some complex motion planning problems. While implementations continue to get faster, traditionally they have been used as offline planners. In the next video, I will introduce a method for real-time trajectory generation based on artificial potential fields. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_82_Dynamics_of_a_Single_Rigid_Body_Part_1_of_2.txt | Starting in Chapter 8.2, we learn the Newton-Euler method for deriving the dynamics of a robot. Whereas the Lagrangian formulation starts with the potential and kinetic energy and applies a variational approach based on derivatives, the Newton-Euler method is derived directly from f equals m-a for the rigid bodies that make up the robot. One advantage of this approach is that we can derive an efficient recursive algorithm for computing the dynamics of open-chain robots. Let's start with the dynamics for a single rigid body. You can think of a rigid body as a collection of point masses that are rigidly attached to each other. We define the center of mass to be the unique point at the centroid of the mass distribution, and we fix a frame {b} to the rigid body at the center of mass. The definition of the center of mass is that the sum of the mass-weighted vectors to the point masses in the {b} frame is zero. We define the twist of the rigid body, expressed in the {b} frame, as V_b, consisting of an angular velocity omega_b and a linear velocity v_b. Then the linear velocity of mass i is p_i-dot equals v_b plus omega_b cross p_i. We define the acceleration of the rigid body to be V_b-dot, and using this we can take the derivative of p_i-dot to get p_i-double-dot. Notice that the last term is omega-b cross the time derivative of p_i, so we can substitute in the expression above to get this equation for p_i-double-dot. Notice that it has velocity-product terms in the form of omega_b cross v_b and omega_b cross omega_b cross p_i. Now, taking as a given that f equals m-a, we get the force f_i needed to move the mass m_i using the expression for p_i-double-dot we just derived. The corresponding moment in the {b} frame is m_i equals p_i cross f_i, written here with our bracket notation. 
The total wrench F_b, consisting of a moment m_b and a force f_b, needed to accelerate the body with acceleration V_b-dot when it is moving with a twist V_b is just the sum of forces and moments needed for the individual point masses. If we define m to be the total mass of the body, then using the fact that we defined the frame {b} to be at the center of mass of the body, the total force f_b is just m times v_b-dot plus bracket-omega_b times v_b. The total moment m_b is I_b times omega_b-dot plus bracket-omega_b times I_b times omega_b, where I_b is called the inertia matrix. The inertia matrix is the negative of the sum of each mass times the bracket of its position squared. We can write the 3-by-3 inertia matrix in terms of its nine components, with the diagonal terms Ixx, Iyy, and Izz, as well as the off-diagonal components Ixy, Ixz, and Iyz. These components are calculated as shown here. As an example, the top left element of the matrix, Ixx, is called the moment of inertia about the x-axis. This inertia gets larger if the mass m_i is further from the x-axis, where the square of the distance from the x-axis is given by y_i-squared plus z_i-squared. Basically, if the mass of the body is far from the x-axis, it takes more torque to accelerate about the x-axis. The off-diagonal elements are called products of inertia. Now imagine that we replace the individual point masses of the rigid body by a continuous mass density rho as a function of position. Then the elements of the inertia matrix are calculated as volume integrals instead of summations, but conceptually there is no difference. In the book you can see some simplified formulas for calculating the inertia matrix of some common bodies. Just like the mass matrix for a robot, the inertia matrix I_b for a rigid body is symmetric and positive definite. Also similar to the mass matrix for a robot, the kinetic energy for a rotating rigid body is one-half omega_b-transpose times I_b times omega_b. 
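The definition of the inertia matrix, I_b equals the negative of the sum of each mass times the bracket of its position squared, translates directly to code. Here is a minimal numpy sketch; the two-point-mass example in the test is my own, chosen so the answer is easy to verify by hand.

```python
import numpy as np

def bracket(p):
    # 3x3 skew-symmetric matrix [p], satisfying bracket(p) @ v == np.cross(p, v)
    return np.array([[    0, -p[2],  p[1]],
                     [ p[2],     0, -p[0]],
                     [-p[1],  p[0],     0]])

def inertia_matrix(masses, points):
    # I_b = - sum_i m_i [p_i]^2, valid when the frame {b} is at the
    # center of mass of the collection of point masses.
    I = np.zeros((3, 3))
    for m, p in zip(masses, points):
        B = bracket(np.asarray(p, dtype=float))
        I -= m * (B @ B)
    return I
```

For example, two unit masses at (0, 1, 0) and (0, -1, 0) give Ixx = Izz = 2, since each mass is at distance 1 from the x- and z-axes, and Iyy = 0, since both masses lie on the y-axis.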
Now let's consider a particular rigid body, an ellipsoid, with a frame {b} at its center of mass and the inertia matrix I_b shown here. Now consider the same body but with a frame {p}, also at the center of mass, with a different orientation. In this frame {p}, the inertia matrix has a particularly simple form, with all off-diagonal elements zero. When the off-diagonal elements are zero, the {p}-frame coordinate axes are called principal axes of inertia, and the scalar inertias about those axes are called the principal moments of inertia. You can find the principal axes of inertia from the inertia matrix for any frame {b} at the center of mass by evaluating the eigenvectors and eigenvalues of the inertia matrix I_b. The principal axes of inertia are aligned with the eigenvectors, which are expressed in the {b} frame, and the principal moments of inertia are the eigenvalues. The rotation matrix expressing the {p} frame in the {b} frame, R_bp, has the eigenvectors as its columns. If we equate the kinetic energy expressed in the {p} and {b} frames, we can calculate the inertia matrix in the {p} frame as R_bp-transpose times I_b times R_bp. When possible, it is preferable to choose the body frame to be aligned with the principal axes of inertia, to simplify the inertia matrix. This also simplifies the rotational equations of motion. For an arbitrary {b} frame at the center of mass, the rotational equations of motion are what we derived before. If the {b} frame is aligned with the principal axes of inertia, however, the equations can be expressed more simply, as you see here. This form involves many fewer multiplications, additions, and subtractions. To summarize, these are our equations of motion for a single rigid body. In the next video we will study these equations further, in preparation for using them for the dynamics of a robot. |
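The eigendecomposition step for finding the principal axes can be sketched in a few lines of numpy; the specific inertia matrix below is a made-up example. Note that numpy's eigh returns orthonormal eigenvectors, but a column's sign may need to be flipped if a proper rotation matrix (determinant +1) is required.

```python
import numpy as np

# Hypothetical inertia matrix in a frame {b} not aligned with the
# principal axes (symmetric and positive definite).
I_b = np.array([[ 4.0, -1.0,  0.0],
                [-1.0,  4.0,  0.0],
                [ 0.0,  0.0,  6.0]])

# Eigenvalues are the principal moments of inertia; the eigenvectors,
# expressed in {b}, are the principal axes and form the columns of R_bp.
eigvals, R_bp = np.linalg.eigh(I_b)

# Inertia in the principal-axis frame {p} is diagonal:
# I_p = R_bp^T I_b R_bp.
I_p = R_bp.T @ I_b @ R_bp
```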
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1213_Multiple_Contacts.txt | In the previous video we learned that a contact between two rigid bodies divides the space of twists of one body relative to another into two halves: a half-space of feasible relative twists and a half-space of relative twists that violate the rigid-body assumption. Twists on the dividing hyperplane result in rolling or sliding contact. In this video we study the case where a single rigid body is subject to multiple contacts. This is a simple example from the previous video. The hexagon can translate in the plane, but not rotate, so its twist has only two linear components. The hexagon is in contact with the stationary triangle. The contact defines the line of twists S, which separates the half-space of twists that break contact from the half-space of twists that cause penetration. The zero twist implies no sliding, called a rolling contact. If we place a different contact, it defines a different dividing line of sliding twists. If both contacts are present, the two half-spaces of feasible twists intersect to create a cone of feasible twists. Twists along the top bounding ray are labeled SB, because these twists cause sliding along contact 1 and breaking at contact 2. Twists along the bottom bounding ray are labeled BS, because they cause breaking at contact 1 and sliding at contact 2. A zero twist is labeled RR, because the contacts are maintained and there is no sliding. Twists strictly inside the cone are labeled BB, as they cause breaking at both contacts. The concatenation of the labels for each contact is called the contact mode. The two contacts in this example allow four possible contact modes. If each fixture contacting the body is stationary, then each contact constraint separating the penetrating and feasible twists passes through the origin of the body's twist space. The intersection of these feasible half-spaces creates a polyhedral convex cone. 
It is polyhedral, because faces of the cone are flat lines, planes, or hyperplanes, depending on the dimension of the twist space, and it is convex because the line segment between any two points in the cone is also in the cone. Now, beginning with our two contacts, let's add a third contact. If we intersect the feasible twist half-spaces, we find that the only allowed twist is the zero twist. The object is immobilized, and all contacts have the label R. If the third fixture is set into motion, then the contact's constraint surface does not pass through the origin. Instead it passes through the twist of the moving fixture, V_3. Intersecting the half-spaces for the three contacts, we get this triangular region of feasible twists for the hexagon. Any twist strictly inside the triangle results in breaking contact at all contacts. There are also six other possible contact modes, depending on the hexagon's twist. Since the sliding constraint surfaces do not all pass through the origin, the set of feasible twists is no longer a cone, but a more general polyhedral convex set. The examples we just looked at were for the case of a planar body that can only translate, to make it easy to draw the feasible twist regions, but the same principles apply when the twist space is 3-dimensional for a general planar body or 6-dimensional for a general spatial body. In summary, if the body A is in contact with moving bodies, the set of feasible twists is the polyhedral convex set satisfying each of the half-space constraints. If we assume that all external fixtures are stationary, then the set of feasible twists is a polyhedral convex cone. This image shows an example of a polyhedral convex cone for three stationary fixtures acting on a planar body. Each contact defines a wrench and therefore a constraint plane in the three-dimensional twist space. Those planes form the faces of the cone. Twists strictly inside the cone cause breaking at all contacts. 
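For the translation-only planar examples above, each stationary contact reduces to a half-plane constraint n_i dot V greater than or equal to zero on the twist, and the contact mode is just the concatenation of the per-contact labels. A minimal sketch, with function name and tolerance my own:

```python
import numpy as np

def contact_mode(twist, normals, tol=1e-9):
    # For a translating planar body against stationary fixtures, each contact
    # normal n_i (pointing into the body) constrains the twist V:
    #   n_i . V > 0  -> breaking (B)
    #   n_i . V = 0  -> sliding (S), or rolling (R) if the twist is zero
    #   n_i . V < 0  -> penetration (infeasible)
    labels = []
    for n in normals:
        d = float(np.dot(n, twist))
        if d > tol:
            labels.append('B')
        elif d < -tol:
            return None  # violates the rigid-body impenetrability constraint
        else:
            labels.append('S' if np.linalg.norm(twist) > tol else 'R')
    return ''.join(labels)
```

With two contacts whose normals are (0, 1) and (1, 0), the twist (1, 1) breaks both contacts (mode BB), (1, 0) slides on the first and breaks the second (SB), and the zero twist gives RR, matching the four modes in the hexagon example.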
Twists on a face of the cone cause sliding or rolling at one of the contacts. Twists on an edge of the cone cause sliding or rolling at two of the contacts. In the case that the only feasible twist is the zero twist, the body is immobilized. We call this form closure, the subject of a later video. In the next video I'll introduce a representation of a planar twist called the center of rotation. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1222_Planar_Graphical_Methods.txt | A planar wrench has 3 components: moment about the z-axis out of the plane and linear forces in the x and y directions. If the wrench has a nonzero linear component, then it can be represented as an arrow in the plane, where the tail of the arrow is at (x,y) and the arrow tip is at (x + f_x, y + f_y). The point (x,y) must satisfy the condition that m_z equals x times f_y minus y times f_x. But since this is only one constraint on the two coordinates (x,y), we could place (x,y) at any point on the line of action of the arrow and get an equivalent representation of the wrench F. To add two wrenches whose lines of action intersect, we can simply slide the arrows along their lines of action until the tails are coincident. Then we use the parallelogram vector sum to get the arrow representation of the new wrench, F_1 plus F_2. If a wrench F represents a contact force, such as the edge of a friction cone, we often need to represent the set of all nonnegative scalings of that contact force. A convenient graphical method for representing all nonnegative scalings of a wrench is to label all points to the left of the wrench with a plus sign. These are the points about which the scaled wrench cannot create a negative moment. Similarly, all points to the right of the wrench are labeled with a minus sign, since the scaled wrench cannot create a positive moment about these points. Finally, all points on the line of action are labeled with a plus-minus sign. These labels of all the points in the plane are called moment labels. Moment labels allow a convenient graphical representation of planar wrench cones. This is the representation for the nonnegative scaling of a single wrench. If we add a second wrench, we simply intersect the labels for the individual wrenches. Points in the plane that have no consistent labeling lose their labels. 
In this case, we have a region labeled plus, a region labeled minus, and a point labeled plus-minus. This is a representation of the wrench cone of the positive span of the wrenches F_1 and F_2. This cone could be viewed as the wrench cone for a single frictional contact. These moment labels are properly interpreted as a single convex connected region. As with the rotation center representation, the plus and minus regions are connected at infinity. If we add a third wrench F_3, then the consistently labeled region is just a small triangle labeled plus. This representation means that the positive span of the three wrenches can create any line of force that passes in a counterclockwise direction about the triangle. For example, this wrench is in the positive span of F_1, F_2, and F_3, since it makes positive moment about all points in the triangle. This wrench, which just passes through a vertex of the triangle, can also be generated. Remember that the plus sign just means that the combination of wrenches cannot make negative moment about the point, and this arrow does not make negative moment about the vertex. On the other hand, the positive span does not include this wrench, which clearly makes negative moment about the entire region labeled plus. Also, the positive span does not include any wrench that passes through the labeled region. In short, the moment-labeling representation of the wrench cone due to a friction cone looks like this, and it is easy to graphically combine multiple wrench cones to get a graphical representation of a composite wrench cone. In the next video we'll use our understanding of contact wrench cones to study force closure. 
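Moment labels can also be checked numerically: a test point keeps the label plus only if no generator wrench makes negative moment about it, minus only if none makes positive moment, and plus-minus if every generator makes zero moment. A small sketch, assuming each wrench is encoded by a point p0 on its line of action and a force vector f (an encoding of my own choosing):

```python
def moment_label(q, wrenches, tol=1e-9):
    # The moment of a force f acting along a line through p0, taken about
    # the test point q, is the planar cross product (p0 - q) x f.
    moments = [(p0[0] - q[0]) * f[1] - (p0[1] - q[1]) * f[0]
               for (p0, f) in wrenches]
    plus = all(m >= -tol for m in moments)   # no generator makes negative moment
    minus = all(m <= tol for m in moments)   # no generator makes positive moment
    if plus and minus:
        return '+-'
    if plus:
        return '+'
    if minus:
        return '-'
    return ''  # inconsistent labels: the point loses its label
```

For a single wrench through the origin pointing in the +x direction, points to its left (positive y) are labeled plus, points to its right are labeled minus, and points on its line of action are labeled plus-minus, as described above.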
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_11222_SecondOrder_Error_Dynamics.txt | Let's continue with our mass-spring-damper analogy for the error dynamics of a controlled single-joint robot. Design of the controller allows us to alter the spring k and damper b, and therefore how the error theta_e evolves. We divide the second-order differential equation by the leading coefficient m to get this form, where the leading coefficient is 1. Assuming m, b, and k are all positive, the error dynamics are stable, and the error will decay to zero. For stable second-order differential equations, it is customary to define the natural frequency omega_n to be equal to the square root of k over m and the damping ratio zeta to be b over 2 times the square root of k m. Then we can rewrite the differential equation as theta_e-double-dot plus 2 zeta omega_n theta_e-dot plus omega_n-squared theta_e equals zero. This is the standard form for a stable, second-order homogeneous linear differential equation. The characteristic equation of this differential equation is the quadratic equation s-squared plus 2 zeta omega_n s plus omega_n-squared equals zero. The roots of this quadratic equation are minus zeta omega_n plus-or-minus omega_n times the square root of zeta-squared minus 1. Since zeta and omega_n are real numbers, the two roots are real values if the quantity inside the square root is greater than or equal to zero. In other words, s_1 and s_2 are real numbers if the damping ratio zeta is greater than or equal to 1. If zeta is less than 1, the square root produces an imaginary number, and the roots are complex conjugates. We will consider three cases, depending on the damping ratio zeta. If the damping ratio zeta is greater than 1, we say that the error dynamics are overdamped. If zeta is equal to 1, the error dynamics are critically damped. Finally, if zeta is less than 1, the error dynamics are underdamped. 
Let's look at the details of the error response for each of these cases. First, for the overdamped case, the error response that solves the differential equation is the sum of two decaying exponentials, where the roots s_1 and s_2 are shown here. We can plot the roots in the complex plane, defined by the real and imaginary axes. Since the error dynamics are stable, the roots have a negative real component, and therefore lie in the left-half plane. Since these roots are real numbers, they lie on the real axis. The time constants of the two corresponding decaying exponentials are the negative inverses of s_1 and s_2. We can plot the unit step error response by solving for c_1 and c_2 using the initial conditions theta_e equal to 1 and theta_e-dot equal to 0. The sum of the two decaying exponentials tends to be dominated by the exponential corresponding to the less negative root. We call this the "slow" root since its exponential decays more slowly. If the error dynamics are critically damped, then the roots are identical, at minus omega_n, and the error response takes this form, where the time constant of the decaying exponential is 1 over omega_n. Solving for c_1 and c_2, the unit step error response looks like this. Again, there is no overshoot or oscillation. Unlike the overdamped response, neither of the roots is "slower" than the other. As with the first-order response, the 2 percent settling time is approximately 4 times the time constant of the exponential. Finally, the error dynamics are underdamped if the damping ratio is less than 1. In this case, the error response is a decaying sinusoid. The time constant of the decay is 1 over zeta omega_n. The frequency of the sinusoid is the damped natural frequency omega_d, which equals the natural frequency times the square-root of 1 minus zeta-squared. The roots are complex conjugates, with a real value minus zeta omega_n and an imaginary value plus-or-minus j omega_d. This is the unit step error response. 
The 2 percent settling time is approximately 4 time constants. We can calculate the overshoot as e to the minus pi zeta over the square root of 1 minus zeta-squared and then express it as a percentage. Plotting the responses on top of each other, we see that if the two roots are complex conjugates in the left-half plane, we get an underdamped decaying sinusoidal error response. If the two roots are real but not equal, we get an overdamped response dominated by the slow root. If the two roots are coincident, we get a critically damped response, which in this case converges faster to zero than the overdamped response because the roots are faster than the slow root of the overdamped response. In summary, if any of the roots lie in the right-half plane, the controlled system is unstable, and the error grows exponentially at a rate that increases the further the root is to the right. Similarly, the further the roots are to the left, the faster the error decays. Finally, if the roots are not on the real axis, the error response will exhibit overshoot and oscillation that increases with the imaginary components of the roots. These observations hold for higher-order systems, too. The more negative the real portions of the roots of the characteristic equation, the faster any initial error decays. In the next video, we will finally begin applying what we've learned about linear error responses to the control of a robot. |
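The quantities in this summary are easy to compute directly from zeta and omega_n. Here is a small Python sketch (the function name is mine) returning the roots of the characteristic equation, the fractional overshoot, and the approximate 2 percent settling time:

```python
import math

def error_dynamics(zeta, omega_n):
    # Roots of s^2 + 2*zeta*omega_n*s + omega_n^2 = 0, plus the fractional
    # overshoot and an approximate 2% settling time (4 time constants).
    disc = zeta**2 - 1
    if disc >= 0:                        # overdamped or critically damped
        r = omega_n * math.sqrt(disc)
        roots = (-zeta*omega_n + r, -zeta*omega_n - r)
        overshoot = 0.0                  # no overshoot without complex roots
        settling = 4.0 / abs(roots[0])   # dominated by the "slow" root
    else:                                # underdamped
        omega_d = omega_n * math.sqrt(1 - zeta**2)  # damped natural frequency
        roots = (complex(-zeta*omega_n, omega_d),
                 complex(-zeta*omega_n, -omega_d))
        overshoot = math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2))
        settling = 4.0 / (zeta * omega_n)
    return roots, overshoot, settling
```

For example, a damping ratio of 0.5 gives the classic overshoot of about 16.3 percent, while any zeta greater than or equal to 1 gives no overshoot at all.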
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_62_Numerical_Inverse_Kinematics_Part_2_of_2.txt | In the previous video, we derived the Newton-Raphson numerical algorithm for inverse kinematics when the end-effector configuration is represented by a minimum set of coordinates x_d. In this final video of Chapter 6, we modify the algorithm so that the desired end-effector configuration is described by the transformation matrix T_sd. We also need to replace the error vector e by something else. The proper way to interpret e is as a velocity which, if the end-effector followed it for unit time, it would move from the current configuration f-of-theta_i to the desired configuration x_d. For our modified algorithm, we need to find the twist that would take the end-effector {b} frame to the desired frame in unit time. Let's illustrate using these three frames. The {s}-frame is fixed in space. The {d} frame is the desired configuration of the end-effector, and it is represented in the {s}-frame as T_sd. The {b}-frame is the configuration of the end-effector if the joint vector is theta_i, given by the forward kinematics T_sb-of-theta_i. We can calculate the configuration of the frame {d} relative to the frame {b} as T_bd equals T_sb-inverse times T_sd. We are looking for the twist that moves the {b} frame to the {d} frame in unit time, and the matrix representation of this twist is just bracket-V_b equals log of T_bd. If the {b} frame follows the body twist V_b for one unit of time, it will end up at the desired configuration {d}. Thus V_b serves a similar role as the error vector e in the coordinate version of the algorithm. Now we can write the modified algorithm as shown here. We begin with an initial guess theta_zero, then we calculate the matrix representation of the body twist V_b that moves the {b} frame to the {d} frame. 
If the angular component omega_b and the linear component v_b of the body twist are both small, then theta_zero is a good solution to the inverse kinematics problem. Otherwise, we update our joint vector guess by adding the pseudoinverse of the body Jacobian times the body twist V_b and repeat. We apply this algorithm to the inverse kinematics of a 4-joint RRRP arm. The desired end-effector configuration is illustrated by the frame {d}, but our initial joint vector guess theta-zero would put the end-effector frame {b} at the configuration shown. Let's animate the robot to move to the joint angles found at each of 4 iterations of the Newton-Raphson algorithm. This is iteration 1 ... then 2 ... then 3 ... then 4. After 4 iterations the remaining configuration error is imperceptible, and the numerical inverse kinematics algorithm has converged to a good solution. Of course in practice the robot would not actually move until it has calculated a solution; the animation here just visually demonstrates the iterations. Let's watch the animation one more time. We have been focusing on finding the joint positions that achieve a desired end-effector configuration, but in some cases we only need the joint velocities that achieve a desired end-effector twist. We call this the inverse velocity kinematics, where the desired twist V_d and the Jacobian J are expressed in the same frame, either the space frame {s} or the end-effector frame {b}. If the robot has more than 6 joints, the use of the pseudoinverse ensures that the sum of the squares of the elements of theta-dot is the smallest among all joint velocity vectors that achieve the desired twist. If the robot has fewer than 6 joints, meaning it cannot exactly achieve an arbitrary desired twist, the use of the pseudoinverse yields the joint velocities that minimize the sum of squares of the elements of the error between the actual twist and the desired twist. 
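The inverse velocity kinematics is essentially a one-liner with numpy's pseudoinverse. In this made-up example of a redundant planar arm (the Jacobian entries are arbitrary, chosen only so the answer is easy to check), J has full row rank, so pinv returns the exact, minimum-norm joint velocity:

```python
import numpy as np

# Hypothetical Jacobian of a redundant arm: a 2-dimensional task
# (tip linear velocity) driven by 3 joints.
J = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
V_d = np.array([1.0, 1.0])   # desired tip velocity

# Inverse velocity kinematics: theta-dot = pinv(J) V_d.
# Since J has full row rank, this joint velocity achieves V_d exactly and
# has the smallest sum of squared elements among all solutions.
theta_dot = np.linalg.pinv(J) @ V_d
```

For instance, theta-dot = (1, 0, 1) also achieves V_d here, but the pseudoinverse solution (1/3, 2/3, 1/3) has a smaller norm.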
The book describes other types of inverses that yield solutions minimizing other quantities. This concludes Chapter 6. With the forward kinematics of Chapter 4, the velocity kinematics of Chapter 5, and the inverse kinematics of Chapter 6, you are now prepared to design kinematic controllers for open-chain robots, as discussed in Chapter 11. But before we do that, in Chapter 7 we look at the forward kinematics, inverse kinematics, and velocity kinematics of closed-chain robots, which exhibit features not present in open chains. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1332_Controllability_of_Wheeled_Mobile_Robots_Part_4_of_4.txt | In the last video, we learned that the Lie bracket of two vector fields expresses their noncommutativity. This noncommutativity may allow approximate motion in directions not directly allowed by the control vector fields. The Lie bracket is, itself, a vector field. Therefore, we can take Lie brackets of Lie brackets. For example, we can take the Lie bracket of g_1 with the Lie bracket of g_1 and g_2. Higher-degree Lie brackets like this one correspond to higher-order terms in the Taylor expansions from the last video. We call the original vector fields Lie products of degree 1, the Lie bracket of two of the original vector fields a Lie product of degree 2, the Lie bracket of Lie products of degree 1 and 2 a Lie product of degree 3, and so on. The key idea behind testing for local controllability from a configuration q is to see if the Lie products of all degrees, evaluated at the configuration q, allow motion in every direction. We say that a set of vector fields satisfies the Lie algebra rank condition, or LARC, at a configuration q if their Lie products, of all degrees, span the n-dimensional space of feasible motions at the configuration q. With this definition, we can state the main theorem: Consider a control system q-dot equals g_1 times u_1 plus g_2 times u_2, etc., such that the vector fields satisfy the LARC at q. Then the system is small-time locally controllable from q if the control set U positively spans the m-dimensional control space, and it's small-time locally accessible from q if the control set spans, but does not positively span, the m-dimensional control space. Basically, a positively spanning control set allows motion forward and backward along vector fields, while a spanning control set may only allow unidirectional motion along the vector fields. Let's apply this test to our canonical nonholonomic mobile robot. 
The vector field g_1 corresponds to forward motion and the vector field g_2 corresponds to rotating in place. The Lie bracket of g_1 and g_2, which I'll call g_3, is zero, sine phi, minus cosine phi, a sideways parallel-parking motion. If we create a matrix whose columns are the three vector fields, we find that the determinant is 1. Since the determinant is nonzero, these 3 vector fields are linearly independent, and therefore they span the 3-dimensional space of velocities of the chassis. Therefore the LARC is satisfied at all configurations. If the robot is a car with a reverse gear, the control set U is a bowtie-shaped region, as we learned in an earlier video. This control set positively spans the 2-dimensional control space. Therefore, by the theorem, the car with a reverse gear is small-time locally controllable from all configurations. On the other hand, if the robot is a car without a reverse gear, the control set is only half of the bowtie-shaped region. This control set spans the control space, but does not positively span the control space. Therefore the car without a reverse gear is small-time locally accessible from all configurations, but it's not small-time locally controllable. Although we're usually only interested in the motion of the chassis, we could include other configuration variables in the description of the robot. For an upright rolling wheel, the full configuration is phi, x, y, and theta, where theta is the rolling angle of the wheel. If the radius of the wheel is r, then the forward motion vector field is g_1 equals zero, r cosine phi, r sine phi, 1, and the spin-in-place vector field is g_2 equals 1, zero, zero, zero. The degree-2 Lie bracket is g_3 equals zero, r sine phi, minus r cosine phi, zero, which corresponds to sliding sideways. 
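The Lie bracket formula, [g1, g2] = (dg2/dq) g1 - (dg1/dq) g2, can be checked numerically with finite-difference Jacobians. Here is a sketch for the canonical nonholonomic robot with q = (phi, x, y); the finite-difference step size and the test configuration are arbitrary choices of mine:

```python
import numpy as np

def jac(g, q, eps=1e-6):
    # Finite-difference Jacobian dg/dq of a vector field g at configuration q.
    n = len(q)
    J = np.zeros((n, n))
    for j in range(n):
        dq = np.zeros(n); dq[j] = eps
        J[:, j] = (g(q + dq) - g(q - dq)) / (2 * eps)
    return J

def lie_bracket(g1, g2, q):
    # [g1, g2] = (dg2/dq) g1 - (dg1/dq) g2
    return jac(g2, q) @ g1(q) - jac(g1, q) @ g2(q)

# Canonical nonholonomic mobile robot, q = (phi, x, y):
g1 = lambda q: np.array([0.0, np.cos(q[0]), np.sin(q[0])])  # forward motion
g2 = lambda q: np.array([1.0, 0.0, 0.0])                    # spin in place

q = np.array([0.7, 0.0, 0.0])
g3 = lie_bracket(g1, g2, q)          # expect (0, sin phi, -cos phi)
M = np.column_stack([g1(q), g2(q), g3])
det = np.linalg.det(M)               # expect 1 at every q: LARC satisfied
```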
We need at least one more Lie bracket to be able to span the four-dimensional space of velocities, so we can construct the degree-3 Lie bracket of g_2 and g_3, which is zero, r cosine phi, r sine phi, zero, which corresponds to sliding forward without changing the rolling angle theta. Taking the determinant of the 4-by-4 matrix with these vector fields as the columns, we find that the determinant is minus r squared, so the four vector fields are linearly independent, provided the wheel radius is nonzero. Therefore the LARC is satisfied at all configurations. If the control set positively spans the control space, the robot is small-time locally controllable from all configurations, meaning that it can follow any path in its 4-dimensional configuration space arbitrarily closely, despite the 2 velocity constraints that the wheel cannot slide forward or sideways. |
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_103_Complete_Path_Planners.txt | As I mentioned in previous videos, it is common to represent a continuous free C-space as a graph and to search on that graph. If that graph happens to be something called a "roadmap," then it is possible to ensure the property of completeness: if a path exists, the planner will find one. To be a roadmap, the graph has to satisfy the following conditions: First, it must be easy, or at least possible, to find free-space paths between any free configuration q and some configuration on the roadmap. Second, there must be a connected component of the roadmap for every connected component of the free C-space. If these conditions are satisfied, then any path planning problem can be solved by first finding a path from the start configuration to the roadmap, then finding a path from the goal configuration to the roadmap, then finding a path on the roadmap to connect the two. For most C-spaces, constructing a true roadmap is complex and rarely done, but there are some examples for which it is simple. One such example is the case where a polygonal mobile robot, shown here as a square, translates in a plane among polygonal obstacles, indicated here in gray. The first step is to transform the obstacles into C-space obstacles. Since the obstacles are in a plane, and the C-space is also a plane, this transformation is simple. We just slide the square robot around the edges of the obstacles and keep track of the path traced out by the robot's reference point, at its lower-left corner. This results in C-space obstacles that look like this. If the reference point of the robot is outside the regions shaded gray, then the robot is in free space. Next, we construct a graph whose nodes are the corners of the C-space obstacles. Edges are between nodes that can see each other by a straight line that does not go through an obstacle. 
This graph is called a visibility graph, and it is also a roadmap of the free C-space, since any free configuration can be connected to it by a straight line in free space. We can now complete the graph by connecting the start and goal nodes to all visible nodes. Each edge has a weight according to the length of the edge. We can now search the graph using A-star to find the shortest path between the start and goal configurations. This algorithm is complete, since it constructs a true roadmap, and it is optimal, meaning that it finds the shortest path. Visualizing the robot's motion, we see that the shortest path grazes the edges of the obstacles. If this is undesired behavior, the obstacles can be grown slightly. There are few interesting motion planning problems for which we can find a practical, complete, and optimal planner. In practice, we discretize the free C-space in such a way that resolution or probabilistic completeness is the best we can hope for. In the next video we'll discretize the C-space as a grid. |
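Once the visibility graph is built, the search itself is standard A* with straight-line distance as the heuristic, which is admissible because no path between two nodes can be shorter than the straight line. A compact sketch on a toy graph; the node coordinates and graph are invented for illustration:

```python
import heapq, math

def astar(nodes, edges, start, goal):
    # nodes: name -> (x, y); edges: name -> list of neighbor names.
    # Edge cost is Euclidean length; the heuristic is straight-line
    # distance to the goal, so A* returns a shortest path.
    def h(n):
        (x1, y1), (x2, y2) = nodes[n], nodes[goal]
        return math.hypot(x1 - x2, y1 - y2)
    frontier = [(h(start), 0.0, start, [start])]
    best = {}
    while frontier:
        f, g, n, path = heapq.heappop(frontier)
        if n == goal:
            return path, g
        if n in best and best[n] <= g:
            continue  # already reached n by a path at least as short
        best[n] = g
        for m in edges[n]:
            c = math.hypot(nodes[n][0] - nodes[m][0], nodes[n][1] - nodes[m][1])
            heapq.heappush(frontier, (g + c + h(m), g + c, m, path + [m]))
    return None, math.inf

nodes = {'s': (0, 0), 'a': (1, 1), 'b': (2, 0), 'g': (2, 2)}
edges = {'s': ['a', 'b'], 'a': ['g'], 'b': ['g'], 'g': []}
path, cost = astar(nodes, edges, 's', 'g')
```

On a visibility graph, the nodes would be the C-obstacle corners plus the start and goal, and the edges the mutually visible pairs.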
Modern_Robotics_All_Videos | Modern_Robotics_Chapter_1212_Contact_Types_Rolling_Sliding_and_Breaking.txt | If two bodies are in contact, then the contact constrains the possible twists of the bodies. For example, let's say that bodies A and B are in point contact, and that point can be expressed in a space frame as p_A or p_B. Even though they're at the same point in space, I've drawn them so we can see that one is considered to be attached to the body A and the other is considered to be attached to the body B. The twist of body A is V_A and the twist of body B is V_B. These twists, and by default all twists in this chapter, are expressed in the space frame {s}. The twists of the two bodies result in velocities p-dot_A and p-dot_B of the current contact points on the two bodies. Each velocity is calculated as p-dot equals v plus omega-cross-p. Now let's define the contact normal n as pointing into body A. By our first-order analysis, the rigid-body assumption says that the velocity of point A relative to point B, in the direction of the normal, must be greater than or equal to zero. In other words, the impenetrability constraint says that the dot product of the normal with p-dot_A minus p-dot_B must be greater than or equal to zero. If this quantity is greater than zero, the two bodies break contact. The impenetrability constraint is a single inequality constraint on the twists of the two bodies. If n-transpose times p-dot_A minus p-dot_B is equal to zero, we call the contact a first-order roll-slide contact. This means that the contact is maintained by our first-order analysis. The roll-slide constraint is a single equality constraint. If the stronger condition that p-dot_A equals p-dot_B is satisfied, we call this a first-order rolling contact. We could also call this a sticking contact, emphasizing that there is no sliding. The rolling condition places two equality constraints on planar twists and three equality constraints on spatial twists. 
It will be convenient to express the impenetrability and roll-slide constraints directly in terms of twists. To do this, let's define a wrench F with a linear component given by the unit normal vector and a moment given by the vector to the contact crossed with the normal. We don't need wrenches for our kinematic analysis, but we use the wrench notation now because we will see it when we discuss contact forces. With this notation and a simple derivation, the left-hand sides of the impenetrability and roll-slide constraints can be written as F-transpose times V_A minus V_B. To further categorize a contact satisfying the impenetrability constraint, let's change this greater-than-or-equal-to sign to a strict greater-than sign. Then we define the contact label B, signifying a breaking contact; the contact label R, for a rolling contact; and the contact label S, for a sliding contact that does not satisfy the more restrictive rolling conditions. These conditions tell us, to first-order, what happens at the contact if we're given the twists of the two bodies. Often we consider the feasible motions of just one body, such as body A, and assume that the other body is stationary. In this case, we simply set V_B equal to zero. Let's look at an example where A is a planar hexagon and B is a stationary triangle. For ease of drawing, let's assume that the planar bodies cannot rotate, so the space of twists for A has only x and y linear components. Since B is stationary, the twist of A that satisfies the rolling condition is the zero twist. The contact normal can be expressed as a wrench F drawn in the twist space, and the twists that cause sliding are on the line orthogonal to F. Finally, the set of twists that break contact is the entire half-plane to the right of the S-line. Twists to the left of this line would cause penetration. Now, if the body B is not stationary, then the twist that maintains rolling contact is not the zero twist. 
We can again plot the contact normal in the twist space, and the set of twists of A corresponding to sliding is the line orthogonal to F, as shown. The set of breaking twists is the half plane to the left of the sliding line. In general, if A and B are spatial bodies, then their twist relative to each other, V_A minus V_B, is a 6-vector. To enforce the single constraint of a sliding contact, the relative twist must lie on a 5-dimensional hyperplane of the 6-dimensional relative twist space. To enforce the 3 constraints of a rolling contact, the relative twist must lie on a 3-dimensional hyperplane of the 6-dimensional relative twist space. If instead the bodies are restricted to a plane, the relative twist is a 3-vector, which we can draw in a 3-dimensional space. In this case, the plane of twists that cause a sliding contact, marked S, divides the 3-dimensional space into a half-space of twists that break contact, marked B, and a half-space of twists that cause penetration. The line of twists that cause rolling, marked R, lies in the S plane. In the next video we consider the case where multiple contacts act on a body. |
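The contact-labeling rules from this transcript can be written out directly for the planar case. The sketch below, with assumed function names (`point_velocity`, `contact_label`), takes planar twists V = (omega, vx, vy) in the space frame, computes each body's contact-point velocity as p-dot = v + omega x p, and classifies the contact from n-transpose times (p-dot_A minus p-dot_B), which equals F-transpose times (V_A minus V_B) for the contact wrench F defined in the transcript.

```python
import math

def point_velocity(V, p):
    # Planar twist V = (omega, vx, vy) in the space frame {s}.
    # Velocity of the point at p: p-dot = v + omega x p,
    # where omega x p = (-omega * p_y, omega * p_x) in the plane.
    w, vx, vy = V
    return (vx - w * p[1], vy + w * p[0])

def contact_label(V_A, V_B, p, n, tol=1e-9):
    # n is the unit contact normal at p, pointing into body A.
    ax, ay = point_velocity(V_A, p)
    bx, by = point_velocity(V_B, p)
    rx, ry = ax - bx, ay - by               # pdot_A - pdot_B
    d = n[0] * rx + n[1] * ry               # n . (pdot_A - pdot_B)
    if d > tol:
        return "B"                          # breaking contact
    if d < -tol:
        return "penetrating"                # violates impenetrability
    if math.hypot(rx, ry) < tol:
        return "R"                          # rolling (sticking) contact
    return "S"                              # sliding contact
```

With body B stationary (V_B = 0), a contact at p = (1, 0) with normal n = (0, 1), an upward translation of A breaks the contact (B), a sideways translation slides (S), and the zero twist rolls (R), mirroring the hexagon-on-triangle example in the transcript.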