| context | id_string | answers | label | question |
|---|---|---|---|---|
Critics have long been puzzled by the inner contradictions of major characters in John Webster's tragedies. In his The Duchess of Malfi, for instance, the Duchess is "good" in demonstrating the obvious tenderness and sincerity of her love for Antonio, but "bad" in ignoring the wishes and welfare of her family and in making religion a "cloak" hiding worldly self-indulgence. Bosola is "bad" in serving Ferdinand, "good" in turning the Duchess' thoughts toward heaven and in planning to avenge her murder. The ancient Greek philosopher Aristotle implied that such contradictions are virtually essential to the tragic personality, and yet critics keep coming back to this element of inconsistency as though it were an eccentric feature of Webster's own tragic vision. The problem is that, as an Elizabethan playwright, Webster has become a prisoner of our critical presuppositions. We have, in recent years, been dazzled by the way the earlier Renaissance and medieval theater, particularly the morality play, illuminates Elizabethan drama. We now understand how the habit of mind that saw the world as a battleground between good and evil produced the morality play. Morality plays allegorized that conflict by presenting characters whose actions were defined as the embodiment of good or evil. This model of reality lived on, overlaid by different conventions, in the more sophisticated Elizabethan works of the following age. Yet Webster seems not to have been as heavily influenced by the morality play's model of reality as were his Elizabethan contemporaries; he was apparently more sensitive to the more morally complicated Italian drama than to these English sources. Consequently, his characters cannot be evaluated according to reductive formulas of good and evil, which is precisely what modern critics have tried to do. 
They choose what seem to be the most promising of the contradictory values that are dramatized in the play, and treat those values as if they were the only basis for analyzing the moral development of the play's major characters, attributing the inconsistencies in a character's behavior to artistic incompetence on Webster's part. The lack of consistency in Webster's characters can be better understood if we recognize that the ambiguity at the heart of his tragic vision lies not in the external world but in the duality of human nature. Webster establishes tension in his plays by setting up conflicting systems of value that appear immoral only when one value system is viewed exclusively from the perspective of the other. He presents us not only with characters that we condemn intellectually or ethically and at the same time impulsively approve of, but also with judgments we must accept as logically sound and yet find emotionally repulsive. The dilemma is not only dramatic: it is tragic, because the conflict is irreconcilable, and because it is ours as much as that of the characters. | 199302_3-RC_2_8 | [
"clarify an ambiguous assertion",
"provide evidence in support of a commonly held view",
"analyze an unresolved question and propose an answer",
"offer an alternative to a flawed interpretation",
"describe and categorize opposing viewpoints"
] | 3 | The primary purpose of the passage is to |
(same context as above) | 199302_3-RC_2_9 | [
"They were not concerned with dramatizing the conflict between good and evil that was presented in morality plays.",
"They were not as sophisticated as the Italian sources from which other Elizabethan tragedies were derived.",
"They have never been adequately understood by critics.",
"They have only recently ... | 0 | The author suggests which one of the following about the dramatic works that most influenced Webster's tragedies? |
(same context as above) | 199302_3-RC_2_10 | [
"It introduces a commonly held view of Webster's tragedies that the author plans to defend.",
"It supports the author's suggestion that Webster's conception of tragedy is not idiosyncratic.",
"It provides an example of an approach to Webster's tragedies that the author criticizes.",
"It establishes the simila... | 1 | The author's allusion to Aristotle's view of tragedy in lines 11–13 serves which one of the following functions in the passage? |
(same context as above) | 199302_3-RC_2_11 | [
"the ambiguity inherent in Webster's tragic vision resulted from the duality of human nature",
"Webster's conception of the tragic personality were similar to that of Aristotle",
"Webster had been heavily influenced by the morality play",
"Elizabethan dramatists had been more sensitive to Italian sources of i... | 2 | It can be inferred from the passage that modern critics' interpretations of Webster's tragedies would be more valid if |
(same context as above) | 199302_3-RC_2_12 | [
"The skill of Elizabethan dramatists has in recent years been overestimated.",
"The conventions that shaped Elizabethan drama are best exemplified by Webster's drama.",
"Elizabethan drama, for the most part, can be viewed as being heavily influenced by the morality play.",
"Only by carefully examining the wor... | 2 | With which one of the following statements regarding Elizabethan drama would the author be most likely to agree? |
(same context as above) | 199302_3-RC_2_13 | [
"Webster's plays tended to allegorize the conflict between good and evil more than did those of his contemporaries.",
"Webster's plays were derived more from Italian than from English sources.",
"The artistic flaws in Webster's tragedies were largely the result of his ignorance of the classical definition of tr... | 4 | It can be inferred from the passage that most modern critics assume which one of the following in their interpretation of Webster's tragedies? |
(same context as above) | 199302_3-RC_2_14 | [
"artistically flawed",
"highly conventional",
"largely derived from the morality play",
"somewhat different from the conventional Elizabethan conception of tragedy",
"uninfluenced by the classical conception of tragedy"
] | 3 | The author implies that Webster's conception of tragedy was |
Cultivation of a single crop on a given tract of land leads eventually to decreased yields. One reason for this is that harmful bacterial phytopathogens, organisms parasitic on plant hosts, increase in the soil surrounding plant roots. The problem can be cured by crop rotation, denying the pathogens a suitable host for a period of time. However, even if crops are not rotated, the severity of diseases brought on by such phytopathogens often decreases after a number of years as the microbial population of the soil changes and the soil becomes "suppressive" to those diseases. While there may be many reasons for this phenomenon, it is clear that levels of certain bacteria, such as Pseudomonas fluorescens, a bacterium antagonistic to a number of harmful phytopathogens, are greater in suppressive than in nonsuppressive soil. This suggests that the presence of such bacteria suppresses phytopathogens. There is now considerable experimental support for this view. Wheat yield increases of 27 percent have been obtained in field trials by treatment of wheat seeds with fluorescent pseudomonads. Similar treatment of sugar beets, cotton, and potatoes has had similar results. These improvements in crop yields through the application of Pseudomonas fluorescens suggest that agriculture could benefit from the use of bacteria genetically altered for specific purposes. For example, a form of phytopathogen altered to remove its harmful properties could be released into the environment in quantities favorable to its competing with and eventually excluding the harmful normal strain. Some experiments suggest that deliberately releasing altered nonpathogenic Pseudomonas syringae could crowd out the nonaltered variety that causes frost damage. Opponents of such research have objected that the deliberate and large-scale release of genetically altered bacteria might have deleterious results. 
Proponents, on the other hand, argue that this particular strain is altered only by the removal of the gene responsible for the strain's propensity to cause frost damage, thereby rendering it safer than the phytopathogen from which it was derived. Some proponents have gone further and suggest that genetic alteration techniques could create organisms with totally new combinations of desirable traits not found in nature. For example, genes responsible for production of insecticidal compounds have been transposed from other bacteria into pseudomonads that colonize corn roots. Experiments of this kind are difficult and require great care: such bacteria are developed in highly artificial environments and may not compete well with natural soil bacteria. Nevertheless, proponents contend that the prospects for improved agriculture through such methods seem excellent. These prospects lead many to hope that current efforts to assess the risks of deliberate release of altered microorganisms will successfully answer the concerns of opponents and create a climate in which such research can go forward without undue impediment. | 199302_3-RC_3_15 | [
"Recent field experiments with genetically altered Pseudomonas bacteria have shown that releasing genetically altered bacteria into the environment would not involve any significant danger.",
"Encouraged by current research, advocates of agricultural use of genetically altered bacteria are optimistic that such us... | 1 | Which one of the following best summarizes the main idea of the passage? |
(same context as above) | 199302_3-RC_3_16 | [
"prove that increases in the level of such bacteria in the soil are the sole cause of soil suppressivity",
"explain why yields increased after wheat fields were sprayed with altered Pseudomonas fluorescens bacteria",
"detail the chemical processes that such bacteria use to suppress organisms parasitic to crop p... | 3 | The author discusses naturally occurring Pseudomonas fluorescens bacteria in the first paragraph primarily in order to do which one of the following? |
Cultivation of a single crop on a given tract of land leads eventually to decreased yields. One reason for this is that harmful bacterial phytopathogens, organisms parasitic on plant hosts, increase in the soil surrounding plant roots. The problem can be cured by crop rotation, denying the pathogens a suitable host for a period of time. However, even if crops are not rotated, the severity of diseases brought on by such phytopathogens often decreases after a number or years as the microbial population of the soil changes and the soil becomes "suppressive" to those diseases. While there may be many reasons for this phenomenon, it is clear that levels of certain bacteria, such as Pseudomonas fluorescens, a bacterium antagonistic to a number of harmful phytopathogens, are greater in suppressive than in nonsuppressive soil. This suggests that the presence of such bacteria suppresses phytopathogens. There is now considerable experimental support for this view. Wheat yield increases of 27 percent have been obtained in field trials by treatment of wheat seeds with fluorescent pseudomonads. Similar treatment of sugar beets, cotton, and potatoes has had similar results. These improvements in crop yields through the application of Pseudomonas fluorescens suggest that agriculture could benefit from the use of bacteria genetically altered for specific purposes. For example, a form of phytopathogen altered to remove its harmful properties could be released into the environment in quantities favorable to its competing with and eventually excluding the harmful normal strain. Some experiments suggest that deliberately releasing altered nonpathogenic Pseudomonas syringae could crowd out the nonaltered variety that causes frost damage. Opponents of such research have objected that the deliberate and large-scale release of genetically altered bacteria might have deleterious results. 
Proponents, on the other hand, argue that this particular strain is altered only by the removal of the gene responsible for the strain's propensity to cause frost damage, thereby rendering it safer than the phytopathogen from which it was derived. Some proponents have gone further and suggested that genetic alteration techniques could create organisms with totally new combinations of desirable traits not found in nature. For example, genes responsible for production of insecticidal compounds have been transposed from other bacteria into pseudomonads that colonize corn roots. Experiments of this kind are difficult and require great care: such bacteria are developed in highly artificial environments and may not compete well with natural soil bacteria. Nevertheless, proponents contend that the prospects for improved agriculture through such methods seem excellent. These prospects lead many to hope that current efforts to assess the risks of deliberate release of altered microorganisms will successfully answer the concerns of opponents and create a climate in which such research can go forward without undue impediment. | 199302_3-RC_3_17 | [
"Pseudomonas fluorescens bacteria would be absent from the soil surrounding their roots.",
"They would crowd out and eventually exclude other crop plants if their growth were not carefully regulated.",
"Their yield would not be likely to be improved by adding Pseudomonas fluorescens bacteria to the soil.",
"T... | 2 | It can be inferred from the author's discussion of Pseudomonas fluorescens bacteria that which one of the following would be true of crops impervious to parasitical organisms? |
Cultivation of a single crop on a given tract of land leads eventually to decreased yields. One reason for this is that harmful bacterial phytopathogens, organisms parasitic on plant hosts, increase in the soil surrounding plant roots. The problem can be cured by crop rotation, denying the pathogens a suitable host for a period of time. However, even if crops are not rotated, the severity of diseases brought on by such phytopathogens often decreases after a number of years as the microbial population of the soil changes and the soil becomes "suppressive" to those diseases. While there may be many reasons for this phenomenon, it is clear that levels of certain bacteria, such as Pseudomonas fluorescens, a bacterium antagonistic to a number of harmful phytopathogens, are greater in suppressive than in nonsuppressive soil. This suggests that the presence of such bacteria suppresses phytopathogens. There is now considerable experimental support for this view. Wheat yield increases of 27 percent have been obtained in field trials by treatment of wheat seeds with fluorescent pseudomonads. Similar treatment of sugar beets, cotton, and potatoes has had similar results. These improvements in crop yields through the application of Pseudomonas fluorescens suggest that agriculture could benefit from the use of bacteria genetically altered for specific purposes. For example, a form of phytopathogen altered to remove its harmful properties could be released into the environment in quantities favorable to its competing with and eventually excluding the harmful normal strain. Some experiments suggest that deliberately releasing altered nonpathogenic Pseudomonas syringae could crowd out the nonaltered variety that causes frost damage. Opponents of such research have objected that the deliberate and large-scale release of genetically altered bacteria might have deleterious results. 
Proponents, on the other hand, argue that this particular strain is altered only by the removal of the gene responsible for the strain's propensity to cause frost damage, thereby rendering it safer than the phytopathogen from which it was derived. Some proponents have gone further and suggested that genetic alteration techniques could create organisms with totally new combinations of desirable traits not found in nature. For example, genes responsible for production of insecticidal compounds have been transposed from other bacteria into pseudomonads that colonize corn roots. Experiments of this kind are difficult and require great care: such bacteria are developed in highly artificial environments and may not compete well with natural soil bacteria. Nevertheless, proponents contend that the prospects for improved agriculture through such methods seem excellent. These prospects lead many to hope that current efforts to assess the risks of deliberate release of altered microorganisms will successfully answer the concerns of opponents and create a climate in which such research can go forward without undue impediment. | 199302_3-RC_3_18 | [
"moving crop plants around makes them hardier and more resistant to disease",
"the number of Pseudomonas fluorescens bacteria in the soil usually increases when crops are rotated",
"the roots of many crop plants produce compounds that are antagonistic to phytopathogens harmful to other crop plants",
"the pres... | 4 | It can be inferred from the passage that crop rotation can increase yields in part because |
Cultivation of a single crop on a given tract of land leads eventually to decreased yields. One reason for this is that harmful bacterial phytopathogens, organisms parasitic on plant hosts, increase in the soil surrounding plant roots. The problem can be cured by crop rotation, denying the pathogens a suitable host for a period of time. However, even if crops are not rotated, the severity of diseases brought on by such phytopathogens often decreases after a number of years as the microbial population of the soil changes and the soil becomes "suppressive" to those diseases. While there may be many reasons for this phenomenon, it is clear that levels of certain bacteria, such as Pseudomonas fluorescens, a bacterium antagonistic to a number of harmful phytopathogens, are greater in suppressive than in nonsuppressive soil. This suggests that the presence of such bacteria suppresses phytopathogens. There is now considerable experimental support for this view. Wheat yield increases of 27 percent have been obtained in field trials by treatment of wheat seeds with fluorescent pseudomonads. Similar treatment of sugar beets, cotton, and potatoes has had similar results. These improvements in crop yields through the application of Pseudomonas fluorescens suggest that agriculture could benefit from the use of bacteria genetically altered for specific purposes. For example, a form of phytopathogen altered to remove its harmful properties could be released into the environment in quantities favorable to its competing with and eventually excluding the harmful normal strain. Some experiments suggest that deliberately releasing altered nonpathogenic Pseudomonas syringae could crowd out the nonaltered variety that causes frost damage. Opponents of such research have objected that the deliberate and large-scale release of genetically altered bacteria might have deleterious results. 
Proponents, on the other hand, argue that this particular strain is altered only by the removal of the gene responsible for the strain's propensity to cause frost damage, thereby rendering it safer than the phytopathogen from which it was derived. Some proponents have gone further and suggested that genetic alteration techniques could create organisms with totally new combinations of desirable traits not found in nature. For example, genes responsible for production of insecticidal compounds have been transposed from other bacteria into pseudomonads that colonize corn roots. Experiments of this kind are difficult and require great care: such bacteria are developed in highly artificial environments and may not compete well with natural soil bacteria. Nevertheless, proponents contend that the prospects for improved agriculture through such methods seem excellent. These prospects lead many to hope that current efforts to assess the risks of deliberate release of altered microorganisms will successfully answer the concerns of opponents and create a climate in which such research can go forward without undue impediment. | 199302_3-RC_3_19 | [
"The altered bacteria had a genetic constitution differing from that of the normal strain only in that the altered variety had one less gene.",
"Although the altered bacteria competed effectively with the nonaltered strain in the laboratory, they were not as viable in natural environments.",
"The altered bacter... | 0 | According to the passage, proponents of the use of genetically altered bacteria in agriculture argue that which one of the following is true of the altered bacteria used in the frost-damage experiments? |
Cultivation of a single crop on a given tract of land leads eventually to decreased yields. One reason for this is that harmful bacterial phytopathogens, organisms parasitic on plant hosts, increase in the soil surrounding plant roots. The problem can be cured by crop rotation, denying the pathogens a suitable host for a period of time. However, even if crops are not rotated, the severity of diseases brought on by such phytopathogens often decreases after a number of years as the microbial population of the soil changes and the soil becomes "suppressive" to those diseases. While there may be many reasons for this phenomenon, it is clear that levels of certain bacteria, such as Pseudomonas fluorescens, a bacterium antagonistic to a number of harmful phytopathogens, are greater in suppressive than in nonsuppressive soil. This suggests that the presence of such bacteria suppresses phytopathogens. There is now considerable experimental support for this view. Wheat yield increases of 27 percent have been obtained in field trials by treatment of wheat seeds with fluorescent pseudomonads. Similar treatment of sugar beets, cotton, and potatoes has had similar results. These improvements in crop yields through the application of Pseudomonas fluorescens suggest that agriculture could benefit from the use of bacteria genetically altered for specific purposes. For example, a form of phytopathogen altered to remove its harmful properties could be released into the environment in quantities favorable to its competing with and eventually excluding the harmful normal strain. Some experiments suggest that deliberately releasing altered nonpathogenic Pseudomonas syringae could crowd out the nonaltered variety that causes frost damage. Opponents of such research have objected that the deliberate and large-scale release of genetically altered bacteria might have deleterious results. 
Proponents, on the other hand, argue that this particular strain is altered only by the removal of the gene responsible for the strain's propensity to cause frost damage, thereby rendering it safer than the phytopathogen from which it was derived. Some proponents have gone further and suggested that genetic alteration techniques could create organisms with totally new combinations of desirable traits not found in nature. For example, genes responsible for production of insecticidal compounds have been transposed from other bacteria into pseudomonads that colonize corn roots. Experiments of this kind are difficult and require great care: such bacteria are developed in highly artificial environments and may not compete well with natural soil bacteria. Nevertheless, proponents contend that the prospects for improved agriculture through such methods seem excellent. These prospects lead many to hope that current efforts to assess the risks of deliberate release of altered microorganisms will successfully answer the concerns of opponents and create a climate in which such research can go forward without undue impediment. | 199302_3-RC_3_20 | [
"Pseudomonas syringae bacteria are primitive and have a simple genetic constitution.",
"The altered bacteria are derived from a strain that is parasitic to plants and can cause damage to crops.",
"Current genetic-engineering techniques permit the large-scale commercial production of such bacteria.",
"Often ge... | 3 | Which one of the following, if true, would most seriously weaken the proponents' argument regarding the safety of using altered Pseudomonas syringae bacteria to control frost damage? |
In 1887 the Dawes Act legislated wide-scale private ownership of reservation lands in the United States for Native Americans. The act allotted plots of 80 acres to each Native American adult. However, the Native Americans were not granted outright title to their lands. The act defined each grant as a "trust patent," meaning that the Bureau of Indian Affairs (BIA), the governmental agency in charge of administering policy regarding Native Americans, would hold the allotted land in trust for 25 years, during which time the Native American owners could use, but not alienate (sell) the land. After the 25-year period, the Native American allottee would receive a "fee patent" awarding full legal ownership of the land. Two main reasons were advanced for the restriction on the Native Americans' ability to sell their lands. First, it was claimed that free alienability would lead to immediate transfer of large amounts of former reservation land to non-Native Americans, consequently threatening the traditional way of life on those reservations. A second objection to free alienation was that Native Americans were unaccustomed to, and did not desire, a system of private landownership. Their custom, it was said, favored communal use of land. However, both of these arguments bear only on the transfer of Native American lands to non-Native Americans; neither offers a reason for prohibiting Native Americans from transferring land among themselves. Selling land to each other would not threaten the Native American culture. Additionally, if communal land use remained preferable to Native Americans after allotment, free alienability would have allowed allottees to sell their lands back to the tribe. When stated rationales for government policies prove empty, using an interest-group model often provides an explanation. 
While neither Native Americans nor the potential non-Native American purchasers benefited from the restraint on alienation contained in the Dawes Act, one clearly defined group did benefit: the BIA bureaucrats. It has been convincingly demonstrated that bureaucrats seek to maximize the size of their staffs and their budgets in order to compensate for the lack of other sources of fulfillment, such as power and prestige. Additionally, politicians tend to favor the growth of governmental bureaucracy because such growth provides increased opportunity for the exercise of political patronage. The restraint on alienation vastly increased the amount of work, and hence the budgets, necessary to implement the statute. Until allotment was ended in 1934, granting fee patents and leasing Native American lands were among the principal activities of the United States government. One hypothesis, then, for the temporary restriction on alienation in the Dawes Act is that it reflected a compromise between non-Native Americans favoring immediate alienability so they could purchase land and the BIA bureaucrats who administered the privatization system. | 199302_3-RC_4_21 | [
"United States government policy toward Native Americans has tended to disregard their needs and consider instead the needs of non-Native American purchasers of land.",
"In order to preserve the unique way of life on Native American reservations, use of Native American lands must be communal rather than individua... | 2 | Which one of the following best summarizes the main idea of the passage? |
In 1887 the Dawes Act legislated wide-scale private ownership of reservation lands in the United States for Native Americans. The act allotted plots of 80 acres to each Native American adult. However, the Native Americans were not granted outright title to their lands. The act defined each grant as a "trust patent," meaning that the Bureau of Indian Affairs (BIA), the governmental agency in charge of administering policy regarding Native Americans, would hold the allotted land in trust for 25 years, during which time the Native American owners could use, but not alienate (sell) the land. After the 25-year period, the Native American allottee would receive a "fee patent" awarding full legal ownership of the land. Two main reasons were advanced for the restriction on the Native Americans' ability to sell their lands. First, it was claimed that free alienability would lead to immediate transfer of large amounts of former reservation land to non-Native Americans, consequently threatening the traditional way of life on those reservations. A second objection to free alienation was that Native Americans were unaccustomed to, and did not desire, a system of private landownership. Their custom, it was said, favored communal use of land. However, both of these arguments bear only on the transfer of Native American lands to non-Native Americans; neither offers a reason for prohibiting Native Americans from transferring land among themselves. Selling land to each other would not threaten the Native American culture. Additionally, if communal land use remained preferable to Native Americans after allotment, free alienability would have allowed allottees to sell their lands back to the tribe. When stated rationales for government policies prove empty, using an interest-group model often provides an explanation. 
While neither Native Americans nor the potential non-Native American purchasers benefited from the restraint on alienation contained in the Dawes Act, one clearly defined group did benefit: the BIA bureaucrats. It has been convincingly demonstrated that bureaucrats seek to maximize the size of their staffs and their budgets in order to compensate for the lack of other sources of fulfillment, such as power and prestige. Additionally, politicians tend to favor the growth of governmental bureaucracy because such growth provides increased opportunity for the exercise of political patronage. The restraint on alienation vastly increased the amount of work, and hence the budgets, necessary to implement the statute. Until allotment was ended in 1934, granting fee patents and leasing Native American lands were among the principal activities of the United States government. One hypothesis, then, for the temporary restriction on alienation in the Dawes Act is that it reflected a compromise between non-Native Americans favoring immediate alienability so they could purchase land and the BIA bureaucrats who administered the privatization system. | 199302_3-RC_4_22 | [
"Politicians realized that allotment was damaging the Native American way of life.",
"Politicians decided that allotment would be more congruent with the Native American custom of communal land use.",
"Politicians believed that allotment's continuation would not enhance their opportunities to exercise patronage... | 2 | Which one of the following statements concerning the reason for the end of allotment, if true, would provide the most support for the author's view of politicians? |
In 1887 the Dawes Act legislated wide-scale private ownership of reservation lands in the United States for Native Americans. The act allotted plots of 80 acres to each Native American adult. However, the Native Americans were not granted outright title to their lands. The act defined each grant as a "trust patent," meaning that the Bureau of Indian Affairs (BIA), the governmental agency in charge of administering policy regarding Native Americans, would hold the allotted land in trust for 25 years, during which time the Native American owners could use, but not alienate (sell) the land. After the 25-year period, the Native American allottee would receive a "fee patent" awarding full legal ownership of the land. Two main reasons were advanced for the restriction on the Native Americans' ability to sell their lands. First, it was claimed that free alienability would lead to immediate transfer of large amounts of former reservation land to non-Native Americans, consequently threatening the traditional way of life on those reservations. A second objection to free alienation was that Native Americans were unaccustomed to, and did not desire, a system of private landownership. Their custom, it was said, favored communal use of land. However, both of these arguments bear only on the transfer of Native American lands to non-Native Americans; neither offers a reason for prohibiting Native Americans from transferring land among themselves. Selling land to each other would not threaten the Native American culture. Additionally, if communal land use remained preferable to Native Americans after allotment, free alienability would have allowed allottees to sell their lands back to the tribe. When stated rationales for government policies prove empty, using an interest-group model often provides an explanation. 
While neither Native Americans nor the potential non-Native American purchasers benefited from the restraint on alienation contained in the Dawes Act, one clearly defined group did benefit: the BIA bureaucrats. It has been convincingly demonstrated that bureaucrats seek to maximize the size of their staffs and their budgets in order to compensate for the lack of other sources of fulfillment, such as power and prestige. Additionally, politicians tend to favor the growth of governmental bureaucracy because such growth provides increased opportunity for the exercise of political patronage. The restraint on alienation vastly increased the amount of work, and hence the budgets, necessary to implement the statute. Until allotment was ended in 1934, granting fee patents and leasing Native American lands were among the principal activities of the United States government. One hypothesis, then, for the temporary restriction on alienation in the Dawes Act is that it reflected a compromise between non-Native Americans favoring immediate alienability so they could purchase land and the BIA bureaucrats who administered the privatization system. | 199302_3-RC_4_23 | [
"The passage of a law is analyzed in detail, the benefits and drawbacks of one of its clauses are studied, and a final assessment of the law is offered.",
"The history of a law is narrated, the effects of one of its clauses on various populations are studied, and repeal of the law is advocated.",
"A law is exam... | 3 | Which one of the following best describes the organization of the passage? |
In 1887 the Dawes Act legislated wide-scale private ownership of reservation lands in the United States for Native Americans. The act allotted plots of 80 acres to each Native American adult. However, the Native Americans were not granted outright title to their lands. The act defined each grant as a "trust patent," meaning that the Bureau of Indian Affairs (BIA), the governmental agency in charge of administering policy regarding Native Americans, would hold the allotted land in trust for 25 years, during which time the Native American owners could use, but not alienate (sell) the land. After the 25-year period, the Native American allottee would receive a "fee patent" awarding full legal ownership of the land. Two main reasons were advanced for the restriction on the Native Americans' ability to sell their lands. First, it was claimed that free alienability would lead to immediate transfer of large amounts of former reservation land to non-Native Americans, consequently threatening the traditional way of life on those reservations. A second objection to free alienation was that Native Americans were unaccustomed to, and did not desire, a system of private landownership. Their custom, it was said, favored communal use of land. However, both of these arguments bear only on the transfer of Native American lands to non-Native Americans; neither offers a reason for prohibiting Native Americans from transferring land among themselves. Selling land to each other would not threaten the Native American culture. Additionally, if communal land use remained preferable to Native Americans after allotment, free alienability would have allowed allottees to sell their lands back to the tribe. When stated rationales for government policies prove empty, using an interest-group model often provides an explanation. 
While neither Native Americans nor the potential non-Native American purchasers benefited from the restraint on alienation contained in the Dawes Act, one clearly defined group did benefit: the BIA bureaucrats. It has been convincingly demonstrated that bureaucrats seek to maximize the size of their staffs and their budgets in order to compensate for the lack of other sources of fulfillment, such as power and prestige. Additionally, politicians tend to favor the growth of governmental bureaucracy because such growth provides increased opportunity for the exercise of political patronage. The restraint on alienation vastly increased the amount of work, and hence the budgets, necessary to implement the statute. Until allotment was ended in 1934, granting fee patents and leasing Native American lands were among the principal activities of the United States government. One hypothesis, then, for the temporary restriction on alienation in the Dawes Act is that it reflected a compromise between non-Native Americans favoring immediate alienability so they could purchase land and the BIA bureaucrats who administered the privatization system. | 199302_3-RC_4_24 | [
"completely credulous",
"partially approving",
"basically indecisive",
"mildly questioning",
"highly skeptical"
] | 4 | The author's attitude toward the reasons advanced for the restriction on alienability in the Dawes Act at the time of its passage can best be described as |
In 1887 the Dawes Act legislated wide-scale private ownership of reservation lands in the United States for Native Americans. The act allotted plots of 80 acres to each Native American adult. However, the Native Americans were not granted outright title to their lands. The act defined each grant as a "trust patent," meaning that the Bureau of Indian Affairs (BIA), the governmental agency in charge of administering policy regarding Native Americans, would hold the allotted land in trust for 25 years, during which time the Native American owners could use, but not alienate (sell) the land. After the 25-year period, the Native American allottee would receive a "fee patent" awarding full legal ownership of the land. Two main reasons were advanced for the restriction on the Native Americans' ability to sell their lands. First, it was claimed that free alienability would lead to immediate transfer of large amounts of former reservation land to non-Native Americans, consequently threatening the traditional way of life on those reservations. A second objection to free alienation was that Native Americans were unaccustomed to, and did not desire, a system of private landownership. Their custom, it was said, favored communal use of land. However, both of these arguments bear only on the transfer of Native American lands to non-Native Americans; neither offers a reason for prohibiting Native Americans from transferring land among themselves. Selling land to each other would not threaten the Native American culture. Additionally, if communal land use remained preferable to Native Americans after allotment, free alienability would have allowed allottees to sell their lands back to the tribe. When stated rationales for government policies prove empty, using an interest-group model often provides an explanation. 
While neither Native Americans nor the potential non-Native American purchasers benefited from the restraint on alienation contained in the Dawes Act, one clearly defined group did benefit: the BIA bureaucrats. It has been convincingly demonstrated that bureaucrats seek to maximize the size of their staffs and their budgets in order to compensate for the lack of other sources of fulfillment, such as power and prestige. Additionally, politicians tend to favor the growth of governmental bureaucracy because such growth provides increased opportunity for the exercise of political patronage. The restraint on alienation vastly increased the amount of work, and hence the budgets, necessary to implement the statute. Until allotment was ended in 1934, granting fee patents and leasing Native American lands were among the principal activities of the United States government. One hypothesis, then, for the temporary restriction on alienation in the Dawes Act is that it reflected a compromise between non-Native Americans favoring immediate alienability so they could purchase land and the BIA bureaucrats who administered the privatization system. | 199302_3-RC_4_25 | [
"Most Native Americans supported themselves through farming.",
"Not many Native Americans personally owned the land on which they lived.",
"The land on which most Native Americans lived had been bought from their tribes.",
"Few Native Americans had much contact with their non-Native American neighbors.",
"F... | 1 | It can be inferred from the passage that which one of the following was true of Native American life immediately before passage of the Dawes Act? |
In 1887 the Dawes Act legislated wide-scale private ownership of reservation lands in the United States for Native Americans. The act allotted plots of 80 acres to each Native American adult. However, the Native Americans were not granted outright title to their lands. The act defined each grant as a "trust patent," meaning that the Bureau of Indian Affairs (BIA), the governmental agency in charge of administering policy regarding Native Americans, would hold the allotted land in trust for 25 years, during which time the Native American owners could use, but not alienate (sell) the land. After the 25-year period, the Native American allottee would receive a "fee patent" awarding full legal ownership of the land. Two main reasons were advanced for the restriction on the Native Americans' ability to sell their lands. First, it was claimed that free alienability would lead to immediate transfer of large amounts of former reservation land to non-Native Americans, consequently threatening the traditional way of life on those reservations. A second objection to free alienation was that Native Americans were unaccustomed to, and did not desire, a system of private landownership. Their custom, it was said, favored communal use of land. However, both of these arguments bear only on the transfer of Native American lands to non-Native Americans; neither offers a reason for prohibiting Native Americans from transferring land among themselves. Selling land to each other would not threaten the Native American culture. Additionally, if communal land use remained preferable to Native Americans after allotment, free alienability would have allowed allottees to sell their lands back to the tribe. When stated rationales for government policies prove empty, using an interest-group model often provides an explanation. 
While neither Native Americans nor the potential non-Native American purchasers benefited from the restraint on alienation contained in the Dawes Act, one clearly defined group did benefit: the BIA bureaucrats. It has been convincingly demonstrated that bureaucrats seek to maximize the size of their staffs and their budgets in order to compensate for the lack of other sources of fulfillment, such as power and prestige. Additionally, politicians tend to favor the growth of governmental bureaucracy because such growth provides increased opportunity for the exercise of political patronage. The restraint on alienation vastly increased the amount of work, and hence the budgets, necessary to implement the statute. Until allotment was ended in 1934, granting fee patents and leasing Native American lands were among the principal activities of the United States government. One hypothesis, then, for the temporary restriction on alienation in the Dawes Act is that it reflected a compromise between non-Native Americans favoring immediate alienability so they could purchase land and the BIA bureaucrats who administered the privatization system. | 199302_3-RC_4_26 | [
"owners of land to farm it",
"owners of land to sell it",
"government some control over how owners disposed of land",
"owners of land to build on it with relatively minor governmental restrictions",
"government to charge owners a fee for developing their land"
] | 1 | According to the passage, the type of landownership initially obtainable by Native Americans under the Dawes Act differed from the type of ownership obtainable after a 25-year period in that only the latter allowed |
In 1887 the Dawes Act legislated wide-scale private ownership of reservation lands in the United States for Native Americans. The act allotted plots of 80 acres to each Native American adult. However, the Native Americans were not granted outright title to their lands. The act defined each grant as a "trust patent," meaning that the Bureau of Indian Affairs (BIA), the governmental agency in charge of administering policy regarding Native Americans, would hold the allotted land in trust for 25 years, during which time the Native American owners could use, but not alienate (sell) the land. After the 25-year period, the Native American allottee would receive a "fee patent" awarding full legal ownership of the land. Two main reasons were advanced for the restriction on the Native Americans' ability to sell their lands. First, it was claimed that free alienability would lead to immediate transfer of large amounts of former reservation land to non-Native Americans, consequently threatening the traditional way of life on those reservations. A second objection to free alienation was that Native Americans were unaccustomed to, and did not desire, a system of private landownership. Their custom, it was said, favored communal use of land. However, both of these arguments bear only on the transfer of Native American lands to non-Native Americans; neither offers a reason for prohibiting Native Americans from transferring land among themselves. Selling land to each other would not threaten the Native American culture. Additionally, if communal land use remained preferable to Native Americans after allotment, free alienability would have allowed allottees to sell their lands back to the tribe. When stated rationales for government policies prove empty, using an interest-group model often provides an explanation. 
While neither Native Americans nor the potential non-Native American purchasers benefited from the restraint on alienation contained in the Dawes Act, one clearly defined group did benefit: the BIA bureaucrats. It has been convincingly demonstrated that bureaucrats seek to maximize the size of their staffs and their budgets in order to compensate for the lack of other sources of fulfillment, such as power and prestige. Additionally, politicians tend to favor the growth of governmental bureaucracy because such growth provides increased opportunity for the exercise of political patronage. The restraint on alienation vastly increased the amount of work, and hence the budgets, necessary to implement the statute. Until allotment was ended in 1934, granting fee patents and leasing Native American lands were among the principal activities of the United States government. One hypothesis, then, for the temporary restriction on alienation in the Dawes Act is that it reflected a compromise between non-Native Americans favoring immediate alienability so they could purchase land and the BIA bureaucrats who administered the privatization system. | 199302_3-RC_4_27 | [
"The legislators who voted in favor of the Dawes Act owned land adjacent to Native American reservations.",
"The majority of Native Americans who were granted fee patents did not sell their land back to their tribes.",
"Native Americans managed to preserve their traditional culture even when they were geographi... | 3 | Which one of the following, if true, would most strengthen the author's argument regarding the true motivation for the passage of the Dawes Act? |
After thirty years of investigation into cell genetics, researchers made startling discoveries in the 1960s and early 1970s which culminated in the development of processes, collectively known as recombinant deoxyribonucleic acid (rDNA) technology, for the active manipulation of a cell's genetic code. The technology has created excitement and controversy because it involves altering DNA—which contains the building blocks of the genetic code. Using rDNA technology, scientists can transfer a portion of the DNA from one organism to a single living cell of another. The scientist chemically "snips" the DNA chain of the host cell at a predetermined point and attaches another piece of DNA from a donor cell at that place, creating a completely new organism. Proponents of rDNA research and development claim that it will allow scientists to find cures for disease and to better understand how genetic information controls an organism's development. They also see many other potentially practical benefits, especially in the pharmaceutical industry. Some corporations employing the new technology even claim that by the end of the century all major diseases will be treated with drugs derived from microorganisms created through rDNA technology. Pharmaceutical products already developed, but not yet marketed, indicate that these predictions may be realized. Proponents also cite nonmedical applications for this technology. Energy production and waste disposal may benefit: genetically altered organisms could convert sewage and other organic material into methane fuel. Agriculture might also take advantage of rDNA technology to produce new varieties of crops that resist foul weather, pests, and the effects of poor soil. A major concern of the critics of rDNA research is that genetically altered microorganisms might escape from the laboratory. 
Because these microorganisms are laboratory creations that, in all probability, do not occur in nature, their interaction with the natural world cannot be predicted with certainty. It is possible that they could cause previously unknown, perhaps incurable, diseases. The effect of genetically altered microorganisms on the world's microbiological predator-prey relationships is another potentially serious problem pointed out by the opponents of rDNA research. Introducing a new species may disrupt or even destroy the existing ecosystem. The collapse of interdependent relationships among species, extrapolated to its extreme, could eventually result in the destruction of humanity. Opponents of rDNA technology also cite ethical problems with it. For example, it gives scientists the power to instantly cross evolutionary and species boundaries that nature took millennia to establish. The implications of such power would become particularly profound if genetic engineers were to tinker with human genes, a practice that would bring us one step closer to Aldous Huxley's grim vision in Brave New World of a totalitarian society that engineers human beings to fulfill specific roles. | 199306_3-RC_1_1 | [
"explaining the process and applications of rDNA technology",
"advocating continued rDNA research and development",
"providing evidence indicating the need for regulation of rDNA research and development",
"summarizing the controversy surrounding rDNA research and development",
"arguing that the environment... | 3 | In the passage, the author is primarily concerned with doing which one of the following? |
After thirty years of investigation into cell genetics, researchers made startling discoveries in the 1960s and early 1970s which culminated in the development of processes, collectively known as recombinant deoxyribonucleic acid (rDNA) technology, for the active manipulation of a cell's genetic code. The technology has created excitement and controversy because it involves altering DNA—which contains the building blocks of the genetic code. Using rDNA technology, scientists can transfer a portion of the DNA from one organism to a single living cell of another. The scientist chemically "snips" the DNA chain of the host cell at a predetermined point and attaches another piece of DNA from a donor cell at that place, creating a completely new organism. Proponents of rDNA research and development claim that it will allow scientists to find cures for disease and to better understand how genetic information controls an organism's development. They also see many other potentially practical benefits, especially in the pharmaceutical industry. Some corporations employing the new technology even claim that by the end of the century all major diseases will be treated with drugs derived from microorganisms created through rDNA technology. Pharmaceutical products already developed, but not yet marketed, indicate that these predictions may be realized. Proponents also cite nonmedical applications for this technology. Energy production and waste disposal may benefit: genetically altered organisms could convert sewage and other organic material into methane fuel. Agriculture might also take advantage of rDNA technology to produce new varieties of crops that resist foul weather, pests, and the effects of poor soil. A major concern of the critics of rDNA research is that genetically altered microorganisms might escape from the laboratory. 
Because these microorganisms are laboratory creations that, in all probability, do not occur in nature, their interaction with the natural world cannot be predicted with certainty. It is possible that they could cause previously unknown, perhaps incurable, diseases. The effect of genetically altered microorganisms on the world's microbiological predator-prey relationships is another potentially serious problem pointed out by the opponents of rDNA research. Introducing a new species may disrupt or even destroy the existing ecosystem. The collapse of interdependent relationships among species, extrapolated to its extreme, could eventually result in the destruction of humanity. Opponents of rDNA technology also cite ethical problems with it. For example, it gives scientists the power to instantly cross evolutionary and species boundaries that nature took millennia to establish. The implications of such power would become particularly profound if genetic engineers were to tinker with human genes, a practice that would bring us one step closer to Aldous Huxley's grim vision in Brave New World of a totalitarian society that engineers human beings to fulfill specific roles. | 199306_3-RC_1_2 | [
"It led to the development of processes for the manipulation of DNA.",
"It was initiated by the discovery of rDNA technology.",
"It led to the use of new treatments for major diseases.",
"It was universally heralded as a great benefit to humanity.",
"It was motivated by a desire to create new organisms."
] | 0 | According to the passage, which one of the following is an accurate statement about research into the genetic code of cells? |
After thirty years of investigation into cell genetics, researchers made startling discoveries in the 1960s and early 1970s which culminated in the development of processes, collectively known as recombinant deoxyribonucleic acid (rDNA) technology, for the active manipulation of a cell's genetic code. The technology has created excitement and controversy because it involves altering DNA—which contains the building blocks of the genetic code. Using rDNA technology, scientists can transfer a portion of the DNA from one organism to a single living cell of another. The scientist chemically "snips" the DNA chain of the host cell at a predetermined point and attaches another piece of DNA from a donor cell at that place, creating a completely new organism. Proponents of rDNA research and development claim that it will allow scientists to find cures for disease and to better understand how genetic information controls an organism's development. They also see many other potentially practical benefits, especially in the pharmaceutical industry. Some corporations employing the new technology even claim that by the end of the century all major diseases will be treated with drugs derived from microorganisms created through rDNA technology. Pharmaceutical products already developed, but not yet marketed, indicate that these predictions may be realized. Proponents also cite nonmedical applications for this technology. Energy production and waste disposal may benefit: genetically altered organisms could convert sewage and other organic material into methane fuel. Agriculture might also take advantage of rDNA technology to produce new varieties of crops that resist foul weather, pests, and the effects of poor soil. A major concern of the critics of rDNA research is that genetically altered microorganisms might escape from the laboratory. 
Because these microorganisms are laboratory creations that, in all probability, do not occur in nature, their interaction with the natural world cannot be predicted with certainty. It is possible that they could cause previously unknown, perhaps incurable, diseases. The effect of genetically altered microorganisms on the world's microbiological predator-prey relationships is another potentially serious problem pointed out by the opponents of rDNA research. Introducing a new species may disrupt or even destroy the existing ecosystem. The collapse of interdependent relationships among species, extrapolated to its extreme, could eventually result in the destruction of humanity. Opponents of rDNA technology also cite ethical problems with it. For example, it gives scientists the power to instantly cross evolutionary and species boundaries that nature took millennia to establish. The implications of such power would become particularly profound if genetic engineers were to tinker with human genes, a practice that would bring us one step closer to Aldous Huxley's grim vision in Brave New World of a totalitarian society that engineers human beings to fulfill specific roles. | 199306_3-RC_1_3 | [
"new methods of waste treatment",
"new biological knowledge",
"enhanced food production",
"development of less expensive drugs",
"increased energy production"
] | 3 | The potential benefits of rDNA technology referred to in the passage include all of the following EXCEPT |
After thirty years of investigation into cell genetics, researchers made startling discoveries in the 1960s and early 1970s which culminated in the development of processes, collectively known as recombinant deoxyribonucleic acid (rDNA) technology, for the active manipulation of a cell's genetic code. The technology has created excitement and controversy because it involves altering DNA—which contains the building blocks of the genetic code. Using rDNA technology, scientists can transfer a portion of the DNA from one organism to a single living cell of another. The scientist chemically "snips" the DNA chain of the host cell at a predetermined point and attaches another piece of DNA from a donor cell at that place, creating a completely new organism. Proponents of rDNA research and development claim that it will allow scientists to find cures for disease and to better understand how genetic information controls an organism's development. They also see many other potentially practical benefits, especially in the pharmaceutical industry. Some corporations employing the new technology even claim that by the end of the century all major diseases will be treated with drugs derived from microorganisms created through rDNA technology. Pharmaceutical products already developed, but not yet marketed, indicate that these predictions may be realized. Proponents also cite nonmedical applications for this technology. Energy production and waste disposal may benefit: genetically altered organisms could convert sewage and other organic material into methane fuel. Agriculture might also take advantage of rDNA technology to produce new varieties of crops that resist foul weather, pests, and the effects of poor soil. A major concern of the critics of rDNA research is that genetically altered microorganisms might escape from the laboratory. 
Because these microorganisms are laboratory creations that, in all probability, do not occur in nature, their interaction with the natural world cannot be predicted with certainty. It is possible that they could cause previously unknown, perhaps incurable, diseases. The effect of genetically altered microorganisms on the world's microbiological predator-prey relationships is another potentially serious problem pointed out by the opponents of rDNA research. Introducing a new species may disrupt or even destroy the existing ecosystem. The collapse of interdependent relationships among species, extrapolated to its extreme, could eventually result in the destruction of humanity. Opponents of rDNA technology also cite ethical problems with it. For example, it gives scientists the power to instantly cross evolutionary and species boundaries that nature took millennia to establish. The implications of such power would become particularly profound if genetic engineers were to tinker with human genes, a practice that would bring us one step closer to Aldous Huxley's grim vision in Brave New World of a totalitarian society that engineers human beings to fulfill specific roles. | 199306_3-RC_1_4 | [
"New safety procedures developed by rDNA researchers make it impossible for genetically altered microorganisms to escape from laboratories.",
"A genetically altered microorganism accidentally released from a laboratory is successfully contained.",
"A particular rDNA-engineered microorganism introduced into an e... | 0 | Which one of the following, if true, would most weaken an argument of opponents of rDNA technology? |
After thirty years of investigation into cell genetics, researchers made startling discoveries in the 1960s and early 1970s which culminated in the development of processes, collectively known as recombinant deoxyribonucleic acid (rDNA) technology, for the active manipulation of a cell's genetic code. The technology has created excitement and controversy because it involves altering DNA—which contains the building blocks of the genetic code. Using rDNA technology, scientists can transfer a portion of the DNA from one organism to a single living cell of another. The scientist chemically "snips" the DNA chain of the host cell at a predetermined point and attaches another piece of DNA from a donor cell at that place, creating a completely new organism. Proponents of rDNA research and development claim that it will allow scientists to find cures for disease and to better understand how genetic information controls an organism's development. They also see many other potentially practical benefits, especially in the pharmaceutical industry. Some corporations employing the new technology even claim that by the end of the century all major diseases will be treated with drugs derived from microorganisms created through rDNA technology. Pharmaceutical products already developed, but not yet marketed, indicate that these predictions may be realized. Proponents also cite nonmedical applications for this technology. Energy production and waste disposal may benefit: genetically altered organisms could convert sewage and other organic material into methane fuel. Agriculture might also take advantage of rDNA technology to produce new varieties of crops that resist foul weather, pests, and the effects of poor soil. A major concern of the critics of rDNA research is that genetically altered microorganisms might escape from the laboratory. 
Because these microorganisms are laboratory creations that, in all probability, do not occur in nature, their interaction with the natural world cannot be predicted with certainty. It is possible that they could cause previously unknown, perhaps incurable, diseases. The effect of genetically altered microorganisms on the world's microbiological predator-prey relationships is another potentially serious problem pointed out by the opponents of rDNA research. Introducing a new species may disrupt or even destroy the existing ecosystem. The collapse of interdependent relationships among species, extrapolated to its extreme, could eventually result in the destruction of humanity. Opponents of rDNA technology also cite ethical problems with it. For example, it gives scientists the power to instantly cross evolutionary and species boundaries that nature took millennia to establish. The implications of such power would become particularly profound if genetic engineers were to tinker with human genes, a practice that would bring us one step closer to Aldous Huxley's grim vision in Brave New World of a totalitarian society that engineers human beings to fulfill specific roles. | 199306_3-RC_1_5 | [
"emphasize the potential medical dangers of rDNA technology",
"advocate research on the use of rDNA technology in human genetics",
"warn of the possible disasters that could result from upsetting the balance of nature",
"present Brave New World as an example of a work of fiction that accurately predicted tech... | 4 | The author's reference in the last sentence of the passage to a society that engineers human beings to fulfill specific roles serves to |
After thirty years of investigation into cell genetics, researchers made startling discoveries in the 1960s and early 1970s which culminated in the development of processes, collectively known as recombinant deoxyribonucleic acid (rDNA) technology, for the active manipulation of a cell's genetic code. The technology has created excitement and controversy because it involves altering DNA—which contains the building blocks of the genetic code. Using rDNA technology, scientists can transfer a portion of the DNA from one organism to a single living cell of another. The scientist chemically "snips" the DNA chain of the host cell at a predetermined point and attaches another piece of DNA from a donor cell at that place, creating a completely new organism. Proponents of rDNA research and development claim that it will allow scientists to find cures for disease and to better understand how genetic information controls an organism's development. They also see many other potentially practical benefits, especially in the pharmaceutical industry. Some corporations employing the new technology even claim that by the end of the century all major diseases will be treated with drugs derived from microorganisms created through rDNA technology. Pharmaceutical products already developed, but not yet marketed, indicate that these predictions may be realized. Proponents also cite nonmedical applications for this technology. Energy production and waste disposal may benefit: genetically altered organisms could convert sewage and other organic material into methane fuel. Agriculture might also take advantage of rDNA technology to produce new varieties of crops that resist foul weather, pests, and the effects of poor soil. A major concern of the critics of rDNA research is that genetically altered microorganisms might escape from the laboratory. 
Because these microorganisms are laboratory creations that, in all probability, do not occur in nature, their interaction with the natural world cannot be predicted with certainty. It is possible that they could cause previously unknown, perhaps incurable, diseases. The effect of genetically altered microorganisms on the world's microbiological predator-prey relationships is another potentially serious problem pointed out by the opponents of rDNA research. Introducing a new species may disrupt or even destroy the existing ecosystem. The collapse of interdependent relationships among species, extrapolated to its extreme, could eventually result in the destruction of humanity. Opponents of rDNA technology also cite ethical problems with it. For example, it gives scientists the power to instantly cross evolutionary and species boundaries that nature took millennia to establish. The implications of such power would become particularly profound if genetic engineers were to tinker with human genes, a practice that would bring us one step closer to Aldous Huxley's grim vision in Brave New World of a totalitarian society that engineers human beings to fulfill specific roles. | 199306_3-RC_1_6 | [
"Agricultural products developed through rDNA technology are no more attractive to consumers than are traditional crops.",
"Genetically altered microorganisms have no natural predators but can prey on a wide variety of other microorganisms.",
"Drugs produced using rDNA technology cost more to manufacture than d... | 1 | Which one of the following, if true, would most strengthen an argument of the opponents of rDNA technology? |
Gray marketing, the selling of trademarked products through channels of distribution not authorized by the trademark holder, can involve distribution of goods either within a market region or across market boundaries. Gray marketing within a market region ("channel flow diversion") occurs when manufacturer-authorized distributors sell trademarked goods to unauthorized distributors who then sell the goods to consumers within the same region. For example, quantity discounts from manufacturers may motivate authorized dealers to enter the gray market because they can purchase larger quantities of a product than they themselves intend to stock if they can sell the extra units through gray market channels. When gray marketing occurs across market boundaries, it is typically in an international setting and may be called "parallel importing." Manufacturers often produce and sell products in more than one country and establish a network of authorized dealers in each country. Parallel importing occurs when trademarked goods intended for one country are diverted from proper channels (channel flow diversion) and then exported to unauthorized distributors in another country. Trademark owners justifiably argue against gray marketing practices since such practices clearly jeopardize the goodwill established by trademark owners: consumers who purchase trademarked goods in the gray market do not get the same "extended product," which typically includes pre- and postsale service. Equally important, authorized distributors may cease to promote the product if it becomes available for much lower prices through unauthorized channels. Current debate over regulation of gray marketing focuses on three disparate theories in trademark law that have been variously and confusingly applied to parallel importation cases: universality, exhaustion, and territoriality. The theory of universality holds that a trademark is only an indication of the source or origin of the product.
This theory does not recognize the goodwill functions of a trademark. When the courts apply this theory, gray marketing practices are allowed to continue because the origin of the product remains the same regardless of the specific route of the product through the channel of distribution. The exhaustion theory holds that a trademark owner relinquishes all rights once a product has been sold. When this theory is applied, gray marketing practices are allowed to continue because the trademark owners' rights cease as soon as their products are sold to a distributor. The theory of territoriality holds that a trademark is effective in the country in which it is registered. Under the theory of territoriality, trademark owners can stop gray marketing practices in the registering countries on products bearing their trademarks. Since only the territoriality theory affords trademark owners any real legal protection against gray marketing practices, I believe it is inevitable as well as desirable that it will come to be consistently applied in gray marketing cases. | 199306_3-RC_2_7 | [
"Gray marketing is unfair to trademark owners and should be legally controlled.",
"Gray marketing is practiced in many different forms and places, and legislators should recognize the futility of trying to regulate it.",
"The mechanisms used to control gray marketing across markets are different from those most... | 0 | Which one of the following best expresses the main point of the passage? |
Gray marketing, the selling of trademarked products through channels of distribution not authorized by the trademark holder, can involve distribution of goods either within a market region or across market boundaries. Gray marketing within a market region ("channel flow diversion") occurs when manufacturer-authorized distributors sell trademarked goods to unauthorized distributors who then sell the goods to consumers within the same region. For example, quantity discounts from manufacturers may motivate authorized dealers to enter the gray market because they can purchase larger quantities of a product than they themselves intend to stock if they can sell the extra units through gray market channels. When gray marketing occurs across market boundaries, it is typically in an international setting and may be called "parallel importing." Manufacturers often produce and sell products in more than one country and establish a network of authorized dealers in each country. Parallel importing occurs when trademarked goods intended for one country are diverted from proper channels (channel flow diversion) and then exported to unauthorized distributors in another country. Trademark owners justifiably argue against gray marketing practices since such practices clearly jeopardize the goodwill established by trademark owners: consumers who purchase trademarked goods in the gray market do not get the same "extended product," which typically includes pre- and postsale service. Equally important, authorized distributors may cease to promote the product if it becomes available for much lower prices through unauthorized channels. Current debate over regulation of gray marketing focuses on three disparate theories in trademark law that have been variously and confusingly applied to parallel importation cases: universality, exhaustion, and territoriality. The theory of universality holds that a trademark is only an indication of the source or origin of the product.
This theory does not recognize the goodwill functions of a trademark. When the courts apply this theory, gray marketing practices are allowed to continue because the origin of the product remains the same regardless of the specific route of the product through the channel of distribution. The exhaustion theory holds that a trademark owner relinquishes all rights once a product has been sold. When this theory is applied, gray marketing practices are allowed to continue because the trademark owners' rights cease as soon as their products are sold to a distributor. The theory of territoriality holds that a trademark is effective in the country in which it is registered. Under the theory of territoriality, trademark owners can stop gray marketing practices in the registering countries on products bearing their trademarks. Since only the territoriality theory affords trademark owners any real legal protection against gray marketing practices, I believe it is inevitable as well as desirable that it will come to be consistently applied in gray marketing cases. | 199306_3-RC_2_8 | [
"criticize the motives and methods of those who practice gray marketing",
"evaluate the effects of both channel flow diversion and parallel importation",
"discuss the methods that have been used to regulate gray marketing and evaluate such methods' degrees of success",
"describe a controversial marketing prac... | 3 | The function of the passage as a whole is to |
Gray marketing, the selling of trademarked products through channels of distribution not authorized by the trademark holder, can involve distribution of goods either within a market region or across market boundaries. Gray marketing within a market region ("channel flow diversion") occurs when manufacturer-authorized distributors sell trademarked goods to unauthorized distributors who then sell the goods to consumers within the same region. For example, quantity discounts from manufacturers may motivate authorized dealers to enter the gray market because they can purchase larger quantities of a product than they themselves intend to stock if they can sell the extra units through gray market channels. When gray marketing occurs across market boundaries, it is typically in an international setting and may be called "parallel importing." Manufacturers often produce and sell products in more than one country and establish a network of authorized dealers in each country. Parallel importing occurs when trademarked goods intended for one country are diverted from proper channels (channel flow diversion) and then exported to unauthorized distributors in another country. Trademark owners justifiably argue against gray marketing practices since such practices clearly jeopardize the goodwill established by trademark owners: consumers who purchase trademarked goods in the gray market do not get the same "extended product," which typically includes pre- and postsale service. Equally important, authorized distributors may cease to promote the product if it becomes available for much lower prices through unauthorized channels. Current debate over regulation of gray marketing focuses on three disparate theories in trademark law that have been variously and confusingly applied to parallel importation cases: universality, exhaustion, and territoriality. The theory of universality holds that a trademark is only an indication of the source or origin of the product.
This theory does not recognize the goodwill functions of a trademark. When the courts apply this theory, gray marketing practices are allowed to continue because the origin of the product remains the same regardless of the specific route of the product through the channel of distribution. The exhaustion theory holds that a trademark owner relinquishes all rights once a product has been sold. When this theory is applied, gray marketing practices are allowed to continue because the trademark owners' rights cease as soon as their products are sold to a distributor. The theory of territoriality holds that a trademark is effective in the country in which it is registered. Under the theory of territoriality, trademark owners can stop gray marketing practices in the registering countries on products bearing their trademarks. Since only the territoriality theory affords trademark owners any real legal protection against gray marketing practices, I believe it is inevitable as well as desirable that it will come to be consistently applied in gray marketing cases. | 199306_3-RC_2_9 | [
"Manufacturers find it difficult to monitor the effectiveness of promotional efforts made on behalf of products that are gray marketed.",
"Gray marketing can discourage product promotion by authorized distributors.",
"Gray marketing forces manufacturers to accept the low profit margins that result from quantity... | 1 | Which one of the following does the author offer as an argument against gray marketing? |
Gray marketing, the selling of trademarked products through channels of distribution not authorized by the trademark holder, can involve distribution of goods either within a market region or across market boundaries. Gray marketing within a market region ( "channel flow diversion" ) occurs when manufacturer-authorized distributors sell trademarked goods to unauthorized distributors who then sell the goods to consumers within the same region. For example, quantity discounts from manufacturers may motivate authorized dealers to enter the gray market because they can purchase larger quantities of a product than they themselves intend to stock if they can sell the extra units through gray market channels. When gray marketing occurs across market boundaries, it is typically in an international setting and may be called "parallel importing." Manufacturers often produce and sell products in more than one country and establish a network of authorized dealers in each country. Parallel importing occurs when trademarked goods intended for one country are diverted from proper channels (channel flow diversion) and then exported to unauthorized distributors in another country. Trademark owners justifiably argue against gray marketing practices since such practices clearly jeopardize the goodwill established by trademark owners: consumers who purchase trademarked goods in the gray market do not get the same "extended product," which typically includes pre- and postsale service. Equally important, authorized distributors may cease to promote the product if it becomes available for much lower prices through unauthorized channels. Current debate over regulation of gray marketing focuses on three disparate theories in trademark law that have been variously and confusingly applied to parallel importation cases: universality, exhaustion, and territoriality. The theory of universality holds that a trademark is only an indication of the source or origin of the product. 
This theory does not recognize the goodwill functions of a trademark. When the courts apply this theory, gray marketing practices are allowed to continue because the origin of the product remains the same regardless of the specific route of the product through the channel of distribution. The exhaustion theory holds that a trademark owner relinquishes all rights once a product has been sold. When this theory is applied, gray marketing practices are allowed to continue because the trademark owners' rights cease as soon as their products are sold to a distributor. The theory of territoriality holds that a trademark is effective in the country in which it is registered. Under the theory of territoriality, trademark owners can stop gray marketing practices in the registering countries on products bearing their trademarks. Since only the territoriality theory affords trademark owners any real legal protection against gray marketing practices, I believe it is inevitable as well as desirable that it will come to be consistently applied in gray marketing cases. | 199306_3-RC_2_10 | [
"the right of trademark owners to enforce, in countries in which the trademarks are registered, distribution agreements intended to restrict distribution to authorized channels",
"the right of trademark owners to sell trademarked goods only to those distributors who agree to abide by distribution agreements",
"... | 0 | The information in the passage suggests that proponents of the theory of territoriality would probably differ from proponents of the theory of exhaustion on which one of the following issues? |
Gray marketing, the selling of trademarked products through channels of distribution not authorized by the trademark holder, can involve distribution of goods either within a market region or across market boundaries. Gray marketing within a market region ( "channel flow diversion" ) occurs when manufacturer-authorized distributors sell trademarked goods to unauthorized distributors who then sell the goods to consumers within the same region. For example, quantity discounts from manufacturers may motivate authorized dealers to enter the gray market because they can purchase larger quantities of a product than they themselves intend to stock if they can sell the extra units through gray market channels. When gray marketing occurs across market boundaries, it is typically in an international setting and may be called "parallel importing." Manufacturers often produce and sell products in more than one country and establish a network of authorized dealers in each country. Parallel importing occurs when trademarked goods intended for one country are diverted from proper channels (channel flow diversion) and then exported to unauthorized distributors in another country. Trademark owners justifiably argue against gray marketing practices since such practices clearly jeopardize the goodwill established by trademark owners: consumers who purchase trademarked goods in the gray market do not get the same "extended product," which typically includes pre- and postsale service. Equally important, authorized distributors may cease to promote the product if it becomes available for much lower prices through unauthorized channels. Current debate over regulation of gray marketing focuses on three disparate theories in trademark law that have been variously and confusingly applied to parallel importation cases: universality, exhaustion, and territoriality. The theory of universality holds that a trademark is only an indication of the source or origin of the product. 
This theory does not recognize the goodwill functions of a trademark. When the courts apply this theory, gray marketing practices are allowed to continue because the origin of the product remains the same regardless of the specific route of the product through the channel of distribution. The exhaustion theory holds that a trademark owner relinquishes all rights once a product has been sold. When this theory is applied, gray marketing practices are allowed to continue because the trademark owners' rights cease as soon as their products are sold to a distributor. The theory of territoriality holds that a trademark is effective in the country in which it is registered. Under the theory of territoriality, trademark owners can stop gray marketing practices in the registering countries on products bearing their trademarks. Since only the territoriality theory affords trademark owners any real legal protection against gray marketing practices, I believe it is inevitable as well as desirable that it will come to be consistently applied in gray marketing cases. | 199306_3-RC_2_11 | [
"fault trademark owners for their unwillingness to offer a solution to a major consumer complaint against gray marketing",
"indicate a way in which manufacturers sustain damage against which they ought to be protected",
"highlight one way in which gray marketing across markets is more problematic than gray mark... | 1 | The author discusses the impact of gray marketing on goodwill in order to |
Gray marketing, the selling of trademarked products through channels of distribution not authorized by the trademark holder, can involve distribution of goods either within a market region or across market boundaries. Gray marketing within a market region ( "channel flow diversion" ) occurs when manufacturer-authorized distributors sell trademarked goods to unauthorized distributors who then sell the goods to consumers within the same region. For example, quantity discounts from manufacturers may motivate authorized dealers to enter the gray market because they can purchase larger quantities of a product than they themselves intend to stock if they can sell the extra units through gray market channels. When gray marketing occurs across market boundaries, it is typically in an international setting and may be called "parallel importing." Manufacturers often produce and sell products in more than one country and establish a network of authorized dealers in each country. Parallel importing occurs when trademarked goods intended for one country are diverted from proper channels (channel flow diversion) and then exported to unauthorized distributors in another country. Trademark owners justifiably argue against gray marketing practices since such practices clearly jeopardize the goodwill established by trademark owners: consumers who purchase trademarked goods in the gray market do not get the same "extended product," which typically includes pre- and postsale service. Equally important, authorized distributors may cease to promote the product if it becomes available for much lower prices through unauthorized channels. Current debate over regulation of gray marketing focuses on three disparate theories in trademark law that have been variously and confusingly applied to parallel importation cases: universality, exhaustion, and territoriality. The theory of universality holds that a trademark is only an indication of the source or origin of the product. 
This theory does not recognize the goodwill functions of a trademark. When the courts apply this theory, gray marketing practices are allowed to continue because the origin of the product remains the same regardless of the specific route of the product through the channel of distribution. The exhaustion theory holds that a trademark owner relinquishes all rights once a product has been sold. When this theory is applied, gray marketing practices are allowed to continue because the trademark owners' rights cease as soon as their products are sold to a distributor. The theory of territoriality holds that a trademark is effective in the country in which it is registered. Under the theory of territoriality, trademark owners can stop gray marketing practices in the registering countries on products bearing their trademarks. Since only the territoriality theory affords trademark owners any real legal protection against gray marketing practices, I believe it is inevitable as well as desirable that it will come to be consistently applied in gray marketing cases. | 199306_3-RC_2_12 | [
"resigned tolerance",
"utter dismay",
"reasoned optimism",
"unbridled fervor",
"cynical indifference"
] | 2 | The author's attitude toward the possibility that the courts will come to exercise consistent control over gray marketing practices can best be characterized as one of |
Gray marketing, the selling of trademarked products through channels of distribution not authorized by the trademark holder, can involve distribution of goods either within a market region or across market boundaries. Gray marketing within a market region ( "channel flow diversion" ) occurs when manufacturer-authorized distributors sell trademarked goods to unauthorized distributors who then sell the goods to consumers within the same region. For example, quantity discounts from manufacturers may motivate authorized dealers to enter the gray market because they can purchase larger quantities of a product than they themselves intend to stock if they can sell the extra units through gray market channels. When gray marketing occurs across market boundaries, it is typically in an international setting and may be called "parallel importing." Manufacturers often produce and sell products in more than one country and establish a network of authorized dealers in each country. Parallel importing occurs when trademarked goods intended for one country are diverted from proper channels (channel flow diversion) and then exported to unauthorized distributors in another country. Trademark owners justifiably argue against gray marketing practices since such practices clearly jeopardize the goodwill established by trademark owners: consumers who purchase trademarked goods in the gray market do not get the same "extended product," which typically includes pre- and postsale service. Equally important, authorized distributors may cease to promote the product if it becomes available for much lower prices through unauthorized channels. Current debate over regulation of gray marketing focuses on three disparate theories in trademark law that have been variously and confusingly applied to parallel importation cases: universality, exhaustion, and territoriality. The theory of universality holds that a trademark is only an indication of the source or origin of the product. 
This theory does not recognize the goodwill functions of a trademark. When the courts apply this theory, gray marketing practices are allowed to continue because the origin of the product remains the same regardless of the specific route of the product through the channel of distribution. The exhaustion theory holds that a trademark owner relinquishes all rights once a product has been sold. When this theory is applied, gray marketing practices are allowed to continue because the trademark owners' rights cease as soon as their products are sold to a distributor. The theory of territoriality holds that a trademark is effective in the country in which it is registered. Under the theory of territoriality, trademark owners can stop gray marketing practices in the registering countries on products bearing their trademarks. Since only the territoriality theory affords trademark owners any real legal protection against gray marketing practices, I believe it is inevitable as well as desirable that it will come to be consistently applied in gray marketing cases. | 199306_3-RC_2_13 | [
"profit margins on authorized distribution of goods were less than those on goods marketed through parallel importing",
"manufacturers relieved authorized channels of all responsibility for product promotion",
"manufacturers charged all authorized distributors the same unit price for products regardless of quan... | 2 | It can be inferred from the passage that some channel flow diversion might be eliminated if |
Any study of autobiographical narratives that appeared under the ostensible authorship of African American writers between 1760 and 1865 inevitably raises concerns about authenticity and interpretation. Should an autobiography whose written composition was literally out of the hands of its narrator be considered as the literary equivalent of those autobiographies that were authored independently by their subjects? In many cases, the so-called edited narrative of an ex-slave ought to be treated as a ghostwritten account insofar as literary analysis is concerned, especially when it was composed by its editor from "a statement of facts" provided by an African American subject. Blassingame has taken pains to show that the editors of several of the more famous antebellum slave narratives were "noted for their integrity" and thus were unlikely to distort the facts given them by slave narrators. From a literary standpoint, however, it is not the moral integrity of these editors that is at issue but the linguistic, structural, and tonal integrity of the narratives they produced. Even if an editor faithfully reproduced the facts of a narrator's life, it was still the editor who decided what to make of these facts, how they should be emphasized, in what order they ought to be presented, and what was extraneous or germane. Readers of African American autobiography then and now have too readily accepted the presumption of these eighteenth- and nineteenth-century editors that experiential facts recounted orally could be recorded and sorted by an amanuensis-editor, taken out of their original contexts, and then published with editorial prefaces, footnotes, and appended commentary, all without compromising the validity of the narrative as a product of an African American consciousness. 
Transcribed narratives in which an editor explicitly delimits his or her role undoubtedly may be regarded as more authentic and reflective of the narrator's thought in action than those edited works that flesh out a statement of facts in ways unaccounted for. Still, it would be naive to accord dictated oral narratives the same status as autobiographies composed and written by the subjects of the stories themselves. This point is illustrated by an analysis of Works Progress Administration interviews with ex-slaves in the 1930s that suggests that narrators often told interviewers what they seemed to want to hear. If it seemed impolitic for former slaves to tell all they knew and thought about the past to interviewers in the 1930s, the same could be said of escaped slaves on the run in the antebellum era. Dictated narratives, therefore, are literary texts whose authenticity is difficult to determine. Analysts should reserve close analytic readings for independently authored texts. Discussion of collaborative texts should take into account the conditions that governed their production. | 199306_3-RC_3_14 | [
"The personal integrity of an autobiography's editor has little relevance to its value as a literary work.",
"Autobiographies dictated to editors are less valuable as literature than are autobiographies authored by their subjects.",
"The facts that are recorded in an autobiography are less important than the pe... | 3 | Which one of the following best summarizes the main point of the passage? |
Any study of autobiographical narratives that appeared under the ostensible authorship of African American writers between 1760 and 1865 inevitably raises concerns about authenticity and interpretation. Should an autobiography whose written composition was literally out of the hands of its narrator be considered as the literary equivalent of those autobiographies that were authored independently by their subjects? In many cases, the so-called edited narrative of an ex-slave ought to be treated as a ghostwritten account insofar as literary analysis is concerned, especially when it was composed by its editor from "a statement of facts" provided by an African American subject. Blassingame has taken pains to show that the editors of several of the more famous antebellum slave narratives were "noted for their integrity" and thus were unlikely to distort the facts given them by slave narrators. From a literary standpoint, however, it is not the moral integrity of these editors that is at issue but the linguistic, structural, and tonal integrity of the narratives they produced. Even if an editor faithfully reproduced the facts of a narrator's life, it was still the editor who decided what to make of these facts, how they should be emphasized, in what order they ought to be presented, and what was extraneous or germane. Readers of African American autobiography then and now have too readily accepted the presumption of these eighteenth- and nineteenth-century editors that experiential facts recounted orally could be recorded and sorted by an amanuensis-editor, taken out of their original contexts, and then published with editorial prefaces, footnotes, and appended commentary, all without compromising the validity of the narrative as a product of an African American consciousness. 
Transcribed narratives in which an editor explicitly delimits his or her role undoubtedly may be regarded as more authentic and reflective of the narrator's thought in action than those edited works that flesh out a statement of facts in ways unaccounted for. Still, it would be naive to accord dictated oral narratives the same status as autobiographies composed and written by the subjects of the stories themselves. This point is illustrated by an analysis of Works Progress Administration interviews with ex-slaves in the 1930s that suggests that narrators often told interviewers what they seemed to want to hear. If it seemed impolitic for former slaves to tell all they knew and thought about the past to interviewers in the 1930s, the same could be said of escaped slaves on the run in the antebellum era. Dictated narratives, therefore, are literary texts whose authenticity is difficult to determine. Analysts should reserve close analytic readings for independently authored texts. Discussion of collaborative texts should take into account the conditions that governed their production. | 199306_3-RC_3_15 | [
"an artist who wishes to invent a unique method of conveying the emotional impact of a scene in a painting",
"a worker who must interpret the instructions of an employer",
"a critic who must provide evidence to support opinions about a play being reviewed",
"an architect who must make the best use of a natura... | 4 | The information in the passage suggests that the role of the "editor" (lines 23–24) is most like that of |
Any study of autobiographical narratives that appeared under the ostensible authorship of African American writers between 1760 and 1865 inevitably raises concerns about authenticity and interpretation. Should an autobiography whose written composition was literally out of the hands of its narrator be considered as the literary equivalent of those autobiographies that were authored independently by their subjects? In many cases, the so-called edited narrative of an ex-slave ought to be treated as a ghostwritten account insofar as literary analysis is concerned, especially when it was composed by its editor from "a statement of facts" provided by an African American subject. Blassingame has taken pains to show that the editors of several of the more famous antebellum slave narratives were "noted for their integrity" and thus were unlikely to distort the facts given them by slave narrators. From a literary standpoint, however, it is not the moral integrity of these editors that is at issue but the linguistic, structural, and tonal integrity of the narratives they produced. Even if an editor faithfully reproduced the facts of a narrator's life, it was still the editor who decided what to make of these facts, how they should be emphasized, in what order they ought to be presented, and what was extraneous or germane. Readers of African American autobiography then and now have too readily accepted the presumption of these eighteenth- and nineteenth-century editors that experiential facts recounted orally could be recorded and sorted by an amanuensis-editor, taken out of their original contexts, and then published with editorial prefaces, footnotes, and appended commentary, all without compromising the validity of the narrative as a product of an African American consciousness. 
Transcribed narratives in which an editor explicitly delimits his or her role undoubtedly may be regarded as more authentic and reflective of the narrator's thought in action than those edited works that flesh out a statement of facts in ways unaccounted for. Still, it would be naive to accord dictated oral narratives the same status as autobiographies composed and written by the subjects of the stories themselves. This point is illustrated by an analysis of Works Progress Administration interviews with ex-slaves in the 1930s that suggests that narrators often told interviewers what they seemed to want to hear. If it seemed impolitic for former slaves to tell all they knew and thought about the past to interviewers in the 1930s, the same could be said of escaped slaves on the run in the antebellum era. Dictated narratives, therefore, are literary texts whose authenticity is difficult to determine. Analysts should reserve close analytic readings for independently authored texts. Discussion of collaborative texts should take into account the conditions that governed their production. | 199306_3-RC_3_16 | [
"The author is adamantly opposed to the application of literary analysis to edited autobiographies.",
"The author is skeptical of the value of close analytical reading in the case of edited autobiographies.",
"The author believes that literary analysis of the prefaces, footnotes, and commentaries that accompany... | 1 | Which one of the following best describes the author's opinion about applying literary analysis to edited autobiographies? |
Any study of autobiographical narratives that appeared under the ostensible authorship of African American writers between 1760 and 1865 inevitably raises concerns about authenticity and interpretation. Should an autobiography whose written composition was literally out of the hands of its narrator be considered as the literary equivalent of those autobiographies that were authored independently by their subjects? In many cases, the so-called edited narrative of an ex-slave ought to be treated as a ghostwritten account insofar as literary analysis is concerned, especially when it was composed by its editor from "a statement of facts" provided by an African American subject. Blassingame has taken pains to show that the editors of several of the more famous antebellum slave narratives were "noted for their integrity" and thus were unlikely to distort the facts given them by slave narrators. From a literary standpoint, however, it is not the moral integrity of these editors that is at issue but the linguistic, structural, and tonal integrity of the narratives they produced. Even if an editor faithfully reproduced the facts of a narrator's life, it was still the editor who decided what to make of these facts, how they should be emphasized, in what order they ought to be presented, and what was extraneous or germane. Readers of African American autobiography then and now have too readily accepted the presumption of these eighteenth- and nineteenth-century editors that experiential facts recounted orally could be recorded and sorted by an amanuensis-editor, taken out of their original contexts, and then published with editorial prefaces, footnotes, and appended commentary, all without compromising the validity of the narrative as a product of an African American consciousness. 
Transcribed narratives in which an editor explicitly delimits his or her role undoubtedly may be regarded as more authentic and reflective of the narrator's thought in action than those edited works that flesh out a statement of facts in ways unaccounted for. Still, it would be naive to accord dictated oral narratives the same status as autobiographies composed and written by the subjects of the stories themselves. This point is illustrated by an analysis of Works Progress Administration interviews with ex-slaves in the 1930s that suggests that narrators often told interviewers what they seemed to want to hear. If it seemed impolitic for former slaves to tell all they knew and thought about the past to interviewers in the 1930s, the same could be said of escaped slaves on the run in the antebellum era. Dictated narratives, therefore, are literary texts whose authenticity is difficult to determine. Analysts should reserve close analytic readings for independently authored texts. Discussion of collaborative texts should take into account the conditions that governed their production. | 199306_3-RC_3_17 | [
"They were more concerned with the personal details in the autobiographies than with their historical significance.",
"They were unable to distinguish between ghostwritten and edited autobiographies.",
"They were less naive about the facts of slave life than are readers today.",
"They presumed that the editin... | 3 | The passage supports which one of the following statements about the readers of autobiographies of African Americans that were published between 1760 and 1865? |
Any study of autobiographical narratives that appeared under the ostensible authorship of African American writers between 1760 and 1865 inevitably raises concerns about authenticity and interpretation. Should an autobiography whose written composition was literally out of the hands of its narrator be considered as the literary equivalent of those autobiographies that were authored independently by their subjects? In many cases, the so-called edited narrative of an ex-slave ought to be treated as a ghostwritten account insofar as literary analysis is concerned, especially when it was composed by its editor from "a statement of facts" provided by an African American subject. Blassingame has taken pains to show that the editors of several of the more famous antebellum slave narratives were "noted for their integrity" and thus were unlikely to distort the facts given them by slave narrators. From a literary standpoint, however, it is not the moral integrity of these editors that is at issue but the linguistic, structural, and tonal integrity of the narratives they produced. Even if an editor faithfully reproduced the facts of a narrator's life, it was still the editor who decided what to make of these facts, how they should be emphasized, in what order they ought to be presented, and what was extraneous or germane. Readers of African American autobiography then and now have too readily accepted the presumption of these eighteenth- and nineteenth-century editors that experiential facts recounted orally could be recorded and sorted by an amanuensis-editor, taken out of their original contexts, and then published with editorial prefaces, footnotes, and appended commentary, all without compromising the validity of the narrative as a product of an African American consciousness. 
Transcribed narratives in which an editor explicitly delimits his or her role undoubtedly may be regarded as more authentic and reflective of the narrator's thought in action than those edited works that flesh out a statement of facts in ways unaccounted for. Still, it would be naive to accord dictated oral narratives the same status as autobiographies composed and written by the subjects of the stories themselves. This point is illustrated by an analysis of Works Progress Administration interviews with ex-slaves in the 1930s that suggests that narrators often told interviewers what they seemed to want to hear. If it seemed impolitic for former slaves to tell all they knew and thought about the past to interviewers in the 1930s, the same could be said of escaped slaves on the run in the antebellum era. Dictated narratives, therefore, are literary texts whose authenticity is difficult to determine. Analysts should reserve close analytic readings for independently authored texts. Discussion of collaborative texts should take into account the conditions that governed their production. | 199306_3-RC_3_18 | [
"\"ostensible\" (line 2)",
"\"integrity\" (line 18)",
"\"extraneous\" (line 27)",
"\"delimits\" (line 39)",
"\"impolitic\" (line 51)"
] | 0 | Which one of the following words, as it is used in the passage, best serves to underscore the author's concerns about the authenticity of the autobiographies discussed? |
Any study of autobiographical narratives that appeared under the ostensible authorship of African American writers between 1760 and 1865 inevitably raises concerns about authenticity and interpretation. Should an autobiography whose written composition was literally out of the hands of its narrator be considered as the literary equivalent of those autobiographies that were authored independently by their subjects? In many cases, the so-called edited narrative of an ex-slave ought to be treated as a ghostwritten account insofar as literary analysis is concerned, especially when it was composed by its editor from "a statement of facts" provided by an African American subject. Blassingame has taken pains to show that the editors of several of the more famous antebellum slave narratives were "noted for their integrity" and thus were unlikely to distort the facts given them by slave narrators. From a literary standpoint, however, it is not the moral integrity of these editors that is at issue but the linguistic, structural, and tonal integrity of the narratives they produced. Even if an editor faithfully reproduced the facts of a narrator's life, it was still the editor who decided what to make of these facts, how they should be emphasized, in what order they ought to be presented, and what was extraneous or germane. Readers of African American autobiography then and now have too readily accepted the presumption of these eighteenth- and nineteenth-century editors that experiential facts recounted orally could be recorded and sorted by an amanuensis-editor, taken out of their original contexts, and then published with editorial prefaces, footnotes, and appended commentary, all without compromising the validity of the narrative as a product of an African American consciousness. 
Transcribed narratives in which an editor explicitly delimits his or her role undoubtedly may be regarded as more authentic and reflective of the narrator's thought in action than those edited works that flesh out a statement of facts in ways unaccounted for. Still, it would be naive to accord dictated oral narratives the same status as autobiographies composed and written by the subjects of the stories themselves. This point is illustrated by an analysis of Works Progress Administration interviews with ex-slaves in the 1930s that suggests that narrators often told interviewers what they seemed to want to hear. If it seemed impolitic for former slaves to tell all they knew and thought about the past to interviewers in the 1930s, the same could be said of escaped slaves on the run in the antebellum era. Dictated narratives, therefore, are literary texts whose authenticity is difficult to determine. Analysts should reserve close analytic readings for independently authored texts. Discussion of collaborative texts should take into account the conditions that governed their production. | 199306_3-RC_3_19 | [
"autobiography has been dictated to an experienced amanuensis-editor",
"autobiography attempts to reflect the narrator's thought in action",
"autobiography was authored independently by its subject",
"moral integrity of the autobiography's editor is well established",
"editor of the autobiography collaborat... | 2 | According to the passage, close analytic reading of an autobiography is appropriate only when the |
Any study of autobiographical narratives that appeared under the ostensible authorship of African American writers between 1760 and 1865 inevitably raises concerns about authenticity and interpretation. Should an autobiography whose written composition was literally out of the hands of its narrator be considered as the literary equivalent of those autobiographies that were authored independently by their subjects? In many cases, the so-called edited narrative of an ex-slave ought to be treated as a ghostwritten account insofar as literary analysis is concerned, especially when it was composed by its editor from "a statement of facts" provided by an African American subject. Blassingame has taken pains to show that the editors of several of the more famous antebellum slave narratives were "noted for their integrity" and thus were unlikely to distort the facts given them by slave narrators. From a literary standpoint, however, it is not the moral integrity of these editors that is at issue but the linguistic, structural, and tonal integrity of the narratives they produced. Even if an editor faithfully reproduced the facts of a narrator's life, it was still the editor who decided what to make of these facts, how they should be emphasized, in what order they ought to be presented, and what was extraneous or germane. Readers of African American autobiography then and now have too readily accepted the presumption of these eighteenth- and nineteenth-century editors that experiential facts recounted orally could be recorded and sorted by an amanuensis-editor, taken out of their original contexts, and then published with editorial prefaces, footnotes, and appended commentary, all without compromising the validity of the narrative as a product of an African American consciousness. 
Transcribed narratives in which an editor explicitly delimits his or her role undoubtedly may be regarded as more authentic and reflective of the narrator's thought in action than those edited works that flesh out a statement of facts in ways unaccounted for. Still, it would be naive to accord dictated oral narratives the same status as autobiographies composed and written by the subjects of the stories themselves. This point is illustrated by an analysis of Works Progress Administration interviews with ex-slaves in the 1930s that suggests that narrators often told interviewers what they seemed to want to hear. If it seemed impolitic for former slaves to tell all they knew and thought about the past to interviewers in the 1930s, the same could be said of escaped slaves on the run in the antebellum era. Dictated narratives, therefore, are literary texts whose authenticity is difficult to determine. Analysts should reserve close analytic readings for independently authored texts. Discussion of collaborative texts should take into account the conditions that governed their production. | 199306_3-RC_3_20 | [
"It adds an authority's endorsement to the author's view that edited narratives ought to be treated as ghostwritten accounts.",
"It provides an example of a mistaken emphasis in the study of autobiography.",
"It presents an account of a new method of literary analysis to be applied to autobiography.",
"It ill... | 1 | It can be inferred that the discussion in the passage of Blassingame's work primarily serves which one of the following purposes? |
A conventional view of nineteenth-century Britain holds that iron manufacturers and textile manufacturers from the north of England became the wealthiest and most powerful people in society after about 1832. According to Marxist historians, these industrialists were the target of the working class in its struggle for power. A new study by Rubinstein, however, suggests that the real wealth lay with the bankers and merchants of London. Rubinstein does not deny that a northern industrial elite existed but argues that it was consistently outnumbered and outdone by a London-based commercial elite. His claims are provocative and deserve consideration. Rubinstein's claim about the location of wealth comes from his investigation of probate records. These indicate the value of personal property, excluding real property (buildings and land), left by individuals at death. It does seem as if large fortunes were more frequently made in commerce than in industry and, within industry, more frequently from alcohol or tobacco than from textiles or metal. However, such records do not unequivocally make Rubinstein's case. Uncertainties abound about how the probate rules for valuing assets were actually applied. Mills and factories, being real property, were clearly excluded; machinery may also have been, for the same reason. What the valuation conventions were for stock-in-trade (goods for sale) is also uncertain. It is possible that their probate values were much lower than their actual market values; cash or near-cash, such as bank balances or stocks, were, on the other hand, invariably considered at full face value. A further complication is that probate valuations probably took no notice of a business's goodwill (favor with the public) which, since it represents expectations about future profit-making, would today very often be a large fraction of market value. 
Whether factors like these introduced systematic biases into the probate valuations of individuals with different types of businesses would be worth investigating. The orthodox view that the wealthiest individuals were the most powerful is also questioned by Rubinstein's study. The problem for this orthodox view is that Rubinstein finds many millionaires who are totally unknown to nineteenth-century historians; the reason for their obscurity could be that they were not powerful. Indeed, Rubinstein dismisses any notion that great wealth had anything to do with entry into the governing elite, as represented by bishops, higher civil servants, and chairmen of manufacturing companies. The only requirements were university attendance and a father with a middle-class income. Rubinstein, in another study, has begun to buttress his findings about the location of wealth by analyzing income tax returns, which reveal a geographical distribution of middle-class incomes similar to that of wealthy incomes revealed by probate records. But until further confirmatory investigation is done, his claims can only be considered partially convincing. | 199306_3-RC_4_21 | [
"the Marxist interpretation of the relationship between class and power in nineteenth-century Britain is no longer viable",
"a simple equation between wealth and power is unlikely to be supported by new data from nineteenth-century British archives",
"a recent historical investigation has challenged but not dis... | 2 | The main idea of the passage is that |
A conventional view of nineteenth-century Britain holds that iron manufacturers and textile manufacturers from the north of England became the wealthiest and most powerful people in society after about 1832. According to Marxist historians, these industrialists were the target of the working class in its struggle for power. A new study by Rubinstein, however, suggests that the real wealth lay with the bankers and merchants of London. Rubinstein does not deny that a northern industrial elite existed but argues that it was consistently outnumbered and outdone by a London-based commercial elite. His claims are provocative and deserve consideration. Rubinstein's claim about the location of wealth comes from his investigation of probate records. These indicate the value of personal property, excluding real property (buildings and land), left by individuals at death. It does seem as if large fortunes were more frequently made in commerce than in industry and, within industry, more frequently from alcohol or tobacco than from textiles or metal. However, such records do not unequivocally make Rubinstein's case. Uncertainties abound about how the probate rules for valuing assets were actually applied. Mills and factories, being real property, were clearly excluded; machinery may also have been, for the same reason. What the valuation conventions were for stock-in-trade (goods for sale) is also uncertain. It is possible that their probate values were much lower than their actual market values; cash or near-cash, such as bank balances or stocks, were, on the other hand, invariably considered at full face value. A further complication is that probate valuations probably took no notice of a business's goodwill (favor with the public) which, since it represents expectations about future profit-making, would today very often be a large fraction of market value. 
Whether factors like these introduced systematic biases into the probate valuations of individuals with different types of businesses would be worth investigating. The orthodox view that the wealthiest individuals were the most powerful is also questioned by Rubinstein's study. The problem for this orthodox view is that Rubinstein finds many millionaires who are totally unknown to nineteenth-century historians; the reason for their obscurity could be that they were not powerful. Indeed, Rubinstein dismisses any notion that great wealth had anything to do with entry into the governing elite, as represented by bishops, higher civil servants, and chairmen of manufacturing companies. The only requirements were university attendance and a father with a middle-class income. Rubinstein, in another study, has begun to buttress his findings about the location of wealth by analyzing income tax returns, which reveal a geographical distribution of middle-class incomes similar to that of wealthy incomes revealed by probate records. But until further confirmatory investigation is done, his claims can only be considered partially convincing. | 199306_3-RC_4_22 | [
"self-contradictory and misleading",
"ambiguous and outdated",
"controversial but readily available",
"revealing but difficult to interpret",
"widely used by historians but fully understandable only by specialists"
] | 3 | The author of the passage implies that probate records as a source of information about wealth in nineteenth-century Britain are |
A conventional view of nineteenth-century Britain holds that iron manufacturers and textile manufacturers from the north of England became the wealthiest and most powerful people in society after about 1832. According to Marxist historians, these industrialists were the target of the working class in its struggle for power. A new study by Rubinstein, however, suggests that the real wealth lay with the bankers and merchants of London. Rubinstein does not deny that a northern industrial elite existed but argues that it was consistently outnumbered and outdone by a London-based commercial elite. His claims are provocative and deserve consideration. Rubinstein's claim about the location of wealth comes from his investigation of probate records. These indicate the value of personal property, excluding real property (buildings and land), left by individuals at death. It does seem as if large fortunes were more frequently made in commerce than in industry and, within industry, more frequently from alcohol or tobacco than from textiles or metal. However, such records do not unequivocally make Rubinstein's case. Uncertainties abound about how the probate rules for valuing assets were actually applied. Mills and factories, being real property, were clearly excluded; machinery may also have been, for the same reason. What the valuation conventions were for stock-in-trade (goods for sale) is also uncertain. It is possible that their probate values were much lower than their actual market values; cash or near-cash, such as bank balances or stocks, were, on the other hand, invariably considered at full face value. A further complication is that probate valuations probably took no notice of a business's goodwill (favor with the public) which, since it represents expectations about future profit-making, would today very often be a large fraction of market value. 
Whether factors like these introduced systematic biases into the probate valuations of individuals with different types of businesses would be worth investigating. The orthodox view that the wealthiest individuals were the most powerful is also questioned by Rubinstein's study. The problem for this orthodox view is that Rubinstein finds many millionaires who are totally unknown to nineteenth-century historians; the reason for their obscurity could be that they were not powerful. Indeed, Rubinstein dismisses any notion that great wealth had anything to do with entry into the governing elite, as represented by bishops, higher civil servants, and chairmen of manufacturing companies. The only requirements were university attendance and a father with a middle-class income. Rubinstein, in another study, has begun to buttress his findings about the location of wealth by analyzing income tax returns, which reveal a geographical distribution of middle-class incomes similar to that of wealthy incomes revealed by probate records. But until further confirmatory investigation is done, his claims can only be considered partially convincing. | 199306_3-RC_4_23 | [
"affected by the valuation conventions for such goods",
"less accurate than the valuations for such goods provided by income tax returns",
"less, on average, if such goods were tobacco-related than if they were alcohol-related",
"greater, on average, than the total probate valuations of those individuals who ... | 0 | The author suggests that the total probate valuations of the personal property of individuals holding goods for sale in nineteenth-century Britain may have been |
A conventional view of nineteenth-century Britain holds that iron manufacturers and textile manufacturers from the north of England became the wealthiest and most powerful people in society after about 1832. According to Marxist historians, these industrialists were the target of the working class in its struggle for power. A new study by Rubinstein, however, suggests that the real wealth lay with the bankers and merchants of London. Rubinstein does not deny that a northern industrial elite existed but argues that it was consistently outnumbered and outdone by a London-based commercial elite. His claims are provocative and deserve consideration. Rubinstein's claim about the location of wealth comes from his investigation of probate records. These indicate the value of personal property, excluding real property (buildings and land), left by individuals at death. It does seem as if large fortunes were more frequently made in commerce than in industry and, within industry, more frequently from alcohol or tobacco than from textiles or metal. However, such records do not unequivocally make Rubinstein's case. Uncertainties abound about how the probate rules for valuing assets were actually applied. Mills and factories, being real property, were clearly excluded; machinery may also have been, for the same reason. What the valuation conventions were for stock-in-trade (goods for sale) is also uncertain. It is possible that their probate values were much lower than their actual market values; cash or near-cash, such as bank balances or stocks, were, on the other hand, invariably considered at full face value. A further complication is that probate valuations probably took no notice of a business's goodwill (favor with the public) which, since it represents expectations about future profit-making, would today very often be a large fraction of market value. 
Whether factors like these introduced systematic biases into the probate valuations of individuals with different types of businesses would be worth investigating. The orthodox view that the wealthiest individuals were the most powerful is also questioned by Rubinstein's study. The problem for this orthodox view is that Rubinstein finds many millionaires who are totally unknown to nineteenth-century historians; the reason for their obscurity could be that they were not powerful. Indeed, Rubinstein dismisses any notion that great wealth had anything to do with entry into the governing elite, as represented by bishops, higher civil servants, and chairmen of manufacturing companies. The only requirements were university attendance and a father with a middle-class income. Rubinstein, in another study, has begun to buttress his findings about the location of wealth by analyzing income tax returns, which reveal a geographical distribution of middle-class incomes similar to that of wealthy incomes revealed by probate records. But until further confirmatory investigation is done, his claims can only be considered partially convincing. | 199306_3-RC_4_24 | [
"The distribution of great wealth between commerce and industry was not equal.",
"Large incomes were typically made in alcohol and tobacco rather than in textiles and metal.",
"A London-based commercial elite can be identified.",
"An official governing elite can be identified.",
"There was a necessary relat... | 4 | According to the passage, Rubinstein has provided evidence that challenges which one of the following claims about nineteenth-century Britain? |
A conventional view of nineteenth-century Britain holds that iron manufacturers and textile manufacturers from the north of England became the wealthiest and most powerful people in society after about 1832. According to Marxist historians, these industrialists were the target of the working class in its struggle for power. A new study by Rubinstein, however, suggests that the real wealth lay with the bankers and merchants of London. Rubinstein does not deny that a northern industrial elite existed but argues that it was consistently outnumbered and outdone by a London-based commercial elite. His claims are provocative and deserve consideration. Rubinstein's claim about the location of wealth comes from his investigation of probate records. These indicate the value of personal property, excluding real property (buildings and land), left by individuals at death. It does seem as if large fortunes were more frequently made in commerce than in industry and, within industry, more frequently from alcohol or tobacco than from textiles or metal. However, such records do not unequivocally make Rubinstein's case. Uncertainties abound about how the probate rules for valuing assets were actually applied. Mills and factories, being real property, were clearly excluded; machinery may also have been, for the same reason. What the valuation conventions were for stock-in-trade (goods for sale) is also uncertain. It is possible that their probate values were much lower than their actual market values; cash or near-cash, such as bank balances or stocks, were, on the other hand, invariably considered at full face value. A further complication is that probate valuations probably took no notice of a business's goodwill (favor with the public) which, since it represents expectations about future profit-making, would today very often be a large fraction of market value. 
Whether factors like these introduced systematic biases into the probate valuations of individuals with different types of businesses would be worth investigating. The orthodox view that the wealthiest individuals were the most powerful is also questioned by Rubinstein's study. The problem for this orthodox view is that Rubinstein finds many millionaires who are totally unknown to nineteenth-century historians; the reason for their obscurity could be that they were not powerful. Indeed, Rubinstein dismisses any notion that great wealth had anything to do with entry into the governing elite, as represented by bishops, higher civil servants, and chairmen of manufacturing companies. The only requirements were university attendance and a father with a middle-class income. Rubinstein, in another study, has begun to buttress his findings about the location of wealth by analyzing income tax returns, which reveal a geographical distribution of middle-class incomes similar to that of wealthy incomes revealed by probate records. But until further confirmatory investigation is done, his claims can only be considered partially convincing. | 199306_3-RC_4_25 | [
"give an example of a business asset about which little was known in the nineteenth century",
"suggest that the probate valuations of certain businesses may have been significant underestimations of their true market value",
"make the point that this exclusion probably had an equal impact on the probate valuati... | 1 | The author mentions that goodwill was probably excluded from the probate valuation of a business in nineteenth-century Britain most likely in order to |
A conventional view of nineteenth-century Britain holds that iron manufacturers and textile manufacturers from the north of England became the wealthiest and most powerful people in society after about 1832. According to Marxist historians, these industrialists were the target of the working class in its struggle for power. A new study by Rubinstein, however, suggests that the real wealth lay with the bankers and merchants of London. Rubinstein does not deny that a northern industrial elite existed but argues that it was consistently outnumbered and outdone by a London-based commercial elite. His claims are provocative and deserve consideration. Rubinstein's claim about the location of wealth comes from his investigation of probate records. These indicate the value of personal property, excluding real property (buildings and land), left by individuals at death. It does seem as if large fortunes were more frequently made in commerce than in industry and, within industry, more frequently from alcohol or tobacco than from textiles or metal. However, such records do not unequivocally make Rubinstein's case. Uncertainties abound about how the probate rules for valuing assets were actually applied. Mills and factories, being real property, were clearly excluded; machinery may also have been, for the same reason. What the valuation conventions were for stock-in-trade (goods for sale) is also uncertain. It is possible that their probate values were much lower than their actual market values; cash or near-cash, such as bank balances or stocks, were, on the other hand, invariably considered at full face value. A further complication is that probate valuations probably took no notice of a business's goodwill (favor with the public) which, since it represents expectations about future profit-making, would today very often be a large fraction of market value. 
Whether factors like these introduced systematic biases into the probate valuations of individuals with different types of businesses would be worth investigating. The orthodox view that the wealthiest individuals were the most powerful is also questioned by Rubinstein's study. The problem for this orthodox view is that Rubinstein finds many millionaires who are totally unknown to nineteenth-century historians; the reason for their obscurity could be that they were not powerful. Indeed, Rubinstein dismisses any notion that great wealth had anything to do with entry into the governing elite, as represented by bishops, higher civil servants, and chairmen of manufacturing companies. The only requirements were university attendance and a father with a middle-class income. Rubinstein, in another study, has begun to buttress his findings about the location of wealth by analyzing income tax returns, which reveal a geographical distribution of middle-class incomes similar to that of wealthy incomes revealed by probate records. But until further confirmatory investigation is done, his claims can only be considered partially convincing. | 199306_3-RC_4_26 | [
"a study that indicated that many members of the commercial elite in nineteenth-century London had insignificant holdings of real property",
"a study that indicated that, in the nineteenth century, industrialists from the north of England were in fact a target for working-class people",
"a study that indicated ... | 3 | Which one of the following studies would provide support for Rubinstein's claims? |
A conventional view of nineteenth-century Britain holds that iron manufacturers and textile manufacturers from the north of England became the wealthiest and most powerful people in society after about 1832. According to Marxist historians, these industrialists were the target of the working class in its struggle for power. A new study by Rubinstein, however, suggests that the real wealth lay with the bankers and merchants of London. Rubinstein does not deny that a northern industrial elite existed but argues that it was consistently outnumbered and outdone by a London-based commercial elite. His claims are provocative and deserve consideration. Rubinstein's claim about the location of wealth comes from his investigation of probate records. These indicate the value of personal property, excluding real property (buildings and land), left by individuals at death. It does seem as if large fortunes were more frequently made in commerce than in industry and, within industry, more frequently from alcohol or tobacco than from textiles or metal. However, such records do not unequivocally make Rubinstein's case. Uncertainties abound about how the probate rules for valuing assets were actually applied. Mills and factories, being real property, were clearly excluded; machinery may also have been, for the same reason. What the valuation conventions were for stock-in-trade (goods for sale) is also uncertain. It is possible that their probate values were much lower than their actual market values; cash or near-cash, such as bank balances or stocks, were, on the other hand, invariably considered at full face value. A further complication is that probate valuations probably took no notice of a business's goodwill (favor with the public) which, since it represents expectations about future profit-making, would today very often be a large fraction of market value. 
Whether factors like these introduced systematic biases into the probate valuations of individuals with different types of businesses would be worth investigating. The orthodox view that the wealthiest individuals were the most powerful is also questioned by Rubinstein's study. The problem for this orthodox view is that Rubinstein finds many millionaires who are totally unknown to nineteenth-century historians; the reason for their obscurity could be that they were not powerful. Indeed, Rubinstein dismisses any notion that great wealth had anything to do with entry into the governing elite, as represented by bishops, higher civil servants, and chairmen of manufacturing companies. The only requirements were university attendance and a father with a middle-class income. Rubinstein, in another study, has begun to buttress his findings about the location of wealth by analyzing income tax returns, which reveal a geographical distribution of middle-class incomes similar to that of wealthy incomes revealed by probate records. But until further confirmatory investigation is done, his claims can only be considered partially convincing. | 199306_3-RC_4_27 | [
"Entry into this elite was more dependent on university attendance than on religious background.",
"Attendance at a prestigious university was probably more crucial than a certain minimum family income in gaining entry into this elite.",
"Bishops as a group were somewhat wealthier, at the point of entry into th... | 4 | Which one of the following, if true, would cast the most doubt on Rubinstein's argument concerning wealth and the official governing elite in nineteenth-century Britain? |
Many argue that recent developments in electronic technology such as computers and videotape have enabled artists to vary their forms of expression. For example, video art can now achieve images whose effect is produced by "digitalization": breaking up the picture using computerized information processing. Such new technologies create new ways of seeing and hearing by adding different dimensions to older forms, rather than replacing those forms. Consider Locale, a film about a modern dance company. The camera operator wore a Steadicam, an uncomplicated device that allows a camera to be mounted on a person so that the camera remains steady no matter how the operator moves. The Steadicam captures the dance in ways impossible with traditional mounts. Such new equipment also allows for the preservation of previously unrecordable aspects of performances, thus enriching archives. By contrast, others claim that technology subverts the artistic enterprise: that artistic efforts achieved with machines preempt human creativity, rather than being inspired by it. The originality of musical performance, for example, might suffer, as musicians would be deprived of the opportunity to spontaneously change pieces of music before live audiences. Some even worry that technology will eliminate live performance altogether; performances will be recorded for home viewing, abolishing the relationship between performer and audience. But these negative views assume both that technology poses an unprecedented challenge to the arts and that we are not committed enough to the artistic enterprise to preserve the live performance, assumptions that seem unnecessarily cynical. In fact, technology has traditionally assisted our capacity for creative expression and can refine our notions of any given art form. For example, the portable camera and the snapshot were developed at the same time as the rise of Impressionist painting in the nineteenth century. 
These photographic technologies encouraged a new appreciation for the chance view and unpredictable angle, thus preparing an audience for a new style of painting. In addition, Impressionist artists like Degas studied the elements of light and movement captured by instantaneous photography and used their new understanding of the way our perceptions distort reality to try to more accurately capture reality in their work. Since photos can capture the "moments" of a movement, such as a hand partially raised in a gesture of greeting, Impressionist artists were inspired to paint such moments in order to more effectively convey the quality of spontaneous human action. Photography freed artists from the preconception that a subject should be painted in a static, artificial entirety, and inspired them to capture the random and fragmentary qualities of our world. Finally, since photography preempted painting as the means of obtaining portraits, painters had more freedom to vary their subject matter, thus giving rise to the abstract creations characteristic of modern art. | 199310_1-RC_1_1 | [
"The progress of art relies primarily on technology.",
"Technological innovation can be beneficial to art.",
"There are risks associated with using technology to create art.",
"Technology will transform the way the public responds to art.",
"The relationship between art and technology has a lengthy history.... | 1 | Which one of the following statements best expresses the main idea of the passage? |
Many argue that recent developments in electronic technology such as computers and videotape have enabled artists to vary their forms of expression. For example, video art can now achieve images whose effect is produced by "digitalization": breaking up the picture using computerized information processing. Such new technologies create new ways of seeing and hearing by adding different dimensions to older forms, rather than replacing those forms. Consider Locale, a film about a modern dance company. The camera operator wore a Steadicam, an uncomplicated device that allows a camera to be mounted on a person so that the camera remains steady no matter how the operator moves. The Steadicam captures the dance in ways impossible with traditional mounts. Such new equipment also allows for the preservation of previously unrecordable aspects of performances, thus enriching archives. By contrast, others claim that technology subverts the artistic enterprise: that artistic efforts achieved with machines preempt human creativity, rather than being inspired by it. The originality of musical performance, for example, might suffer, as musicians would be deprived of the opportunity to spontaneously change pieces of music before live audiences. Some even worry that technology will eliminate live performance altogether; performances will be recorded for home viewing, abolishing the relationship between performer and audience. But these negative views assume both that technology poses an unprecedented challenge to the arts and that we are not committed enough to the artistic enterprise to preserve the live performance, assumptions that seem unnecessarily cynical. In fact, technology has traditionally assisted our capacity for creative expression and can refine our notions of any given art form. For example, the portable camera and the snapshot were developed at the same time as the rise of Impressionist painting in the nineteenth century. 
These photographic technologies encouraged a new appreciation for the chance view and unpredictable angle, thus preparing an audience for a new style of painting. In addition, Impressionist artists like Degas studied the elements of light and movement captured by instantaneous photography and used their new understanding of the way our perceptions distort reality to try to more accurately capture reality in their work. Since photos can capture the "moments" of a movement, such as a hand partially raised in a gesture of greeting, Impressionist artists were inspired to paint such moments in order to more effectively convey the quality of spontaneous human action. Photography freed artists from the preconception that a subject should be painted in a static, artificial entirety, and inspired them to capture the random and fragmentary qualities of our world. Finally, since photography preempted painting as the means of obtaining portraits, painters had more freedom to vary their subject matter, thus giving rise to the abstract creations characteristic of modern art. | 199310_1-RC_1_2 | [
"The live performance is an important aspect of the artistic enterprise.",
"The public's commitment to the artistic enterprise is questionable.",
"Recent technological innovations present an entirely new sort of challenge to art.",
"Technological innovations of the past have been very useful to artists.",
"... | 0 | It can be inferred from the passage that the author shares which one of the following opinions with the opponents of the use of new technology in art? |
Many argue that recent developments in electronic technology such as computers and videotape have enabled artists to vary their forms of expression. For example, video art can now achieve images whose effect is produced by "digitalization": breaking up the picture using computerized information processing. Such new technologies create new ways of seeing and hearing by adding different dimensions to older forms, rather than replacing those forms. Consider Locale, a film about a modern dance company. The camera operator wore a Steadicam, an uncomplicated device that allows a camera to be mounted on a person so that the camera remains steady no matter how the operator moves. The Steadicam captures the dance in ways impossible with traditional mounts. Such new equipment also allows for the preservation of previously unrecordable aspects of performances, thus enriching archives. By contrast, others claim that technology subverts the artistic enterprise: that artistic efforts achieved with machines preempt human creativity, rather than being inspired by it. The originality of musical performance, for example, might suffer, as musicians would be deprived of the opportunity to spontaneously change pieces of music before live audiences. Some even worry that technology will eliminate live performance altogether; performances will be recorded for home viewing, abolishing the relationship between performer and audience. But these negative views assume both that technology poses an unprecedented challenge to the arts and that we are not committed enough to the artistic enterprise to preserve the live performance, assumptions that seem unnecessarily cynical. In fact, technology has traditionally assisted our capacity for creative expression and can refine our notions of any given art form. For example, the portable camera and the snapshot were developed at the same time as the rise of Impressionist painting in the nineteenth century. 
These photographic technologies encouraged a new appreciation for the chance view and unpredictable angle, thus preparing an audience for a new style of painting. In addition, Impressionist artists like Degas studied the elements of light and movement captured by instantaneous photography and used their new understanding of the way our perceptions distort reality to try to more accurately capture reality in their work. Since photos can capture the "moments" of a movement, such as a hand partially raised in a gesture of greeting, Impressionist artists were inspired to paint such moments in order to more effectively convey the quality of spontaneous human action. Photography freed artists from the preconception that a subject should be painted in a static, artificial entirety, and inspired them to capture the random and fragmentary qualities of our world. Finally, since photography preempted painting as the means of obtaining portraits, painters had more freedom to vary their subject matter, thus giving rise to the abstract creations characteristic of modern art. | 199310_1-RC_1_3 | [
"Surveys show that when recordings of performances are made available for home viewing, the public becomes far more knowledgeable about different performing artists.",
"Surveys show that some people feel comfortable responding spontaneously to artistic performances when they are viewing recordings of those perfor... | 3 | Which one of the following, if true, would most undermine the position held by opponents of the use of new technology in art concerning the effect of technology on live performance? |
Many argue that recent developments in electronic technology such as computers and videotape have enabled artists to vary their forms of expression. For example, video art can now achieve images whose effect is produced by "digitalization": breaking up the picture using computerized information processing. Such new technologies create new ways of seeing and hearing by adding different dimensions to older forms, rather than replacing those forms. Consider Locale, a film about a modern dance company. The camera operator wore a Steadicam, an uncomplicated device that allows a camera to be mounted on a person so that the camera remains steady no matter how the operator moves. The Steadicam captures the dance in ways impossible with traditional mounts. Such new equipment also allows for the preservation of previously unrecordable aspects of performances, thus enriching archives. By contrast, others claim that technology subverts the artistic enterprise: that artistic efforts achieved with machines preempt human creativity, rather than being inspired by it. The originality of musical performance, for example, might suffer, as musicians would be deprived of the opportunity to spontaneously change pieces of music before live audiences. Some even worry that technology will eliminate live performance altogether; performances will be recorded for home viewing, abolishing the relationship between performer and audience. But these negative views assume both that technology poses an unprecedented challenge to the arts and that we are not committed enough to the artistic enterprise to preserve the live performance, assumptions that seem unnecessarily cynical. In fact, technology has traditionally assisted our capacity for creative expression and can refine our notions of any given art form. For example, the portable camera and the snapshot were developed at the same time as the rise of Impressionist painting in the nineteenth century. 
These photographic technologies encouraged a new appreciation for the chance view and unpredictable angle, thus preparing an audience for a new style of painting. In addition, Impressionist artists like Degas studied the elements of light and movement captured by instantaneous photography and used their new understanding of the way our perceptions distort reality to try to more accurately capture reality in their work. Since photos can capture the "moments" of a movement, such as a hand partially raised in a gesture of greeting, Impressionist artists were inspired to paint such moments in order to more effectively convey the quality of spontaneous human action. Photography freed artists from the preconception that a subject should be painted in a static, artificial entirety, and inspired them to capture the random and fragmentary qualities of our world. Finally, since photography preempted painting as the means of obtaining portraits, painters had more freedom to vary their subject matter, thus giving rise to the abstract creations characteristic of modern art. | 199310_1-RC_1_4 | [
"the filming of performances should not be limited by inadequate equipment",
"new technologies do not need to be very complex in order to benefit art",
"the interaction of a traditional art form with a new technology will change attitudes toward technology in general",
"the replacement of a traditional techno... | 4 | The author uses the example of the Steadicam primarily in order to suggest that |
Many argue that recent developments in electronic technology such as computers and videotape have enabled artists to vary their forms of expression. For example, video art can now achieve images whose effect is produced by "digitalization": breaking up the picture using computerized information processing. Such new technologies create new ways of seeing and hearing by adding different dimensions to older forms, rather than replacing those forms. Consider Locale, a film about a modern dance company. The camera operator wore a Steadicam, an uncomplicated device that allows a camera to be mounted on a person so that the camera remains steady no matter how the operator moves. The Steadicam captures the dance in ways impossible with traditional mounts. Such new equipment also allows for the preservation of previously unrecordable aspects of performances, thus enriching archives. By contrast, others claim that technology subverts the artistic enterprise: that artistic efforts achieved with machines preempt human creativity, rather than being inspired by it. The originality of musical performance, for example, might suffer, as musicians would be deprived of the opportunity to spontaneously change pieces of music before live audiences. Some even worry that technology will eliminate live performance altogether; performances will be recorded for home viewing, abolishing the relationship between performer and audience. But these negative views assume both that technology poses an unprecedented challenge to the arts and that we are not committed enough to the artistic enterprise to preserve the live performance, assumptions that seem unnecessarily cynical. In fact, technology has traditionally assisted our capacity for creative expression and can refine our notions of any given art form. For example, the portable camera and the snapshot were developed at the same time as the rise of Impressionist painting in the nineteenth century. 
These photographic technologies encouraged a new appreciation for the chance view and unpredictable angle, thus preparing an audience for a new style of painting. In addition, Impressionist artists like Degas studied the elements of light and movement captured by instantaneous photography and used their new understanding of the way our perceptions distort reality to try to more accurately capture reality in their work. Since photos can capture the "moments" of a movement, such as a hand partially raised in a gesture of greeting, Impressionist artists were inspired to paint such moments in order to more effectively convey the quality of spontaneous human action. Photography freed artists from the preconception that a subject should be painted in a static, artificial entirety, and inspired them to capture the random and fragmentary qualities of our world. Finally, since photography preempted painting as the means of obtaining portraits, painters had more freedom to vary their subject matter, thus giving rise to the abstract creations characteristic of modern art. | 199310_1-RC_1_5 | [
"Most people who reject the use of electronic technology in art forget that machines require a person to operate them.",
"Electronic technology allows for the expansion of archives because longer performances can be recorded.",
"Electronic technology assists artists in finding new ways to present their material... | 2 | According to the passage, proponents of the use of new electronic technology in the arts claim that which one of the following is true? |
Many argue that recent developments in electronic technology such as computers and videotape have enabled artists to vary their forms of expression. For example, video art can now achieve images whose effect is produced by "digitalization": breaking up the picture using computerized information processing. Such new technologies create new ways of seeing and hearing by adding different dimensions to older forms, rather than replacing those forms. Consider Locale, a film about a modern dance company. The camera operator wore a Steadicam, an uncomplicated device that allows a camera to be mounted on a person so that the camera remains steady no matter how the operator moves. The Steadicam captures the dance in ways impossible with traditional mounts. Such new equipment also allows for the preservation of previously unrecordable aspects of performances, thus enriching archives. By contrast, others claim that technology subverts the artistic enterprise: that artistic efforts achieved with machines preempt human creativity, rather than being inspired by it. The originality of musical performance, for example, might suffer, as musicians would be deprived of the opportunity to spontaneously change pieces of music before live audiences. Some even worry that technology will eliminate live performance altogether; performances will be recorded for home viewing, abolishing the relationship between performer and audience. But these negative views assume both that technology poses an unprecedented challenge to the arts and that we are not committed enough to the artistic enterprise to preserve the live performance, assumptions that seem unnecessarily cynical. In fact, technology has traditionally assisted our capacity for creative expression and can refine our notions of any given art form. For example, the portable camera and the snapshot were developed at the same time as the rise of Impressionist painting in the nineteenth century. 
These photographic technologies encouraged a new appreciation for the chance view and unpredictable angle, thus preparing an audience for a new style of painting. In addition, Impressionist artists like Degas studied the elements of light and movement captured by instantaneous photography and used their new understanding of the way our perceptions distort reality to try to more accurately capture reality in their work. Since photos can capture the "moments" of a movement, such as a hand partially raised in a gesture of greeting, Impressionist artists were inspired to paint such moments in order to more effectively convey the quality of spontaneous human action. Photography freed artists from the preconception that a subject should be painted in a static, artificial entirety, and inspired them to capture the random and fragmentary qualities of our world. Finally, since photography preempted painting as the means of obtaining portraits, painters had more freedom to vary their subject matter, thus giving rise to the abstract creations characteristic of modern art. | 199310_1-RC_1_6 | [
"The artistic experiments of the nineteenth century led painters to use a variety of methods in creating portraits, which they then applied to other subject matter.",
"The nineteenth-century knowledge of light and movement provided by photography inspired the abstract works characteristic of modern art.",
"Once... | 4 | It can be inferred from the passage that the author would agree with which one of the following statements regarding changes in painting since the nineteenth century? |
During the 1940s and 1950s the United States government developed a new policy toward Native Americans, often known as "readjustment." Because the increased awareness of civil rights in these decades helped reinforce the belief that life on reservations prevented Native Americans from exercising the rights guaranteed to citizens under the United States Constitution, the readjustment movement advocated the end of the federal government's involvement in Native American affairs and encouraged the assimilation of Native Americans as individuals into mainstream society. However, the same years also saw the emergence of a Native American leadership and efforts to develop tribal institutions and reaffirm tribal identity. The clash of these two trends may be traced in the attempts on the part of the Bureau of Indian Affairs (BIA) to convince the Oneida tribe of Wisconsin to accept readjustment. The culmination of BIA efforts to sway the Oneida occurred at a meeting that took place in the fall of 1956. The BIA suggested that it would be to the Oneida's benefit to own their own property and, like other homeowners, pay real estate taxes on it. The BIA also emphasized that, after readjustment, the government would not attempt to restrict Native Americans' ability to sell their individually owned lands. The Oneida were then offered a one-time lump-sum payment of $60,000 in lieu of the $0.52 annuity guaranteed in perpetuity to each member of the tribe under the Canandaigua Treaty. The efforts of the BIA to "sell" readjustment to the tribe failed because the Oneida realized that they had heard similar offers before. 
The Oneida delegates reacted negatively to the BIA's first suggestion because taxation of Native American lands had been one past vehicle for dispossessing the Oneida: after the distribution of some tribal lands to individual Native Americans in the late nineteenth century, Native American lands became subject to taxation, resulting in new and impossible financial burdens, foreclosures, and subsequent tax sales of property. The Oneida delegates were equally suspicious of the BIA's emphasis on the rights of individual landowners, since in the late nineteenth century many individual Native Americans had been convinced by unscrupulous speculators to sell their lands. Finally, the offer of a lump-sum payment was unanimously opposed by the Oneida delegates, who saw that changing the terms of a treaty might jeopardize the many pending land claims based upon the treaty. As a result of the 1956 meeting, the Oneida rejected readjustment. Instead, they determined to improve tribal life by lobbying for federal monies for postsecondary education, for the improvement of drainage on tribal lands, and for the building of a convalescent home for tribal members. Thus, by learning the lessons of history, the Oneida were able to survive as a tribe in their homeland. | 199310_1-RC_2_7 | [
"the establishment among Native Americans of a tribal system of elected government",
"the creation of a national project to preserve Native American language and oral history",
"the establishment of programs to encourage Native Americans to move from reservations to urban areas",
"the development of a large-s... | 2 | Which one of the following would be most consistent with the policy of readjustment described in the passage? |
During the 1940s and 1950s the United States government developed a new policy toward Native Americans, often known as "readjustment." Because the increased awareness of civil rights in these decades helped reinforce the belief that life on reservations prevented Native Americans from exercising the rights guaranteed to citizens under the United States Constitution, the readjustment movement advocated the end of the federal government's involvement in Native American affairs and encouraged the assimilation of Native Americans as individuals into mainstream society. However, the same years also saw the emergence of a Native American leadership and efforts to develop tribal institutions and reaffirm tribal identity. The clash of these two trends may be traced in the attempts on the part of the Bureau of Indian Affairs (BIA) to convince the Oneida tribe of Wisconsin to accept readjustment. The culmination of BIA efforts to sway the Oneida occurred at a meeting that took place in the fall of 1956. The BIA suggested that it would be to the Oneida's benefit to own their own property and, like other homeowners, pay real estate taxes on it. The BIA also emphasized that, after readjustment, the government would not attempt to restrict Native Americans' ability to sell their individually owned lands. The Oneida were then offered a one-time lump-sum payment of $60,000 in lieu of the $0.52 annuity guaranteed in perpetuity to each member of the tribe under the Canandaigua Treaty. The efforts of the BIA to "sell" readjustment to the tribe failed because the Oneida realized that they had heard similar offers before. 
The Oneida delegates reacted negatively to the BIA's first suggestion because taxation of Native American lands had been one past vehicle for dispossessing the Oneida: after the distribution of some tribal lands to individual Native Americans in the late nineteenth century, Native American lands became subject to taxation, resulting in new and impossible financial burdens, foreclosures, and subsequent tax sales of property. The Oneida delegates were equally suspicious of the BIA's emphasis on the rights of individual landowners, since in the late nineteenth century many individual Native Americans had been convinced by unscrupulous speculators to sell their lands. Finally, the offer of a lump-sum payment was unanimously opposed by the Oneida delegates, who saw that changing the terms of a treaty might jeopardize the many pending land claims based upon the treaty. As a result of the 1956 meeting, the Oneida rejected readjustment. Instead, they determined to improve tribal life by lobbying for federal monies for postsecondary education, for the improvement of drainage on tribal lands, and for the building of a convalescent home for tribal members. Thus, by learning the lessons of history, the Oneida were able to survive as a tribe in their homeland. | 199310_1-RC_2_8 | [
"obtain improved social services and living conditions for members of the tribe",
"pursue litigation designed to reclaim tribal lands",
"secure recognition of their unique status as a self-governing Native American nation within the United States",
"establish new kinds of tribal institutions",
"cultivate a ... | 0 | According to the passage, after the 1956 meeting the Oneida resolved to |
During the 1940s and 1950s the United States government developed a new policy toward Native Americans, often known as "readjustment." Because the increased awareness of civil rights in these decades helped reinforce the belief that life on reservations prevented Native Americans from exercising the rights guaranteed to citizens under the United States Constitution, the readjustment movement advocated the end of the federal government's involvement in Native American affairs and encouraged the assimilation of Native Americans as individuals into mainstream society. However, the same years also saw the emergence of a Native American leadership and efforts to develop tribal institutions and reaffirm tribal identity. The clash of these two trends may be traced in the attempts on the part of the Bureau of Indian Affairs (BIA) to convince the Oneida tribe of Wisconsin to accept readjustment. The culmination of BIA efforts to sway the Oneida occurred at a meeting that took place in the fall of 1956. The BIA suggested that it would be to the Oneida's benefit to own their own property and, like other homeowners, pay real estate taxes on it. The BIA also emphasized that, after readjustment, the government would not attempt to restrict Native Americans' ability to sell their individually owned lands. The Oneida were then offered a one-time lump-sum payment of $60,000 in lieu of the $0.52 annuity guaranteed in perpetuity to each member of the tribe under the Canandaigua Treaty. The efforts of the BIA to "sell" readjustment to the tribe failed because the Oneida realized that they had heard similar offers before. 
The Oneida delegates reacted negatively to the BIA's first suggestion because taxation of Native American lands had been one past vehicle for dispossessing the Oneida: after the distribution of some tribal lands to individual Native Americans in the late nineteenth century, Native American lands became subject to taxation, resulting in new and impossible financial burdens, foreclosures, and subsequent tax sales of property. The Oneida delegates were equally suspicious of the BIA's emphasis on the rights of individual landowners, since in the late nineteenth century many individual Native Americans had been convinced by unscrupulous speculators to sell their lands. Finally, the offer of a lump-sum payment was unanimously opposed by the Oneida delegates, who saw that changing the terms of a treaty might jeopardize the many pending land claims based upon the treaty. As a result of the 1956 meeting, the Oneida rejected readjustment. Instead, they determined to improve tribal life by lobbying for federal monies for postsecondary education, for the improvement of drainage on tribal lands, and for the building of a convalescent home for tribal members. Thus, by learning the lessons of history, the Oneida were able to survive as a tribe in their homeland. | 199310_1-RC_2_9 | [
"It summarizes the basis of a conflict underlying negotiations described elsewhere in the passage.",
"It presents two positions, one of which is defended by evidence provided in succeeding paragraphs.",
"It compares competing interpretations of a historical conflict.",
"It analyzes the causes of a specific hi... | 0 | Which one of the following best describes the function of the first paragraph in the context of the passage as a whole? |
During the 1940s and 1950s the United States government developed a new policy toward Native Americans, often known as "readjustment." Because the increased awareness of civil rights in these decades helped reinforce the belief that life on reservations prevented Native Americans from exercising the rights guaranteed to citizens under the United States Constitution, the readjustment movement advocated the end of the federal government's involvement in Native American affairs and encouraged the assimilation of Native Americans as individuals into mainstream society. However, the same years also saw the emergence of a Native American leadership and efforts to develop tribal institutions and reaffirm tribal identity. The clash of these two trends may be traced in the attempts on the part of the Bureau of Indian Affairs (BIA) to convince the Oneida tribe of Wisconsin to accept readjustment. The culmination of BIA efforts to sway the Oneida occurred at a meeting that took place in the fall of 1956. The BIA suggested that it would be to the Oneida's benefit to own their own property and, like other homeowners, pay real estate taxes on it. The BIA also emphasized that, after readjustment, the government would not attempt to restrict Native Americans' ability to sell their individually owned lands. The Oneida were then offered a one-time lump-sum payment of $60,000 in lieu of the $0.52 annuity guaranteed in perpetuity to each member of the tribe under the Canandaigua Treaty. The efforts of the BIA to "sell" readjustment to the tribe failed because the Oneida realized that they had heard similar offers before. 
The Oneida delegates reacted negatively to the BIA's first suggestion because taxation of Native American lands had been one past vehicle for dispossessing the Oneida: after the distribution of some tribal lands to individual Native Americans in the late nineteenth century, Native American lands became subject to taxation, resulting in new and impossible financial burdens, foreclosures, and subsequent tax sales of property. The Oneida delegates were equally suspicious of the BIA's emphasis on the rights of individual landowners, since in the late nineteenth century many individual Native Americans had been convinced by unscrupulous speculators to sell their lands. Finally, the offer of a lump-sum payment was unanimously opposed by the Oneida delegates, who saw that changing the terms of a treaty might jeopardize the many pending land claims based upon the treaty. As a result of the 1956 meeting, the Oneida rejected readjustment. Instead, they determined to improve tribal life by lobbying for federal monies for postsecondary education, for the improvement of drainage on tribal lands, and for the building of a convalescent home for tribal members. Thus, by learning the lessons of history, the Oneida were able to survive as a tribe in their homeland. | 199310_1-RC_2_10 | [
"contrast the readjustment movement with other social phenomena",
"account for the stance of the Native American leadership",
"help explain the impetus for the readjustment movement",
"explain the motives of BIA bureaucrats",
"foster support for the policy of readjustment"
] | 2 | The author refers to the increased awareness of civil rights during the 1940s and 1950s most probably in order to |
During the 1940s and 1950s the United States government developed a new policy toward Native Americans, often known as "readjustment." Because the increased awareness of civil rights in these decades helped reinforce the belief that life on reservations prevented Native Americans from exercising the rights guaranteed to citizens under the United States Constitution, the readjustment movement advocated the end of the federal government's involvement in Native American affairs and encouraged the assimilation of Native Americans as individuals into mainstream society. However, the same years also saw the emergence of a Native American leadership and efforts to develop tribal institutions and reaffirm tribal identity. The clash of these two trends may be traced in the attempts on the part of the Bureau of Indian Affairs (BIA) to convince the Oneida tribe of Wisconsin to accept readjustment. The culmination of BIA efforts to sway the Oneida occurred at a meeting that took place in the fall of 1956. The BIA suggested that it would be to the Oneida's benefit to own their own property and, like other homeowners, pay real estate taxes on it. The BIA also emphasized that, after readjustment, the government would not attempt to restrict Native Americans' ability to sell their individually owned lands. The Oneida were then offered a one-time lump-sum payment of $60,000 in lieu of the $0.52 annuity guaranteed in perpetuity to each member of the tribe under the Canandaigua Treaty. The efforts of the BIA to "sell" readjustment to the tribe failed because the Oneida realized that they had heard similar offers before. 
The Oneida delegates reacted negatively to the BIA's first suggestion because taxation of Native American lands had been one past vehicle for dispossessing the Oneida: after the distribution of some tribal lands to individual Native Americans in the late nineteenth century, Native American lands became subject to taxation, resulting in new and impossible financial burdens, foreclosures, and subsequent tax sales of property. The Oneida delegates were equally suspicious of the BIA's emphasis on the rights of individual landowners, since in the late nineteenth century many individual Native Americans had been convinced by unscrupulous speculators to sell their lands. Finally, the offer of a lump-sum payment was unanimously opposed by the Oneida delegates, who saw that changing the terms of a treaty might jeopardize the many pending land claims based upon the treaty. As a result of the 1956 meeting, the Oneida rejected readjustment. Instead, they determined to improve tribal life by lobbying for federal monies for postsecondary education, for the improvement of drainage on tribal lands, and for the building of a convalescent home for tribal members. Thus, by learning the lessons of history, the Oneida were able to survive as a tribe in their homeland. | 199310_1-RC_2_11 | [
"The federal government should work with individual Native Americans to improve life on reservations.",
"The federal government should be no more involved in the affairs of Native Americans than in the affairs of other citizens.",
"The federal government should assume more responsibility for providing social se... | 1 | The passage suggests that advocates of readjustment would most likely agree with which one of the following statements regarding the relationship between the federal government and Native Americans? |
During the 1940s and 1950s the United States government developed a new policy toward Native Americans, often known as "readjustment." Because the increased awareness of civil rights in these decades helped reinforce the belief that life on reservations prevented Native Americans from exercising the rights guaranteed to citizens under the United States Constitution, the readjustment movement advocated the end of the federal government's involvement in Native American affairs and encouraged the assimilation of Native Americans as individuals into mainstream society. However, the same years also saw the emergence of a Native American leadership and efforts to develop tribal institutions and reaffirm tribal identity. The clash of these two trends may be traced in the attempts on the part of the Bureau of Indian Affairs (BIA) to convince the Oneida tribe of Wisconsin to accept readjustment. The culmination of BIA efforts to sway the Oneida occurred at a meeting that took place in the fall of 1956. The BIA suggested that it would be to the Oneida's benefit to own their own property and, like other homeowners, pay real estate taxes on it. The BIA also emphasized that, after readjustment, the government would not attempt to restrict Native Americans' ability to sell their individually owned lands. The Oneida were then offered a one-time lump-sum payment of $60,000 in lieu of the $0.52 annuity guaranteed in perpetuity to each member of the tribe under the Canandaigua Treaty. The efforts of the BIA to "sell" readjustment to the tribe failed because the Oneida realized that they had heard similar offers before. 
The Oneida delegates reacted negatively to the BIA's first suggestion because taxation of Native American lands had been one past vehicle for dispossessing the Oneida: after the distribution of some tribal lands to individual Native Americans in the late nineteenth century, Native American lands became subject to taxation, resulting in new and impossible financial burdens, foreclosures, and subsequent tax sales of property. The Oneida delegates were equally suspicious of the BIA's emphasis on the rights of individual landowners, since in the late nineteenth century many individual Native Americans had been convinced by unscrupulous speculators to sell their lands. Finally, the offer of a lump-sum payment was unanimously opposed by the Oneida delegates, who saw that changing the terms of a treaty might jeopardize the many pending land claims based upon the treaty. As a result of the 1956 meeting, the Oneida rejected readjustment. Instead, they determined to improve tribal life by lobbying for federal monies for postsecondary education, for the improvement of drainage on tribal lands, and for the building of a convalescent home for tribal members. Thus, by learning the lessons of history, the Oneida were able to survive as a tribe in their homeland. | 199310_1-RC_2_12 | [
"a valuable safeguard of certain Oneida rights and privileges",
"the source of many past problems for the Oneida tribe",
"a model for the type of agreement they hoped to reach with the federal government",
"an important step toward recognition of their status as an independent Native American nation",
"an o... | 0 | The passage suggests that the Oneida delegates viewed the Canandaigua Treaty as |
During the 1940s and 1950s the United States government developed a new policy toward Native Americans, often known as "readjustment." Because the increased awareness of civil rights in these decades helped reinforce the belief that life on reservations prevented Native Americans from exercising the rights guaranteed to citizens under the United States Constitution, the readjustment movement advocated the end of the federal government's involvement in Native American affairs and encouraged the assimilation of Native Americans as individuals into mainstream society. However, the same years also saw the emergence of a Native American leadership and efforts to develop tribal institutions and reaffirm tribal identity. The clash of these two trends may be traced in the attempts on the part of the Bureau of Indian Affairs (BIA) to convince the Oneida tribe of Wisconsin to accept readjustment. The culmination of BIA efforts to sway the Oneida occurred at a meeting that took place in the fall of 1956. The BIA suggested that it would be to the Oneida's benefit to own their own property and, like other homeowners, pay real estate taxes on it. The BIA also emphasized that, after readjustment, the government would not attempt to restrict Native Americans' ability to sell their individually owned lands. The Oneida were then offered a one-time lump-sum payment of $60,000 in lieu of the $0.52 annuity guaranteed in perpetuity to each member of the tribe under the Canandaigua Treaty. The efforts of the BIA to "sell" readjustment to the tribe failed because the Oneida realized that they had heard similar offers before. 
The Oneida delegates reacted negatively to the BIA's first suggestion because taxation of Native American lands had been one past vehicle for dispossessing the Oneida: after the distribution of some tribal lands to individual Native Americans in the late nineteenth century, Native American lands became subject to taxation, resulting in new and impossible financial burdens, foreclosures, and subsequent tax sales of property. The Oneida delegates were equally suspicious of the BIA's emphasis on the rights of individual landowners, since in the late nineteenth century many individual Native Americans had been convinced by unscrupulous speculators to sell their lands. Finally, the offer of a lump-sum payment was unanimously opposed by the Oneida delegates, who saw that changing the terms of a treaty might jeopardize the many pending land claims based upon the treaty. As a result of the 1956 meeting, the Oneida rejected readjustment. Instead, they determined to improve tribal life by lobbying for federal monies for postsecondary education, for the improvement of drainage on tribal lands, and for the building of a convalescent home for tribal members. Thus, by learning the lessons of history, the Oneida were able to survive as a tribe in their homeland. | 199310_1-RC_2_13 | [
"A university offers a student a four-year scholarship with the stipulation that the student not accept any outside employment; the student refuses the offer and attends a different school because the amount of the scholarship would not have covered living expenses.",
"A company seeking to reduce its payroll obli... | 1 | Which one of the following situations most closely parallels that of the Oneida delegates in refusing to accept a lump-sum payment of $60,000? |
Direct observation of contemporary societies at the threshold of widespread literacy has not assisted our understanding of how such literacy altered ancient Greek society, in particular its political culture. The discovery of what Goody has called the "enabling effects" of literacy in contemporary societies tends to seduce the observer into confusing often rudimentary knowledge of how to read with popular access to important books and documents; this confusion is then projected onto ancient societies. "In ancient Greece," Goody writes, "alphabetic reading and writing was important for the development of political democracy." An examination of the ancient Greek city Athens exemplifies how this sort of confusion is detrimental to understanding ancient politics. In Athens, the early development of a written law code was retrospectively mythologized as the critical factor in breaking the power monopoly of the old aristocracy: hence the Greek tradition of the "law-giver," which has captured the imaginations of scholars like Goody. But the application and efficacy of all law codes depend on their interpretation by magistrates and courts, and unless the right of interpretation is "democratized," the mere existence of written laws changes little. In fact, never in antiquity did any but the elite consult documents and books. Even in Greek courts the juries heard only the relevant statutes read out during the proceedings, as they heard verbal testimony, and they then rendered their verdict on the spot, without the benefit of any discussion among themselves. True, in Athens the juries were representative of a broad spectrum of the population, and these juries, drawn from diverse social classes, both interpreted what they had heard and determined matters of fact. 
However, they were guided solely by the speeches prepared for the parties by professional pleaders and by the quotations of laws or decrees within the speeches, rather than by their own access to any kind of document or book. Granted, people today also rely heavily on a truly knowledgeable minority for information and its interpretation, often transmitted orally. Yet this is still fundamentally different from an ancient society in which there was no "popular literature," i.e., no newspapers, magazines, or other media that dealt with sociopolitical issues. An ancient law code would have been analogous to the Latin Bible, a venerated document but a closed book. The resistance of the medieval Church to vernacular translations of the Bible, in the West at least, is therefore a pointer to the realities of ancient literacy. When fundamental documents are accessible for study only to an elite, the rest of the society is subject to the elite's interpretation of the rules of behavior, including right political behavior. Athens, insofar as it functioned as a democracy, did so not because of widespread literacy, but because the elite had chosen to accept democratic institutions. | 199310_1-RC_3_14 | [
"Democratic political institutions grow organically from the traditions and conventions of a society.",
"Democratic political institutions are not necessarily the outcome of literacy in a society.",
"Religious authority, like political authority, can determine who in a given society will have access to importan... | 1 | Which one of the following statements best expresses the main idea of the passage? |
Direct observation of contemporary societies at the threshold of widespread literacy has not assisted our understanding of how such literacy altered ancient Greek society, in particular its political culture. The discovery of what Goody has called the "enabling effects" of literacy in contemporary societies tends to seduce the observer into confusing often rudimentary knowledge of how to read with popular access to important books and documents; this confusion is then projected onto ancient societies. "In ancient Greece," Goody writes, "alphabetic reading and writing was important for the development of political democracy." An examination of the ancient Greek city Athens exemplifies how this sort of confusion is detrimental to understanding ancient politics. In Athens, the early development of a written law code was retrospectively mythologized as the critical factor in breaking the power monopoly of the old aristocracy: hence the Greek tradition of the "law-giver," which has captured the imaginations of scholars like Goody. But the application and efficacy of all law codes depend on their interpretation by magistrates and courts, and unless the right of interpretation is "democratized," the mere existence of written laws changes little. In fact, never in antiquity did any but the elite consult documents and books. Even in Greek courts the juries heard only the relevant statutes read out during the proceedings, as they heard verbal testimony, and they then rendered their verdict on the spot, without the benefit of any discussion among themselves. True, in Athens the juries were representative of a broad spectrum of the population, and these juries, drawn from diverse social classes, both interpreted what they had heard and determined matters of fact. 
However, they were guided solely by the speeches prepared for the parties by professional pleaders and by the quotations of laws or decrees within the speeches, rather than by their own access to any kind of document or book. Granted, people today also rely heavily on a truly knowledgeable minority for information and its interpretation, often transmitted orally. Yet this is still fundamentally different from an ancient society in which there was no "popular literature," i.e., no newspapers, magazines, or other media that dealt with sociopolitical issues. An ancient law code would have been analogous to the Latin Bible, a venerated document but a closed book. The resistance of the medieval Church to vernacular translations of the Bible, in the West at least, is therefore a pointer to the realities of ancient literacy. When fundamental documents are accessible for study only to an elite, the rest of the society is subject to the elite's interpretation of the rules of behavior, including right political behavior. Athens, insofar as it functioned as a democracy, did so not because of widespread literacy, but because the elite had chosen to accept democratic institutions. | 199310_1-RC_3_15 | [
"They are more politically advanced than societies without rudimentary reading ability.",
"They are unlikely to exhibit the positive effects of literacy.",
"They are rapidly evolving toward widespread literacy.",
"Many of their people might not have access to important documents and books.",
"Most of their ... | 3 | It can be inferred from the passage that the author assumes which one of the following about societies in which the people possess a rudimentary reading ability? |
Direct observation of contemporary societies at the threshold of widespread literacy has not assisted our understanding of how such literacy altered ancient Greek society, in particular its political culture. The discovery of what Goody has called the "enabling effects" of literacy in contemporary societies tends to seduce the observer into confusing often rudimentary knowledge of how to read with popular access to important books and documents; this confusion is then projected onto ancient societies. "In ancient Greece," Goody writes, "alphabetic reading and writing was important for the development of political democracy." An examination of the ancient Greek city Athens exemplifies how this sort of confusion is detrimental to understanding ancient politics. In Athens, the early development of a written law code was retrospectively mythologized as the critical factor in breaking the power monopoly of the old aristocracy: hence the Greek tradition of the "law-giver," which has captured the imaginations of scholars like Goody. But the application and efficacy of all law codes depend on their interpretation by magistrates and courts, and unless the right of interpretation is "democratized," the mere existence of written laws changes little. In fact, never in antiquity did any but the elite consult documents and books. Even in Greek courts the juries heard only the relevant statutes read out during the proceedings, as they heard verbal testimony, and they then rendered their verdict on the spot, without the benefit of any discussion among themselves. True, in Athens the juries were representative of a broad spectrum of the population, and these juries, drawn from diverse social classes, both interpreted what they had heard and determined matters of fact. 
However, they were guided solely by the speeches prepared for the parties by professional pleaders and by the quotations of laws or decrees within the speeches, rather than by their own access to any kind of document or book. Granted, people today also rely heavily on a truly knowledgeable minority for information and its interpretation, often transmitted orally. Yet this is still fundamentally different from an ancient society in which there was no "popular literature," i.e., no newspapers, magazines, or other media that dealt with sociopolitical issues. An ancient law code would have been analogous to the Latin Bible, a venerated document but a closed book. The resistance of the medieval Church to vernacular translations of the Bible, in the West at least, is therefore a pointer to the realities of ancient literacy. When fundamental documents are accessible for study only to an elite, the rest of the society is subject to the elite's interpretation of the rules of behavior, including right political behavior. Athens, insofar as it functioned as a democracy, did so not because of widespread literacy, but because the elite had chosen to accept democratic institutions. | 199310_1-RC_3_16 | [
"Because they have a popular literature that closes the gap between the elite and the majority, contemporary societies rely far less on the knowledge of experts than did ancient societies.",
"Contemporary societies rely on the knowledge of experts, as did ancient societies, because contemporary popular literature... | 2 | The author refers to the truly knowledgeable minority in contemporary societies in the context of the fourth paragraph in order to imply which one of the following? |
Direct observation of contemporary societies at the threshold of widespread literacy has not assisted our understanding of how such literacy altered ancient Greek society, in particular its political culture. The discovery of what Goody has called the "enabling effects" of literacy in contemporary societies tends to seduce the observer into confusing often rudimentary knowledge of how to read with popular access to important books and documents; this confusion is then projected onto ancient societies. "In ancient Greece," Goody writes, "alphabetic reading and writing was important for the development of political democracy." An examination of the ancient Greek city Athens exemplifies how this sort of confusion is detrimental to understanding ancient politics. In Athens, the early development of a written law code was retrospectively mythologized as the critical factor in breaking the power monopoly of the old aristocracy: hence the Greek tradition of the "law-giver," which has captured the imaginations of scholars like Goody. But the application and efficacy of all law codes depend on their interpretation by magistrates and courts, and unless the right of interpretation is "democratized," the mere existence of written laws changes little. In fact, never in antiquity did any but the elite consult documents and books. Even in Greek courts the juries heard only the relevant statutes read out during the proceedings, as they heard verbal testimony, and they then rendered their verdict on the spot, without the benefit of any discussion among themselves. True, in Athens the juries were representative of a broad spectrum of the population, and these juries, drawn from diverse social classes, both interpreted what they had heard and determined matters of fact. 
However, they were guided solely by the speeches prepared for the parties by professional pleaders and by the quotations of laws or decrees within the speeches, rather than by their own access to any kind of document or book. Granted, people today also rely heavily on a truly knowledgeable minority for information and its interpretation, often transmitted orally. Yet this is still fundamentally different from an ancient society in which there was no "popular literature," i.e., no newspapers, magazines, or other media that dealt with sociopolitical issues. An ancient law code would have been analogous to the Latin Bible, a venerated document but a closed book. The resistance of the medieval Church to vernacular translations of the Bible, in the West at least, is therefore a pointer to the realities of ancient literacy. When fundamental documents are accessible for study only to an elite, the rest of the society is subject to the elite's interpretation of the rules of behavior, including right political behavior. Athens, insofar as it functioned as a democracy, did so not because of widespread literacy, but because the elite had chosen to accept democratic institutions. | 199310_1-RC_3_17 | [
"They were somewhat democratic insofar as they were composed largely of people from the lowest social classes.",
"They were exposed to the law only insofar as they heard relevant statutes read out during legal proceedings.",
"They ascertained the facts of a case and interpreted the laws.",
"They did not have ... | 0 | According to the passage, each of the following statements concerning ancient Greek juries is true EXCEPT: |
Direct observation of contemporary societies at the threshold of widespread literacy has not assisted our understanding of how such literacy altered ancient Greek society, in particular its political culture. The discovery of what Goody has called the "enabling effects" of literacy in contemporary societies tends to seduce the observer into confusing often rudimentary knowledge of how to read with popular access to important books and documents; this confusion is then projected onto ancient societies. "In ancient Greece," Goody writes, "alphabetic reading and writing was important for the development of political democracy." An examination of the ancient Greek city Athens exemplifies how this sort of confusion is detrimental to understanding ancient politics. In Athens, the early development of a written law code was retrospectively mythologized as the critical factor in breaking the power monopoly of the old aristocracy: hence the Greek tradition of the "law-giver," which has captured the imaginations of scholars like Goody. But the application and efficacy of all law codes depend on their interpretation by magistrates and courts, and unless the right of interpretation is "democratized," the mere existence of written laws changes little. In fact, never in antiquity did any but the elite consult documents and books. Even in Greek courts the juries heard only the relevant statutes read out during the proceedings, as they heard verbal testimony, and they then rendered their verdict on the spot, without the benefit of any discussion among themselves. True, in Athens the juries were representative of a broad spectrum of the population, and these juries, drawn from diverse social classes, both interpreted what they had heard and determined matters of fact. 
However, they were guided solely by the speeches prepared for the parties by professional pleaders and by the quotations of laws or decrees within the speeches, rather than by their own access to any kind of document or book. Granted, people today also rely heavily on a truly knowledgeable minority for information and its interpretation, often transmitted orally. Yet this is still fundamentally different from an ancient society in which there was no "popular literature," i.e., no newspapers, magazines, or other media that dealt with sociopolitical issues. An ancient law code would have been analogous to the Latin Bible, a venerated document but a closed book. The resistance of the medieval Church to vernacular translations of the Bible, in the West at least, is therefore a pointer to the realities of ancient literacy. When fundamental documents are accessible for study only to an elite, the rest of the society is subject to the elite's interpretation of the rules of behavior, including right political behavior. Athens, insofar as it functioned as a democracy, did so not because of widespread literacy, but because the elite had chosen to accept democratic institutions. | 199310_1-RC_3_18 | [
"illustrate the ancient Greek tendency to memorialize historical events by transforming them into myths",
"convey the historical importance of the development of the early Athenian written law code",
"convey the high regard in which the Athenians held their legal tradition",
"suggest that the development of a... | 3 | The author characterizes the Greek tradition of the "law-giver" (line 21) as an effect of mythologizing most probably in order to |
Direct observation of contemporary societies at the threshold of widespread literacy has not assisted our understanding of how such literacy altered ancient Greek society, in particular its political culture. The discovery of what Goody has called the "enabling effects" of literacy in contemporary societies tends to seduce the observer into confusing often rudimentary knowledge of how to read with popular access to important books and documents; this confusion is then projected onto ancient societies. "In ancient Greece," Goody writes, "alphabetic reading and writing was important for the development of political democracy." An examination of the ancient Greek city Athens exemplifies how this sort of confusion is detrimental to understanding ancient politics. In Athens, the early development of a written law code was retrospectively mythologized as the critical factor in breaking the power monopoly of the old aristocracy: hence the Greek tradition of the "law-giver," which has captured the imaginations of scholars like Goody. But the application and efficacy of all law codes depend on their interpretation by magistrates and courts, and unless the right of interpretation is "democratized," the mere existence of written laws changes little. In fact, never in antiquity did any but the elite consult documents and books. Even in Greek courts the juries heard only the relevant statutes read out during the proceedings, as they heard verbal testimony, and they then rendered their verdict on the spot, without the benefit of any discussion among themselves. True, in Athens the juries were representative of a broad spectrum of the population, and these juries, drawn from diverse social classes, both interpreted what they had heard and determined matters of fact. 
However, they were guided solely by the speeches prepared for the parties by professional pleaders and by the quotations of laws or decrees within the speeches, rather than by their own access to any kind of document or book. Granted, people today also rely heavily on a truly knowledgeable minority for information and its interpretation, often transmitted orally. Yet this is still fundamentally different from an ancient society in which there was no "popular literature," i.e., no newspapers, magazines, or other media that dealt with sociopolitical issues. An ancient law code would have been analogous to the Latin Bible, a venerated document but a closed book. The resistance of the medieval Church to vernacular translations of the Bible, in the West at least, is therefore a pointer to the realities of ancient literacy. When fundamental documents are accessible for study only to an elite, the rest of the society is subject to the elite's interpretation of the rules of behavior, including right political behavior. Athens, insofar as it functioned as a democracy, did so not because of widespread literacy, but because the elite had chosen to accept democratic institutions. | 199310_1-RC_3_19 | [
"Documents were considered authoritative in premodern society in proportion to their inaccessibility to the majority.",
"Documents that were perceived as highly influential in premodern societies were not necessarily accessible to the society's majority.",
"What is most revered in a nondemocratic society is wha... | 1 | The author draws an analogy between the Latin Bible and an early law code (lines 49–51) in order to make which one of the following points? |
Direct observation of contemporary societies at the threshold of widespread literacy has not assisted our understanding of how such literacy altered ancient Greek society, in particular its political culture. The discovery of what Goody has called the "enabling effects" of literacy in contemporary societies tends to seduce the observer into confusing often rudimentary knowledge of how to read with popular access to important books and documents; this confusion is then projected onto ancient societies. "In ancient Greece," Goody writes, "alphabetic reading and writing was important for the development of political democracy." An examination of the ancient Greek city Athens exemplifies how this sort of confusion is detrimental to understanding ancient politics. In Athens, the early development of a written law code was retrospectively mythologized as the critical factor in breaking the power monopoly of the old aristocracy: hence the Greek tradition of the "law-giver," which has captured the imaginations of scholars like Goody. But the application and efficacy of all law codes depend on their interpretation by magistrates and courts, and unless the right of interpretation is "democratized," the mere existence of written laws changes little. In fact, never in antiquity did any but the elite consult documents and books. Even in Greek courts the juries heard only the relevant statutes read out during the proceedings, as they heard verbal testimony, and they then rendered their verdict on the spot, without the benefit of any discussion among themselves. True, in Athens the juries were representative of a broad spectrum of the population, and these juries, drawn from diverse social classes, both interpreted what they had heard and determined matters of fact. 
However, they were guided solely by the speeches prepared for the parties by professional pleaders and by the quotations of laws or decrees within the speeches, rather than by their own access to any kind of document or book. Granted, people today also rely heavily on a truly knowledgeable minority for information and its interpretation, often transmitted orally. Yet this is still fundamentally different from an ancient society in which there was no "popular literature," i.e., no newspapers, magazines, or other media that dealt with sociopolitical issues. An ancient law code would have been analogous to the Latin Bible, a venerated document but a closed book. The resistance of the medieval Church to vernacular translations of the Bible, in the West at least, is therefore a pointer to the realities of ancient literacy. When fundamental documents are accessible for study only to an elite, the rest of the society is subject to the elite's interpretation of the rules of behavior, including right political behavior. Athens, insofar as it functioned as a democracy, did so not because of widespread literacy, but because the elite had chosen to accept democratic institutions. | 199310_1-RC_3_20 | [
"argue that a particular method of observing contemporary societies is inconsistent",
"point out the weaknesses in a particular approach to understanding ancient societies",
"present the disadvantages of a particular approach to understanding the relationship between ancient and contemporary societies",
"exam... | 1 | The primary purpose of the passage is to |
The English who in the seventeenth and eighteenth centuries inhabited those colonies that would later become the United States shared a common political vocabulary with the English in England. Steeped as they were in the English political language, these colonials failed to observe that their experience in America had given the words a significance quite different from that accepted by the English with whom they debated; in fact, they claimed that they were more loyal to the English political tradition than were the English in England. In many respects the political institutions of England were reproduced in these American colonies. By the middle of the eighteenth century, all of these colonies except four were headed by Royal Governors appointed by the King and perceived as bearing a relation to the people of the colony similar to that of the King to the English people. Moreover, each of these colonies enjoyed a representative assembly, which was consciously modeled, in powers and practices, after the English Parliament. In both England and these colonies, only property holders could vote. Nevertheless, though English and colonial institutions were structurally similar, attitudes toward those institutions differed. For example, English legal development from the early seventeenth century had been moving steadily toward the absolute power of Parliament. The most unmistakable sign of this tendency was the legal assertion that the King was subject to the law. Together with this resolute denial of the absolute right of kings went the assertion that Parliament was unlimited in its power: it could change even the Constitution by its ordinary acts of legislation. By the eighteenth century the English had accepted the idea that the parliamentary representatives of the people were omnipotent. The citizens of these colonies did not look upon the English Parliament with such fond eyes, nor did they concede that their own assemblies possessed such wide powers. 
There were good historical reasons for this. To the English the word "constitution" meant the whole body of law and legal custom formulated since the beginning of the kingdom, whereas to these colonials a constitution was a specific written document, enumerating specific powers. This distinction in meaning can be traced to the fact that the foundations of government in the various colonies were written charters granted by the Crown. These express authorizations to govern were tangible, definite things. Over the years these colonials had often repaired to the charters to justify themselves in the struggle against tyrannical governors or officials of the Crown. More than a century of government under written constitutions convinced these colonists of the necessity for and efficacy of protecting their liberties against governmental encroachment by explicitly defining all governmental powers in a document. | 199310_1-RC_4_21 | [
"The colonials and the English mistakenly thought that they shared a common political vocabulary.",
"The colonials and the English shared a variety of institutions.",
"The colonials and the English had conflicting interpretations of the language and institutional structures that they shared.",
"Colonial attit... | 2 | Which one of the following best expresses the main idea of the passage? |
The English who in the seventeenth and eighteenth centuries inhabited those colonies that would later become the United States shared a common political vocabulary with the English in England. Steeped as they were in the English political language, these colonials failed to observe that their experience in America had given the words a significance quite different from that accepted by the English with whom they debated; in fact, they claimed that they were more loyal to the English political tradition than were the English in England. In many respects the political institutions of England were reproduced in these American colonies. By the middle of the eighteenth century, all of these colonies except four were headed by Royal Governors appointed by the King and perceived as bearing a relation to the people of the colony similar to that of the King to the English people. Moreover, each of these colonies enjoyed a representative assembly, which was consciously modeled, in powers and practices, after the English Parliament. In both England and these colonies, only property holders could vote. Nevertheless, though English and colonial institutions were structurally similar, attitudes toward those institutions differed. For example, English legal development from the early seventeenth century had been moving steadily toward the absolute power of Parliament. The most unmistakable sign of this tendency was the legal assertion that the King was subject to the law. Together with this resolute denial of the absolute right of kings went the assertion that Parliament was unlimited in its power: it could change even the Constitution by its ordinary acts of legislation. By the eighteenth century the English had accepted the idea that the parliamentary representatives of the people were omnipotent. The citizens of these colonies did not look upon the English Parliament with such fond eyes, nor did they concede that their own assemblies possessed such wide powers. 
There were good historical reasons for this. To the English the word "constitution" meant the whole body of law and legal custom formulated since the beginning of the kingdom, whereas to these colonials a constitution was a specific written document, enumerating specific powers. This distinction in meaning can be traced to the fact that the foundations of government in the various colonies were written charters granted by the Crown. These express authorizations to govern were tangible, definite things. Over the years these colonials had often repaired to the charters to justify themselves in the struggle against tyrannical governors or officials of the Crown. More than a century of government under written constitutions convinced these colonists of the necessity for and efficacy of protecting their liberties against governmental encroachment by explicitly defining all governmental powers in a document. | 199310_1-RC_4_22 | [
"Colonials who did not own property could not vote.",
"All of these colonies had representative assemblies modeled after the British Parliament.",
"Some of these colonies had Royal Governors.",
"Royal Governors could be removed from office by colonial assemblies.",
"In these colonies, Royal Governors were r... | 3 | The passage supports all of the following statements about the political conditions present by the middle of the eighteenth century in the American colonies discussed in the passage EXCEPT: |
The English who in the seventeenth and eighteenth centuries inhabited those colonies that would later become the United States shared a common political vocabulary with the English in England. Steeped as they were in the English political language, these colonials failed to observe that their experience in America had given the words a significance quite different from that accepted by the English with whom they debated; in fact, they claimed that they were more loyal to the English political tradition than were the English in England. In many respects the political institutions of England were reproduced in these American colonies. By the middle of the eighteenth century, all of these colonies except four were headed by Royal Governors appointed by the King and perceived as bearing a relation to the people of the colony similar to that of the King to the English people. Moreover, each of these colonies enjoyed a representative assembly, which was consciously modeled, in powers and practices, after the English Parliament. In both England and these colonies, only property holders could vote. Nevertheless, though English and colonial institutions were structurally similar, attitudes toward those institutions differed. For example, English legal development from the early seventeenth century had been moving steadily toward the absolute power of Parliament. The most unmistakable sign of this tendency was the legal assertion that the King was subject to the law. Together with this resolute denial of the absolute right of kings went the assertion that Parliament was unlimited in its power: it could change even the Constitution by its ordinary acts of legislation. By the eighteenth century the English had accepted the idea that the parliamentary representatives of the people were omnipotent. The citizens of these colonies did not look upon the English Parliament with such fond eyes, nor did they concede that their own assemblies possessed such wide powers. 
There were good historical reasons for this. To the English the word "constitution" meant the whole body of law and legal custom formulated since the beginning of the kingdom, whereas to these colonials a constitution was a specific written document, enumerating specific powers. This distinction in meaning can be traced to the fact that the foundations of government in the various colonies were written charters granted by the Crown. These express authorizations to govern were tangible, definite things. Over the years these colonials had often repaired to the charters to justify themselves in the struggle against tyrannical governors or officials of the Crown. More than a century of government under written constitutions convinced these colonists of the necessity for and efficacy of protecting their liberties against governmental encroachment by explicitly defining all governmental powers in a document. | 199310_1-RC_4_23 | [
"They were the source of all law.",
"They frequently flouted laws made by Parliament.",
"Their power relative to that of Parliament was considerably greater than it was in the eighteenth century.",
"They were more often the sources of legal reform than they were in the eighteenth century.",
"They had to com... | 2 | The passage implies which one of the following about English kings prior to the early seventeenth century? |
The English who in the seventeenth and eighteenth centuries inhabited those colonies that would later become the United States shared a common political vocabulary with the English in England. Steeped as they were in the English political language, these colonials failed to observe that their experience in America had given the words a significance quite different from that accepted by the English with whom they debated; in fact, they claimed that they were more loyal to the English political tradition than were the English in England. In many respects the political institutions of England were reproduced in these American colonies. By the middle of the eighteenth century, all of these colonies except four were headed by Royal Governors appointed by the King and perceived as bearing a relation to the people of the colony similar to that of the King to the English people. Moreover, each of these colonies enjoyed a representative assembly, which was consciously modeled, in powers and practices, after the English Parliament. In both England and these colonies, only property holders could vote. Nevertheless, though English and colonial institutions were structurally similar, attitudes toward those institutions differed. For example, English legal development from the early seventeenth century had been moving steadily toward the absolute power of Parliament. The most unmistakable sign of this tendency was the legal assertion that the King was subject to the law. Together with this resolute denial of the absolute right of kings went the assertion that Parliament was unlimited in its power: it could change even the Constitution by its ordinary acts of legislation. By the eighteenth century the English had accepted the idea that the parliamentary representatives of the people were omnipotent. The citizens of these colonies did not look upon the English Parliament with such fond eyes, nor did they concede that their own assemblies possessed such wide powers. 
There were good historical reasons for this. To the English the word "constitution" meant the whole body of law and legal custom formulated since the beginning of the kingdom, whereas to these colonials a constitution was a specific written document, enumerating specific powers. This distinction in meaning can be traced to the fact that the foundations of government in the various colonies were written charters granted by the Crown. These express authorizations to govern were tangible, definite things. Over the years these colonials had often repaired to the charters to justify themselves in the struggle against tyrannical governors or officials of the Crown. More than a century of government under written constitutions convinced these colonists of the necessity for and efficacy of protecting their liberties against governmental encroachment by explicitly defining all governmental powers in a document. | 199310_1-RC_4_24 | [
"The English had become uncomfortable with institutions that could claim absolute authority.",
"The English realized that their interests were better guarded by Parliament than by the King.",
"The English allowed Parliament to make constitutional changes by legislative enactment.",
"The English felt that the ... | 2 | The author mentions which one of the following as evidence for the eighteenth-century English attitude toward Parliament? |
The English who in the seventeenth and eighteenth centuries inhabited those colonies that would later become the United States shared a common political vocabulary with the English in England. Steeped as they were in the English political language, these colonials failed to observe that their experience in America had given the words a significance quite different from that accepted by the English with whom they debated; in fact, they claimed that they were more loyal to the English political tradition than were the English in England. In many respects the political institutions of England were reproduced in these American colonies. By the middle of the eighteenth century, all of these colonies except four were headed by Royal Governors appointed by the King and perceived as bearing a relation to the people of the colony similar to that of the King to the English people. Moreover, each of these colonies enjoyed a representative assembly, which was consciously modeled, in powers and practices, after the English Parliament. In both England and these colonies, only property holders could vote. Nevertheless, though English and colonial institutions were structurally similar, attitudes toward those institutions differed. For example, English legal development from the early seventeenth century had been moving steadily toward the absolute power of Parliament. The most unmistakable sign of this tendency was the legal assertion that the King was subject to the law. Together with this resolute denial of the absolute right of kings went the assertion that Parliament was unlimited in its power: it could change even the Constitution by its ordinary acts of legislation. By the eighteenth century the English had accepted the idea that the parliamentary representatives of the people were omnipotent. The citizens of these colonies did not look upon the English Parliament with such fond eyes, nor did they concede that their own assemblies possessed such wide powers. 
There were good historical reasons for this. To the English the word "constitution" meant the whole body of law and legal custom formulated since the beginning of the kingdom, whereas to these colonials a constitution was a specific written document, enumerating specific powers. This distinction in meaning can be traced to the fact that the foundations of government in the various colonies were written charters granted by the Crown. These express authorizations to govern were tangible, definite things. Over the years these colonials had often repaired to the charters to justify themselves in the struggle against tyrannical governors or officials of the Crown. More than a century of government under written constitutions convinced these colonists of the necessity for and efficacy of protecting their liberties against governmental encroachment by explicitly defining all governmental powers in a document. | 199310_1-RC_4_25 | [
"their changed use of the English political vocabulary",
"English commitment to parliamentary representation",
"their uniquely English experience",
"their refusal to adopt any English political institutions",
"their greater loyalty to the English political traditions"
] | 4 | The passage implies that the colonials discussed in the passage would have considered which one of the following to be a source of their debates with England? |
The English who in the seventeenth and eighteenth centuries inhabited those colonies that would later become the United States shared a common political vocabulary with the English in England. Steeped as they were in the English political language, these colonials failed to observe that their experience in America had given the words a significance quite different from that accepted by the English with whom they debated; in fact, they claimed that they were more loyal to the English political tradition than were the English in England. In many respects the political institutions of England were reproduced in these American colonies. By the middle of the eighteenth century, all of these colonies except four were headed by Royal Governors appointed by the King and perceived as bearing a relation to the people of the colony similar to that of the King to the English people. Moreover, each of these colonies enjoyed a representative assembly, which was consciously modeled, in powers and practices, after the English Parliament. In both England and these colonies, only property holders could vote. Nevertheless, though English and colonial institutions were structurally similar, attitudes toward those institutions differed. For example, English legal development from the early seventeenth century had been moving steadily toward the absolute power of Parliament. The most unmistakable sign of this tendency was the legal assertion that the King was subject to the law. Together with this resolute denial of the absolute right of kings went the assertion that Parliament was unlimited in its power: it could change even the Constitution by its ordinary acts of legislation. By the eighteenth century the English had accepted the idea that the parliamentary representatives of the people were omnipotent. The citizens of these colonies did not look upon the English Parliament with such fond eyes, nor did they concede that their own assemblies possessed such wide powers. 
There were good historical reasons for this. To the English the word "constitution" meant the whole body of law and legal custom formulated since the beginning of the kingdom, whereas to these colonials a constitution was a specific written document, enumerating specific powers. This distinction in meaning can be traced to the fact that the foundations of government in the various colonies were written charters granted by the Crown. These express authorizations to govern were tangible, definite things. Over the years these colonials had often repaired to the charters to justify themselves in the struggle against tyrannical governors or officials of the Crown. More than a century of government under written constitutions convinced these colonists of the necessity for and efficacy of protecting their liberties against governmental encroachment by explicitly defining all governmental powers in a document. | 199310_1-RC_4_26 | [
"the legal foundation of the kingdom",
"a document containing a collection of customs",
"a cumulative corpus of legislation and legal traditions",
"a record alterable by royal authority",
"an unchangeable body of governmental powers"
] | 2 | According to the passage, the English attitude toward the English Constitution differed from the colonial attitude toward constitutions in that the English regarded their Constitution as |
The English who in the seventeenth and eighteenth centuries inhabited those colonies that would later become the United States shared a common political vocabulary with the English in England. Steeped as they were in the English political language, these colonials failed to observe that their experience in America had given the words a significance quite different from that accepted by the English with whom they debated; in fact, they claimed that they were more loyal to the English political tradition than were the English in England. In many respects the political institutions of England were reproduced in these American colonies. By the middle of the eighteenth century, all of these colonies except four were headed by Royal Governors appointed by the King and perceived as bearing a relation to the people of the colony similar to that of the King to the English people. Moreover, each of these colonies enjoyed a representative assembly, which was consciously modeled, in powers and practices, after the English Parliament. In both England and these colonies, only property holders could vote. Nevertheless, though English and colonial institutions were structurally similar, attitudes toward those institutions differed. For example, English legal development from the early seventeenth century had been moving steadily toward the absolute power of Parliament. The most unmistakable sign of this tendency was the legal assertion that the King was subject to the law. Together with this resolute denial of the absolute right of kings went the assertion that Parliament was unlimited in its power: it could change even the Constitution by its ordinary acts of legislation. By the eighteenth century the English had accepted the idea that the parliamentary representatives of the people were omnipotent. The citizens of these colonies did not look upon the English Parliament with such fond eyes, nor did they concede that their own assemblies possessed such wide powers. 
There were good historical reasons for this. To the English the word "constitution" meant the whole body of law and legal custom formulated since the beginning of the kingdom, whereas to these colonials a constitution was a specific written document, enumerating specific powers. This distinction in meaning can be traced to the fact that the foundations of government in the various colonies were written charters granted by the Crown. These express authorizations to govern were tangible, definite things. Over the years these colonials had often repaired to the charters to justify themselves in the struggle against tyrannical governors or officials of the Crown. More than a century of government under written constitutions convinced these colonists of the necessity for and efficacy of protecting their liberties against governmental encroachment by explicitly defining all governmental powers in a document. | 199310_1-RC_4_27 | [
"expose the misunderstanding that has characterized descriptions of the relationship between seventeenth- and eighteenth-century England and certain of its American colonies",
"suggest a reason for England's treatment of certain of its American colonies in the seventeenth and eighteenth centuries",
"settle an o... | 4 | The primary purpose of the passage is to |
Oil companies need offshore platforms primarily because the oil or natural gas the companies extract from the ocean floor has to be processed before pumps can be used to move the substances ashore. But because processing crude (unprocessed oil or gas) on a platform rather than at facilities onshore exposes workers to the risks of explosion and to an unpredictable environment, researchers are attempting to diminish the need for human labor on platforms and even to eliminate platforms altogether by redesigning two kinds of pumps to handle crude. These pumps could then be used to boost the natural pressure driving the flow of crude, which, by itself, is sufficient only to bring the crude to the platform, located just above the wellhead. Currently, pumps that could boost this natural pressure sufficiently to drive the crude through a pipeline to the shore do not work consistently because of the crude's content. Crude may consist of oil or natural gas in multiphase states—combinations of liquids, gases, and solids under pressure—that do not reach the wellhead in constant proportions. The flow of crude oil, for example, can change quickly from 60 percent liquid to 70 percent gas. This surge in gas content causes loss of "head," or pressure inside a pump, with the result that a pump can no longer impart enough energy to transport the crude mixture through the pipeline and to the shore. Of the two pumps being redesigned, the positive-displacement pump is promising because it is immune to sudden shifts in the proportion of liquid to gas in the crude mixture. But the pump's design, which consists of a single or twin screw pushing the fluid from one end of the pump to the other, brings crude into close contact with most parts of the pump, and thus requires that it be made of expensive corrosion-resistant material. The alternative is the centrifugal pump, which has a rotating impeller that sucks fluid in at one end and forces fluid out at the other. 
Although this pump has a proven design and has worked for years with little maintenance in waste-disposal plants, researchers have discovered that because the swirl of its impeller separates gas out from the oil that normally accompanies it, significant reductions in head can occur as it operates. Research in the development of these pumps is focused mainly on trying to reduce the cost of the positive-displacement pump and attempting to make the centrifugal pump more tolerant of gas. Other researchers are looking at ways of adapting either kind of pump for use underwater, so that crude could be moved directly from the sea bottom to processing facilities onshore, eliminating platforms. | 199402_3-RC_1_1 | [
"Oil companies are experimenting with technologies that may help diminish the danger to workers from offshore crude processing.",
"Oil companies are seeking methods of installing processing facilities underwater.",
"Researchers are developing several new pumps designed to enhance human labor efficiency in proce... | 0 | Which one of the following best expresses the main ideas of the passage? |
Oil companies need offshore platforms primarily because the oil or natural gas the companies extract from the ocean floor has to be processed before pumps can be used to move the substances ashore. But because processing crude (unprocessed oil or gas) on a platform rather than at facilities onshore exposes workers to the risks of explosion and to an unpredictable environment, researchers are attempting to diminish the need for human labor on platforms and even to eliminate platforms altogether by redesigning two kinds of pumps to handle crude. These pumps could then be used to boost the natural pressure driving the flow of crude, which, by itself, is sufficient only to bring the crude to the platform, located just above the wellhead. Currently, pumps that could boost this natural pressure sufficiently to drive the crude through a pipeline to the shore do not work consistently because of the crude's content. Crude may consist of oil or natural gas in multiphase states—combinations of liquids, gases, and solids under pressure—that do not reach the wellhead in constant proportions. The flow of crude oil, for example, can change quickly from 60 percent liquid to 70 percent gas. This surge in gas content causes loss of "head," or pressure inside a pump, with the result that a pump can no longer impart enough energy to transport the crude mixture through the pipeline and to the shore. Of the two pumps being redesigned, the positive-displacement pump is promising because it is immune to sudden shifts in the proportion of liquid to gas in the crude mixture. But the pump's design, which consists of a single or twin screw pushing the fluid from one end of the pump to the other, brings crude into close contact with most parts of the pump, and thus requires that it be made of expensive corrosion-resistant material. The alternative is the centrifugal pump, which has a rotating impeller that sucks fluid in at one end and forces fluid out at the other. 
Although this pump has a proven design and has worked for years with little maintenance in waste-disposal plants, researchers have discovered that because the swirl of its impeller separates gas out from the oil that normally accompanies it, significant reductions in head can occur as it operates. Research in the development of these pumps is focused mainly on trying to reduce the cost of the positive-displacement pump and attempting to make the centrifugal pump more tolerant of gas. Other researchers are looking at ways of adapting either kind of pump for use underwater, so that crude could be moved directly from the sea bottom to processing facilities onshore, eliminating platforms. | 199402_3-RC_1_2 | [
"It is higher than that created by the centrifugal pump.",
"It is constant, regardless of relative proportions of gas and liquid.",
"It is able to carry the crude only as far as the wellhead.",
"It is able to carry the crude to the platform.",
"It is able to carry the crude to the shore."
] | 3 | The passage supports which one of the following statements about the natural pressure driving the flow of crude? |
Oil companies need offshore platforms primarily because the oil or natural gas the companies extract from the ocean floor has to be processed before pumps can be used to move the substances ashore. But because processing crude (unprocessed oil or gas) on a platform rather than at facilities onshore exposes workers to the risks of explosion and to an unpredictable environment, researchers are attempting to diminish the need for human labor on platforms and even to eliminate platforms altogether by redesigning two kinds of pumps to handle crude. These pumps could then be used to boost the natural pressure driving the flow of crude, which, by itself, is sufficient only to bring the crude to the platform, located just above the wellhead. Currently, pumps that could boost this natural pressure sufficiently to drive the crude through a pipeline to the shore do not work consistently because of the crude's content. Crude may consist of oil or natural gas in multiphase states—combinations of liquids, gases, and solids under pressure—that do not reach the wellhead in constant proportions. The flow of crude oil, for example, can change quickly from 60 percent liquid to 70 percent gas. This surge in gas content causes loss of "head," or pressure inside a pump, with the result that a pump can no longer impart enough energy to transport the crude mixture through the pipeline and to the shore. Of the two pumps being redesigned, the positive-displacement pump is promising because it is immune to sudden shifts in the proportion of liquid to gas in the crude mixture. But the pump's design, which consists of a single or twin screw pushing the fluid from one end of the pump to the other, brings crude into close contact with most parts of the pump, and thus requires that it be made of expensive corrosion-resistant material. The alternative is the centrifugal pump, which has a rotating impeller that sucks fluid in at one end and forces fluid out at the other. 
Although this pump has a proven design and has worked for years with little maintenance in waste-disposal plants, researchers have discovered that because the swirl of its impeller separates gas out from the oil that normally accompanies it, significant reductions in head can occur as it operates. Research in the development of these pumps is focused mainly on trying to reduce the cost of the positive-displacement pump and attempting to make the centrifugal pump more tolerant of gas. Other researchers are looking at ways of adapting either kind of pump for use underwater, so that crude could be moved directly from the sea bottom to processing facilities onshore, eliminating platforms. | 199402_3-RC_1_3 | [
"It offers concrete detail designed to show that the argument made in the first paragraph is flawed.",
"It provides detail that expands upon the information presented in the first paragraph.",
"It enhances the author's discussion by objectively presenting in detail the pros and cons of a claim made in the first... | 1 | Which one of the following best describes the relationship of the second paragraph to the passage as a whole? |
Oil companies need offshore platforms primarily because the oil or natural gas the companies extract from the ocean floor has to be processed before pumps can be used to move the substances ashore. But because processing crude (unprocessed oil or gas) on a platform rather than at facilities onshore exposes workers to the risks of explosion and to an unpredictable environment, researchers are attempting to diminish the need for human labor on platforms and even to eliminate platforms altogether by redesigning two kinds of pumps to handle crude. These pumps could then be used to boost the natural pressure driving the flow of crude, which, by itself, is sufficient only to bring the crude to the platform, located just above the wellhead. Currently, pumps that could boost this natural pressure sufficiently to drive the crude through a pipeline to the shore do not work consistently because of the crude's content. Crude may consist of oil or natural gas in multiphase states—combinations of liquids, gases, and solids under pressure—that do not reach the wellhead in constant proportions. The flow of crude oil, for example, can change quickly from 60 percent liquid to 70 percent gas. This surge in gas content causes loss of "head," or pressure inside a pump, with the result that a pump can no longer impart enough energy to transport the crude mixture through the pipeline and to the shore. Of the two pumps being redesigned, the positive-displacement pump is promising because it is immune to sudden shifts in the proportion of liquid to gas in the crude mixture. But the pump's design, which consists of a single or twin screw pushing the fluid from one end of the pump to the other, brings crude into close contact with most parts of the pump, and thus requires that it be made of expensive corrosion-resistant material. The alternative is the centrifugal pump, which has a rotating impeller that sucks fluid in at one end and forces fluid out at the other. 
Although this pump has a proven design and has worked for years with little maintenance in waste-disposal plants, researchers have discovered that because the swirl of its impeller separates gas out from the oil that normally accompanies it, significant reductions in head can occur as it operates. Research in the development of these pumps is focused mainly on trying to reduce the cost of the positive-displacement pump and attempting to make the centrifugal pump more tolerant of gas. Other researchers are looking at ways of adapting either kind of pump for use underwater, so that crude could be moved directly from the sea bottom to processing facilities onshore, eliminating platforms. | 199402_3-RC_1_4 | [
"the flow of the crude inside the pump",
"the volume of oil inside the pump",
"the volume of gas inside the pump",
"the speed of the impeller moving the crude",
"the pressure inside of the pump"
] | 4 | Which one of the following phrases, if substituted for the word "head" in line 47, would LEAST change the meaning of the sentence? |
Oil companies need offshore platforms primarily because the oil or natural gas the companies extract from the ocean floor has to be processed before pumps can be used to move the substances ashore. But because processing crude (unprocessed oil or gas) on a platform rather than at facilities onshore exposes workers to the risks of explosion and to an unpredictable environment, researchers are attempting to diminish the need for human labor on platforms and even to eliminate platforms altogether by redesigning two kinds of pumps to handle crude. These pumps could then be used to boost the natural pressure driving the flow of crude, which, by itself, is sufficient only to bring the crude to the platform, located just above the wellhead. Currently, pumps that could boost this natural pressure sufficiently to drive the crude through a pipeline to the shore do not work consistently because of the crude's content. Crude may consist of oil or natural gas in multiphase states—combinations of liquids, gases, and solids under pressure—that do not reach the wellhead in constant proportions. The flow of crude oil, for example, can change quickly from 60 percent liquid to 70 percent gas. This surge in gas content causes loss of "head," or pressure inside a pump, with the result that a pump can no longer impart enough energy to transport the crude mixture through the pipeline and to the shore. Of the two pumps being redesigned, the positive-displacement pump is promising because it is immune to sudden shifts in the proportion of liquid to gas in the crude mixture. But the pump's design, which consists of a single or twin screw pushing the fluid from one end of the pump to the other, brings crude into close contact with most parts of the pump, and thus requires that it be made of expensive corrosion-resistant material. The alternative is the centrifugal pump, which has a rotating impeller that sucks fluid in at one end and forces fluid out at the other. 
Although this pump has a proven design and has worked for years with little maintenance in waste-disposal plants, researchers have discovered that because the swirl of its impeller separates gas out from the oil that normally accompanies it, significant reductions in head can occur as it operates. Research in the development of these pumps is focused mainly on trying to reduce the cost of the positive-displacement pump and attempting to make the centrifugal pump more tolerant of gas. Other researchers are looking at ways of adapting either kind of pump for use underwater, so that crude could be moved directly from the sea bottom to processing facilities onshore, eliminating platforms. | 199402_3-RC_1_5 | [
"If a reduction of human labor on offshore platforms is achieved, there is no real need to eliminate platforms altogether.",
"Reducing human labor on offshore platforms is desirable because researchers' knowledge about the transportation of crude is dangerously incomplete.",
"The dangers involved in working on o... | 2 | With which one of the following statements regarding offshore platforms would the author most likely agree? |
Oil companies need offshore platforms primarily because the oil or natural gas the companies extract from the ocean floor has to be processed before pumps can be used to move the substances ashore. But because processing crude (unprocessed oil or gas) on a platform rather than at facilities onshore exposes workers to the risks of explosion and to an unpredictable environment, researchers are attempting to diminish the need for human labor on platforms and even to eliminate platforms altogether by redesigning two kinds of pumps to handle crude. These pumps could then be used to boost the natural pressure driving the flow of crude, which, by itself, is sufficient only to bring the crude to the platform, located just above the wellhead. Currently, pumps that could boost this natural pressure sufficiently to drive the crude through a pipeline to the shore do not work consistently because of the crude's content. Crude may consist of oil or natural gas in multiphase states—combinations of liquids, gases, and solids under pressure—that do not reach the wellhead in constant proportions. The flow of crude oil, for example, can change quickly from 60 percent liquid to 70 percent gas. This surge in gas content causes loss of "head," or pressure inside a pump, with the result that a pump can no longer impart enough energy to transport the crude mixture through the pipeline and to the shore. Of the two pumps being redesigned, the positive-displacement pump is promising because it is immune to sudden shifts in the proportion of liquid to gas in the crude mixture. But the pump's design, which consists of a single or twin screw pushing the fluid from one end of the pump to the other, brings crude into close contact with most parts of the pump, and thus requires that it be made of expensive corrosion-resistant material. The alternative is the centrifugal pump, which has a rotating impeller that sucks fluid in at one end and forces fluid out at the other. 
Although this pump has a proven design and has worked for years with little maintenance in waste-disposal plants, researchers have discovered that because the swirl of its impeller separates gas out from the oil that normally accompanies it, significant reductions in head can occur as it operates. Research in the development of these pumps is focused mainly on trying to reduce the cost of the positive-displacement pump and attempting to make the centrifugal pump more tolerant of gas. Other researchers are looking at ways of adapting either kind of pump for use underwater, so that crude could be moved directly from the sea bottom to processing facilities onshore, eliminating platforms. | 199402_3-RC_1_6 | [
"The efficiency of these pumps depends on there being no gas in the flow of crude.",
"These pumps are more efficient when the crude is less subject to sudden increases in the proportion of gas to liquid.",
"A sudden change from solid to liquid in the flow of crude increases the efficiency of these pumps.",
"T... | 1 | Which one of the following can be inferred from the passage about pumps that are currently available to boost the natural pressure of crude? |
Oil companies need offshore platforms primarily because the oil or natural gas the companies extract from the ocean floor has to be processed before pumps can be used to move the substances ashore. But because processing crude (unprocessed oil or gas) on a platform rather than at facilities onshore exposes workers to the risks of explosion and to an unpredictable environment, researchers are attempting to diminish the need for human labor on platforms and even to eliminate platforms altogether by redesigning two kinds of pumps to handle crude. These pumps could then be used to boost the natural pressure driving the flow of crude, which, by itself, is sufficient only to bring the crude to the platform, located just above the wellhead. Currently, pumps that could boost this natural pressure sufficiently to drive the crude through a pipeline to the shore do not work consistently because of the crude's content. Crude may consist of oil or natural gas in multiphase states—combinations of liquids, gases, and solids under pressure—that do not reach the wellhead in constant proportions. The flow of crude oil, for example, can change quickly from 60 percent liquid to 70 percent gas. This surge in gas content causes loss of "head," or pressure inside a pump, with the result that a pump can no longer impart enough energy to transport the crude mixture through the pipeline and to the shore. Of the two pumps being redesigned, the positive-displacement pump is promising because it is immune to sudden shifts in the proportion of liquid to gas in the crude mixture. But the pump's design, which consists of a single or twin screw pushing the fluid from one end of the pump to the other, brings crude into close contact with most parts of the pump, and thus requires that it be made of expensive corrosion-resistant material. The alternative is the centrifugal pump, which has a rotating impeller that sucks fluid in at one end and forces fluid out at the other. 
Although this pump has a proven design and has worked for years with little maintenance in waste-disposal plants, researchers have discovered that because the swirl of its impeller separates gas out from the oil that normally accompanies it, significant reductions in head can occur as it operates. Research in the development of these pumps is focused mainly on trying to reduce the cost of the positive-displacement pump and attempting to make the centrifugal pump more tolerant of gas. Other researchers are looking at ways of adapting either kind of pump for use underwater, so that crude could be moved directly from the sea bottom to processing facilities onshore, eliminating platforms. | 199402_3-RC_1_7 | [
"is more promising, but it also is more expensive and demands more maintenance",
"is especially well researched, since it has been used in other settings",
"involves the use of a single or twin screw that sucks fluid in at one end of the pump",
"is problematic because it causes rapid shifts from liquid to gas... | 4 | The passage implies that the positive-displacement pump differs from the centrifugal pump in that the positive-displacement pump |
Oil companies need offshore platforms primarily because the oil or natural gas the companies extract from the ocean floor has to be processed before pumps can be used to move the substances ashore. But because processing crude (unprocessed oil or gas) on a platform rather than at facilities onshore exposes workers to the risks of explosion and to an unpredictable environment, researchers are attempting to diminish the need for human labor on platforms and even to eliminate platforms altogether by redesigning two kinds of pumps to handle crude. These pumps could then be used to boost the natural pressure driving the flow of crude, which, by itself, is sufficient only to bring the crude to the platform, located just above the wellhead. Currently, pumps that could boost this natural pressure sufficiently to drive the crude through a pipeline to the shore do not work consistently because of the crude's content. Crude may consist of oil or natural gas in multiphase states—combinations of liquids, gases, and solids under pressure—that do not reach the wellhead in constant proportions. The flow of crude oil, for example, can change quickly from 60 percent liquid to 70 percent gas. This surge in gas content causes loss of "head," or pressure inside a pump, with the result that a pump can no longer impart enough energy to transport the crude mixture through the pipeline and to the shore. Of the two pumps being redesigned, the positive-displacement pump is promising because it is immune to sudden shifts in the proportion of liquid to gas in the crude mixture. But the pump's design, which consists of a single or twin screw pushing the fluid from one end of the pump to the other, brings crude into close contact with most parts of the pump, and thus requires that it be made of expensive corrosion-resistant material. The alternative is the centrifugal pump, which has a rotating impeller that sucks fluid in at one end and forces fluid out at the other. 
Although this pump has a proven design and has worked for years with little maintenance in waste-disposal plants, researchers have discovered that because the swirl of its impeller separates gas out from the oil that normally accompanies it, significant reductions in head can occur as it operates. Research in the development of these pumps is focused mainly on trying to reduce the cost of the positive-displacement pump and attempting to make the centrifugal pump more tolerant of gas. Other researchers are looking at ways of adapting either kind of pump for use underwater, so that crude could be moved directly from the sea bottom to processing facilities onshore, eliminating platforms. | 199402_3-RC_1_8 | [
"in a multiphase state",
"in equal proportions of gas to liquid",
"with small proportions of corrosive material",
"after having been processed",
"largely in the form of a liquid"
] | 3 | The passage implies that the current state of technology necessitates that crude be moved to shore |
To critics accustomed to the style of fifteenth-century narrative paintings by Italian artists from Tuscany, the Venetian examples of narrative paintings with religious subjects that Patricia Fortini Brown analyzes in a recent book will come as a great surprise. While the Tuscan paintings present large-scale figures, clear narratives, and simple settings, the Venetians filled their pictures with dozens of small figures and elaborate buildings, in addition to a wealth of carefully observed anecdotal detail often irrelevant to the paintings' principal subjects—the religious stories they narrate. Although it occasionally obscured these stories, this accumulation of circumstantial detail from Venetian life—the inclusion of prominent Venetian citizens, for example—was considered appropriate to the narration of historical subjects and underlined the authenticity of the historical events depicted. Indeed, Brown argues that the distinctive style of the Venetian paintings—what she calls the "eyewitness style" —was influenced by Venetian affinity for a strongly parochial type of historical writing, consisting almost exclusively of vernacular chronicles of local events embroidered with all kinds of inconsequential detail. And yet, while Venetian attitudes toward history that are reflected in their art account in part for the difference in style between Venetian and Tuscan narrative paintings, Brown has overlooked some practical influences, such as climate. Tuscan churches are filled with frescoes that, in contrast to Venetian narrative paintings, consist mainly of large figures and easily recognized religious stories, as one would expect of paintings that are normally viewed from a distance and are designed primarily to remind the faithful of their religious tenets. 
In Venice, where the damp climate is unsuited to fresco, narrative frescoes in churches were almost nonexistent, with the result that Venetian artists and their public had no practical experience of the large-scale representation of familiar religious stories. Their model for painted stories was the cycle of secular historical paintings in the Venetian magistrate's palace, which were indeed the counterpart of written history and were made all the more authoritative by a proliferation of circumstantial detail. Moreover, because painting frescoes requires an unusually sure hand, particularly in the representation of the human form, the development of drawing skill was central to artistic training in Tuscany, and by 1500 the public there tended to distinguish artists on the basis of how well they could draw human figures. In Venice, a city virtually without frescoes, this kind of skill was acquired and appreciated much later. Gentile Bellini, for example, although regarded as one of the supreme painters of the day, was feeble at drawing. On the other hand, the emphasis on architecture so evident in the Venetian narrative paintings was something that local painters obviously prized, largely because painting architecture in perspective was seen as a particular test of the Venetian painter's skill. | 199402_3-RC_2_9 | [
"Tuscan painters' use of fresco explains the prominence of human figures in the narrative paintings that they produced during the fifteenth century.",
"In addition to fifteenth-century Venetian attitudes toward history, other factors may help to explain the characteristic features of Venetian narrative paintings ... | 1 | Which one of the following best states the main idea of the passage? |
To critics accustomed to the style of fifteenth-century narrative paintings by Italian artists from Tuscany, the Venetian examples of narrative paintings with religious subjects that Patricia Fortini Brown analyzes in a recent book will come as a great surprise. While the Tuscan paintings present large-scale figures, clear narratives, and simple settings, the Venetians filled their pictures with dozens of small figures and elaborate buildings, in addition to a wealth of carefully observed anecdotal detail often irrelevant to the paintings' principal subjects—the religious stories they narrate. Although it occasionally obscured these stories, this accumulation of circumstantial detail from Venetian life—the inclusion of prominent Venetian citizens, for example—was considered appropriate to the narration of historical subjects and underlined the authenticity of the historical events depicted. Indeed, Brown argues that the distinctive style of the Venetian paintings—what she calls the "eyewitness style" —was influenced by Venetian affinity for a strongly parochial type of historical writing, consisting almost exclusively of vernacular chronicles of local events embroidered with all kinds of inconsequential detail. And yet, while Venetian attitudes toward history that are reflected in their art account in part for the difference in style between Venetian and Tuscan narrative paintings, Brown has overlooked some practical influences, such as climate. Tuscan churches are filled with frescoes that, in contrast to Venetian narrative paintings, consist mainly of large figures and easily recognized religious stories, as one would expect of paintings that are normally viewed from a distance and are designed primarily to remind the faithful of their religious tenets. 
In Venice, where the damp climate is unsuited to fresco, narrative frescoes in churches were almost nonexistent, with the result that Venetian artists and their public had no practical experience of the large-scale representation of familiar religious stories. Their model for painted stories was the cycle of secular historical paintings in the Venetian magistrate's palace, which were indeed the counterpart of written history and were made all the more authoritative by a proliferation of circumstantial detail. Moreover, because painting frescoes requires an unusually sure hand, particularly in the representation of the human form, the development of drawing skill was central to artistic training in Tuscany, and by 1500 the public there tended to distinguish artists on the basis of how well they could draw human figures. In Venice, a city virtually without frescoes, this kind of skill was acquired and appreciated much later. Gentile Bellini, for example, although regarded as one of the supreme painters of the day, was feeble at drawing. On the other hand, the emphasis on architecture so evident in the Venetian narrative paintings was something that local painters obviously prized, largely because painting architecture in perspective was seen as a particular test of the Venetian painter's skill. | 199402_3-RC_2_10 | [
"pointing out the superiority of one painting style over another",
"citing evidence that requires a reevaluation of a conventionally held view",
"discussing factors that explain a difference in painting styles",
"outlining the strengths and weaknesses of two opposing views regarding the evolution of a paintin... | 2 | In the passage, the author is primarily concerned with |
To critics accustomed to the style of fifteenth-century narrative paintings by Italian artists from Tuscany, the Venetian examples of narrative paintings with religious subjects that Patricia Fortini Brown analyzes in a recent book will come as a great surprise. While the Tuscan paintings present large-scale figures, clear narratives, and simple settings, the Venetians filled their pictures with dozens of small figures and elaborate buildings, in addition to a wealth of carefully observed anecdotal detail often irrelevant to the paintings' principal subjects—the religious stories they narrate. Although it occasionally obscured these stories, this accumulation of circumstantial detail from Venetian life—the inclusion of prominent Venetian citizens, for example—was considered appropriate to the narration of historical subjects and underlined the authenticity of the historical events depicted. Indeed, Brown argues that the distinctive style of the Venetian paintings—what she calls the "eyewitness style" —was influenced by Venetian affinity for a strongly parochial type of historical writing, consisting almost exclusively of vernacular chronicles of local events embroidered with all kinds of inconsequential detail. And yet, while Venetian attitudes toward history that are reflected in their art account in part for the difference in style between Venetian and Tuscan narrative paintings, Brown has overlooked some practical influences, such as climate. Tuscan churches are filled with frescoes that, in contrast to Venetian narrative paintings, consist mainly of large figures and easily recognized religious stories, as one would expect of paintings that are normally viewed from a distance and are designed primarily to remind the faithful of their religious tenets. 
In Venice, where the damp climate is unsuited to fresco, narrative frescoes in churches were almost nonexistent, with the result that Venetian artists and their public had no practical experience of the large-scale representation of familiar religious stories. Their model for painted stories was the cycle of secular historical paintings in the Venetian magistrate's palace, which were indeed the counterpart of written history and were made all the more authoritative by a proliferation of circumstantial detail. Moreover, because painting frescoes requires an unusually sure hand, particularly in the representation of the human form, the development of drawing skill was central to artistic training in Tuscany, and by 1500 the public there tended to distinguish artists on the basis of how well they could draw human figures. In Venice, a city virtually without frescoes, this kind of skill was acquired and appreciated much later. Gentile Bellini, for example, although regarded as one of the supreme painters of the day, was feeble at drawing. On the other hand, the emphasis on architecture so evident in the Venetian narrative paintings was something that local painters obviously prized, largely because painting architecture in perspective was seen as a particular test of the Venetian painter's skill. | 199402_3-RC_2_11 | [
"the painting of architecture in perspective requires greater drawing skill than does the representation of a human form in a fresco",
"certain characteristics of a style of painting can reflect a style of historical writing that was common during the same period",
"the eyewitness style in Venetian narrative pa... | 1 | As it is described in the passage, Brown's explanation of the use of the eyewitness style in Venetian narrative painting suggests that |
To critics accustomed to the style of fifteenth-century narrative paintings by Italian artists from Tuscany, the Venetian examples of narrative paintings with religious subjects that Patricia Fortini Brown analyzes in a recent book will come as a great surprise. While the Tuscan paintings present large-scale figures, clear narratives, and simple settings, the Venetians filled their pictures with dozens of small figures and elaborate buildings, in addition to a wealth of carefully observed anecdotal detail often irrelevant to the paintings' principal subjects—the religious stories they narrate. Although it occasionally obscured these stories, this accumulation of circumstantial detail from Venetian life—the inclusion of prominent Venetian citizens, for example—was considered appropriate to the narration of historical subjects and underlined the authenticity of the historical events depicted. Indeed, Brown argues that the distinctive style of the Venetian paintings—what she calls the "eyewitness style" —was influenced by Venetian affinity for a strongly parochial type of historical writing, consisting almost exclusively of vernacular chronicles of local events embroidered with all kinds of inconsequential detail. And yet, while Venetian attitudes toward history that are reflected in their art account in part for the difference in style between Venetian and Tuscan narrative paintings, Brown has overlooked some practical influences, such as climate. Tuscan churches are filled with frescoes that, in contrast to Venetian narrative paintings, consist mainly of large figures and easily recognized religious stories, as one would expect of paintings that are normally viewed from a distance and are designed primarily to remind the faithful of their religious tenets. 
In Venice, where the damp climate is unsuited to fresco, narrative frescoes in churches were almost nonexistent, with the result that Venetian artists and their public had no practical experience of the large-scale representation of familiar religious stories. Their model for painted stories was the cycle of secular historical paintings in the Venetian magistrate's palace, which were indeed the counterpart of written history and were made all the more authoritative by a proliferation of circumstantial detail. Moreover, because painting frescoes requires an unusually sure hand, particularly in the representation of the human form, the development of drawing skill was central to artistic training in Tuscany, and by 1500 the public there tended to distinguish artists on the basis of how well they could draw human figures. In Venice, a city virtually without frescoes, this kind of skill was acquired and appreciated much later. Gentile Bellini, for example, although regarded as one of the supreme painters of the day, was feeble at drawing. On the other hand, the emphasis on architecture so evident in the Venetian narrative paintings was something that local painters obviously prized, largely because painting architecture in perspective was seen as a particular test of the Venetian painter's skill. | 199402_3-RC_2_12 | [
"were able to draw human figures with more skill after they were apprenticed to painters in Tuscany",
"assumed that their paintings would typically be viewed from a distance",
"were a major influence on the artists who produced the cycle of historical paintings in the Venetian magistrate's palace",
"were relu... | 4 | The author suggests that fifteenth-century Venetian narrative paintings with religious subjects were painted by artists who |
To critics accustomed to the style of fifteenth-century narrative paintings by Italian artists from Tuscany, the Venetian examples of narrative paintings with religious subjects that Patricia Fortini Brown analyzes in a recent book will come as a great surprise. While the Tuscan paintings present large-scale figures, clear narratives, and simple settings, the Venetians filled their pictures with dozens of small figures and elaborate buildings, in addition to a wealth of carefully observed anecdotal detail often irrelevant to the paintings' principal subjects—the religious stories they narrate. Although it occasionally obscured these stories, this accumulation of circumstantial detail from Venetian life—the inclusion of prominent Venetian citizens, for example—was considered appropriate to the narration of historical subjects and underlined the authenticity of the historical events depicted. Indeed, Brown argues that the distinctive style of the Venetian paintings—what she calls the "eyewitness style" —was influenced by Venetian affinity for a strongly parochial type of historical writing, consisting almost exclusively of vernacular chronicles of local events embroidered with all kinds of inconsequential detail. And yet, while Venetian attitudes toward history that are reflected in their art account in part for the difference in style between Venetian and Tuscan narrative paintings, Brown has overlooked some practical influences, such as climate. Tuscan churches are filled with frescoes that, in contrast to Venetian narrative paintings, consist mainly of large figures and easily recognized religious stories, as one would expect of paintings that are normally viewed from a distance and are designed primarily to remind the faithful of their religious tenets. 
In Venice, where the damp climate is unsuited to fresco, narrative frescoes in churches were almost nonexistent, with the result that Venetian artists and their public had no practical experience of the large-scale representation of familiar religious stories. Their model for painted stories was the cycle of secular historical paintings in the Venetian magistrate's palace, which were indeed the counterpart of written history and were made all the more authoritative by a proliferation of circumstantial detail. Moreover, because painting frescoes requires an unusually sure hand, particularly in the representation of the human form, the development of drawing skill was central to artistic training in Tuscany, and by 1500 the public there tended to distinguish artists on the basis of how well they could draw human figures. In Venice, a city virtually without frescoes, this kind of skill was acquired and appreciated much later. Gentile Bellini, for example, although regarded as one of the supreme painters of the day, was feeble at drawing. On the other hand, the emphasis on architecture so evident in the Venetian narrative paintings was something that local painters obviously prized, largely because painting architecture in perspective was seen as a particular test of the Venetian painter's skill. | 199402_3-RC_2_13 | [
"the ability to paint architecture in perspective was seen in Venice as proof of a painter's skill",
"the subjects of such paintings were often religious stories",
"large frescoes were especially conducive to representing architecture in perspective",
"the architecture of Venice in the fifteenth century was m... | 0 | The author implies that Venetian narrative paintings with religious subjects included the representation of elaborate buildings in part because |
To critics accustomed to the style of fifteenth-century narrative paintings by Italian artists from Tuscany, the Venetian examples of narrative paintings with religious subjects that Patricia Fortini Brown analyzes in a recent book will come as a great surprise. While the Tuscan paintings present large-scale figures, clear narratives, and simple settings, the Venetians filled their pictures with dozens of small figures and elaborate buildings, in addition to a wealth of carefully observed anecdotal detail often irrelevant to the paintings' principal subjects—the religious stories they narrate. Although it occasionally obscured these stories, this accumulation of circumstantial detail from Venetian life—the inclusion of prominent Venetian citizens, for example—was considered appropriate to the narration of historical subjects and underlined the authenticity of the historical events depicted. Indeed, Brown argues that the distinctive style of the Venetian paintings—what she calls the "eyewitness style"—was influenced by Venetian affinity for a strongly parochial type of historical writing, consisting almost exclusively of vernacular chronicles of local events embroidered with all kinds of inconsequential detail. And yet, while Venetian attitudes toward history that are reflected in their art account in part for the difference in style between Venetian and Tuscan narrative paintings, Brown has overlooked some practical influences, such as climate. Tuscan churches are filled with frescoes that, in contrast to Venetian narrative paintings, consist mainly of large figures and easily recognized religious stories, as one would expect of paintings that are normally viewed from a distance and are designed primarily to remind the faithful of their religious tenets. 
In Venice, where the damp climate is unsuited to fresco, narrative frescoes in churches were almost nonexistent, with the result that Venetian artists and their public had no practical experience of the large-scale representation of familiar religious stories. Their model for painted stories was the cycle of secular historical paintings in the Venetian magistrate's palace, which were indeed the counterpart of written history and were made all the more authoritative by a proliferation of circumstantial detail. Moreover, because painting frescoes requires an unusually sure hand, particularly in the representation of the human form, the development of drawing skill was central to artistic training in Tuscany, and by 1500 the public there tended to distinguish artists on the basis of how well they could draw human figures. In Venice, a city virtually without frescoes, this kind of skill was acquired and appreciated much later. Gentile Bellini, for example, although regarded as one of the supreme painters of the day, was feeble at drawing. On the other hand, the emphasis on architecture so evident in the Venetian narrative paintings was something that local painters obviously prized, largely because painting architecture in perspective was seen as a particular test of the Venetian painter's skill. | 199402_3-RC_2_14 | [
"The style of secular historical paintings in the palace of the Venetian magistrate was similar to that of Venetian narrative paintings with religious subjects.",
"The style of the historical writing produced by fifteenth-century Venetian authors was similar in its inclusion of anecdotal details to secular painti... | 2 | Which one of the following, if true, would most weaken the author's contention that fifteenth-century Venetian artists "had no practical experience of the large-scale representation of familiar religious stories" (lines 40–42)? |
Currently, legal scholars agree that in some cases legal rules do not specify a definite outcome. These scholars believe that such indeterminacy results from the vagueness of language: the boundaries of the application of a term are often unclear. Nevertheless, they maintain that the system of legal rules by and large rests on clear core meanings that do determine definite outcomes for most cases. Contrary to this view, an earlier group of legal philosophers, called "realists," argued that indeterminacy pervades every part of the law. The realists held that there is always a cluster of rules relevant to the decision in any litigated case. For example, deciding whether an aunt's promise to pay her niece a sum of money if she refrained from smoking is enforceable would involve a number of rules regarding such issues as offer, acceptance, and revocation. Linguistic vagueness in any one of these rules would affect the outcome of the case, making possible multiple points of indeterminacy, not just one or two, in any legal case. For the realists, an even more damaging kind of indeterminacy stems from the fact that in a common-law system based on precedent, a judge's decision is held to be binding on judges in subsequent similar cases. Judicial decisions are expressed in written opinions, commonly held to consist of two parts: the holding (the decision for or against the plaintiff and the essential grounds or legal reasons for it, that is, what subsequent judges are bound by), and the dicta (everything in an opinion not essential to the decision, for example, comments about points of law not treated as the basis of the outcome). The realists argued that in practice the common-law system treats the "holding/dicta" distinction loosely. 
They pointed out that even when the judge writing an opinion characterizes part of it as "the holding," judges writing subsequent opinions, although unlikely to dispute the decision itself, are not bound by the original judge's perception of what was essential to the decision. Later judges have tremendous leeway in being able to redefine the holding and the dicta in a precedential case. This leeway enables judges to choose which rules of law formed the basis of the decision in the earlier case. When judging almost any case, then, a judge can find a relevant precedential case which, in subsequent opinions, has been read by one judge as stating one legal rule, and by another judge as stating another, possibly contradictory one. A judge thus faces an indeterminate legal situation in which he or she has to choose which rules are to govern the case at hand. | 199402_3-RC_3_15 | [
"It gives rise to numerous situations in which the decisions of earlier judges are found to be in error by later judges.",
"It possesses a clear set of legal rules in theory, but in practice most judges are unaware of the strict meaning of those rules.",
"Its strength lies in the requirement that judges decide ... | 4 | According to the passage, the realists argued that which one of the following is true of a common-law system? |
Currently, legal scholars agree that in some cases legal rules do not specify a definite outcome. These scholars believe that such indeterminacy results from the vagueness of language: the boundaries of the application of a term are often unclear. Nevertheless, they maintain that the system of legal rules by and large rests on clear core meanings that do determine definite outcomes for most cases. Contrary to this view, an earlier group of legal philosophers, called "realists," argued that indeterminacy pervades every part of the law. The realists held that there is always a cluster of rules relevant to the decision in any litigated case. For example, deciding whether an aunt's promise to pay her niece a sum of money if she refrained from smoking is enforceable would involve a number of rules regarding such issues as offer, acceptance, and revocation. Linguistic vagueness in any one of these rules would affect the outcome of the case, making possible multiple points of indeterminacy, not just one or two, in any legal case. For the realists, an even more damaging kind of indeterminacy stems from the fact that in a common-law system based on precedent, a judge's decision is held to be binding on judges in subsequent similar cases. Judicial decisions are expressed in written opinions, commonly held to consist of two parts: the holding (the decision for or against the plaintiff and the essential grounds or legal reasons for it, that is, what subsequent judges are bound by), and the dicta (everything in an opinion not essential to the decision, for example, comments about points of law not treated as the basis of the outcome). The realists argued that in practice the common-law system treats the "holding/dicta" distinction loosely. 
They pointed out that even when the judge writing an opinion characterizes part of it as "the holding," judges writing subsequent opinions, although unlikely to dispute the decision itself, are not bound by the original judge's perception of what was essential to the decision. Later judges have tremendous leeway in being able to redefine the holding and the dicta in a precedential case. This leeway enables judges to choose which rules of law formed the basis of the decision in the earlier case. When judging almost any case, then, a judge can find a relevant precedential case which, in subsequent opinions, has been read by one judge as stating one legal rule, and by another judge as stating another, possibly contradictory one. A judge thus faces an indeterminate legal situation in which he or she has to choose which rules are to govern the case at hand. | 199402_3-RC_3_16 | [
"The holding is not commonly considered binding on subsequent judges, but the decision is.",
"The holding formally states the outcome of the case, while the decision explains it.",
"The holding explains the decision but does not include it.",
"The holding consists of the decision and the dicta.",
"The holdi... | 4 | According to the passage, which one of the following best describes the relationship between a judicial holding and a judicial decision? |
Currently, legal scholars agree that in some cases legal rules do not specify a definite outcome. These scholars believe that such indeterminacy results from the vagueness of language: the boundaries of the application of a term are often unclear. Nevertheless, they maintain that the system of legal rules by and large rests on clear core meanings that do determine definite outcomes for most cases. Contrary to this view, an earlier group of legal philosophers, called "realists," argued that indeterminacy pervades every part of the law. The realists held that there is always a cluster of rules relevant to the decision in any litigated case. For example, deciding whether an aunt's promise to pay her niece a sum of money if she refrained from smoking is enforceable would involve a number of rules regarding such issues as offer, acceptance, and revocation. Linguistic vagueness in any one of these rules would affect the outcome of the case, making possible multiple points of indeterminacy, not just one or two, in any legal case. For the realists, an even more damaging kind of indeterminacy stems from the fact that in a common-law system based on precedent, a judge's decision is held to be binding on judges in subsequent similar cases. Judicial decisions are expressed in written opinions, commonly held to consist of two parts: the holding (the decision for or against the plaintiff and the essential grounds or legal reasons for it, that is, what subsequent judges are bound by), and the dicta (everything in an opinion not essential to the decision, for example, comments about points of law not treated as the basis of the outcome). The realists argued that in practice the common-law system treats the "holding/dicta" distinction loosely. 
They pointed out that even when the judge writing an opinion characterizes part of it as "the holding," judges writing subsequent opinions, although unlikely to dispute the decision itself, are not bound by the original judge's perception of what was essential to the decision. Later judges have tremendous leeway in being able to redefine the holding and the dicta in a precedential case. This leeway enables judges to choose which rules of law formed the basis of the decision in the earlier case. When judging almost any case, then, a judge can find a relevant precedential case which, in subsequent opinions, has been read by one judge as stating one legal rule, and by another judge as stating another, possibly contradictory one. A judge thus faces an indeterminate legal situation in which he or she has to choose which rules are to govern the case at hand. | 199402_3-RC_3_17 | [
"The judges would most likely disagree with one or more of the interpretations and overturn the earlier judges' decisions.",
"The judges might differ from each other concerning which of the interpretations would apply in a given case.",
"The judges probably would consider themselves bound by all the legal rules... | 1 | The information in the passage suggests that the realists would most likely have agreed with which one of the following statements about the reaction of judges to past interpretations of a precedential case, each of which states a different legal rule? |
Currently, legal scholars agree that in some cases legal rules do not specify a definite outcome. These scholars believe that such indeterminacy results from the vagueness of language: the boundaries of the application of a term are often unclear. Nevertheless, they maintain that the system of legal rules by and large rests on clear core meanings that do determine definite outcomes for most cases. Contrary to this view, an earlier group of legal philosophers, called "realists," argued that indeterminacy pervades every part of the law. The realists held that there is always a cluster of rules relevant to the decision in any litigated case. For example, deciding whether an aunt's promise to pay her niece a sum of money if she refrained from smoking is enforceable would involve a number of rules regarding such issues as offer, acceptance, and revocation. Linguistic vagueness in any one of these rules would affect the outcome of the case, making possible multiple points of indeterminacy, not just one or two, in any legal case. For the realists, an even more damaging kind of indeterminacy stems from the fact that in a common-law system based on precedent, a judge's decision is held to be binding on judges in subsequent similar cases. Judicial decisions are expressed in written opinions, commonly held to consist of two parts: the holding (the decision for or against the plaintiff and the essential grounds or legal reasons for it, that is, what subsequent judges are bound by), and the dicta (everything in an opinion not essential to the decision, for example, comments about points of law not treated as the basis of the outcome). The realists argued that in practice the common-law system treats the "holding/dicta" distinction loosely. 
They pointed out that even when the judge writing an opinion characterizes part of it as "the holding," judges writing subsequent opinions, although unlikely to dispute the decision itself, are not bound by the original judge's perception of what was essential to the decision. Later judges have tremendous leeway in being able to redefine the holding and the dicta in a precedential case. This leeway enables judges to choose which rules of law formed the basis of the decision in the earlier case. When judging almost any case, then, a judge can find a relevant precedential case which, in subsequent opinions, has been read by one judge as stating one legal rule, and by another judge as stating another, possibly contradictory one. A judge thus faces an indeterminate legal situation in which he or she has to choose which rules are to govern the case at hand. | 199402_3-RC_3_18 | [
"linguistic vagueness can cause indeterminacy regarding the outcome of a litigated case",
"in any litigated case, several different and possibly contradictory legal rules are relevant to the decision of the case",
"the distinction between holding and dicta in a written opinion is usually difficult to determine ... | 0 | It can be inferred from the passage that most legal scholars today would agree with the realists that |
Currently, legal scholars agree that in some cases legal rules do not specify a definite outcome. These scholars believe that such indeterminacy results from the vagueness of language: the boundaries of the application of a term are often unclear. Nevertheless, they maintain that the system of legal rules by and large rests on clear core meanings that do determine definite outcomes for most cases. Contrary to this view, an earlier group of legal philosophers, called "realists," argued that indeterminacy pervades every part of the law. The realists held that there is always a cluster of rules relevant to the decision in any litigated case. For example, deciding whether an aunt's promise to pay her niece a sum of money if she refrained from smoking is enforceable would involve a number of rules regarding such issues as offer, acceptance, and revocation. Linguistic vagueness in any one of these rules would affect the outcome of the case, making possible multiple points of indeterminacy, not just one or two, in any legal case. For the realists, an even more damaging kind of indeterminacy stems from the fact that in a common-law system based on precedent, a judge's decision is held to be binding on judges in subsequent similar cases. Judicial decisions are expressed in written opinions, commonly held to consist of two parts: the holding (the decision for or against the plaintiff and the essential grounds or legal reasons for it, that is, what subsequent judges are bound by), and the dicta (everything in an opinion not essential to the decision, for example, comments about points of law not treated as the basis of the outcome). The realists argued that in practice the common-law system treats the "holding/dicta" distinction loosely. 
They pointed out that even when the judge writing an opinion characterizes part of it as "the holding," judges writing subsequent opinions, although unlikely to dispute the decision itself, are not bound by the original judge's perception of what was essential to the decision. Later judges have tremendous leeway in being able to redefine the holding and the dicta in a precedential case. This leeway enables judges to choose which rules of law formed the basis of the decision in the earlier case. When judging almost any case, then, a judge can find a relevant precedential case which, in subsequent opinions, has been read by one judge as stating one legal rule, and by another judge as stating another, possibly contradictory one. A judge thus faces an indeterminate legal situation in which he or she has to choose which rules are to govern the case at hand. | 199402_3-RC_3_19 | [
"The judge writing the opinion is usually careful to specify those parts of the opinion he or she considers part of the dicta.",
"The appropriateness of the judge's decision would be disputed by subsequent judges on the basis of legal rules expressed in the dicta.",
"A consensus concerning what constitutes the ... | 3 | The passage suggests that the realists believed which one of the following to be true of the dicta in a judge's written opinion? |
Currently, legal scholars agree that in some cases legal rules do not specify a definite outcome. These scholars believe that such indeterminacy results from the vagueness of language: the boundaries of the application of a term are often unclear. Nevertheless, they maintain that the system of legal rules by and large rests on clear core meanings that do determine definite outcomes for most cases. Contrary to this view, an earlier group of legal philosophers, called "realists," argued that indeterminacy pervades every part of the law. The realists held that there is always a cluster of rules relevant to the decision in any litigated case. For example, deciding whether an aunt's promise to pay her niece a sum of money if she refrained from smoking is enforceable would involve a number of rules regarding such issues as offer, acceptance, and revocation. Linguistic vagueness in any one of these rules would affect the outcome of the case, making possible multiple points of indeterminacy, not just one or two, in any legal case. For the realists, an even more damaging kind of indeterminacy stems from the fact that in a common-law system based on precedent, a judge's decision is held to be binding on judges in subsequent similar cases. Judicial decisions are expressed in written opinions, commonly held to consist of two parts: the holding (the decision for or against the plaintiff and the essential grounds or legal reasons for it, that is, what subsequent judges are bound by), and the dicta (everything in an opinion not essential to the decision, for example, comments about points of law not treated as the basis of the outcome). The realists argued that in practice the common-law system treats the "holding/dicta" distinction loosely. 
They pointed out that even when the judge writing an opinion characterizes part of it as "the holding," judges writing subsequent opinions, although unlikely to dispute the decision itself, are not bound by the original judge's perception of what was essential to the decision. Later judges have tremendous leeway in being able to redefine the holding and the dicta in a precedential case. This leeway enables judges to choose which rules of law formed the basis of the decision in the earlier case. When judging almost any case, then, a judge can find a relevant precedential case which, in subsequent opinions, has been read by one judge as stating one legal rule, and by another judge as stating another, possibly contradictory one. A judge thus faces an indeterminate legal situation in which he or she has to choose which rules are to govern the case at hand. | 199402_3-RC_3_20 | [
"A traditional point of view is explained and problems arising from it are described.",
"Two conflicting systems of thought are compared point for point and then evaluated.",
"A legal concept is defined and arguments justifying that definition are refuted.",
"Two viewpoints on an issue are briefly described a... | 3 | Which one of the following best describes the overall organization of the passage? |
Currently, legal scholars agree that in some cases legal rules do not specify a definite outcome. These scholars believe that such indeterminacy results from the vagueness of language: the boundaries of the application of a term are often unclear. Nevertheless, they maintain that the system of legal rules by and large rests on clear core meanings that do determine definite outcomes for most cases. Contrary to this view, an earlier group of legal philosophers, called "realists," argued that indeterminacy pervades every part of the law. The realists held that there is always a cluster of rules relevant to the decision in any litigated case. For example, deciding whether an aunt's promise to pay her niece a sum of money if she refrained from smoking is enforceable would involve a number of rules regarding such issues as offer, acceptance, and revocation. Linguistic vagueness in any one of these rules would affect the outcome of the case, making possible multiple points of indeterminacy, not just one or two, in any legal case. For the realists, an even more damaging kind of indeterminacy stems from the fact that in a common-law system based on precedent, a judge's decision is held to be binding on judges in subsequent similar cases. Judicial decisions are expressed in written opinions, commonly held to consist of two parts: the holding (the decision for or against the plaintiff and the essential grounds or legal reasons for it, that is, what subsequent judges are bound by), and the dicta (everything in an opinion not essential to the decision, for example, comments about points of law not treated as the basis of the outcome). The realists argued that in practice the common-law system treats the "holding/dicta" distinction loosely. 
They pointed out that even when the judge writing an opinion characterizes part of it as "the holding," judges writing subsequent opinions, although unlikely to dispute the decision itself, are not bound by the original judge's perception of what was essential to the decision. Later judges have tremendous leeway in being able to redefine the holding and the dicta in a precedential case. This leeway enables judges to choose which rules of law formed the basis of the decision in the earlier case. When judging almost any case, then, a judge can find a relevant precedential case which, in subsequent opinions, has been read by one judge as stating one legal rule, and by another judge as stating another, possibly contradictory one. A judge thus faces an indeterminate legal situation in which he or she has to choose which rules are to govern the case at hand. | 199402_3-RC_3_21 | [
"Legal Indeterminacy: The Debate Continues",
"Holding Versus Dicta: A Distinction Without a Difference",
"Linguistic Vagueness: Is It Circumscribed in Legal Terminology?",
"Legal Indeterminacy: The Realist's View of Its Scope",
"Legal Rules and the Precedential System: How Judges Interpret the Precedents"
] | 3 | Which one of the following titles best reflects the content of the passage? |
Years after the movement to obtain civil rights for black people in the United States made its most important gains, scholars are reaching for a theoretical perspective capable of clarifying its momentous developments. New theories of social movements are being discussed, not just among social psychologists, but also among political theorists. Of the many competing formulations of the "classical" social psychological theory of social movement, three are prominent in the literature on the civil rights movement: "rising expectations," "relative deprivation," and "J-curve." Each conforms to a causal sequence characteristic of classical social movement theory, linking some unusual condition, or "system strain," to the generation of unrest. When these versions of the classical theory are applied to the civil rights movement, the source of strain is identified as a change in black socioeconomic status that occurred shortly before the widespread protest activity of the movement. For example, the theory of rising expectations asserts that protest activity was a response to psychological tensions generated by gains experienced immediately prior to the civil rights movement. Advancement did not satisfy ambition, but created the desire for further advancement. Only slightly different is the theory of relative deprivation. Here the impetus to protest is identified as gains achieved during the premovement period, coupled with simultaneous failure to make any appreciable headway relative to the dominant group. The J-curve theory argues that the movement occurred because a prolonged period of rising expectations and gratification was followed by a sharp reversal. Political theorists have been dismissive of these applications of classical theory to the civil rights movement. 
Their arguments rest on the conviction that, implicitly, the classical theory trivializes the political ends of movement participants, focusing rather on presumed psychological dysfunctions; reduction of complex social situations to simple paradigms of stimulus and response obviates the relevance of all but the shortest-term analysis. Furthermore, the theories lack predictive value: "strain" is always present to some degree, but social movement is not. How can we know which strain will provoke upheaval? These very legitimate complaints having frequently been made, it remains to find a means of testing the strength of the theories. Problematically, while proponents of the various theories have contradictory interpretations of socioeconomic conditions leading to the civil rights movement, examination of various statistical records regarding the material status of black Americans yields ample evidence to support any of the three theories. The steady rise in median black family income supports the rising expectations hypothesis; the stability of the economic position of black vis-à-vis white Americans lends credence to the relative deprivation interpretation; unemployment data are consistent with the J-curve theory. A better test is the comparison of each of these economic indicators with the frequency of movement-initiated events reported in the press; unsurprisingly, none correlates significantly with the pace of reports about movement activity. | 199402_3-RC_4_22 | [
"may focus on personalities rather than on political issues",
"is not provoked primarily by an unusual condition",
"may be decided according to the psychological needs of voters",
"may not entail momentous developments",
"actually entails two or more distinct social movements"
] | 1 | It can be inferred from the passage that the classical theory of social movement would not be appropriately applied to an annual general election because such an election |
Years after the movement to obtain civil rights for black people in the United States made its most important gains, scholars are reaching for a theoretical perspective capable of clarifying its momentous developments. New theories of social movements are being discussed, not just among social psychologists, but also among political theorists. Of the many competing formulations of the "classical" social psychological theory of social movement, three are prominent in the literature on the civil rights movement: "rising expectations," "relative deprivation," and "J-curve." Each conforms to a causal sequence characteristic of classical social movement theory, linking some unusual condition, or "system strain," to the generation of unrest. When these versions of the classical theory are applied to the civil rights movement, the source of strain is identified as a change in black socioeconomic status that occurred shortly before the widespread protest activity of the movement. For example, the theory of rising expectations asserts that protest activity was a response to psychological tensions generated by gains experienced immediately prior to the civil rights movement. Advancement did not satisfy ambition, but created the desire for further advancement. Only slightly different is the theory of relative deprivation. Here the impetus to protest is identified as gains achieved during the premovement period, coupled with simultaneous failure to make any appreciable headway relative to the dominant group. The J-curve theory argues that the movement occurred because a prolonged period of rising expectations and gratification was followed by a sharp reversal. Political theorists have been dismissive of these applications of classical theory to the civil rights movement. 
Their arguments rest on the conviction that, implicitly, the classical theory trivializes the political ends of movement participants, focusing rather on presumed psychological dysfunctions; reduction of complex social situations to simple paradigms of stimulus and response obviates the relevance of all but the shortest-term analysis. Furthermore, the theories lack predictive value: "strain" is always present to some degree, but social movement is not. How can we know which strain will provoke upheaval? These very legitimate complaints having frequently been made, it remains to find a means of testing the strength of the theories. Problematically, while proponents of the various theories have contradictory interpretations of socioeconomic conditions leading to the civil rights movement, examination of various statistical records regarding the material status of black Americans yields ample evidence to support any of the three theories. The steady rise in median black family income supports the rising expectations hypothesis; the stability of the economic position of black vis-à-vis white Americans lends credence to the relative deprivation interpretation; unemployment data are consistent with the J-curve theory. A better test is the comparison of each of these economic indicators with the frequency of movement-initiated events reported in the press; unsurprisingly, none correlates significantly with the pace of reports about movement activity. | 199402_3-RC_4_23 | [
"They predict different responses to the same socioeconomic conditions.",
"They disagree about the relevance of psychological explanations for protest movements.",
"They are meant to explain different kinds of social change.",
"They describe the motivation of protesters in slightly different ways.",
"They d... | 3 | According to the passage, the "rising expectations" and "relative deprivation" models differ in which one of the following ways? |
| 199402_3-RC_4_24 | [
"Participants in any given social movement have conflicting motivations.",
"Social movements are ultimately beneficial to society.",
"Only strain of a socioeconomic nature can provoke a social movement.",
"The political ends of movement participants are best analyzed in terms of participants' psychological mo... | 4 | The author implies that political theorists attribute which one of the following assumptions to social psychologists who apply the classical theory of social movements to the civil rights movement? |
| 199402_3-RC_4_25 | [
"The test confirms the three classical theories discussed in the passage.",
"The test provides no basis for deciding among the three classical theories discussed in the passage.",
"The test shows that it is impossible to apply any theory of social movements to the civil rights movement.",
"The test indicates ... | 1 | Which one of the following statements is supported by the results of the "better test" discussed in the last paragraph of the passage? |
| 199402_3-RC_4_26 | [
"the press is selective about the movement activities it chooses to cover",
"not all economic indicators receive the same amount of press coverage",
"economic indicators often contradict one another",
"a movement-initiated event may not correlate significantly with any of the three economic indicators",
"th... | 0 | The validity of the "better test" (line 65) as proposed by the author might be undermined by the fact that |