| context (string, 269 classes) | id_string (string, 15–16 chars) | answers (list, length 5) | label (int64, 0–4) | question (string, 34–417 chars) |
|---|---|---|---|---|
Traditionally, members of a community such as a town or neighborhood share a common location and a sense of necessary interdependence that includes, for example, mutual respect and emotional support. But as modern societies grow more technological and sometimes more alienating, people tend to spend less time in the kinds of interactions that their communities require in order to thrive. Meanwhile, technology has made it possible for individuals to interact via personal computer with others who are geographically distant. Advocates claim that these computer conferences, in which large numbers of participants communicate by typing comments that are immediately read by other participants and responding immediately to those comments they read, function as communities that can substitute for traditional interactions with neighbors. What are the characteristics that advocates claim allow computer conferences to function as communities? For one, participants often share common interests or concerns; conferences are frequently organized around specific topics such as music or parenting. Second, because these conferences are conversations, participants have adopted certain conventions in recognition of the importance of respecting each other's sensibilities. Abbreviations are used to convey commonly expressed sentiments of courtesy such as "pardon me for cutting in" ("pmfci") or "in my humble opinion" ("imho"). Because a humorous tone can be difficult to communicate in writing, participants will often end an intentionally humorous comment with a set of characters that, when looked at sideways, resembles a smiling or winking face. Typing messages entirely in capital letters is avoided, because its tendency to demand the attention of a reader's eye is considered the computer equivalent of shouting. 
These conventions, advocates claim, constitute a form of etiquette, and with this etiquette as a foundation, people often form genuine, trusting relationships, even offering advice and support during personal crises such as illness or the loss of a loved one. But while it is true that conferences can be both respectful and supportive, they nonetheless fall short of communities. For example, conferences discriminate along educational and economic lines because participation requires a basic knowledge of computers and the ability to afford access to conferences. Further, while advocates claim that a shared interest makes computer conferences similar to traditional communities—insofar as the shared interest is analogous to a traditional community's shared location—this analogy simply does not work. Conference participants are a self-selecting group; they are drawn together by their shared interest in the topic of the conference. Actual communities, on the other hand, are "nonintentional": the people who inhabit towns or neighborhoods are thus more likely to exhibit genuine diversity—of age, career, or personal interests—than are conference participants. It might be easier to find common ground in a computer conference than in today's communities, but in so doing it would be unfortunate if conference participants cut themselves off further from valuable interactions in their own towns or neighborhoods. | 200112_2-RC_1_6 | [
"Participants in computer conferences are generally more accepting of diversity than is the population at large.",
"Computer technology is rapidly becoming more affordable and accessible to people from a variety of backgrounds.",
"Participants in computer conferences often apply the same degree of respect and s... | 1 | Which one of the following, if true, would most weaken one of the author's arguments in the last paragraph? |
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. 
In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England. | 200112_2-RC_2_7 | [
"Analyses of the scientific, theological, and legal writings of the Renaissance have proved to be more important to an understanding of the period than have studies of humanistic and literary works.",
"The English works of such Renaissance writers as Shakespeare, Marlowe, and Sidney have been overemphasized at th... | 4 | Which one of the following best states the main idea of the passage? |
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. 
In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England. | 200112_2-RC_2_8 | [
"These scholars tend to lack training both in language and in intellectual history, and thus base their interpretations of Renaissance culture on works translated into English.",
"These scholars tend to lack the combination of training in both language and intellectual history that is necessary for a proper study... | 1 | The passage contains support for which one of the following statements concerning those scholars who analyze works written in Latin during the Renaissance? |
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. 
In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England. | 200112_2-RC_2_9 | [
"Continental writers wrote in Latin more frequently than did English writers, and thus rendered some of the most important Continental works inaccessible to English readers.",
"Continental writers, more intellectually advanced than their English counterparts, were on the whole responsible for familiarizing English... | 3 | Which one of the following statements concerning the relationship between English and Continental writers of the Renaissance era can be inferred from the passage? |
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. 
In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England. | 200112_2-RC_2_10 | [
"nonfiction works are less well known than their imaginative works",
"works have unfairly been credited with revolutionizing Western thought",
"works have been treated as an autonomous and coherent whole",
"works have traditionally been seen as representing the high culture of Renaissance England",
"Latin w... | 3 | The author of the passage most likely cites Shakespeare, Marlowe, and Sidney in the first paragraph as examples of writers whose |
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. 
In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England. | 200112_2-RC_2_11 | [
"These writings have unfortunately been undervalued by Latin-language specialists because of their nonliterary subject matter.",
"These writings, according to Latin-language specialists, had very little influence on the intellectual upheavals associated with the Renaissance.",
"These writings, as analyzed by in... | 2 | Binns would be most likely to agree with which one of the following statements concerning the English language writings of Renaissance England traditionally studied by intellectual historians? |
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. 
In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England. | 200112_2-RC_2_12 | [
"These works are easier for modern scholars to analyze than are theological works of the same era.",
"These works have seldom been translated into English and thus remain inscrutable to modern scholars, despite the availability of illuminating commentaries.",
"These works are difficult for modern scholars to an... | 2 | The information in the passage suggests which one of the following concerning late-Renaissance scientific works written in Latin? |
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. 
In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England. | 200112_2-RC_2_13 | [
"illustrate the range of difficulty in Renaissance Latin writing, from relatively straightforward to very difficult",
"illustrate the differing scholarly attitudes toward Renaissance writers who wrote in Latin and those who wrote in English",
"illustrate the fact that the concerns of English writers of the Rena... | 4 | The author of the passage mentions the poet Milton and the scientist Newton primarily in order to |
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. 
In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England. | 200112_2-RC_2_14 | [
"an enumeration of new approaches",
"contrasting views of disparate theories",
"a summary of intellectual disputes",
"a discussion of a significant deficiency",
"a correction of an author's misconceptions"
] | 3 | The author of the passage is primarily concerned with presenting which one of the following? |
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. 
Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically. | 200112_2-RC_3_15 | [
"Both the solute concentration and the volume of an animal's blood plasma must be kept within relatively narrow ranges.",
"Behavioral responses to changes in an animal's blood plasma can compensate for physiological malfunction, allowing the body to avoid dehydration.",
"The effect of hormones on animal behavio... | 3 | Which one of the following best states the main idea of the passage? |
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. 
Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically. | 200112_2-RC_3_16 | [
"review briefly the history of research into the relationships between gonadal and peptide hormones that has led to the present discussion",
"decry the fact that previous research has concentrated on the relatively minor issue of the relationships between hormones and behavior",
"establish the emphasis of earli... | 2 | The author of the passage cites the relationship between gonadal hormones and reproductive behavior in order to |
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. 
Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically. | 200112_2-RC_3_17 | [
"The amount secreted depends on the level of steroid hormones in the blood.",
"The amount secreted is important for maintaining homeostasis in cases of both increased and decreased osmolality.",
"It works in conjunction with steroid hormones in increasing plasma volume.",
"It works in conjunction with steroid... | 1 | It can be inferred from the passage that which one of the following is true of vasopressin? |
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. 
Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically. | 200112_2-RC_3_18 | [
"present new information",
"question standard assumptions",
"reinterpret earlier findings",
"advocate a novel theory",
"outline a new approach"
] | 0 | The primary function of the passage as a whole is to |
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. 
Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically. | 200112_2-RC_3_19 | [
"Hunger is diminished.",
"Thirst is initiated.",
"Vasopressin is secreted.",
"Water is excreted.",
"Sodium is consumed."
] | 0 | According to the passage, all of the following typically occur in the homeostasis of blood-plasma osmolality EXCEPT: |
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. 
Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically. | 200112_2-RC_3_20 | [
"It increases thirst and stimulates sodium appetite.",
"It helps prevent further dilution of body fluids.",
"It increases the conservation of water in the kidneys.",
"It causes minor changes in plasma volume.",
"It helps stimulate the secretion of steroid hormones."
] | 1 | According to the passage, the withholding of vasopressin fulfills which one of the following functions in the restoration of plasma osmolality to normal levels? |
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. 
But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice. | 200112_2-RC_4_21 | [
"Following the elimination of the apartheid system in South Africa, lawyers, judges, and citizens will need to abandon their posture of opposition to law and design a new and fairer legal system.",
"If the new legal system in South Africa is to succeed, lawyers, judges, and citizens must learn to challenge parlia... | 3 | Which one of the following most completely and accurately states the main point of the passage? |
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. 
But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice. | 200112_2-RC_4_22 | [
"to describe the role of the parliament under South Africa's new constitution",
"to argue for returning final legal authority to the parliament",
"to contrast the character of legal practice under the apartheid system with that to be implemented under the new constitution",
"to criticize the creation of a cou... | 2 | Which one of the following most accurately describes the author's primary purpose in lines 10–19? |
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. 
But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice. | 200112_2-RC_4_23 | [
"deep skepticism",
"open pessimism",
"total indifference",
"guarded optimism",
"complete confidence"
] | 3 | The passage suggests that the author's attitude toward the possibility of success for a rights-based legal system in South Africa is most likely one of |
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. 
But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice. | 200112_2-RC_4_24 | [
"decisions rendered in constitutional court",
"challenges from concerned citizens",
"new laws passed in the parliament",
"provisions in the constitution's bill of rights",
"other judges with a more rule-bound approach to the law"
] | 2 | According to the passage, under the apartheid system the rulings of judges were sometimes counteracted by |
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. 
But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice. | 200112_2-RC_4_25 | [
"A solution to a problem is identified, several methods of implementing the solution are discussed, and one of the methods is argued for.",
"The background to a problem is presented, past methods of solving the problem are criticized, and a new solution is proposed.",
"An analysis of a problem is presented, pos... | 4 | Which one of the following most accurately describes the organization of the last paragraph of the passage? |
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. 
But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice. | 200112_2-RC_4_26 | [
"Reliance of judges on the interpretations given bills of rights in other countries must be tempered by the recognition that such interpretations may be based on circumstances not necessarily applicable to South Africa.",
"Basing interpretations of the South African bill of rights on interpretations given bills o... | 0 | Based on the passage, the scholars mentioned in the second paragraph would be most likely to agree with which one of the following statements? |
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. 
Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined. | 200206_1-RC_1_1 | [
"Because trials requiring juries are relatively rare, the usefulness of the unanimity requirement does not need to be reexamined.",
"The unanimity requirement should be maintained because most hung juries are caused by irresponsible jurors rather than by any flaws in the requirement.",
"The problem of hung juries... | 4 | Which one of the following most accurately states the main point of the passage? |
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. 
Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined. | 200206_1-RC_1_2 | [
"cursory appreciation",
"neutral interest",
"cautious endorsement",
"firm support",
"unreasoned reverence"
] | 3 | Which one of the following most accurately describes the author's attitude toward the unanimity requirement? |
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. 
Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined. | 200206_1-RC_1_3 | [
"The risk of unjust verdicts is serious enough to warrant strong measures to avoid it.",
"Fairness in jury trials is crucial and so judges must be extremely thorough in order to ensure it.",
"Careful adherence to the unanimity requirement will eventually eliminate unjust verdicts.",
"Safeguards must be in pla... | 0 | Which one of the following principles can most clearly be said to underlie the author's arguments in the third paragraph? |
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. 
Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined. | 200206_1-RC_1_4 | [
"It is not surprising, then, that the arguments presented by the critics of the unanimity requirement grow out of a separate tradition from that embodied in the unanimity requirement.",
"Similarly, if there is a public debate concerning the unanimity requirement, public faith in the requirement will be strengthen... | 2 | Which one of the following sentences could most logically be added to the end of the last paragraph of the passage? |
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. 
Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined. | 200206_1-RC_1_5 | [
"obstinate",
"suspicious",
"careful",
"conscientious",
"naive"
] | 0 | Which one of the following could replace the term "recalcitrant" (line 16) without a substantial change in the meaning of the critics' claim? |
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. 
Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined. | 200206_1-RC_1_6 | [
"Only verdicts in very close cases would be affected.",
"The responsibility felt by jurors to be respectful to one another would be lessened.",
"Society's confidence in the fairness of the verdicts would be undermined.",
"The problem of hung juries would not be solved but would surface less frequently.",
"A... | 2 | The author explicitly claims that which one of the following would be a result of allowing a juror's dissenting opinion to be dismissed? |
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. 
Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined. | 200206_1-RC_1_7 | [
"Hung juries most often result from an error in judgment on the part of one juror.",
"Aside from the material costs of hung juries, the criminal justice system has few flaws.",
"The fact that jury trials are so rare renders any flaws in the jury system insignificant.",
"Hung juries are acceptable and usually ... | 3 | It can be inferred from the passage that the author would be most likely to agree with which one of the following? |
Spurred by the discovery that a substance containing uranium emitted radiation, Marie Curie began studying radioactivity in 1897. She first tested gold and copper for radiation but found none. She then tested pitchblende, a mineral that was known to contain uranium, and discovered that it was more radioactive than uranium. Acting on the hypothesis that pitchblende must contain at least one other radioactive element, Curie was able to isolate a pair of previously unknown elements, polonium and radium. Turning her attention to the rate of radioactive emission, she discovered that uranium emitted radiation at a consistent rate, even if heated or dissolved. Based on these results, Curie concluded that the emission rate for a given element was constant. Furthermore, because radiation appeared to be spontaneous, with no discernible difference between radiating and nonradiating elements, she was unable to postulate a mechanism by which to explain radiation. It is now known that radiation occurs when certain isotopes (atoms of the same element that differ slightly in their atomic structure) decay, and that emission rates are not constant but decrease very slowly with time. Some critics have recently faulted Curie for not reaching these conclusions herself, but it would have been impossible for Curie to do so given the evidence available to her. While relatively light elements such as gold and copper occasionally have unstable (i.e., radioactive) isotopes, radioactive isotopes of most of these elements are not available in nature because they have largely finished decaying and so have become stable. Conversely, heavier elements such as uranium, which decay into lighter elements in a process that takes billions of years, are present in nature exclusively in radioactive form. Furthermore, we must recall that in Curie's time the nature of the atom itself was still being debated. 
Physicists believed that matter could not be divided indefinitely but instead would eventually be reduced to its indivisible components. Chemists, on the other hand, observing that chemical reactions took place as if matter was composed of atomlike particles, used the atom as a foundation for conceptualizing and describing such reactions—but they were not ultimately concerned with the question of whether or not such indivisible atoms actually existed. As a physicist, Curie conjectured that radiating substances might lose mass in the form of atoms, but this idea is very different from the explanation eventually arrived at. It was not until the 1930s that advances in quantum mechanics overthrew the earlier understanding of the atom and showed that radiation occurs because the atoms themselves lose mass—a hypothesis that Curie, committed to the indivisible atom, could not be expected to have conceived of. Moreover, not only is Curie's inability to identify the mechanism by which radiation occurs understandable, it is also important to recognize that it was Curie's investigation of radiation that paved the way for the later breakthroughs. | 200206_1-RC_2_8 | [
"It is unlikely that quantum mechanics would have been developed without the theoretical contributions of Marie Curie toward an understanding of the nature of radioactivity.",
"Although later shown to be incomplete and partially inaccurate, Marie Curie's investigations provided a significant step forward on the r... | 1 | Which one of the following most accurately states the central idea of the passage? |
Spurred by the discovery that a substance containing uranium emitted radiation, Marie Curie began studying radioactivity in 1897. She first tested gold and copper for radiation but found none. She then tested pitchblende, a mineral that was known to contain uranium, and discovered that it was more radioactive than uranium. Acting on the hypothesis that pitchblende must contain at least one other radioactive element, Curie was able to isolate a pair of previously unknown elements, polonium and radium. Turning her attention to the rate of radioactive emission, she discovered that uranium emitted radiation at a consistent rate, even if heated or dissolved. Based on these results, Curie concluded that the emission rate for a given element was constant. Furthermore, because radiation appeared to be spontaneous, with no discernible difference between radiating and nonradiating elements, she was unable to postulate a mechanism by which to explain radiation. It is now known that radiation occurs when certain isotopes (atoms of the same element that differ slightly in their atomic structure) decay, and that emission rates are not constant but decrease very slowly with time. Some critics have recently faulted Curie for not reaching these conclusions herself, but it would have been impossible for Curie to do so given the evidence available to her. While relatively light elements such as gold and copper occasionally have unstable (i.e., radioactive) isotopes, radioactive isotopes of most of these elements are not available in nature because they have largely finished decaying and so have become stable. Conversely, heavier elements such as uranium, which decay into lighter elements in a process that takes billions of years, are present in nature exclusively in radioactive form. Furthermore, we must recall that in Curie's time the nature of the atom itself was still being debated. 
Physicists believed that matter could not be divided indefinitely but instead would eventually be reduced to its indivisible components. Chemists, on the other hand, observing that chemical reactions took place as if matter was composed of atomlike particles, used the atom as a foundation for conceptualizing and describing such reactions—but they were not ultimately concerned with the question of whether or not such indivisible atoms actually existed. As a physicist, Curie conjectured that radiating substances might lose mass in the form of atoms, but this idea is very different from the explanation eventually arrived at. It was not until the 1930s that advances in quantum mechanics overthrew the earlier understanding of the atom and showed that radiation occurs because the atoms themselves lose mass—a hypothesis that Curie, committed to the indivisible atom, could not be expected to have conceived of. Moreover, not only is Curie's inability to identify the mechanism by which radiation occurs understandable, it is also important to recognize that it was Curie's investigation of radiation that paved the way for the later breakthroughs. | 200206_1-RC_2_9 | [
"The critics fail to take into account the obstacles Curie faced in dealing with the scientific community of her time.",
"The critics do not appreciate that the eventual development of quantum mechanics depended on Curie's conjecture that radiating substances can lose atoms.",
"The critics are unaware of the di... | 3 | The passage suggests that the author would be most likely to agree with which one of the following statements about the contemporary critics of Curie's studies of radioactivity? |
| 200206_1-RC_2_10 | [
"Pitchblende was not known by scientists to contain any radioactive element besides uranium.",
"Radioactivity was suspected by scientists to arise from the overall structure of pitchblende rather than from particular elements in it.",
"Physicists and chemists had developed rival theories regarding the cause of ... | 0 | The passage implies which one of the following with regard to the time at which Curie began studying radioactivity? |
| 200206_1-RC_2_11 | [
"summarize some aspects of one scientist's work and defend it against recent criticism",
"describe a scientific dispute and argue for the correctness of an earlier theory",
"outline a currently accepted scientific theory and analyze the evidence that led to its acceptance",
"explain the mechanism by which a n... | 0 | The author's primary purpose in the passage is to |
| 200206_1-RC_2_12 | [
"narrate the progress of turn-of-the-century studies of radioactivity",
"present a context for the conflict between physicists and chemists",
"provide the factual background for an evaluation of Curie's work",
"outline the structure of the author's central argument",
"identify the error in Curie's work that... | 2 | The primary function of the first paragraph of the passage is to |
| 200206_1-RC_2_13 | [
"the physical process that underlies a phenomenon",
"the experimental apparatus in which a phenomenon arises",
"the procedure scientists use to bring about the occurrence of a phenomenon",
"the isotopes of an element needed to produce a phenomenon",
"the scientific theory describing a phenomenon"
] | 0 | Which one of the following most accurately expresses the meaning of the word "mechanism" as used by the author in the last sentence of the first paragraph? |
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. 
In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community. | 200206_1-RC_3_14 | [
"The possibility of successfully blending different cultural forms is demonstrated by jazz's ability to incorporate European influences.",
"The technique of blending the artistic concerns of two cultures could be an effective tool for social and political action.",
"Due to the success of Invisible Man, Ellison ... | 0 | It can be inferred from the passage that the author most clearly holds which one of the following views? |
| 200206_1-RC_3_15 | [
"created a positive effect on the social conditions of the time",
"provided a historical record of the plight of African Americans",
"contained a tribute to the political contributions of African American predecessors",
"prompted a necessary and further separation of American literature from European literary... | 0 | Based on the passage, Ellison's critics would most likely have responded favorably to Invisible Man if it had |
| 200206_1-RC_3_16 | [
"a general tendency within the arts whereby certain images and themes recur within the works of certain cultures",
"an obvious separation within the art community resulting from artists' differing aesthetic principles",
"the cultural isolation artists feel when they address issues of individual identity",
"th... | 4 | The expression "cultural segregation in the arts" (lines 22-23) most clearly refers to |
| 200206_1-RC_3_17 | [
"summarize the thematic concerns of an artist in relation to other artists within the discipline",
"affirm the importance of two artistic disciplines in relation to cultural concerns",
"identify the source of the thematic content of one artist's work",
"celebrate one artistic discipline by viewing it from the... | 4 | The primary purpose of the third paragraph is to |
| 200206_1-RC_3_18 | [
"It is not accessible to a wide audience.",
"It is the most complex of modern musical forms.",
"It embraces other forms of music.",
"It avoids political themes.",
"It has influenced much of contemporary literature."
] | 2 | Which one of the following statements about jazz is made in the passage? |
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. 
In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community. | 200206_1-RC_3_19 | [
"Audiences respond more favorably to art that has no political content.",
"Groundless criticism of an artist's work can hinder an audience's reception of the work.",
"Audiences have the capacity for empathy required to appreciate unique and expressive art.",
"The most conscientious members of any audience are... | 2 | It can be inferred from the passage that Ellison most clearly holds which one of the following views regarding an audience's relationship to works of art? |
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. 
In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community. | 200206_1-RC_3_20 | [
"make a case that a certain novelist is one of the most important novelists of the twentieth century",
"demonstrate the value of using jazz as an illustration for further understanding the novels of a certain literary trend",
"explain the relevance of a particular work and its protagonist to the political and s... | 3 | The primary purpose of the passage is to |
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. 
In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community. | 200206_1-RC_3_21 | [
"Did Ellison himself enjoy jazz?",
"What themes in Invisible Man were influenced by themes prevalent in jazz?",
"What was Ellison's response to criticism concerning the thematic blend in Invisible Man?",
"From what literary tradition did some of the ideas explored in Invisible Man come?",
"What kind of musi... | 1 | The passage provides information to answer each of the following questions EXCEPT: |
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing.
Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish— each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake. | 200206_1-RC_4_22 | [
"the country's actions are consistent with previously accepted views of the psychology of risk-taking",
"the new research findings indicate that the country from which the territory has been seized probably weighs the risk factors involved in the situation similarly to the way in which they are weighed by the agg... | 0 | Suppose that a country seizes a piece of territory with great mineral wealth that is claimed by a neighboring country, with a concomitant risk of failure involving moderate but easily tolerable harm in the long run. Given the information in the passage, the author would most likely say that |
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing.
Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish— each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake. | 200206_1-RC_4_23 | [
"the introduction to a thought experiment whose results the author expects will vary widely among different people",
"a rhetorical question whose assumed answer is in conflict with the previously accepted view concerning risk-taking behavior",
"the basis for an illustration of how the previously accepted view c... | 2 | The question in lines 24-27 functions primarily as |
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing.
Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish— each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake. | 200206_1-RC_4_24 | [
"When states try to regain losses through risky conflict, they generally are misled by inadequate or inaccurate information as to the risks that they run in doing so.",
"Government decision makers subjectively evaluate the acceptability of risks involving national assets in much the same way that they would evalu... | 1 | It can most reasonably be inferred from the passage that the author would agree with which one of the following statements? |
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing.
Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish— each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake. | 200206_1-RC_4_25 | [
"a psychological analysis of the motives involved in certain types of collective decision making in the presence of conflict",
"a presentation of a psychological hypothesis which is then subjected to a political test case",
"a suggestion that psychologists should incorporate the findings of political scientists... | 3 | The passage can be most accurately described as |
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing.
Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish— each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake. | 200206_1-RC_4_26 | [
"Researchers have previously been too willing to accept the claims that subjects make about their preferred choices in risk-related decision problems.",
"There is inadequate research support for the hypothesis that except when a gamble is the only available means for averting an otherwise certain loss, people typ... | 2 | The passage most clearly suggests that the author would agree with which one of the following statements? |
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. 
Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico. | 200210_3-RC_1_1 | [
"Despite extensive evidence that native populations had been burning North and South American forests extensively before 1492, some scholars persist in claiming that such burning was either infrequent or the result of natural causes.",
"In opposition to the widespread belief that in 1492 the Western Hemisphere wa... | 2 | Which one of the following most accurately expresses the main idea of the passage? |
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. 
Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico. | 200210_3-RC_1_2 | [
"numerous types of hardwood trees",
"extensive herbaceous undergrowth",
"a variety of fire-tolerant plants",
"various stages of ecological maturity",
"grassy openings such as meadows or glades"
] | 0 | It can be inferred that a forest burned as described in the passage would have been LEAST likely to display |
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. 
Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico. | 200210_3-RC_1_3 | [
"scrub oak forests in the southeastern U.S.",
"slash pine forests in the southeastern U.S.",
"pine forests in Guatemala at high elevations",
"pine forests in Mexico at high elevations",
"pine forests in Nicaragua at low elevations"
] | 4 | Which one of the following is a type of forest identified by the author as a product of controlled burning in recent times? |
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. 
Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico. | 200210_3-RC_1_4 | [
"extensive homogeneous forests at high elevation",
"extensive homogeneous forests at low elevation",
"extensive heterogeneous forests at high elevation",
"extensive heterogeneous forests at low elevation",
"extensive sedimentary charcoal accumulations at high elevation"
] | 1 | Which one of the following is presented by the author as evidence of controlled burning in the tropics before the arrival of Europeans? |
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. 
Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico. | 200210_3-RC_1_5 | [
"The long-term effects of controlled burning could just as easily have been caused by natural fires.",
"Herbaceous undergrowth prevents many forests from reaching full maturity.",
"European settlers had little impact on the composition of the ecosystems in North and South America.",
"Certain species of plants... | 3 | With which one of the following would the author be most likely to agree? |
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. 
Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico. | 200210_3-RC_1_6 | [
"the similar characteristics of fires in different regions",
"the simultaneous presence of forests at varying stages of maturity",
"the existence of herbaceous undergrowth in certain forests",
"the heavy accumulation of charcoal near populous settlements",
"the presence of meadows and glades in certain fore... | 0 | As evidence for the routine practice of forest burning by native populations before the arrival of Europeans, the author cites all of the following EXCEPT: |
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. 
Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico. | 200210_3-RC_1_7 | [
"forest clearing followed by controlled burning of forests",
"tropical rain forest followed by pine forest",
"European settlement followed by abandonment of land",
"homogeneous pine forest followed by mixed hardwoods",
"pine forests followed by established settlements"
] | 3 | The "succession" mentioned in line 57 refers to |
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. 
Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico. | 200210_3-RC_1_8 | [
"refute certain researchers' views",
"support a common belief",
"counter certain evidence",
"synthesize two viewpoints",
"correct the geographical record"
] | 0 | The primary purpose of the passage is to |
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time—"several decades"—is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius?
The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional. | 200210_3-RC_2_9 | [
"Although some argue that the authority of legal systems is purely intellectual, these systems possess a degree of institutional authority due to their ability to enforce acceptance of badly reasoned or socially inappropriate judicial decisions.",
"Although some argue that the authority of legal systems is purely... | 3 | Which one of the following most accurately states the main idea of the passage? |
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time—"several decades"—is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius?
The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional. | 200210_3-RC_2_10 | [
"fail to gain institutional consensus",
"fail to challenge institutional beliefs",
"fail to conform to the example of precedent",
"fail to convince by virtue of good reasoning",
"fail to gain acceptance except by coercion"
] | 0 | That some arguments "never receive institutional imprimatur" (lines 22–23) most likely means that these arguments |
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time—"several decades"—is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius?
The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional. | 200210_3-RC_2_11 | [
"Judges often act under time constraints and occasionally render a badly reasoned or socially inappropriate decision.",
"In some legal systems, the percentage of judicial decisions that contain faulty reasoning is far higher than it is in other legal systems.",
"Many socially inappropriate legal decisions are t... | 4 | Which one of the following, if true, most challenges the author's contention that legal systems contain a significant degree of intellectual authority? |
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time—"several decades"—is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius?
The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional. | 200210_3-RC_2_12 | [
"Institutional authority may depend on coercion; intellectual authority never does.",
"Intellectual authority may accept well-reasoned arguments; institutional authority never does.",
"Institutional authority may depend on convention; intellectual authority never does.",
"Intellectual authority sometimes chal... | 1 | Given the information in the passage, the author is LEAST likely to believe which one of the following? |
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time— "several decades" —is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius? 
The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional. | 200210_3-RC_2_13 | [
"distinguish the notion of institutional authority from that of intellectual authority",
"give an example of an argument possessing intellectual authority that did not prevail in its own time",
"identify an example in which the ascription of musical genius did not withstand the test of time",
"illustrate the ... | 3 | The author discusses the example from musicology primarily in order to |
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time— "several decades" —is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius? 
The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional. | 200210_3-RC_2_14 | [
"It is the only tool judges should use if they wish to achieve a purely intellectual authority.",
"It is a useful tool in theory but in practice it invariably conflicts with the demands of intellectual authority.",
"It is a useful tool but lacks intellectual authority unless it is combined with the reconsiderin... | 2 | Based on the passage, the author would be most likely to hold which one of the following views about the doctrine of precedent? |
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. 
At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual. | 200210_3-RC_3_15 | [
"Abrams argues that historical sociology rejects the claims of sociologists who assert that the sociological concept of structuring cannot be applied to the interactions between individuals and history.",
"Abrams argues that historical sociology assumes that, despite the views of sociologists to the contrary, his... | 3 | Which one of the following most accurately states the central idea of the passage? |
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. 
At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual. | 200210_3-RC_3_16 | [
"Only if they adhere to this structure, Abrams believes, can historical sociologists conclude with any certainty that the events that constitute the historical record are influenced by the actions of individuals.",
"Only if they adhere to this structure, Abrams believes, will historical sociologists be able to co... | 4 | Given the passage's argument, which one of the following sentences most logically completes the last paragraph? |
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. 
At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual. | 200210_3-RC_3_17 | [
"a social phenomenon",
"a form of historical structuring",
"an accidental circumstance",
"a condition controllable to some extent by an individual",
"a partial determinant of an individual's actions"
] | 1 | The passage states that a contingency could be each of the following EXCEPT: |
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. 
At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual. | 200210_3-RC_3_18 | [
"In a report on the enactment of a bill into law, a journalist explains why the need for the bill arose, sketches the biography of the principal legislator who wrote the bill, and ponders the effect that the bill's enactment will have both on society and on the legislator's career.",
"In a consultation with a pat... | 0 | Which one of the following is most analogous to the ideal work of a historical sociologist as outlined by Abrams? |
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. 
At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual. | 200210_3-RC_3_19 | [
"outline the merits of Abrams's conception of historical sociology",
"convey the details of Abrams's conception of historical sociology",
"anticipate challenges to Abrams's conception of historical sociology",
"examine the roles of key terms used in Abrams's conception of historical sociology",
"identify th... | 4 | The primary function of the first paragraph of the passage is to |
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. 
At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual. | 200210_3-RC_3_20 | [
"the effect of the fact that a person experienced political injustice on that person's decision to work for political reform",
"the effect of the fact that a person was raised in an agricultural region on that person's decision to pursue a career in agriculture",
"the effect of the fact that a person lives in a... | 2 | Based on the passage, which one of the following is the LEAST illustrative example of the effect of a contingency upon an individual? |
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. 
Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles. | 200210_3-RC_4_21 | [
"Training in ethics that incorporates narrative literature would better cultivate flexible ethical thinking and increase medical students' capacity for empathetic patient care as compared with the traditional approach of medical schools to such training.",
"Traditional abstract ethical training, because it is too... | 0 | Which one of the following most accurately states the main point of the passage? |
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. 
Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles. | 200210_3-RC_4_22 | [
"a sense of curiosity, aroused by reading, that leads one to follow actively the development of problems involving the characters depicted in narratives",
"a faculty of seeking out and recognizing the ethical controversies involved in human relationships and identifying oneself with one side or another in such co... | 3 | Which one of the following most accurately represents the author's use of the term "moral imagination" in line 38? |
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. 
Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles. | 200210_3-RC_4_23 | [
"The heavy load of technical coursework in today's medical schools often keeps them from giving adequate emphasis to courses in medical ethics.",
"Students learn more about ethics through the use of fiction than through the use of nonfictional readings.",
"The traditional method of ethical training in medical s... | 4 | It can be inferred from the passage that the author would most likely agree with which one of the following statements? |
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. 
Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles. | 200210_3-RC_4_24 | [
"to advise medical schools on how to implement a narrative-based approach to ethics in their curricula",
"to argue that the current methods of ethics education are counterproductive to the formation of empathetic doctor-patient relationships",
"to argue that the ethical content of narrative literature foreshado... | 3 | Which one of the following is most likely the author's overall purpose in the passage? |
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. 
Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles. | 200210_3-RC_4_25 | [
"It tends to avoid the extreme relativism of situational ethics.",
"It connects students to varied types of human events.",
"It can help lead medical students to develop new ways of dealing with patients.",
"It requires students to examine moral issues from new perspectives.",
"It can help insulate future d... | 4 | The passage ascribes each of the following characteristics to the use of narrative literature in ethical education EXCEPT: |
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. 
Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles. | 200210_3-RC_4_26 | [
"Doctors face a variety of such dilemmas.",
"Purely scientific thinking is inadequate for dealing with modern ethical dilemmas.",
"Such dilemmas are more prevalent today as a result of scientific and technological advances in medicine.",
"Theorizing about ethics does little to prepare students to face such di... | 2 | With regard to ethical dilemmas, the passage explicitly states each of the following EXCEPT: |
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. 
Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles. | 200210_3-RC_4_27 | [
"unqualified disapproval of the method and disapproval of all of its effects",
"reserved judgment regarding the method and disapproval of all of its effects",
"partial disapproval of the method and clinical indifference toward its effects",
"partial approval of the method and disapproval of all of its effects... | 4 | The author's attitude regarding the traditional method of teaching ethics in medical school can most accurately be described as |
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. 
Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them. | 200212_3-RC_1_1 | [
"Muralism developed its political goals in Mexico in service to the revolutionary government, while its aesthetic aspects were borrowed from other countries.",
"Inspired by political developments in Mexico and trends in modern art, muralist painters devised an innovative style of large-scale painting to reflect M... | 1 | Which one of the following most accurately expresses the main point of the passage? |
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. 
Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them. | 200212_3-RC_1_2 | [
"assimilation of elements of Mexican customs and myth",
"movement beyond single, centralized subjects",
"experimentation with expressionist techniques",
"distinctive manner of artistic expression",
"underlying resistance to change"
] | 3 | The author mentions Rivera's use of "pre-Columbian sculpture and the Italian Renaissance fresco" (lines 36–37) primarily in order to provide an example of Rivera's |
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. 
Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them. | 200212_3-RC_1_3 | [
"its revolutionary ideology",
"its use of brilliant color",
"its tailoring of style to its medium",
"its use of elements from everyday life",
"its expression of populist ideas"
] | 2 | Which one of the following aspects of muralist painting does the author appear to value most highly? |
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. 
Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them. | 200212_3-RC_1_4 | [
"Art should be evaluated on the basis of its style and form rather than on its content.",
"Government sponsorship is essential to the flourishing of art.",
"Realism is unsuited to large-scale public art.",
"The use of techniques borrowed from other cultures can contribute to the rediscovery of one's national ... | 3 | Based on the passage, with which one of the following statements about art would the muralists be most likely to agree? |
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. 
Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them. | 200212_3-RC_1_5 | [
"It encouraged the adoption of modern innovations from abroad.",
"It encouraged artists to pursue the realist tradition in art.",
"It called on artists to portray Mexico's heritage and future promise.",
"It developed the theoretical base of the muralist movement.",
"It favored artists who introduced stylist... | 2 | According to the passage, the Mexican government elected in 1920 took which one of the following approaches to art following the Mexican Revolution? |
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. 
Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them. | 200212_3-RC_1_6 | [
"The major figures in muralism also created important works in that style that were deliberately not political in content.",
"Not all muralist painters were familiar with the innovations being made at that time in the art world.",
"The changes taking place at that time in the art world were revivals of earlier ... | 0 | Which one of the following, if true, most supports the author's claim about the relationship between muralism and the Mexican Revolution (lines 24–27)? |
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. 
Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them. | 200212_3-RC_1_7 | [
"Its subject matter consisted primarily of current events.",
"It could be viewed outdoors only.",
"It used the same techniques as are used in easel painting.",
"It exhibited remarkable stylistic uniformity.",
"It was intended to be viewed from more than one angle."
] | 4 | Which one of the following does the author explicitly identify as a characteristic of Mexican mural art? |
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. 
Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them. | 200212_3-RC_1_8 | [
"describe the unifying features of muralism",
"provide support for the argument that the muralists often did not support government causes",
"support the claim that muralists always used their work to comment on their own historical period",
"illustrate how the muralists appropriated elements of Mexican tradi... | 4 | The primary purpose of the second paragraph is to |
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. 
Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure. | 200212_3-RC_2_9 | [
"While originally written for children, fairy tales also contain a deeper significance for adults that psychologists such as Bettelheim have shown to be their true meaning.",
"The \"superficial\" reading of a fairy tale, which deals only with the tale's content, is actually more enlightening for children than the... | 3 | Which one of the following most accurately states the main idea of the passage? |
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. 
Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure. | 200212_3-RC_2_10 | [
"Hansel and Gretel are abandoned by their hard-hearted parents.",
"Hansel and Gretel are imprisoned by the witch.",
"Hansel and Gretel overpower the witch.",
"Hansel and Gretel take the witch's jewels.",
"Hansel and Gretel bring the witch's jewels home to their parents."
] | 0 | Based on the passage, which one of the following elements of "Hansel and Gretel" would most likely be de-emphasized in Bettelheim's interpretation of the tale? |
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. 
Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure. | 200212_3-RC_2_11 | [
"concern that the view will undermine the ability of fairy tales to provide moral instruction",
"scorn toward the view's supposition that moral tenets can be universally valid",
"disapproval of the view's depiction of children as selfish and adults as innocent",
"anger toward the view's claim that children of... | 2 | Which one of the following is the most accurate description of the author's attitude toward Bettelheim's view of fairy tales? |
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. 
Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure. | 200212_3-RC_2_12 | [
"Children who never attempt to look for the deeper meanings in fairy tales will miss out on one of the principal pleasures of reading such tales.",
"It is better if children discover fairy tales on their own than for an adult to suggest that they read the tales.",
"A child who is unruly will behave better after... | 4 | The author of the passage would be most likely to agree with which one of the following statements? |
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. 
Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure. | 200212_3-RC_2_13 | [
"Only those trained in literary interpretation can detect the latent meanings in stories.",
"Only adults are psychologically mature enough to find the latent meanings in stories.",
"Only one of the various meanings readers may find in a story is truly correct.",
"The meanings we see in stories are influenced ... | 3 | Which one of the following principles most likely underlies the author's characterization of literary interpretation? |
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. 
Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure. | 200212_3-RC_2_14 | [
"the moral instruction children receive from fairy tales is detrimental to their emotional development",
"fewer adults are guilty of improper child-rearing than had once been thought",
"the need to deny adult evil is a pervasive feature of all modern societies",
"the plots of many fairy tales are similar to c... | 4 | According to the author, recent psychoanalytic literature suggests that |
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. 
Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure. | 200212_3-RC_2_15 | [
"uninterested in inflexible tenets of moral instruction",
"unfairly subjected to the moral beliefs of their parents",
"often aware of inappropriate parental behavior",
"capable of shedding undesirable personal qualities",
"basically playful and carefree"
] | 3 | It can be inferred from the passage that Bettelheim believes that children are |
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. 
Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure. | 200212_3-RC_2_16 | [
"The imaginations of children do not draw clear distinctions between inanimate objects and living things.",
"Children must learn that their own needs and feelings are to be valued, even when these differ from those of their parents.",
"As their minds mature, children tend to experience the world in terms of the... | 1 | Which one of the following statements is least compatible with Bettelheim's views, as those views are described in the passage? |
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. 
Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today. | 200212_3-RC_3_17 | [
"If classical wave theorists had never focused on blackbody radiation, Planck's insights would not have developed and the stage would not have been set for Einstein.",
"Classical wave theory, an incorrect formulation of the nature of radiation, was corrected by Planck and other physicists after Planck performed e... | 3 | Which one of the following most accurately states the main point of the passage? |
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. 
Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today. | 200212_3-RC_3_18 | [
"radio waves",
"black velvet or soot",
"microscopic particles",
"metal surfaces",
"radio volume dials"
] | 4 | Which one of the following does the author use to illustrate the difference between continuous energies and discrete energies? |
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. 
Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today. | 200212_3-RC_3_19 | [
"Radiation reflected by and radiation emitted by an object are difficult to distinguish from one another.",
"Any object in a dark room is a nearly ideal blackbody object.",
"All blackbody objects of comparable size give off radiation at approximately the same wavelengths regardless of the objects' temperatures.... | 0 | Which one of the following can most clearly be inferred from the description of blackbody objects in the second paragraph? |
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. 
Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today. | 200212_3-RC_3_20 | [
"strong admiration for the intuitive leap that led to a restored confidence in wave theory's picture of atomic processes",
"mild surprise at the bizarre position Planck took regarding atomic processes",
"reasoned skepticism of Planck's lack of scientific justification for his hypothesis",
"legitimate concern ... | 4 | The author's attitude toward Planck's development of a new hypothesis about atomic processes can most aptly be described as |
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. 
Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today. | 200212_3-RC_3_21 | [
"What did Planck's hypothesis about atomic processes try to account for?",
"What led to the scientific community's acceptance of Planck's ideas?",
"Roughly when did the blackbody radiation experiments take place?",
"What contributions did Planck make to classical wave theory?",
"What type of experiment led ... | 3 | The passage provides information that answers each of the following questions EXCEPT: |
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. 
Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today. | 200212_3-RC_3_22 | [
"describe the process by which one theory's assumption was dismantled by a competing theory",
"introduce a central assumption of a scientific theory and the experimental evidence that led to the overthrowing of that theory",
"explain two competing theories that are based on the same experimental evidence",
"d... | 1 | The primary function of the first two paragraphs of the passage is to |
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. 
Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today. | 200212_3-RC_3_23 | [
"discussing the value of speculation in a scientific discipline",
"summarizing the reasons for the rejection of an established theory by the scientific community",
"describing the role that experimental research plays in a scientific discipline",
"examining a critical stage in the evolution of theories concer... | 3 | The passage is primarily concerned with |
The following passage was written in the mid-1990s. Users of the Internet—the worldwide network of interconnected computer systems—envision it as a way for people to have free access to information via their personal computers. Most Internet communication consists of sending electronic mail or exchanging ideas on electronic bulletin boards; however, a growing number of transmissions are of copyrighted works— books, photographs, videos and films, and sound recordings. In Canada, as elsewhere, the goals of Internet users have begun to conflict with reality as copyright holders look for ways to protect their material from unauthorized and uncompensated distribution. Copyright experts say that Canadian copyright law, which was revised in 1987 to cover works such as choreography and photography, has not kept pace with technology—specifically with digitalization, the conversion of data into a series of digits that are transmitted as electronic signals over computer networks. Digitalization makes it possible to create an unlimited number of copies of a book, recording, or movie and distribute them to millions of people around the world. Current law prohibits unauthorized parties from reproducing a work or any substantial part of it in any material form (e.g., photocopies of books or pirated audiotapes), but because digitalization merely transforms the work into electronic signals in a computer's memory, it is not clear whether digitalization constitutes a material reproduction—and so unauthorized digitalization is not yet technically a crime. Some experts propose simply adding unauthorized digitalization to the list of activities proscribed under current law, to make it clear that copyright holders own electronic reproduction rights just as they own rights to other types of reproduction. But criminalizing digitalization raises a host of questions. 
For example, given that digitalization allows the multiple recipients of a transmission to re-create copies of a work, would only the act of digitalization itself be criminal, or should each copy made from the transmission be considered a separate instance of piracy—even though those who made the copies never had access to the original? In addition, laws against digitalization might be virtually unenforceable given that an estimated 20 million people around the world have access to the Internet, and that copying and distributing material is a relatively simple process. Furthermore, even an expanded law might not cover the majority of transmissions, given the vast numbers of users who are academics and the fact that current copyright law allows generous exemptions for those engaged in private study or research. But even if the law is revised to contain a more sophisticated treatment of digitalization, most experts think it will be hard to resolve the clash between the Internet community, which is accustomed to treating information as raw material available for everyone to use, and the publishing community, which is accustomed to treating it as a commodity owned by its creator. | 200212_3-RC_4_24 | [
"Despite the widely recognized need to revise Canadian copyright law to protect works from unauthorized reproduction and distribution over the Internet, users of the Internet have mounted many legal challenges to the criminalizing of digitalization.",
"Although the necessity of revising Canadian copyright law to ... | 1 | Which one of the following most accurately expresses the main point of the passage? |
The following passage was written in the mid-1990s. Users of the Internet—the worldwide network of interconnected computer systems—envision it as a way for people to have free access to information via their personal computers. Most Internet communication consists of sending electronic mail or exchanging ideas on electronic bulletin boards; however, a growing number of transmissions are of copyrighted works— books, photographs, videos and films, and sound recordings. In Canada, as elsewhere, the goals of Internet users have begun to conflict with reality as copyright holders look for ways to protect their material from unauthorized and uncompensated distribution. Copyright experts say that Canadian copyright law, which was revised in 1987 to cover works such as choreography and photography, has not kept pace with technology—specifically with digitalization, the conversion of data into a series of digits that are transmitted as electronic signals over computer networks. Digitalization makes it possible to create an unlimited number of copies of a book, recording, or movie and distribute them to millions of people around the world. Current law prohibits unauthorized parties from reproducing a work or any substantial part of it in any material form (e.g., photocopies of books or pirated audiotapes), but because digitalization merely transforms the work into electronic signals in a computer's memory, it is not clear whether digitalization constitutes a material reproduction—and so unauthorized digitalization is not yet technically a crime. Some experts propose simply adding unauthorized digitalization to the list of activities proscribed under current law, to make it clear that copyright holders own electronic reproduction rights just as they own rights to other types of reproduction. But criminalizing digitalization raises a host of questions. 
For example, given that digitalization allows the multiple recipients of a transmission to re-create copies of a work, would only the act of digitalization itself be criminal, or should each copy made from the transmission be considered a separate instance of piracy—even though those who made the copies never had access to the original? In addition, laws against digitalization might be virtually unenforceable given that an estimated 20 million people around the world have access to the Internet, and that copying and distributing material is a relatively simple process. Furthermore, even an expanded law might not cover the majority of transmissions, given the vast numbers of users who are academics and the fact that current copyright law allows generous exemptions for those engaged in private study or research. But even if the law is revised to contain a more sophisticated treatment of digitalization, most experts think it will be hard to resolve the clash between the Internet community, which is accustomed to treating information as raw material available for everyone to use, and the publishing community, which is accustomed to treating it as a commodity owned by its creator. | 200212_3-RC_4_25 | [
"Digitalization of copyrighted works is permitted to Internet users who pay a small fee to copyright holders.",
"Digitalization of copyrighted works is prohibited to Internet users who are not academics.",
"Digitalization of copyrighted works is permitted to all Internet users without restriction.",
"Digitali... | 0 | Given the author's argument, which one of the following additions to current Canadian copyright law would most likely be an agreeable compromise to both the Internet community and the publishing community? |
The following passage was written in the mid-1990s. Users of the Internet—the worldwide network of interconnected computer systems—envision it as a way for people to have free access to information via their personal computers. Most Internet communication consists of sending electronic mail or exchanging ideas on electronic bulletin boards; however, a growing number of transmissions are of copyrighted works— books, photographs, videos and films, and sound recordings. In Canada, as elsewhere, the goals of Internet users have begun to conflict with reality as copyright holders look for ways to protect their material from unauthorized and uncompensated distribution. Copyright experts say that Canadian copyright law, which was revised in 1987 to cover works such as choreography and photography, has not kept pace with technology—specifically with digitalization, the conversion of data into a series of digits that are transmitted as electronic signals over computer networks. Digitalization makes it possible to create an unlimited number of copies of a book, recording, or movie and distribute them to millions of people around the world. Current law prohibits unauthorized parties from reproducing a work or any substantial part of it in any material form (e.g., photocopies of books or pirated audiotapes), but because digitalization merely transforms the work into electronic signals in a computer's memory, it is not clear whether digitalization constitutes a material reproduction—and so unauthorized digitalization is not yet technically a crime. Some experts propose simply adding unauthorized digitalization to the list of activities proscribed under current law, to make it clear that copyright holders own electronic reproduction rights just as they own rights to other types of reproduction. But criminalizing digitalization raises a host of questions. 
For example, given that digitalization allows the multiple recipients of a transmission to re-create copies of a work, would only the act of digitalization itself be criminal, or should each copy made from the transmission be considered a separate instance of piracy—even though those who made the copies never had access to the original? In addition, laws against digitalization might be virtually unenforceable given that an estimated 20 million people around the world have access to the Internet, and that copying and distributing material is a relatively simple process. Furthermore, even an expanded law might not cover the majority of transmissions, given the vast numbers of users who are academics and the fact that current copyright law allows generous exemptions for those engaged in private study or research. But even if the law is revised to contain a more sophisticated treatment of digitalization, most experts think it will be hard to resolve the clash between the Internet community, which is accustomed to treating information as raw material available for everyone to use, and the publishing community, which is accustomed to treating it as a commodity owned by its creator. | 200212_3-RC_4_26 | [
"how copyright infringement of protected works is punished under current Canadian copyright law",
"why current Canadian copyright law is not easily applicable to digitalization",
"how the Internet has caused copyright holders to look for new forms of legal protection",
"why copyright experts propose protectin... | 1 | The discussion in the second paragraph is intended primarily to explain which one of the following? |